Terminal attractors for addressable memory in neural networks
NASA Technical Reports Server (NTRS)
Zak, Michail
1988-01-01
A new type of attractor, the terminal attractor, for addressable memory in neural networks operating in continuous time is introduced. These attractors represent singular solutions of the dynamical system. They intersect (or envelop) the families of regular solutions, while each regular solution approaches the terminal attractor in a finite time. It is shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors, with prescribed basins, is provided by an appropriate selection of the weight matrix.
Terminal attractors in neural networks
NASA Technical Reports Server (NTRS)
Zak, Michail
1989-01-01
A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelops the family of regular solutions, while each regular solution approaches such an attractor in finite time. It is shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors to content-addressable and associative memories, pattern recognition, self-organization, and dynamical training are illustrated.
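As a minimal illustration of the finite-time convergence described above (a one-dimensional toy, not the paper's network model): the system dx/dt = -x^(1/3) violates the Lipschitz condition at x = 0 and reaches the fixed point in finite time t* = (3/2)·x0^(2/3), whereas the regular system dx/dt = -x only approaches it asymptotically.

```python
import numpy as np

def time_to_zero(f, x0, dt=1e-4, t_max=5.0):
    """Forward-Euler integration; returns the first time |x| drops below 1e-6
    (or t_max if the trajectory never gets there)."""
    x, t = x0, 0.0
    while t < t_max:
        if abs(x) < 1e-6:
            return t
        x += dt * f(x)
        t += dt
    return t_max

# Regular attractor: dx/dt = -x is Lipschitz at 0 -> only asymptotic convergence.
t_regular = time_to_zero(lambda x: -x, 1.0)

# Terminal attractor: dx/dt = -x^(1/3) violates the Lipschitz condition at 0
# -> reaches the fixed point in finite time, analytically t* = 1.5 for x0 = 1.
t_terminal = time_to_zero(lambda x: -np.cbrt(x), 1.0)

print(t_terminal, t_regular)
```

With these parameters the terminal system hits the fixed point near the analytic t* = 1.5, while the regular system is still nonzero when integration stops.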
Noise Tolerance of Attractor and Feedforward Memory Models
Lim, Sukbin; Goldman, Mark S.
2017-01-01
In short-term memory networks, transient stimuli are represented by patterns of neural activity that persist long after stimulus offset. Here, we compare the performance of two prominent classes of memory networks, feedback-based attractor networks and feedforward networks, in conveying information about the amplitude of a briefly presented stimulus in the presence of Gaussian noise. Using Fisher information as a metric of memory performance, we find that the optimal form of network architecture depends strongly on assumptions about the forms of nonlinearities in the network. For purely linear networks, we find that feedforward networks outperform attractor networks because noise is continually removed from feedforward networks when signals exit the network; as a result, feedforward networks can amplify signals they receive faster than noise accumulates over time. By contrast, attractor networks must operate in a signal-attenuating regime to avoid the buildup of noise. However, if the amplification of signals is limited by a finite dynamic range of neuronal responses or if noise is reset at the time of signal arrival, as suggested by recent experiments, we find that attractor networks can outperform feedforward ones. Under a simple model in which neurons have a finite dynamic range, we find that the optimal attractor networks are forgetful if there is no mechanism for noise reduction with signal arrival but nonforgetful (perfect integrators) in the presence of a strong reset mechanism. Furthermore, we find that the maximal Fisher information for the feedforward and attractor networks exhibits power law decay as a function of time and scales linearly with the number of neurons. These results highlight prominent factors that lead to trade-offs in the memory performance of networks with different architectures and constraints, and suggest conditions under which attractor or feedforward networks may be best suited to storing information about previous stimuli.
PMID:22091664
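One ingredient of this comparison can be checked in a few lines: a perfect integrator (the idealized nonforgetful attractor network) holds the stimulus amplitude but accumulates one fresh noise sample per time step, so the Fisher information of a Gaussian readout, 1/Var, decays as a 1/T power law. The sketch below is an illustrative toy under that assumption, not the paper's network model.

```python
import numpy as np

rng = np.random.default_rng(0)
trials, sigma, s = 20000, 0.1, 1.0  # Monte Carlo trials, noise level, stimulus

def fisher_info_at(T):
    """Empirical Fisher information 1/Var for a perfect integrator that stores
    amplitude s and integrates fresh Gaussian noise at each of T steps, so
    Var ~ T * sigma**2 and the information decays as 1/T."""
    x = np.full(trials, s)
    for _ in range(T):
        x += sigma * rng.standard_normal(trials)
    return 1.0 / x.var()

fi_10, fi_100 = fisher_info_at(10), fisher_info_at(100)
print(fi_10, fi_100)
```

The ratio fi_10 / fi_100 comes out near 10, the 1/T scaling noted in the abstract for the late-time regime.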
Roudi, Yasser; Latham, Peter E
2007-09-01
A fundamental problem in neuroscience is understanding how working memory--the ability to store information at intermediate timescales, like tens of seconds--is implemented in realistic neuronal networks. The most likely candidate mechanism is the attractor network, and a great deal of effort has gone toward investigating it theoretically. Yet, despite almost a quarter century of intense work, attractor networks are not fully understood. In particular, there are still two unanswered questions. First, how is it that attractor networks exhibit irregular firing, as is observed experimentally during working memory tasks? And second, how many memories can be stored under biologically realistic conditions? Here we answer both questions by studying an attractor neural network in which inhibition and excitation balance each other. Using mean-field analysis, we derive a three-variable description of attractor networks. From this description it follows that irregular firing can exist only if the number of neurons involved in a memory is large. The same mean-field analysis also shows that the number of memories that can be stored in a network scales with the number of excitatory connections, a result that has been suggested for simple models but never shown for realistic ones. Both of these predictions are verified using simulations with large networks of spiking neurons.
NASA Astrophysics Data System (ADS)
Roach, James; Sander, Leonard; Zochowski, Michal
Auto-associative memory is the ability to retrieve a pattern from a small fraction of the pattern and is an important function of neural networks. Within this context, memories that are stored within the synaptic strengths of networks act as dynamical attractors for network firing patterns. In networks with many encoded memories, some attractors will be stronger than others. This presents the problem of how networks switch between attractors depending on the situation. We suggest that regulation of neuronal spike-frequency adaptation (SFA) provides a universal mechanism for network-wide attractor selectivity. Here we demonstrate in a Hopfield-type attractor network that neurons with minimal SFA will reliably activate in the pattern corresponding to a local attractor and that a moderate increase in SFA leads the network to converge to the strongest attractor state. Furthermore, we show that on long time scales SFA allows temporal sequences of activation to emerge. Finally, using a model of cholinergic modulation within the cortex, we argue that dynamic regulation of attractor preference by SFA could be critical for the role of acetylcholine in attention or for arousal states in general. This work was supported by: NSF Graduate Research Fellowship Program under Grant No. DGE 1256260 (JPR), NSF CMMI 1029388 (MRZ) and NSF PoLS 1058034 (MRZ & LMS).
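The mechanism can be caricatured in a standard binary Hopfield network (a sketch under simplifying assumptions, not the authors' model): an adaptation variable tracks each unit's recent activity and is subtracted from its input with some gain, so zero gain leaves the network locked in its initial attractor, while a sufficiently strong gain destabilizes the occupied state and forces the activity to move on, producing a temporal sequence of states.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 2
xi = rng.choice([-1, 1], size=(P, N))  # stored +/-1 patterns
W = (xi.T @ xi) / N                    # Hebbian weights
np.fill_diagonal(W, 0.0)

def run(g_adapt, steps=30, beta=0.2):
    """Synchronous Hopfield dynamics with an SFA-like current a that tracks
    each unit's recent activity and is subtracted from its input with gain
    g_adapt; returns the overlap with pattern 0 at each step."""
    s = xi[0].astype(float).copy()     # start inside the pattern-0 attractor
    a = np.zeros(N)
    overlaps = []
    for _ in range(steps):
        a += beta * (s - a)            # adaptation relaxes toward activity
        s = np.sign(W @ s - g_adapt * a)
        overlaps.append((s @ xi[0]) / N)
    return overlaps

no_sfa = run(g_adapt=0.0)    # stays locked in the initial attractor
with_sfa = run(g_adapt=2.0)  # adaptation destabilizes it; activity moves on
print(no_sfa[-1], min(with_sfa))
```

Without adaptation the overlap with the initial pattern stays at 1; with a strong adaptation gain the state is pushed out of that attractor within a few steps.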
Two Unipolar Terminal-Attractor-Based Associative Memories
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang; Wu, Chwan-Hwa
1995-01-01
Two unipolar mathematical models of electronic neural networks functioning as terminal-attractor-based associative memories (TABAMs) are developed. The models comprise sets of equations describing interactions between the time-varying inputs and outputs of a neural-network memory, regarded as a dynamical system. This work simplifies the design and operation of an optoelectronic processor implementing a TABAM that performs associative recall of images. The TABAM concept is described in "Optoelectronic Terminal-Attractor-Based Associative Memory" (NPO-18790). An experimental optoelectronic apparatus that performed associative recall of binary images is described in "Optoelectronic Inner-Product Neural Associative Memory" (NPO-18491).
Attractor neural networks with resource-efficient synaptic connectivity
NASA Astrophysics Data System (ADS)
Pehlevan, Cengiz; Sengupta, Anirvan
Memories are thought to be stored in the attractor states of recurrent neural networks. Here we explore how resource constraints interplay with memory storage function to shape the synaptic connectivity of attractor networks. We propose that, given a set of memories in the form of population activity patterns, the neural circuit chooses a synaptic connectivity configuration that minimizes a resource usage cost. We argue that the total synaptic weight (l1-norm) in the network measures the resource cost because synaptic weight is correlated with synaptic volume, which is a limited resource, and is proportional to neurotransmitter release and post-synaptic current, both of which cost energy. Using numerical simulations and replica theory, we characterize optimal connectivity profiles in resource-efficient attractor networks. Our theory explains several experimental observations on cortical connectivity profiles: (1) connectivity is sparse, because synapses are costly; (2) bidirectional connections are overrepresented; and (3) bidirectional connections are stronger, because attractor states need strong recurrence.
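The proposed objective can be sketched as a small linear program (an illustrative toy, not the authors' replica calculation): for a single neuron, minimize the l1-norm of its incoming weights subject to every stored pattern being a fixed point with a stability margin kappa. Because a basic LP solution has at most as many nonzeros as active constraints, the optimal incoming connectivity comes out sparse, in line with observation (1).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, P, kappa = 40, 5, 1.0
xi = rng.choice([-1, 1], size=(P, N))  # patterns to store as fixed points

# Minimal-l1 incoming weights for neuron 0: minimize ||w||_1 subject to
#   xi_0^mu * sum_{j != 0} w_j * xi_j^mu >= kappa   for every pattern mu.
# Split w = p - n with p, n >= 0, so ||w||_1 = sum(p) + sum(n): a linear program.
A = xi[:, 1:] * xi[:, :1]                 # P x (N-1): xi_0^mu * xi_j^mu
c = np.ones(2 * (N - 1))                  # objective: sum(p) + sum(n)
res = linprog(c, A_ub=np.hstack([-A, A]), b_ub=-kappa * np.ones(P),
              bounds=(0, None), method="highs")
w = res.x[: N - 1] - res.x[N - 1:]        # recover w = p - n
margins = A @ w                           # stability margins, all >= kappa
sparsity = np.mean(np.abs(w) < 1e-8)      # fraction of exactly-silent synapses
print(f"zero-weight fraction: {sparsity:.2f}, min margin: {margins.min():.2f}")
```

With P = 5 constraints on N - 1 = 39 weights, the LP vertex solution uses at most 5 nonzero synapses, so well over 80% of the incoming weights are zero.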
Chartier, Sylvain; Proulx, Robert
2005-11-01
This paper presents a new unsupervised attractor neural network which, contrary to optimal linear associative memory models, is able to develop nonbipolar attractors as well as bipolar attractors. Moreover, the model develops fewer spurious attractors and has better recall performance under random noise than any other Hopfield-type neural network. These performances are obtained by a simple Hebbian/anti-Hebbian online learning rule that directly incorporates feedback from a specific nonlinear transmission rule. Several computer simulations show the model's distinguishing properties.
Optoelectronic Terminal-Attractor-Based Associative Memory
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang; Barhen, Jacob; Farhat, Nabil H.
1994-01-01
This report presents a theoretical and experimental study of an optically and electronically addressable optical implementation of an artificial neural network that performs associative recall. Computer simulation shows that a terminal-attractor-based associative memory can achieve perfect convergence in associative retrieval and increased storage capacity. Spurious states are reduced by exploiting terminal attractors.
Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems
NASA Astrophysics Data System (ADS)
Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; Del Giudice, Paolo
2015-10-01
Neuromorphic chips embody, in microelectronic devices, computational principles operating in the nervous system. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a ‘basin’ of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
Memory Retrieval Time and Memory Capacity of the CA3 Network: Role of Gamma Frequency Oscillations
ERIC Educational Resources Information Center
de Almeida, Licurgo; Idiart, Marco; Lisman, John E.
2007-01-01
The existence of recurrent synaptic connections in CA3 led to the hypothesis that CA3 is an autoassociative network similar to the Hopfield networks studied by theorists. CA3 undergoes gamma frequency periodic inhibition that prevents a persistent attractor state. This argues against the analogy to Hopfield nets, in which an attractor state can be…
Statistics and dynamics of attractor networks with inter-correlated patterns
NASA Astrophysics Data System (ADS)
Kropff, E.
2007-02-01
In an embodied feature-representation view, semantic memory represents concepts in the brain through the associated activation of the features that describe them, each feature processed in a distinct region of the cortex. This system has been modeled with a Potts attractor network. Several studies of feature representation show that the correlation between patterns plays a crucial role in semantic memory. The present work focuses on two aspects of the effect of correlations in attractor networks. First, it assesses how a Potts network can store a set of patterns with non-trivial correlations between them; this is done through a simple and biologically plausible modification of the classical learning rule. Second, it studies the complexity of latching transitions between attractor states and how this complexity can be controlled.
Non-Equilibrium Driven Dynamics of Continuous Attractors in Place Cell Networks
NASA Astrophysics Data System (ADS)
Zhong, Weishun; Kim, Hyun Jin; Schwab, David; Murugan, Arvind
Attractors have found much use in neuroscience as a means of information processing and decision making. Examples include associative memory with point and continuous attractors, spatial navigation and planning using place cell networks, and dynamic pattern recognition, among others. The functional use of such attractors requires the action of spatially and temporally varying external driving signals, and yet most theoretical work on attractors has been in the limit of small or no drive. We take steps towards understanding the non-equilibrium driven dynamics of continuous attractors in place cell networks. We establish an 'equivalence principle' that relates fluctuations under a time-dependent external force to equilibrium fluctuations in a 'co-moving' frame with only static forces, much like in Newtonian physics. Consequently, we analytically derive a network's capacity to encode multiple attractors as a function of the driving signal size and rate of change.
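A minimal continuous (ring) attractor in the spirit of place cell models can be sketched as follows (an illustrative toy whose connectivity, normalization, and parameters are assumptions, not the authors' model): translation-invariant cosine connectivity supports a continuum of bump attractors, so after the cue is removed the activity bump persists wherever the cue placed it.

```python
import numpy as np

N = 128
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)  # preferred positions
# Translation-invariant cosine weights -> a ring (continuum) of bump attractors.
W = np.cos(theta[:, None] - theta[None, :]) / N

def settle(cue_angle, steps=50):
    """Initialize activity with a bump at the cued angle, remove the cue, and
    relax under normalized ReLU dynamics; returns the final bump position."""
    r = np.maximum(np.cos(theta - cue_angle), 0.0)  # transient cue bump
    for _ in range(steps):
        r = np.maximum(W @ r, 0.0)  # recurrent drive with rectification
        r /= r.sum()                # crude global activity normalization
    return theta[np.argmax(r)]

pos_a = settle(1.0)
pos_b = settle(2.5)
print(pos_a, pos_b)
```

Two different cues leave the network in two different, self-sustained bump states, the hallmark of a continuous attractor; an external drive would then move the bump along the ring.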
Oscillations in Spurious States of the Associative Memory Model with Synaptic Depression
NASA Astrophysics Data System (ADS)
Murata, Shin; Otsubo, Yosuke; Nagata, Kenji; Okada, Masato
2014-12-01
The associative memory model is a typical neural network model that can store discretely distributed fixed-point attractors as memory patterns. When the network stores memory patterns extensively, however, the model has other attractors besides the memory patterns. These attractors are called spurious memories. Both spurious states and memory states are in equilibrium, so there is little difference between their dynamics. Recent physiological experiments have shown that the short-term synaptic dynamics known as synaptic depression decrease the efficacy of transmission to postsynaptic neurons according to the activity of presynaptic neurons. Previous studies revealed that synaptic depression destabilizes the memory states when the number of memory patterns is finite. However, it is very difficult to study the dynamical properties of the spurious states if the number of memory patterns is proportional to the number of neurons. We investigate the effect of synaptic depression on spurious states by Monte Carlo simulation. The results demonstrate that synaptic depression does not affect the memory states but mainly destabilizes the spurious states and induces periodic oscillations.
Breeding novel solutions in the brain: a model of Darwinian neurodynamics.
Szilágyi, András; Zachar, István; Fedor, Anna; de Vladar, Harold P; Szathmáry, Eörs
2016-01-01
Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture by a proof-of-principle model of evolutionary search in the brain that accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain. Methods: We combine known components of the brain - recurrent neural networks (acting as attractors), the action selection loop and implicit working memory - to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select for candidate solutions that are transiently stored in implicit working memory. Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions attractor networks occasionally produce recombinant patterns, increasing variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation, and novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during transmission of candidate solutions as messages between networks, and (iii) spontaneously generated, untrained patterns in spurious attractors. Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture can be used for fast search among stored solutions (by selection) and for evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.
Zhang, Kechen
2016-01-01
The problem of how the hippocampus encodes both spatial and nonspatial information at the cellular network level remains largely unresolved. Spatial memory is widely modeled through the theoretical framework of attractor networks, but standard computational models can only represent spaces that are much smaller than the natural habitat of an animal. We propose that hippocampal networks are built on a basic unit called a “megamap,” or a cognitive attractor map in which place cells are flexibly recombined to represent a large space. Its inherent flexibility gives the megamap a huge representational capacity and enables the hippocampus to simultaneously represent multiple learned memories and naturally carry nonspatial information at no additional cost. On the other hand, the megamap is dynamically stable, because the underlying network of place cells robustly encodes any location in a large environment given a weak or incomplete input signal from the upstream entorhinal cortex. Our results suggest a general computational strategy by which a hippocampal network enjoys the stability of attractor dynamics without sacrificing the flexibility needed to represent a complex, changing world. PMID:27193320
Nowicki, Dimitri; Siegelmann, Hava
2010-01-01
This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is, on one hand, a generalization of Radial Basis Function networks and, on the other, analogous in feature space to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation, is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on series of morphed faces. PMID:20552013
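The flavor of kernel-based recall can be sketched in a few lines (a toy in the spirit of the abstract, not the authors' exact construction): iterating a Gaussian-kernel-weighted average of the stored patterns pulls the state toward the nearest memory, and memories can be added or deleted simply by editing the rows of the memory matrix, without retraining a weight matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
d, M = 16, 5
memories = rng.standard_normal((M, d))  # continuous-valued stored patterns

def recall(x, beta=4.0, iters=30):
    """Iterated kernel smoothing: repeatedly move the state to the
    kernel-weighted average of the stored memories; for a sharp kernel
    this converges to the nearest stored pattern (its attractor)."""
    for _ in range(iters):
        d2 = np.sum((memories - x) ** 2, axis=1)  # squared distances
        w = np.exp(-beta * (d2 - d2.min()))       # shift max for stability
        w /= w.sum()
        x = w @ memories
    return x

cue = memories[2] + 0.3 * rng.standard_normal(d)  # noisy cue for memory 2
out = recall(cue)
err = np.linalg.norm(out - memories[2])
print(f"recall error: {err:.2e}")
```

Starting from a noisy cue, the iteration snaps onto the corresponding stored pattern; appending a new row to `memories` adds a new attractor on-line.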
Robust sequential working memory recall in heterogeneous cognitive networks
Rabinovich, Mikhail I.; Sokolov, Yury; Kozma, Robert
2014-01-01
Psychiatric disorders are often caused by partial heterogeneous disinhibition in cognitive networks, controlling sequential and spatial working memory (SWM). Such dynamic connectivity changes suggest that the normal relationship between the neuronal components within the network deteriorates. As a result, competitive network dynamics is qualitatively altered. This dynamics defines the robust recall of the sequential information from memory and, thus, the SWM capacity. To understand pathological and non-pathological bifurcations of the sequential memory dynamics, here we investigate the model of recurrent inhibitory-excitatory networks with heterogeneous inhibition. We consider the ensemble of units with all-to-all inhibitory connections, in which the connection strengths are monotonically distributed at some interval. Based on computer experiments and studying the Lyapunov exponents, we observed and analyzed the new phenomenon—clustered sequential dynamics. The results are interpreted in the context of the winnerless competition principle. Accordingly, clustered sequential dynamics is represented in the phase space of the model by two weakly interacting quasi-attractors. One of them is similar to the sequential heteroclinic chain—the regular image of SWM, while the other is a quasi-chaotic attractor. Coexistence of these quasi-attractors means that the recall of the normal information sequence is intermittently interrupted by episodes with chaotic dynamics. We indicate potential dynamic ways for augmenting damaged working memory and other cognitive functions. PMID:25452717
An Attractor Network in the Hippocampus: Theory and Neurophysiology
ERIC Educational Resources Information Center
Rolls, Edmund T.
2007-01-01
A quantitative computational theory of the operation of the CA3 system as an attractor or autoassociation network is described. Based on the proposal that CA3-CA3 autoassociative networks are important for episodic or event memory in which space is a component (place in rodents and spatial view in primates), it has been shown behaviorally that the…
ERIC Educational Resources Information Center
Lerner, Itamar; Bentin, Shlomo; Shriki, Oren
2012-01-01
Localist models of spreading activation (SA) and models assuming distributed representations offer very different takes on semantic priming, a widely investigated paradigm in word recognition and semantic memory research. In this study, we implemented SA in an attractor neural network model with distributed representations and created a unified…
Emergence of low noise frustrated states in E/I balanced neural networks.
Recio, I; Torres, J J
2016-12-01
We study emerging phenomena in binary neural networks where, with probability c, synaptic intensities are chosen according to a Hebbian prescription, and with probability (1-c) there is an extra random contribution to synaptic weights. This new term, randomly taken from a Gaussian bimodal distribution, balances the synaptic population in the network so that one has an 80%-20% E/I population ratio, mimicking the balance observed in mammalian cortex. For some regions of the relevant parameters, our system depicts standard memory (at low temperature) and non-memory attractors (at high temperature). However, as c decreases and the level of the underlying noise falls below a certain temperature T_t, a kind of memory-frustrated state, which resembles spin-glass behavior, sharply emerges. Contrary to what occurs in Hopfield-like neural networks, the frustrated state appears here even in the limit of the loading parameter α→0. Moreover, we observed that the frustrated state in fact corresponds to two states of non-vanishing activity uncorrelated with stored memories, associated, respectively, with a high-activity or Up state and a low-activity or Down state. Using a linear stability analysis, we found regions in the space of relevant parameters for locally stable steady states and demonstrated that frustrated states coexist with memory attractors below T_t. Multistability between memory and frustrated states is thus present for relatively small c, and metastability of memory attractors can emerge as c decreases even more. We studied our system using standard mean-field techniques and with Monte Carlo simulations, obtaining perfect agreement between theory and simulations. Our study may help explain the role of synapse heterogeneity in the emergence of stable Up and Down states not associated with memory attractors, and in exploring the conditions that induce transitions among them, as in sleep-wake transitions.
Mapping attractor fields in face space: the atypicality bias in face recognition.
Tanaka, J; Giles, M; Kremen, S; Simon, V
1998-09-01
A familiar face can be recognized across many changes in the stimulus input. In this research, the many-to-one mapping of face stimuli to a single face memory is referred to as a face memory's 'attractor field'. According to the attractor field approach, a face memory will be activated by any stimulus falling within the boundaries of its attractor field. It was predicted that, by virtue of its location in a multi-dimensional face space, the attractor field of an atypical face will be larger than the attractor field of a typical face. To test this prediction, subjects made likeness judgments to morphed faces that contained a 50/50 contribution from an atypical and a typical parent face. The main result of four experiments was that the morph face was judged to bear a stronger resemblance to the atypical parent face than to the typical parent face. The computational basis of the atypicality bias was demonstrated in a neural network simulation in which morph inputs of atypical and typical representations elicited stronger activation of atypical output units than of typical output units. Together, the behavioral and simulation evidence supports the view that the attractor fields of atypical faces span a broader region of face space than the attractor fields of typical faces.
Dempere-Marco, Laura; Melcher, David P; Deco, Gustavo
2012-01-01
The study of working memory capacity is of utmost importance in cognitive psychology as working memory is at the basis of general cognitive function. Although the working memory capacity limit has been thoroughly studied, its origin still remains a matter of strong debate. Only recently has the role of visual saliency in modulating working memory storage capacity been assessed experimentally and proved to provide valuable insights into working memory function. In the computational arena, attractor networks have successfully accounted for psychophysical and neurophysiological data in numerous working memory tasks given their ability to produce a sustained elevated firing rate during a delay period. Here we investigate the mechanisms underlying working memory capacity by means of a biophysically-realistic attractor network with spiking neurons while accounting for two recent experimental observations: 1) the presence of a visually salient item reduces the number of items that can be held in working memory, and 2) visually salient items are commonly kept in memory at the cost of not keeping as many non-salient items. Our model suggests that working memory capacity is determined by two fundamental processes: encoding of visual items into working memory and maintenance of the encoded items upon their removal from the visual display. While maintenance critically depends on the constraints that lateral inhibition imposes on the mnemonic activity, encoding is limited by the ability of the stimulated neural assemblies to reach a sufficiently high level of excitation, a process governed by the dynamics of competition and cooperation among neuronal pools. Encoding is therefore contingent upon the visual working memory task and has led us to introduce the concept of effective working memory capacity (eWMC) in contrast to the maximal upper capacity limit only reached under ideal conditions.
Lerner, Itamar; Bentin, Shlomo; Shriki, Oren
2012-01-01
Localist models of spreading activation (SA) and models assuming distributed representations offer very different takes on semantic priming, a widely investigated paradigm in word recognition and semantic memory research. In the present study we implemented SA in an attractor neural network model with distributed representations and created a unified framework for the two approaches. Our model assumes a synaptic depression mechanism leading to autonomous transitions between encoded memory patterns (latching dynamics), which accounts for the major characteristics of automatic semantic priming in humans. Using computer simulations we demonstrated how findings that challenged attractor-based networks in the past, such as mediated and asymmetric priming, are a natural consequence of our present model’s dynamics. Puzzling results regarding backward priming were also given a straightforward explanation. In addition, the current model addresses some of the differences between semantic and associative relatedness and explains how these differences interact with stimulus onset asynchrony in priming experiments. PMID:23094718
Memory recall and spike-frequency adaptation
NASA Astrophysics Data System (ADS)
Roach, James P.; Sander, Leonard M.; Zochowski, Michal R.
2016-05-01
The brain can reproduce memories from partial data; this ability is critical for memory recall. The process of memory recall has been studied using autoassociative networks such as the Hopfield model. This kind of model reliably converges to stored patterns that contain the memory. However, it is unclear how the brain controls this behavior so that, after convergence to one configuration, it can proceed to recognize another one. In the Hopfield model, this happens only through unrealistic changes of an effective global temperature that destabilizes all stored configurations. Here we show that spike-frequency adaptation (SFA), a common mechanism affecting neuron activation in the brain, can provide state-dependent control of pattern retrieval. We demonstrate this in a Hopfield network modified to include SFA, and also in a model network of biophysical neurons. In both cases, SFA allows for selective stabilization of attractors with different basins of attraction, and also for temporal dynamics of attractor switching that are not possible in standard autoassociative schemes. The dynamics of our models give a plausible account of different sorts of memory retrieval.
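The abstract's mechanism can be sketched in a few lines: a standard Hopfield network recalls a stored pattern from a corrupted cue, and an activity-dependent threshold standing in for spike-frequency adaptation then destabilizes the retrieved attractor. All parameters here (network size, adaptation gain `g`, decay `lam`) are illustrative choices, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
patterns = rng.choice([-1.0, 1.0], size=(2, N))   # two stored memories
W = (patterns.T @ patterns) / N                    # Hebbian weights
np.fill_diagonal(W, 0.0)

def overlap(s, p):
    return float(s @ p) / N

# 1) Recall from a corrupted cue (no adaptation): converges to the memory.
s = patterns[0].copy()
s[rng.choice(N, size=12, replace=False)] *= -1     # flip ~20% of bits
for _ in range(5):                                  # asynchronous sweeps
    for i in rng.permutation(N):
        s[i] = 1.0 if W[i] @ s >= 0.0 else -1.0
recalled = overlap(s, patterns[0])                  # close to 1.0

# 2) SFA, modeled as an activity-dependent threshold, builds up on the
#    active units and eventually destabilizes the retrieved attractor.
a = np.zeros(N)
g, lam = 1.5, 0.8                                   # adaptation gain / decay
overlaps = []
for _ in range(30):
    a = lam * a + (1.0 - lam) * (s > 0)             # grows on active units
    for i in rng.permutation(N):
        s[i] = 1.0 if W[i] @ s - g * a[i] >= 0.0 else -1.0
    overlaps.append(overlap(s, patterns[0]))
# min(overlaps) drops well below 1: the network leaves the recalled pattern
```

The adaptation term plays the role of the state-dependent control discussed in the abstract: it destabilizes only the currently occupied attractor, rather than raising a global temperature that destabilizes all of them.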
Ghosts in the machine: memory interference from the previous trial.
Papadimitriou, Charalampos; Ferdoash, Afreen; Snyder, Lawrence H
2015-01-15
Previous memoranda can interfere with the memorization or storage of new information, a concept known as proactive interference. Studies of proactive interference typically use categorical memoranda and match-to-sample tasks with categorical measures such as the proportion of correct to incorrect responses. In this study we instead train five macaques in a spatial memory task with continuous memoranda and responses, allowing us to more finely probe working memory circuits. We first ask whether the memoranda from the previous trial result in proactive interference in an oculomotor delayed response task. We then characterize the spatial and temporal profile of this interference and ask whether this profile can be predicted by an attractor network model of working memory. We find that memory in the current trial shows a bias toward the location of the memorandum of the previous trial. The magnitude of this bias increases with the duration of the memory period within which it is measured. Our simulations using standard attractor network models of working memory show that these models easily replicate the spatial profile of the bias. However, unlike the behavioral findings, these attractor models show an increase in bias with the duration of the previous rather than the current memory period. To model a bias that increases with current trial duration we posit two separate memory stores, a rapidly decaying visual store that resists proactive interference effects and a sustained memory store that is susceptible to proactive interference. Copyright © 2015 the American Physiological Society.
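The attractive-drift account of the bias can be caricatured in one dimension: the stored location diffuses during the delay while a weak residual attractor at the previous trial's location pulls it over, so the bias grows with the current memory period. The gain `k`, noise level `sigma`, and locations below are invented for illustration, not fitted to the monkey data.

```python
import numpy as np

rng = np.random.default_rng(1)
prev, target = 0.0, 1.0        # previous / current memoranda (radians)
k, sigma = 0.02, 0.05          # attractive drift gain, diffusion noise

def remembered(steps):
    """Diffuse the stored location; a weak residual attractor at the
    previous trial's location pulls the memory toward it."""
    th = target
    for _ in range(steps):
        th += k * np.sin(prev - th) + sigma * rng.normal()
    return th

bias_short = np.mean([target - remembered(5) for _ in range(400)])
bias_long = np.mean([target - remembered(50) for _ in range(400)])
# bias_long > bias_short: interference grows with the memory period
```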
Conceptual Hierarchies in a Flat Attractor Network
O’Connor, Christopher M.; Cree, George S.; McRae, Ken
2009-01-01
The structure of people’s conceptual knowledge of concrete nouns has traditionally been viewed as hierarchical (Collins & Quillian, 1969). For example, superordinate concepts (vegetable) are assumed to reside at a higher level than basic-level concepts (carrot). A feature-based attractor network with a single layer of semantic features developed representations of both basic-level and superordinate concepts. No hierarchical structure was built into the network. In Experiment and Simulation 1, the graded structure of categories (typicality ratings) is accounted for by the flat attractor network. Experiment and Simulation 2 show that, as with basic-level concepts, such a network predicts feature verification latencies for superordinate concepts (vegetable).
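A toy sketch of the "flat" idea: if basic-level concepts and the superordinate live in the same feature space, graded typicality falls out as similarity to the category's central tendency, with no hierarchy built in. The feature sets below are invented, not the feature norms used in the study.

```python
import numpy as np

features = ["edible", "plant", "crunchy", "orange", "root", "green",
            "leafy", "small", "round", "bulbous", "bitter", "rare"]
idx = {f: i for i, f in enumerate(features)}

def vec(active):
    """Binary semantic-feature vector for a concept."""
    v = np.zeros(len(features))
    for f in active:
        v[idx[f]] = 1.0
    return v

# Toy basic-level concepts: flat representations, no hierarchy built in
carrot   = vec(["edible", "plant", "crunchy", "orange", "root"])
lettuce  = vec(["edible", "plant", "crunchy", "green", "leafy"])
pea      = vec(["edible", "plant", "green", "small", "round"])
kohlrabi = vec(["edible", "plant", "bulbous", "bitter", "rare"])  # atypical

# The superordinate "vegetable" is the central tendency of its members
vegetable = np.mean([carrot, lettuce, pea, kohlrabi], axis=0)

def typicality(concept):
    """Graded typicality as cosine similarity to the category prototype."""
    return concept @ vegetable / (
        np.linalg.norm(concept) * np.linalg.norm(vegetable))

# typicality(carrot) > typicality(kohlrabi): graded structure emerges
```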
Computations in the deep vs superficial layers of the cerebral cortex.
Rolls, Edmund T; Mills, W Patrick C
2017-11-01
A fundamental question is how the cerebral neocortex operates, functionally and computationally. The cerebral neocortex, with its superficial and deep layers and highly developed recurrent collateral systems that provide a basis for memory-related processing, might perform somewhat different computations in the superficial and deep layers. Here we take into account the quantitative connectivity within and between laminae. Using integrate-and-fire neuronal network simulations that incorporate this connectivity, we first show that attractor networks implemented in the deep layers that are activated by the superficial layers could be partly independent, in that the deep layers might have a different time course, which, because of adaptation, might be more transient and useful for outputs from the neocortex. In contrast, the superficial layers could implement more prolonged firing, useful for slow learning and for short-term memory. Second, we show that a different type of computation could in principle be performed in the superficial and deep layers, by showing that the superficial layers could operate as a discrete attractor network useful for categorisation and for feeding information forward up a cortical hierarchy, whereas the deep layers could operate as a continuous attractor network useful for providing a spatially and temporally smooth output to output systems in the brain. A key advance is that we draw attention to the functions of the recurrent collateral connections between cortical pyramidal cells, often omitted in canonical models of the neocortex, and address principles of operation of the neocortex by which the superficial and deep layers might be specialized for different types of attractor-related memory functions implemented by the recurrent collaterals. Copyright © 2017 Elsevier Inc. All rights reserved.
Unipolar Terminal-Attractor Based Neural Associative Memory with Adaptive Threshold
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)
1996-01-01
A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. Threshold values for the dynamic iteration of the unipolar binary neuron states are set adaptively, with terminal attractors used to reduce spurious states in a Hopfield neural network for associative memory; combined with the inner-product approach, this achieves perfect convergence and correct retrieval. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC to calculate adaptive threshold values, yielding a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.
Unipolar terminal-attractor based neural associative memory with adaptive threshold
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)
1993-01-01
A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. Threshold values for the dynamic iteration of the unipolar binary neuron states are set adaptively, with terminal attractors used to reduce spurious states in a Hopfield neural network for associative memory; combined with the inner-product approach, this achieves perfect convergence and correct retrieval. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC to calculate adaptive threshold values, yielding a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.
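The defining property of a terminal attractor, finite-time convergence due to a violated Lipschitz condition at the fixed point, can be checked numerically. Below is a minimal sketch comparing dx/dt = -x^(1/3) (terminal) with dx/dt = -x (regular); the step size and horizon are arbitrary choices.

```python
import numpy as np

dt, T = 1e-3, 2.0
steps = int(T / dt)
x_reg, x_term = 1.0, 1.0            # same initial condition for both

for _ in range(steps):
    x_reg += dt * (-x_reg)           # regular attractor: dx/dt = -x
    # Terminal attractor: dx/dt = -x^(1/3). The right-hand side violates
    # the Lipschitz condition at x = 0, so the origin is reached in
    # *finite* time; clamp at 0 to absorb Euler overshoot.
    x_term = max(x_term - dt * np.cbrt(x_term), 0.0)

# Analytic arrival time for the terminal attractor from x0 = 1 is
# t* = (3/2) x0^(2/3) = 1.5, so x_term has hit 0 exactly by t = 2.0,
# while the regular attractor is still at e^(-2), about 0.135.
```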
Stringer, Simon M; Rolls, Edmund T
2006-12-01
A key issue is how networks in the brain learn to perform path integration, that is, update a represented position using a velocity signal. Using head direction cells as an example, we show that a competitive network could self-organize to learn to respond to combinations of head direction and angular head rotation velocity. These combination cells can then be used to drive a continuous attractor network to the next head direction based on the incoming rotation signal. An associative synaptic modification rule with a short term memory trace enables preceding combination cell activity during training to be associated with the next position in the continuous attractor network. The network accounts for the presence of neurons found in the brain that respond to combinations of head direction and angular head rotation velocity. Analogous networks in the hippocampal system could self-organize to perform path integration of place and spatial view representations.
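A heavily simplified caricature of the scheme: the combination (head direction x angular velocity) cells are reduced here to a gated shift of the activity packet around a ring of head-direction cells, so the represented heading integrates the velocity signal. The cell count and velocity sequence are invented, and the learned competitive dynamics are not modeled.

```python
import numpy as np

N = 36                        # head-direction cells, one per 10 degrees
bump = np.zeros(N)
bump[0] = 1.0                 # activity packet starts at 0 degrees

def step(bump, rotation):
    """Combination cells gate an asymmetric shift of the packet:
    +1 rotates one cell clockwise, -1 one cell counterclockwise."""
    return np.roll(bump, rotation)

# Integrate an angular-velocity sequence (units of one cell = 10 degrees)
velocity = [1, 1, 1, -1, 1, 1]
for v in velocity:
    bump = step(bump, v)

heading = int(np.argmax(bump))  # net rotation: (0 + sum(velocity)) % N
```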
Cusps enable line attractors for neural computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Zhuocheng; Zhang, Jiwei; Sornborger, Andrew T.
Here, line attractors in neuronal networks have been suggested to be the basis of many brain functions, such as working memory, oculomotor control, head movement, locomotion, and sensory processing. In this paper, we make the connection between line attractors and pulse gating in feed-forward neuronal networks. In this context, because of their neutral stability along a one-dimensional manifold, line attractors are associated with a time-translational invariance that allows graded information to be propagated from one neuronal population to the next. To understand how pulse-gating manifests itself in a high-dimensional, nonlinear, feedforward integrate-and-fire network, we use a Fokker-Planck approach to analyze system dynamics. We make a connection between pulse-gated propagation in the Fokker-Planck and population-averaged mean-field (firing rate) models, and then identify an approximate line attractor in state space as the essential structure underlying graded information propagation. An analysis of the line attractor shows that it consists of three fixed points: a central saddle with an unstable manifold along the line and stable manifolds orthogonal to the line, which is surrounded on either side by stable fixed points. Along the manifold defined by the fixed points, slow dynamics give rise to a ghost. We show that this line attractor arises at a cusp catastrophe, where a fold bifurcation develops as a function of synaptic noise; and that the ghost dynamics near the fold of the cusp underlie the robustness of the line attractor. Understanding the dynamical aspects of this cusp catastrophe allows us to show how line attractors can persist in biologically realistic neuronal networks and how the interplay of pulse gating, synaptic coupling, and neuronal stochasticity can be used to enable attracting one-dimensional manifolds and, thus, dynamically control the processing of graded information.
Cusps enable line attractors for neural computation
NASA Astrophysics Data System (ADS)
Xiao, Zhuocheng; Zhang, Jiwei; Sornborger, Andrew T.; Tao, Louis
2017-11-01
Line attractors in neuronal networks have been suggested to be the basis of many brain functions, such as working memory, oculomotor control, head movement, locomotion, and sensory processing. In this paper, we make the connection between line attractors and pulse gating in feed-forward neuronal networks. In this context, because of their neutral stability along a one-dimensional manifold, line attractors are associated with a time-translational invariance that allows graded information to be propagated from one neuronal population to the next. To understand how pulse-gating manifests itself in a high-dimensional, nonlinear, feedforward integrate-and-fire network, we use a Fokker-Planck approach to analyze system dynamics. We make a connection between pulse-gated propagation in the Fokker-Planck and population-averaged mean-field (firing rate) models, and then identify an approximate line attractor in state space as the essential structure underlying graded information propagation. An analysis of the line attractor shows that it consists of three fixed points: a central saddle with an unstable manifold along the line and stable manifolds orthogonal to the line, which is surrounded on either side by stable fixed points. Along the manifold defined by the fixed points, slow dynamics give rise to a ghost. We show that this line attractor arises at a cusp catastrophe, where a fold bifurcation develops as a function of synaptic noise; and that the ghost dynamics near the fold of the cusp underlie the robustness of the line attractor. Understanding the dynamical aspects of this cusp catastrophe allows us to show how line attractors can persist in biologically realistic neuronal networks and how the interplay of pulse gating, synaptic coupling, and neuronal stochasticity can be used to enable attracting one-dimensional manifolds and, thus, dynamically control the processing of graded information.
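The neutral stability that defines a line attractor is easy to exhibit in a linear firing-rate sketch (not the integrate-and-fire or Fokker-Planck models of the paper): a recurrent weight matrix with one eigenvalue exactly at 1 leaves the corresponding mode neither growing nor decaying, so a graded value set by a transient input is held indefinitely while the orthogonal mode decays.

```python
import numpy as np

# Recurrent weights with eigenvalue 1 along (1, 1) and eigenvalue 0
# orthogonal to it: the (1, 1) mode is the line attractor.
M = np.array([[0.5, 0.5],
              [0.5, 0.5]])
r = np.array([1.0, 0.2])        # state set by a transient input pulse

dt = 0.1
for _ in range(1000):           # relax for 100 time constants
    r = r + dt * (-r + M @ r)   # dr/dt = -r + M r

# r converges to the projection of the initial state onto the line,
# (0.6, 0.6): the graded value (the sum of the rates) is preserved.
```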
Neural network modeling of associative memory: Beyond the Hopfield model
NASA Astrophysics Data System (ADS)
Dasgupta, Chandan
1992-07-01
A number of neural network models, in which fixed-point and limit-cycle attractors of the underlying dynamics are used to store and associatively recall information, are described. In the first class of models, a hierarchical structure is used to store an exponentially large number of strongly correlated memories. The second class of models uses limit cycles to store and retrieve individual memories. A neurobiologically plausible network that generates low-amplitude periodic variations of activity, similar to the oscillations observed in electroencephalographic recordings, is also described. Results obtained from analytic and numerical studies of the properties of these networks are discussed.
Ben Abdallah, Emna; Folschette, Maxime; Roux, Olivier; Magnin, Morgan
2017-01-01
This paper addresses the problem of finding attractors in biological regulatory networks. We focus here on non-deterministic synchronous and asynchronous multi-valued networks, modeled using automata networks (AN). AN is a general and well-suited formalism to study complex interactions between different components (genes, proteins, ...). An attractor is a minimal trap domain, that is, a part of the state-transition graph that cannot be escaped. Such structures are terminal components of the dynamics and take the form of steady states (singletons) or complex compositions of cycles (non-singletons). Studying the effect of a disease or a mutation on an organism requires finding the attractors in the model to understand the long-term behaviors. We present a computational logical method based on answer set programming (ASP) to identify all attractors. Performed without any network reduction, the method can be applied to any dynamical semantics. In this paper, we present the two most widespread non-deterministic semantics: the asynchronous and the synchronous updating modes. The logical approach goes through a complete enumeration of the states of the network in order to find the attractors, without the necessity of constructing the whole state-transition graph. We perform extensive computational experiments, which show good performance and fit the theoretical results expected from the literature. The originality of our approach lies in the exhaustive enumeration of all possible (sets of) states verifying the properties of an attractor, thanks to the use of ASP. Our method is applied to non-deterministic semantics in two different schemes (asynchronous and synchronous). The merits of our methods are illustrated by applying them to biological examples of various sizes and comparing the results with some existing approaches.
Our approach succeeds in exhaustively enumerating, on a desktop computer, all attractors up to a given size (20 states) in a large model (100 components); this size is limited only by memory and computation time.
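On a toy synchronous Boolean network, the exhaustive enumeration can be illustrated by brute force over the full state space (the paper's ASP method avoids this enumeration of the state-transition graph and also covers asynchronous, multi-valued semantics; the three-gene update rules here are invented):

```python
from itertools import product

# Toy 3-gene synchronous Boolean network (invented update rules):
#   a' = b,  b' = a,  c' = a and b
def step(state):
    a, b, c = state
    return (b, a, a and b)

def attractors():
    """Under synchronous updating every state leads to exactly one
    attractor: iterate until a state repeats, then extract the cycle.
    Attractors are minimal trap domains: steady states (size 1) or
    cycles (size > 1)."""
    found = set()
    for s in product((0, 1), repeat=3):
        seen = []
        while s not in seen:
            seen.append(s)
            s = step(s)
        cycle = seen[seen.index(s):]      # the trapped cyclic part
        found.add(frozenset(cycle))
    return found

atts = attractors()
# Three attractors: steady states {000} and {111}, and a 2-cycle {100, 010}
```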
Spike-Based Bayesian-Hebbian Learning of Temporal Sequences
Lindén, Henrik; Lansner, Anders
2016-01-01
Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model’s feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire model neurons (AdEx). We show that the learning and speed of sequence replay depends on a confluence of biophysically relevant parameters including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison. PMID:27213810
Brain mechanisms for perceptual and reward-related decision-making.
Deco, Gustavo; Rolls, Edmund T; Albantakis, Larissa; Romo, Ranulfo
2013-04-01
Phenomenological models of decision-making, including the drift-diffusion and race models, are compared with mechanistic, biologically plausible models, such as integrate-and-fire attractor neuronal network models. The attractor network models show how decision confidence is an emergent property; and make testable predictions about the neural processes (including neuronal activity and fMRI signals) involved in decision-making, which indicate that the medial prefrontal cortex is involved in reward value-based decision-making. Synaptic facilitation in these models can help to account for sequential vibrotactile decision-making, and for how postponed decision-related responses are made. The randomness in the neuronal spiking-related noise that makes the decision-making probabilistic is shown to be increased by the graded firing rate representations found in the brain, to be decreased by the diluted connectivity, and still to be significant in biologically large networks with thousands of synapses onto each neuron. The stability of these systems is shown to be influenced in different ways by glutamatergic and GABAergic efficacy, leading to a new field of dynamical neuropsychiatry with applications to understanding schizophrenia and obsessive-compulsive disorder. The noise in these systems is shown to be advantageous, and to apply to similar attractor networks involved in short-term memory, long-term memory, attention, and associative thought processes. Copyright © 2012 Elsevier Ltd. All rights reserved.
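For contrast with the mechanistic attractor models, the phenomenological drift-diffusion model mentioned in the abstract can be simulated in a few lines: noisy evidence is integrated until one of two bounds is hit, yielding probabilistic choices and decision times. The drift, noise, and bound values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def ddm_trial(mu=1.0, sigma=1.0, a=1.0, dt=0.01):
    """One drift-diffusion trial: integrate noisy evidence until a
    bound at +a (correct) or -a (error) is crossed.
    Returns (choice_correct, decision_time)."""
    x, t = 0.0, 0.0
    while abs(x) < a:
        x += mu * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return (x >= a), t

results = [ddm_trial() for _ in range(500)]
p_correct = np.mean([c for c, _ in results])
mean_rt = np.mean([t for _, t in results])
# p_correct is close to 1 / (1 + exp(-2*mu*a/sigma**2)), about 0.88
```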
NASA Astrophysics Data System (ADS)
Szejka, Agnes; Drossel, Barbara
2010-02-01
We study the evolution of Boolean networks as model systems for gene regulation. Inspired by biological networks, we select simultaneously for robust attractors and for the ability to respond to external inputs by changing the attractor. Mutations change the connections between the nodes and the update functions. In order to investigate the influence of the type of update functions, we perform our simulations with canalizing as well as with threshold functions. We compare the properties of the fitness landscapes that result for different versions of the selection criterion and the update functions. We find that for all studied cases the fitness landscape has a plateau with maximum fitness, meaning that structurally very different networks are able to fulfill the same task and are connected by neutral paths in network (“genotype”) space. Furthermore, we find a connection between the attractor length and the mutational robustness, and an extremely long memory of the initial evolutionary stage.
Rolls, Edmund T
2017-05-01
The art of memory (ars memoriae) used since classical times includes using a well-known scene to associate each view or part of the scene with a different item in a speech. This memory technique is also known as the "method of loci." The new theory is proposed that this type of memory is implemented in the CA3 region of the hippocampus where there are spatial view cells in primates that allow a particular view to be associated with a particular object in an event or episodic memory. Given that the CA3 cells with their extensive recurrent collateral system connecting different CA3 cells, and associative synaptic modifiability, form an autoassociation or attractor network, the spatial view cells with their approximately Gaussian view fields become linked in a continuous attractor network. As the view space is traversed continuously (e.g., by self-motion or imagined self-motion across the scene), the views are therefore successively recalled in the correct order, with no view missing, and with low interference between the items to be recalled. Given that each spatial view has been associated with a different discrete item, the items are recalled in the correct order, with none missing. This is the first neuroscience theory of ars memoriae. The theory provides a foundation for understanding how a key feature of ars memoriae, the ability to use a spatial scene to encode a sequence of items to be remembered, is implemented. © 2017 Wiley Periodicals, Inc.
Long-Term Memory Stabilized by Noise-Induced Rehearsal
Wei, Yi
2014-01-01
Cortical networks can maintain memories for decades despite the short lifetime of synaptic strengths. Can a neural network store long-lasting memories in unstable synapses? Here, we study the effects of ongoing spike-timing-dependent plasticity (STDP) on the stability of memory patterns stored in synapses of an attractor neural network. We show that certain classes of STDP rules can stabilize all stored memory patterns despite a short lifetime of synapses. In our model, unstructured neural noise, after passing through the recurrent network connections, carries the imprint of all memory patterns in temporal correlations. STDP, combined with these correlations, leads to reinforcement of all stored patterns, even those that are never explicitly visited. Our findings may provide the functional reason for irregular spiking displayed by cortical neurons and justify models of system memory consolidation. Therefore, we propose that irregular neural activity is the feature that helps cortical networks maintain stable connections. PMID:25411507
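A minimal sketch of the pair-based STDP rule underlying the model class discussed here: causal (pre-before-post) spike pairs strengthen a synapse and anti-causal pairs weaken it, so temporally correlated noise can selectively reinforce stored patterns. The amplitudes and time constant below are illustrative, not the paper's.

```python
import numpy as np

# Pair-based STDP window: pre-before-post potentiates (LTP),
# post-before-pre depresses (LTD). Constants are illustrative.
A_plus, A_minus, tau = 0.01, 0.012, 20.0   # tau in ms

def stdp(dt_ms):
    """Weight change for one pre/post spike pair; dt_ms = t_post - t_pre."""
    if dt_ms >= 0:
        return A_plus * np.exp(-dt_ms / tau)      # causal pair: LTP
    return -A_minus * np.exp(dt_ms / tau)         # anti-causal pair: LTD

w = 0.5
for _ in range(100):       # repeated causal pairing: post 5 ms after pre
    w += stdp(5.0)
# w has grown; reversed timing (stdp(-5.0)) would shrink it instead
```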
A Mismatch-Based Model for Memory Reconsolidation and Extinction in Attractor Networks
Amaral, Olavo B.
2011-01-01
The processes of memory reconsolidation and extinction have received increasing attention in recent experimental research, as their potential clinical applications begin to be uncovered. A number of studies suggest that amnestic drugs injected after reexposure to a learning context can disrupt either of the two processes, depending on the behavioral protocol employed. Hypothesizing that reconsolidation represents updating of a memory trace in the hippocampus, while extinction represents formation of a new trace, we have built a neural network model in which either simple retrieval, reconsolidation or extinction of a stored attractor can occur upon contextual reexposure, depending on the similarity between the representations of the original learning and reexposure sessions. This is achieved by assuming that independent mechanisms mediate Hebbian-like synaptic strengthening and mismatch-driven labilization of synaptic changes, with protein synthesis inhibition preferentially affecting the former. Our framework provides a unified mechanistic explanation for experimental data showing (a) the effect of reexposure duration on the occurrence of reconsolidation or extinction and (b) the requirement of memory updating during reexposure to drive reconsolidation. PMID:21826231
Brain-Based Devices for Neuromorphic Computer Systems
2013-07-01
This report describes spiking neural models, applies them to a recognition task, and demonstrates a working memory; in the course of this work, a new analytical method for spiking data was developed. (Fragmentary record; cites: Deco, G. (2012). Effective Visual Working Memory Capacity: An Emergent Effect from the Neural Dynamics in an Attractor Network. PLoS ONE 7, e42719.)
A quantitative theory of the functions of the hippocampal CA3 network in memory
Rolls, Edmund T.
2013-01-01
A quantitative computational theory of the operation of the hippocampal CA3 system as an autoassociation or attractor network used in episodic memory is described. In this theory, the CA3 system operates as a single attractor or autoassociation network to enable rapid, one-trial associations between any spatial location (place in rodents, or spatial view in primates) and an object or reward, and to provide for completion of the whole memory during recall from any part. The theory is extended to associations between time and object or reward to implement temporal order memory, which is also important in episodic memory. The dentate gyrus (DG) performs pattern separation by competitive learning to produce sparse representations suitable for setting up new representations in CA3 during learning, producing for example neurons with place-like fields from entorhinal cortex grid cells. Via the very small number of mossy fiber (MF) connections to CA3, the dentate granule cells produce a randomizing pattern separation effect, important during learning but not recall, that makes the patterns represented by CA3 firing very different from each other, which is optimal for an unstructured episodic memory system in which each memory must be kept distinct from other memories. The direct perforant path (pp) input to CA3 is quantitatively appropriate to provide the cue for recall in CA3, but not for learning. Tests of the theory, including hippocampal subregion analyses and hippocampal NMDA receptor knockouts, are described and support the theory. PMID:23805074
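The "completion of the whole memory during recall from any part" that this theory attributes to CA3 can be illustrated with a minimal Hopfield-style autoassociative network. This is only a sketch: the network size, random patterns, and one-shot Hebbian rule below are illustrative stand-ins for the CA3 recurrent collateral system, not the quantitative model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 3                                  # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# One-shot Hebbian storage on the recurrent weights, zero self-connections
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def recall(cue, max_steps=20):
    """Iterate sign-threshold updates until a fixed point (an attractor)."""
    s = cue.copy()
    for _ in range(max_steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Present a partial cue: pattern 0 with 12 of its 64 entries flipped
cue = patterns[0].copy()
cue[:12] *= -1
completed = recall(cue)
overlap = completed @ patterns[0] / N         # 1.0 means perfect completion
```

Starting from a cue whose overlap with the stored pattern is only 0.625, the dynamics fall into the corresponding attractor and restore the full pattern.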
Long-term memory stabilized by noise-induced rehearsal.
Wei, Yi; Koulakov, Alexei A
2014-11-19
Cortical networks can maintain memories for decades despite the short lifetime of synaptic strengths. Can a neural network store long-lasting memories in unstable synapses? Here, we study the effects of ongoing spike-timing-dependent plasticity (STDP) on the stability of memory patterns stored in synapses of an attractor neural network. We show that certain classes of STDP rules can stabilize all stored memory patterns despite a short lifetime of synapses. In our model, unstructured neural noise, after passing through the recurrent network connections, carries the imprint of all memory patterns in temporal correlations. STDP, combined with these correlations, leads to reinforcement of all stored patterns, even those that are never explicitly visited. Our findings may provide the functional reason for irregular spiking displayed by cortical neurons and justify models of system memory consolidation. Therefore, we propose that irregular neural activity is the feature that helps cortical networks maintain stable connections.
Elements of the cellular metabolic structure
De la Fuente, Ildefonso M.
2015-01-01
A large number of studies have demonstrated the existence of metabolic covalent modifications in different molecular structures, which are able to store biochemical information that is not encoded by DNA. Some of these covalent mark patterns can be transmitted across generations (epigenetic changes). Recently, the emergence of Hopfield-like attractor dynamics has been observed in self-organized enzymatic networks, which have the capacity to store functional catalytic patterns that can be correctly recovered by specific input stimuli. Hopfield-like metabolic dynamics are stable and can be maintained as a long-term biochemical memory. In addition, specific molecular information can be transferred from the functional dynamics of the metabolic networks to the enzymatic activity involved in covalent post-translational modulation, so that determined functional memory can be embedded in multiple stable molecular marks. The metabolic dynamics governed by Hopfield-type attractors (functional processes), as well as the enzymatic covalent modifications of specific molecules (structural dynamic processes) seem to represent the two stages of the dynamical memory of cellular metabolism (metabolic memory). Epigenetic processes appear to be the structural manifestation of this cellular metabolic memory. Here, a new framework for molecular information storage in the cell is presented, which is characterized by two functionally and molecularly interrelated systems: a dynamic, flexible and adaptive system (metabolic memory) and an essentially conservative system (genetic memory). The molecular information of both systems seems to coordinate the physiological development of the whole cell. PMID:25988183
You, Hongzhi; Wang, Da-Hui
2017-01-01
Neural networks configured with winner-take-all (WTA) competition and N-methyl-D-aspartate receptor (NMDAR)-mediated synaptic dynamics are endowed with various dynamic characteristics of attractors underlying many cognitive functions. This paper presents a novel method for neuromorphic implementation of a two-variable WTA circuit with NMDARs, aimed at implementing decision-making, working memory and hysteresis in visual perception. The proposed method is a dynamical-systems approach to circuit synthesis based on a biophysically plausible WTA model. Notably, the slow and non-linear temporal dynamics of NMDAR-mediated synapses were generated. Circuit simulations in Cadence reproduced the ramping neural activities observed in electrophysiological recordings in decision-making experiments, the sustained activities observed in the prefrontal cortex during working memory, and classical hysteresis behavior during visual discrimination tasks. Furthermore, theoretical analysis of the dynamical-systems approach illuminated the underlying mechanisms of decision-making, memory capacity and hysteresis loops. The consistency between the circuit simulations and the theoretical analysis demonstrated that the WTA circuit with NMDARs was able to capture the attractor dynamics underlying these cognitive functions. Their physical implementations as elementary modules are promising for assembly into integrated neuromorphic cognitive systems. PMID:28223913
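The winner-take-all competition at the heart of such a circuit can be sketched with a two-unit rate model. This is an abstract stand-in: the weights, saturating nonlinearity, and time constants below are illustrative choices, not the NMDAR-mediated biophysics or the Cadence circuit of the paper.

```python
import math

def wta(I1, I2, T=100.0, dt=0.1, w_self=2.0, w_inh=4.0, tau=2.0):
    """Two-unit winner-take-all: self-excitation plus mutual inhibition,
    with a saturating (tanh) rate nonlinearity."""
    r1 = r2 = 0.0
    for _ in range(int(T / dt)):
        f1 = math.tanh(max(0.0, I1 + w_self * r1 - w_inh * r2))
        f2 = math.tanh(max(0.0, I2 + w_self * r2 - w_inh * r1))
        r1 += dt / tau * (-r1 + f1)
        r2 += dt / tau * (-r2 + f2)
    return r1, r2

# The unit receiving the stronger input wins and suppresses the other
r_win, r_lose = wta(1.0, 0.8)
```

Even a modest input difference is amplified by the mutual inhibition until one unit saturates and the other is silenced, which is the attractor behavior underlying categorical decisions.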
Spatiotemporal discrimination in neural networks with short-term synaptic plasticity
NASA Astrophysics Data System (ADS)
Shlaer, Benjamin; Miller, Paul
2015-03-01
Cells in recurrently connected neural networks exhibit bistability, which allows stimulus information to persist in a circuit even after stimulus offset, i.e. short-term memory. However, such a system does not have enough hysteresis to encode temporal information about the stimuli. The biophysically described phenomenon of synaptic depression decreases synaptic transmission strengths due to increased presynaptic activity. This short-term reduction in synaptic strengths can destabilize attractor states in excitatory recurrent neural networks, causing the network to move along stimulus-dependent dynamical trajectories. Such a network can successfully separate amplitudes and durations of stimuli from the number of successive stimuli (Stimulus number, duration and intensity encoding in randomly connected attractor networks with synaptic depression, Front. Comput. Neurosci. 7:59), and so provides a strong candidate network for the encoding of spatiotemporal information. Here we explicitly demonstrate the capability of a recurrent neural network with short-term synaptic depression to discriminate between the temporal sequences in which spatial stimuli are presented.
Li, X Y; Yang, G W; Zheng, D S; Guo, W S; Hung, W N N
2015-04-28
Genetic regulatory networks are the key to understanding biochemical systems. The condition of a genetic regulatory network under different living environments can be modeled as a synchronous Boolean network. The attractors of these Boolean networks help biologists to identify determinant and stable factors. Existing methods identify attractors based on a random initial state or the entire state space simultaneously; they cannot identify fixed-length attractors directly, and their time complexity increases exponentially with the number and length of the attractors. This study used bounded model checking to quickly locate fixed-length attractors. Based on a SAT solver, we propose a new algorithm for efficiently computing fixed-length attractors, which is more suitable for large Boolean networks and networks with numerous attractors. Empirical experiments on biochemical systems, including a comparison with the tool BooleNet, demonstrated the feasibility and efficiency of our approach.
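The object of study here, attractors of a synchronous Boolean network, can be made concrete with a brute-force baseline that simulates every one of the 2^n states. The three-gene update rules below are hypothetical, and this exhaustive approach is exactly what the paper's SAT-based bounded model checking is designed to avoid on large networks; it is shown only to fix the definitions.

```python
from itertools import product

def step(state):
    """Hypothetical 3-gene synchronous update rules."""
    a, b, c = state
    return (b and not c, a or c, not a)

def find_attractors(step_fn, n):
    """Exhaustively simulate every initial state; each trajectory of a
    finite deterministic system must eventually enter a cycle (an attractor)."""
    attractors = set()
    for start in product((False, True), repeat=n):
        index, traj = {}, []
        s = start
        while s not in index:
            index[s] = len(traj)
            traj.append(s)
            s = step_fn(s)
        attractors.add(frozenset(traj[index[s]:]))   # the cycle states
    return attractors

atts = find_attractors(step, 3)
fixed_points = [a for a in atts if len(a) == 1]      # attractors of length 1
```

Filtering `atts` by cycle length is the naive analogue of the paper's fixed-length attractor query; this toy network has two fixed points and one 2-cycle.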
Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.
Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu
2017-10-01
This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. It is therefore natural to ask how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose fractional calculus as the mathematical tool for implementing FHNN. First, we implement a fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neurons' fractional order. FHNN possesses fractional-order stability and fractional-order sensitivity characteristics.
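The key ingredient, fractional differentiation, can be grounded numerically with the Grünwald-Letnikov definition. This is a generic sketch of a fractional derivative, not the fractor circuit or the fractional steepest descent rule of the paper.

```python
import math

def gl_derivative(f, t, alpha, h=1e-3):
    """Gruenwald-Letnikov fractional derivative of order alpha at t:
    D^alpha f(t) ~ h**(-alpha) * sum_k (-1)**k * C(alpha, k) * f(t - k*h)."""
    total, coeff = 0.0, 1.0
    for k in range(int(t / h) + 1):
        total += coeff * f(t - k * h)
        coeff *= (k - alpha) / (k + 1)    # recurrence for (-1)^k C(alpha, k)
    return total / h ** alpha

# alpha = 1 recovers the ordinary derivative: d/dt of t is 1
d1 = gl_derivative(lambda t: t, 1.0, 1.0)
# alpha = 0.5 of the constant 1 is t**(-0.5) / Gamma(0.5) = 1 / sqrt(pi * t)
d_half = gl_derivative(lambda t: 1.0, 1.0, 0.5)
```

The half-derivative of a constant being nonzero is the "long-term memory and nonlocality" the abstract refers to: the operator weighs the entire history of f, not just its local slope.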
How to Compress Sequential Memory Patterns into Periodic Oscillations: General Reduction Rules
Zhang, Kechen
2017-01-01
A neural network with symmetric reciprocal connections always admits a Lyapunov function, whose minima correspond to the memory states stored in the network. Networks with suitable asymmetric connections can store and retrieve a sequence of memory patterns, but the dynamics of these networks cannot be characterized as readily as that of the symmetric networks due to the lack of established general methods. Here, a reduction method is developed for a class of asymmetric attractor networks that store sequences of activity patterns as associative memories, as in a Hopfield network. The method projects the original activity pattern of the network to a low-dimensional space such that sequential memory retrievals in the original network correspond to periodic oscillations in the reduced system. The reduced system is self-contained and provides quantitative information about the stability and speed of sequential memory retrievals in the original network. The time evolution of the overlaps between the network state and the stored memory patterns can also be determined from extended reduced systems. The reduction procedure can be summarized by a few reduction rules, which are applied to several network models, including coupled networks and networks with time-delayed connections, and the analytical solutions of the reduced systems are confirmed by numerical simulations of the original networks. Finally, a local learning rule that provides an approximation to the connection weights involving the pseudoinverse is also presented. PMID:24877729
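The sequence-storing asymmetric connections analyzed above can be illustrated in a discrete-time toy version: purely asymmetric Hebbian-like weights map each stored pattern onto its successor, so the network state cycles through the sequence. This synchronous sign-update sketch does not capture the continuous-time dynamics that the paper's reduction method addresses; the sizes and patterns are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 3
xi = rng.choice([-1, 1], size=(P, N))         # a cyclic sequence of patterns

# Asymmetric weights: each pattern is mapped onto its successor (cyclically)
W = sum(np.outer(xi[(m + 1) % P], xi[m]) for m in range(P)) / N

s = xi[0].copy()
overlaps = []
for t in range(2 * P):                        # two full periods
    s = np.sign(W @ s)
    s[s == 0] = 1
    overlaps.append(s @ xi[(t + 1) % P] / N)  # overlap with the expected pattern
```

At every step the state lands (up to small crosstalk) on the next pattern in the sequence, i.e. sequential retrieval is a periodic orbit, which is the behavior the reduced systems in the paper describe quantitatively.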
Complex networks with large numbers of labelable attractors
NASA Astrophysics Data System (ADS)
Mi, Yuanyuan; Zhang, Lisheng; Huang, Xiaodong; Qian, Yu; Hu, Gang; Liao, Xuhong
2011-09-01
Information storage in many functional subsystems of the brain is regarded by theoretical neuroscientists to be related to attractors of neural networks. The number of attractors is large, and each attractor can be temporarily represented or suppressed easily by a corresponding external stimulus. In this letter, we discover that complex networks consisting of excitable nodes have a similarly fascinating property: the coexistence of large numbers of oscillatory attractors, most of which can be labeled with a few nodes. According to a simple labeling rule, different attractors can be identified and the number of labelable attractors can be predicted from an analysis of the network topology. With the cues of the labeling association, these attractors can be conveniently retrieved or suppressed on purpose.
Robust Working Memory in an Asynchronously Spiking Neural Network Realized with Neuromorphic VLSI.
Giulioni, Massimiliano; Camilleri, Patrick; Mattia, Maurizio; Dante, Vittorio; Braun, Jochen; Del Giudice, Paolo
2011-01-01
We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of leaky integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of "high" and "low"-firing activity. Depending on the overall excitability, transitions to the "high" state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the "high" state retains a "working memory" of a stimulus until well after its release. In the latter case, "high" states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated "corrupted" "high" states comprising neurons of both excitatory populations. Within a "basin of attraction," the network dynamics "corrects" such states and re-establishes the prototypical "high" state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.
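The bistable "working memory" behavior realized on the chip can be caricatured in software with a single self-excited rate unit: strong recurrent excitation creates coexisting "low" and "high" fixed points, and a brief stimulus switches the unit into the high state, which then persists after stimulus offset. The sigmoid, weights, and pulse timing below are illustrative assumptions, not the LIF/VLSI implementation of the paper.

```python
import math

def f(x):
    """Sigmoidal population gain function."""
    return 1.0 / (1.0 + math.exp(-(x - 3.0)))

def simulate(pulse, T=200.0, dt=0.1, w=8.0, tau=1.0):
    """Self-excited rate unit; a transient input (on for t in [20, 30))
    can switch it from the low to the high persistent state."""
    r = 0.0
    for i in range(int(T / dt)):
        t = i * dt
        I = pulse if 20.0 <= t < 30.0 else 0.0
        r += dt / tau * (-r + f(w * r + I))
    return r

r_spont = simulate(0.0)   # no stimulus: settles in the low state
r_mem = simulate(5.0)     # brief stimulus: "high" state persists after offset
```

The persistence long after the pulse ends (here, 170 time units) is the rate-model analogue of the seconds-long "high" states the hardware network sustains far beyond its intrinsic time constants.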
Counting and classifying attractors in high dimensional dynamical systems.
Bagley, R J; Glass, L
1996-12-07
Randomly connected Boolean networks have been used as mathematical models of neural, genetic, and immune systems. A key quantity of such networks is the number of basins of attraction in the state space. The number of basins of attraction changes as a function of the size of the network, its connectivity, and its transition rules. In discrete networks, a simple count of the number of attractors does not reveal the combinatorial structure of the attractors. These points are illustrated in a reexamination of dynamics in a class of random Boolean networks considered previously by Kauffman. We also consider comparisons between dynamics in discrete networks and continuous analogues. A continuous analogue of a discrete network may have a different number of attractors for many different reasons. Some attractors in discrete networks may be associated with unstable dynamics, and several different attractors in a discrete network may be associated with a single attractor in the continuous case. Special problems in determining attractors in continuous systems arise when there is aperiodic dynamics associated with quasiperiodicity or deterministic chaos.
Inhibition delay increases neural network capacity through Stirling transform.
Nogaret, Alain; King, Alastair
2018-03-01
Inhibitory neural networks are found to encode high volumes of information through delayed inhibition. We show that inhibition delay increases storage capacity through a Stirling transform of the minimum capacity which stabilizes locally coherent oscillations. We obtain both the exact and asymptotic formulas for the total number of dynamic attractors. Our results predict a (ln2)^{-N}-fold increase in capacity for an N-neuron network and demonstrate high-density associative memories which host a maximum number of oscillations in analog neural devices.
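For reference, the Stirling transform invoked above has a standard definition in terms of Stirling numbers of the second kind. Only the general formula is given here; how it is applied to the minimum capacity is specific to the paper and not reproduced.

```latex
b_n = \sum_{k=0}^{n} S(n,k)\, a_k,
\qquad
S(n,k) = k\,S(n-1,k) + S(n-1,k-1),
```

where $S(n,k)$ counts the partitions of an $n$-element set into $k$ nonempty blocks.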
NASA Astrophysics Data System (ADS)
Wuensche, Andrew
DDLab is interactive graphics software for creating, visualizing, and analyzing many aspects of Cellular Automata, Random Boolean Networks, and Discrete Dynamical Networks in general and studying their behavior, both from the time-series perspective — space-time patterns, and from the state-space perspective — attractor basins. DDLab is relevant to research, applications, and education in the fields of complexity, self-organization, emergent phenomena, chaos, collision-based computing, neural networks, content addressable memory, genetic regulatory networks, dynamical encryption, generative art and music, and the study of the abstract mathematical/physical/dynamical phenomena in their own right.
Robust autoassociative memory with coupled networks of Kuramoto-type oscillators
NASA Astrophysics Data System (ADS)
Heger, Daniel; Krischer, Katharina
2016-08-01
Uncertain recognition success, unfavorable scaling of connection complexity, or dependence on complex external input impair the usefulness of current oscillatory neural networks for pattern recognition or restrict technical realizations to small networks. We propose a network architecture of coupled oscillators for pattern recognition which shows none of the mentioned flaws. Furthermore we illustrate the recognition process with simulation results and analyze the dynamics analytically: Possible output patterns are isolated attractors of the system. Additionally, simple criteria for recognition success are derived from a lower bound on the basins of attraction.
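A minimal flavor of pattern storage in coupled phase oscillators: with Hebbian couplings for a single binary pattern, the phases lock into two clusters (0 and pi) that encode the pattern, so the pattern state is an attractor. This single-pattern Kuramoto sketch is not the authors' architecture, which is precisely designed to avoid the scaling and input problems of such simple schemes; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 40
xi = rng.choice([-1, 1], size=N)              # one stored binary pattern

K = np.outer(xi, xi) / N                      # Hebbian phase couplings

# Start near the pattern state (phase 0 for +1, pi for -1) plus noise
theta = np.where(xi == 1, 0.0, np.pi) + rng.normal(0.0, 0.4, size=N)

dt = 0.05
for _ in range(2000):
    # d(theta_i)/dt = sum_j K_ij * sin(theta_j - theta_i)
    theta += dt * (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)

# Read out the pattern from relative phases (the global phase is arbitrary)
readout = np.sign(np.cos(theta - theta[0]))
m = abs(readout @ xi) / N                     # overlap, up to a global sign
```

The noisy initial condition relaxes into the two-cluster attractor, and the readout recovers the stored pattern up to a global sign, illustrating why recognition success can be tied to a lower bound on the basins of attraction.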
Memory and pattern storage in neural networks with activity dependent synapses
NASA Astrophysics Data System (ADS)
Mejias, J. F.; Torres, J. J.
2009-01-01
We present recently obtained results on the influence of the interplay between several activity-dependent synaptic mechanisms, such as short-term depression and facilitation, on the maximum memory storage capacity in an attractor neural network [1]. In contrast with the case of synaptic depression, which drastically reduces the capacity of the network to store and retrieve activity patterns [2], synaptic facilitation is able to enhance the memory capacity in different situations. In particular, we find that a convenient balance between depression and facilitation can enhance the memory capacity, reaching maximal values similar to those obtained with static synapses, that is, without activity-dependent processes. We also argue, employing simple arguments, that this level of balance is compatible with experimental data recorded from some cortical areas, where depression and facilitation may play an important role for both memory-oriented tasks and information processing. We conclude that depressing synapses with a certain level of facilitation make it possible to recover the good retrieval properties of networks with static synapses while maintaining the nonlinear properties of dynamic synapses, which are convenient for information processing and coding.
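The depression/facilitation interplay discussed above is commonly modeled with Tsodyks-Markram style synaptic dynamics, in which a resource variable x depletes with use (depression) and a utilization variable u transiently grows with activity (facilitation). The parameter values below are illustrative, and the capacity analysis of the paper is not reproduced; this only shows the two regimes.

```python
import math

def stp_amplitudes(spike_times, U, tau_d, tau_f):
    """Tsodyks-Markram style synapse: x tracks available resources
    (depression), u tracks release probability (facilitation).
    Returns the synaptic efficacy u*x at each presynaptic spike."""
    x, u, t_prev = 1.0, U, None
    amps = []
    for t in spike_times:
        if t_prev is not None:
            dt = t - t_prev
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_d)   # resources recover
            u = U + (u - U) * math.exp(-dt / tau_f)       # facilitation decays
        u = u + U * (1.0 - u)        # spike-triggered facilitation
        amps.append(u * x)
        x = x * (1.0 - u)            # spike-triggered resource consumption
        t_prev = t
    return amps

train = range(0, 500, 50)            # regular 20 Hz train (times in ms)
facil = stp_amplitudes(train, U=0.1, tau_d=200.0, tau_f=600.0)
depr = stp_amplitudes(train, U=0.5, tau_d=500.0, tau_f=50.0)
```

With small baseline release and slow facilitation the response grows across the train, while with large release and fast facilitation decay it depresses; the balance between these regimes is what the abstract argues controls memory capacity.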
NASA Astrophysics Data System (ADS)
Wang, Jin
Cognitive behaviors are determined by underlying neural networks. Many brain functions, such as learning and memory, can be described by attractor dynamics. We developed a theoretical framework for global dynamics by quantifying the landscape associated with the steady-state probability distributions and the steady-state curl flux, which measures the degree of non-equilibrium through detailed-balance breaking. We found that the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulation are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degree of asymmetry of the connections in neural networks and is the origin of neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring, while the flux is responsible for coherent oscillations on the ring. We suggest that the flux may provide the driving force for associations among memories. Both landscape and flux determine the kinetic paths and speed of decision making. The kinetics and global stability of decision making are explored by quantifying the landscape topography through barrier heights and the mean first-passage time. The theoretical predictions agree with experimental observations: more errors occur under time pressure. We quantitatively explored two mechanisms of the speed-accuracy tradeoff under speed emphasis and further uncovered the tradeoffs among speed, accuracy, and energy cost. Our results show an optimal balance among speed, accuracy, and energy cost in decision making. We uncovered possible mechanisms of changes of mind and how mind changes improve performance in decision processes. Our landscape approach can help facilitate an understanding of the underlying physical mechanisms of cognitive processes and identify the key elements in neural networks.
Persistently active neurons in human medial frontal and medial temporal lobe support working memory
Kamiński, J; Sullivan, S; Chung, JM; Ross, IB; Mamelak, AN; Rutishauser, U
2017-01-01
Persistent neural activity is a putative mechanism for the maintenance of working memories. Persistent activity relies on the activity of a distributed network of areas, but the differential contribution of each area remains unclear. We recorded single neurons in the human medial frontal cortex and the medial temporal lobe while subjects held up to three items in memory. We found persistently active neurons in both areas. Persistent activity of hippocampal and amygdala neurons was stimulus-specific, formed stable attractors, and was predictive of memory content. Medial frontal cortex persistent activity, on the other hand, was modulated by memory load and task set but was not stimulus-specific. Trial-by-trial variability in persistent activity in both areas was related to memory strength, because it predicted the speed and accuracy with which stimuli were remembered. This work reveals, in humans, direct evidence for a distributed network of persistently active neurons supporting working memory maintenance. PMID:28218914
Reactivation in Working Memory: An Attractor Network Model of Free Recall
Lansner, Anders; Marklund, Petter; Sikström, Sverker; Nilsson, Lars-Göran
2013-01-01
The dynamic nature of human working memory, the general-purpose system for processing continuous input, while keeping no longer externally available information active in the background, is well captured in immediate free recall of supraspan word-lists. Free recall tasks produce several benchmark memory phenomena, like the U-shaped serial position curve, reflecting enhanced memory for early and late list items. To account for empirical data, including primacy and recency as well as contiguity effects, we propose here a neurobiologically based neural network model that unifies short- and long-term forms of memory and challenges both the standard view of working memory as persistent activity and dual-store accounts of free recall. Rapidly expressed and volatile synaptic plasticity, modulated intrinsic excitability, and spike-frequency adaptation are suggested as key cellular mechanisms underlying working memory encoding, reactivation and recall. Recent findings on the synaptic and molecular mechanisms behind early LTP and on spiking activity during delayed-match-to-sample tasks support this view. PMID:24023690
General method to find the attractors of discrete dynamic models of biological systems.
Gan, Xiao; Albert, Réka
2018-04-01
Analyzing the long-term behaviors (attractors) of dynamic models of biological networks can provide valuable insight. We propose a general method that can find the attractors of multilevel discrete dynamical systems by extending a method that finds the attractors of a Boolean network model. The previous method is based on finding stable motifs, subgraphs whose nodes' states can stabilize on their own. We extend the framework from binary states to any finite discrete levels by creating a virtual node for each level of a multilevel node, and describing each virtual node with a quasi-Boolean function. We then create an expanded representation of the multilevel network, find multilevel stable motifs and oscillating motifs, and identify attractors by successive network reduction. In this way, we find both fixed point attractors and complex attractors. We implemented an algorithm, which we test and validate on representative synthetic networks and on published multilevel models of biological networks. Despite its primary motivation to analyze biological networks, our motif-based method is general and can be applied to any finite discrete dynamical system.
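Both kinds of attractor the method targets, fixed points and complex (cyclic) attractors, can be exhibited on a tiny multilevel example with a brute-force baseline. The two-node system and its update rules below are hypothetical, and exhaustive simulation is only a correctness baseline, not the scalable stable-motif method the paper proposes.

```python
from itertools import product

# Hypothetical 2-node multilevel system: node a has levels {0,1,2}, node b {0,1}
def step(state):
    a, b = state
    a_next = min(2, a + 1) if b == 1 else max(0, a - 1)
    b_next = 1 if a == 2 else 0
    return (a_next, b_next)

def attractors(step_fn, levels):
    """Exhaustive baseline: follow every trajectory until it enters a cycle."""
    found = set()
    for start in product(*(range(n) for n in levels)):
        index, traj = {}, []
        s = start
        while s not in index:
            index[s] = len(traj)
            traj.append(s)
            s = step_fn(s)
        found.add(frozenset(traj[index[s]:]))   # the cycle states
    return found

atts = attractors(step, (3, 2))
```

This system has two fixed-point attractors and one complex attractor, the 2-cycle {(1,1), (2,0)}; the motif-based method finds the same objects without enumerating the state space.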
Optimal region of latching activity in an adaptive Potts model for networks of neurons
NASA Astrophysics Data System (ADS)
Abdollah-nia, Mohammad-Farshad; Saeedghalati, Mohammadkarim; Abbassian, Abdolhossein
2012-02-01
In statistical mechanics, the Potts model is a model of interacting spins with more than two discrete states. Neural networks which exhibit features of learning and associative memory can also be modeled by a system of Potts spins. A spontaneous behavior of hopping from one discrete attractor state to another (referred to as latching) has been proposed to be associated with higher cognitive functions. Here we propose a model in which both the stochastic dynamics of Potts models and an adaptive potential function are present. Latching dynamics is observed in a limited region of the noise (temperature) versus adaptation parameter space. We hence suggest noise as a fundamental factor in such alternations, alongside adaptation. From a dynamical-systems point of view, the noise-adaptation alternations may be the underlying mechanism for multi-stability in attractor-based models. An optimality criterion for realistic models is finally inferred.
Unraveling chaotic attractors by complex networks and measurements of stock market complexity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Hongduo; Li, Ying, E-mail: mnsliy@mail.sysu.edu.cn
2014-03-15
We present a novel method for measuring the complexity of a time series by unraveling a chaotic attractor modeled on complex networks. The complexity index R, which can potentially be exploited for prediction, has a similar meaning to the Kolmogorov complexity (calculated from the Lempel–Ziv complexity) and is an appropriate measure of a series' complexity. The proposed method is used to investigate the complexity of the world's major capital markets. None of these markets are completely random, and they have different degrees of complexity, both over the entire length of their time series and at finer levels of detail. However, developing markets differ significantly from mature markets. Specifically, the complexity of mature stock markets is stronger and more stable over time, whereas developing markets exhibit relatively low and unstable complexity over certain time periods, implying a stronger long-term price memory process.
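The Lempel–Ziv complexity mentioned above can be illustrated with a simple greedy phrase-counting variant (one common formulation, not necessarily the exact estimator behind the paper's R index): regular sequences parse into few phrases, irregular ones into many.

```python
def lz_phrase_count(s):
    """Greedy Lempel-Ziv parse: split s into the shortest phrases not seen
    before and return how many phrases are needed."""
    phrases = set()
    i, k = 0, 1
    while i + k <= len(s):
        piece = s[i:i + k]
        if piece in phrases:
            k += 1                 # extend until the piece is new
        else:
            phrases.add(piece)
            i += k
            k = 1
    return len(phrases)

low = lz_phrase_count("0" * 16)             # highly regular sequence
high = lz_phrase_count("0110100110010110")  # Thue-Morse prefix, more complex
```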
Ghosts in the Machine II: Neural Correlates of Memory Interference from the Previous Trial.
Papadimitriou, Charalampos; White, Robert L; Snyder, Lawrence H
2017-04-01
Previous memoranda interfere with working memory. For example, spatial memories are biased toward locations memorized on the previous trial. We predicted, based on attractor network models of memory, that activity in the frontal eye fields (FEFs) encoding a previous target location can persist into the subsequent trial and that this ghost will then bias the readout of the current target. Contrary to this prediction, we find that FEF memory representations appear biased away from (not toward) the previous target location. The behavioral and neural data can be reconciled by a model in which receptive fields of memory neurons converge toward remembered locations, much as receptive fields converge toward attended locations. Convergence increases the resources available to encode the relevant memoranda and decreases overall error in the network, but the residual convergence from the previous trial can give rise to an attractive behavioral bias on the next trial.
Information flow in layered networks of non-monotonic units
NASA Astrophysics Data System (ADS)
Schittler Neves, Fabio; Martim Schubert, Benno; Erichsen, Rubem, Jr.
2015-07-01
Layered neural networks are feedforward structures that yield robust parallel and distributed pattern recognition. Even though much attention has been paid to pattern retrieval properties in such systems, many aspects of their dynamics are not yet well characterized or understood. In this work we study, at different temperatures, the memory activity and information flows through layered networks in which each element implements the simplest odd non-monotonic binary function. Our results show that, considering a standard Hebbian learning approach, the network information content has its maximum always at the monotonic limit, even though the maximum memory capacity can be found at non-monotonic values for small enough temperatures. Furthermore, we show that such systems exhibit rich macroscopic dynamics, including not only fixed point solutions of its iterative map, but also cyclic and chaotic attractors that also carry information.
Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks.
Vlachas, Pantelis R; Byeon, Wonmin; Wan, Zhong Y; Sapsis, Themistoklis P; Koumoutsakos, Petros
2018-05-01
We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.
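The Lorenz 96 system used as a benchmark above can be integrated directly to produce training data for such a forecaster; a minimal sketch (standard RK4 with forcing F = 8; the step sizes and initial perturbation are my own choices):

```python
import numpy as np

def lorenz96_rhs(x, F=8.0):
    """Lorenz 96 tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def integrate(x0, dt=0.01, steps=1000, F=8.0):
    """Fourth-order Runge-Kutta integration, returning the full trajectory."""
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, x.size))
    for t in range(steps):
        k1 = lorenz96_rhs(x, F)
        k2 = lorenz96_rhs(x + 0.5 * dt * k1, F)
        k3 = lorenz96_rhs(x + 0.5 * dt * k2, F)
        k4 = lorenz96_rhs(x + dt * k3, F)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[t] = x
    return traj

x0 = 8.0 * np.ones(40)
x0[0] += 0.01          # small perturbation off the unstable fixed point
data = integrate(x0)   # chaos develops from the perturbation
```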
Noise in Attractor Networks in the Brain Produced by Graded Firing Rate Representations
Webb, Tristan J.; Rolls, Edmund T.; Deco, Gustavo; Feng, Jianfeng
2011-01-01
Representations in the cortex are often distributed with graded firing rates in the neuronal populations. The firing rate probability distribution of each neuron to a set of stimuli is often exponential or gamma. In processes in the brain, such as decision-making, that are influenced by the noise produced by the close to random spike timings of each neuron for a given mean rate, the noise with this graded type of representation may be larger than with the binary firing rate distribution that is usually investigated. In integrate-and-fire simulations of an attractor decision-making network, we show that the noise is indeed greater for a given sparseness of the representation for graded, exponential, than for binary firing rate distributions. The greater noise was measured by faster escaping times from the spontaneous firing rate state when the decision cues are applied, and this corresponds to faster decision or reaction times. The greater noise was also evident as less stability of the spontaneous firing state before the decision cues are applied. The implication is that spiking-related noise will continue to be a factor that influences processes such as decision-making, signal detection, short-term memory, and memory recall even with the quite large networks found in the cerebral cortex. In these networks there are several thousand recurrent collateral synapses onto each neuron. The greater noise with graded firing rate distributions has the advantage that it can increase the speed of operation of cortical circuitry. PMID:21931607
Stochastic Dynamics Underlying Cognitive Stability and Flexibility
Ueltzhöffer, Kai; Armbruster-Genç, Diana J. N.; Fiebach, Christian J.
2015-01-01
Cognitive stability and flexibility are core functions in the successful pursuit of behavioral goals. While there is evidence for a common frontoparietal network underlying both functions and for a key role of dopamine in the modulation of flexible versus stable behavior, the exact neurocomputational mechanisms underlying those executive functions and their adaptation to environmental demands are still unclear. In this work we study the neurocomputational mechanisms underlying cue based task switching (flexibility) and distractor inhibition (stability) in a paradigm specifically designed to probe both functions. We develop a physiologically plausible, explicit model of neural networks that maintain the currently active task rule in working memory and implement the decision process. We simplify the four-choice decision network to a nonlinear drift-diffusion process that we canonically derive from a generic winner-take-all network model. By fitting our model to the behavioral data of individual subjects, we can reproduce their full behavior in terms of decisions and reaction time distributions in baseline as well as distractor inhibition and switch conditions. Furthermore, we predict the individual hemodynamic response timecourse of the rule-representing network and localize it to a frontoparietal network including the inferior frontal junction area and the intraparietal sulcus, using functional magnetic resonance imaging. This refines the understanding of task-switch-related frontoparietal brain activity as reflecting attractor-like working memory representations of task rules. Finally, we estimate the subject-specific stability of the rule-representing attractor states in terms of the minimal action associated with a transition between different rule states in the phase-space of the fitted models. 
This stability measure correlates with switching-specific thalamocorticostriatal activation, i.e., with a system associated with flexible working memory updating and dopaminergic modulation of cognitive flexibility. These results show that stochastic dynamical systems can implement the basic computations underlying cognitive stability and flexibility and explain neurobiological bases of individual differences. PMID:26068119
Changes of mind in an attractor network of decision-making.
Albantakis, Larissa; Deco, Gustavo
2011-06-01
Attractor networks successfully account for psychophysical and neurophysiological data in various decision-making tasks. Especially their ability to model persistent activity, a property of many neurons involved in decision-making, distinguishes them from other approaches. Stable decision attractors are, however, counterintuitive to changes of mind. Here we demonstrate that a biophysically-realistic attractor network with spiking neurons, in its itinerant transients towards the choice attractors, can replicate changes of mind observed recently during a two-alternative random-dot motion (RDM) task. Based on the assumption that the brain continues to evaluate available evidence after the initiation of a decision, the network predicts neural activity during changes of mind and accurately simulates reaction times, performance and percentage of changes dependent on difficulty. Moreover, the model suggests a low decision threshold and high incoming activity that drives the brain region involved in the decision-making process into a dynamical regime close to a bifurcation, which up to now lacked evidence for physiological relevance. Thereby, we further affirmed the general conformance of attractor networks with higher level neural processes and offer experimental predictions to distinguish nonlinear attractor from linear diffusion models.
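The itinerant-transient picture can be caricatured by a one-dimensional double-well model (my own parameters, not the paper's spiking network): a decision variable drifts toward one of two attractors, and with noise a trajectory can cross between basins before settling, i.e. change its mind.

```python
import numpy as np

def decision_trajectory(mu, sigma, dt=0.01, steps=2000, seed=0):
    """Euler-Maruyama for dx = (x - x^3 + mu) dt + sigma dW, x(0) = 0.
    The attractors sit near x = +1 and x = -1; mu biases the choice."""
    rng = np.random.default_rng(seed)
    x, xs = 0.0, []
    for _ in range(steps):
        x += (x - x**3 + mu) * dt + sigma * np.sqrt(dt) * rng.normal()
        xs.append(x)
    return np.array(xs)

# Noise-free case: the trajectory commits to the biased (positive) well
xs = decision_trajectory(mu=0.1, sigma=0.0)
```

With sigma > 0, sign changes of the trajectory before it reaches a well play the role of changes of mind.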
Effect of synapse dilution on the memory retrieval in structured attractor neural networks
NASA Astrophysics Data System (ADS)
Brunel, N.
1993-08-01
We investigate a simple model of structured attractor neural network (ANN). In this network a module codes for the category of the stored information, while another group of neurons codes for the remaining information. The probability distribution of stabilities of the patterns and the prototypes of the categories are calculated, for two different synaptic structures. The stability of the prototypes is shown to increase when the fraction of neurons coding for the category goes down. Then the effect of synapse destruction on the retrieval is studied in two opposite situations: first analytically in sparsely connected networks, then numerically in completely connected ones. In both cases the behaviour of the structured network and that of the usual homogeneous networks are compared. When lesions increase, two transitions are shown to appear in the behaviour of the structured network when one of the patterns is presented to the network. After the first transition the network recognizes the category of the pattern but not the individual pattern. After the second transition the network recognizes nothing. These effects are similar to syndromes caused by lesions in the central visual system, namely prosopagnosia and agnosia. In both types of networks (structured or homogeneous) the stability of the prototype is greater than the stability of individual patterns, however the first transition, for completely connected networks, occurs only when the network is structured.
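Synapse destruction of the kind studied above is easy to reproduce in a toy homogeneous Hopfield network (a minimal sketch with my own sizes, not the structured two-module model of the paper): retrieval survives even when a majority of synapses are randomly zeroed, as long as the load stays low.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 5
patterns = rng.choice([-1, 1], size=(P, N))
W = patterns.T @ patterns / N            # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

def retrieve(W, cue, sweeps=5):
    """Deterministic sequential sign updates until (approximately) settled."""
    s = cue.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

def overlap(a, b):
    return abs(a @ b) / len(a)

m_full = overlap(retrieve(W, patterns[0]), patterns[0])
keep = rng.random(W.shape) > 0.6         # destroy ~60% of synapses at random
m_diluted = overlap(retrieve(W * keep, patterns[0]), patterns[0])
```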
Local community detection as pattern restoration by attractor dynamics of recurrent neural networks.
Okamoto, Hiroshi
2016-08-01
Densely connected parts in networks are referred to as "communities". Community structure is a hallmark of a variety of real-world networks. Individual communities in networks form functional modules of complex systems described by networks. Therefore, finding communities in networks is essential to approaching and understanding complex systems described by networks. In fact, network science has made a great deal of effort to develop effective and efficient methods for detecting communities in networks. Here we put forward a type of community detection, which has been little examined so far but will be practically useful. Suppose that we are given a set of source nodes that includes some (but not all) of "true" members of a particular community; suppose also that the set includes some nodes that are not the members of this community (i.e., "false" members of the community). We propose to detect the community from this "imperfect" and "inaccurate" set of source nodes using attractor dynamics of recurrent neural networks. Community detection by the proposed method can be viewed as restoration of the original pattern from a deteriorated pattern, which is analogous to cue-triggered recall of short-term memory in the brain. We demonstrate the effectiveness of the proposed method using synthetic networks and real social networks for which correct communities are known.
Balaguer-Ballester, Emili; Seamans, Jeremy K.; Phillips, Anthony G.; Durstewitz, Daniel
2015-01-01
Modulation of neural activity by monoamine neurotransmitters is thought to play an essential role in shaping computational neurodynamics in the neocortex, especially in prefrontal regions. Computational theories propose that monoamines may exert bidirectional (concentration-dependent) effects on cognition by altering prefrontal cortical attractor dynamics according to an inverted U-shaped function. To date, this hypothesis has not been addressed directly, in part because of the absence of appropriate statistical methods required to assess attractor-like behavior in vivo. The present study used a combination of advanced multivariate statistical, time series analysis, and machine learning methods to assess dynamic changes in network activity from multiple single-unit recordings from the medial prefrontal cortex (mPFC) of rats while the animals performed a foraging task guided by working memory after pretreatment with different doses of d-amphetamine (AMPH), which increases monoamine efflux in the mPFC. A dose-dependent, bidirectional effect of AMPH on neural dynamics in the mPFC was observed. Specifically, a 1.0 mg/kg dose of AMPH accentuated separation between task-epoch-specific population states and convergence toward these states. In contrast, a 3.3 mg/kg dose diminished separation and convergence toward task-epoch-specific population states, which was paralleled by deficits in cognitive performance. These results support the computationally derived hypothesis that moderate increases in monoamine efflux would enhance attractor stability, whereas high frontal monoamine levels would severely diminish it. Furthermore, they are consistent with the proposed inverted U-shaped and concentration-dependent modulation of cortical efficiency by monoamines. PMID:26180194
Attractor controllability of Boolean networks by flipping a subset of their nodes
NASA Astrophysics Data System (ADS)
Rafimanzelat, Mohammad Reza; Bahrami, Fariba
2018-04-01
The controllability analysis of Boolean networks (BNs), as models of biomolecular regulatory networks, has drawn the attention of researchers in recent years. In this paper, we aim at governing the steady-state behavior of BNs using an intervention method that can easily be applied to most real systems which can be modeled as BNs, particularly biomolecular regulatory networks. To this end, we introduce the concept of attractor controllability of a BN by flipping a subset of its nodes: the possibility of making a BN converge from any of its attractors to any other one by one-time flipping of the members of a subset of BN nodes. Our approach is based on the algebraic state-space representation of BNs using the semi-tensor product of matrices. After introducing some new matrix tools, we use them to derive necessary and sufficient conditions for the attractor controllability of BNs. A forward search algorithm is then suggested to identify the minimal perturbation set for attractor controllability of a BN. Next, a lower bound is derived for the cardinality of this set. Two new indices are also proposed for quantifying the attractor controllability of a BN and the influence of each network variable on it, and the relationship between the two indices is revealed. Finally, we confirm the efficiency of the proposed approach by applying it to BN models of some real biomolecular networks.
Holding multiple items in short term memory: a neural mechanism.
Rolls, Edmund T; Dempere-Marco, Laura; Deco, Gustavo
2013-01-01
Human short term memory has a capacity of several items maintained simultaneously. We show how the number of short term memory representations that an attractor network modeling a cortical local network can simultaneously maintain active is increased by using synaptic facilitation of the type found in the prefrontal cortex. We have been able to maintain 9 short term memories active simultaneously in integrate-and-fire simulations where the proportion of neurons in each population, the sparseness, is 0.1, and have confirmed the stability of such a system with mean field analyses. Without synaptic facilitation the system can maintain many fewer memories active in the same network. The system operates because of the effectively increased synaptic strengths formed by the synaptic facilitation just for those pools to which the cue is applied, and then maintenance of this synaptic facilitation in just those pools when the cue is removed by the continuing neuronal firing in those pools. The findings have implications for understanding how several items can be maintained simultaneously in short term memory, how this may be relevant to the implementation of language in the brain, and suggest new approaches to understanding and treating the decline in short term memory that can occur with normal aging.
Anishchenko, Anastasia; Treves, Alessandro
2006-10-01
The metric structure of synaptic connections is obviously an important factor in shaping the properties of neural networks, in particular the capacity to retrieve memories with which autoassociative nets operating via attractor dynamics are endowed. Qualitatively, some real networks in the brain could be characterized as 'small worlds', in the sense that the structure of their connections is intermediate between the extremes of an orderly geometric arrangement and of a geometry-independent random mesh. Small worlds can be defined more precisely in terms of their mean path length and clustering coefficient; but is such a precise description useful for a better understanding of how the type of connectivity affects memory retrieval? We have simulated an autoassociative memory network of integrate-and-fire units, positioned on a ring, with the network connectivity varied parametrically between ordered and random. We find that the network retrieves previously stored memory patterns when the connectivity is close to random, and displays the characteristic behavior of ordered nets (localized 'bumps' of activity) when the connectivity is close to ordered. Recent analytical work shows that these two behaviors can coexist in a network of simple threshold-linear units, leading to localized retrieval states. With our integrate-and-fire units, however, we find that they tend to be mutually exclusive behaviors. Moreover, the transition between the two occurs for values of the connectivity parameter which are not simply related to the notion of small worlds.
Hopfield's Model of Patterns Recognition and Laws of Artistic Perception
NASA Astrophysics Data System (ADS)
Yevin, Igor; Koblyakov, Alexander
The pattern recognition model, or attractor network model of associative memory, proposed by J. Hopfield in 1982 is the best-known model in theoretical neuroscience. This paper aims to show that such well-known laws of artistic perception as the Wundt curve, the perception of visual ambiguity in art, and the perception of musical tonalities are special cases of Hopfield's pattern recognition model.
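For reference, Hopfield's 1982 associative memory can be sketched in a few lines: store patterns with a Hebbian rule, then let the dynamics relax a corrupted cue to the nearest stored attractor (the sizes and corruption level here are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 120
patterns = rng.choice([-1, 1], size=(3, N))   # three stored "memories"
W = patterns.T @ patterns / N                 # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

cue = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)  # corrupt 20 of 120 bits
cue[flip] *= -1

s = cue.copy()
for _ in range(5):                            # sequential sign updates
    for i in range(N):
        s[i] = 1 if W[i] @ s >= 0 else -1

recall = (s @ patterns[0]) / N                # overlap with the stored pattern
```

An overlap near 1 means the corrupted cue has been pulled back into the stored pattern's basin of attraction.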
Modeling and controlling the two-phase dynamics of the p53 network: a Boolean network approach
NASA Astrophysics Data System (ADS)
Lin, Guo-Qiang; Ao, Bin; Chen, Jia-Wei; Wang, Wen-Xu; Di, Zeng-Ru
2014-12-01
Although much empirical evidence has demonstrated that p53 plays a key role in tumor suppression, the dynamics and function of the regulatory network centered on p53 have not yet been fully understood. Here, we develop a Boolean network model to reproduce the two-phase dynamics of the p53 network in response to DNA damage. In particular, we map the fates of cells into two types of Boolean attractors, and we find that the apoptosis attractor does not exist for minor DNA damage, reflecting that the cell is reparable. As the amount of DNA damage increases, the basin of the repair attractor shrinks, accompanied by the rising of the apoptosis attractor and the expansion of its basin, indicating that the cell becomes more irreparable with more DNA damage. For severe DNA damage, the repair attractor vanishes, and the apoptosis attractor dominates the state space, accounting for the exclusive fate of death. Based on the Boolean network model, we explore the significance of links, in terms of the sensitivity of the two-phase dynamics, to perturbing the weights of links and removing them. We find that the links are either critical or ordinary, rather than redundant. This implies that the p53 network is irreducible, but tolerant of small mutations at some ordinary links, and this can be interpreted with evolutionary theory. We further devised practical control schemes for steering the system into the apoptosis attractor in the presence of DNA damage by pinning the state of a single node or perturbing the weight of a single link. Our approach offers insights into understanding and controlling the p53 network, which is of paramount importance for medical treatment and genetic engineering.
Accurate path integration in continuous attractor network models of grid cells.
Burak, Yoram; Fiete, Ila R
2009-02-01
Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.
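A one-dimensional caricature of a continuous attractor is a ring network whose rectified cosine interactions sustain a localized activity bump with no external input (parameters are mine; the paper's grid-cell model is two-dimensional and velocity-driven):

```python
import numpy as np

N = 64
theta = 2 * np.pi * np.arange(N) / N
J0, J1 = 0.5, 2.0   # uniform inhibition and cosine-shaped local excitation
W = J1 * np.cos(theta[:, None] - theta[None, :]) - J0

r = np.exp(np.cos(theta - np.pi))   # cue: broad bump centered at theta = pi
r /= r.sum()
for _ in range(50):                 # no input: the bump must sustain itself
    r = np.maximum(W @ r, 0.0)      # rectified recurrent drive
    r /= r.sum()                    # total activity held fixed by normalization
```

The bump persists at the cued location; in a full continuous attractor model, asymmetric weight components driven by velocity inputs would translate it around the ring, implementing path integration.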
Corneille, Olivier; Hugenberg, Kurt; Potter, Timothy
2007-09-01
A new model of mental representation is applied to social cognition: the attractor field model. Using the model, the authors predicted and found a perceptual advantage but a memory disadvantage for faces displaying evaluatively congruent expressions. In Experiment 1, participants completed a same/different perceptual discrimination task involving morphed pairs of angry-to-happy Black and White faces. Pairs of faces displaying evaluatively incongruent expressions (i.e., happy Black, angry White) were more likely to be labeled as similar and were less likely to be accurately discriminated from one another than faces displaying evaluatively congruent expressions (i.e., angry Black, happy White). Experiment 2 replicated this finding and showed that objective discriminability of stimuli moderated the impact of attractor field effects on perceptual discrimination accuracy. In Experiment 3, participants completed a recognition task for angry and happy Black and White faces. Consistent with the attractor field model, memory accuracy was better for faces displaying evaluatively incongruent expressions. Theoretical and practical implications of these findings are discussed.
Models of Innate Neural Attractors and Their Applications for Neural Information Processing
Solovyeva, Ksenia P.; Karandashev, Iakov M.; Zhavoronkov, Alex; Dunin-Barkowski, Witali L.
2016-01-01
In this work we reveal and explore a new class of attractor neural networks based on inborn connections provided by model molecular markers: the molecular marker based attractor neural networks (MMBANN). Each set of markers has a metric, which is used to make connections between neurons containing the markers. We have explored conditions for the existence of attractor states, critical relations between their parameters, and the spectrum of single neuron models which can implement the MMBANN. In addition, we describe functional models (perceptron and SOM) which obtain significant advantages over the traditional implementations of these models when using MMBANN. In particular, a perceptron based on MMBANN gains orders of magnitude in specificity as measured by error probabilities, an MMBANN SOM acquires real neurophysiological meaning, and the number of possible grandma cells increases 1000-fold with MMBANN. MMBANN have sets of attractor states which can serve as finite grids for the representation of variables in computations. These grids may have dimensions d = 0, 1, 2, …. We work with static and dynamic attractor neural networks of dimensions d = 0 and 1. We also argue that the number of dimensions which can be represented by attractors of the activities of neural networks with N = 10^4 elements does not exceed 8. PMID:26778977
Sun, Mengyang; Cheng, Xianrui; Socolar, Joshua E S
2013-06-01
A common approach to the modeling of gene regulatory networks is to represent activating or repressing interactions using ordinary differential equations for target gene concentrations that include Hill function dependences on regulator gene concentrations. An alternative formulation represents the same interactions using Boolean logic with time delays associated with each network link. We consider the attractors that emerge from the two types of models in the case of a simple but nontrivial network: a figure-8 network with one positive and one negative feedback loop. We show that the different modeling approaches give rise to the same qualitative set of attractors with the exception of a possible fixed point in the ordinary differential equation model in which concentrations sit at intermediate values. The properties of the attractors are most easily understood from the Boolean perspective, suggesting that time-delay Boolean modeling is a useful tool for understanding the logic of regulatory networks.
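The Hill-function ODE formulation mentioned above looks as follows for a minimal two-gene negative feedback loop (the parameters and the reduction to two genes are my own simplification of the paper's figure-8 network):

```python
import numpy as np

def hill_act(u, K=1.0, n=2):
    """Activating Hill term: rises from 0 to 1 as u grows past K."""
    return u**n / (K**n + u**n)

def hill_rep(u, K=1.0, n=2):
    """Repressing Hill term: falls from 1 to 0 as u grows past K."""
    return K**n / (K**n + u**n)

# Negative feedback loop: x activates y, y represses x.
beta, dt = 2.0, 0.01
x, y = 0.1, 0.1
for _ in range(5000):              # forward Euler to t = 50
    dx = beta * hill_rep(y) - x
    dy = beta * hill_act(x) - y
    x, y = x + dt * dx, y + dt * dy
```

With these parameters the unique fixed point sits at (x, y) = (1, 1) (a stable spiral: the Jacobian there has eigenvalues -1 ± i), so the trajectory converges there with damped oscillations rather than cycling, consistent with a two-variable negative loop having no stable oscillation.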
Robust Working Memory in an Asynchronously Spiking Neural Network Realized with Neuromorphic VLSI
Giulioni, Massimiliano; Camilleri, Patrick; Mattia, Maurizio; Dante, Vittorio; Braun, Jochen; Del Giudice, Paolo
2011-01-01
We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of leaky integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of “high” and “low”-firing activity. Depending on the overall excitability, transitions to the “high” state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the “high” state retains a “working memory” of a stimulus until well after its release. In the latter case, “high” states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated “corrupted” “high” states comprising neurons of both excitatory populations. Within a “basin of attraction,” the network dynamics “corrects” such states and re-establishes the prototypical “high” state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons. PMID:22347151
Mind-to-mind heteroclinic coordination: Model of sequential episodic memory initiation.
Afraimovich, V S; Zaks, M A; Rabinovich, M I
2018-05-01
Retrieval of episodic memory is a dynamical process in the large scale brain networks. In social groups, the neural patterns, associated with specific events directly experienced by single members, are encoded, recalled, and shared by all participants. Here, we construct and study the dynamical model for the formation and maintaining of episodic memory in small ensembles of interacting minds. We prove that the unconventional dynamical attractor of this process-the nonsmooth heteroclinic torus-is structurally stable within the Lotka-Volterra-like sets of equations. Dynamics on this torus combines the absence of chaos with asymptotic instability of every separate trajectory; its adequate quantitative characteristics are length-related Lyapunov exponents. Variation of the coupling strength between the participants results in different types of sequential switching between metastable states; we interpret them as stages in formation and modification of the episodic memory.
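Lotka-Volterra-like sequential switching between metastable states can be illustrated with a classic May-Leonard competition sketch (parameters are my own; the paper's heteroclinic torus involves coupled ensembles of such systems): with an asymmetric competition matrix, no species wins for good, and dominance passes cyclically from one to the next.

```python
import numpy as np

# Asymmetric May-Leonard competition matrix (alpha > 1 > beta, alpha+beta > 2)
alpha, beta = 1.5, 0.8
rho = np.array([[1.0, alpha, beta],
                [beta, 1.0, alpha],
                [alpha, beta, 1.0]])

x = np.array([0.6, 0.2, 0.2])
dt, steps = 0.01, 20000
dominant = []
for _ in range(steps):
    x = x + dt * x * (1.0 - rho @ x)   # generalized Lotka-Volterra step
    dominant.append(int(np.argmax(x)))

# Each change of the dominant species is one switch between metastable states
switches = int(np.count_nonzero(np.diff(dominant)))
```

Near the heteroclinic cycle the dwell times near each saddle grow with every visit, the "asymptotic instability of every separate trajectory" referred to above.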
A snapshot attractor view of the advection of inertial particles in the presence of history force
NASA Astrophysics Data System (ADS)
Guseva, Ksenia; Daitche, Anton; Tél, Tamás
2017-06-01
We analyse the effect of the Basset history force on the sedimentation or rising of inertial particles in a two-dimensional convection flow. We find that the concept of snapshot attractors is useful to understand the extraordinarily slow convergence due to long-term memory: an ensemble of particles converges exponentially fast towards a snapshot attractor, and this attractor undergoes a slow drift for long times. We demonstrate for the case of a periodic attractor that the drift of the snapshot attractor can be well characterized both in the space of the fluid and in velocity space. For the case of quasiperiodic and chaotic dynamics, we propose the use of the average settling velocity of the ensemble as a distinctive measure to characterize the snapshot attractor and the time-scale separation between the convergence towards the snapshot attractor and its own slow dynamics.
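The ensemble view of a snapshot attractor is easy to illustrate with a toy dissipative system (a one-dimensional periodically forced linear map, far simpler than the advection dynamics studied in the paper): all ensemble members collapse exponentially onto a single time-dependent state, which itself keeps drifting with the forcing.

```python
import math, random

random.seed(1)

a = 0.5                                 # contraction (dissipation) factor
ensemble = [random.uniform(-5, 5) for _ in range(1000)]

spreads = []
for n in range(30):
    forcing = math.sin(0.3 * n)         # time-dependent driving shared by all
    ensemble = [a * x + forcing for x in ensemble]
    spreads.append(max(ensemble) - min(ensemble))

# The ensemble spread decays like a**n, so after a few steps all members
# trace out the same moving point: the snapshot attractor at time n.
```

The two time scales of the abstract appear even here: fast exponential collapse onto the attractor (rate ln a) versus the attractor's own slow drift with the forcing.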
Nonequilibrium landscape theory of neural networks.
Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin
2013-11-05
The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attraction represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found that the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degree of asymmetry of the connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye-movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments.
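For reference, the symmetric Hopfield picture that this theory generalizes can be sketched directly (a toy single-pattern network with Hebbian weights; the nonequilibrium flux part of the abstract has no analogue in this sketch): asynchronous updates never increase the equilibrium energy, so a corrupted pattern rolls down into its memory basin.

```python
import random

random.seed(2)

N = 16
pattern = [1 if random.random() < 0.5 else -1 for _ in range(N)]
# Hebbian (symmetric) weights storing the single pattern; no self-coupling.
W = [[(pattern[i] * pattern[j] if i != j else 0.0) for j in range(N)]
     for i in range(N)]

def energy(s):
    # Hopfield energy E(s) = -1/2 * s^T W s
    return -0.5 * sum(W[i][j] * s[i] * s[j] for i in range(N) for j in range(N))

s = pattern[:]
for i in random.sample(range(N), 3):        # corrupt three bits
    s[i] = -s[i]
e_start = energy(s)

for _ in range(5):                          # asynchronous update sweeps
    for i in range(N):
        h = sum(W[i][j] * s[j] for j in range(N))
        s[i] = 1 if h >= 0 else -1

e_end = energy(s)
recalled = (s == pattern)                   # memory retrieved by energy descent
```

With asymmetric weights no such energy function exists, which is exactly the gap the landscape-flux decomposition in the abstract is designed to fill.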
NASA Astrophysics Data System (ADS)
Han, Yan; Kun, Zhang; Jin, Wang
2016-07-01
Cognitive behaviors are determined by underlying neural networks. Many brain functions, such as learning and memory, have been successfully described by attractor dynamics. For decision making in the brain, a quantitative description of global attractor landscapes has not yet been completely given. Here, we developed a theoretical framework to quantify the landscape associated with the steady state probability distributions and associated steady state curl flux, measuring the degree of non-equilibrium through the degree of detailed balance breaking for decision making. We quantified the decision-making processes with optimal paths from the undecided attractor states to the decided attractor states, which are identified as basins of attraction on the landscape. Both landscape and flux determine the kinetic paths and speed. The kinetics and global stability of decision making are explored by quantifying the landscape topography through the barrier heights and the mean first passage time. Our theoretical predictions are in agreement with experimental observations: more errors occur under time pressure. We quantitatively explored two mechanisms of the speed-accuracy tradeoff with speed emphasis and further uncovered the tradeoffs among speed, accuracy, and energy cost. Our results imply that there is an optimal balance among speed, accuracy, and the energy cost in decision making. We uncovered the possible mechanisms of changes of mind and how mind changes improve performance in decision processes. Our landscape approach can help facilitate an understanding of the underlying physical mechanisms of cognitive processes and identify the key factors in the corresponding neural networks. Project supported by the National Natural Science Foundation of China (Grant Nos. 21190040, 91430217, and 11305176).
An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks
Cabessa, Jérémie; Villa, Alessandro E. P.
2014-01-01
We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and a specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results provide new foundational elements for understanding the complexity of real brain circuits. PMID:24727866
From Cellular Attractor Selection to Adaptive Signal Control for Traffic Networks
Tian, Daxin; Zhou, Jianshan; Sheng, Zhengguo; Wang, Yunpeng; Ma, Jianming
2016-01-01
The management of varying traffic flows essentially depends on signal controls at intersections. However, designing an optimal control that considers the dynamic nature of a traffic network and coordinates all intersections simultaneously in a centralized manner is computationally challenging. Inspired by the stable gene expressions of Escherichia coli in response to environmental changes, we explore the robustness and adaptability of signalized intersections by incorporating a biological mechanism in their control policies; specifically, the evolution of each intersection is driven by the dynamics governing adaptive attractor selection in cells. We employ a mathematical model to capture such biological attractor selection and derive a generic, adaptive and distributed control algorithm which is capable of dynamically adapting signal operations for the entire dynamical traffic network. We show that the proposed scheme based on attractor selection can not only promote the balance of traffic loads on each link of the network but also allows the global network to accommodate dynamical traffic demands. Our work demonstrates the potential of bio-inspired intelligence emerging from cells and provides a deep understanding of adaptive attractor selection-based control formation that is useful to support the designs of adaptive optimization and control in other domains. PMID:26972968
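The cellular attractor-selection mechanism that the control scheme borrows can be sketched with a one-dimensional toy system (invented parameters, not the traffic controller itself): deterministic bistable dynamics are scaled by an "activity" signal, so a poorly performing attractor is destabilized by noise while a well-performing one is locked in.

```python
import random

random.seed(3)

def drift(x):
    return x - x ** 3          # bistable: attractors at x = -1 and x = +1

good = 1.0                     # suppose only the +1 attractor performs well

def activity(x):               # high when the system performs well
    return 1.0 if abs(x - good) < 0.5 else 0.05

x, dt = -1.0, 0.01             # start trapped in the poorly performing attractor
for _ in range(300000):
    a = activity(x)
    # Langevin step: activity scales the deterministic pull, noise is constant.
    x += dt * a * drift(x) + 0.2 * (dt ** 0.5) * random.gauss(0, 1)

# Low activity weakens confinement in the poor attractor, noise drives
# exploration, and high activity locks the state in near x = +1.
```

The same logic scales to many dimensions: each intersection runs such dynamics, with activity fed back from a network-level performance measure.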
A New Role for Attentional Corticopetal Acetylcholine in Cortical Memory Dynamics
NASA Astrophysics Data System (ADS)
Fujii, Hiroshi; Kanamaru, Takashi; Aihara, Kazuyuki; Tsuda, Ichiro
2011-09-01
Although the role of corticopetal acetylcholine (ACh) in higher cognitive functions is increasingly recognized, two questions remain poorly understood: (1) how ACh works in attention, memory dynamics, and cortical state transitions, and (2) why and how loss of ACh is involved in dysfunctions such as visual hallucinations in dementia with Lewy bodies and attentional deficits. From a dynamical systems viewpoint, we hypothesize that transient ACh released under top-down attention serves to temporarily invoke attractor-like memories, while a background level of ACh reverses this process, returning the dynamical nature of the memory structure back to attractor ruins (quasi-attractors). In fact, transient ACh loosens the inhibition of pyramidal neurons (PYRs) by PV+ fast-spiking (FS) interneurons, while a baseline ACh recovers the inhibitory actions of PV+ FS. Attentional ACh thus dynamically modifies the brain's connectivity. The core of this process is the depression of GABAergic inhibitory currents in PYRs due to muscarinic (probably M2 subtype) presynaptic effects on GABAergic synapses of PV+ FS neurons.
Permitted and forbidden sets in symmetric threshold-linear networks.
Hahnloser, Richard H R; Seung, H Sebastian; Slotine, Jean-Jacques
2003-03-01
The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, convergence to attractive fixed points, and multistability, all fundamental aspects of cortical information processing. However, these conditions were only sufficient, and it remained unclear which were the minimal (necessary) conditions for convergence and multistability. We show that symmetric threshold-linear networks converge to a set of attractive fixed points if and only if the network matrix is copositive. Furthermore, the set of attractive fixed points is nonconnected (the network is multiattractive) if and only if the network matrix is not positive semidefinite. There are permitted sets of neurons that can be coactive at a stable steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks. There is a close correspondence between threshold-linear networks and networks defined by the generalized Lotka-Volterra equations.
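A two-neuron sketch makes the permitted/forbidden distinction concrete (a standard winner-take-all example; the coefficients here are chosen for illustration): strong symmetric mutual inhibition makes the interaction matrix indefinite, so the set of attractive fixed points is nonconnected, with {0} and {1} permitted coactive sets and {0, 1} forbidden.

```python
# Threshold-linear dynamics dx/dt = -x + [W x + b]_+ with strong symmetric
# mutual inhibition: a multistable (winner-take-all) network.
W = [[0.0, -2.0], [-2.0, 0.0]]
b = [1.0, 1.0]

def settle(x, dt=0.01, steps=5000):
    for _ in range(steps):
        x = [x[i] + dt * (-x[i]
                          + max(0.0, sum(W[i][j] * x[j] for j in range(2)) + b[i]))
             for i in range(2)]
    return [round(xi, 3) for xi in x]

a1 = settle([0.9, 0.1])   # unit 0 wins: the permitted set {0} is active
a2 = settle([0.1, 0.9])   # unit 1 wins: the permitted set {1} is active
# The symmetric coactive state is an unstable saddle: {0, 1} is forbidden.
```

Weakening the inhibition until the matrix becomes negative semidefinite in the relevant sense merges the two attractors into one, illustrating the paper's if-and-only-if multistability condition.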
Stabilization of perturbed Boolean network attractors through compensatory interactions
2014-01-01
Background Understanding and ameliorating the effects of network damage are of significant interest, due in part to the variety of applications in which network damage is relevant. For example, the effects of genetic mutations can cascade through within-cell signaling and regulatory networks and alter the behavior of cells, possibly leading to a wide variety of diseases. The typical approach to mitigating network perturbations is to consider the compensatory activation or deactivation of system components. Here, we propose a complementary approach wherein interactions are instead modified to alter key regulatory functions and prevent the network damage from triggering a deregulatory cascade. Results We implement this approach in a Boolean dynamic framework, which has been shown to effectively model the behavior of biological regulatory and signaling networks. We show that the method can stabilize any single state (e.g., fixed-point attractors or time-averaged representations of multi-state attractors) to be an attractor of the repaired network. We show that the approach is minimalistic in that few modifications are required to provide stability to a chosen attractor, and specific in that the interventions do not have undesired effects on the attractor. We apply the approach to random Boolean networks, and further show that the method can in some cases successfully repair synchronous limit cycles. We also apply the methodology to case studies from drought-induced signaling in plants and T-LGL leukemia and find that it is successful in both stabilizing desired behavior and in eliminating undesired outcomes. Code is made freely available through the software package BooleanNet. Conclusions The methodology introduced in this report offers a complementary approach to manipulating node expression levels.
A comprehensive approach to evaluating network manipulation should take an "all of the above" perspective; we anticipate that theoretical studies of interaction modification, coupled with empirical advances, will ultimately provide researchers with greater flexibility in influencing system behavior. PMID:24885780
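The kind of interaction repair described above can be illustrated on a three-node toy network (a sketch in the spirit of the method, not the paper's BooleanNet implementation): an exhaustive fixed-point search shows the original negative loop has no steady state, and modifying a single regulatory function makes a chosen state an attractor.

```python
from itertools import product

def step(state, rules):
    # Synchronous update: every node applies its Boolean rule at once.
    return tuple(rule(state) for rule in rules)

def fixed_points(rules):
    return [s for s in product((0, 1), repeat=len(rules)) if step(s, rules) == s]

# Toy loop: A = NOT C, B = A, C = B (a repressilator-like negative loop).
rules = [lambda s: s[2] ^ 1, lambda s: s[0], lambda s: s[1]]
before = fixed_points(rules)                   # [] : no steady state exists

# Repair: rewire A's function to ignore C, stabilizing the state (1, 1, 1).
repaired = [lambda s: 1] + rules[1:]
after = fixed_points(repaired)                 # [(1, 1, 1)]
```

One modified interaction suffices here, matching the paper's point that repairs can be minimalistic.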
Automatic Screening for Perturbations in Boolean Networks.
Schwab, Julian D; Kestler, Hans A
2018-01-01
A common approach to address biological questions in systems biology is to simulate regulatory mechanisms using dynamic models. Among others, Boolean networks can be used to model the dynamics of regulatory processes in biology. Boolean network models allow simulating the qualitative behavior of the modeled processes. A central objective in the simulation of Boolean networks is the computation of their long-term behavior, the so-called attractors. These attractors are of special interest as they can often be linked to biologically relevant behaviors. Changing internal and external conditions can influence the long-term behavior of the Boolean network model. Perturbation of a Boolean network by stripping a component of the system or simulating a surplus of another element can lead to different attractors. Clearly, the number of possible perturbations and combinations of perturbations increases exponentially with the size of the network. Manually screening a set of possible components for combinations that have a desired effect on the long-term behavior can be very time consuming, if not impossible. We developed a method to automatically screen for perturbations that lead to a user-specified change in the network's functioning. This method is implemented in the visual simulation framework ViSiBool, utilizing satisfiability (SAT) solvers for fast exhaustive attractor search.
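A minimal sketch of such a perturbation screen (in the spirit of the described workflow, not ViSiBool's actual code; the network and rules are invented, and only fixed-point attractors are considered for brevity): clamp each node to 0 (knock-out) or 1 (over-expression) in turn and record how the attractor set changes.

```python
from itertools import product

def fixed_points(rules, clamp=None):
    n = len(rules)
    def step(s):
        nxt = [r(s) for r in rules]
        if clamp is not None:
            nxt[clamp[0]] = clamp[1]       # node clamp[0] held at value clamp[1]
        return tuple(nxt)
    return {s for s in product((0, 1), repeat=n) if step(s) == s}

# Toy network: A and B activate each other; C requires both.
rules = [lambda s: s[1], lambda s: s[0], lambda s: s[0] & s[1]]

baseline = fixed_points(rules)             # {(0, 0, 0), (1, 1, 1)}
# Screen every knock-out (clamp to 0) and over-expression (clamp to 1).
screen = {(i, v): fixed_points(rules, clamp=(i, v))
          for i in range(3) for v in (0, 1)}
# e.g. knocking out C removes the all-on attractor but adds (1, 1, 0).
```

The exponential blow-up the abstract mentions appears as soon as combinations of clamps are screened, which is why SAT-based search replaces brute force at scale.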
Jiang, T; Jiang, C-Y; Shu, J-H; Xu, Y-J
2017-07-10
The molecular mechanism of nasopharyngeal carcinoma (NPC) is poorly understood, and effective therapeutic approaches are needed. This research aimed to uncover the attractor modules involved in the progression of NPC and provide further understanding of the underlying mechanism of NPC. Based on the gene expression data of NPC, two specific protein-protein interaction networks for NPC and control conditions were re-weighted using the Pearson correlation coefficient. Then, a systematic tracking of candidate modules was conducted on the re-weighted networks via a clique algorithm, and a total of 19 and 38 modules were identified from the NPC and control networks, respectively. Among them, 8 pairs of modules with similar gene composition were selected, and 2 attractor modules were identified via the attract method. Functional analysis indicated that these two attractor modules participate in one common bioprocess of cell division. Based on the strategy of integrating systemic module inference with the attract method, we successfully identified 2 attractor modules. These attractor modules might play important roles in the molecular pathogenesis of NPC by jointly affecting the bioprocess of cell division. Further research is needed to explore the correlations between cell division and NPC.
A geometrical approach to control and controllability of nonlinear dynamical networks
Wang, Le-Zhi; Su, Ri-Qi; Huang, Zi-Gang; Wang, Xiao; Wang, Wen-Xu; Grebogi, Celso; Lai, Ying-Cheng
2016-01-01
In spite of the recent interest and advances in linear controllability of complex networks, controlling nonlinear network dynamics remains an outstanding problem. Here we develop an experimentally feasible control framework for nonlinear dynamical networks that exhibit multistability. The control objective is to apply parameter perturbation to drive the system from one attractor to another, assuming that the former is undesired and the latter is desired. To make our framework practically meaningful, we consider restricted parameter perturbation by imposing two constraints: it must be experimentally realizable and applied only temporarily. We introduce the concept of attractor network, which allows us to formulate a quantifiable controllability framework for nonlinear dynamical networks: a network is more controllable if the attractor network is more strongly connected. We test our control framework using examples from various models of experimental gene regulatory networks and demonstrate the beneficial role of noise in facilitating control. PMID:27076273
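The attractor-network notion reduces controllability to graph reachability, which is straightforward to check (toy attractor graph; the edge set here is made up for illustration).

```python
# Nodes are attractors of the nonlinear system; a directed edge A -> B means
# some admissible, temporary parameter perturbation drives the system from
# attractor A into the basin of attractor B.
edges = {"A1": ["A2"], "A2": ["A3"], "A3": ["A1", "A2"]}

def reachable(graph, start):
    # Depth-first search over the attractor network.
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

# Strong connectivity of the attractor network: any attractor can be
# steered to from any other, i.e. the network is fully controllable
# in the sense the abstract describes.
strongly_connected = all(reachable(edges, n) == set(edges) for n in edges)
```

Dropping an edge (say A3 -> A1) leaves some attractors unreachable from others, the graph-level signature of a less controllable system.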
Attractors in complex networks
NASA Astrophysics Data System (ADS)
Rodrigues, Alexandre A. P.
2017-10-01
In the framework of the generalized Lotka-Volterra model, solutions representing multispecies sequential competition can be predictable with high probability. In this paper, we show that it occurs because the corresponding "heteroclinic channel" forms part of an attractor. We prove that, generically, in an attracting heteroclinic network involving a finite number of hyperbolic and non-resonant saddle-equilibria whose linearization has only real eigenvalues, the connections corresponding to the most positive expanding eigenvalues form part of an attractor (observable in numerical simulations).
Network dynamics and systems biology
NASA Astrophysics Data System (ADS)
Norrell, Johannes A.
The physics of complex systems has grown considerably as a field in recent decades, largely due to improved computational technology and increased availability of systems level data. One area in which physics is of growing relevance is molecular biology. A new field, systems biology, investigates features of biological systems as a whole, a strategy of particular importance for understanding emergent properties that result from a complex network of interactions. Due to the complicated nature of the systems under study, the physics of complex systems has a significant role to play in elucidating the collective behavior. In this dissertation, we explore three problems in the physics of complex systems, motivated in part by systems biology. The first of these concerns the applicability of Boolean models as an approximation of continuous systems. Studies of gene regulatory networks have employed both continuous and Boolean models to analyze the system dynamics, and the two have been found to produce similar results in the cases analyzed. We ask whether or not Boolean models can generically reproduce the qualitative attractor dynamics of networks of continuously valued elements. Using a combination of analytical techniques and numerical simulations, we find that continuous networks exhibit two effects---an asymmetry between on and off states, and a decaying memory of events in each element's inputs---that are absent from synchronously updated Boolean models. We show that in simple loops these effects produce exactly the attractors that one would predict with an analysis of the stability of Boolean attractors, but in slightly more complicated topologies, they can destabilize solutions that are stable in the Boolean approximation, and can stabilize new attractors. Second, we investigate ensembles of large, random networks. Of particular interest is the transition between ordered and disordered dynamics, which is well characterized in Boolean systems.
Networks at the transition point, called critical, exhibit many of the features of regulatory networks, and recent studies suggest that some specific regulatory networks are indeed near-critical. We ask whether certain statistical measures of the ensemble behavior of large continuous networks are reproduced by Boolean models. We find that, in spite of the lack of correspondence between attractors observed in smaller systems, the statistical characterization given by the continuous and Boolean models shows close agreement, and the transition between order and disorder known in Boolean systems can occur in continuous systems as well. One effect that is not present in Boolean systems, the failure of information to propagate down chains of elements of arbitrary length, is present in a class of continuous networks. In these systems, a modified Boolean theory that takes into account the collective effect of propagation failure on chains throughout the network gives a good description of the observed behavior. We find that propagation failure pushes the system toward greater order, resulting in a partial or complete suppression of the disordered phase. Finally, we explore a dynamical process of direct biological relevance: asymmetric cell division in A. thaliana. The long term goal is to develop a model for the process that accurately accounts for both wild type and mutant behavior. To contribute to this endeavor, we use confocal microscopy to image roots in a SHORT-ROOT inducible mutant. We compute correlation functions between the locations of asymmetrically divided cells, and we construct stochastic models based on a few simple assumptions that accurately predict the non-zero correlations. Our result shows that intracellular processes alone cannot be responsible for the observed divisions, and that an intercell signaling mechanism could account for the measured correlations.
Chong, Ket Hing; Zhang, Xiaomeng; Zheng, Jie
2018-01-01
Ageing is a natural phenomenon that is inherently complex and remains a mystery. A conceptual model of the cellular ageing landscape has been proposed for computational studies of ageing; however, a quantitative model of the cellular ageing landscape is still lacking. This study aims to investigate the mechanism of cellular ageing in a theoretical model using the framework of Waddington's epigenetic landscape. We construct an ageing gene regulatory network (GRN) consisting of the core cell cycle regulatory genes (including p53). A model parameter (activation rate) is used as a measure of the accumulation of DNA damage. Using bifurcation diagrams to estimate the parameter values that lead to multi-stability, we obtained a conceptual model capturing three distinct stable steady states (or attractors) corresponding to homeostasis, cell cycle arrest, and senescence or apoptosis. In addition, we applied a Monte Carlo computational method to quantify the potential landscape, which displays: I) one homeostasis attractor for low accumulation of DNA damage; II) two attractors, for cell cycle arrest and senescence (or apoptosis), in response to high accumulation of DNA damage. In the Waddington epigenetic landscape framework, the process of ageing can be characterized by state transitions from landscape I to II. By in silico perturbations, we identified the potential landscape of a perturbed network (inactivation of p53), and thereby demonstrated the emergence of a cancer attractor. The simulated dynamics of the perturbed network display a landscape with four basins of attraction: homeostasis, cell cycle arrest, senescence (or apoptosis), and cancer. Our analysis also showed that for the same perturbed network with low DNA damage, the landscape displays only the homeostasis attractor. The mechanistic model offers theoretical insights that can facilitate the discovery of potential strategies for network medicine of ageing-related diseases such as cancer.
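The Monte Carlo landscape-quantification step can be sketched generically (a one-dimensional double-well surrogate with invented parameters, not the paper's GRN): long stochastic simulations give the steady-state distribution P(x), and U(x) = -ln P(x) exposes the basins of attraction.

```python
import math, random

random.seed(4)

def drift(x):
    return x - x ** 3                 # double well: stable states at -1 and +1

dt, sigma = 0.01, 0.35
x, counts = 0.0, [0] * 40             # histogram of x over [-2, 2)
for n in range(400000):
    # Langevin dynamics: deterministic drift plus Gaussian noise.
    x += dt * drift(x) + sigma * (dt ** 0.5) * random.gauss(0, 1)
    x = max(-1.999, min(1.999, x))    # keep x inside the histogram range
    if n > 1000:                      # discard the initial transient
        counts[int((x + 2.0) / 0.1)] += 1

total = sum(counts)
U = [(-math.log(c / total) if c else float("inf")) for c in counts]
# U(x) = -ln P(x) has two minima (basins) near x = -1 and x = +1,
# separated by a barrier near x = 0: the landscape picture of bistability.
```

In the paper's setting the same recipe is applied to the multi-dimensional GRN, and moving the activation-rate parameter reshapes U from one basin to several.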
Cancer Theory from Systems Biology Point of View
NASA Astrophysics Data System (ADS)
Wang, Gaowei; Tang, Ying; Yuan, Ruoshi; Ao, Ping
In our previous work, we proposed a novel cancer theory, the endogenous network theory, to understand the mechanism underlying cancer genesis and development. Recently, we applied this theory to hepatocellular carcinoma (HCC). A core endogenous network of the hepatocyte was established by integrating the current understanding of the hepatocyte at the molecular level. The quantitative description of the endogenous network consists of a set of stochastic differential equations which can generate many local attractors with obvious or non-obvious biological functions. By comparison with clinical observations and experimental data, the results showed that two robust attractors from the model reproduce the main known features of the normal hepatocyte and the cancerous hepatocyte, respectively, at both the modular and molecular levels. In light of our theory, the genesis and progression of cancer is viewed as a transition from the normal attractor to the HCC attractor. This view yields a set of new insights into cancer genesis and progression, and into strategies for cancer prevention, cure, and care.
Griffin: A Tool for Symbolic Inference of Synchronous Boolean Molecular Networks.
Muñoz, Stalin; Carrillo, Miguel; Azpeitia, Eugenio; Rosenblueth, David A
2018-01-01
Boolean networks are important models of biochemical systems, located at the high end of the abstraction spectrum. A number of Boolean gene networks have been inferred following essentially the same method. Such a method first considers experimental data for a typically underdetermined "regulation" graph. Next, Boolean networks are inferred by using biological constraints to narrow the search space, such as a desired set of (fixed-point or cyclic) attractors. We describe Griffin, a computer tool enhancing this method. Griffin incorporates a number of well-established algorithms, such as Dubrova and Teslenko's algorithm for finding attractors in synchronous Boolean networks. In addition, a formal definition of regulation allows Griffin to employ "symbolic" techniques, able to represent both large sets of network states and Boolean constraints. We observe that when the set of attractors is required to be an exact set, prohibiting additional attractors, a naive Boolean coding of this constraint may be unfeasible. Such cases may be intractable even with symbolic methods, as the number of Boolean constraints may be astronomically large. To overcome this problem, we employ an Artificial Intelligence technique known as "clause learning", considerably increasing Griffin's scalability. Without clause learning, only toy examples prohibiting additional attractors are solvable: only one out of seven queries reported here is answered. With clause learning, by contrast, all seven queries are answered. We illustrate Griffin with three case studies drawn from the Arabidopsis thaliana literature. Griffin is available at: http://turing.iimas.unam.mx/griffin.
Synthetic Modeling of Autonomous Learning with a Chaotic Neural Network
NASA Astrophysics Data System (ADS)
Funabashi, Masatoshi
We investigate the possible role of intermittent chaotic dynamics, called chaotic itinerancy, in interaction with unsupervised learning rules that reinforce or weaken neural connections depending on the dynamics itself. We first performed a hierarchical stability analysis of the Chaotic Neural Network model (CNN) according to the structure of its invariant subspaces. Irregular transitions between two attractor ruins with a positive maximum Lyapunov exponent were triggered by the blowout bifurcation of the attractor spaces and were associated with a riddled basin structure. We then modeled two autonomous learning rules, Hebbian learning and the spike-timing-dependent plasticity (STDP) rule, and simulated their effect on the chaotic itinerancy state of the CNN. Hebbian learning increased the residence time on attractor ruins and produced novel attractors in the minimal higher-dimensional subspace. It also augmented neuronal synchrony and established uniform modularity in the chaotic itinerancy. The STDP rule reduced the residence time on attractor ruins and brought a wide range of periodicities to the emerging attractors, possibly including strange attractors. Both learning rules selectively destroyed or preserved specific invariant subspaces, depending on the neuronal synchrony of the subspace in which the orbits were situated. The computational rationale of such autonomous learning is discussed from a connectionist perspective.
Remembering the Past and Imagining the Future: A Neural Model of Spatial Memory and Imagery
ERIC Educational Resources Information Center
Byrne, Patrick; Becker, Suzanna; Burgess, Neil
2007-01-01
The authors model the neural mechanisms underlying spatial cognition, integrating neuronal systems and behavioral data, and address the relationships between long-term memory, short-term memory, and imagery, and between egocentric and allocentric and visual and idiothetic representations. Long-term spatial memory is modeled as attractor dynamics…
Local complexity predicts global synchronization of hierarchically networked oscillators
NASA Astrophysics Data System (ADS)
Xu, Jin; Park, Dong-Ho; Jo, Junghyo
2017-07-01
We study the global synchronization of hierarchically organized Stuart-Landau oscillators, where each subsystem consists of three oscillators with activity-dependent couplings. We considered all possible coupling signs between the three oscillators and found that they can generate different numbers of phase attractors depending on the network motif. The subsystems are coupled through the mean activity of all of their oscillators. Under weak inter-subsystem couplings, we demonstrate that the synchronization between subsystems is highly correlated with the number of attractors in the uncoupled subsystems. Among the network motifs, only the perfectly anti-symmetric ones generate both single and multiple attractors depending on the activities of the oscillators. This flexible local complexity makes global synchronization controllable.
Distortions in recall from visual memory: two classes of attractors at work.
Huang, Jie; Sekuler, Robert
2010-02-24
In a trio of experiments, a matching procedure generated direct, analogue measures of short-term memory for the spatial frequency of Gabor stimuli. Experiment 1 showed that when just a single Gabor was presented for study, a retention interval of just a few seconds was enough to increase the variability of matches, suggesting that noise in memory substantially exceeds that in vision. Experiment 2 revealed that when a pair of Gabors was presented on each trial, the remembered appearance of one of the Gabors was influenced by: (1) the relationship between its spatial frequency and the spatial frequency of the accompanying, task-irrelevant non-target stimulus; and (2) the average spatial frequency of Gabors seen on previous trials. These two influences, which work on very different time scales, were approximately additive in their effects, each operating as an attractor for remembered appearance. Experiment 3 showed that a timely pre-stimulus cue allowed selective attention to curtail the influence of a task-irrelevant non-target, without diminishing the impact of the stimuli seen on previous trials. It appears that these two separable attractors influence distinct processes, with perception being influenced by the non-target stimulus and memory being influenced by stimuli seen on previous trials.
Continuous attractor network models of grid cell firing based on excitatory–inhibitory interactions
Shipston‐Sharman, Oliver; Solanka, Lukas
2016-01-01
Neurons in the medial entorhinal cortex encode location through spatial firing fields that have a grid‐like organisation. The challenge of identifying mechanisms for grid firing has been addressed through experimental and theoretical investigations of medial entorhinal circuits. Here, we discuss evidence for continuous attractor network models that account for grid firing by synaptic interactions between excitatory and inhibitory cells. These models assume that grid‐like firing patterns are the result of computation of location from velocity inputs, with additional spatial input required to oppose drift in the attractor state. We focus on properties of continuous attractor networks that are revealed by explicitly considering excitatory and inhibitory neurons, their connectivity and their membrane potential dynamics. Models at this level of detail can account for theta‐nested gamma oscillations as well as grid firing, predict spatial firing of interneurons as well as excitatory cells, show how gamma oscillations can be modulated independently from spatial computations, reveal critical roles for neuronal noise, and demonstrate that only a subset of excitatory cells in a network need have grid‐like firing fields. Evaluating experimental data against predictions from detailed network models will be important for establishing the mechanisms mediating grid firing. PMID:27870120
Effect of dilution in asymmetric recurrent neural networks.
Folli, Viola; Gosti, Giorgio; Leonetti, Marco; Ruocco, Giancarlo
2018-04-16
We use numerical simulations to study the possible limit behaviors of synchronous discrete-time deterministic recurrent neural networks composed of N binary neurons, as a function of the network's level of dilution and asymmetry. The network dilution measures the fraction of neuron couples that are connected, and the network asymmetry measures to what extent the underlying connectivity matrix is asymmetric. For each given neural network, we study the dynamical evolution of all the different initial conditions, thus characterizing the full dynamical landscape without imposing any learning rule. Because the dynamics is deterministic, each trajectory converges to an attractor, which can be either a fixed point or a limit cycle. These attractors form the set of all possible limit behaviors of the neural network. For each network we then determine the convergence times, the limit cycles' lengths, the number of attractors, and the sizes of the attractors' basins. We show that there are two network structures that maximize the number of possible limit behaviors. The first optimal network structure is fully connected and symmetric. By contrast, the second optimal network structure is highly sparse and asymmetric. The latter is similar to what is observed in various biological neuronal circuits. These observations lead us to hypothesize that, independently of any given learning model, an efficient and effective biological network that stores a number of limit behaviors close to its maximum capacity tends to develop a connectivity structure similar to one of the optimal networks we found.
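The exhaustive procedure described, following every initial condition of a binary network to its fixed point or limit cycle, can be sketched for very small N (function names and parameter values here are illustrative, not the paper's code):

```python
import itertools, random

def step(J, s):
    # Synchronous threshold update: s_i <- sign(sum_j J_ij s_j), with sign(0) = +1.
    n = len(s)
    return tuple(1 if sum(J[i][j] * s[j] for j in range(n)) >= 0 else -1
                 for i in range(n))

def attractor_count(J):
    """Count the distinct attractors (fixed points or limit cycles) reached
    from every one of the 2**n initial conditions of a binary network."""
    n = len(J)
    found = set()
    for s0 in itertools.product((-1, 1), repeat=n):
        seen, s = {}, s0
        while s not in seen:
            seen[s] = len(seen)
            s = step(J, s)
        found.add(frozenset(list(seen)[seen[s]:]))   # the cycle's state set
    return len(found)

def random_coupling(n, dilution, rng):
    # dilution = fraction of absent couplings; surviving weights are +-1 and
    # drawn independently for (i, j) and (j, i), i.e. maximally asymmetric.
    return [[0 if i == j or rng.random() < dilution else rng.choice((-1, 1))
             for j in range(n)] for i in range(n)]

n_attractors = attractor_count(random_coupling(8, 0.5, random.Random(0)))
```

Sweeping `dilution` and the degree of symmetry while recording `attractor_count` reproduces, in miniature, the kind of dynamical-landscape census the paper performs at scale.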
Algebraic model checking for Boolean gene regulatory networks.
Tran, Quoc-Nam
2011-01-01
We present a computational method in which modular and Groebner basis (GB) computations in Boolean rings are used for solving problems in Boolean gene regulatory networks (BN). In contrast to other known algebraic approaches, the degree of the intermediate polynomials during the calculation of Groebner bases using our method never grows, resulting in a significant improvement in running time and memory consumption. We also show how calculation in temporal logic for model checking can be done by means of our direct and efficient Groebner basis computation in Boolean rings. We present experimental results in finding attractors and control strategies of Boolean networks to illustrate our theoretical arguments. The results are promising: our algebraic approach is more efficient than the state-of-the-art model checker NuSMV on BNs. More importantly, our approach finds all solutions for the BN problems.
A Framework for Network Visualisation: Progress Report
2006-12-01
…time; secondly, a simple oscillation, in which traffic changes but those changes repeat periodically; or thirdly, a "strange attractor", a pattern of changes that never repeats exactly, though it may appear to repeat approximately. The strange attractor is the signature of a chaotic system. Taylor, M.M. (2006). A Framework for Network Visualisation: Progress Report. In Visualising Network Information (pp. 3-1 – 3-22). IST-063.
De la Fuente, Ildefonso M.; Cortes, Jesus M.; Pelta, David A.; Veguillas, Juan
2013-01-01
Background: The experimental observations and numerical studies with dissipative metabolic networks have shown that cellular enzymatic activity self-organizes spontaneously, leading to the emergence of a Systemic Metabolic Structure in the cell, characterized by a set of different enzymatic reactions always locked into active states (metabolic core) while the rest of the catalytic processes are only intermittently active. This global metabolic structure was verified for Escherichia coli, Helicobacter pylori and Saccharomyces cerevisiae, and it seems to be a common key feature of all cellular organisms. In concordance with these observations, the cell can be considered a complex metabolic network which mainly integrates a large ensemble of self-organized multienzymatic complexes interconnected by substrate fluxes and regulatory signals, where multiple autonomous oscillatory and quasi-stationary catalytic patterns simultaneously emerge. The network adjusts its internal metabolic activities to external changes by means of flux plasticity and structural plasticity. Methodology/Principal Findings: In order to investigate the systemic mechanisms involved in the regulation of cellular enzymatic activity, we have studied different catalytic activities of a dissipative metabolic network under different external stimuli. The emergent biochemical data have been analysed using statistical mechanics tools, studying some macroscopic properties such as the global information and the energy of the system. We have also obtained an equivalent Hopfield network using a Boltzmann machine. Our main result shows that the dissipative metabolic network can behave as an attractor metabolic network. Conclusions/Significance: We have found that the systemic enzymatic activities are governed by attractors with the capacity to store functional metabolic patterns, which can be correctly recovered from specific input stimuli.
The network attractors regulate the catalytic patterns, modify the efficiency in the connection between the multienzymatic complexes, and stably retain these modifications. Here for the first time, we have introduced the general concept of attractor metabolic network, in which this dynamic behavior is observed. PMID:23554883
Corticonic models of brain mechanisms underlying cognition and intelligence
NASA Astrophysics Data System (ADS)
Farhat, Nabil H.
The concern of this review is brain theory or, more specifically, in its first part, a model of the cerebral cortex and the way it: (a) interacts with subcortical regions like the thalamus and the hippocampus to provide higher-level brain functions that underlie cognition and intelligence, (b) handles and represents dynamical sensory patterns imposed by a constantly changing environment, (c) copes with the enormous number of such patterns encountered in a lifetime by means of dynamic memory that offers an immense number of stimulus-specific attractors for input patterns (stimuli) to select from, (d) selects an attractor through a process of “conjugation” of the input pattern with the dynamics of the thalamo-cortical loop, (e) distinguishes between redundant (structured) and non-redundant (random) inputs that are void of information, (f) can do categorical perception when there is access to vast associative memory laid out in the association cortex with the help of the hippocampus, and (g) makes use of “computation” at the edge of chaos and information-driven annealing to achieve all this. Other features and implications of the concepts presented for the design of computational algorithms and machines with brain-like intelligence are also discussed. The material and results presented suggest that a Parametrically Coupled Logistic Map Network (PCLMN) is a minimal model of the thalamo-cortical complex and that marrying such a network to a suitable associative memory with re-entry or feedback forms a useful, albeit abstract, model of a cortical module of the brain that could facilitate building a simple artificial brain. In the second part of the review, the results of numerical simulations and the conclusions drawn in the first part are linked to the most directly relevant works and views of other workers.
What emerges is a picture of brain dynamics on the mesoscopic and macroscopic scales that gives a glimpse of the nature of the long sought after brain code underlying intelligence and other higher level brain functions.
Origins of Chaos in Autonomous Boolean Networks
NASA Astrophysics Data System (ADS)
Socolar, Joshua; Cavalcante, Hugo; Gauthier, Daniel; Zhang, Rui
2010-03-01
Networks with nodes consisting of ideal Boolean logic gates are known to display either steady states, periodic behavior, or an ultraviolet catastrophe where the number of logic-transition events circulating in the network per unit time grows as a power law. In an experiment, non-ideal behavior of the logic gates prevents the ultraviolet catastrophe and may lead to deterministic chaos. We identify certain non-ideal features of real logic gates that enable chaos in experimental networks. We find that short-pulse rejection and the asymmetry between the logic states tend to engender periodic behavior. On the other hand, a memory effect termed “degradation” can generate chaos. Our results strongly suggest that deterministic chaos can be expected in a large class of experimental Boolean-like networks. Such devices may find application in a variety of technologies requiring fast complex waveforms or flat power spectra. The non-ideal effects identified here also have implications for the statistics of attractors in large complex networks.
Continuous Attractor Network Model for Conjunctive Position-by-Velocity Tuning of Grid Cells
Si, Bailu; Romani, Sandro; Tsodyks, Misha
2014-01-01
The spatial responses of many of the cells recorded in layer II of rodent medial entorhinal cortex (MEC) show a triangular grid pattern, which appears to provide an accurate population code for animal spatial position. In layers III, V and VI of the rat MEC, grid cells are also selective to head direction and are modulated by the speed of the animal. Several putative mechanisms of grid-like maps have been proposed, including attractor network dynamics, interactions with theta oscillations, and single-unit mechanisms such as firing rate adaptation. In this paper, we present a new attractor network model that accounts for the conjunctive position-by-velocity selectivity of grid cells. Our network model is able to perform robust path integration even when the recurrent connections are subject to random perturbations. PMID:24743341
Understanding genetic regulatory networks
NASA Astrophysics Data System (ADS)
Kauffman, Stuart
2003-04-01
Random Boolean networks (RBNs) were introduced about 35 years ago as first crude models of genetic regulatory networks. RBNs are comprised of N on-off genes, connected by a randomly assigned regulatory wiring diagram where each gene has K inputs, and each gene is controlled by a randomly assigned Boolean function. This procedure samples at random from the ensemble of all possible NK Boolean networks. The central ideas are to study the typical, or generic, properties of this ensemble, and to see 1) whether characteristic differences appear as K and biases in the Boolean functions are introduced, and 2) whether a subclass of this ensemble has properties matching real cells. Such networks behave in an ordered or a chaotic regime, with a phase transition, "the edge of chaos," between the two regimes. Networks with continuous variables exhibit the same two regimes. Substantial evidence suggests that real cells are in the ordered regime. A key concept is that of an attractor: a reentrant trajectory of states of the network, called a state cycle. The central biological interpretation is that cell types are attractors. A number of properties differentiate the ordered and chaotic regimes. These include the size and number of attractors; the existence in the ordered regime of a percolating "sea" of genes frozen in the on or off state, with the remainder forming isolated twinkling islands of genes; a power-law distribution of avalanches of gene-activity changes following perturbation of a single gene in the ordered regime, versus a similar power-law distribution plus a spike of enormous avalanches of gene changes in the chaotic regime; and the existence of branching pathways of "differentiation" between attractors induced by perturbations in the ordered regime. Noise is a serious issue, since noise disrupts attractors. But numerical evidence suggests that attractors can be made very stable to noise, and meanwhile, metaplasias may be a biological manifestation of noise.
As we learn more about the wiring diagram and constraints on rules controlling real genes, we can build refined ensembles reflecting these properties, study the generic properties of the refined ensembles, and hope to gain insight into the dynamics of real cells.
Szalay, Kristóf Z; Nussinov, Ruth; Csermely, Peter
2014-06-01
Conformational barcodes tag functional sites of proteins and are decoded by interacting molecules transmitting the incoming signal. Conformational barcodes are modified by all co-occurring allosteric events induced by post-translational modifications, pathogen or drug binding, etc. We argue that the fuzziness (plasticity) of conformational barcodes may be increased by disordered protein structures, by the integrative plasticity of multi-phosphorylation events, by increased intracellular water content (decreased molecular crowding) and by the increased action of molecular chaperones. This leads to increased plasticity of signaling and cellular networks. Increased plasticity is both substantiated by and induces an increased noise level. Using the versatile network dynamics tool Turbine (www.turbine.linkgroup.hu), here we show that the 10% noise level expected in cellular systems shifts a cancer-related signaling network of human cells from its proliferative attractors to its largest, apoptotic attractor, representing the health-preserving response in the carcinogen-containing and tumor-suppressor-deficient environment modeled in our study. Thus, fuzzy conformational barcodes may not only make the cellular system more plastic, and therefore more adaptable, but may also stabilize the complex system, allowing better access to its largest attractor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emenheiser, Jeffrey; Department of Physics, University of California, Davis, California 95616; Chapman, Airlie
Following the long-lived qualitative-dynamics tradition of explaining behavior in complex systems via the architecture of their attractors and basins, we investigate the patterns of switching between distinct trajectories in a network of synchronized oscillators. Our system, consisting of nonlinear amplitude-phase oscillators arranged in a ring topology with reactive nearest-neighbor coupling, is simple and connects directly to experimental realizations. We seek to understand how the multiple stable synchronized states connect to each other in state space by applying Gaussian white noise to each of the oscillators' phases. To do this, we first analytically identify a set of locally stable limit cycles at any given coupling strength. For each of these attracting states, we analyze the effect of weak noise via the covariance matrix of deviations around those attractors. We then explore the noise-induced attractor switching behavior via numerical investigations. For a ring of three oscillators, we find that an attractor-switching event is always accompanied by the crossing of two adjacent oscillators' phases. For larger numbers of oscillators, we find that the distribution of times required to stochastically leave a given state falls off exponentially, and we build an attractor switching network out of the destination states as a coarse-grained description of the high-dimensional attractor-basin architecture.
Classification of attractors for systems of identical coupled Kuramoto oscillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engelbrecht, Jan R.; Mirollo, Renato
2014-03-15
We present a complete classification of attractors for networks of coupled identical Kuramoto oscillators. In such networks, each oscillator is driven by the same first-order trigonometric function, with coefficients given by symmetric functions of the entire oscillator ensemble. For N ≠ 3 oscillators, there are four possible types of attractors: completely synchronized fixed points or limit cycles, and fixed points or limit cycles where all but one of the oscillators are synchronized. The case N = 3 is exceptional; systems of three identical Kuramoto oscillators can also possess attracting fixed points or limit cycles with all three oscillators out of sync, as well as chaotic attractors. Our results rely heavily on the invariance of the flow for such systems under the action of the three-dimensional group of Möbius transformations, which preserve the unit disc, and on the analysis of the possible limiting configurations for this group action.
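For intuition, the completely synchronized attractor is easy to exhibit numerically. This sketch (plain Euler integration with illustrative parameters, not the authors' group-theoretic analysis) drives identical sine-coupled oscillators to full synchrony:

```python
import math, cmath, random

def simulate_identical_kuramoto(n, coupling, dt, steps, rng):
    """Euler-integrate n identical Kuramoto oscillators in the rotating frame:
       dtheta_i/dt = (K/n) * sum_j sin(theta_j - theta_i),
    using the equivalent mean-field form K * r * sin(psi - theta_i)."""
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / n   # order parameter r*e^{i*psi}
        theta = [t + dt * coupling * abs(z) * math.sin(cmath.phase(z) - t)
                 for t in theta]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

# From generic initial phases, any positive coupling sends identical
# oscillators onto the fully synchronized attractor (r -> 1).
r = simulate_identical_kuramoto(10, 1.0, 0.05, 4000, random.Random(2))
```

The nontrivial content of the paper is what this simulation cannot show: the complete list of the other attractor types and the exceptional chaotic possibilities at N = 3.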
Dynamical synapses enhance neural information processing: gracefulness, accuracy, and mobility.
Fung, C C Alan; Wong, K Y Michael; Wang, He; Wu, Si
2012-05-01
Experimental data have revealed that neuronal connection efficacy exhibits two forms of short-term plasticity: short-term depression (STD) and short-term facilitation (STF). They have time constants residing between those of fast neural signaling and rapid learning and may serve as substrates for neural systems manipulating temporal information on relevant timescales. This study investigates the impact of STD and STF on the dynamics of continuous attractor neural networks and their potential roles in neural information processing. We find that STD endows the network with slow-decaying plateau behaviors: a network that is initially stimulated to an active state decays to a silent state very slowly, on the timescale of STD rather than on that of neural signaling. This provides a mechanism for neural systems to hold sensory memory easily and shut off persistent activities gracefully. With STF, we find that the network can hold a memory trace of external inputs in the facilitated neuronal interactions, which provides a way to stabilize the network response to noisy inputs, leading to improved accuracy in population decoding. Furthermore, we find that STD increases the mobility of the network states. The increased mobility enhances the tracking performance of the network in response to time-varying stimuli, leading to anticipative neural responses. In general, we find that STD and STF tend to have opposite effects on network dynamics and complementary computational advantages, suggesting that the brain may employ a strategy of weighting them differentially depending on the computational purpose.
NASA Astrophysics Data System (ADS)
Hashimoto, Ryoji; Matsumura, Tomoya; Nozato, Yoshihiro; Watanabe, Kenji; Onoye, Takao
A multi-agent object attention system is proposed, which is based on a biologically inspired attractor selection model. Object attention is facilitated by using a video sequence and a depth map obtained through a compound-eye image sensor, TOMBO. Robustness of the multi-agent system to environmental changes is enhanced by utilizing the biological model of adaptive response by attractor selection. To implement the proposed system, an efficient VLSI architecture is employed, reducing the enormous computational costs and memory accesses required for depth-map processing and the multi-agent attractor selection process. According to the FPGA implementation result of the proposed object attention system, which occupies 7,063 slices, 640×512-pixel input images can be processed in real time with three agents at a rate of 9 fps in 48 MHz operation.
Short-term depression and transient memory in sensory cortex.
Gillary, Grant; Heydt, Rüdiger von der; Niebur, Ernst
2017-12-01
Persistent neuronal activity is usually studied in the context of short-term memory localized in central cortical areas. Recent studies show that early sensory areas also can have persistent representations of stimuli which emerge quickly (over tens of milliseconds) and decay slowly (over seconds). Traditional positive feedback models cannot explain sensory persistence for at least two reasons: (i) They show attractor dynamics, with transient perturbations resulting in a quasi-permanent change of system state, whereas sensory systems return to the original state after a transient. (ii) As we show, those positive feedback models which decay to baseline lose their persistence when their recurrent connections are subject to short-term depression, a common property of excitatory connections in early sensory areas. Dual time constant network behavior has also been implemented by nonlinear afferents producing a large transient input followed by much smaller steady state input. We show that such networks require unphysiologically large onset transients to produce the rise and decay observed in sensory areas. Our study explores how memory and persistence can be implemented in another model class, derivative feedback networks. We show that these networks can operate with two vastly different time courses, changing their state quickly when new information is coming in but retaining it for a long time, and that these capabilities are robust to short-term depression. Specifically, derivative feedback networks with short-term depression that acts differentially on positive and negative feedback projections are capable of dynamically changing their time constant, thus allowing fast onset and slow decay of responses without requiring unrealistically large input transients.
Balanced excitation and inhibition are required for high-capacity, noise-robust neuronal selectivity
Abbott, L. F.; Sompolinsky, Haim
2017-01-01
Neurons and networks in the cerebral cortex must operate reliably despite multiple sources of noise. To evaluate the impact of both input and output noise, we determine the robustness of single-neuron stimulus selective responses, as well as the robustness of attractor states of networks of neurons performing memory tasks. We find that robustness to output noise requires synaptic connections to be in a balanced regime in which excitation and inhibition are strong and largely cancel each other. We evaluate the conditions required for this regime to exist and determine the properties of networks operating within it. A plausible synaptic plasticity rule for learning that balances weight configurations is presented. Our theory predicts an optimal ratio of the number of excitatory and inhibitory synapses for maximizing the encoding capacity of balanced networks for given statistics of afferent activations. Previous work has shown that balanced networks amplify spatiotemporal variability and account for observed asynchronous irregular states. Here we present a distinct type of balanced network that amplifies small changes in the impinging signals and emerges automatically from learning to perform neuronal and network functions robustly. PMID:29042519
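The balance requirement can be illustrated with a toy calculation using the standard balanced-network scaling (weights of order 1/sqrt(K); this scaling is textbook material, not taken from the abstract above): with K strong excitatory and K strong inhibitory inputs at matched rates, the large mean contributions cancel while the fluctuations remain of order one.

```python
import math, random

def net_input(k, rate, rng):
    """Net input to one neuron receiving k excitatory and k inhibitory
    Bernoulli inputs with strong weights J = 1/sqrt(k). At matched rates
    the O(sqrt(k)) mean contributions cancel, leaving O(1) fluctuations."""
    j = 1.0 / math.sqrt(k)
    exc = sum(j for _ in range(k) if rng.random() < rate)
    inh = sum(j for _ in range(k) if rng.random() < rate)
    return exc - inh

rng = random.Random(3)
samples = [net_input(10000, 0.2, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

With rate 0.2 the variance settles near 2 * 0.2 * 0.8 = 0.32 regardless of K, while an unbalanced version (excitation alone) would have a mean growing like sqrt(K).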
Radar signal categorization using a neural network
NASA Technical Reports Server (NTRS)
Anderson, James A.; Gately, Michael T.; Penz, P. Andrew; Collins, Dean R.
1991-01-01
Neural networks were used to analyze a complex simulated radar environment which contains noisy radar pulses generated by many different emitters. The neural network used is an energy minimizing network (the BSB model) which forms energy minima - attractors in the network dynamical system - based on learned input data. The system first determines how many emitters are present (the deinterleaving problem). Pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, it is possible to make tentative identifications of them based on their observed parameters. As a test of this idea, a neural network was used to form a small data base that potentially could make emitter identifications.
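The BSB ("brain-state-in-a-box") dynamics used here have a simple textbook form: the state vector is repeatedly amplified through the weight matrix and clipped to the unit hypercube, so learned patterns become corner attractors. A minimal sketch (Hebbian storage of a single hypothetical pattern; parameter values are illustrative, not those of the radar system):

```python
def hebbian(patterns):
    # Outer-product (Hebbian) weights with zero diagonal: w_ij = sum_k p_i p_j / n.
    n = len(patterns[0])
    return [[0.0 if i == j else sum(p[i] * p[j] for p in patterns) / n
             for j in range(n)] for i in range(n)]

def bsb_recall(W, x, alpha=0.2, gamma=1.0, steps=50):
    """BSB iteration: x <- clip(gamma*x + alpha*W.x), clipped componentwise to
    [-1, 1]; trajectories are pushed outward until they stick at a corner."""
    n = len(x)
    for _ in range(n and steps):
        x = [max(-1.0, min(1.0,
                 gamma * x[i] + alpha * sum(W[i][j] * x[j] for j in range(n))))
             for i in range(n)]
    return x

p = [1, -1, 1, 1, -1, -1, 1, -1]
W = hebbian([p])
x = bsb_recall(W, [0.3 * v for v in p])   # a weak cue settles at the stored corner
```

In the radar setting, each stable corner of this kind would correspond to one emitter's pulse-parameter signature, which is how the deinterleaving step groups pulses by emitter.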
Synchronization behaviors of coupled systems composed of hidden attractors
NASA Astrophysics Data System (ADS)
Zhang, Ge; Wu, Fuqiang; Wang, Chunni; Ma, Jun
2017-10-01
Based on a class of chaotic systems composed of hidden attractors, in which the equilibrium points are described by a circular function, we investigate complete synchronization between two identical systems, pattern formation, and synchronization of a network. A statistical factor of synchronization is defined and calculated using the mean-field theory, and the dependence of synchronization on the bifurcation parameters is discussed numerically. Setting up a chain network whose local kinetics are described by hidden attractors, the approach to synchronization is investigated. It is found that synchronization and pattern formation depend on the coupling intensity and also on the selection of coupling variables. In the end, open problems are proposed to guide readers' further investigation.
Recall of patterns using binary and gray-scale autoassociative morphological memories
NASA Astrophysics Data System (ADS)
Sussner, Peter
2005-08-01
Morphological associative memories (MAM's) belong to a class of artificial neural networks that perform the operations erosion or dilation of mathematical morphology at each node. Therefore we speak of morphological neural networks. Alternatively, the total input effect on a morphological neuron can be expressed in terms of lattice induced matrix operations in the mathematical theory of minimax algebra. Neural models of associative memories are usually concerned with the storage and the retrieval of binary or bipolar patterns. Thus far, the emphasis in research on morphological associative memory systems has been on binary models, although a number of notable features of autoassociative morphological memories (AMM's) such as optimal absolute storage capacity and one-step convergence have been shown to hold in the general, gray-scale setting. In previous papers, we gained valuable insight into the storage and recall phases of AMM's by analyzing their fixed points and basins of attraction. We have shown in particular that the fixed points of binary AMM's correspond to the lattice polynomials in the original patterns. This paper extends these results in the following ways. In the first place, we provide an exact characterization of the fixed points of gray-scale AMM's in terms of combinations of the original patterns. Secondly, we present an exact expression for the fixed point attractor that represents the output of either a binary or a gray-scale AMM upon presentation of a certain input. The results of this paper are confirmed in several experiments using binary patterns and gray-scale images.
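The lattice operations involved can be made concrete. In the standard autoassociative construction W_XX (a generic sketch with illustrative binary patterns, not the paper's gray-scale experiments), the weights are min-differences of the stored patterns and recall is a max-plus (dilation) product, which retrieves every stored pattern exactly in one step:

```python
def amm_weights(patterns):
    """Autoassociative morphological memory W_XX: w_ij = min_k (x_i^k - x_j^k)."""
    n = len(patterns[0])
    return [[min(p[i] - p[j] for p in patterns) for j in range(n)]
            for i in range(n)]

def amm_recall(W, x):
    # Max-plus (dilation) product: y_i = max_j (w_ij + x_j).
    return [max(W[i][j] + x[j] for j in range(len(x))) for i in range(len(x))]

X = [[1, 0, 1, 0], [0, 1, 1, 0]]   # two stored binary patterns
W = amm_weights(X)
```

One-step perfect recall follows because w_ii = 0 makes the diagonal term reproduce x_i, while every other term w_ij + x_j is bounded above by x_i for stored patterns, which is the optimal-storage, one-step-convergence property the abstract refers to.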
A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks
Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo
2015-01-01
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. 
Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns. PMID:26291608
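The three-threshold rule described above reduces to a simple per-synapse update. The sketch below follows the verbal description only; the threshold values and learning rate are illustrative placeholders, not parameters from the paper:

```python
def three_threshold_update(w, h, active, th_low, th_mid, th_high, lr=0.01):
    """One plasticity step for a synapse whose presynaptic input is `active`,
    given the postsynaptic local field h: no plasticity above th_high or
    below th_low; in between, potentiate if h > th_mid, else depress."""
    if not active or h <= th_low or h >= th_high:
        return w
    return w + lr if h > th_mid else w - lr
```

For example, with thresholds (0, 1, 2), a field of 1.5 potentiates, 0.5 depresses, and 2.5 leaves the weight untouched, matching the rule that synapses whose fields are already far on the correct side need no further modification.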
Neural circuit mechanisms of short-term memory
NASA Astrophysics Data System (ADS)
Goldman, Mark
Memory over time scales of seconds to tens of seconds is thought to be maintained by neural activity that is triggered by a memorized stimulus and persists long after the stimulus is turned off. This presents a challenge to current models of memory-storing mechanisms, because the typical time scales associated with cellular and synaptic dynamics are two orders of magnitude smaller than this. While such long time scales can easily be achieved by bistable processes that toggle like a flip-flop between a baseline and elevated-activity state, many neuronal systems have been observed experimentally to be capable of maintaining a continuum of stable states. For example, in neural integrator networks involved in the accumulation of evidence for decision making and in motor control, individual neurons have been recorded whose activity reflects the mathematical integral of their inputs; in the absence of input, these neurons sustain activity at a level proportional to the running total of their inputs. This represents an analog form of memory whose dynamics can be conceptualized through an energy landscape with a continuum of lowest-energy states. Such continuous attractor landscapes are structurally non-robust, in seeming violation of the relative robustness of biological memory systems. In this talk, I will present and compare different biologically motivated circuit motifs for the accumulation and storage of signals in short-term memory. Challenges to generating robust memory maintenance will be highlighted and potential mechanisms for ameliorating the sensitivity of memory networks to perturbations will be discussed. Funding for this work was provided by NIH R01 MH065034, NSF IIS-1208218, Simons Foundation 324260, and a UC Davis Ophthalmology Research to Prevent Blindness Grant.
Echo state networks with filter neurons and a delay&sum readout.
Holzmann, Georg; Hauser, Helmut
2010-03-01
Echo state networks (ESNs) are a novel approach to recurrent neural network training with the advantage of a very simple and linear learning algorithm. It has been demonstrated that ESNs outperform other methods on a number of benchmark tasks. Although the approach is appealing, there are still some inherent limitations in the original formulation. Here we suggest two enhancements of this network model. First, the previously proposed idea of filters in neurons is extended to arbitrary infinite impulse response (IIR) filter neurons. This enables such networks to learn multiple attractors and signals at different timescales, which is especially important for modeling real-world time series. Second, a delay&sum readout is introduced, which adds trainable delays in the synaptic connections of output neurons and therefore vastly improves the memory capacity of echo state networks. It is shown in commonly used benchmark tasks and real-world examples, that this new structure is able to significantly outperform standard ESNs and other state-of-the-art models for nonlinear dynamical system modeling. Copyright 2009 Elsevier Ltd. All rights reserved.
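As a rough sketch of the delay&sum idea, one can choose each readout channel's delay by maximizing correlation with the target and then fit an ordinary least-squares readout on the delayed states. The brute-force delay search and the minimal reservoir below are my assumptions for illustration, not the authors' training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_reservoir(u, n=20, rho=0.8):
    """Minimal tanh reservoir; the returned states include the raw input
    as channel 0 so the readout can exploit it directly."""
    W = rng.normal(size=(n, n))
    W *= rho / max(abs(np.linalg.eigvals(W)))   # set spectral radius
    w_in = rng.normal(size=n)
    x, states = np.zeros(n), []
    for ut in u:
        x = np.tanh(W @ x + w_in * ut)
        states.append(np.concatenate(([ut], x)))
    return np.array(states)

def delay_and_sum_fit(states, y, max_delay):
    """Per channel, pick the delay maximizing |correlation| with the
    target, then solve a least-squares readout on the delayed states."""
    T, n = states.shape
    delays = np.empty(n, dtype=int)
    for j in range(n):
        corrs = [abs(np.corrcoef(states[max_delay - d:T - d, j],
                                 y[max_delay:])[0, 1])
                 for d in range(max_delay + 1)]
        delays[j] = int(np.argmax(corrs))
    S = np.column_stack([states[max_delay - delays[j]:T - delays[j], j]
                         for j in range(n)])
    w, *_ = np.linalg.lstsq(S, y[max_delay:], rcond=None)
    return delays, w, S @ w

# task: reproduce the input delayed by 5 steps, trivial once delays are trainable
u = rng.normal(size=400)
y = np.concatenate([np.zeros(5), u[:-5]])
states = run_reservoir(u)
delays, w, pred = delay_and_sum_fit(states, y, max_delay=10)
```

On this toy task the correlation search recovers the 5-step delay on the input channel, which is exactly the memory-capacity benefit the abstract attributes to trainable delays in the output connections.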
NASA Astrophysics Data System (ADS)
LaRue, James P.; Luzanov, Yuriy
2013-05-01
A new extension to the way in which the Bidirectional Associative Memory (BAM) algorithms are implemented is presented here. We show that by utilizing the singular value decomposition (SVD) and integrating principles of independent component analysis (ICA) into the nullspace (NS), we have created a novel approach to mitigating spurious attractors. We demonstrate this with two applications. The first application utilizes a one-layer association, while the second is modeled after the several hierarchical associations of the ventral pathways. The first application details the way in which we manage the associations in terms of matrices. The second takes what we have learned from the first example and applies it to a cascade of a convolutional neural network (CNN) and a perceptron, this being our signal-processing model of the ventral pathways, i.e., the visual system.
Characterization of chaotic attractors under noise: A recurrence network perspective
NASA Astrophysics Data System (ADS)
Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.
2016-12-01
We undertake a detailed numerical investigation to understand how the addition of white and colored noise to a chaotic time series changes the topology and the structure of the underlying attractor reconstructed from the time series. We use the methods and measures of the recurrence plot and the recurrence network generated from the time series for this analysis. We explicitly show that the addition of noise obscures the property of recurrence of trajectory points in the phase space, which is the hallmark of every dynamical system. However, the structure of the attractor is found to be robust even up to high noise levels of 50%. An advantage of recurrence network measures over conventional nonlinear measures is that they can be applied to short and non-stationary time series data. Using the results obtained from the above analysis, we go on to analyse the light curves from a dominant black hole system and show that the recurrence network measures are capable of identifying the nature of noise contamination in a time series.
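A recurrence network of the kind analyzed here can be built in a few lines: delay-embed the series, then link any two state points closer than a recurrence threshold. The embedding and threshold values below are illustrative, not the paper's settings:

```python
import numpy as np

def embed(ts, dim=3, tau=2):
    """Time-delay embedding of a scalar time series."""
    n = len(ts) - (dim - 1) * tau
    return np.column_stack([ts[i * tau:i * tau + n] for i in range(dim)])

def recurrence_network(points, eps):
    """Unweighted recurrence network: nodes are state points; an edge links
    any two points closer than the recurrence threshold eps."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = (d < eps).astype(int)
    np.fill_diagonal(A, 0)   # no self-loops
    return A

# chaotic test series from the logistic map at r = 4
ts = np.empty(200)
ts[0] = 0.3
for k in range(199):
    ts[k + 1] = 4.0 * ts[k] * (1.0 - ts[k])
A = recurrence_network(embed(ts, dim=2, tau=1), eps=0.1)
```

Adding noise to `ts` before embedding is then a direct way to reproduce the kind of robustness experiment the abstract describes, by comparing network measures of the clean and contaminated adjacency matrices.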
Maximal switchability of centralized networks
NASA Astrophysics Data System (ADS)
Vakulenko, Sergei; Morozov, Ivan; Radulescu, Ovidiu
2016-08-01
We consider continuous-time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferably connected to a large number of N_s weakly connected satellites, a property that we call n/N_s-centrality. If the hub dynamics is slow, we obtain that the large-time network dynamics is completely defined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context-dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.
Therapeutic target discovery using Boolean network attractors: improvements of kali
Guziolowski, Carito
2018-01-01
In a previous article, an algorithm for identifying therapeutic targets in Boolean networks modelling pathological mechanisms was introduced. In the present article, the improvements made on this algorithm, named kali, are described. These improvements are (i) the possibility to work on asynchronous Boolean networks, (ii) a finer assessment of therapeutic targets and (iii) the possibility to use multivalued logic. kali assumes that the attractors of a dynamical system, such as a Boolean network, are associated with the phenotypes of the modelled biological system. Given a logic-based model of pathological mechanisms, kali searches for therapeutic targets able to reduce the reachability of the attractors associated with pathological phenotypes, thus reducing their likeliness. kali is illustrated on an example network and used on a biological case study. The case study is a published logic-based model of bladder tumorigenesis from which kali returns consistent results. However, like any computational tool, kali can predict but cannot replace human expertise: it is a supporting tool for coping with the complexity of biological systems in the field of drug discovery. PMID:29515890
Suemitsu, Yoshikazu; Nara, Shigetoshi
2004-09-01
Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from its position at time t to t + 1 by a simply defined motion function calculated from the firing patterns of the neural network model at each time step t. We have embedded several prototype attractors in our neural network model that correspond to simple motions of the object oriented toward several directions in two-dimensional space. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between the embedded attractors in state space, and these dynamics enable the object to move in various directions. Switching the system parameter between a chaotic regime and an attractor regime in the state space of the neural network enables the object to reach a set target in a two-dimensional maze. Results of computer simulations show that the success rate of this method over 300 trials is higher than that of a random walk. To investigate why the proposed method gives better performance, we calculate and discuss statistical data with respect to the dynamical structure.
Overlapping community detection based on link graph using distance dynamics
NASA Astrophysics Data System (ADS)
Chen, Lei; Zhang, Jing; Cai, Li-Jun
2018-01-01
The distance dynamics model was recently proposed to detect the disjoint community of a complex network. To identify the overlapping structure of a network using the distance dynamics model, an overlapping community detection algorithm, called L-Attractor, is proposed in this paper. The process of L-Attractor mainly consists of three phases. In the first phase, L-Attractor transforms the original graph to a link graph (a new edge graph) to assure that one node has multiple distances. In the second phase, using the improved distance dynamics model, a dynamic interaction process is introduced to simulate the distance dynamics (shrink or stretch). Through the dynamic interaction process, all distances converge, and the disjoint community structure of the link graph naturally manifests itself. In the third phase, a recovery method is designed to convert the disjoint community structure of the link graph to the overlapping community structure of the original graph. Extensive experiments are conducted on the LFR benchmark networks as well as real-world networks. Based on the results, our algorithm demonstrates higher accuracy and quality than other state-of-the-art algorithms.
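The first phase, converting the original graph into a link graph so that each original node is represented by several link-nodes (and thus several distances), is the standard line-graph transform. A minimal sketch, with names of my own choosing:

```python
from itertools import combinations

def link_graph(edges):
    """Line-graph transform: every edge of the original graph becomes a
    node; two such nodes are adjacent iff the edges share an endpoint."""
    edges = [tuple(sorted(e)) for e in edges]
    adj = {e: set() for e in edges}
    for e1, e2 in combinations(edges, 2):
        if set(e1) & set(e2):
            adj[e1].add(e2)
            adj[e2].add(e1)
    return adj

# triangle {0,1,2} with a pendant node 3 attached at node 2
adj = link_graph([(0, 1), (1, 2), (0, 2), (2, 3)])
```

After the distance dynamics converge on this link graph, phase three of L-Attractor maps each link-node community back to its endpoint nodes; a node whose incident edges land in different communities then belongs to the overlap.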
Cross over of recurrence networks to random graphs and random geometric graphs
NASA Astrophysics Data System (ADS)
Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.
2017-02-01
Recurrence networks are complex networks constructed from the time series of chaotic dynamical systems where the connection between two nodes is limited by the recurrence threshold. This condition makes the topology of every recurrence network unique with the degree distribution determined by the probability density variations of the representative attractor from which it is constructed. Here we numerically investigate the properties of recurrence networks from standard low-dimensional chaotic attractors using some basic network measures and show how the recurrence networks are different from random and scale-free networks. In particular, we show that all recurrence networks can cross over to random geometric graphs by adding sufficient amount of noise to the time series and into the classical random graphs by increasing the range of interaction to the system size. We also highlight the effectiveness of a combined plot of characteristic path length and clustering coefficient in capturing the small changes in the network characteristics.
Coherence resonance in bursting neural networks
NASA Astrophysics Data System (ADS)
Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J.
2015-10-01
Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal—a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.
Chaos in a neural network circuit
NASA Astrophysics Data System (ADS)
Kepler, Thomas B.; Datt, Sumeet; Meyer, Robert B.; Abbott, L. F.
1990-12-01
We have constructed a neural network circuit of four clipped, high-gain, integrating operational amplifiers coupled to each other through an array of digitally programmable resistor ladders (MDACs). In addition to fixed-point and cyclic behavior, the circuit exhibits chaotic behavior with complex strange attractors which are approached through period doubling, intermittent attractor expansion, and/or quasiperiodic pathways. Couplings between the nonlinear circuit elements are controlled by a computer which can automatically search through the space of couplings for interesting phenomena. We report some initial statistical results relating the behavior of the network to properties of its coupling matrix. Through these results and further research, the circuit should help resolve fundamental issues concerning chaos in neural networks.
Neural attractor network for application in visual field data classification.
Fink, Wolfgang
2004-07-07
The purpose was to introduce a novel method for computer-based classification of visual field data derived from perimetric examination, that may act as a 'counsellor', providing an independent 'second opinion' to the diagnosing physician. The classification system consists of a Hopfield-type neural attractor network that obtains its input data from perimetric examination results. An iterative relaxation process determines the states of the neurons dynamically. Therefore, even 'noisy' perimetric output, e.g., early stages of a disease, may eventually be classified correctly according to the predefined idealized visual field defect (scotoma) patterns, stored as attractors of the network, that are found with diseases of the eye, optic nerve and the central nervous system. Preliminary tests of the classification system on real visual field data derived from perimetric examinations have shown a classification success of over 80%. Some of the main advantages of the Hopfield-attractor-network-based approach over feed-forward type neural networks are: (1) network architecture is defined by the classification problem; (2) no training is required to determine the neural coupling strengths; (3) assignment of an auto-diagnosis confidence level is possible by means of an overlap parameter and the Hamming distance. In conclusion, the novel method for computer-based classification of visual field data, presented here, furnishes a valuable first overview and an independent 'second opinion' in judging perimetric examination results, pointing towards a final diagnosis by a physician. It should not be considered a substitute for the diagnosing physician. Thanks to the worldwide accessibility of the Internet, the classification system offers a promising perspective towards modern computer-assisted diagnosis in both medicine and tele-medicine, for example and in particular, with respect to non-ophthalmic clinics or in communities where perimetric expertise is not readily available.
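A minimal version of such a Hopfield-type classifier, with couplings fixed by the prototype patterns (no training, as the abstract emphasizes) and overlap/Hamming-distance confidence measures, might look like the sketch below. The synchronous update rule and the toy patterns are simplifying assumptions of mine:

```python
import numpy as np

def hopfield_classify(probe, prototypes, n_iter=20):
    """Relax a +-1 probe in a Hebbian Hopfield net whose couplings are
    fixed by the prototype patterns, then report the best-matching
    prototype, an overlap confidence, and the Hamming distance to it."""
    P = np.asarray(prototypes)
    n = P.shape[1]
    W = (P.T @ P).astype(float) / n   # Hebbian couplings, no training step
    np.fill_diagonal(W, 0)
    s = np.asarray(probe, dtype=float)
    for _ in range(n_iter):           # synchronous relaxation to an attractor
        s = np.sign(W @ s)
        s[s == 0] = 1
    overlaps = P @ s / n
    k = int(np.argmax(overlaps))
    return k, float(overlaps[k]), int(np.sum(P[k] != s))

# 'noisy' probe: prototype 0 (e.g. an idealized scotoma map) with two flips
p0 = np.ones(16, dtype=int)
p1 = np.array([1, -1] * 8)
probe = p0.copy(); probe[0] = -1; probe[5] = -1
k, conf, ham = hopfield_classify(probe, [p0, p1])
```

The returned overlap plays the role of the auto-diagnosis confidence level described above: a noisy probe that still falls inside the basin of a stored defect pattern relaxes onto it exactly, giving overlap 1 and Hamming distance 0.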
Bonaiuto, James J; de Berker, Archy; Bestmann, Sven
2016-01-01
Animals and humans have a tendency to repeat recent choices, a phenomenon known as choice hysteresis. The mechanism for this choice bias remains unclear. Using an established, biophysically informed model of a competitive attractor network for decision making, we found that decaying tail activity from the previous trial caused choice hysteresis, especially during difficult trials, and accurately predicted human perceptual choices. In the model, choice variability could be directionally altered through amplification or dampening of post-trial activity decay through simulated depolarizing or hyperpolarizing network stimulation. An analogous intervention using transcranial direct current stimulation (tDCS) over left dorsolateral prefrontal cortex (dlPFC) yielded a close match between model predictions and experimental results: net soma depolarizing currents increased choice hysteresis, while hyperpolarizing currents suppressed it. Residual activity in competitive attractor networks within dlPFC may thus give rise to biases in perceptual choices, which can be directionally controlled through non-invasive brain stimulation. DOI: http://dx.doi.org/10.7554/eLife.20047.001 PMID:28005007
ERIC Educational Resources Information Center
Cree, George S.; McNorgan, Chris; McRae, Ken
2006-01-01
The authors present data from 2 feature verification experiments designed to determine whether distinctive features have a privileged status in the computation of word meaning. They use an attractor-based connectionist model of semantic memory to derive predictions for the experiments. Contrary to central predictions of the conceptual structure…
A Realtime Active Feedback Control System For Coupled Nonlinear Chemical Oscillators
NASA Astrophysics Data System (ADS)
Tompkins, Nathan; Fraden, Seth
2012-02-01
We study the manipulation and control of oscillatory networks. As a model system we use an emulsion of Belousov-Zhabotinsky (BZ) oscillators packed on a hexagonal lattice. Each drop is observed and perturbed by a Programmable Illumination Microscope (PIM). The PIM allows us to track individual BZ oscillators, calculate the phase and order parameters of every drop, and selectively perturb specific drops with photo illumination, all in realtime. To date we have determined the native attractor patterns for drops in 1D arrays and 2D hexagonal packing as a function of coupling strength as well as determined methods to move the system from one attractor basin to another. Current work involves implementing these attractor control methods with our experimental system and future work will likely include implementing a model neural network for use with photo controllable BZ emulsions.
Quasi-potential landscape in complex multi-stable systems
Zhou, Joseph Xu; Aliyu, M. D. S.; Aurell, Erik; Huang, Sui
2012-01-01
The developmental dynamics of multicellular organisms is a process that takes place in a multi-stable system in which each attractor state represents a cell type, and attractor transitions correspond to cell differentiation paths. This new understanding has revived the idea of a quasi-potential landscape, first proposed by Waddington as a metaphor. To describe development, one is interested in the ‘relative stabilities’ of N attractors (N > 2). Existing theories of state transition between local minima on some potential landscape deal with the exit part in the transition between two attractors in pair-attractor systems but do not offer the notion of a global potential function that relates more than two attractors to each other. Several ad hoc methods have been used in systems biology to compute a landscape in non-gradient systems, such as gene regulatory networks. Here we present an overview of currently available methods, discuss their limitations and propose a new decomposition of vector fields that permits the computation of a quasi-potential function that is equivalent to the Freidlin–Wentzell potential but is not limited to two attractors. Several examples of decomposition are given, and the significance of such a quasi-potential function is discussed. PMID:22933187
Kanamaru, Takashi; Fujii, Hiroshi; Aihara, Kazuyuki
2013-01-01
Corticopetal acetylcholine (ACh) is released transiently from the nucleus basalis of Meynert (NBM) into the cortical layers and is associated with top-down attention. Recent experimental data suggest that this release of ACh disinhibits layer 2/3 pyramidal neurons (PYRs) via muscarinic presynaptic effects on inhibitory synapses. Together with other possible presynaptic cholinergic effects on excitatory synapses, this may result in dynamic and temporal modifications of synapses associated with top-down attention. However, the system-level consequences and cognitive relevance of such disinhibitions are poorly understood. Herein, we propose a theoretical possibility that such transient modifications of connectivity associated with ACh release, in addition to top-down glutamatergic input, may provide a neural mechanism for the temporal reactivation of attractors as neural correlates of memories. With baseline levels of ACh, the brain returns to quasi-attractor states, exhibiting transitive dynamics between several intrinsic internal states. This suggests that top-down attention may cause the attention-induced deformations between two types of attractor landscapes: the quasi-attractor landscape (Q-landscape, present under low-ACh, non-attentional conditions) and the attractor landscape (A-landscape, present under high-ACh, top-down attentional conditions). We present a conceptual computational model based on experimental knowledge of the structure of PYRs and interneurons (INs) in cortical layers 1 and 2/3 and discuss the possible physiological implications of our results. PMID:23326520
Panda, Priyadarshini; Roy, Kaushik
2017-01-01
Synaptic plasticity, the foundation for learning and memory formation in the human brain, manifests in various forms. Here, we combine standard spike-timing-correlation-based Hebbian plasticity with a non-Hebbian synaptic decay mechanism for training a recurrent spiking neural model to generate sequences. We show that inclusion of the adaptive decay of synaptic weights with standard STDP helps learn stable contextual dependencies between temporal sequences, while reducing the strong attractor states that emerge in recurrent models due to feedback loops. Furthermore, we show that the combined learning scheme suppresses the chaotic activity in the recurrent model substantially, thereby enhancing its ability to generate sequences consistently even in the presence of perturbations. PMID:29311774
Noise-Assisted Concurrent Multipath Traffic Distribution in Ad Hoc Networks
Murata, Masayuki
2013-01-01
The concept of biologically inspired networking has been introduced to tackle unpredictable and unstable situations in computer networks, especially in wireless ad hoc networks where network conditions are continuously changing, resulting in the need of robustness and adaptability of control methods. Unfortunately, existing methods often rely heavily on the detailed knowledge of each network component and the preconfigured, that is, fine-tuned, parameters. In this paper, we utilize a new concept, called attractor perturbation (AP), which enables controlling the network performance using only end-to-end information. Based on AP, we propose a concurrent multipath traffic distribution method, which aims at lowering the average end-to-end delay by only adjusting the transmission rate on each path. We demonstrate through simulations that, by utilizing the attractor perturbation relationship, the proposed method achieves a lower average end-to-end delay compared to other methods which do not take fluctuations into account. PMID:24319375
Structures and Boolean Dynamics in Gene Regulatory Networks
NASA Astrophysics Data System (ADS)
Szedlak, Anthony
This dissertation discusses the topological and dynamical properties of GRNs in cancer, and is divided into four main chapters. First, the basic tools of modern complex network theory are introduced. These traditional tools as well as those developed by myself (set efficiency, interset efficiency, and nested communities) are crucial for understanding the intricate topological properties of GRNs, and later chapters recall these concepts. Second, the biology of gene regulation is discussed, and a method for disease-specific GRN reconstruction developed by our collaboration is presented. This complements the traditional exhaustive experimental approach of building GRNs edge-by-edge by quickly inferring the existence of as of yet undiscovered edges using correlations across sets of gene expression data. This method also provides insight into the distribution of common mutations across GRNs. Third, I demonstrate that the structures present in these reconstructed networks are strongly related to the evolutionary histories of their constituent genes. Investigation of how the forces of evolution shaped the topology of GRNs in multicellular organisms by growing outward from a core of ancient, conserved genes can shed light upon the "reverse evolution" of normal cells into unicellular-like cancer states. Next, I simulate the dynamics of the GRNs of cancer cells using the Hopfield model, an infinite-range spin-glass model designed with the ability to encode Boolean data as attractor states. This attractor-driven approach facilitates the integration of gene expression data into predictive mathematical models. Perturbations representing therapeutic interventions are applied to sets of genes, and the resulting deviations from their attractor states are recorded, suggesting new potential drug targets for experimentation.
Finally, I extend the Hopfield model to modular networks, cyclic attractors, and complex attractors, and apply these concepts to simulations of the cell cycle process. Further development of these and other theoretical and computational tools is necessary to analyze the deluge of experimental data produced by modern and future biological high-throughput methods. (Abstract shortened by ProQuest.)
ASP-G: an ASP-based method for finding attractors in genetic regulatory networks
Mushthofa, Mushthofa; Torres, Gustavo; Van de Peer, Yves; Marchal, Kathleen; De Cock, Martine
2014-01-01
Motivation: Boolean network models are suitable to simulate GRNs in the absence of detailed kinetic information. However, reducing the biological reality implies making assumptions on how genes interact (interaction rules) and how their state is updated during the simulation (update scheme). The exact choice of the assumptions largely determines the outcome of the simulations. In most cases, however, the biologically correct assumptions are unknown. An ideal simulation thus implies testing different rules and schemes to determine those that best capture an observed biological phenomenon. This is not trivial because most current methods to simulate Boolean network models of GRNs and to compute their attractors impose specific assumptions that cannot be easily altered, as they are built into the system. Results: To allow for a more flexible simulation framework, we developed ASP-G. We show the correctness of ASP-G in simulating Boolean network models and obtaining attractors under different assumptions by successfully recapitulating the detection of attractors of previously published studies. We also provide an example of how performing simulation of network models under different settings help determine the assumptions under which a certain conclusion holds. The main added value of ASP-G is in its modularity and declarativity, making it more flexible and less error-prone than traditional approaches. The declarative nature of ASP-G comes at the expense of being slower than the more dedicated systems but still achieves a good efficiency with respect to computational time. Availability and implementation: The source code of ASP-G is available at http://bioinformatics.intec.ugent.be/kmarchal/Supplementary_Information_Musthofa_2014/asp-g.zip. Contact: Kathleen.Marchal@UGent.be or Martine.DeCock@UGent.be Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028722
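For small networks, the attractor computation that ASP-G makes flexible can be emulated by exhaustively walking the state space under one fixed choice of update scheme. This brute-force sketch (a synchronous-update stand-in, not ASP-G's ASP-based method) enumerates fixed points and cycles:

```python
from itertools import product

def attractors(update_fns):
    """Enumerate the attractors (fixed points and cycles) of a synchronous
    Boolean network; update_fns[i] maps the full state tuple to bit i."""
    step = lambda s: tuple(f(s) for f in update_fns)
    found = set()
    for s0 in product((0, 1), repeat=len(update_fns)):
        trail, s = {}, s0
        while s not in trail:          # walk until a state repeats
            trail[s] = len(trail)
            s = step(s)
        cycle = sorted(trail, key=trail.get)[trail[s]:]
        found.add(tuple(sorted(cycle)))   # canonical form of the attractor
    return found

# two mutually copying genes: two fixed points and one 2-cycle
atts = attractors([lambda s: s[1], lambda s: s[0]])
```

Swapping the synchronous `step` for an asynchronous one changes the attractor set (the 2-cycle above disappears under asynchronous updates), which is exactly the sensitivity to update-scheme assumptions that motivates ASP-G's modular design.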
Dimensionality and entropy of spontaneous and evoked rate activity
NASA Astrophysics Data System (ADS)
Engelken, Rainer; Wolf, Fred
Cortical circuits exhibit complex activity patterns both spontaneously and evoked by external stimuli. Finding low-dimensional structure in population activity is a challenge. What is the diversity of the collective neural activity and how is it affected by an external stimulus? Using concepts from ergodic theory, we calculate the attractor dimensionality and dynamical entropy production of these networks. We obtain these two canonical measures of the collective network dynamics from the full set of Lyapunov exponents. We consider a randomly-wired firing-rate network that exhibits chaotic rate fluctuations for sufficiently strong synaptic weights. We show that dynamical entropy scales logarithmically with synaptic coupling strength, while the attractor dimensionality saturates. Thus, despite the increasing uncertainty, the diversity of collective activity saturates for strong coupling. We find that a time-varying external stimulus drastically reduces both entropy and dimensionality. Finally, we analytically approximate the full Lyapunov spectrum in several limiting cases by random matrix theory. Our study opens a novel avenue to characterize the complex dynamics of rate networks and the geometric structure of the corresponding high-dimensional chaotic attractor. This work received funding from the Evangelisches Studienwerk Villigst, the DFG through CRC 889, and the Volkswagen Foundation.
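The two measures derived here from the Lyapunov spectrum have standard definitions: the Kaplan-Yorke formula estimates attractor dimensionality, and the sum of positive exponents bounds the dynamical (Kolmogorov-Sinai) entropy production. A minimal sketch of both:

```python
def kaplan_yorke_dimension(spectrum):
    """Kaplan-Yorke (Lyapunov) dimension from a Lyapunov spectrum:
    D = k + (sum of first k exponents) / |lambda_{k+1}|, where k is the
    largest index for which the partial sum is still non-negative."""
    spectrum = sorted(spectrum, reverse=True)
    total = 0.0
    for i, lam in enumerate(spectrum):
        if total + lam < 0.0:
            return i + total / abs(lam)
        total += lam
    return float(len(spectrum))  # partial sums never go negative

def entropy_rate(spectrum):
    """Pesin-style upper bound on KS entropy: sum of positive exponents."""
    return sum(lam for lam in spectrum if lam > 0.0)

spec = [0.5, 0.0, -1.0]  # a Lorenz-like (+, 0, -) signature
# kaplan_yorke_dimension(spec) == 2.5, entropy_rate(spec) == 0.5
```

The abstract's observation then reads naturally in these terms: stronger coupling pushes up the positive exponents (entropy grows) while the index at which the partial sum turns negative stops moving (dimension saturates).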
A source-attractor approach to network detection of radiation sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qishi; Barry, M. L.; Grieme, M.
Radiation source detection using a network of detectors is an active field of research for homeland security and defense applications. We propose the Source-attractor Radiation Detection (SRD) method to aggregate measurements from a network of detectors for radiation source detection. The SRD method models a potential radiation source as a magnet-like attractor that pulls in pre-computed virtual points from the detector locations. A detection decision is made if a sufficient level of attraction, quantified by the increase in the clustering of the shifted virtual points, is observed. Compared with traditional methods, SRD has the following advantages: i) it does not require an accurate estimate of the source location from limited and noise-corrupted sensor readings, unlike localization-based methods, and ii) its virtual point shifting and clustering calculation involve simple arithmetic operations based on the number of detectors, avoiding the high computational complexity of grid-based likelihood estimation methods. We evaluate its detection performance using canonical datasets from the Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) tests. SRD achieves both a lower false alarm rate and a lower false negative rate compared to three existing algorithms for network source detection.
Irregular synchronous activity in stochastically-coupled networks of integrate-and-fire neurons.
Lin, J K; Pawelzik, K; Ernst, U; Sejnowski, T J
1998-08-01
We investigate the spatial and temporal aspects of firing patterns in a network of integrate-and-fire neurons arranged in a one-dimensional ring topology. The coupling is stochastic and shaped like a Mexican hat with local excitation and lateral inhibition. With perfect precision in the couplings, the attractors of activity in the network occur at every position in the ring. Inhomogeneities in the coupling break the translational invariance of localized attractors and lead to synchronization within highly active as well as weakly active clusters. The interspike interval variability is high, consistent with recent observations of spike time distributions in visual cortex. The robustness of our results is demonstrated with more realistic simulations on a network of McGregor neurons which model conductance changes and after-hyperpolarization potassium currents.
Ring Attractor Dynamics Emerge from a Spiking Model of the Entire Protocerebral Bridge.
Kakaria, Kyobi S; de Bivort, Benjamin L
2017-01-01
Animal navigation is accomplished by a combination of landmark-following and dead reckoning based on estimates of self-motion. Both of these approaches require the encoding of heading information, which can be represented as an allocentric or egocentric azimuthal angle. Recently, Ca2+ correlates of landmark position and heading direction, in egocentric coordinates, were observed in the ellipsoid body (EB), a ring-shaped processing unit in the fly central complex (CX; Seelig and Jayaraman, 2015). These correlates displayed key dynamics of so-called ring attractors, namely: (1) responsiveness to the position of external stimuli; (2) persistence in the absence of external stimuli; (3) locking onto a single external stimulus when presented with two competitors; (4) stochastically switching between competitors with low probability; and (5) sliding or jumping between positions when an external stimulus moves. We hypothesized that ring attractor-like activity in the EB arises from reciprocal neuronal connections to a related structure, the protocerebral bridge (PB). Using recent light-microscopy resolution catalogs of neuronal cell types in the PB (Lin et al., 2013; Wolff et al., 2015), we determined a connectivity matrix for the PB-EB circuit. When activity in this network was simulated using a leaky-integrate-and-fire model, we observed patterns of activity that closely resemble the reported Ca2+ phenomena. All qualitative ring attractor behaviors were recapitulated in our model, allowing us to predict failure modes of the putative PB-EB ring attractor and the circuit dynamics phenotypes of thermogenetic or optogenetic manipulations. Ring attractor dynamics emerged under a wide variety of parameter configurations, even including non-spiking leaky-integrator implementations.
This suggests that the ring-attractor computation is a robust output of this circuit, apparently arising from its high-level network properties (topological configuration, local excitation and long-range inhibition) rather than fine-scale biological detail.
NASA Astrophysics Data System (ADS)
Ezhilarasu, P. Megavarna; Inbavalli, M.; Murali, K.; Thamilmaran, K.
2018-07-01
In this paper, we report the dynamical transitions to strange non-chaotic attractors in a quasiperiodically forced state-controlled cellular neural network (SC-CNN)-based MLC circuit via two different mechanisms, namely the Heagy-Hammel route and the gradual fractalisation route. These transitions were observed through numerical simulations and hardware experiments and confirmed using statistical tools, such as the maximal Lyapunov exponent spectrum and its variance, and singular continuous spectral analysis. We find a remarkable agreement between the results from numerical simulations and those from hardware experiments.
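The SC-CNN circuit itself is not reproduced here, but the classic Grebogi-Ott-Pelikan-Yorke (GOPY) map is the standard minimal example of a strange non-chaotic attractor under quasiperiodic forcing, and its Lyapunov exponent can be estimated in the same spirit as the statistical confirmation above (a sketch, with illustrative parameters):

```python
import math

def gopy_lyapunov(sigma, n=100_000):
    """Lyapunov exponent along x for the GOPY map
    x' = 2*sigma*tanh(x)*cos(2*pi*theta),  theta' = theta + omega (mod 1),
    with omega the inverse golden mean (quasiperiodic drive)."""
    omega = (math.sqrt(5.0) - 1.0) / 2.0
    x, theta, acc = 1.0, 0.1, 0.0
    for _ in range(n):
        # d x' / d x = 2*sigma*cos(2*pi*theta) * sech(x)**2
        deriv = 2.0 * sigma * math.cos(2.0 * math.pi * theta) / math.cosh(x) ** 2
        acc += math.log(abs(deriv) + 1e-300)   # guard against log(0)
        x = 2.0 * sigma * math.tanh(x) * math.cos(2.0 * math.pi * theta)
        theta = (theta + omega) % 1.0
    return acc / n

# For sigma > 1 the GOPY attractor is strange (fractal geometry) but
# non-chaotic: the Lyapunov exponent stays negative.
lam = gopy_lyapunov(1.5)
```

A negative maximal exponent despite fractal geometry is precisely the signature that distinguishes a strange non-chaotic attractor from a chaotic one.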
Neural learning of constrained nonlinear transformations
NASA Technical Reports Server (NTRS)
Barhen, Jacob; Gulati, Sandeep; Zak, Michail
1989-01-01
Two issues that are fundamental to developing autonomous intelligent robots, namely, rudimentary learning capability and dexterous manipulation, are examined. A powerful neural learning formalism is introduced for addressing a large class of nonlinear mapping problems, including redundant manipulator inverse kinematics, commonly encountered during the design of real-time adaptive control mechanisms. Artificial neural networks with terminal attractor dynamics are used. The rapid network convergence resulting from the infinite local stability of these attractors allows the development of fast neural learning algorithms. Approaches to manipulator inverse kinematics are reviewed, the neurodynamics model is discussed, and the neural learning algorithm is presented.
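The "infinite local stability" invoked here comes from the defining property of a terminal attractor: the right-hand side violates the Lipschitz condition at the fixed point, so trajectories arrive in finite time instead of converging asymptotically. The canonical one-dimensional example, dx/dt = -k x^(1/3), makes this concrete:

```python
def arrival_time(x0, k=1.0):
    """Analytic time for dx/dt = -k * x**(1/3) (terminal attractor at 0)
    to reach the fixed point exactly: t* = 3 * x0**(2/3) / (2*k).
    An ordinary Lipschitz attractor such as dx/dt = -k*x is approached
    only asymptotically, never in finite time."""
    return 3.0 * x0 ** (2.0 / 3.0) / (2.0 * k)

def simulate(x0, k=1.0, dt=1e-4, t_max=10.0):
    """Euler integration; the clamp at 0 is reached in finite time."""
    x, t = x0, 0.0
    while x > 0.0 and t < t_max:
        x = max(0.0, x - dt * k * x ** (1.0 / 3.0))
        t += dt
    return t

# arrival_time(1.0) == 1.5; simulate(1.0) lands close to that value.
```

Integrating the separable equation gives x(t)^(2/3) = x0^(2/3) - (2/3) k t, which hits zero at the finite time returned by `arrival_time`; this finite-time convergence is what makes the learning algorithms in the abstract fast.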
A fast community detection method in bipartite networks by distance dynamics
NASA Astrophysics Data System (ADS)
Sun, Hong-liang; Ch'ng, Eugene; Yong, Xi; Garibaldi, Jonathan M.; See, Simon; Chen, Duan-bing
2018-04-01
Many real bipartite networks are found to be divided into two-mode communities. In this paper, we formulate a new two-mode community detection algorithm, BiAttractor. It is based on the distance dynamics model Attractor proposed by Shao et al., extended from unipartite to bipartite networks. Since the Jaccard coefficient used by the distance dynamics model cannot measure distances between vertices of different types in bipartite networks, our main contribution is to extend the model to bipartite networks using a novel measure, the Local Jaccard Distance (LJD). Furthermore, in the original method, distances between vertices of different types are not affected by common neighbors. This new idea makes clear assumptions and yields interpretable results in linear time complexity O(|E|) in sparse networks, where |E| is the number of edges. Experiments on synthetic networks demonstrate that it overcomes the resolution limit suffered by other existing methods. Further research on real networks shows that this model can accurately detect interpretable community structures in a short time.
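The paper's Local Jaccard Distance is not reproduced here; the sketch below only demonstrates the limitation it addresses: with the plain Jaccard coefficient, vertices of different types in a bipartite graph never share neighbors, so their distance is always maximal and carries no information.

```python
def jaccard_distance(adj, u, v):
    """Plain Jaccard distance between neighbor sets: 1 - |A & B| / |A | B|."""
    union = adj[u] | adj[v]
    if not union:
        return 1.0
    return 1.0 - len(adj[u] & adj[v]) / len(union)

# Tiny bipartite graph: "user" vertices a, b linked to "item" vertices 1-3.
adj = {
    "a": {1, 2}, "b": {2, 3},
    1: {"a"}, 2: {"a", "b"}, 3: {"b"},
}

same_type = jaccard_distance(adj, "a", "b")  # 1 - 1/3: informative
cross_type = jaccard_distance(adj, "a", 2)   # always 1.0: uninformative
```

Any bipartite extension therefore has to compare vertices of different types through indirect (two-hop) neighborhoods, which is the role the LJD plays in BiAttractor.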
Gönner, Lorenz; Vitay, Julien; Hamker, Fred H.
2017-01-01
Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions. PMID:29075187
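The Bayesian decoding step can be sketched with the standard memoryless Poisson decoder commonly applied to place-cell spike trains (the paper's exact decoder may differ; the tuning curves, rates, and window below are illustrative):

```python
import numpy as np

def decode_position(counts, tuning, window):
    """Memoryless Bayesian (MAP) decoder for Poisson place cells with a
    flat prior: log P(x | n) ~ sum_i n_i*log f_i(x) - window*sum_i f_i(x).
    `tuning` holds the rates f_i(x), shape (cells, positions)."""
    log_post = counts @ np.log(tuning) - window * tuning.sum(axis=0)
    return int(np.argmax(log_post))

rng = np.random.default_rng(0)
positions = np.linspace(0.0, 1.0, 100)
centers = np.linspace(0.0, 1.0, 30)[:, None]            # 30 place cells
tuning = 0.5 + 20.0 * np.exp(-(positions - centers) ** 2 / (2 * 0.05 ** 2))

true_idx = 60
counts = rng.poisson(0.5 * tuning[:, true_idx])         # spikes in a 0.5 s bin
decoded = decode_position(counts, tuning, window=0.5)   # near true_idx
```

Sliding such a decoder over short time bins turns the model's spike trains into decoded trajectories, which is what permits the direct comparison with experimentally decoded sequences.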
NASA Astrophysics Data System (ADS)
Hellen, Edward H.; Volkov, Evgeny
2018-09-01
We study the dynamical regimes demonstrated by a pair of identical 3-element ring oscillators (reduced version of synthetic 3-gene genetic Repressilator) coupled using the design of the 'quorum sensing (QS)' process natural for interbacterial communications. In this work QS is implemented as an additional network incorporating elements of the ring as both the source and the activation target of the fast diffusion QS signal. This version of indirect nonlinear coupling, in cooperation with the reasonable extension of the parameters which control properties of the isolated oscillators, exhibits the formation of a very rich array of attractors. Using a parameter-space defined by the individual oscillator amplitude and the coupling strength, we found the extended area of parameter-space where the identical oscillators demonstrate quasiperiodicity, which evolves to chaos via the period doubling of either resonant limit cycles or complex antiphase symmetric limit cycles with five winding numbers. The symmetric chaos extends over large parameter areas up to its loss of stability, followed by a system transition to an unexpected mode: an asymmetric limit cycle with a winding number of 1:2. In turn, after long evolution across the parameter-space, this cycle demonstrates a period doubling cascade which restores the symmetry of dynamics by formation of symmetric chaos, which nevertheless preserves the memory of the asymmetric limit cycles in the form of stochastic alternating "polarization" of the time series. All stable attractors coexist with some others, forming remarkable and complex multistability including the coexistence of torus and limit cycles, chaos and regular attractors, symmetric and asymmetric regimes. We traced the paths and bifurcations leading to all areas of chaos, and presented a detailed map of all transformations of the dynamics.
Attractors of equations of non-Newtonian fluid dynamics
NASA Astrophysics Data System (ADS)
Zvyagin, V. G.; Kondrat'ev, S. K.
2014-10-01
This survey describes a version of the trajectory-attractor method, which is applied to study the limit asymptotic behaviour of solutions of equations of non-Newtonian fluid dynamics. The trajectory-attractor method emerged in papers of the Russian mathematicians Vishik and Chepyzhov and the American mathematician Sell under the condition that the corresponding trajectory spaces be invariant under the translation semigroup. The need for such an approach was caused by the fact that for many equations of mathematical physics for which the Cauchy initial-value problem has a global (weak) solution with respect to the time, the uniqueness of such a solution has either not been established or does not hold. In particular, this is the case for equations of fluid dynamics. At the same time, trajectory spaces invariant under the translation semigroup could not be constructed for many equations of non-Newtonian fluid dynamics. In this connection, a different approach to the construction of trajectory attractors for dissipative systems was proposed in papers of Zvyagin and Vorotnikov without using invariance of trajectory spaces under the translation semigroup and is based on the topological lemma of Shura-Bura. This paper presents examples of equations of non-Newtonian fluid dynamics (the Jeffreys system describing movement of the Earth's crust, the model of motion of weak aqueous solutions of polymers, a system with memory) for which the aforementioned construction is used to prove the existence of attractors in both the autonomous and the non-autonomous cases. At the beginning of the paper there is also a brief exposition of the results of Ladyzhenskaya on the existence of attractors of the two-dimensional Navier-Stokes system and the result of Vishik and Chepyzhov for the case of attractors of the three-dimensional Navier-Stokes system. Bibliography: 34 titles.
Associative memory of phase-coded spatiotemporal patterns in leaky Integrate and Fire networks.
Scarpetta, Silvia; Giacco, Ferdinando
2013-04-01
We study the collective dynamics of a Leaky Integrate and Fire network in which precise relative phase relationship of spikes among neurons are stored, as attractors of the dynamics, and selectively replayed at different time scales. Using an STDP-based learning process, we store in the connectivity several phase-coded spike patterns, and we find that, depending on the excitability of the network, different working regimes are possible, with transient or persistent replay activity induced by a brief signal. We introduce an order parameter to evaluate the similarity between stored and recalled phase-coded pattern, and measure the storage capacity. Modulation of spiking thresholds during replay changes the frequency of the collective oscillation or the number of spikes per cycle, keeping preserved the phases relationship. This allows a coding scheme in which phase, rate and frequency are dissociable. Robustness with respect to noise and heterogeneity of neurons parameters is studied, showing that, since dynamics is a retrieval process, neurons preserve stable precise phase relationship among units, keeping a unique frequency of oscillation, even in noisy conditions and with heterogeneity of internal parameters of the units.
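An order parameter of the kind described, measuring the similarity between stored and recalled phase-coded patterns, can be written as a Kuramoto-style overlap; the paper's exact definition may differ in detail:

```python
import cmath
import math
import random

def phase_overlap(stored, recalled):
    """|mean of exp(i*(recalled_j - stored_j))|: equals 1 when the recall
    reproduces every stored relative phase (up to one global shift), and
    is near 0 for unrelated phases."""
    z = sum(cmath.exp(1j * (r - s)) for s, r in zip(stored, recalled))
    return abs(z) / len(stored)

random.seed(1)
stored = [random.uniform(0.0, 2.0 * math.pi) for _ in range(500)]
shifted = [s + 0.7 for s in stored]    # same relative phases, global shift
noise = [random.uniform(0.0, 2.0 * math.pi) for _ in range(500)]
```

The global-shift invariance matters for the abstract's point: modulating spiking thresholds changes the collective oscillation frequency (a common shift of all phases) while this overlap, which depends only on relative phases, stays at its maximum.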
Super-linear Precision in Simple Neural Population Codes
NASA Astrophysics Data System (ADS)
Schwab, David; Fiete, Ila
2015-03-01
A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher Information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arises because it gives, through the Cramer-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well-known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly when considering the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
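For the textbook case of Gaussian tuning curves with i.i.d. additive Gaussian noise, the population Fisher information is J(s) = sum_i f_i'(s)^2 / sigma^2, and a uniform tiling of tuning-curve centers makes it scale linearly with the number of neurons. A sketch under those standard assumptions (not the paper's optimized codes):

```python
import numpy as np

def population_fisher_info(s, centers, width, gain=1.0, sigma=1.0):
    """J(s) = sum_i f_i'(s)**2 / sigma**2 for Gaussian tuning curves
    f_i(s) = gain * exp(-(s - c_i)**2 / (2 * width**2)) observed under
    i.i.d. additive Gaussian noise of standard deviation sigma."""
    f = gain * np.exp(-(s - centers) ** 2 / (2.0 * width ** 2))
    f_prime = -(s - centers) / width ** 2 * f
    return float(np.sum(f_prime ** 2)) / sigma ** 2

# Doubling the density of a uniform tiling doubles the Fisher information.
J_n = population_fisher_info(0.0, np.linspace(-10, 10, 201), width=1.0)
J_2n = population_fisher_info(0.0, np.linspace(-10, 10, 401), width=1.0)
```

The Cramer-Rao bound then caps the mean-squared error at 1/J(s) for unbiased estimators; the abstract's point is that when firing is sparse this bound can be loose, so optimizing J directly can mislead.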
AHaH computing-from metastable switches to attractors to machine learning.
Nugent, Michael Alexander; Molter, Timothy Wesley
2014-01-01
Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures-all key capabilities of biological nervous systems and modern machine learning algorithms with real world application.
Precision and reliability of periodically and quasiperiodically driven integrate-and-fire neurons.
Tiesinga, P H E
2002-04-01
Neurons in the brain communicate via trains of all-or-none electric events known as spikes. How the brain encodes information using spikes, the neural code, remains elusive. Here the robustness against noise of stimulus-induced neural spike trains is studied in terms of attractors and bifurcations. The dynamics of model neurons converges after a transient onto an attractor, yielding a reproducible sequence of spike times. At a bifurcation point the spike times on the attractor change discontinuously when a parameter is varied. Reliability, the stability of the attractor against noise, is reduced when the neuron operates close to a bifurcation point. Using analytical spike-time maps, we determined the attractor and bifurcation structure of an integrate-and-fire model neuron driven by a periodic or a quasiperiodic piecewise constant current, and investigated the stability of attractors against noise. The integrate-and-fire model neuron became mode locked to the periodic current with a rational winding number p/q and produced p spikes per q cycles. There were q attractors. The p:q mode-locking regions formed Arnold tongues. In the model, reliability was highest during 1:1 mode locking, when there was only one attractor, as was also observed in recent experiments. The quasiperiodically driven neuron mode locked to either one of the two drive periods, or to a linear combination of both of them. Mode-locking regions were organized in Arnold tongues and reliability was again highest when there was only one attractor. These results show that neuronal reliability in response to the rhythmic drive generated by synchronized networks of neurons is profoundly influenced by the location of the Arnold tongues in parameter space.
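What a p:q winding number means can be illustrated with an even simpler, non-leaky integrate-and-fire neuron, for which the spikes per drive cycle can be computed exactly; the leaky model of the paper has a far richer Arnold-tongue structure, so this is a sketch of the concept only:

```python
def spikes_per_cycle(charge_per_cycle, n_cycles):
    """Perfect (non-leaky) integrate-and-fire neuron with threshold 1 and
    subtractive reset, driven by a periodic current that injects
    `charge_per_cycle` units of charge each drive cycle. Returns the
    winding number p/q = average spikes per cycle over `n_cycles`."""
    v, spikes = 0.0, 0
    for _ in range(n_cycles):
        v += charge_per_cycle
        while v >= 1.0:          # threshold crossings within this cycle
            v -= 1.0             # subtractive reset keeps the residual
            spikes += 1
    return spikes / n_cycles

# A drive injecting 1.25 units per cycle locks at p:q = 5:4; the spike
# pattern 1, 1, 1, 2 repeats exactly every four cycles.
w = spikes_per_cycle(1.25, 400)  # -> 1.25
```

In the leaky model each p:q locking additionally occupies a finite region of drive amplitude and period (an Arnold tongue), and the q distinct attractors correspond to the q cyclic shifts of the spike pattern.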
Boyatzis, Richard E.; Rochford, Kylie; Taylor, Scott N.
2015-01-01
Personal and shared vision have a long history in management and organizational practices yet only recently have we begun to build a systematic body of empirical knowledge about the role of personal and shared vision in organizations. As the introductory paper for this special topic in Frontiers in Psychology, we present a theoretical argument as to the existence and critical role of two states in which a person, dyad, team, or organization may find themselves when engaging in the creation of a personal or shared vision: the positive emotional attractor (PEA) and the negative emotional attractor (NEA). These two primary states are strange attractors, each characterized by three dimensions: (1) positive versus negative emotional arousal; (2) endocrine arousal of the parasympathetic nervous system versus sympathetic nervous system; and (3) neurological activation of the default mode network versus the task positive network. We argue that arousing the PEA is critical when creating or affirming a personal vision (i.e., sense of one’s purpose and ideal self). We begin our paper by reviewing the underpinnings of our PEA–NEA theory, briefly review each of the papers in this special issue, and conclude by discussing the practical implications of the theory. PMID:26052300
DEEP ATTRACTOR NETWORK FOR SINGLE-MICROPHONE SPEAKER SEPARATION.
Chen, Zhuo; Luo, Yi; Mesgarani, Nima
2017-03-01
Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and the unknown number of sources in the mixture. We propose a novel deep learning framework for single-channel speech separation by creating attractor points in the high-dimensional embedding space of the acoustic signals, which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model differs from prior works in that it implements end-to-end training and does not depend on the number of sources in the mixture. Two strategies are explored at test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real time. We evaluated our system on the Wall Street Journal dataset and show a 5.49% improvement over the previous state-of-the-art methods.
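The centroid-and-similarity step described above can be sketched in a few lines of linear algebra; the toy embedding below stands in for the output of the trained network, and the oracle assignments play the role of training-time ideal masks (all names and parameters are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, emb_dim, n_src = 1000, 20, 2

# Toy stand-in for a learned embedding: time-frequency bins of each source
# cluster around a source-specific direction in embedding space.
ideal = rng.standard_normal((n_src, emb_dim))
assign = rng.integers(0, n_src, size=n_bins)           # oracle bin labels
V = ideal[assign] + 0.3 * rng.standard_normal((n_bins, emb_dim))
Y = np.eye(n_src)[assign]                              # one-hot (bins, srcs)

# Attractor of each source = centroid of that source's bins.
attractors = (Y.T @ V) / Y.sum(axis=0)[:, None]        # (srcs, emb_dim)

# Soft separation mask for each bin from its similarity to each attractor.
logits = V @ attractors.T                              # (bins, srcs)
logits -= logits.max(axis=1, keepdims=True)            # stable softmax
mask = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

accuracy = float(np.mean(mask.argmax(axis=1) == assign))
```

At test time the oracle labels are unavailable, which is where the two strategies in the abstract come in: estimate centroids by K-means over the embeddings, or reuse fixed attractor points learned during training.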
Calibration of the head direction network: a role for symmetric angular head velocity cells.
Stratton, Peter; Wyeth, Gordon; Wiles, Janet
2010-06-01
Continuous attractor networks require calibration. Computational models of the head direction (HD) system of the rat usually assume that the connections that maintain HD neuron activity are pre-wired and static. Ongoing activity in these models relies on precise continuous attractor dynamics. It is currently unknown how such connections could be so precisely wired, and how accurate calibration is maintained in the face of ongoing noise and perturbation. Our adaptive attractor model of the HD system that uses symmetric angular head velocity (AHV) cells as a training signal shows that the HD system can learn to support stable firing patterns from poorly-performing, unstable starting conditions. The proposed calibration mechanism suggests a requirement for symmetric AHV cells, the existence of which has previously been unexplained, and predicts that symmetric and asymmetric AHV cells should be distinctly different (in morphology, synaptic targets and/or methods of action on postsynaptic HD cells) due to their distinctly different functions.
NASA Astrophysics Data System (ADS)
Zhou, Ling; Wang, Chunhua; Zhang, Xin; Yao, Wei
By replacing the resistor in a Twin-T network with a generalized flux-controlled memristor, this paper proposes a simple fourth-order memristive Twin-T oscillator. Rich dynamical behaviors can be observed in the dynamical system. The most striking feature is that this system has various periodic orbits and various chaotic attractors generated by adjusting parameter b. At the same time, coexisting attractors and antimonotonicity are also detected (especially, two full Feigenbaum remerging trees in series are observed in such autonomous chaotic systems). Their dynamical features are analyzed by phase portraits, Lyapunov exponents, bifurcation diagrams and basin of attraction. Moreover, hardware experiments on a breadboard are carried out. Experimental measurements are in accordance with the simulation results. Finally, a multi-channel random bit generator is designed for encryption applications. Numerical results illustrate the usefulness of the random bit generator.
Reducing the computational footprint for real-time BCPNN learning
Vogginger, Bernhard; Schüffny, René; Lansner, Anders; Cederström, Love; Partzsch, Johannes; Höppner, Sebastian
2015-01-01
The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally-expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule the pre-, postsynaptic and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed step size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved by first rewriting the model which reduces the number of basic arithmetic operations per update to one half, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed step size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high performance computing. More important, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware. PMID:25657618
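The event-driven speedup rests on the fact that between spikes each low-pass stage decays exponentially, so a fixed-step Euler loop can be replaced by one closed-form multiply per event. A single-stage sketch (the BCPNN rule cascades three such stages and additionally tabulates the exponential in a look-up table):

```python
import math

def euler_decay(z0, tau, t, dt):
    """Fixed-step explicit Euler for the leak dz/dt = -z/tau."""
    z = z0
    for _ in range(int(round(t / dt))):
        z -= dt * z / tau
    return z

def event_driven_decay(z0, tau, t):
    """Closed-form solution z0*exp(-t/tau): exact in a single update,
    however long the gap t between successive events."""
    return z0 * math.exp(-t / tau)

tau, gap = 0.05, 0.2                  # 50 ms trace, 200 ms between spikes
z_exact = event_driven_decay(1.0, tau, gap)      # exp(-4), about 0.0183
z_euler = euler_decay(1.0, tau, gap, dt=0.001)   # 200 steps, a few % low
```

Beyond being exact, the closed form costs one multiply per event instead of one update per time step, which is the source of the order-of-magnitude speedup the abstract reports.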
NASA Astrophysics Data System (ADS)
Huang, Sui
Transitions between high-dimensional attractor states in the quasi-potential landscape of the gene regulatory network, induced by environmental perturbations and/or facilitated by mutational rewiring of the network, underlie cell phenotype switching in development as well as in cancer progression, including acquisition of drug-resistant phenotypes. Considering heterogeneous cell populations as statistical ensembles of cells, single-cell-resolution gene expression profiling of cell populations undergoing a cell phenotype shift now allows us to map the topography of the landscape and its distortion. From snapshots of single-cell expression patterns of a cell population measured during major transitions, we compute a quantity that identifies symmetry-breaking destabilization of attractors (bifurcation) and concomitant dimension reduction of the state-space manifold (landscape distortion), which precede critical transitions to new attractor states. The model predicts, and we show experimentally, the almost inevitable generation of aberrant cells associated with such critical transitions in multi-attractor landscapes: therapeutic perturbations that seek to push cancer cells to the apoptotic state almost always produce "rebellious" cells which move in the "opposite direction": instead of dying, they become more stem-cell-like and malignant. We show experimentally that the inadvertent generation of more malignant cancer cells by therapy indeed results from the transition of surviving (but stressed) cells into unforeseen attractor states, and not simply from selection of inherently more resistant cells. Thus, cancer cells follow not so much Darwin, as generally thought (survival of the fittest), but rather Nietzsche (what does not kill me makes me stronger). Supported by NIH (NCI, NIGMS) and Alberta Innovates.
Criticality in conserved dynamical systems: experimental observation vs. exact properties.
Marković, Dimitrije; Gros, Claudius; Schuelein, André
2013-03-01
Conserved dynamical systems are generally considered to be critical. We study a class of critical routing models, equivalent to random maps, which can be solved rigorously in the thermodynamic limit. The information flow is conserved for these routing models and governed by cyclic attractors. We consider two classes of information flow: Markovian routing without memory and vertex routing involving a one-step routing memory. Investigating the respective cycle-length distributions for complete graphs, we find log corrections to power-law scaling for the mean cycle length, as a function of the number of vertices, and sub-polynomial growth for the overall number of cycles. When observing a real-world dynamical system experimentally, one normally samples its phase space stochastically. The number and length of the attractors are then weighted by the size of their respective basins of attraction. For theoretical studies, this situation is equivalent to "on the fly" generation of the dynamical transition probabilities. In this case we find power-law scaling for the weighted average attractor length, for both conserved routing models. These results show that critical dynamical systems are generically not scale-invariant but may show power-law scaling when sampled stochastically. It is hence important to distinguish between the intrinsic properties of a critical dynamical system and the behavior one would observe when randomly probing its phase space.
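The basin-weighted sampling described above can be illustrated with a toy random map. The sketch below (plain Python, with illustrative sizes) draws random starting vertices, so that each cyclic attractor is encountered with a probability proportional to the size of its basin of attraction.

```python
import random

def find_cycle_length(succ, start):
    """Iterate the map from `start` until a vertex repeats; return the
    length of the cycle that the trajectory falls into."""
    seen = {}  # vertex -> step at which it was first visited
    v, step = start, 0
    while v not in seen:
        seen[v] = step
        v = succ[v]
        step += 1
    return step - seen[v]

random.seed(0)
N = 200
succ = [random.randrange(N) for _ in range(N)]  # a random map on N vertices

# Basin-weighted sampling: starting from a random vertex weights each
# cyclic attractor by the size of its basin of attraction.
weighted = [find_cycle_length(succ, random.randrange(N)) for _ in range(500)]
avg_weighted_cycle_length = sum(weighted) / len(weighted)
```

Enumerating every cycle once (unweighted), instead of sampling starting vertices, would give the intrinsic cycle-length statistics that the paper contrasts with the stochastically sampled ones.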
A Bayesian connectivity-based approach to constructing probabilistic gene regulatory networks.
Zhou, Xiaobo; Wang, Xiaodong; Pal, Ranadip; Ivanov, Ivan; Bittner, Michael; Dougherty, Edward R
2004-11-22
We have hypothesized that the construction of transcriptional regulatory networks using a method that optimizes connectivity would lead to regulation consistent with biological expectations. A key expectation is that the hypothetical networks should produce a few, very strong attractors, highly similar to the original observations, mimicking biological state stability and determinism. Another central expectation is that, since it is expected that the biological control is distributed and mutually reinforcing, interpretation of the observations should lead to a very small number of connection schemes. We propose a fully Bayesian approach to constructing probabilistic gene regulatory networks (PGRNs) that emphasizes network topology. The method computes the possible parent sets of each gene, the corresponding predictors and the associated probabilities based on a nonlinear perceptron model, using a reversible jump Markov chain Monte Carlo (MCMC) technique, and an MCMC method is employed to search the network configurations to find those with the highest Bayesian scores to construct the PGRN. The Bayesian method has been used to construct a PGRN based on the observed behavior of a set of genes whose expression patterns vary across a set of melanoma samples exhibiting two very different phenotypes with respect to cell motility and invasiveness. Key biological features have been faithfully reflected in the model. Its steady-state distribution contains attractors that are either identical or very similar to the states observed in the data, and many of the attractors are singletons, which mimics the biological propensity to stably occupy a given state. Most interestingly, the connectivity rules for the most optimal generated networks constituting the PGRN are remarkably similar, as would be expected for a network operating on a distributed basis, with strong interactions between the components.
A class of cellular automata modeling winnerless competition
NASA Astrophysics Data System (ADS)
Afraimovich, V.; Ordaz, F. C.; Urías, J.
2002-06-01
Neural units introduced by Rabinovich et al. ("Sensory coding with dynamically competitive networks," UCSD and CIT, February 1999) motivate a class of cellular automata (CA) where spatio-temporal encoding is feasible. The spatio-temporal information capacity of a CA is estimated by the information capacity of the attractor set, which happens to be finitely specified. Two-dimensional CA are studied in detail. An example is given for which the attractor is not a subshift.
Dynamical networks with topological self-organization
NASA Technical Reports Server (NTRS)
Zak, M.
2001-01-01
Coupled evolution of the state and topology of dynamical networks is introduced. Owing to the well-organized tensor structure, the governing equations are presented in canonical form, and required attractors as well as their basins can easily be implanted and controlled.
Intermittent and sustained periodic windows in networked chaotic Rössler oscillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Zhiwei; Sun, Yong
The route to chaos (or periodicity) in dynamical systems is one of the fundamental problems. Here, the dynamical behaviors of coupled chaotic Rössler oscillators on complex networks are investigated, and two different types of periodic windows under variation of the coupling strength are found. Under moderate coupling, the periodic window is intermittent, and the attractors within the window depend extremely sensitively on the initial conditions, coupling parameter, and topology of the network. Therefore, after adding or removing one edge of the network, the periodic attractor can be destroyed and replaced by a chaotic one, or vice versa. In contrast, under extremely weak coupling, another type of periodic window appears, which depends insensitively on the initial conditions, coupling parameter, and network. It is sustained and unchanged for different types of network structure. It is also found that the phase differences of the oscillators are almost discrete and randomly distributed, except that directly linked oscillators are more likely to have different phases. These dynamical behaviors have also been generally observed in other networked chaotic oscillators.
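As a rough illustration of the setup, the sketch below integrates a small ring of diffusively coupled Rössler oscillators with the explicit Euler method. The ring topology, the coupling through the x variable, and all parameter values are assumptions chosen for the example, not the networks studied in the paper.

```python
def rossler_ring(n=6, eps=0.05, a=0.2, b=0.2, c=5.7, dt=0.005, steps=20000):
    """Euler integration of n Rössler oscillators diffusively coupled
    through the x variable on a ring (illustrative scheme)."""
    x = [0.1 * (i + 1) for i in range(n)]
    y = [0.0] * n
    z = [0.0] * n
    for _ in range(steps):
        dx = [0.0] * n; dy = [0.0] * n; dz = [0.0] * n
        for i in range(n):
            left, right = x[(i - 1) % n], x[(i + 1) % n]
            coupling = eps * (left + right - 2.0 * x[i])
            dx[i] = -y[i] - z[i] + coupling   # standard Rössler x equation
            dy[i] = x[i] + a * y[i]           # standard Rössler y equation
            dz[i] = b + z[i] * (x[i] - c)     # standard Rössler z equation
        for i in range(n):
            x[i] += dt * dx[i]; y[i] += dt * dy[i]; z[i] += dt * dz[i]
    return x, y, z

x, y, z = rossler_ring()
```

Sweeping `eps` and inspecting the trajectories (e.g. via Poincaré sections or phase differences) is how one would look for the intermittent and sustained periodic windows the abstract describes.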
[Dynamic paradigm in psychopathology: "chaos theory", from physics to psychiatry].
Pezard, L; Nandrino, J L
2001-01-01
For the last thirty years, progress in the field of physics known as "chaos theory"--or, more precisely, non-linear dynamical systems theory--has increased our understanding of complex systems dynamics. This framework's formalism is general enough to be applied in other domains, such as biology or psychology, where complex systems are the rule rather than the exception. Our goal is to show here that this framework can become a valuable tool in scientific fields such as neuroscience and psychiatry, where objects possess natural time dependency (i.e. dynamical properties) and non-linear characteristics. The application of non-linear dynamics concepts to these topics is more than a loose metaphor and can throw new light on mental functioning and dysfunctioning. A class of neural networks (recurrent neural networks) constitutes an example of the implementation of the dynamical system concept and provides models of cognitive processes (15). The state of activity of the network is represented in its state space, and the time evolution of this state is a trajectory in this space. After a period of time, these networks settle on an equilibrium (a kind of attractor). The strength of the connections between neurons defines the number of these attractors and the relations between them. The attractors of the network are usually interpreted as "mental representations". When an initial condition is imposed on the network, the evolution towards an attractor is considered a model of information processing (27). This information processing is not defined in a symbolic manner but results from the interaction between distributed elements. Several properties of dynamical models can be used to define a way in which symbolic properties emerge from physical and dynamical properties (28), and thus they can be candidates for the definition of the emergence of mental properties on the basis of neuronal dynamics (42).
Nevertheless, mental properties can also be considered the result of an underlying dynamics without explicit mention of the neuronal one (47). In that case, dynamical tools can be used to elucidate Freudian psychodynamics (34, 35). Recurrent neural networks have been used to propose interpretations of several mental dysfunctions (12). For example, in the case of schizophrenia, it has been proposed that troubles in cortical pruning during development (13) may cause a decrease in neural network storage ability and lead to the creation of spurious attractors. These attractors do not correspond to stored memories and attract a large number of initial conditions: they have thus been associated with the reality distortion observed in schizophrenia (14). Nevertheless, the behavior of these models is too simple to be directly compared with real physiological data. In fact, equilibrium attractors are hardly met in biological dynamics. More complex behaviors (such as oscillations or chaos) should thus be taken into account. The study of chaotic behavior has led to the development of numerical methods devoted to the analysis of complex time series (17). These methods may be used to characterise the dynamical processes at the time-scales of both cerebral dynamics and clinical symptom variations. The application of these methods to physiological signals has shown that complex behaviors are related to healthy states whereas simple dynamics are related to pathology (8). These studies have thus confirmed the notion of "dynamical disease" (20, 21), which denotes pathological conditions characterised by changes in physiological rhythms. Depression has been studied within this framework (25, 32) in order to define possible changes in brain electrical rhythms related to this trouble and its evolution.
It has been shown that the brain dynamics of controls is more complex than that of depressed patients, and that the recovery of complex brain activity depends on the number of previous episodes. Concerning the time evolution of symptoms, several studies have demonstrated that non-linear dynamical processes may be involved in the recurrence of symptoms in troubles such as manic-depressive illness (9) or schizophrenia (51). These observations can contribute to a more parsimonious interpretation of the time course of these illnesses than the usual theories. In the search for a relationship between brain dynamics and mental troubles, a study of three depressed patients has shown a strong correlation between the characteristics of brain dynamics and the intensity of depressive mood (49). This preliminary observation is in accordance with the emergence hypothesis, according to which changes in neuronal dynamics should be related to changes in mental processes. We have reviewed here some theoretical and experimental results related to the use of "physical" dynamical theory in the field of psychopathology. It has been argued that these applications go beyond metaphor and that they are empirically founded. Nevertheless, these studies only constitute first steps toward a cautious development and definition of a "dynamical paradigm" in psychopathology. The introduction of concepts from dynamics, such as complexity and dynamical changes (i.e. bifurcations), permits a new perspective on the function and dysfunction of the mind/brain and the time evolution of symptoms. Moreover, it offers a ground for the hypothesis of the emergence of mental properties on the basis of neuronal dynamics (42). Since this theory can help to throw light on classical problems in psychopathology, we consider that a precise examination of both its theoretical and empirical consequences is required to establish its validity on this topic.
Brain pathways for cognitive-emotional decision making in the human animal.
Levine, Daniel S
2009-04-01
As roles for different brain regions become clearer, a picture emerges of how primate prefrontal cortex executive circuitry influences subcortical decision making pathways inherited from other mammals. The human's basic needs or drives can be interpreted as residing in an on-center off-surround network in motivational regions of the hypothalamus and brain stem. Such a network has multiple attractors that, in this case, represent the amount of satisfaction of these needs, and we consider and interpret neurally a continuous-time simulated annealing algorithm for moving between attractors under the influence of noise that represents "discontent" combined with "initiative." For decision making on specific tasks, we employ a variety of rules whose neural circuitry appears to involve the amygdala and the orbital, cingulate, and dorsolateral regions of prefrontal cortex. These areas can be interpreted as connected in a three-layer adaptive resonance network. The vigilance of the network, which is influenced by the state of the hypothalamic needs network, determines the level of sophistication of the rule being utilized.
Del Giudice, Paolo; Fusi, Stefano; Mattia, Maurizio
2003-01-01
In this paper we review a series of works concerning models of spiking neurons interacting via spike-driven, plastic, Hebbian synapses, meant to implement stimulus-driven, unsupervised formation of working memory (WM) states. Starting from a summary of the experimental evidence emerging from delayed matching-to-sample (DMS) experiments, we briefly review the attractor picture proposed to underlie WM states. We then describe a general framework for a theoretical approach to learning with synapses subject to realistic constraints and outline some general requirements to be met by a mechanism of Hebbian synaptic structuring. We argue that a stochastic selection of the synapses to be updated allows for optimal memory storage, even if the number of stable synaptic states is reduced to the extreme (bistable synapses). A description follows of models of spike-driven synapses that implement the stochastic selection by exploiting the high irregularity in the pre- and post-synaptic activity. Reasons are listed why dynamic learning, that is, the process by which the synaptic structure develops under the sole guidance of neural activities, driven in turn by stimuli, is hard to accomplish. We provide a 'feasibility proof' of the dynamic formation of WM states by showing how an initially unstructured network autonomously develops a synaptic structure supporting simultaneously stable spontaneous and WM states; in this context the beneficial role of short-term depression (STD) is illustrated. After summarizing heuristic indications emerging from the study performed, we conclude by briefly discussing open problems and critical issues still to be clarified.
The Statistical Mechanics of Dilute, Disordered Systems
NASA Astrophysics Data System (ADS)
Blackburn, Roger Michael
Available from UMI in association with The British Library. A graph partitioning problem with variable inter-partition costs is studied by exploiting its mapping onto the Ashkin-Teller spin glass. The cavity method is used to derive the TAP equations and free energy for both extensively connected and dilute systems. Unlike Ising and Potts spin glasses, the self-consistent equation for the distribution of effective fields does not have a solution solely made up of delta functions. Numerical integration is used to find the stable solution, from which the ground-state energy is calculated. Simulated annealing is used to test the results. The retrieval activity distribution for networks of Boolean functions trained as associative memories at optimal capacity is derived. For infinite networks, outputs are shown to be frozen, in contrast to dilute asymmetric networks trained with the Hebb rule. For finite networks, a steady leaking to the non-retrieving attractor is demonstrated. Simulations of quenched networks are reported which show a departure from this picture: some configurations remain frozen for all time, while others follow cycles of small period. An estimate of the critical capacity from the simulations is found to be in broad agreement with recent analytical results. The existing theory is extended to include noise on recall, and the behaviour is found to be robust to noise up to order 1/c^2 for networks with connectivity c.
Retrieval Property of Attractor Network with Synaptic Depression
NASA Astrophysics Data System (ADS)
Matsumoto, Narihisa; Ide, Daisuke; Watanabe, Masataka; Okada, Masato
2007-08-01
Synaptic connections are known to change dynamically. High-frequency presynaptic inputs induce a decrease in synaptic weights, a process known as short-term synaptic depression. Synaptic depression controls the gain for presynaptic inputs; however, the functional roles of this gain control remain controversial. We propose a new hypothesis: one of its functional roles is to enlarge the basins of attraction. To verify this hypothesis, we employ a binary discrete-time associative memory model consisting of excitatory and inhibitory neurons. It is known that the excitatory-inhibitory balance controls the overall activity of the network, and synaptic depression might provide an additional activity control mechanism. Using a mean-field theory and computer simulations, we find that synaptic depression enlarges the basins at small loading rates, while the excitatory-inhibitory balance enlarges them at large loading rates. Furthermore, synaptic depression does not affect the steady state of the network if the threshold is set at an appropriate value. These results suggest that synaptic depression works in addition to the effect of the excitatory-inhibitory balance and might improve the error-correcting ability of cortical circuits.
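For orientation, a minimal binary associative memory of the Hopfield type illustrates what "basins of attraction" means here: a corrupted version of a stored pattern relaxes back to that pattern under the network dynamics. This sketch deliberately omits the synaptic depression and the excitatory-inhibitory split studied in the paper; sizes and noise level are illustrative.

```python
import random

def train_hebbian(patterns):
    """Hebbian weights W[i][j] = sum over patterns of p_i * p_j / n,
    with zero self-coupling."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, sweeps=5):
    """Deterministic synchronous updates s_i = sign(sum_j W_ij s_j)."""
    n = len(state)
    s = list(state)
    for _ in range(sweeps):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

random.seed(1)
n = 100
pattern = [random.choice([-1, 1]) for _ in range(n)]
W = train_hebbian([pattern])
# Flip about 15% of the bits: the noisy state lies inside the basin
# of attraction of the stored pattern and is pulled back to it.
noisy = [(-s if random.random() < 0.15 else s) for s in pattern]
recovered = recall(W, noisy)
```

The size of the basin is then measured by how much corruption the network can still correct; the paper's claim is that synaptic depression enlarges this tolerance at small loading rates.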
Phenotypic Plasticity and Cell Fate Decisions in Cancer: Insights from Dynamical Systems Theory.
Jia, Dongya; Jolly, Mohit Kumar; Kulkarni, Prakash; Levine, Herbert
2017-06-22
Waddington's epigenetic landscape, a famous metaphor in developmental biology, depicts how a stem cell progresses from an undifferentiated phenotype to a differentiated one. The concept of "landscape" in the context of dynamical systems theory represents a high-dimensional space, in which each cell phenotype is considered as an "attractor" that is determined by interactions between multiple molecular players, and is buffered against environmental fluctuations. In addition, biological noise is thought to play an important role during these cell-fate decisions and in fact controls transitions between different phenotypes. Here, we discuss the phenotypic transitions in cancer from a dynamical systems perspective and invoke the concept of "cancer attractors"-hidden stable states of the underlying regulatory network that are not occupied by normal cells. Phenotypic transitions in cancer occur at varying levels depending on the context. Using epithelial-to-mesenchymal transition (EMT), cancer stem-like properties, metabolic reprogramming and the emergence of therapy resistance as examples, we illustrate how phenotypic plasticity in cancer cells enables them to acquire hybrid phenotypes (such as hybrid epithelial/mesenchymal and hybrid metabolic phenotypes) that tend to be more aggressive and notoriously resilient to therapies such as chemotherapy and androgen-deprivation therapy. Furthermore, we highlight multiple factors that may give rise to phenotypic plasticity in cancer cells, such as (a) multi-stability or oscillatory behaviors governed by underlying regulatory networks involved in cell-fate decisions in cancer cells, and (b) network rewiring due to conformational dynamics of intrinsically disordered proteins (IDPs) that are highly enriched in cancer cells. 
We conclude by discussing why a therapeutic approach that promotes "recanalization", i.e., the exit from "cancer attractors" and re-entry into "normal attractors", is more likely to succeed rather than a conventional approach that targets individual molecules/pathways.
Dynamics of feature categorization.
Martí, Daniel; Rinzel, John
2013-01-01
In visual and auditory scenes, we are able to identify shared features among sensory objects and group them according to their similarity. This grouping is preattentive and fast and is thought of as an elementary form of categorization by which objects sharing similar features are clustered in some abstract perceptual space. It is unclear what neuronal mechanisms underlie this fast categorization. Here we propose a neuromechanistic model of fast feature categorization based on the framework of continuous attractor networks. The mechanism for category formation does not rely on learning and is based on biologically plausible assumptions, for example, the existence of populations of neurons tuned to feature values, feature-specific interactions, and subthreshold-evoked responses upon the presentation of single objects. When the network is presented with a sequence of stimuli characterized by some feature, the network sums the evoked responses and provides a running estimate of the distribution of features in the input stream. If the distribution of features is structured into different components or peaks (i.e., is multimodal), recurrent excitation amplifies the response of activated neurons, and categories are singled out as emerging localized patterns of elevated neuronal activity (bumps), centered at the centroid of each cluster. The emergence of bump states through sequential, subthreshold activation and the dependence on input statistics is a novel application of attractor networks. We show that the extraction and representation of multiple categories are facilitated by the rich attractor structure of the network, which can sustain multiple stable activity patterns for a robust range of connectivity parameters compatible with cortical physiology.
Zillmer, Rüdiger; Brunel, Nicolas; Hansel, David
2009-03-01
We present results of an extensive numerical study of the dynamics of networks of integrate-and-fire neurons connected randomly through inhibitory interactions. We first consider delayed interactions with infinitely fast rise and decay. Depending on the parameters, the network displays transients which are short or exponentially long in the network size. At the end of these transients, the dynamics settle on a periodic attractor. If the number of connections per neuron is large (approximately 1000), this attractor is a cluster state with a short period. In contrast, if the number of connections per neuron is small (approximately 100), the attractor has complex dynamics and a very long period. During the long transients the neurons fire in a highly irregular manner. They can be viewed as quasistationary states in which, depending on the coupling strength, the pattern of activity is asynchronous or displays population oscillations. In the first case, the average firing rates and the variability of the single-neuron activity are well described by a mean-field theory valid in the thermodynamic limit. Bifurcations of the long transient dynamics from asynchronous to synchronous activity are also well predicted by this theory. The transient dynamics display features reminiscent of stable chaos. In particular, despite being linearly stable, the trajectories of the transient dynamics are destabilized by finite perturbations as small as O(1/N). We further show that stable chaos is also observed for postsynaptic currents with finite decay time. However, we report that in this type of network chaotic dynamics characterized by positive Lyapunov exponents can also be observed. We show in fact that chaos occurs when the decay time of the synaptic currents is long compared to the synaptic delay, provided that the network is sufficiently large.
Neural dynamics for landmark orientation and angular path integration
Seelig, Johannes D.; Jayaraman, Vivek
2015-01-01
Summary Many animals navigate using a combination of visual landmarks and path integration. In mammalian brains, head direction cells integrate these two streams of information by representing an animal's heading relative to landmarks, yet maintaining their directional tuning in darkness based on self-motion cues. Here we use two-photon calcium imaging in head-fixed flies walking on a ball in a virtual reality arena to demonstrate that landmark-based orientation and angular path integration are combined in the population responses of neurons whose dendrites tile the ellipsoid body — a toroidal structure in the center of the fly brain. The population encodes the fly's azimuth relative to its environment, tracking visual landmarks when available and relying on self-motion cues in darkness. When both visual and self-motion cues are absent, a representation of the animal's orientation is maintained in this network through persistent activity — a potential substrate for short-term memory. Several features of the population dynamics of these neurons and their circular anatomical arrangement are suggestive of ring attractors — network structures proposed to support the function of navigational brain circuits. PMID:25971509
Solanka, Lukas; van Rossum, Mark CW; Nolan, Matthew F
2015-01-01
Neural computations underlying cognitive functions require calibration of the strength of excitatory and inhibitory synaptic connections and are associated with modulation of gamma frequency oscillations in network activity. However, principles relating gamma oscillations, synaptic strength and circuit computations are unclear. We address this in attractor network models that account for grid firing and theta-nested gamma oscillations in the medial entorhinal cortex. We show that moderate intrinsic noise massively increases the range of synaptic strengths supporting gamma oscillations and grid computation. With moderate noise, variation in excitatory or inhibitory synaptic strength tunes the amplitude and frequency of gamma activity without disrupting grid firing. This beneficial role for noise results from disruption of epileptic-like network states. Thus, moderate noise promotes independent control of multiplexed firing rate- and gamma-based computational mechanisms. Our results have implications for tuning of normal circuit function and for disorders associated with changes in gamma oscillations and synaptic strength. DOI: http://dx.doi.org/10.7554/eLife.06444.001 PMID:26146940
PyBoolNet: a python package for the generation, analysis and visualization of boolean networks.
Klarner, Hannes; Streck, Adam; Siebert, Heike
2017-03-01
The goal of this project is to provide a simple interface for working with Boolean networks. Emphasis is put on easy access to a large number of common tasks, including the generation and manipulation of networks, attractor and basin computation, model checking and trap space computation, execution of established graph algorithms, as well as graph drawing and layouts. PyBoolNet is a Python package for working with Boolean networks that supports simple access to model checking via NuSMV, standard graph algorithms via NetworkX, and visualization via dot. In addition, state-of-the-art attractor computation exploiting Potassco ASP is implemented. The package is function-based and uses only native Python and NetworkX data types. https://github.com/hklarner/PyBoolNet. hannes.klarner@fu-berlin.de.
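The attractor computation that PyBoolNet performs (far more efficiently, via ASP and model checking) can be illustrated by brute force in plain Python for a toy three-variable network. The update rules below are invented for the example, and this sketch does not use PyBoolNet's actual API.

```python
from itertools import product

# A toy Boolean network: each function maps the current state tuple
# (x0, x1, x2) to the next value of one variable.
update = [
    lambda s: int(not s[2]),  # x0' = NOT x2
    lambda s: s[0],           # x1' = x0
    lambda s: s[1],           # x2' = x1
]

def step(state):
    return tuple(int(f(state)) for f in update)

def attractors(n):
    """Enumerate the attractors of the synchronous dynamics by iterating
    every state until its trajectory revisits a state."""
    found = set()
    for start in product((0, 1), repeat=n):
        seen = []
        s = start
        while s not in seen:
            seen.append(s)
            s = step(s)
        cycle = seen[seen.index(s):]     # the cycle the trajectory fell into
        found.add(tuple(sorted(cycle)))  # canonical form, one entry per attractor
    return found

atts = attractors(3)
```

For this toy network the eight states split into a 6-cycle and a 2-cycle; exhaustive enumeration like this is exponential in the number of variables, which is why the package's ASP-based computation matters for realistic models.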
Lerner, Itamar; Bentin, Shlomo; Shriki, Oren
2014-01-01
Semantic priming has long been recognized to reflect, along with automatic semantic mechanisms, the contribution of controlled strategies. However, previous theories of controlled priming were mostly qualitative, lacking common grounds with modern mathematical models of automatic priming based on neural networks. Recently, we have introduced a novel attractor network model of automatic semantic priming with latching dynamics. Here, we extend this work to show how the same model can also account for important findings regarding controlled processes. Assuming the rate of semantic transitions in the network can be adapted using simple reinforcement learning, we show how basic findings attributed to controlled processes in priming can be achieved, including their dependency on stimulus onset asynchrony and relatedness proportion and their unique effect on associative, category-exemplar, mediated and backward prime-target relations. We discuss how our mechanism relates to the classic expectancy theory and how it can be further extended in future developments of the model. PMID:24890261
Detection of strong attractors in social media networks.
Qasem, Ziyaad; Jansen, Marc; Hecking, Tobias; Hoppe, H Ulrich
2016-01-01
Detection of influential actors in social media such as Twitter or Facebook plays an important role in improving the quality and efficiency of work and services in many fields, such as education and marketing. The work described here aims to introduce a new approach that characterizes the influence of actors by their strength in attracting new active members into a networked community. We present a model of the influence of an actor that is based on the attractiveness of the actor in terms of the number of other new actors with which he or she has established relations over time. We have used this concept and measure of influence to determine optimal seeds in a simulation of influence maximization, using two empirically collected social networks for the underlying graphs. Our empirical results on the datasets demonstrate that our measure stands out as a useful one for identifying attractors compared to other influence measures.
Complexity and non-commutativity of learning operations on graphs.
Atmanspacher, Harald; Filk, Thomas
2006-07-01
We present results from numerical studies of supervised learning operations in small recurrent networks considered as graphs, leading from a given set of input conditions to predetermined outputs. Graphs that have optimized their output for particular inputs with respect to predetermined outputs are asymptotically stable and can be characterized by attractors, which form a representation space for an associative multiplicative structure of input operations. As the mapping from a series of inputs onto a series of such attractors generally depends on the sequence of inputs, this structure is generally non-commutative. Moreover, the size of the set of attractors, indicating the complexity of learning, is found to behave non-monotonically as learning proceeds. A tentative relation between this complexity and the notion of pragmatic information is indicated.
Basins of Attraction for Generative Justice
NASA Astrophysics Data System (ADS)
Eglash, Ron; Garvey, Colin
It has long been known that dynamic systems typically tend towards some state - an "attractor" - into which they finally settle. The introduction of chaos theory has modified our understanding of these attractors: we no longer think of the final "resting state" as necessarily being at rest. In this essay we consider the attractors of social ecologies: the networks of people, technologies and natural resources that make up our built environments. Following the work of "communitarians" we posit that basins of attraction could be created for social ecologies that foster both environmental sustainability and social justice. We refer to this confluence as "generative justice"; a phrase which references both the "bottom-up", self-generating source of its adaptive metastability, as well as its grounding in the ethics of egalitarian political theory.
Strange nonchaotic attractors for computation
NASA Astrophysics Data System (ADS)
Sathish Aravindh, M.; Venkatesan, A.; Lakshmanan, M.
2018-05-01
We investigate the response of quasiperiodically driven nonlinear systems exhibiting strange nonchaotic attractors (SNAs) to deterministic input signals. We show that if one uses two square waves in an aperiodic manner as input to a quasiperiodically driven double-well Duffing oscillator system, the response of the system can produce logical output controlled by such a forcing. Changing the threshold or biasing of the system changes the output to another logic operation and memory latch. The interplay of nonlinearity and quasiperiodic forcing yields logical behavior, and the emergent outcome of such a system is a logic gate. It is further shown that the logical behaviors persist even for an experimental noise floor. Thus the SNA turns out to be an efficient tool for computation.
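The logic-gate idea can be illustrated with a deliberately simplified caricature. The sketch below replaces the paper's quasiperiodically driven double-well Duffing oscillator with an overdamped double-well system dx/dt = x - x^3 + I, where I is the sum of two binary inputs; the input amplitude 0.5 and the sign-of-x readout are illustrative choices, not the paper's parameters. Because the negative well disappears once the bias exceeds about 0.385, thresholding the state implements an OR gate.

```python
def logic_response(b1, b2, amp=0.5, dt=0.01, steps=3000):
    """Overdamped double-well dx/dt = x - x**3 + I driven by the sum of
    two binary inputs; reading out the sign of x yields an OR gate."""
    I = amp * (b1 + b2)
    x = -1.0                       # start in the 'logical 0' well
    for _ in range(steps):
        x += dt * (x - x**3 + I)   # explicit Euler integration
    return 1 if x > 0 else 0
```

As the abstract notes for the full system, changing the threshold or the bias would convert the same dynamics into a different gate.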
NASA Astrophysics Data System (ADS)
Nicolis, John S.; Katsikas, Anastassis A.
Collective parameters such as the Zipf's law-like statistics, the Transinformation, the Block Entropy and the Markovian character are compared for natural, genetic, musical and artificially generated long texts from generating partitions (alphabets) on homogeneous as well as on multifractal chaotic maps. It appears that minimal requirements for a language at the syntactical level such as memory, selectivity of few keywords and broken symmetry in one dimension (polarity) are more or less met by dynamically iterating simple maps or flows, e.g. very simple chaotic hardware. The same selectivity is observed at the semantic level, where the aim refers to partitioning a set of environmental impinging stimuli onto coexisting attractors (categories). Under the regime of pattern recognition and classification, few key features of a pattern or few categories claim the lion's share of the information stored in this pattern, and practically only these key features are persistently scanned by the cognitive processor. A multifractal attractor model can in principle explain this high selectivity, both at the syntactical and the semantic levels.
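The kind of analysis described above can be sketched by generating symbolic "text" from a chaotic map and tabulating block statistics. The snippet below uses the logistic map at r = 4 with the generating partition at x = 0.5; the block length k = 3 and all parameter values are illustrative assumptions, not those of the paper.

```python
from collections import Counter

def symbolic_sequence(n, x0=0.4, r=4.0):
    """Binary symbolic dynamics of the logistic map x -> r*x*(1-x),
    read off through the generating partition at x = 0.5."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append('1' if x >= 0.5 else '0')
    return ''.join(seq)

def rank_frequency(seq, k=3):
    """Rank-ordered frequencies of length-k blocks ('words'), the raw
    material for a Zipf-style rank-frequency plot."""
    words = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return sorted(words.values(), reverse=True)
```

Plotting log-frequency against log-rank of these blocks is the standard way to check for Zipf-like statistics in such symbolic sequences.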
NASA Astrophysics Data System (ADS)
Freud, Sven; Plaga, Rainer; Breithaupt, Ralph
2016-06-01
The hyper-chaotic strange attractor of systems of four Chua’s circuits that are mutually coupled by three strong and three weak couplings is studied, both experimentally and via simulation. A new metric to compare strange attractors is presented. It is found that the strength of the couplings between circuits has a complex and determining influence on the probability for the presence of a trajectory within their attractors. This influence is strictly local, i.e. the probability of the presence of the trajectories is determined by the coupling strength to the directly adjacent circuits and is independent of the coupling strengths among other circuits. Fluctuations in the properties of Chua’s circuits due to random variations during the production of their components have a significant influence on the probability of presence of the attractor’s trajectories, which could be qualitatively, but not quantitatively, modeled by our simulation. The consequences of these results for the possibility of constructing “physical unclonable functions” as networks of Chua’s circuits with hyper-chaotic dynamics are discussed.
Chaotic interactions of self-replicating RNA.
Forst, C V
1996-03-01
A general system of high-order differential equations describing the complex dynamics of replicating biomolecules is given. Symmetry relations and coordinate transformations of general replication systems leading to topologically equivalent systems are derived. Three chaotic attractors observed in Lotka-Volterra equations of dimension n = 3 are shown to represent three cross-sections of one and the same chaotic regime. A fractal torus in a generalized three-dimensional Lotka-Volterra model has also been linked to one of the chaotic attractors. The strange attractors are studied in the equivalent four-dimensional catalytic replicator network. The fractal torus has been examined in adapted Lotka-Volterra equations. Analytic expressions are derived for the Lyapunov exponents of the flow in the replicator system. Lyapunov spectra for different pathways into chaos have been calculated. In the generalized Lotka-Volterra system a second inner rest point--coexisting with (quasi)-periodic orbits--can be observed, with an abundance of different bifurcations. Pathways from chaotic tori, via quasi-periodic tori, limit cycles, and multi-periodic orbits--emerging out of period-doubling bifurcations--to "simple" chaotic attractors can be found.
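The catalytic replicator networks studied here evolve on the probability simplex under the replicator equation dx_i/dt = x_i((Ax)_i - x·Ax). A minimal Euler sketch follows; the rock-paper-scissors interaction matrix and step size used in testing are illustrative, not taken from the paper.

```python
def replicator_step(x, A, dt=0.001):
    """One Euler step of dx_i/dt = x_i * ((A x)_i - phi), where
    phi = x . A x is the mean fitness; renormalization keeps the
    state on the probability simplex despite integration error."""
    n = len(x)
    f = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    phi = sum(xi * fi for xi, fi in zip(x, f))
    x = [xi * (1.0 + dt * (fi - phi)) for xi, fi in zip(x, f)]
    s = sum(x)
    return [xi / s for xi in x]
```

The renormalization step is a pragmatic choice: the exact flow conserves the simplex, but explicit Euler does not.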
On the origin of reproducible sequential activity in neural circuits
NASA Astrophysics Data System (ADS)
Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.
2004-12-01
Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. The SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in the presence of noise, in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
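Winnerless competition of the kind underlying an SHS can be demonstrated with a three-unit version of the generalized Lotka-Volterra rate model, da_i/dt = a_i(sigma_i - sum_j rho_ij a_j). The asymmetric inhibition matrix below (May-Leonard-type values) is an illustrative choice, not the paper's network; it makes activity pass sequentially near the saddle states where one unit dominates.

```python
def lv_step(a, sigma, rho, dt=0.01):
    """One Euler step of the generalized Lotka-Volterra rate model
    da_i/dt = a_i * (sigma_i - sum_j rho[i][j] * a_j)."""
    growth = [s - sum(r * aj for r, aj in zip(row, a))
              for s, row in zip(sigma, rho)]
    return [ai * (1.0 + dt * g) for ai, g in zip(a, growth)]

# Asymmetric inhibition: a dominant unit suppresses the next unit
# strongly (weight 2) and the previous one only weakly (weight 0.5),
# so dominance is handed on cyclically along the sequence of saddles.
SIGMA = [1.0, 1.0, 1.0]
RHO = [[1.0, 0.5, 2.0],
       [2.0, 1.0, 0.5],
       [0.5, 2.0, 1.0]]
```

Iterating this map from a state near one saddle produces the characteristic sequential switching of the dominant unit.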
A self-organized learning strategy for object recognition by an embedded line of attraction
NASA Astrophysics Data System (ADS)
Seow, Ming-Jung; Alex, Ann T.; Asari, Vijayan K.
2012-04-01
For humans, a picture is worth a thousand words, but to a machine, it is just a seemingly random array of numbers. Although machines are very fast and efficient, they are vastly inferior to humans for everyday information processing. Algorithms that mimic the way the human brain computes and learns may be the solution. In this paper we present a theoretical model based on the observation that images of similar visual perceptions reside in a complex manifold in an image space. The perceived features are often highly structured and hidden in a complex set of relationships or high-dimensional abstractions. To model the pattern manifold, we present a novel learning algorithm using a recurrent neural network. The brain memorizes information using a dynamical system made of interconnected neurons. Retrieval of information is accomplished in an associative sense. It starts from an arbitrary state that might be an encoded representation of a visual image and converges to another state that is stable. The stable state is what the brain remembers. In designing a recurrent neural network, it is usually of prime importance to guarantee the convergence in the dynamics of the network. We propose to modify this picture: if the brain remembers by converging to the state representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image belonging to a different category. That is, the identification of an instability mode is an indication that a presented pattern is far away from any stored pattern and therefore cannot be associated with current memories. These properties can be used to circumvent the plasticity-stability dilemma by using the fluctuating mode as an indicator to create new states. We capture this behavior using a novel neural architecture and learning algorithm, in which the system performs self-organization utilizing a stability mode and an instability mode for the dynamical system. 
Based on this observation we developed a self-organizing line attractor, which is capable of generating new lines in the feature space to learn unrecognized patterns. Experiments performed on the UMIST pose database and the CMU face expression variant database for face recognition have shown that the proposed nonlinear line attractor is able to successfully identify the individuals, and it provided a better recognition rate when compared to state-of-the-art face recognition techniques. Experiments on the FRGC version 2 database have also provided excellent recognition rates on images captured in complex lighting environments. Experiments performed on the Japanese female face expression database and the Essex Grimace database using the self-organizing line attractor have also shown successful expression-invariant face recognition. These results show that the proposed model is able to create nonlinear manifolds in a multidimensional feature space to distinguish complex patterns.
Floral Morphogenesis: Stochastic Explorations of a Gene Network Epigenetic Landscape
Aldana, Maximino; Benítez, Mariana; Cortes-Poza, Yuriria; Espinosa-Soto, Carlos; Hartasánchez, Diego A.; Lotto, R. Beau; Malkin, David; Escalera Santos, Gerardo J.; Padilla-Longoria, Pablo
2008-01-01
In contrast to the classical view of development as a preprogrammed and deterministic process, recent studies have demonstrated that stochastic perturbations of highly non-linear systems may underlie the emergence and stability of biological patterns. Herein, we address the question of whether noise contributes to the generation of the stereotypical temporal pattern in gene expression during flower development. We modeled the regulatory network of organ identity genes in the Arabidopsis thaliana flower as a stochastic system. This network has previously been shown to converge to ten fixed-point attractors, each with gene expression arrays that characterize inflorescence cells and primordial cells of sepals, petals, stamens, and carpels. The network used is binary, and the logical rules that govern its dynamics are grounded in experimental evidence. We introduced different levels of uncertainty in the updating rules of the network. Interestingly, for a level of noise of around 0.5–10%, the system exhibited a sequence of transitions among attractors that mimics the sequence of gene activation configurations observed in real flowers. We also implemented the gene regulatory network as a continuous system using the Glass model of differential equations, that can be considered as a first approximation of kinetic-reaction equations, but which are not necessarily equivalent to the Boolean model. Interestingly, the Glass dynamics recover a temporal sequence of attractors, that is qualitatively similar, although not identical, to that obtained using the Boolean model. Thus, time ordering in the emergence of cell-fate patterns is not an artifact of synchronous updating in the Boolean model. Therefore, our model provides a novel explanation for the emergence and robustness of the ubiquitous temporal pattern of floral organ specification. 
It also constitutes a new approach to understanding morphogenesis, providing predictions on the population dynamics of cells with different genetic configurations during development. PMID:18978941
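The stochastic updating scheme described above can be sketched generically: apply each gene's logical rule, then flip the result with a small probability. The toy three-gene rotation network used for testing is illustrative only, not the Arabidopsis organ-identity network.

```python
import random

def noisy_boolean_step(state, rules, eta, rng):
    """Synchronous update of a Boolean network in which each rule's
    output is flipped with probability eta (the noise level)."""
    new = []
    for rule in rules:
        v = rule(state)
        if rng.random() < eta:
            v = 1 - v              # stochastic perturbation of the rule
        new.append(v)
    return tuple(new)
```

With eta in roughly the 0.005-0.1 range (the 0.5-10% band cited in the abstract), repeated application of this map lets the state escape one attractor's basin and visit others in sequence.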
Bayesian Networks Predict Neuronal Transdifferentiation.
Ainsworth, Richard I; Ai, Rizi; Ding, Bo; Li, Nan; Zhang, Kai; Wang, Wei
2018-05-30
We employ the language of Bayesian networks to systematically construct gene-regulation topologies from deep-sequencing single-nucleus RNA-Seq data for human neurons. From the perspective of the cell-state potential landscape, we identify attractors that correspond closely to different neuron subtypes. Attractors are also recovered for cell states from an independent data set, confirming our model's accurate description of global genetic regulations across differing cell types of the neocortex (not included in the training data). Our model recovers experimentally confirmed genetic regulations, and community analysis reveals genetic associations in common pathways. Via a comprehensive scan of all theoretical three-gene perturbations of gene knockout and overexpression, we discover novel neuronal transdifferentiation recipes (including perturbations of SATB2, GAD1, POU6F2 and ADARB2) for excitatory projection neuron and inhibitory interneuron subtypes. Copyright © 2018, G3: Genes, Genomes, Genetics.
Membrane potential dynamics of grid cells
Domnisoru, Cristina; Kinkhabwala, Amina A.; Tank, David W.
2014-01-01
During navigation, grid cells increase their spike rates in firing fields arranged on a strikingly regular triangular lattice, while their spike timing is often modulated by theta oscillations. Oscillatory interference models of grid cells predict theta amplitude modulations of membrane potential during firing field traversals, while competing attractor network models predict slow depolarizing ramps. Here, using in-vivo whole-cell recordings, we tested these models by directly measuring grid cell intracellular potentials in mice running along linear tracks in virtual reality. Grid cells had large and reproducible ramps of membrane potential depolarization that were tightly correlated with firing fields, constituting their characteristic signature. Grid cells also exhibited intracellular theta oscillations that influenced their spike timing. However, the properties of theta amplitude modulations were not consistent with the view that they determine firing field locations. Our results support cellular and network mechanisms in which grid fields are produced by slow ramps, as in attractor models, while theta oscillations control spike timing. PMID:23395984
On the molecular basis of the receptor mosaic hypothesis of the engram.
Agnati, Luigi F; Ferré, Sergi; Leo, Giuseppina; Lluis, Carme; Canela, Enric I; Franco, Rafael; Fuxe, Kjell
2004-08-01
1. This paper revisits the so-called "receptor mosaic hypothesis" for memory trace formation in the light of recent findings in "functional (or interaction) proteomics." The receptor mosaic hypothesis maintains that receptors may form molecular aggregates at the plasma membrane level representing part of the computational molecular networks. 2. Specific interactions between receptors occur as a consequence of the pattern of transmitter release from the source neurons, which release the chemical code impinging on the receptor mosaics of the target neuron. Thus, the decoding of the chemical message depends on the receptors forming the receptor mosaics and on the type of interactions among receptors and other proteins in the molecular network with novel long-term mosaics formed by their stabilization via adapter proteins formed in target neurons through the incoming neurotransmitter code. The internalized receptor heteromeric complexes or parts of them may act as transcription factors for the formation of such adapter proteins. 3. Receptor mosaics are formed both at the pre- and postsynaptic level of the plasma membranes and this phenomenon can play a role in the Hebbian behavior of some synaptic contacts. The appropriate "matching" of the pre- with the postsynaptic receptor mosaic can be thought of as the "clamping of the synapse to the external teaching signal." According to our hypothesis the behavior of the molecular networks at plasma membrane level to which the receptor mosaics belong can be set in a "frozen" conformation (i.e. in a frozen functional state) and this may represent a mechanism to maintain constant the input to a neuron. 4. Thus, we are suggesting that molecular networks at plasma membrane level may display multiple "attractors" each of which stores the memory of a specific neurotransmitter code due to a unique firing pattern. 
Hence, this mechanism may play a role in learning processes where the input to a neuron is likely to remain constant for a while.
On the robustness of complex heterogeneous gene expression networks.
Gómez-Gardeñes, Jesús; Moreno, Yamir; Floría, Luis M
2005-04-01
We analyze a continuous gene expression model on the underlying topology of a complex heterogeneous network. Numerical simulations aimed at studying the chaotic and periodic dynamics of the model are performed. The results clearly indicate that there is a region in which the dynamical and structural complexity of the system avoid chaotic attractors. However, contrary to what has been reported for Random Boolean Networks, the chaotic phase cannot be completely suppressed, which has important bearings on network robustness and gene expression modeling.
Demongeot, Jacques; Ben Amor, Hedi; Elena, Adrien; Gillois, Pierre; Noual, Mathilde; Sené, Sylvain
2009-01-01
Regulatory interaction networks are often studied on their dynamical side (existence of attractors, study of their stability). Here we focus also on their robustness, that is, their ability to offer the same spatiotemporal patterns and to resist external perturbations such as losses of nodes or edges in the network's interaction architecture, changes in their environmental boundary conditions, as well as changes in the update schedule (or updating mode) of the states of their elements (e.g., if these elements are genes, their synchronous coexpression mode versus their sequential expression). We define the generic notions of boundary, core, and critical vertex or edge of the underlying interaction graph of the regulatory network, whose disappearance causes dramatic changes in the number and nature of attractors (e.g., passage from a bistable behaviour to a unique periodic regime) or in the range of their basins of stability. The dynamic transition of states will be presented in the framework of threshold Boolean automata rules. A panorama of applications at different levels will be given: brain and plant morphogenesis, bulbar cardio-respiratory regulation, glycolytic/oxidative metabolic coupling, and eventually cell cycle and feather morphogenesis genetic control. PMID:20057955
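The threshold Boolean automata framework mentioned above admits a compact generic sketch: each element becomes active when the weighted sum of its inputs reaches its threshold, and attractors are found by iterating until a state repeats. The weights and thresholds below are illustrative, not drawn from any of the paper's applications.

```python
def threshold_step(x, W, theta):
    """Synchronous threshold Boolean automaton:
    x_i <- 1 if sum_j W[i][j] * x_j >= theta[i], else 0."""
    return tuple(1 if sum(w * xj for w, xj in zip(row, x)) >= t else 0
                 for row, t in zip(W, theta))

def attractor_from(x, W, theta, max_steps=100):
    """Iterate from state x until a state repeats; return the cycle
    (a fixed point is a cycle of length 1)."""
    seen, traj = {}, []
    for step in range(max_steps):
        if x in seen:
            return traj[seen[x]:]
        seen[x] = step
        traj.append(x)
        x = threshold_step(x, W, theta)
    return []
```

Zeroing one entry of W (deleting an edge) and comparing the resulting attractor sets is exactly the kind of criticality probe the abstract describes.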
Self-organized topology of recurrence-based complex networks
NASA Astrophysics Data System (ADS)
Yang, Hui; Liu, Gang
2013-12-01
With rapid technological advancement, networks are almost everywhere in our daily life. Network theory leads to a new way to investigate the dynamics of complex systems. As a result, many methods have been proposed to construct a network from a nonlinear time series, including the partition of state space, visibility graphs, nearest neighbors, and recurrence approaches. However, most previous works focus on deriving the adjacency matrix to represent the complex network and extracting new network-theoretic measures. Although the adjacency matrix provides connectivity information about nodes and edges, the network geometry can take variable forms. The research objective of this article is to develop a self-organizing approach to derive the steady geometric structure of a network from the adjacency matrix. We simulate the recurrence network as a physical system by treating the edges as springs and the nodes as electrically charged particles. Then, force-directed algorithms are developed to automatically organize the network geometry by minimizing the system energy. Further, a set of experiments was designed to investigate important factors (i.e., dynamical systems, network construction methods, force-model parameter, nonhomogeneous distribution) affecting this self-organizing process. Interestingly, experimental results show that the self-organized geometry recovers the attractor of the dynamical system that produced the adjacency matrix. This research addresses the question "what is the self-organizing geometry of a recurrence network?" and provides a new way to reproduce the attractor or time series from the recurrence plot. As a result, novel network-theoretic measures (e.g., average path length and proximity ratio) can be computed based on actual node-to-node distances in the self-organized network topology. The paper brings physical models into recurrence analysis and discloses the spatial geometry of recurrence networks.
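The spring-and-charge analogy can be sketched with a Fruchterman-Reingold-style force-directed step: all node pairs repel like charges (k^2/d) and adjacent nodes attract like springs (d^2/k). This is a generic sketch of the idea only; the paper's exact force model and parameters may differ, and `k` and `step` below are illustrative.

```python
import math

def force_layout(adj, pos, iters=200, k=1.0, step=0.05):
    """Force-directed layout from an adjacency matrix: charged-particle
    repulsion k^2/d between all pairs, spring attraction d^2/k along
    edges; positions are updated by damped displacement steps."""
    n = len(adj)
    for _ in range(iters):
        disp = [[0.0, 0.0] for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d                  # repulsion pushes apart
                if adj[i][j]:
                    f -= d * d / k             # spring pulls together
                disp[i][0] += step * f * dx / d
                disp[i][1] += step * f * dy / d
        for i in range(n):
            pos[i][0] += disp[i][0]
            pos[i][1] += disp[i][1]
    return pos
```

For an edge with k = 1, the two forces balance at unit distance, so a connected pair relaxes toward separation 1; applied to a recurrence network's adjacency matrix, this relaxation is what recovers the attractor geometry.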
A framework to find the logic backbone of a biological network.
Maheshwari, Parul; Albert, Réka
2017-12-06
Cellular behaviors are governed by interaction networks among biomolecules, for example gene regulatory and signal transduction networks. An often-used dynamic modeling framework for these networks, Boolean modeling, can obtain their attractors (which correspond to cell types and behaviors) and their trajectories from an initial state (e.g. a resting state) to the attractors, for example in response to an external signal. The existing methods, however, do not elucidate the causal relationships between distant nodes in the network. In this work, we propose a simple logic framework, based on categorizing causal relationships as sufficient or necessary, as a complement to Boolean networks. We identify and explore the properties of complex subnetworks that are distillable into a single logic relationship. We also identify cyclic subnetworks that ensure the stabilization of the state of participating nodes regardless of the rest of the network. We identify the logic backbone of biomolecular networks, consisting of external signals, self-sustaining cyclic subnetworks (stable motifs), and output nodes. Furthermore, we use the logic framework to identify crucial nodes whose override can drive the system from one steady state to another. We apply these techniques to two biological networks: the epithelial-to-mesenchymal transition network corresponding to a developmental process exploited in tumor invasion, and the network of abscisic acid induced stomatal closure in plants. We find interesting subnetworks with logical implications in these networks. Using these subgraphs and motifs, we efficiently reduce both networks to succinct backbone structures. The logic representation identifies the causal relationships between distant nodes and subnetworks. This knowledge can form the basis of network control or be used in the reverse engineering of networks.
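The sufficient/necessary categorization at the heart of this framework can be sketched by exhaustively checking a node's Boolean rule: an input is sufficient if its activation forces the output on for every setting of the other inputs, and necessary if the output is off whenever that input is off. The function signature and rule encoding below are illustrative assumptions, not the authors' code.

```python
from itertools import product

def causal_type(rule, inputs, which):
    """Return (sufficient, necessary) for input `which` of a Boolean
    rule given as a function over a dict of named input values."""
    others = [i for i in inputs if i != which]
    combos = list(product([0, 1], repeat=len(others)))
    # sufficient: which = 1 forces the rule to 1, whatever the rest do
    sufficient = all(rule({which: 1, **dict(zip(others, vals))})
                     for vals in combos)
    # necessary: the rule can only be 1 when which = 1
    necessary = all(not rule({which: 0, **dict(zip(others, vals))})
                    for vals in combos)
    return sufficient, necessary
```

An activator feeding an OR gate comes out sufficient but not necessary; one feeding an AND gate comes out necessary but not sufficient.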
Syntactic sequencing in Hebbian cell assemblies.
Wennekers, Thomas; Palm, Günther
2009-12-01
Hebbian cell assemblies provide a theoretical framework for the modeling of cognitive processes that grounds them in the underlying physiological neural circuits. Recently we have presented an extension of cell assemblies by operational components which allows us to model aspects of language, rules, and complex behaviour. In the present work we study the generation of syntactic sequences using operational cell assemblies timed by unspecific trigger signals. Syntactic patterns are implemented in terms of hetero-associative transition graphs in attractor networks, which cause a directed flow of activity through the neural state space. We provide parameter regimes that enable an unspecific excitatory control signal to switch reliably between attractors in accordance with the implemented syntactic rules. If several target attractors are possible in a given state, noise in the system in conjunction with a winner-takes-all mechanism can randomly choose a target. Disambiguation can also be guided by context signals or specific additional external signals. Given a permanently elevated level of external excitation, the model can enter an autonomous mode, where it generates temporal grammatical patterns continuously.
Fasoli, Diego; Cattani, Anna; Panzeri, Stefano
2018-05-01
Despite their biological plausibility, neural network models with asymmetric weights are rarely solved analytically, and closed-form solutions are available only in some limiting cases or in some mean-field approximations. We found exact analytical solutions of an asymmetric spin model of neural networks with arbitrary size without resorting to any approximation, and we comprehensively studied its dynamical and statistical properties. The network had discrete time evolution equations and binary firing rates, and it could be driven by noise with any distribution. We found analytical expressions of the conditional and stationary joint probability distributions of the membrane potentials and the firing rates. By manipulating the conditional probability distribution of the firing rates, we extend to stochastic networks the associative learning rule previously introduced by Personnaz and coworkers. The new learning rule allowed the safe storage, in the presence of noise, of point and cyclic attractors, with useful implications for content-addressable memories. Furthermore, we studied the bifurcation structure of the network dynamics in the zero-noise limit. We analytically derived examples of the codimension 1 and codimension 2 bifurcation diagrams of the network, which describe how the neuronal dynamics changes with the external stimuli. This showed that the network may undergo transitions among multistable regimes, oscillatory behavior elicited by asymmetric synaptic connections, and various forms of spontaneous symmetry breaking. We also calculated analytically groupwise correlations of neural activity in the network in the stationary regime. This revealed neuronal regimes where, statistically, the membrane potentials and the firing rates are either synchronous or asynchronous. Our results are valid for networks with any number of neurons, although our equations can be realistically solved only for small networks.
For completeness, we also derived the network equations in the thermodynamic limit of infinite network size and we analytically studied their local bifurcations. All the analytical results were extensively validated by numerical simulations.
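The discrete-time, binary-rate dynamics described above can be sketched in a few lines. This is a generic illustration with arbitrary asymmetric weights and a sign activation in the zero-noise limit, not the authors' exact equations or their learning rule:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6  # a small network, where exact enumeration of states is feasible

# Asymmetric synaptic weights (J != J.T) and arbitrary external stimuli
J = rng.normal(0.0, 1.0, (N, N))
I_ext = rng.normal(0.0, 0.5, N)

def step(nu):
    """One discrete-time update: membrane potentials from the current binary
    firing rates, then new rates through a sign activation (zero-noise limit)."""
    V = J @ nu + I_ext
    return np.sign(V)

# Deterministic dynamics on 2^N states must eventually revisit a state,
# closing a point attractor (period 1) or a cyclic attractor (period > 1)
nu = np.sign(rng.normal(size=N))
seen = {}
for t in range(2 ** N + 1):
    key = tuple(nu)
    if key in seen:
        period = t - seen[key]
        break
    seen[key] = t
    nu = step(nu)
print("attractor period:", period)
```

By the pigeonhole principle the loop always terminates, so every trajectory of the zero-noise network ends on a point or cyclic attractor, as in the abstract.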
From cognitive networks to seizures: Stimulus evoked dynamics in a coupled cortical network
NASA Astrophysics Data System (ADS)
Lee, Jaejin; Ermentrout, Bard; Bodner, Mark
2013-12-01
Epilepsy is one of the most common neuropathologies worldwide. Seizures arising in epilepsy or in seizure disorders are characterized generally by uncontrolled spread of excitation and electrical activity to a limited region or even over the entire cortex. While it is generally accepted that abnormal excessive firing and synchronization of neuron populations lead to seizures, little is known about the precise mechanisms underlying human epileptic seizures, the mechanisms of transitions from normal to paroxysmal activity, or about how seizures spread. Further complication arises in that seizures do not occur with a single type of dynamics but as many different phenotypes and genotypes with a range of patterns, synchronous oscillations, and time courses. The concept of preventing, terminating, or modulating seizures and/or paroxysmal activity through stimulation of brain has also received considerable attention. The ability of such stimulation to prevent or modulate such pathological activity may depend on identifiable parameters. In this work, firing rate networks with inhibitory and excitatory populations were modeled. Network parameters were chosen to model normal working memory behaviors. Two different models of cognitive activity were developed. The first model consists of a single network corresponding to a local area of the brain. The second incorporates two networks connected through sparser recurrent excitatory connectivity with transmission delays ranging from approximately 3 ms within local populations to 15 ms between populations residing in different cortical areas. The effect of excitatory stimulation to activate working memory behavior through selective persistent activation of populations is examined in the models, and the conditions and transition mechanisms through which that selective activation breaks down producing spreading paroxysmal activity and seizure states are characterized. 
Specifically, we determine critical parameters and architectural changes that produce the different seizure dynamics in the networks. This provides possible mechanisms for seizure generation. Because seizures arise as attractors in a multi-state system, the system may possibly be returned to its baseline state through some particular stimulation. The ability of stimulation to terminate seizure dynamics in the local and distributed models is studied. We systematically examine when this may occur and the form of the stimulation necessary for the range of seizure dynamics. In both the local and distributed network models, termination is possible for all seizure types observed by stimulation possessing some particular configuration of spatial and temporal characteristics.
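The kind of excitatory-inhibitory firing-rate network used in such models can be sketched minimally as follows. The parameters here are illustrative, chosen only to produce bistability, and are not the paper's values; the point is that brief excitatory stimulation switches the network into a self-sustained, working-memory-like persistent state:

```python
import numpy as np

def f(x):  # sigmoidal firing-rate nonlinearity (an illustrative choice)
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative parameters: recurrent excitation strong enough for bistability
w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0
theta_e, theta_i, tau, dt = 5.0, 8.0, 10.0, 0.1

def simulate(stim, T=2000):
    """Excitatory/inhibitory rate model; `stim` maps time step -> input to E."""
    rE = rI = 0.0
    trace = []
    for t in range(T):
        drE = (-rE + f(w_ee * rE - w_ei * rI - theta_e + stim(t))) / tau
        drI = (-rI + f(w_ie * rE - w_ii * rI - theta_i)) / tau
        rE += dt * drE
        rI += dt * drI
        trace.append(rE)
    return trace

# A brief excitatory pulse (steps 100-300) switches the network into a
# persistent high-activity state that outlasts the stimulus
trace = simulate(lambda t: 8.0 if 100 <= t < 300 else 0.0)
print("activity long after stimulus offset:", trace[-1])
```

In models of the type the abstract describes, pathological (seizure-like) spread corresponds to losing the selectivity of such persistent states, e.g. when recurrent excitation or transmission delays are pushed past critical values.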
Historical Contingency in Controlled Evolution
NASA Astrophysics Data System (ADS)
Schuster, Peter
2014-12-01
A basic question in evolution concerns the nature of evolutionary memory. At thermodynamic equilibrium, at stable stationary states, or at other stable attractors, the memory of the path leading to the long-time solution is erased, at least in part. Similar arguments hold for unique optima. Optimality in biology is discussed on the basis of microbial metabolism. Biology, on the other hand, is characterized by historical contingency, which has recently become accessible to experimental tests in bacterial populations evolving under controlled conditions. Computer simulations give additional insight into the nature of the evolutionary memory, which is ultimately caused by the enormous space of possibilities, so large that it escapes all attempts at visualization. In essence, this contribution deals with two questions of current evolutionary theory: (i) Are organisms operating at optimal performance? and (ii) How is the evolutionary memory built up in populations?
MONOMIALS AND BASIN CYLINDERS FOR NETWORK DYNAMICS
AUSTIN, DANIEL; DINWOODIE, IAN H
2014-01-01
We describe methods to identify cylinder sets inside a basin of attraction for Boolean dynamics of biological networks. Such sets are used for designing regulatory interventions that make the system evolve towards a chosen attractor, for example initiating apoptosis in a cancer cell. We describe two algebraic methods for identifying cylinders inside a basin of attraction, one based on the Groebner fan that finds monomials that define cylinders and the other on primary decomposition. Both methods are applied to current examples of gene networks. PMID:25620893
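The idea of a cylinder set inside a basin of attraction can be illustrated by brute force on a toy three-gene Boolean network. The network and its rules below are made up for illustration; the paper's algebraic methods (Groebner fan, primary decomposition) replace this exhaustive enumeration with computations that scale to real gene networks:

```python
from itertools import product

# Toy Boolean network: x0' = x1 AND x2, x1' = x0 OR x2, x2' = x0
def update(s):
    x0, x1, x2 = s
    return (x1 & x2, x0 | x2, x0)

def attractor_of(s):
    """Follow the deterministic dynamics until a state repeats; return the cycle."""
    seen = []
    while s not in seen:
        seen.append(s)
        s = update(s)
    return tuple(seen[seen.index(s):])

# Partition the full state space into basins of attraction
basins = {}
for s in product([0, 1], repeat=3):
    rep = min(attractor_of(s))  # canonical representative of the attractor
    basins.setdefault(rep, []).append(s)

def cylinders_in(basin):
    """All partial assignments {x_i = c_i} whose full cylinder lies in the basin;
    fixing those coordinates guarantees evolution to the chosen attractor."""
    basin = set(basin)
    found = []
    for mask in product([None, 0, 1], repeat=3):
        if all(m is None for m in mask):
            continue
        cyl = [s for s in product([0, 1], repeat=3)
               if all(m is None or s[i] == m for i, m in enumerate(mask))]
        if set(cyl) <= basin:
            found.append(mask)
    return found

for rep, states in sorted(basins.items()):
    print(rep, "basin size:", len(states), "cylinders:", cylinders_in(states))
```

A cylinder such as `(1, None, 0)` (fix x0 = 1 and x2 = 0, leave x1 free) is exactly the kind of regulatory intervention target the abstract describes: clamping those genes forces the system into the chosen attractor regardless of the remaining state.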
Optimal interdependence enhances the dynamical robustness of complex systems.
Singh, Rishu Kumar; Sinha, Sitabhra
2017-08-01
Although interdependent systems have usually been associated with increased fragility, we show that strengthening the interdependence between dynamical processes on different networks can make them more likely to survive over long times. By coupling the dynamics of networks that in isolation exhibit catastrophic collapse with extinction of nodal activity, we demonstrate system-wide persistence of activity for an optimal range of interdependence between the networks. This is related to the appearance of attractors of the global dynamics comprising disjoint sets ("islands") of stable activity.
Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.
2016-01-01
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061
Dynamic effects of memory in a cobweb model with competing technologies
NASA Astrophysics Data System (ADS)
Agliari, Anna; Naimzada, Ahmad; Pecora, Nicolò
2017-02-01
We analyze a simple model based on the cobweb demand-supply framework with costly innovators and free imitators and study the endogenous dynamics of price and firms' fractions in a homogeneous good market. The evolutionary selection between technologies depends on a performance measure in which a memory parameter is introduced. The resulting dynamics is then described by a two-dimensional map. In addition to the locally stabilizing effect due to the presence of memory, we show the existence of a double stability threshold, which entails different dynamic scenarios when the memory parameter takes extreme values (i.e., when the last profit realization either dominates the performance measure or is almost entirely neglected). The occurrence of different coexisting attractors, as well as the structure of the basins of attraction that characterizes the path-dependence property of the model with memory, is shown. In particular, through global analysis we also illustrate particular bifurcation sequences that may increase the complexity of the related basins of attraction.
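The basin structure of a two-dimensional map with memory can be probed numerically by iterating from a grid of initial conditions and recording which coexisting attractor each orbit reaches. The map below is an illustrative cubic map fed by a memory-weighted mix of the current and lagged states, not the paper's cobweb model:

```python
import numpy as np

# Illustrative 2D map with memory weight m: a cubic map driven by a weighted
# mix of the current state x and the lagged state u (a stand-in, not the
# paper's price/fraction map)
def step(x, u, a=1.5, m=0.3):
    w = (1 - m) * x + m * u   # memory-weighted state
    return a * w - w**3, x    # new state, and current state kept as the lag

def basin_label(x0, u0, n=100):
    """Iterate from (x0, u0) and report which coexisting attractor (+ or -)
    the orbit settles on; the two fixed points sit near +/- sqrt(a - 1)."""
    x, u = x0, u0
    for _ in range(n):
        x, u = step(x, u)
    return int(np.sign(x))

grid = np.linspace(-1.0, 1.0, 41)
labels = np.array([[basin_label(x0, u0) for x0 in grid] for u0 in grid])
plus, minus = int((labels == 1).sum()), int((labels == -1).sum())
print("basin sizes on the grid:", plus, minus)
```

Scanning `labels` over parameter values is the numerical counterpart of the global analysis the abstract describes: changes in the shape of the two basins reveal the bifurcation sequences that complicate them.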
Hierarchical Heteroclinics in Dynamical Model of Cognitive Processes: Chunking
NASA Astrophysics Data System (ADS)
Afraimovich, Valentin S.; Young, Todd R.; Rabinovich, Mikhail I.
Combining the results of brain imaging and nonlinear dynamics provides a new hierarchical vision of brain network functionality that is helpful in understanding the relationship of the network to different mental tasks. Using these ideas it is possible to build adequate models for the description and prediction of different cognitive activities in which the number of variables is usually small enough for analysis. The dynamical images of different mental processes depend on their temporal organization and, as a rule, cannot be just simple attractors since cognition is characterized by transient dynamics. The mathematical image for a robust transient is a stable heteroclinic channel consisting of a chain of saddles connected by unstable separatrices. We focus here on hierarchical chunking dynamics that can represent several cognitive activities. Chunking is the dynamical phenomenon that means dividing a long information chain into shorter items. Chunking is known to be important in many processes of perception, learning, memory and cognition. Using the technique of slow-fast approximations, we prove that the phase space of the model describing chunking contains a new mathematical object, a heteroclinic sequence of heteroclinic cycles. This new object serves as a skeleton of motions reflecting sequential features of hierarchical chunking dynamics and is an adequate image of the chunking processing.
Universality in survivor distributions: Characterizing the winners of competitive dynamics
NASA Astrophysics Data System (ADS)
Luck, J. M.; Mehta, A.
2015-11-01
We investigate the survivor distributions of a spatially extended model of competitive dynamics in different geometries. The model consists of a deterministic dynamical system of individual agents at specified nodes, which might or might not survive the predatory dynamics: all stochasticity is brought in by the initial state. Every such initial state leads to a unique and extended pattern of survivors and nonsurvivors, which is known as an attractor of the dynamics. We show that the number of such attractors grows exponentially with system size, so that their exact characterization is limited to only very small systems. Given this, we construct an analytical approach based on inhomogeneous mean-field theory to calculate survival probabilities for arbitrary networks. This powerful (albeit approximate) approach shows how universality arises in survivor distributions via a key concept—the dynamical fugacity. Remarkably, in the large-mass limit, the survivor probability of a node becomes independent of network geometry and assumes a simple form which depends only on its mass and degree.
Analysis of a dynamic model of guard cell signaling reveals the stability of signal propagation
NASA Astrophysics Data System (ADS)
Gan, Xiao; Albert, Réka
Analyzing the long-term behaviors (attractors) of dynamic models of biological systems can provide valuable insight into biological phenotypes and their stability. We identify the long-term behaviors of a multi-level, 70-node discrete dynamic model of the stomatal opening process in plants. We reduce the model's huge state space by reducing unregulated nodes and simple mediator nodes, and by simplifying the regulatory functions of selected nodes while keeping the model consistent with experimental observations. We perform attractor analysis on the resulting 32-node reduced model by two methods: (1) converting it into a Boolean model and then applying two attractor-finding algorithms; (2) theoretically analyzing the regulatory functions. We conclude that all nodes except two in the reduced model have a single attractor, and only two nodes can admit oscillations. The multistability or oscillations do not affect the stomatal opening level in any situation. This conclusion applies to the original model as well in all the biologically meaningful cases. We further demonstrate the robustness of signal propagation by showing that a large percentage of single-node knockouts do not affect the stomatal opening level. Thus, we conclude that the complex structure of this signal transduction network provides multiple information propagation pathways while not allowing extensive multistability or oscillations, resulting in robust signal propagation. Our innovative combination of methods offers a promising way to analyze multi-level models.
Quantifying the underlying landscape and paths of cancer
Li, Chunhe; Wang, Jin
2014-01-01
Cancer is a disease regulated by the underlying gene networks. The emergence of normal and cancer states as well as the transformation between them can be thought of as a result of the gene network interactions and associated changes. We developed a global potential landscape and path framework to quantify cancer and associated processes. We constructed a cancer gene regulatory network based on experimental evidence and uncovered the underlying landscape. The resulting tristable landscape characterizes important biological states: normal, cancer and apoptosis. The landscape topography in terms of barrier heights between stable state attractors quantifies the global stability of the cancer network system. We propose two mechanisms of cancerization: one through changes of the landscape topography caused by changes in the regulation strengths of the gene networks; the other through fluctuations that help the system to go over the critical barrier at fixed landscape topography. The kinetic paths from the least action principle quantify the transition processes among the normal state, cancer state and apoptosis state. The kinetic rates provide the quantification of transition speeds among normal, cancer and apoptosis attractors. By the global sensitivity analysis of the gene network parameters on the landscape topography, we uncovered some key gene regulations determining the transitions between cancer and normal states. This can be used to guide the design of new anti-cancer tactics, through a cocktail strategy of targeting multiple key regulation links simultaneously, for preventing cancer occurrence or transforming the early cancer state back to the normal state. PMID:25232051
Willed action, free will, and the stochastic neurodynamics of decision-making
Rolls, Edmund T.
2012-01-01
It is shown that the randomness of the firing times of neurons in decision-making attractor neuronal networks that is present before the decision cues are applied can cause statistical fluctuations that influence the decision that will be taken. In this rigorous sense, it is possible to partially predict decisions before they are made. This raises issues about free will and determinism. There are many decision-making networks in the brain. Some decision systems operate to choose between gene-specified rewards such as taste, touch, and beauty (in for example the peacock's tail). Other processes capable of planning ahead with multiple steps held in working memory may require correction by higher order thoughts that may involve explicit, conscious, processing. The explicit system can allow the gene-specified rewards not to be selected or deferred. The decisions between the selfish gene-specified rewards, and the explicitly calculated rewards that are in the interests of the individual, the phenotype, may themselves be influenced by noise in the brain. When the explicit planning system does take the decision, it can report on its decision-making, and can provide a causal account rather than a confabulation about the decision process. We might use the terms “willed action” and “free will” to refer to the operation of the planning system that can think ahead over several steps held in working memory with which it can take explicit decisions. Reduced connectivity in some of the default mode cortical regions including the precuneus that are active during self-initiated action appears to be related to the reduction in the sense of self and agency, of causing willed actions, that can be present in schizophrenia. PMID:22973205
Lerner, Itamar; Shriki, Oren
2014-01-01
For the last four decades, semantic priming—the facilitation in recognition of a target word when it follows the presentation of a semantically related prime word—has been a central topic in research of human cognitive processing. Studies have drawn a complex picture of findings which demonstrated the sensitivity of this priming effect to a unique combination of variables, including, but not limited to, the type of relatedness between primes and targets, the prime-target Stimulus Onset Asynchrony (SOA), the relatedness proportion (RP) in the stimuli list and the specific task subjects are required to perform. Automatic processes depending on the activation patterns of semantic representations in memory and controlled strategies adapted by individuals when attempting to maximize their recognition performance have both been implicated in contributing to the results. Lately, we have published a new model of semantic priming that addresses the majority of these findings within one conceptual framework. In our model, semantic memory is depicted as an attractor neural network in which stochastic transitions from one stored pattern to another are continually taking place due to synaptic depression mechanisms. We have shown how such transitions, in combination with a reinforcement-learning rule that adjusts their pace, resemble the classic automatic and controlled processes involved in semantic priming and account for a great number of the findings in the literature. Here, we review the core findings of our model and present new simulations that show how similar principles of parameter-adjustments could account for additional data not addressed in our previous studies, such as the relation between expectancy and inhibition in priming, target frequency and target degradation effects. Finally, we describe two human experiments that validate several key predictions of the model. PMID:24795670
Event-related brain potentials index cue-based retrieval interference during sentence comprehension.
Martin, Andrea E; Nieuwland, Mante S; Carreiras, Manuel
2012-01-16
Successful language use requires access to products of past processing within an evolving discourse. A central issue for any neurocognitive theory of language then concerns the role of memory variables during language processing. Under a cue-based retrieval account of language comprehension, linguistic dependency resolution (e.g., retrieving antecedents) is subject to interference from other information in the sentence, especially information that occurs between the words that form the dependency (e.g., between the antecedent and the retrieval site). Retrieval interference may then shape processing complexity as a function of the match of the information at retrieval with the antecedent versus other recent or similar items in memory. To address these issues, we studied the online processing of ellipsis in Castilian Spanish, a language with morphological gender agreement. We recorded event-related brain potentials while participants read sentences containing noun-phrase ellipsis indicated by the determiner otro/a ('another'). These determiners had a grammatically correct or incorrect gender with respect to their antecedent nouns that occurred earlier in the sentence. Moreover, between each antecedent and determiner, another noun phrase occurred that was structurally unavailable as an antecedent and that matched or mismatched the gender of the antecedent (i.e., a local agreement attractor). In contrast to extant P600 results on agreement violation processing, and inconsistent with predictions from neurocognitive models of sentence processing, grammatically incorrect determiners evoked a sustained, broadly distributed negativity compared to correct ones between 400 and 1000ms after word onset, possibly related to sustained negativities as observed for referential processing difficulties. 
Crucially, this effect was modulated by the attractor: an increased negativity was observed for grammatically correct determiners that did not match the gender of the attractor, suggesting that structurally unavailable noun phrases were at least temporarily considered for grammatically correct ellipsis. These results constitute the first ERP evidence for cue-based retrieval interference during comprehension of grammatical sentences.
Sensory feedback in a bump attractor model of path integration.
Poll, Daniel B; Nguyen, Khanh; Kilpatrick, Zachary P
2016-04-01
Mammalian spatial navigation systems utilize several different sensory information channels. This information is converted into a neural code that represents the animal's current position in space by engaging place cell, grid cell, and head direction cell networks. In particular, sensory landmark (allothetic) cues can be utilized in concert with an animal's knowledge of its own velocity (idiothetic) cues to generate a more accurate representation of position than path integration provides on its own (Battaglia et al. The Journal of Neuroscience 24(19):4541-4550 (2004)). We develop a computational model that merges path integration with feedback from external sensory cues that provide a reliable representation of spatial position along an annular track. Starting with a continuous bump attractor model, we explore the impact of synaptic spatial asymmetry and heterogeneity, which disrupt the position code of the path integration process. We use asymptotic analysis to reduce the bump attractor model to a single scalar equation whose potential represents the impact of asymmetry and heterogeneity. Such imperfections cause errors to build up when the network performs path integration, but these errors can be corrected by an external control signal representing the effects of sensory cues. We demonstrate that there is an optimal strength and decay rate of the control signal when cues appear either periodically or randomly. A similar analysis is performed when errors in path integration arise from dynamic noise fluctuations. Again, there is an optimal strength and decay of discrete control that minimizes the path integration error.
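The reduction to a single scalar equation with an external control signal can be caricatured as follows. The coefficients are invented for illustration (the paper derives the actual potential from the bump attractor by asymptotic analysis), but the structure is the same: drift and noise corrupt the position estimate, and a decaying control signal re-armed by sensory cues pulls the error back toward zero:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.01, 2000    # time step and number of steps
drift = 0.05          # systematic drift from synaptic asymmetry (illustrative)
sigma = 0.2           # dynamic noise strength (illustrative)
k, lam = 2.0, 1.0     # control strength and decay rate (assumed values)

def run(cue_every=None):
    """Scalar reduction of the bump position: error accumulates via drift and
    noise; each sensory cue resets a decaying control signal that corrects it."""
    err, ctrl = 0.0, 0.0
    errs = []
    for t in range(T):
        if cue_every and t % cue_every == 0:
            ctrl = k                  # cue arrival re-arms the control signal
        ctrl -= dt * lam * ctrl       # control decays between cues
        err += dt * (drift - ctrl * err) + sigma * np.sqrt(dt) * rng.normal()
        errs.append(err)
    return np.array(errs)

free = run(cue_every=None)
cued = run(cue_every=200)
free_rms = float(np.sqrt((free ** 2).mean()))
cued_rms = float(np.sqrt((cued ** 2).mean()))
print("RMS position error, no cues vs periodic cues:", free_rms, cued_rms)
```

Sweeping `k` and `lam` in this sketch is the analogue of the paper's search for the optimal control strength and decay rate: too weak a control lets errors accumulate, while an overly persistent one can over-correct.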
Heterogeneous Attractor Cell Assemblies for Motor Planning in Premotor Cortex
Pani, Pierpaolo; Mirabella, Giovanni; Costa, Stefania; Del Giudice, Paolo
2013-01-01
Cognitive functions like motor planning rely on the concerted activity of multiple neuronal assemblies underlying still elusive computational strategies. During reaching tasks, we observed stereotyped sudden transitions (STs) between low and high multiunit activity of monkey dorsal premotor cortex (PMd) predicting forthcoming actions on a single-trial basis. Occurrence of STs was observed even when movement was delayed or successfully canceled after a stop signal, excluding a mere substrate of the motor execution. An attractor model accounts for upward STs and high-frequency modulations of field potentials, indicative of local synaptic reverberation. We found in vivo compelling evidence that motor plans in PMd emerge from the coactivation of such attractor modules, heterogeneous in the strength of local synaptic self-excitation. Modules with strong coupling early reacted with variable times to weak inputs, priming a chain reaction of both upward and downward STs in other modules. Such web of “flip-flops” rapidly converged to a stereotyped distributed representation of the motor program, as prescribed by the long-standing theory of associative networks. PMID:23825419
Evolution of canalizing Boolean networks
NASA Astrophysics Data System (ADS)
Szejka, A.; Drossel, B.
2007-04-01
Boolean networks with canalizing functions are used to model gene regulatory networks. In order to learn how such networks may behave under evolutionary forces, we simulate the evolution of a single Boolean network by means of an adaptive walk, which allows us to explore the fitness landscape. Mutations change the connections and the functions of the nodes. Our fitness criterion is the robustness of the dynamical attractors against small perturbations. We find that with this fitness criterion the global maximum is always reached and that there is a huge neutral space of 100% fitness. Furthermore, in spite of having such a high degree of robustness, the evolved networks still share many features with “chaotic” networks.
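The adaptive-walk scheme can be sketched as follows: mutate the Boolean rules, score the robustness of the attractor reached from a reference state against single-bit perturbations, and accept mutations that do not decrease fitness. This is a simplified caricature (it mutates only rule-table entries from a fixed start state; the paper also mutates connections and uses canalizing functions):

```python
import random
random.seed(3)

N, K = 8, 2  # nodes and inputs per node

def random_net():
    inputs = [random.sample(range(N), K) for _ in range(N)]
    tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]
    return inputs, tables

def step(state, net):
    inputs, tables = net
    return tuple(tables[i][2 * state[inputs[i][0]] + state[inputs[i][1]]]
                 for i in range(N))

def attractor(state, net):
    """Iterate until a state repeats; return the set of cycle states."""
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = step(state, net)
    return frozenset(list(seen)[seen[state]:])

def fitness(net):
    """Robustness of the attractor reached from a fixed reference state:
    the fraction of single-bit flips of attractor states that return to it."""
    att = attractor((0,) * N, net)
    hits = total = 0
    for s in att:
        for i in range(N):
            flipped = s[:i] + (1 - s[i],) + s[i + 1:]
            hits += attractor(flipped, net) == att
            total += 1
    return hits / total

net = random_net()
fit0 = fit = fitness(net)
for _ in range(300):  # adaptive walk: accept mutations that do not lower fitness
    inputs, tables = net
    new_tables = [t[:] for t in tables]
    new_tables[random.randrange(N)][random.randrange(2 ** K)] ^= 1
    f = fitness((inputs, new_tables))
    if f >= fit:
        net, fit = (inputs, new_tables), f
print("robustness before and after the walk:", fit0, fit)
```

Accepting neutral mutations (`f >= fit`) lets the walk drift across the large neutral plateau of maximal fitness that the abstract reports.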
Various Types of Coexisting Attractors in a New 4D Autonomous Chaotic System
NASA Astrophysics Data System (ADS)
Lai, Qiang; Akgul, Akif; Zhao, Xiao-Wen; Pei, Huiqin
A unique 4D autonomous chaotic system with a signum function term is proposed in this paper. The system has four unstable equilibria and various types of coexisting attractors appear. Four-wing and four-scroll strange attractors are observed in the system and they will be broken into two coexisting butterfly attractors and two coexisting double-scroll attractors with the variation of the parameters. Numerical simulation shows that the system has various types of multiple coexisting attractors, including two butterfly attractors with four limit cycles, two double-scroll attractors with a limit cycle, four single-scroll strange attractors, and four limit cycles, for different parameters and initial values. The coexistence of the attractors is determined by the bifurcation diagrams. The chaotic and hyperchaotic properties of the attractors are verified by the Lyapunov exponents. Moreover, we present an electronic circuit to experimentally realize the dynamic behavior of the system.
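The Lyapunov-exponent verification mentioned above follows a standard recipe: integrate two nearby trajectories, repeatedly renormalize their separation, and average the logarithmic growth rate. The paper's 4D signum system is not reproduced here; the well-known Lorenz system stands in as the test dynamics:

```python
import numpy as np

def f(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz vector field (a stand-in for the paper's 4D signum system)."""
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(v, dt):
    k1 = f(v); k2 = f(v + 0.5 * dt * k1)
    k3 = f(v + 0.5 * dt * k2); k4 = f(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, d0 = 0.01, 1e-8
v = np.array([1.0, 1.0, 1.0])
for _ in range(1000):            # discard the transient onto the attractor
    v = rk4(v, dt)

w = v + np.array([d0, 0.0, 0.0]) # perturbed companion trajectory
acc, steps = 0.0, 20000
for _ in range(steps):
    v, w = rk4(v, dt), rk4(w, dt)
    d = np.linalg.norm(w - v)
    acc += np.log(d / d0)
    w = v + (w - v) * (d0 / d)   # renormalize the separation back to d0
lam = acc / (steps * dt)
print("largest Lyapunov exponent estimate:", lam)
```

A positive largest exponent confirms chaos; for hyperchaos, as claimed for the 4D system, one would extend this to the full Lyapunov spectrum (e.g. via QR re-orthonormalization of a set of tangent vectors) and check for two positive exponents.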
Attractor Signaling Models for Discovery of Combinatorial Therapies
2014-11-01
acquired drug resistance still makes the 5-year survival rate for this disease less than 15%. Over the years, many specific mechanisms associated with drug... Moreover, it has been suggested that a biological system in a chronic or therapy-resistant disease state can be seen as a network that has become... therapeutic methods for complex diseases such as cancer. Even if our knowledge of biological networks is incomplete, rapid progress is currently being
Synchronization in neural nets
NASA Technical Reports Server (NTRS)
Vidal, Jacques J.; Haggerty, John
1988-01-01
The paper presents an artificial neural network concept (the Synchronizable Oscillator Networks) where the instants of individual firings in the form of point processes constitute the only form of information transmitted between joining neurons. In the model, neurons fire spontaneously and regularly in the absence of perturbation. When interaction is present, the scheduled firings are advanced or delayed by the firing of neighboring neurons. Networks of such neurons become global oscillators which exhibit multiple synchronizing attractors. From arbitrary initial states, energy minimization learning procedures can make the network converge to oscillatory modes that satisfy multi-dimensional constraints. Such networks can directly represent routing and scheduling problems that consist of ordering sequences of events.
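A sketch of pulse-coupled oscillators in this spirit is given below. The concrete Synchronizable Oscillator Network of the paper is not reproduced; this uses the classic integrate-and-fire formulation (a concave voltage curve with an additive pulse), under which scattered initial phases condense into synchronized clusters:

```python
import numpy as np

rng = np.random.default_rng(6)
N, eps, b = 10, 0.1, 3.0  # oscillators, pulse size, curvature (illustrative)

def f(phi):   # concave voltage-vs-phase curve
    return np.log1p((np.exp(b) - 1) * phi) / b

def finv(x):  # inverse map: voltage back to phase
    return np.expm1(b * x) / (np.exp(b) - 1)

phi = rng.uniform(0.0, 1.0, N)
for _ in range(500):                       # event-driven firing rounds
    phi += 1.0 - phi.max()                 # advance to the next firing event
    firing = phi >= 1.0 - 1e-9
    pulses = firing.sum()
    while pulses:                          # pulse cascade with absorption
        quiet = ~firing
        phi[quiet] = finv(np.minimum(f(phi[quiet]) + eps * pulses, 1.0))
        newly = (phi >= 1.0 - 1e-9) & quiet
        firing |= newly
        pulses = newly.sum()
    phi[firing] = 0.0                      # synchronized firers reset together

clusters = len(np.unique(np.round(phi, 9)))
print("phase clusters after 500 rounds:", clusters)
```

Oscillators pushed past threshold by a pulse fire in the same cascade and are absorbed into the firing group, which is how the multiple synchronizing attractors mentioned in the abstract emerge from arbitrary initial states.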
A recurrent network mechanism of time integration in perceptual decisions.
Wong, Kong-Fatt; Wang, Xiao-Jing
2006-01-25
Recent physiological studies using behaving monkeys revealed that, in a two-alternative forced-choice visual motion discrimination task, reaction time was correlated with ramping of spike activity of lateral intraparietal cortical neurons. The ramping activity appears to reflect temporal accumulation, on a timescale of hundreds of milliseconds, of sensory evidence before a decision is reached. To elucidate the cellular and circuit basis of such integration times, we developed and investigated a simplified two-variable version of a biophysically realistic cortical network model of decision making. In this model, slow time integration can be achieved robustly if excitatory reverberation is primarily mediated by NMDA receptors; our model with only fast AMPA receptors at recurrent synapses produces decision times that are not comparable with experimental observations. Moreover, we found two distinct modes of network behavior, in which decision computation by winner-take-all competition is instantiated with or without attractor states for working memory. Decision process is closely linked to the local dynamics, in the "decision space" of the system, in the vicinity of an unstable saddle steady state that separates the basins of attraction for the two alternative choices. This picture provides a rigorous and quantitative explanation for the dependence of performance and response time on the degree of task difficulty, and the reason for which reaction times are longer in error trials than in correct trials as observed in the monkey experiment. Our reduced two-variable neural model offers a simple yet biophysically plausible framework for studying perceptual decision making in general.
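The flavor of the reduced model can be conveyed by a schematic two-variable winner-take-all system (illustrative parameters and a generic sigmoid, not the published reduction): self-excitation plus cross-inhibition creates two choice attractors separated by a saddle, and the input bias (motion coherence) selects between them:

```python
import numpy as np

def f(x):  # generic sigmoidal f-I curve (illustrative)
    return 1.0 / (1.0 + np.exp(-x))

def decide(coherence, noise=0.0, seed=0, T=3000, dt=0.1, tau=10.0):
    """Two units race via self-excitation and mutual inhibition; `coherence`
    biases the inputs. Parameters are illustrative, not the published set."""
    rng = np.random.default_rng(seed)
    r1 = r2 = 0.1
    w_self, w_cross, I0 = 8.0, 6.0, -2.0
    for _ in range(T):
        I1 = I0 + w_self * r1 - w_cross * r2 + 0.5 * (1 + coherence)
        I2 = I0 + w_self * r2 - w_cross * r1 + 0.5 * (1 - coherence)
        r1 += dt / tau * (-r1 + f(I1)) + noise * np.sqrt(dt) * rng.normal()
        r2 += dt / tau * (-r2 + f(I2)) + noise * np.sqrt(dt) * rng.normal()
    return 1 if r1 > r2 else 2

print(decide(0.2), decide(-0.2))  # the input bias selects the choice attractor
```

The symmetric low-activity state here is a saddle: trajectories linger near it when the bias is weak (long decision times for hard trials) before being expelled into one of the two choice basins, mirroring the "decision space" picture in the abstract.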
Analysis of gene network robustness based on saturated fixed point attractors
2014-01-01
The analysis of gene network robustness to noise and mutation is important for fundamental and practical reasons. Robustness refers to the stability of the equilibrium expression state of a gene network to variations of the initial expression state and network topology. Numerical simulation of these variations is commonly used for the assessment of robustness. Since there exists a great number of possible gene network topologies and initial states, even millions of simulations may be still too small to give reliable results. When the initial and equilibrium expression states are restricted to being saturated (i.e., their elements can only take values 1 or −1 corresponding to maximum activation and maximum repression of genes), an analytical gene network robustness assessment is possible. We present this analytical treatment based on determination of the saturated fixed point attractors for sigmoidal function models. The analysis can determine (a) for a given network, which and how many saturated equilibrium states exist and which and how many saturated initial states converge to each of these saturated equilibrium states and (b) for a given saturated equilibrium state or a given pair of saturated equilibrium and initial states, which and how many gene networks, referred to as viable, share this saturated equilibrium state or the pair of saturated equilibrium and initial states. We also show that the viable networks sharing a given saturated equilibrium state must follow certain patterns. These capabilities of the analytical treatment make it possible to properly define and accurately determine robustness to noise and mutation for gene networks. Previous network research conclusions drawn from performing millions of simulations follow directly from the results of our analytical treatment. Furthermore, the analytical results provide criteria for the identification of model validity and suggest modified models of gene network dynamics. 
The yeast cell-cycle network is used as an illustration of the practical application of this analytical treatment. PMID:24650364
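The sign-consistency condition behind this kind of enumeration can be sketched in a few lines. The code below is a toy illustration, not the authors' algorithm: it assumes the steep-sigmoid limit, in which a saturated state s in {−1, +1}^n is an equilibrium exactly when sign(Ws) = s, and it brute-forces all 2^n states, which is only feasible for small networks.

```python
import itertools
import numpy as np

def saturated_equilibria(W):
    """Enumerate saturated states s in {-1,+1}^n with sign(W s) == s.

    In the steep-sigmoid limit, a saturated state is an equilibrium exactly
    when every gene's net input agrees in sign with its current expression.
    """
    n = W.shape[0]
    eq = []
    for s in itertools.product((-1, 1), repeat=n):
        s = np.array(s)
        h = W @ s
        if np.all(np.sign(h) == s):  # assumes no zero net input
            eq.append(tuple(s))
    return eq

def basin_map(W, max_iter=100):
    """Map each saturated initial state to the equilibrium it converges to
    under synchronous updates s <- sign(W s), or None if it cycles."""
    eqs = set(saturated_equilibria(W))
    basins = {}
    for s0 in itertools.product((-1, 1), repeat=W.shape[0]):
        s = np.array(s0)
        for _ in range(max_iter):
            if tuple(s) in eqs:
                basins[s0] = tuple(s)
                break
            s = np.sign(W @ s).astype(int)
        else:
            basins[s0] = None  # trapped in a cycle
    return basins
```

For a Hebbian-style matrix storing one pattern, the pattern and its mirror come out as the two saturated equilibria, each with half of all initial states in its basin.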
NASA Astrophysics Data System (ADS)
Quan, Austin; Osorio, Ivan; Ohira, Toru; Milton, John
2011-12-01
Resonance can occur in bistable dynamical systems due to the interplay between noise and delay (τ) in the absence of a periodic input. We investigate resonance in a two-neuron model with mutual time-delayed inhibitory feedback. For appropriate choices of the parameters and inputs, three fixed points co-exist: two stable and one unstable. In the absence of noise, delay-induced transient oscillations (referred to herein as DITOs) arise whenever the initial function is tuned sufficiently close to the unstable fixed point. In the presence of noisy perturbations, DITOs arise spontaneously. Since the correlation time for the stationary dynamics is ~τ, we approximated a higher order Markov process by a three-state Markov chain model by rescaling time as t → 2sτ, identifying the states based on whether the sub-intervals were completely confined to one basin of attraction (the two stable attractors) or straddled the separatrix, and then determining the transition probability matrix empirically. The resultant Markov chain model captured the switching behaviors, including the statistical properties of the DITOs. Our observations indicate that time-delayed and noisy bistable dynamical systems are prone to generate DITOs as switches between the two attractors occur. Bistable systems arise transiently in situations when one attractor is gradually replaced by another. This may explain, for example, why seizures in certain epileptic syndromes tend to occur as sleep stages change.
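The empirical transition-matrix step can be sketched generically. This minimal version (my own sketch, not the authors' code) assumes the trajectory has already been divided into labelled sub-intervals, e.g. 0 and 1 for windows confined to the two basins and 2 for windows straddling the separatrix:

```python
import numpy as np

def empirical_transition_matrix(states, n_states=3):
    """Estimate a Markov transition matrix from a sequence of state labels.

    P[i, j] is the empirical probability of moving from state i to state j
    between consecutive sub-intervals.
    """
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0  # leave unseen states as all-zero rows
    return counts / rows
```

Each row of the result sums to one (for states that were visited), so it can be iterated directly to reproduce switching statistics.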
NASA Astrophysics Data System (ADS)
Pusuluri, Sai Teja
Energy landscapes are often used as metaphors for phenomena in biology, the social sciences, and finance. Different methods have been implemented in the past for the construction of energy landscapes. Neural network models based on spin glass physics provide an excellent mathematical framework for the construction of energy landscapes. This framework uses a minimal number of parameters and constructs the landscape using data from the actual phenomena. In the past, neural network models were used to mimic the storage and retrieval of memories (patterns) in the brain. With recent advances in the field, these models are now used in machine learning, deep learning, and the modeling of complex phenomena. Most of the past literature focuses on increasing the storage capacity and stability of stored patterns in the network but does not study these models from a modeling perspective or an energy landscape perspective. This dissertation focuses on neural network models from both a modeling perspective and an energy landscape perspective. I first show how the cellular interconversion phenomenon can be modeled as a transition between attractor states on an epigenetic landscape constructed using neural network models. The model allows the identification of a reaction coordinate of cellular interconversion by analyzing experimental and simulation time course data. Monte Carlo simulations of the model show that the initial phase of cellular interconversion is a Poisson process and the later phase is a deterministic process. Second, I explore the static features of landscapes generated using neural network models, such as the sizes of basins of attraction and the densities of metastable states. The simulation results show that the static landscape features depend strongly on the correlation strength and correlation structure between patterns; different hierarchical correlation structures between patterns yield different landscape features.
These results show how the static landscape features can be controlled by adjusting the correlations between patterns. Finally, I explore the dynamical features of landscapes generated using neural network models, such as the stability of minima and the transition rates between minima. The results from this project show that stability depends on the correlations between patterns. It is also found that the transition rates between minima depend strongly on the type of bias applied and on the correlations between patterns. The results from this part of the dissertation can be useful for engineering an energy landscape even without complete information about the landscape's minima.
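The spin-glass framework referred to here can be illustrated with a minimal Hopfield-style sketch (a toy construction, not the dissertation's model): Hebbian weights store ±1 patterns as minima of an energy function, and zero-temperature dynamics descend into the nearest basin.

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian synaptic matrix for +-1 patterns (zero self-coupling)."""
    P = np.array(patterns)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, s):
    """Hopfield energy; stored patterns sit at local minima."""
    return -0.5 * s @ W @ s

def descend(W, s):
    """Zero-temperature asynchronous updates until a local minimum."""
    s = s.copy()
    changed = True
    while changed:
        changed = False
        for i in range(len(s)):
            new = 1 if W[i] @ s >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
    return s
```

Sampling many random starts of `descend` and histogramming the minima reached is the basic way basin sizes and metastable-state densities, as studied in the dissertation, are estimated numerically.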
Dissociated Overt and Covert Recognition as an Emergent Property of Lesioned Attractor Networks
1992-01-01
Huang, Sui
2012-09-01
Current investigation of cancer progression towards increasing malignancy focuses on the molecular pathways that produce the various cancerous traits of cells. Their acquisition is explained by the somatic mutation theory: tumor progression is the result of a neo-Darwinian evolution in the tissue. Herein cells are the units of selection. Random genetic mutations permanently affecting these pathways create malignant cell phenotypes that are selected for in the disturbed tissue. However, could it be that the capacity of the genome and its gene regulatory network to generate the vast diversity of cell types during development, i.e., to produce inheritable phenotypic changes without mutations, is harnessed by tumorigenesis to propel a directional change towards malignancy? Here we take an encompassing perspective, transcending the orthodoxy of molecular carcinogenesis and review mechanisms of somatic evolution beyond the Neo-Darwinian scheme. We discuss the central concept of "cancer attractors" - the hidden stable states of gene regulatory networks normally not occupied by cells. Noise-induced transitions into such attractors provide a source for randomness (chance) and regulatory constraints (necessity) in the acquisition of novel expression profiles that can be inherited across cell divisions, and hence, can be selected for. But attractors can also be reached in response to environmental signals - thus offering the possibility for inheriting acquired traits that can also be selected for. Therefore, we face the possibility of non-genetic (mutation-independent) equivalents to both Darwinian and Lamarckian evolution which may jointly explain the arrow of change pointing toward increasing malignancy. Copyright © 2012 Elsevier Ltd. All rights reserved.
Navier-Stokes-Voigt Equations with Memory in 3D Lacking Instantaneous Kinematic Viscosity
NASA Astrophysics Data System (ADS)
Di Plinio, Francesco; Giorgini, Andrea; Pata, Vittorino; Temam, Roger
2018-04-01
We consider a Navier-Stokes-Voigt fluid model where the instantaneous kinematic viscosity has been completely replaced by a memory term incorporating hereditary effects, in the presence of Ekman damping. Unlike the classical Navier-Stokes-Voigt system, the energy balance involves the spatial gradient of the past history of the velocity rather than providing an instantaneous control on the high modes. In spite of this difficulty, we show that our system is dissipative in the dynamical systems sense and even possesses regular global and exponential attractors of finite fractal dimension. Such features of asymptotic well-posedness in the absence of instantaneous high-mode dissipation appear to be unique within the realm of dynamical systems arising from fluid models.
18 CFR 1304.411 - Fish attractor, spawning, and habitat structures.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 18 Conservation of Power and Water Resources 2 2011-04-01 2011-04-01 false Fish attractor... STRUCTURES AND OTHER ALTERATIONS Miscellaneous § 1304.411 Fish attractor, spawning, and habitat structures. Fish attractors constitute potential obstructions and require TVA approval. (a) Fish attractors may be...
Hidden attractors in dynamical systems
NASA Astrophysics Data System (ADS)
Dudkowski, Dawid; Jafari, Sajad; Kapitaniak, Tomasz; Kuznetsov, Nikolay V.; Leonov, Gennady A.; Prasad, Awadhesh
2016-06-01
Complex dynamical systems, ranging from the climate and ecosystems to financial markets and engineering applications, typically have many coexisting attractors. This property of the system is called multistability. The final state, i.e., the attractor on which the multistable system evolves, depends strongly on the initial conditions. Additionally, such systems are very sensitive to noise and system parameters, so a sudden shift to a contrasting regime may occur. To understand the dynamics of these systems one has to identify all possible attractors and their basins of attraction. Recently, it has been shown that multistability is connected with the occurrence of unpredictable attractors, which have been called hidden attractors. The basins of attraction of hidden attractors do not touch unstable fixed points (if any exist) and are located far away from such points. Numerical localization of hidden attractors is not straightforward, since there are no transient processes leading to them from the neighborhoods of unstable fixed points, and special analytical-numerical procedures must be used. From the viewpoint of applications, the identification of hidden attractors is a major issue. Knowledge about the emergence and properties of hidden attractors can increase the likelihood that the system will remain on the most desirable attractor and reduce the risk of a sudden jump to undesired behavior. We review the most representative examples of hidden attractors and discuss their theoretical properties and experimental observations. We also describe numerical methods which allow identification of hidden attractors.
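The basin-identification step can be illustrated with a scan over initial conditions. This sketch uses a toy bistable ODE, dx/dt = x − x³, whose attractors are self-excited rather than hidden, purely to show the bookkeeping; locating genuinely hidden attractors requires the special analytical-numerical procedures the survey describes, since no neighborhood of an unstable fixed point leads to them.

```python
def simulate(x0, f, dt=0.01, steps=2000):
    """Crude forward-Euler flow, adequate for this one-dimensional toy."""
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

def basin_scan(f, grid, tol=0.05):
    """Group initial conditions by the attractor they reach.

    Final states are rounded to a grid of width `tol` so that numerically
    close endpoints are identified as the same attractor.
    """
    basins = {}
    for x0 in grid:
        key = round(round(simulate(x0, f) / tol) * tol, 3)
        basins.setdefault(key, []).append(x0)
    return basins
```

For dx/dt = x − x³ the scan recovers the two attractors ±1 with the negative and positive half-lines as their basins.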
Recurrent-neural-network-based Boolean factor analysis and its application to word clustering.
Frolov, Alexander A; Husek, Dusan; Polyakov, Pavel Yu
2009-07-01
The objective of this paper is to introduce a neural-network-based algorithm for word clustering as an extension of the neural-network-based Boolean factor analysis algorithm (Frolov, 2007). It is shown that this extended algorithm supports an even more complex model of signals that are supposed to be related to textual documents. It is hypothesized that every topic in textual data is characterized by a set of words which coherently appear in documents dedicated to a given topic. The appearance of each word in a document is coded by the activity of a particular neuron. In accordance with the Hebbian learning rule implemented in the network, sets of coherently appearing words (treated as factors) create tightly connected groups of neurons, hence revealing them as attractors of the network dynamics. The found factors are eliminated from the network memory by the Hebbian unlearning rule, facilitating the search for other factors. Topics related to the found sets of words can be identified based on the words' semantics. To make the method complete, a special technique based on a Bayesian procedure has been developed for the following purposes: first, to provide a complete description of factors in terms of component probability, and second, to enhance the accuracy of classifying whether a signal contains a given factor. Since it is assumed that every word may contribute to several topics, the proposed method is related to the method of fuzzy clustering. In this paper, we show that the results of Boolean factor analysis and fuzzy clustering are not contradictory, but complementary. To demonstrate the capabilities of this approach, the method is applied to two types of textual data on neural networks in two different languages. The obtained topics and corresponding words show good agreement, despite the fact that identical topics at Russian and English conferences contain different sets of keywords.
NASA Astrophysics Data System (ADS)
Kemeth, Felix P.; Haugland, Sindre W.; Krischer, Katharina
2018-05-01
Symmetry-broken states arise naturally in oscillatory networks. In this Letter, we investigate chaotic attractors in an ensemble of four mean-coupled Stuart-Landau oscillators with two oscillators being synchronized. We report that these states with partially broken symmetry, so-called chimera states, have different setwise symmetries in the incoherent oscillators; in particular, some are and some are not invariant under a permutation symmetry on average. This allows for a classification of different chimera states in small networks. We conclude our report with a discussion of related states in spatially extended systems, which seem to inherit the symmetry properties of their counterparts in small networks.
Attractors for discrete periodic dynamical systems
John E. Franke; James F. Selgrade
2003-01-01
A mathematical framework is introduced to study attractors of discrete, nonautonomous dynamical systems which depend periodically on time. A structure theorem for such attractors is established which says that the attractor of a time-periodic dynamical system is the union of attractors of appropriate autonomous maps. If the nonautonomous system is a perturbation of an...
NASA Astrophysics Data System (ADS)
Lai, Bang-Cheng; He, Jian-Jun
2018-03-01
In this paper, we construct a novel 4D autonomous chaotic system with four cross-product nonlinear terms and five equilibria. The multiple coexisting attractors and the multiscroll attractor of the system are numerically investigated. Research results show that the system has various types of multiple attractors, including three strange attractors with a limit cycle, three limit cycles, two strange attractors with a pair of limit cycles, and two coexisting strange attractors. By using passive control theory, a controller is designed for controlling the chaos of the system. Both analytical and numerical studies verify that the designed controller can suppress chaotic motion and stabilise the system at the origin. Moreover, an electronic circuit is presented for implementing the chaotic system.
Uenohara, Seiji; Mitsui, Takahito; Hirata, Yoshito; Morie, Takashi; Horio, Yoshihiko; Aihara, Kazuyuki
2013-06-01
We experimentally study strange nonchaotic attractors (SNAs) and chaotic attractors by using a nonlinear integrated circuit driven by a quasiperiodic input signal. An SNA is a geometrically strange attractor for which typical orbits have nonpositive Lyapunov exponents. It is a difficult problem to distinguish between SNAs and chaotic attractors experimentally. If a system has an SNA as a unique attractor, the system produces an identical response to a repeated quasiperiodic signal, regardless of the initial conditions, after a certain transient time. Such reproducibility of response outputs is called consistency. On the other hand, if the attractor is chaotic, the consistency is low owing to the sensitive dependence on initial conditions. In this paper, we analyze the experimental data for distinguishing between SNAs and chaotic attractors on the basis of the consistency.
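The consistency criterion itself is easy to sketch in code. The toy below (my construction, not the paper's circuit) drives a map twice with the identical quasiperiodic signal from different initial conditions: a contracting driven map responds consistently, while a chaotic map does not. The actual experiment would substitute the integrated-circuit dynamics for these stand-in maps.

```python
import math

def iterate(f, x0, theta0, n, omega=(math.sqrt(5) - 1) / 2):
    """Iterate a quasiperiodically driven map x' = f(x, theta)."""
    x, theta = x0, theta0
    out = []
    for _ in range(n):
        x = f(x, theta)
        theta = (theta + omega) % 1.0  # irrational rotation drive
        out.append(x)
    return out

def consistency_gap(f, x0a, x0b, theta0=0.1, n=400, tail=50):
    """Max output gap over the last `tail` steps for two initial conditions
    driven by the identical quasiperiodic signal. A small gap (consistency)
    indicates a nonchaotic response; a large gap indicates chaos."""
    a = iterate(f, x0a, theta0, n)
    b = iterate(f, x0b, theta0, n)
    return max(abs(u - v) for u, v in zip(a[-tail:], b[-tail:]))
```

The contracting map forgets its initial condition geometrically, so the gap vanishes; the logistic map's sensitive dependence keeps the gap order one.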
Modelling and prediction for chaotic fir laser attractor using rational function neural network.
Cho, S
2001-02-01
Many real-world systems, such as irregular ECG signals, the volatility of currency exchange rates, and heated fluid reactions, exhibit the highly complex nonlinear characteristic known as chaos. These chaotic systems cannot be treated satisfactorily using linear system theory due to their high dimensionality and irregularity. This research focuses on prediction and modelling of a chaotic FIR (Far InfraRed) laser system for which the underlying equations are not given. This paper proposes a method for predicting and modelling a chaotic FIR laser time series using a rational function neural network. Three network architectures, the TDNN (Time Delayed Neural Network), the RBF (radial basis function) network, and the RF (rational function) network, are presented. Comparisons of these networks' performance show the improvements introduced by the RF network in terms of reduced network complexity and better predictive ability.
Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya
2016-01-01
To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human's linguistic instruction. After learning, the network formed an attractor structure representing both the language-behavior relationships and the task's temporal pattern in its internal dynamics. In this dynamics, language-behavior mapping was achieved by a branching structure. Repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, while waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human on the given task by autonomously switching phases.
Lemieux, Maxime; Josset, Nicolas; Roussel, Marie; Couraud, Sébastien; Bretzner, Frédéric
2016-01-01
Locomotion results from an interplay between biomechanical constraints of the muscles attached to the skeleton and the neuronal circuits controlling and coordinating muscle activities. Quadrupeds exhibit a wide range of locomotor gaits. Given our advances in the genetic identification of spinal and supraspinal circuits important to locomotion in the mouse, it is now important to get a better understanding of the full repertoire of gaits in the freely walking mouse. To assess this range, young adult C57BL/6J mice were trained to walk and run on a treadmill at different locomotor speeds. Instead of using the classical paradigm defining gaits according to their footfall pattern, we combined the inter-limb coupling and the duty cycle of the stance phase, thus identifying several types of gaits: lateral walk, trot, out-of-phase walk, rotary gallop, transverse gallop, hop, half-bound, and full-bound. Out-of-phase walk, trot, and full-bound were robust and appeared to function as attractor gaits (i.e., a state to which the network flows and stabilizes) at low, intermediate, and high speeds respectively. In contrast, lateral walk, hop, transverse gallop, rotary gallop, and half-bound were more transient and therefore considered transitional gaits (i.e., a labile state of the network from which it flows to the attractor state). Surprisingly, lateral walk was less frequently observed. Using graph analysis, we demonstrated that transitions between gaits were predictable, not random. In summary, the wild-type mouse exhibits a wider repertoire of locomotor gaits than expected. Future locomotor studies should benefit from this paradigm in assessing transgenic mice or wild-type mice with neurotraumatic injury or neurodegenerative disease affecting gait.
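The gait-definition paradigm, combining inter-limb coupling with the stance duty cycle, can be caricatured as a small decision rule. The thresholds below are illustrative assumptions, not the values used in the study:

```python
def classify_gait(hind_phase, duty):
    """Label a gait from hindlimb phase lag (fraction of the locomotor
    cycle: 0 = in phase, 0.5 = alternating) and stance duty cycle.
    Thresholds are illustrative, not fitted to the mouse data."""
    alternating = 0.4 <= hind_phase <= 0.6
    in_phase = hind_phase <= 0.1 or hind_phase >= 0.9
    if alternating and duty > 0.6:
        return "out-of-phase walk"
    if alternating:
        return "trot"
    if in_phase and duty < 0.4:
        return "bound/hop"
    return "gallop"
```

Plotting observed strides in the (phase, duty) plane and counting dwell times per label is one way the attractor vs. transitional distinction drawn above could be quantified.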
Remembering the past and imagining the future
Byrne, Patrick; Becker, Suzanna; Burgess, Neil
2009-01-01
The neural mechanisms underlying spatial cognition are modeled, integrating neuronal, systems, and behavioural data, and addressing the relationships between long-term memory, short-term memory, and imagery, and between egocentric and allocentric and visual and idiothetic representations. Long-term spatial memory is modeled as attractor dynamics within medial-temporal allocentric representations, and short-term memory as egocentric parietal representations driven by perception, retrieval, and imagery, and modulated by directed attention. Both encoding and retrieval/imagery require translation between egocentric and allocentric representations, mediated by posterior parietal and retrosplenial areas and utilizing head direction representations in Papez’s circuit. Thus the hippocampus effectively indexes information by real or imagined location, while Papez’s circuit translates to imagery or from perception according to the direction of view. Modulation of this translation by motor efference allows “spatial updating” of representations, while prefrontal simulated motor efference allows mental exploration. The alternating temporo-parietal flows of information are organized by the theta rhythm. Simulations demonstrate the retrieval and updating of familiar spatial scenes, hemispatial neglect in memory, and the effects on hippocampal place cell firing of lesioned head direction representations and of conflicting visual and idiothetic inputs. PMID:17500630
Network resiliency through memory health monitoring and proactive management
Andrade Costa, Carlos H.; Cher, Chen-Yong; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.
2017-11-21
A method for managing a network queue memory includes receiving sensor information about the network queue memory, predicting a memory failure in the network queue memory based on the sensor information, and outputting a notification through a plurality of nodes forming a network and using the network queue memory, the notification configuring communications between the nodes.
d=4 attractors, effective horizon radius, and fake supergravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrara, Sergio; INFN-Laboratori Nazionali di Frascati, Via Enrico Fermi 40, I-00044 Frascati; Gnecchi, Alessandra
2008-09-15
We consider extremal black hole attractors [both Bogomol'nyi-Prasad-Sommerfield (BPS) and non-BPS] for N=3 and N=5 supergravity in d=4 space-time dimensions. Attractors for the matter-coupled N=3 theory are similar to attractors in N=2 supergravity minimally coupled to Abelian vector multiplets. On the other hand, N=5 attractors are similar to attractors in N=4 pure supergravity, and in such theories only (1/N)-BPS nondegenerate solutions exist. All the above-mentioned theories have a simple interpretation in the first order (fake supergravity) formalism. Furthermore, such theories do not have a d=5 uplift. Finally we comment on the duality relations among the attractor solutions of N ≥ 2 supergravities sharing the same full bosonic sector.
A Simplified Algorithm for Statistical Investigation of Damage Spreading
NASA Astrophysics Data System (ADS)
Gecow, Andrzej
2009-04-01
On the way to simulating the adaptive evolution of a complex system describing a living object or a human-developed project, a fitness should be defined on node states or network external outputs. Feedbacks lead to circular attractors of these states or outputs, which make it difficult to define a fitness. The main statistical effects of the adaptive condition are the result of a small-change tendency, and to appear they need only a statistically correct size of the damage initiated by an evolutionary change of the system. This observation allows us to cut the loops of feedbacks and in effect to obtain a particular statistically correct state instead of a long circular attractor, which in the quenched model is expected for a chaotic network with feedback. Defining fitness on such states is simple. We calculate only damaged nodes, and only once. Such an algorithm is optimal for the investigation of damage spreading, i.e., the statistical connections between structural parameters of the initial change and the size of the resulting damage. It is a reversed-annealed method: functions and states (signals) may be randomly substituted, but connections are important and are preserved. The small damages important for adaptive evolution are correctly depicted, in contrast to the Derrida annealed approximation, which expects equilibrium levels for large networks. The algorithm indicates these levels correctly. The relevant program in Pascal, which executes the algorithm for a wide range of parameters, can be obtained from the author.
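A minimal version of such a damage-spreading pass, in Python rather than the author's Pascal, visits every node at most once so that feedback loops are effectively cut; the probability p that a damaged input flips a node's output is a stand-in for randomly drawn node functions.

```python
from collections import deque

def damage_size(outputs, start, p, rng):
    """Count nodes damaged by an initial change at `start`.

    `outputs[i]` lists the nodes fed by node i. Each node is examined at
    most once, which cuts feedback loops and yields a single statistically
    correct damaged set instead of a long circular attractor.
    """
    damaged = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in outputs[node]:
            if nxt not in damaged and rng.random() < p:
                damaged.add(nxt)
                queue.append(nxt)
    return len(damaged)
```

Because a node already in `damaged` is never re-enqueued, the pass terminates even on a pure feedback ring.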
The up and down states of cortical networks
NASA Astrophysics Data System (ADS)
Ghorbani, Maryam; Levine, Alex J.; Mehta, Mayank; Bruinsma, Robijn
2011-03-01
Cortical networks show a collective activity of alternating active and silent states, known as up and down states, during slow wave sleep or anesthesia. The mechanism of this spontaneous activity, as well as of anesthesia and sleep themselves, is still not clear. Here, using a mean field approach, we present a simple model to study the spontaneous activity of a homogeneous cortical network of recurrently connected excitatory and inhibitory neurons. A key new ingredient in this model is that activity-dependent synaptic depression is considered only for the excitatory neurons. We find that, depending on the strength of the synaptic depression and the synaptic efficacies, the phase space contains strange attractors or stable fixed points in the active or quiescent regimes. In the strange-attractor phase, we obtain oscillations similar to up and down states, with flat and noisy up states. Moreover, we show that by increasing the synaptic efficacy of the connections between the excitatory neurons, the characteristics of the up and down states change in agreement with the changes that we observe in intracellular recordings of the membrane potential from the entorhinal cortex as the depth of anesthesia is varied. Thus, we propose that by measuring the value of this synaptic efficacy, one can quantify the depth of anesthesia, which is clinically very important. These findings provide a simple, analytical understanding of spontaneous cortical dynamics.
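The model class described here, recurrent excitatory and inhibitory rates with depression acting only on the excitatory synapses, can be sketched as three coupled ODEs. The parameter values below are illustrative placeholders, not the fitted ones:

```python
import math

def simulate_updown(T=2000.0, dt=0.05, tau_e=1.0, tau_i=0.5, tau_r=50.0,
                    u=0.2, w_ee=8.0, w_ei=6.0, w_ie=4.0, w_ii=1.0, I=-1.0):
    """Euler-integrate a mean-field E-I rate model in which only the
    excitatory synapses depress (resource variable s)."""
    f = lambda x: 1.0 / (1.0 + math.exp(-x))  # sigmoidal rate function
    r_e, r_i, s = 0.1, 0.1, 1.0
    trace = []
    for _ in range(int(T / dt)):
        # depression s multiplies only the E-to-E coupling
        dr_e = (-r_e + f(w_ee * s * r_e - w_ei * r_i + I)) / tau_e
        dr_i = (-r_i + f(w_ie * r_e - w_ii * r_i + I)) / tau_i
        ds = (1.0 - s) / tau_r - u * s * r_e
        r_e += dt * dr_e
        r_i += dt * dr_i
        s += dt * ds
        trace.append((r_e, r_i, s))
    return trace
```

Sweeping `w_ee` and `u` over a grid and inspecting the trace is the natural way to map out the fixed-point vs. oscillatory regimes the abstract describes.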
Dynamics of delay-coupled FitzHugh-Nagumo neural rings.
Mao, Xiaochen; Sun, Jianqiao; Li, Shaofan
2018-01-01
This paper studies the dynamical behaviors of a pair of FitzHugh-Nagumo neural networks with bidirectional delayed couplings. It presents a detailed analysis of delay-independent and delay-dependent stabilities and the existence of bifurcated oscillations. Illustrative examples are performed to validate the analytical results and to discover interesting phenomena. It is shown that the network exhibits a variety of complicated activities, such as multiple stability switches, the coexistence of periodic and quasi-periodic oscillations, the coexistence of periodic and chaotic orbits, and the coexisting chaotic attractors.
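A minimal numerical companion to this analysis: two FitzHugh-Nagumo units with mutual delayed coupling, integrated by forward Euler with a ring buffer holding each unit's delayed history. Parameters are illustrative assumptions, not those analysed in the paper.

```python
from collections import deque

def coupled_fhn(T=50.0, dt=0.01, tau=1.0, c=0.5, eps=0.08, a=0.7, b=0.8):
    """Two FitzHugh-Nagumo neurons, each receiving the other's voltage
    delayed by tau. A fixed-length deque stores the history."""
    lag = int(tau / dt)
    v = [0.1, -0.1]
    w = [0.0, 0.0]
    hist = [deque([v[0]] * lag, maxlen=lag), deque([v[1]] * lag, maxlen=lag)]
    trace = []
    for _ in range(int(T / dt)):
        delayed = (hist[0][0], hist[1][0])  # oldest entry is v_j(t - tau)
        dv = [v[i] - v[i] ** 3 / 3 - w[i] + c * delayed[1 - i] for i in range(2)]
        dw = [eps * (v[i] + a - b * w[i]) for i in range(2)]
        for i in range(2):
            hist[i].append(v[i])  # record v_i(t) before stepping
            v[i] += dt * dv[i]
            w[i] += dt * dw[i]
        trace.append(tuple(v))
    return trace
```

Varying `tau` and `c` in this sketch and watching the trace switch between steady, periodic, and irregular regimes mirrors the stability switches and coexisting oscillations analysed in the paper.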
Chaotic itinerancy in the oscillator neural network without Lyapunov functions.
Uchiyama, Satoki; Fujisaka, Hirokazu
2004-09-01
Chaotic itinerancy (CI), which is defined as an incessant spontaneous switching phenomenon among attractor ruins in deterministic dynamical systems without Lyapunov functions, is numerically studied in the case of an oscillator neural network model. The model is the pseudoinverse-matrix version of the previous model [S. Uchiyama and H. Fujisaka, Phys. Rev. E 65, 061912 (2002)] that was studied theoretically with the aid of statistical neurodynamics. It is found that CI in neural nets can be understood as the intermittent dynamics of weakly destabilized chaotic retrieval solutions. Copyright 2004 American Institute of Physics
Attracting Dynamics of Frontal Cortex Ensembles during Memory-Guided Decision-Making
Seamans, Jeremy K.; Durstewitz, Daniel
2011-01-01
A common theoretical view is that attractor-like properties of neuronal dynamics underlie cognitive processing. However, although often proposed theoretically, direct experimental support for the convergence of neural activity to stable population patterns as a signature of attracting states has been sparse so far, especially in higher cortical areas. Combining state space reconstruction theorems and statistical learning techniques, we were able to resolve details of anterior cingulate cortex (ACC) multiple single-unit activity (MSUA) ensemble dynamics during a higher cognitive task which were not accessible previously. The approach worked by constructing high-dimensional state spaces from delays of the original single-unit firing rate variables and the interactions among them, which were then statistically analyzed using kernel methods. We observed cognitive-epoch-specific neural ensemble states in ACC which were stable across many trials (in the sense of being predictive) and depended on behavioral performance. More interestingly, attracting properties of these cognitively defined ensemble states became apparent in high-dimensional expansions of the MSUA spaces due to a proper unfolding of the neural activity flow, with properties common across different animals. These results therefore suggest that ACC networks may process different subcomponents of higher cognitive tasks by transiting among different attracting states. PMID:21625577
Zhao, Lei; Wang, Jin
2016-11-01
Recent studies on Caenorhabditis elegans reveal that gene manipulations can extend its lifespan several fold. However, how the genes work together to determine longevity is still an open question. Here we construct a gene regulatory network for worm ageing and quantify its underlying potential and flux landscape. We found that ageing and rejuvenation states can emerge as basins of attraction at certain gene expression levels. The system state can switch from one attractor to another, driven by intrinsic or external perturbations through genetics or the environment. Furthermore, we simulated gene silencing experiments and found that the silencing of longevity-promoting or lifespan-limiting genes leads to ageing or rejuvenation domination, respectively. This indicates that the difference in depth between the ageing and rejuvenation attractors is highly correlated with worm longevity. We further uncovered some key genes and regulations which have a strong influence on landscape basin stability. A dynamic landscape model is proposed to describe the whole process of ageing: the ageing attractor dominates as senescence progresses. We also uncovered the oscillation dynamics, and a similar behaviour was observed in the long-lived creature Turritopsis dohrnii. Our landscape theory provides a global and physical approach to explore the underlying mechanisms of ageing. © 2016 The Author(s).
Ashwin, Peter; Podvigina, Olga
2010-06-01
We investigate the robust heteroclinic dynamics arising in a system of ordinary differential equations in R(4) with symmetry [Formula in text]. This system arises from the normal form reduction of a 1:√2 mode interaction for Boussinesq convection. We investigate the structure of a particular robust heteroclinic attractor with "depth two connections" from equilibria to subcycles as well as connections between equilibria. The "subcycle" is not asymptotically stable, due to nearby trajectories undertaking an "excursion," but it is a Milnor attractor, meaning that a positive-measure set of nearby initial conditions converges to the subcycle. We investigate the dynamics in the presence of noise and find a number of interesting properties. We confirm that typical trajectories wind around the subcycle with very occasional excursions near a depth two connection. The frequency of excursions depends on the noise intensity in a subtle manner; in particular, for anisotropic noise, the depth two connection may be visited much more often than for isotropic noise, and more generally the long-term statistics of the system depend not only on the noise strength but also on the anisotropy of the noise. Similar properties are confirmed in simulations of Boussinesq convection for parameters giving an attractor with depth two connections. (c) 2010 American Institute of Physics.
NASA Astrophysics Data System (ADS)
Lai, Qiang; Zhao, Xiao-Wen; Rajagopal, Karthikeyan; Xu, Guanghui; Akgul, Akif; Guleryuz, Emre
2018-01-01
This paper considers the generation of multi-butterfly chaotic attractors from a generalised Sprott C system with multiple non-hyperbolic equilibria. The system is constructed by introducing an additional variable whose derivative has a switching function to the Sprott C system. It is numerically found that the system creates two-, three-, four-, five-butterfly attractors and any other multi-butterfly attractors. First, the dynamic analyses of multi-butterfly chaotic attractors are presented. Secondly, the field programmable gate array implementation, electronic circuit realisation and random number generator are done with the multi-butterfly chaotic attractors.
NASA Astrophysics Data System (ADS)
Vaidyanathan, S.; Sambas, A.; Sukono; Mamat, M.; Gundara, G.; Mada Sanjaya, W. S.; Subiyanto
2018-03-01
A new 3-D chaotic system with two quadratic nonlinearities is proposed in this paper. The dynamical properties of the new chaotic system are described in terms of phase portraits, equilibrium points, Lyapunov exponents, Kaplan-Yorke dimension, dissipativity, etc. We show that the new chaotic system has three unstable equilibrium points. The new chaotic attractor is dissipative in nature. As an engineering application, adaptive synchronization of identical new chaotic attractors is designed via nonlinear control and Lyapunov stability theory. Furthermore, an electronic circuit realization of the new chaotic attractor is presented in detail to confirm the feasibility of the theoretical chaotic attractor model.
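The abstract does not reproduce the system's equations, but the Kaplan-Yorke dimension it mentions is a standard computation from a Lyapunov spectrum. A minimal sketch, using an assumed example spectrum for a dissipative 3-D chaotic flow:

```python
def kaplan_yorke_dimension(exponents):
    """Kaplan-Yorke (Lyapunov) dimension from a Lyapunov spectrum.

    D_KY = j + (sum of the j largest exponents) / |lambda_{j+1}|,
    where j is the largest index with a non-negative partial sum.
    """
    lam = sorted(exponents, reverse=True)
    s, j = 0.0, 0
    for k, l in enumerate(lam):
        if s + l >= 0:
            s += l
            j = k + 1
        else:
            break
    if j == len(lam):            # partial sums never go negative
        return float(j)
    return j + s / abs(lam[j])

# Illustrative spectrum for a dissipative 3-D chaotic flow (assumed values):
print(kaplan_yorke_dimension([0.9, 0.0, -14.0]))   # 2 + 0.9/14 ≈ 2.0643
```

A dimension strictly between 2 and 3, as here, is the usual signature of a strange attractor in a dissipative 3-D flow.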
Applying Chaos Theory to Careers: Attraction and Attractors
ERIC Educational Resources Information Center
Pryor, Robert G. L.; Bright, Jim E. H.
2007-01-01
This article presents the Chaos Theory of Careers with particular reference to the concepts of "attraction" and "attractors". Attractors are defined in terms of characteristic trajectories, feedback mechanisms, end states, ordered boundedness, reality visions and equilibrium and fluctuation. The identified types of attractors (point, pendulum,…
Canalization and Control in Automata Networks: Body Segmentation in Drosophila melanogaster
Marques-Pita, Manuel; Rocha, Luis M.
2013-01-01
We present schema redescription as a methodology to characterize canalization in automata networks used to model biochemical regulation and signalling. In our formulation, canalization becomes synonymous with redundancy present in the logic of automata. This results in straightforward measures to quantify canalization in an automaton (micro-level), which is in turn integrated into a highly scalable framework to characterize the collective dynamics of large-scale automata networks (macro-level). This way, our approach provides a method to link micro- to macro-level dynamics – a crux of complexity. Several new results ensue from this methodology: uncovering of dynamical modularity (modules in the dynamics rather than in the structure of networks), identification of minimal conditions and critical nodes to control the convergence to attractors, simulation of dynamical behaviour from incomplete information about initial conditions, and measures of macro-level canalization and robustness to perturbations. We exemplify our methodology with a well-known model of the intra- and inter-cellular genetic regulation of body segmentation in Drosophila melanogaster. We use this model to show that our analysis does not contradict any previous findings. But we also obtain new knowledge about its behaviour: a better understanding of the size of its wild-type attractor basin (larger than previously thought), the identification of novel minimal conditions and critical nodes that control wild-type behaviour, and the resilience of these to stochastic interventions. Our methodology is applicable to any complex network that can be modelled using automata, but we focus on biochemical regulation and signalling, towards a better understanding of the (decentralized) control that orchestrates cellular activity – with the ultimate goal of explaining how cells and tissues ‘compute’. PMID:23520449
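The schema-redescription machinery itself is beyond a short sketch, but the underlying objects (synchronous Boolean-network attractors and their basin sizes) can be enumerated exhaustively for a small network. The three-node rules below are illustrative, not the Drosophila segmentation model:

```python
from itertools import product

# Toy 3-node synchronous Boolean network (illustrative rules only):
# each update function maps the full network state to one new bit.
rules = [
    lambda s: s[1] and not s[2],   # node 0
    lambda s: s[0] or s[2],        # node 1
    lambda s: s[0],                # node 2
]

def step(state):
    """One synchronous update of all nodes."""
    return tuple(int(f(state)) for f in rules)

def attractors_and_basins():
    """Map each attractor (set of states on the cycle) to its basin size."""
    basin = {}
    for s0 in product((0, 1), repeat=3):
        seen, s = [], s0
        while s not in seen:                      # iterate until a revisit
            seen.append(s)
            s = step(s)
        cycle = frozenset(seen[seen.index(s):])   # states on the attractor
        basin[cycle] = basin.get(cycle, 0) + 1
    return basin
```

For these rules the 8-state space splits into a fixed point (0,0,0) with a basin of one state and a 3-cycle that attracts the remaining seven, illustrating the kind of basin-size bookkeeping the paper performs at scale.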
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeffries, C.; Perez, J.
For a driven nonlinear oscillator we report direct evidence for three cases of an interior crisis of the attractor, as conjectured by Grebogi, Ott, and Yorke. These crises are sudden and discontinuous changes in the attractor, observed directly from bifurcation diagrams and attractor diagrams (Poincare sections) in real time. The crises arise from intersection of an unstable orbit with the chaotic attractor.
Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State.
Lagzi, Fereshteh; Rotter, Stefan
2015-01-01
We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, for instance decision-making processes in perception and cognition have been implicated with it. The network under study here is comprised of three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the "within" versus "between" connection weights (bifurcation parameter), the network activation spontaneously switches between the two sub-networks of the same type. This kind of dynamics has been termed "winnerless competition", which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node, its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. 
A similar model for a larger number of populations might suggest a general approach to study the dynamics of interacting populations of spiking networks.
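A minimal Euler-Maruyama sketch of the Lotka-Volterra-type competition with multiplicative noise described above; the coefficients, noise level and initial conditions are illustrative assumptions rather than the values the paper derives from the spiking-network parameters:

```python
import numpy as np

# Two competing populations x1, x2 with symmetric Lotka-Volterra
# competition (alpha > 1 gives bistability) and multiplicative noise.
def simulate_lv(x0=(0.6, 0.4), alpha=2.0, sigma=0.3,
                dt=0.01, n_steps=50_000, seed=1):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    winner = np.empty(n_steps, dtype=int)
    for k in range(n_steps):
        drift = x * (1.0 - x - alpha * x[::-1])               # competition
        noise = sigma * x * rng.normal(size=2) * np.sqrt(dt)  # multiplicative
        x = np.clip(x + drift * dt + noise, 1e-9, 10.0)
        winner[k] = int(x[1] > x[0])          # which population dominates
    switches = int(np.count_nonzero(np.diff(winner)))
    return winner, switches
```

With sigma = 0 the system is bistable: the initially larger population wins, and the diagonal x1 = x2 (the stable manifold of the saddle) is the basin boundary; with noise, switches counts the stochastic transitions between the two attractors.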
Information and the Origin of Qualia
Orpwood, Roger
2017-01-01
This article argues that qualia are a likely outcome of the processing of information in local cortical networks. It uses an information-based approach and makes a distinction between information structures (the physical embodiment of information in the brain, primarily patterns of action potentials), and information messages (the meaning of those structures to the brain, and the basis of qualia). It develops formal relationships between these two kinds of information, showing how information structures can represent messages, and how information messages can be identified from structures. The article applies this perspective to basic processing in cortical networks or ensembles, showing how networks can transform between the two kinds of information. The article argues that an input pattern of firing is identified by a network as an information message, and that the output pattern of firing generated is a representation of that message. If a network is encouraged to develop an attractor state through attention or other re-entrant processes, then the message identified each time physical information is cycled through the network becomes “representation of the previous message”. Using an example of olfactory perception, it is shown how this piggy-backing of messages on top of previous messages could lead to olfactory qualia. The message identified on each pass of information could evolve from inner identity, to inner form, to inner likeness or image. The outcome is an olfactory quale. It is shown that the same outcome could result from information cycled through a hierarchy of networks in a resonant state. The argument for qualia generation is applied to other sensory modalities, showing how, through a process of brain-wide constraint satisfaction, a particular state of consciousness could develop at any given moment. 
Evidence for some of the key predictions of the theory is presented, using ECoG data and studies of gamma oscillations and attractors, together with an outline of what further evidence is needed to provide support for the theory. PMID:28484376
Episodic memory in aspects of large-scale brain networks
Jeong, Woorim; Chung, Chun Kee; Kim, June Sic
2015-01-01
Understanding human episodic memory in aspects of large-scale brain networks has become one of the central themes in neuroscience over the last decade. Traditionally, episodic memory was regarded as mostly relying on medial temporal lobe (MTL) structures. However, recent studies have suggested involvement of more widely distributed cortical network and the importance of its interactive roles in the memory process. Both direct and indirect neuro-modulations of the memory network have been tried in experimental treatments of memory disorders. In this review, we focus on the functional organization of the MTL and other neocortical areas in episodic memory. Task-related neuroimaging studies together with lesion studies suggested that specific sub-regions of the MTL are responsible for specific components of memory. However, recent studies have emphasized that connectivity within MTL structures and even their network dynamics with other cortical areas are essential in the memory process. Resting-state functional network studies also have revealed that memory function is subserved by not only the MTL system but also a distributed network, particularly the default-mode network (DMN). Furthermore, researchers have begun to investigate memory networks throughout the entire brain not restricted to the specific resting-state network (RSN). Altered patterns of functional connectivity (FC) among distributed brain regions were observed in patients with memory impairments. Recently, studies have shown that brain stimulation may impact memory through modulating functional networks, carrying future implications of a novel interventional therapy for memory impairment. PMID:26321939
A hexapod walker using a heterarchical architecture for action selection
Schilling, Malte; Paskarbeit, Jan; Hoinville, Thierry; Hüffmeier, Arne; Schneider, Axel; Schmitz, Josef; Cruse, Holk
2013-01-01
Moving in a cluttered environment with a six-legged walking machine that has additional body actuators, and therefore controlling 22 DoFs, is not a trivial task. Even simple forward walking on a flat plane requires the system to select between different internal states. The orchestration of these states depends on walking velocity and on external disturbances. Such disturbances occur continuously, for example due to irregular up-and-down movements of the body or slipping of the legs, even on flat surfaces, in particular when negotiating tight curves. The number of possible states is further increased when the system is allowed to walk backward or when the front legs are used as grippers and cannot contribute to walking. Further states are necessary for extensions that allow for navigation. Here we demonstrate a solution for the selection and sequencing of the different (attractor) states required to control different behaviors, such as forward walking at different speeds, backward walking, and negotiation of tight curves. This selection is made by a recurrent neural network (RNN) of motivation units, controlling a bank of decentralized memory elements in combination with feedback through the environment. The underlying heterarchical architecture of the network allows the selection of various combinations of these elements. This modular approach, an example of the neural reuse of a limited number of procedures, allows for adaptation to different internal and external conditions. A way is sketched by which this approach may be expanded to form a cognitive system able to plan ahead. This architecture is characterized by different types of modules arranged in layers and columns, but the complete network can also be considered a holistic system showing emergent properties which cannot be attributed to a specific module. PMID:24062682
Ising formulation of associative memory models and quantum annealing recall
NASA Astrophysics Data System (ADS)
Santra, Siddhartha; Shehab, Omar; Balu, Radhakrishnan
2017-12-01
Associative memory models, in theoretical neuro- and computer sciences, can generally store at most a linear number of memories. Recalling memories in these models can be understood as retrieval of the energy minimizing configuration of classical Ising spins, closest in Hamming distance to an imperfect input memory, where the energy landscape is determined by the set of stored memories. We present an Ising formulation for associative memory models and consider the problem of memory recall using quantum annealing. We show that allowing for input-dependent energy landscapes allows storage of up to an exponential number of memories (in terms of the number of neurons). Further, we show how quantum annealing may naturally be used for recall tasks in such input-dependent energy landscapes, although the recall time may increase with the number of stored memories. Theoretically, we obtain the radius of attractor basins R (N ) and the capacity C (N ) of such a scheme and their tradeoffs. Our calculations establish that for randomly chosen memories the capacity of our model using the Hebbian learning rule as a function of problem size can be expressed as C (N ) =O (eC1N) , C1≥0 , and succeeds on randomly chosen memory sets with a probability of (1 -e-C2N) , C2≥0 with C1+C2=(0.5-f ) 2/(1 -f ) , where f =R (N )/N , 0 ≤f ≤0.5 , is the radius of attraction in terms of the Hamming distance of an input probe from a stored memory as a fraction of the problem size. We demonstrate the application of this scheme on a programmable quantum annealing device, the D-wave processor.
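For contrast with the exponential-capacity scheme, a minimal sketch of the classical Hebbian baseline the abstract refers to (linear capacity, a single fixed energy landscape). The network size, memory load and 10% corruption level are illustrative; the input-dependent landscapes and quantum-annealing recall of the paper are not modeled here:

```python
import numpy as np

# Classical Hopfield associative memory: store P patterns xi^mu in
# W = (1/N) * sum_mu xi xi^T, recall by iterating s <- sign(W s).
def hebbian_weights(patterns):
    X = np.asarray(patterns, dtype=float)        # shape (P, N)
    W = X.T @ X / X.shape[1]                     # Hebbian learning rule
    np.fill_diagonal(W, 0.0)                     # no self-coupling
    return W

def recall(W, probe, n_iter=20):
    s = np.asarray(probe, dtype=float).copy()
    for _ in range(n_iter):                      # synchronous sign updates
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

rng = np.random.default_rng(0)
N, P = 100, 5                                    # load P/N well below 0.138
patterns = rng.choice([-1.0, 1.0], size=(P, N))
W = hebbian_weights(patterns)
probe = patterns[0].copy()
probe[rng.choice(N, size=10, replace=False)] *= -1   # flip 10% of bits
m = recall(W, probe) @ patterns[0] / N           # overlap with stored memory
```

The overlap m is expected to be close to 1: with f = 0.1 (in the paper's notation, Hamming distance of the probe as a fraction of N), the probe lies well inside the stored memory's basin of attraction.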
2007-03-01
partners for their mutual benefit. Unfortunately, based on government reports, FEMA did not have adequate control of its supply chain information ...is one attractor. "Edge of chaos" systems have two to eight attractors, and chaotic systems have many attractors. Some are called strange attractors ...investigates whether chaos theory, part of complexity science, can extract information from Katrina contracting data to help managers make better logistics
James F. Selgrade; James H. Roberds
2001-01-01
This work discusses the effects of periodic forcing on attracting cycles and more complicated attractors for autonomous systems of nonlinear difference equations. Results indicate that an attractor for a periodically forced dynamical system may inherit structure from an attractor of the autonomous (unforced) system and also from the periodicity of the forcing. In...
Coexistence of Multiple Attractors in an Active Diode Pair Based Chua’s Circuit
NASA Astrophysics Data System (ADS)
Bao, Bocheng; Wu, Huagan; Xu, Li; Chen, Mo; Hu, Wen
This paper focuses on the coexistence of multiple attractors in an active diode pair based Chua’s circuit with smooth nonlinearity. With dimensionless equations, dynamical properties, including the boundedness of system orbits and the stability distributions of the two nonzero equilibrium points, are investigated, and complex coexisting behaviors of multiple kinds of disconnected attractors (stable point attractors, limit cycles, and chaotic attractors) are numerically revealed. The results show that, unlike the classical Chua’s circuit, the proposed circuit has two stable nonzero node-foci for the specified circuit parameters, thereby resulting in the emergence of a multistability phenomenon. Based on two general impedance converters, the active diode pair based Chua’s circuit with an adjustable inductor and an adjustable capacitor is made in hardware, from which coexisting multiple attractors are conveniently captured.
Chaotic attractors with separated scrolls
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouallegue, Kais, E-mail: kais-bouallegue@yahoo.fr
2015-07-15
This paper proposes a new behavior of chaotic attractors with separated scrolls obtained by combining Julia's process with Chua's attractor and Lorenz's attractor. The main motivation of this work is the ability to generate a set of separated scrolls with different behaviors, which in turn allows us to choose one or many scrolls combined with modulation (amplitude and frequency) for secure communication or synchronization. This set appears to be a new class of hyperchaos, because each element of the set looks like a simple chaotic attractor with one positive Lyapunov exponent, so the cardinality of the set is greater than one. This new approach could be used to generate more general higher-dimensional hyperchaotic attractors for further potential applications. Numerical simulations are given to show the effectiveness of the proposed theoretical results.
Generating macroscopic chaos in a network of globally coupled phase oscillators
So, Paul; Barreto, Ernest
2011-01-01
We consider an infinite network of globally coupled phase oscillators in which the natural frequencies of the oscillators are drawn from a symmetric bimodal distribution. We demonstrate that macroscopic chaos can occur in this system when the coupling strength varies periodically in time. We identify period-doubling cascades to chaos, attractor crises, and horseshoe dynamics for the macroscopic mean field. Based on recent work that clarified the bifurcation structure of the static bimodal Kuramoto system, we qualitatively describe the mechanism for the generation of such complicated behavior in the time varying case. PMID:21974662
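A finite-N sketch of the periodically forced bimodal Kuramoto mean field described above. The paper works with the infinite-N system; N, the frequency distribution and the sinusoidal modulation K(t) = K0 + K1*sin(omega_f*t) used here are illustrative assumptions:

```python
import numpy as np

# Finite-N Kuramoto oscillators with bimodal natural frequencies and a
# coupling strength that varies periodically in time.
def simulate_kuramoto(N=500, K0=3.0, K1=1.5, omega_f=0.5,
                      dt=0.01, n_steps=5000, seed=2):
    rng = np.random.default_rng(seed)
    # symmetric bimodal natural frequencies: Gaussian humps at +-1
    centers = rng.choice([-1.0, 1.0], size=N)
    w = rng.normal(loc=centers, scale=0.1)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=N)
    r_hist = np.empty(n_steps)
    for k in range(n_steps):
        z = np.exp(1j * theta).mean()          # complex order parameter
        r_hist[k] = np.abs(z)
        K = K0 + K1 * np.sin(omega_f * k * dt)
        # mean-field form of the all-to-all sine coupling
        theta += dt * (w + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return r_hist
```

The macroscopic time series r_hist is the quantity whose period-doubling cascades and crises the paper tracks as the modulation parameters vary.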
Alavash, Mohsen; Doebler, Philipp; Holling, Heinz; Thiel, Christiane M; Gießing, Carsten
2015-03-01
Is there one optimal topology of functional brain networks at rest from which our cognitive performance would profit? Previous studies suggest that functional integration of resting state brain networks is an important biomarker for cognitive performance. However, it is still unknown whether higher network integration is an unspecific predictor for good cognitive performance or, alternatively, whether specific network organization during rest predicts only specific cognitive abilities. Here, we investigated the relationship between network integration at rest and cognitive performance using two tasks that measured different aspects of working memory; one task assessed visual-spatial and the other numerical working memory. Network clustering, modularity and efficiency were computed to capture network integration on different levels of network organization, and to statistically compare their correlations with the performance in each working memory test. The results revealed that each working memory aspect profits from a different resting state topology, and the tests showed significantly different correlations with each of the measures of network integration. While higher global network integration and modularity predicted significantly better performance in visual-spatial working memory, both measures showed no significant correlation with numerical working memory performance. In contrast, numerical working memory was superior in subjects with highly clustered brain networks, predominantly in the intraparietal sulcus, a core brain region of the working memory network. Our findings suggest that a specific balance between local and global functional integration of resting state brain networks facilitates special aspects of cognitive performance. 
In the context of working memory, while visual-spatial performance is facilitated by globally integrated functional resting state brain networks, numerical working memory profits from increased capacities for local processing, especially in brain regions involved in working memory performance. Copyright © 2014 Elsevier Inc. All rights reserved.
Stabilizing Motifs in Autonomous Boolean Networks and the Yeast Cell Cycle Oscillator
NASA Astrophysics Data System (ADS)
Sevim, Volkan; Gong, Xinwei; Socolar, Joshua
2009-03-01
Synchronously updated Boolean networks are widely used to model gene regulation. Some properties of these model networks are known to be artifacts of the clocking in the update scheme. Autonomous updating is a less artificial scheme that allows one to introduce small timing perturbations and study the stability of the attractors. We argue that the stabilization of a limit cycle in an autonomous Boolean network requires a combination of motifs, such as feed-forward loops and auto-repressive links, that can correct small fluctuations in the timing of switching events. A recently published model of the transcriptional cell-cycle oscillator in yeast contains the motifs necessary for stability under autonomous updating [1]. [1] D. A. Orlando, et al., Nature (London) 453(7197): 944-947, 2008.
Detection of generalized synchronization using echo state networks
NASA Astrophysics Data System (ADS)
Ibáñez-Soria, D.; Garcia-Ojalvo, J.; Soria-Frisch, A.; Ruffini, G.
2018-03-01
Generalized synchronization between coupled dynamical systems is a phenomenon of relevance in applications that range from secure communications to physiological modelling. Here, we test the capabilities of reservoir computing and, in particular, echo state networks for the detection of generalized synchronization. A nonlinear dynamical system consisting of two coupled Rössler chaotic attractors is used to generate temporal series consisting of time-locked generalized synchronized sequences interleaved with unsynchronized ones. Correctly tuned, echo state networks are able to efficiently discriminate between unsynchronized and synchronized sequences even in the presence of relatively high levels of noise. Compared to other state-of-the-art techniques of synchronization detection, the online capabilities of the proposed Echo State Network based methodology make it a promising choice for real-time applications aiming to monitor dynamical synchronization changes in continuous signals.
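A minimal echo state network of the kind used in the paper: a fixed random reservoir with a ridge-regression readout. The synchronization-detection task itself requires the coupled Rössler data, so the stand-in task below (reproducing the previous input sample from the reservoir state) and all sizes and scalings are illustrative assumptions:

```python
import numpy as np

# Minimal echo state network: fixed random reservoir, trained linear readout.
rng = np.random.default_rng(3)
n_res, n_steps, washout = 200, 2000, 200
W_in = rng.uniform(-0.5, 0.5, size=n_res)          # input weights
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius 0.9

u = rng.uniform(-1.0, 1.0, size=n_steps)           # toy input signal
x = np.zeros(n_res)
states = np.empty((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W @ x + W_in * u[t])               # leak-free reservoir update
    states[t] = x

# Ridge-regression readout predicting the previous input u[t-1].
X, y = states[washout:], u[washout - 1:-1]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ w_out
corr = np.corrcoef(pred, y)[0, 1]                  # fit quality of the readout
```

Only the readout weights w_out are trained; this cheap, online-compatible training step is what makes ESNs attractive for the real-time monitoring application the paper targets.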
Developmental metaplasticity in neural circuit codes of firing and structure.
Baram, Yoram
2017-01-01
Firing-rate dynamics have been hypothesized to mediate inter-neural information transfer in the brain. While the Hebbian paradigm, relating learning and memory to firing activity, has put synaptic efficacy variation at the center of cortical plasticity, we suggest that the external expression of plasticity by changes in the firing-rate dynamics represents a more general notion of plasticity. Hypothesizing that time constants of plasticity and firing dynamics increase with age, and employing the filtering property of the neuron, we obtain the elementary code of global attractors associated with the firing-rate dynamics in each developmental stage. We define a neural circuit connectivity code as an indivisible set of circuit structures generated by membrane and synapse activation and silencing. Synchronous firing patterns under parameter uniformity, and asynchronous circuit firing are shown to be driven, respectively, by membrane and synapse silencing and reactivation, and maintained by the neuronal filtering property. Analytic, graphical and simulation representation of the discrete iteration maps and of the global attractor codes of neural firing rate are found to be consistent with previous empirical neurobiological findings, which have lacked, however, a specific correspondence between firing modes, time constants, circuit connectivity and cortical developmental stages. Copyright © 2016 Elsevier Ltd. All rights reserved.
Armaş, Iuliana; Mendes, Diana A.; Popa, Răzvan-Gabriel; Gheorghe, Mihaela; Popovici, Diana
2017-01-01
The aim of this exploratory research is to capture spatial evolution patterns in the Bucharest metropolitan area using sets of single polarised synthetic aperture radar (SAR) satellite data and multi-temporal radar interferometry. Three sets of SAR data acquired during the years 1992–2010 from ERS-1/-2 and ENVISAT, and 2011–2014 from TerraSAR-X satellites were used in conjunction with the Small Baseline Subset (SBAS) and persistent scatterers (PS) high-resolution multi-temporal interferometry (InSAR) techniques to provide maps of line-of-sight displacements. The satellite-based remote sensing results were combined with results derived from classical methodologies (i.e., diachronic cartography) and field research to study possible trends in developments over former clay pits, landfill excavation sites, and industrial parks. The ground displacement trend patterns were analysed using several linear and nonlinear models, and techniques. Trends based on the estimated ground displacement are characterised by long-term memory, indicated by low noise Hurst exponents, which in the long-term form interesting attractors. We hypothesize these attractors to be tectonic stress fields generated by transpressional movements. PMID:28252103
Armaş, Iuliana; Mendes, Diana A; Popa, Răzvan-Gabriel; Gheorghe, Mihaela; Popovici, Diana
2017-03-02
The aim of this exploratory research is to capture spatial evolution patterns in the Bucharest metropolitan area using sets of single polarised synthetic aperture radar (SAR) satellite data and multi-temporal radar interferometry. Three sets of SAR data acquired during the years 1992-2010 from ERS-1/-2 and ENVISAT, and 2011-2014 from TerraSAR-X satellites were used in conjunction with the Small Baseline Subset (SBAS) and persistent scatterers (PS) high-resolution multi-temporal interferometry (InSAR) techniques to provide maps of line-of-sight displacements. The satellite-based remote sensing results were combined with results derived from classical methodologies (i.e., diachronic cartography) and field research to study possible trends in developments over former clay pits, landfill excavation sites, and industrial parks. The ground displacement trend patterns were analysed using several linear and nonlinear models, and techniques. Trends based on the estimated ground displacement are characterised by long-term memory, indicated by low noise Hurst exponents, which in the long-term form interesting attractors. We hypothesize these attractors to be tectonic stress fields generated by transpressional movements.
Dynamical systems approach to the study of a sociophysics agent-based model
NASA Astrophysics Data System (ADS)
Timpanaro, André M.; Prado, Carmen P. C.
2011-03-01
The Sznajd model is a Potts-like model that has been studied in the context of sociophysics [1,2] (where spins are interpreted as opinions). In a recent work [3], we generalized the Sznajd model to include asymmetric interactions between the spins (interpreted as biases towards opinions) and used dynamical systems techniques to tackle its mean-field version, given by the flow: η̇_σ = ∑_{σ'=1}^{M} η_σ η_{σ'} (η_σ ρ_{σ'→σ} − η_{σ'} ρ_{σ→σ'}), where η_σ is the proportion of agents with opinion (spin) σ, M is the number of opinions and ρ_{σ'→σ} is the probability weight for an agent with opinion σ being convinced by another agent with opinion σ'. We made Monte Carlo simulations of the model in a complex network (using Barabási-Albert networks [4]) and they displayed the same attractors as the mean-field. Using linear stability analysis, we were able to determine the mean-field attractor structure analytically and to show that it has connections with well known graph theory problems (maximal independent sets and positive fluxes in directed graphs). Our dynamical systems approach is quite simple and can be used also in other models, like the voter model.
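As a rough illustration of the mean-field flow η̇_σ = ∑_{σ'} η_σ η_{σ'} (η_σ ρ_{σ'→σ} − η_{σ'} ρ_{σ→σ'}), the following sketch integrates it with forward Euler for hypothetical bias weights ρ; the parameter values are illustrative assumptions, not those of [3]:

```python
import numpy as np

def sznajd_mean_field(eta, rho):
    """Right-hand side of the mean-field flow: component sigma gains at rate
    eta_s^2 * eta_sp * rho[sp, s] and loses at rate eta_s * eta_sp^2 * rho[s, sp]."""
    M = len(eta)
    deta = np.zeros(M)
    for s in range(M):
        for sp in range(M):
            deta[s] += eta[s] * eta[sp] * (eta[s] * rho[sp, s] - eta[sp] * rho[s, sp])
    return deta

# Forward-Euler integration from a near-uniform opinion profile.
rng = np.random.default_rng(0)
M = 3
rho = rng.uniform(0.5, 1.5, size=(M, M))          # hypothetical bias weights
eta = np.full(M, 1.0 / M) + rng.normal(0, 1e-3, M)
eta /= eta.sum()
for _ in range(20000):
    eta += 0.01 * sznajd_mean_field(eta, rho)
    eta = np.clip(eta, 0.0, None)
    eta /= eta.sum()                              # proportions stay on the simplex
# eta has now settled toward one of the mean-field attractors.
```

The simplex projection (clip and renormalize) is a numerical safeguard; the exact flow conserves ∑η_σ = 1 only approximately under Euler stepping.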
Dynamical systems approach to the study of a sociophysics agent-based model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timpanaro, Andre M.; Prado, Carmen P. C.
2011-03-24
The Sznajd model is a Potts-like model that has been studied in the context of sociophysics [1,2] (where spins are interpreted as opinions). In a recent work [3], we generalized the Sznajd model to include asymmetric interactions between the spins (interpreted as biases towards opinions) and used dynamical systems techniques to tackle its mean-field version, given by the flow: η̇_σ = ∑_{σ'=1}^{M} η_σ η_{σ'} (η_σ ρ_{σ'→σ} − η_{σ'} ρ_{σ→σ'}), where η_σ is the proportion of agents with opinion (spin) σ, M is the number of opinions and ρ_{σ'→σ} is the probability weight for an agent with opinion σ being convinced by another agent with opinion σ'. We made Monte Carlo simulations of the model in a complex network (using Barabasi-Albert networks [4]) and they displayed the same attractors as the mean-field. Using linear stability analysis, we were able to determine the mean-field attractor structure analytically and to show that it has connections with well known graph theory problems (maximal independent sets and positive fluxes in directed graphs). Our dynamical systems approach is quite simple and can be used also in other models, like the voter model.
Chaotic itinerancy and power-law residence time distribution in stochastic dynamical systems.
Namikawa, Jun
2005-08-01
Chaotic itinerant motion among varieties of ordered states is described by a stochastic model based on the mechanism of chaotic itinerancy. The model consists of a random walk on a half-line and a Markov chain with a transition probability matrix. The stability of attractor ruin in the model is investigated by analyzing the residence time distribution of orbits at attractor ruins. It is shown that the residence time distribution averaged over all attractor ruins can be described by the superposition of (truncated) power-law distributions if the basin of attraction for each attractor ruin has a zero measure. This result is confirmed by simulation of models exhibiting chaotic itinerancy. Chaotic itinerancy is also shown to be absent in coupled Milnor attractor systems if the transition probability among attractor ruins can be represented as a Markov chain.
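The half-line random-walk ingredient of the model can be sketched minimally as follows: the residence time at an attractor ruin is modeled as the first-return time of an unbiased walk, whose t^(-3/2) tail is the source of the (truncated) power-law behavior discussed above. The walk parameters here are illustrative assumptions, not the paper's:

```python
import random

def residence_time(max_steps=10**5):
    """First-return time to the origin of an unbiased random walk on the
    half-line, a toy model of the residence time at an attractor ruin."""
    x, t = 1, 1
    while x > 0 and t < max_steps:
        x += 1 if random.random() < 0.5 else -1
        t += 1
    return t

random.seed(1)
times = [residence_time() for _ in range(5000)]
# Unbiased first-return times have a power-law tail ~ t^(-3/2): long
# residences are rare but far more common than any exponential tail allows.
long_frac = sum(t > 100 for t in times) / len(times)
```

For an exponential residence distribution with the same mean, `long_frac` would be essentially zero; the heavy tail keeps it at a few percent.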
Piccoli, Tommaso; Valente, Giancarlo; Linden, David E J; Re, Marta; Esposito, Fabrizio; Sack, Alexander T; Di Salle, Francesco
2015-01-01
The default mode network and the working memory network are known to be anti-correlated during sustained cognitive processing, in a load-dependent manner. We hypothesized that functional connectivity among nodes of the two networks could be dynamically modulated by task phases across time. To address the dynamic links between the default mode network and the working memory network, we used a delayed visuo-spatial working memory paradigm, which allowed us to separate three different phases of working memory (encoding, maintenance, and retrieval), and analyzed the functional connectivity during each phase within and between the two networks. We found that the two networks are anti-correlated only during the maintenance phase of working memory, i.e., when attention is focused on a memorized stimulus in the absence of external input. Conversely, during the encoding and retrieval phases, when the external stimulation is present, the default mode network is positively coupled with the working memory network, suggesting a dynamic switching of functional connectivity between "task-positive" and "task-negative" brain networks. Our results demonstrate that the well-established dichotomy of the human brain (anti-correlated networks during rest and balanced activation-deactivation during cognition) has a more nuanced organization than previously thought and engages in different patterns of correlation and anti-correlation during specific sub-phases of a cognitive task. This nuanced organization reinforces the hypothesis of a direct involvement of the default mode network in cognitive functions, as represented by a dynamic rather than static interaction with specific task-positive networks, such as the working memory network.
Piccoli, Tommaso; Valente, Giancarlo; Linden, David E. J.; Re, Marta; Esposito, Fabrizio; Sack, Alexander T.; Salle, Francesco Di
2015-01-01
Introduction The default mode network and the working memory network are known to be anti-correlated during sustained cognitive processing, in a load-dependent manner. We hypothesized that functional connectivity among nodes of the two networks could be dynamically modulated by task phases across time. Methods To address the dynamic links between the default mode network and the working memory network, we used a delayed visuo-spatial working memory paradigm, which allowed us to separate three different phases of working memory (encoding, maintenance, and retrieval), and analyzed the functional connectivity during each phase within and between the two networks. Results We found that the two networks are anti-correlated only during the maintenance phase of working memory, i.e., when attention is focused on a memorized stimulus in the absence of external input. Conversely, during the encoding and retrieval phases, when the external stimulation is present, the default mode network is positively coupled with the working memory network, suggesting a dynamic switching of functional connectivity between “task-positive” and “task-negative” brain networks. Conclusions Our results demonstrate that the well-established dichotomy of the human brain (anti-correlated networks during rest and balanced activation-deactivation during cognition) has a more nuanced organization than previously thought and engages in different patterns of correlation and anti-correlation during specific sub-phases of a cognitive task. This nuanced organization reinforces the hypothesis of a direct involvement of the default mode network in cognitive functions, as represented by a dynamic rather than static interaction with specific task-positive networks, such as the working memory network. PMID:25848951
Regulatory networks and connected components of the neutral space. A look at functional islands
NASA Astrophysics Data System (ADS)
Boldhaus, G.; Klemm, K.
2010-09-01
The functioning of a living cell is largely determined by the structure of its regulatory network, comprising non-linear interactions between regulatory genes. An important factor for the stability and evolvability of such regulatory systems is neutrality - typically a large number of alternative network structures give rise to the necessary dynamics. Here we study the discretized regulatory dynamics of the yeast cell cycle [Li et al., PNAS, 2004] and the set of networks capable of reproducing it, which we call functional. Among these, the empirical yeast wildtype network is close to optimal with respect to sparse wiring. Under point mutations, which establish or delete single interactions, the neutral space of functional networks is fragmented into ≈ 4.7 × 108 components. One of the smaller ones contains the wildtype network. On average, functional networks reachable from the wildtype by mutations are sparser, have higher noise resilience and fewer fixed point attractors as compared with networks outside of this wildtype component.
Extreme multistability in a memristor-based multi-scroll hyper-chaotic system.
Yuan, Fang; Wang, Guangyi; Wang, Xiaowei
2016-07-01
In this paper, a new memristor-based multi-scroll hyper-chaotic system is designed. The proposed memristor-based system possesses multiple complex dynamic behaviors compared with other chaotic systems. Various coexisting attractors and hidden coexisting attractors are observed in this system, which means extreme multistability arises. Besides, by adjusting parameters of the system, this chaotic system can generate single-scroll attractors, double-scroll attractors, and four-scroll attractors. Basic dynamic characteristics of the system are investigated, including equilibrium points and stability, bifurcation diagrams, Lyapunov exponents, and so on. In addition, the presented system is also realized by an analog circuit to confirm the correctness of the numerical simulations.
Features from the non-attractor beginning of inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Yi-Fu; Wang, Dong-Gang; Wang, Ziwei
2016-10-01
We study the effects of non-attractor initial conditions for canonical single-field inflation. The non-attractor stage can last only a few e-folds, and should be followed by hilltop inflation. This two-stage evolution leads to large-scale suppression in the primordial power spectrum, which is favored by recent observations. Moreover, we give a detailed calculation of the primordial non-Gaussianity due to the "from non-attractor to slow-roll" transition, and find step features in the local and equilateral shapes. We conclude that a plateau-like inflaton potential with an initial non-attractor phase yields interesting features in both the power spectrum and the bispectrum.
Neural field model of memory-guided search.
Kilpatrick, Zachary P; Poll, Daniel B
2017-12-01
Many organisms can remember locations they have previously visited during a search. Visual search experiments have shown exploration is guided away from these locations, reducing redundancies in the search path before finding a hidden target. We develop and analyze a two-layer neural field model that encodes positional information during a search task. A position-encoding layer sustains a bump attractor corresponding to the searching agent's current location, and search is modeled by velocity input that propagates the bump. A memory layer sustains persistent activity bounded by a wave front, whose edges expand in response to excitatory input from the position layer. Search can then be biased in response to remembered locations, influencing velocity inputs to the position layer. Asymptotic techniques are used to reduce the dynamics of our model to a low-dimensional system of equations that track the bump position and front boundary. Performance is compared for different target-finding tasks.
Neural field model of memory-guided search
NASA Astrophysics Data System (ADS)
Kilpatrick, Zachary P.; Poll, Daniel B.
2017-12-01
Many organisms can remember locations they have previously visited during a search. Visual search experiments have shown exploration is guided away from these locations, reducing redundancies in the search path before finding a hidden target. We develop and analyze a two-layer neural field model that encodes positional information during a search task. A position-encoding layer sustains a bump attractor corresponding to the searching agent's current location, and search is modeled by velocity input that propagates the bump. A memory layer sustains persistent activity bounded by a wave front, whose edges expand in response to excitatory input from the position layer. Search can then be biased in response to remembered locations, influencing velocity inputs to the position layer. Asymptotic techniques are used to reduce the dynamics of our model to a low-dimensional system of equations that track the bump position and front boundary. Performance is compared for different target-finding tasks.
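A minimal one-dimensional bump-attractor sketch in the spirit of the position-encoding layer above: an Amari-type field with a lateral-inhibition ("wizard hat") kernel and a Heaviside firing rate sustains a localized bump of activity. All parameters and the kernel shape below are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Amari-type field: du/dt = -u + W f(u), with kernel
# w(d) = exp(-|d|)(1 - |d|): local excitation, longer-range inhibition.
N, L, dt = 200, 10.0, 0.01
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
D = np.abs(x[:, None] - x[None, :])
W = np.exp(-D) * (1.0 - D) * dx           # discretized convolution kernel
f = lambda u: (u > 0.1).astype(float)     # Heaviside firing rate, threshold 0.1
u = np.exp(-x ** 2)                       # seed a bump at the "current location"
for _ in range(2000):
    u += dt * (-u + W @ f(u))
active = int((u > 0.1).sum())             # width of the sustained bump
```

Because the kernel integrates to zero, activity cannot spread across the whole domain; the bump persists after the seed input is gone, which is the persistence property the position layer relies on.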
Continuity of pullback and uniform attractors
NASA Astrophysics Data System (ADS)
Hoang, Luan T.; Olson, Eric J.; Robinson, James C.
2018-03-01
We study the continuity of pullback and uniform attractors for non-autonomous dynamical systems with respect to perturbations of a parameter. Consider a family of dynamical systems parameterized by λ ∈ Λ, where Λ is a complete metric space, such that for each λ ∈ Λ there exists a unique pullback attractor Aλ(t). Using the theory of Baire category we show under natural conditions that there exists a residual set Λ* ⊆ Λ such that for every t ∈ R the function λ ↦ Aλ(t) is continuous at each λ ∈ Λ* with respect to the Hausdorff metric. Similarly, given a family of uniform attractors Aλ, there is a residual set at which the map λ ↦ Aλ is continuous. We also introduce notions of equi-attraction suitable for pullback and uniform attractors and then show, when Λ is compact, that the continuity of pullback attractors and uniform attractors with respect to λ is equivalent to pullback equi-attraction and, respectively, uniform equi-attraction. These abstract results are then illustrated in the context of the Lorenz equations and the two-dimensional Navier-Stokes equations.
Dynamic Organization of Hierarchical Memories
Kurikawa, Tomoki; Kaneko, Kunihiko
2016-01-01
In the brain, external objects are categorized in a hierarchical way. Although it is widely accepted that objects are represented as static attractors in neural state space, this view does not take into account the interaction between intrinsic neural dynamics and external input, which is essential to understand how the neural system responds to inputs. Indeed, structured spontaneous neural activity without external inputs is known to exist, and its relationship with evoked activities is debated. How categorical representation is embedded into the spontaneous and evoked activities therefore remains to be uncovered. To address this question, we studied the bifurcation process with increasing input after hierarchically clustered associative memories are learned. We found a “dynamic categorization”: neural activity without input wanders globally over a state space that includes all memories. Then, as input strength increases, the diffuse representation of a higher category undergoes transitions to focused representations specific to each object. The hierarchy of memories is embedded in the transition probability from one memory to another during the spontaneous dynamics. With increased input strength, neural activity wanders over a narrower state space including a smaller set of memories, showing a more specific category or memory corresponding to the applied input. Moreover, such coarse-to-fine transitions are also observed temporally during the transient process under constant input, which agrees with experimental findings in the temporal cortex. These results suggest that the hierarchy emerging through interaction with an external input underlies the hierarchy seen both during the transient process and in the spontaneous activity. PMID:27618549
ADAM: analysis of discrete models of biological systems using computer algebra.
Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard
2011-07-20
Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. 
ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
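For intuition about what "identifying attractors of a discrete model" means, here is a brute-force sketch for a tiny synchronous Boolean network. The 3-node feedback loop is a hypothetical example; ADAM's contribution is precisely to avoid this 2^n state enumeration by solving polynomial systems instead:

```python
from itertools import product

def step(state, rules):
    """Synchronously update a Boolean network: rules[i] maps the full
    state tuple to the next value of node i."""
    return tuple(rule(state) for rule in rules)

def attractors(rules, n):
    """Exhaustively find all attractors (limit cycles and fixed points)
    of an n-node network by following every one of the 2^n states."""
    found = set()
    for start in product((0, 1), repeat=n):
        seen = {}
        s = start
        while s not in seen:
            seen[s] = len(seen)
            s = step(s, rules)
        cycle_start = seen[s]
        order = sorted(seen, key=seen.get)
        found.add(tuple(sorted(order[cycle_start:])))  # canonical cycle
    return found

# Hypothetical 3-node loop: x0 <- NOT x2, x1 <- x0, x2 <- x1.
rules = [lambda s: 1 - s[2], lambda s: s[0], lambda s: s[1]]
atts = attractors(rules, 3)
```

For this loop the state space splits into a 6-cycle and a 2-cycle; real regulatory models have far too many states for this approach, which is why algebraic methods matter.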
Scaling properties in time-varying networks with memory
NASA Astrophysics Data System (ADS)
Kim, Hyewon; Ha, Meesoon; Jeong, Hawoong
2015-12-01
The formation of network structure is mainly influenced by an individual node's activity and its memory, where activity can usually be interpreted as the individual inherent property and memory can be represented by the interaction strength between nodes. In our study, we define the activity through the appearance pattern in the time-aggregated network representation, and quantify the memory through the contact pattern of empirical temporal networks. To address the role of activity and memory in epidemics on time-varying networks, we propose temporal-pattern coarsening of activity-driven growing networks with memory. In particular, we focus on the relation between time-scale coarsening and spreading dynamics in the context of dynamic scaling and finite-size scaling. Finally, we discuss the universality issue of spreading dynamics on time-varying networks for various memory-causality tests.
Invariant polygons in systems with grazing-sliding.
Szalai, R; Osinga, H M
2008-06-01
The paper investigates generic three-dimensional nonsmooth systems with a periodic orbit near grazing-sliding. We assume that the periodic orbit is unstable with complex multipliers so that two dominant frequencies are present in the system. Because grazing-sliding induces a dimension loss and the instability drives every trajectory into sliding, the system has an attractor that consists of forward sliding orbits. We analyze this attractor in a suitably chosen Poincare section using a three-parameter generalized map that can be viewed as a normal form. We show that in this normal form the attractor must be contained in a finite number of lines that intersect in the vertices of a polygon. However the attractor is typically larger than the associated polygon. We classify the number of lines involved in forming the attractor as a function of the parameters. Furthermore, for fixed values of parameters we investigate the one-dimensional dynamics on the attractor.
NASA Astrophysics Data System (ADS)
Sun, Changchun; Chen, Zhongtang; Xu, Qicheng
2017-12-01
An original three-dimensional (3D) smooth continuous chaotic system and its mirror-image system with eight common parameters are constructed and a pair of symmetric chaotic attractors can be generated simultaneously. Basic dynamical behaviors of two 3D chaotic systems are investigated respectively. A double-scroll chaotic attractor by connecting the pair of mutual mirror-image attractors is generated via a novel planar switching control approach. Chaos can also be controlled to a fixed point, a periodic orbit and a divergent orbit respectively by switching between two chaotic systems. Finally, an equivalent 3D chaotic system by combining two 3D chaotic systems with a switching law is designed by utilizing a sign function. Two circuit diagrams for realizing the double-scroll attractor are depicted by employing an improved module-based design approach.
Chen, Bor-Sen; Tsai, Kun-Wei; Li, Cheng-Wei
2015-01-01
Molecular biologists have long recognized carcinogenesis as an evolutionary process that involves natural selection. Cancer is driven by the somatic evolution of cell lineages. In this study, the evolution of somatic cancer cell lineages during carcinogenesis was modeled as an equilibrium point (i.e., phenotype of an attractor) shifting, the process of a nonlinear stochastic evolutionary biological network. This process is subject to intrinsic random fluctuations because of somatic genetic and epigenetic variations, as well as extrinsic disturbances because of carcinogens and stressors. In order to maintain the normal function (i.e., phenotype) of an evolutionary biological network subjected to random intrinsic fluctuations and extrinsic disturbances, a network robustness scheme that incorporates natural selection needs to be developed. This can be accomplished by selecting certain genetic and epigenetic variations to modify the network structure to attenuate intrinsic fluctuations efficiently and to resist extrinsic disturbances in order to maintain the phenotype of the evolutionary biological network at an equilibrium point (attractor). However, during carcinogenesis, the remaining (or neutral) genetic and epigenetic variations accumulate, and the extrinsic disturbances become too large to maintain the normal phenotype at the desired equilibrium point for the nonlinear evolutionary biological network. Thus, the network is shifted to a cancer phenotype at a new equilibrium point that begins a new evolutionary process. In this study, the natural selection scheme of an evolutionary biological network of carcinogenesis was derived from a robust negative feedback scheme based on the nonlinear stochastic Nash game strategy. The evolvability and phenotypic robustness criteria of the evolutionary cancer network were also estimated by solving a Hamilton–Jacobi-inequality-constrained optimization problem.
The simulation revealed that the phenotypic shift of the lung cancer-associated cell network takes 54.5 years from a normal state to stage I cancer, 1.5 years from stage I to stage II cancer, and 2.5 years from stage II to stage III cancer, with a reasonable match for the statistical result of the average age of lung cancer. These results suggest that a robust negative feedback scheme, based on a stochastic evolutionary game strategy, plays a critical role in an evolutionary biological network of carcinogenesis under a natural selection scheme. PMID:26244004
Lucarini, Valerio; Fraedrich, Klaus
2009-08-01
Starting from the classical Saltzman two-dimensional convection equations, we derive via a severe spectral truncation a minimal 10-ODE system which includes the thermal effect of viscous dissipation. Neglecting this process leads to a dynamical system which includes a decoupled generalized Lorenz system. The consideration of this process breaks an important symmetry and couples the dynamics of fast and slow variables, with the ensuing modifications to the structural properties of the attractor and of the spectral features. When the relevant nondimensional number (Eckert number Ec) is different from zero, an additional time scale of O(Ec^{-1}) is introduced in the system, as shown with standard multiscale analysis and made clear by several numerical evidences. Moreover, the system is ergodic and hyperbolic, the slow variables feature long-term memory with 1/f^{3/2} power spectra, and the fast variables feature amplitude modulation. Increasing the strength of the thermal-viscous feedback has a stabilizing effect, as both the metric entropy and the Kaplan-Yorke attractor dimension decrease monotonically with Ec. The analyzed system features very rich dynamics: it overcomes some of the limitations of the Lorenz system and might have prototypical value for relevant processes in complex systems dynamics, such as the interaction between slow and fast variables, the presence of long-term memory, and the associated extreme value statistics. This analysis shows how neglecting the coupling of slow and fast variables only on the basis of scale analysis can be catastrophic. In fact, it leads to spurious invariances that affect essential dynamical properties (ergodicity, hyperbolicity) and causes the model to lose the ability to describe intrinsically multiscale processes.
Revisiting non-Gaussianity from non-attractor inflation models
NASA Astrophysics Data System (ADS)
Cai, Yi-Fu; Chen, Xingang; Namjoo, Mohammad Hossein; Sasaki, Misao; Wang, Dong-Gang; Wang, Ziwei
2018-05-01
Non-attractor inflation is known as the only single field inflationary scenario that can violate the non-Gaussianity consistency relation with the Bunch-Davies vacuum state and generate large local non-Gaussianity. However, it is also known that non-attractor inflation by itself is incomplete and should be followed by a phase of slow-roll attractor. Moreover, there is a transition process between these two phases. In the past literature, this transition was approximated as instant and the evolution of non-Gaussianity in this phase was not fully studied. In this paper, we follow the detailed evolution of the non-Gaussianity through the transition phase into the slow-roll attractor phase, considering different types of transition. We find that the transition process has an important effect on the size of the local non-Gaussianity. We first compute the net contribution of the non-Gaussianities at the end of inflation in canonical non-attractor models. If the curvature perturbations keep evolving during the transition—such as in the case of smooth transition or some sharp transition scenarios—the O(1) local non-Gaussianity generated in the non-attractor phase can be completely erased by the subsequent evolution, although the consistency relation remains violated. In extremal cases of sharp transition where the super-horizon modes freeze immediately right after the end of the non-attractor phase, the original non-attractor result can be recovered. We also study models with non-canonical kinetic terms, and find that the transition can typically contribute a suppression factor in the squeezed bispectrum, but the final local non-Gaussianity can still be made parametrically large.
Personalized identification of differentially expressed pathways in pediatric sepsis.
Li, Binjie; Zeng, Qiyi
2017-10-01
Sepsis is a leading killer of children worldwide, with numerous differentially expressed genes reported to be associated with sepsis. Identifying core pathways in an individual is important for understanding septic mechanisms and for the future application of custom therapeutic decisions. Samples used in the study were from a control group (n=18) and a pediatric sepsis group (n=52). Based on Kauffman's attractor theory, differentially expressed pathways associated with pediatric sepsis were detected as attractors. When the distribution of the attractors was consistent with the distribution of the total data, as assessed using a support vector machine, the individualized pathway aberrance score (iPAS) was calculated to distinguish differences. Through attractor and Kyoto Encyclopedia of Genes and Genomes functional analysis, 277 enriched pathways were identified as attractors. There were 81 pathways with P<0.05 and 59 pathways with P<0.01. Distribution outcomes of the screened attractors were mostly consistent with the total data, as demonstrated by the six classifying parameters, which suggests the efficiency of the attractors. Cluster analysis of pediatric sepsis using the iPAS method identified seven pathway clusters and four sample clusters. Thus, in the majority of pediatric sepsis samples, core pathways can be detected as different from accumulated normal samples. In conclusion, a novel procedure that identifies the dysregulated attractors in individuals with pediatric sepsis was constructed. Attractors can be markers to identify pathways involved in pediatric sepsis. iPAS may provide a correlation score for each of the signaling pathways present in an individual patient. This process may improve the personalized interpretation of disease mechanisms and may be useful in the forthcoming era of personalized medicine.
The route to chaos for the Kuramoto-Sivashinsky equation
NASA Technical Reports Server (NTRS)
Papageorgiou, Demetrios T.; Smyrlis, Yiorgos
1990-01-01
We report the results of extensive numerical experiments on the spatially periodic initial value problem for the Kuramoto-Sivashinsky equation. This paper is concerned with the asymptotic nonlinear dynamics as the dissipation parameter decreases and spatio-temporal chaos sets in. To this end the initial condition is taken to be the same for all numerical experiments (a single sine wave is used) and the large-time evolution of the system is followed numerically. Numerous computations were performed to establish the existence of windows, in parameter space, in which the solution has the following characteristics as the viscosity is decreased: a steady fully modal attractor, to a steady bimodal attractor, to another steady fully modal attractor, to a steady trimodal attractor, to a periodic attractor, to another steady fully modal attractor, to another periodic attractor, to a steady tetramodal attractor, to another periodic attractor having a full sequence of period-doublings (in parameter space) to chaos. Numerous solutions are presented which provide conclusive evidence of the period-doubling cascades which precede chaos for this infinite-dimensional dynamical system. These results permit a computation of the lengths of the subwindows, which in turn provides an estimate for their successive ratios as the cascade develops. A calculation based on the numerical results is also presented to show that the period-doubling sequences found here for the Kuramoto-Sivashinsky equation are in complete agreement with Feigenbaum's universal constant of 4.669201609... . Some preliminary work shows several other windows following the first chaotic one, including periodic, chaotic, and a steady octamodal window; however, the windows shrink significantly in size, preventing concrete quantitative conclusions from being made.
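The period-doubling cascade and Feigenbaum ratio described above can be illustrated with the one-dimensional logistic map (a minimal sketch for intuition only, not the Kuramoto-Sivashinsky system itself): locate the first few period-doubling parameters by bisection on the detected attractor period, then form the ratio of successive window lengths. All tolerances and iteration counts below are illustrative choices.

```python
def logistic_period(r, n_transient=50000, tol=1e-5):
    """Period of the logistic-map attractor at parameter r (None if > 8 or chaotic)."""
    x = 0.5
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    orbit = [x]
    for _ in range(8):
        x = r * x * (1.0 - x)
        orbit.append(x)
    for p in (1, 2, 4, 8):
        if abs(orbit[p] - orbit[0]) < tol:
            return p
    return None

def bifurcation_point(p, lo, hi, steps=16):
    """Bisect for the parameter at which the attractor period doubles from p."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if logistic_period(mid) == p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r1 = bifurcation_point(1, 2.9, 3.2)    # period 1 -> 2 (exact value: 3)
r2 = bifurcation_point(2, 3.2, 3.5)    # period 2 -> 4 (exact: 1 + sqrt(6) ~ 3.44949)
r3 = bifurcation_point(4, 3.5, 3.56)   # period 4 -> 8 (~ 3.54409)
delta = (r2 - r1) / (r3 - r2)          # early estimate of Feigenbaum's 4.669...
```

The ratio of the first two window lengths already lands within a few percent of the universal constant; successive ratios converge to 4.669201609... as the cascade develops, which is exactly the kind of subwindow-length computation the abstract describes.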
Coherent organization in gene regulation: a study on six networks
NASA Astrophysics Data System (ADS)
Aral, Neşe; Kabakçıoğlu, Alkan
2016-04-01
Structural and dynamical fingerprints of evolutionary optimization in biological networks are still unclear. Here we analyze the dynamics of genetic regulatory networks responsible for the regulation of cell cycle and cell differentiation in three organisms or cell types each, and show that they follow a version of Hebb's rule which we have termed coherence. More precisely, we find that simultaneously expressed genes with a common target are less likely to act antagonistically at the attractors of the regulatory dynamics. We then investigate the dependence of coherence on structural parameters, such as the mean number of inputs per node and the activatory/repressory interaction ratio, as well as on dynamically determined quantities, such as the basin size and the number of expressed genes.
Logical Modeling and Dynamical Analysis of Cellular Networks
Abou-Jaoudé, Wassim; Traynard, Pauline; Monteiro, Pedro T.; Saez-Rodriguez, Julio; Helikar, Tomáš; Thieffry, Denis; Chaouiya, Claudine
2016-01-01
The logical (or logic) formalism is increasingly used to model regulatory and signaling networks. Complementing these applications, several groups contributed various methods and tools to support the definition and analysis of logical models. After an introduction to the logical modeling framework and to several of its variants, we review here a number of recent methodological advances to ease the analysis of large and intricate networks. In particular, we survey approaches to determine model attractors and their reachability properties, to assess the dynamical impact of variations of external signals, and to consistently reduce large models. To illustrate these developments, we further consider several published logical models for two important biological processes, namely the differentiation of T helper cells and the control of mammalian cell cycle. PMID:27303434
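For small synchronous logical models, the attractor determination surveyed above can be sketched by exhaustive state-space enumeration (real tools scale far better; the two-gene toggle switch used here is a standard textbook example, not a model from the review):

```python
from itertools import product

def find_attractors(update, n):
    """Enumerate attractors of a synchronous Boolean network with n genes
    by following every state until its trajectory revisits a state."""
    attractors = set()
    for state in product((0, 1), repeat=n):
        seen = set()
        s = state
        while s not in seen:          # walk forward until a state repeats
            seen.add(s)
            s = update(s)
        cycle = [s]                   # s lies on the attractor; collect the cycle
        t = update(s)
        while t != s:
            cycle.append(t)
            t = update(t)
        attractors.add(frozenset(cycle))
    return attractors

# Toggle switch: each gene represses the other (synchronous update).
toggle = lambda s: (1 - s[1], 1 - s[0])
atts = find_attractors(toggle, 2)
```

Under synchronous updating this network has two fixed-point attractors, (0,1) and (1,0), plus a two-state cyclic attractor {(0,0),(1,1)}; the exhaustive walk recovers all three. Enumeration is exponential in the number of genes, which is precisely why the review surveys more scalable methods.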
Robin, Jessica; Hirshhorn, Marnie; Rosenbaum, R Shayna; Winocur, Gordon; Moscovitch, Morris; Grady, Cheryl L
2015-01-01
Several recent studies have compared episodic and spatial memory in neuroimaging paradigms in order to understand better the contribution of the hippocampus to each of these tasks. In the present study, we build on previous findings showing common neural activation in default network areas during episodic and spatial memory tasks based on familiar, real-world environments (Hirshhorn et al. (2012) Neuropsychologia 50:3094-3106). Following previous demonstrations of the presence of functionally connected sub-networks within the default network, we performed seed-based functional connectivity analyses to determine how, depending on the task, the hippocampus and prefrontal cortex differentially couple with one another and with distinct whole-brain networks. We found evidence for a medial prefrontal-parietal network and a medial temporal lobe network, which were functionally connected to the prefrontal and hippocampal seeds, respectively, regardless of the nature of the memory task. However, these two networks were functionally connected with one another during the episodic memory task, but not during spatial memory tasks. Replicating previous reports of fractionation of the default network into stable sub-networks, this study also shows how these sub-networks may flexibly couple and uncouple with one another based on task demands. These findings support the hypothesis that episodic memory and spatial memory share a common medial temporal lobe-based neural substrate, with episodic memory recruiting additional prefrontal sub-networks. © 2014 Wiley Periodicals, Inc.
Extreme multistability in a memristor-based multi-scroll hyper-chaotic system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Fang, E-mail: yf210yf@163.com; Wang, Guangyi, E-mail: wanggyi@163.com; Wang, Xiaowei
In this paper, a new memristor-based multi-scroll hyper-chaotic system is designed. The proposed memristor-based system possesses multiple complex dynamic behaviors compared with other chaotic systems. Various coexisting attractors and hidden coexisting attractors are observed in this system, which means extreme multistability arises. Besides, by adjusting parameters of the system, this chaotic system can perform single-scroll attractors, double-scroll attractors, and four-scroll attractors. Basic dynamic characteristics of the system are investigated, including equilibrium points and stability, bifurcation diagrams, Lyapunov exponents, and so on. In addition, the presented system is also realized by an analog circuit to confirm the correctness of the numerical simulations.
Giles, Lynne C.; Anstey, Kaarin J.; Walker, Ruth B.; Luszcz, Mary A.
2012-01-01
The purpose was to examine the relationship between different types of social networks and memory over 15 years of followup in a large cohort of older Australians who were cognitively intact at study baseline. Our specific aims were to investigate whether social networks were associated with memory, determine if different types of social networks had different relationships with memory, and examine if changes in memory over time differed according to types of social networks. We used five waves of data from the Australian Longitudinal Study of Ageing, and followed 706 participants with an average age of 78.6 years (SD 5.7) at baseline. The relationships between five types of social networks and changes in memory were assessed. The results suggested a gradient of effect; participants in the upper tertile of friends or overall social networks had better memory scores than those in the mid tertile, who in turn had better memory scores than participants in the lower tertile. There was evidence of a linear, but not quadratic, effect of time on memory, and an interaction between friends' social networks and time was apparent. Findings are discussed with respect to mechanisms that might explain the observed relationships between social networks and memory. PMID:22988510
NASA Astrophysics Data System (ADS)
Leonov, G. A.; Kuznetsov, N. V.
From a computational point of view, attractors in nonlinear dynamical systems can be regarded as either self-excited or hidden. Self-excited attractors can be localized numerically by a standard computational procedure in which, after a transient process, a trajectory starting from a point on an unstable manifold in a neighborhood of an equilibrium reaches a state of oscillation, so one can easily identify it. In contrast, the basin of attraction of a hidden attractor does not intersect small neighborhoods of equilibria. While classical attractors are self-excited, and can therefore be obtained numerically by the standard computational procedure, the localization of hidden attractors requires the development of special procedures, since there are no similar transient processes leading to such attractors. The problem of investigating hidden oscillations first arose in the second part of Hilbert's 16th problem (1900). The first nontrivial results were obtained in Bautin's works, devoted to constructing nested limit cycles in quadratic systems, which showed the necessity of studying hidden oscillations for solving this problem. Later, the problem of analyzing hidden oscillations arose in engineering problems in automatic control. In the 1950s-60s, investigations of the widely known Markus-Yamabe, Aizerman, and Kalman conjectures on absolute stability led to the finding of hidden oscillations in automatic control systems with a unique stable stationary point. In 1961, Gubar revealed a gap in Kapranov's work on phase-locked loops (PLL) and showed the possibility of the existence of hidden oscillations in PLL. At the end of the last century, difficulties in analyzing hidden oscillations arose in simulations of drilling systems and aircraft control systems (anti-windup), which caused crashes.
Further investigations of hidden oscillations were greatly encouraged by the present authors' discovery, in 2010, of the first chaotic hidden attractor in Chua's circuit. This survey is dedicated to efficient analytical-numerical methods for the study of hidden oscillations. Here, an attempt is made to reflect the current trends in the synthesis of analytical and numerical methods.
Cancer as quasi-attractor in the gene expression phase space
NASA Astrophysics Data System (ADS)
Giuliani, A.
2017-09-01
It takes no more than 250 tissue types to build up a metazoan, and each tissue has a specific and largely invariant gene expression signature. This implies that the `viable configurations' corresponding to a given activated/inactivated expression pattern over the entire genome are very few. This points to the presence of a few `low energy deep valleys' corresponding to the allowed states of the system, and is a direct consequence of the fact that genes do not work alone but are embedded in genetic expression networks. A statistical thermodynamics formalism focusing on changes in the degree of correlation of the studied systems allows one to detect transition behavior in gene expression phase space resembling the phase transitions of physical chemistry. In this view, cancer can be understood as a sort of `parasite' sub-attractor of the corresponding healthy tissue which, in the case of disease, is `kinetically entrapped' in a sub-optimal solution. The consequences of such a state of affairs for cancer therapies are potentially huge.
Attractor Signaling Models for Discovery of Combinatorial Therapies
2013-09-01
...year survival rate for this disease is less than 15%. Over the years, many specific mechanisms associated with drug resistance in lung cancer have been... reprogramming of pluripotent stem cells [4]. Moreover, it has been suggested that a biological system in a chronic or therapy-resistant disease state can... designing new therapeutic methods for complex diseases such as cancer. Even if our knowledge of biological networks is incomplete, fast progress...
Anisotropic nonequilibrium hydrodynamic attractor
NASA Astrophysics Data System (ADS)
Strickland, Michael; Noronha, Jorge; Denicol, Gabriel S.
2018-02-01
We determine the dynamical attractors associated with anisotropic hydrodynamics (aHydro) and the DNMR equations for a 0+1d conformal system using kinetic theory in the relaxation time approximation. We compare our results to the nonequilibrium attractor obtained from the exact solution of the 0+1d conformal Boltzmann equation, the Navier-Stokes theory, and the second-order Mueller-Israel-Stewart theory. We demonstrate that the aHydro attractor equation resums an infinite number of terms in the inverse Reynolds number. The resulting resummed aHydro attractor possesses a positive longitudinal-to-transverse pressure ratio and is virtually indistinguishable from the exact attractor. This suggests that an optimized hydrodynamic treatment of kinetic theory involves a resummation not only in gradients (Knudsen number) but also in the inverse Reynolds number. We also demonstrate that the DNMR result provides a better approximation of the exact kinetic theory attractor than the Mueller-Israel-Stewart theory. Finally, we introduce a new method for obtaining approximate aHydro equations which relies solely on an expansion in the inverse Reynolds number. We then carry this expansion out to the third order, and compare these third-order results to the exact kinetic theory solution.
Ultrametric properties of the attractor spaces for random iterated linear function systems
NASA Astrophysics Data System (ADS)
Buchovets, A. G.; Moskalev, P. V.
2018-03-01
We investigate attractors of random iterated linear function systems as independent spaces embedded in ordinary Euclidean space. Introducing, on the set of attractor points, a metric that satisfies the strengthened triangle inequality makes this space ultrametric. The properties of disconnectedness and hierarchical self-similarity inherent in ultrametric spaces then make it possible to define the attractor as a fractal. We note that a rigorous proof of these properties in the case of an ordinary Euclidean space is very difficult.
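An attractor of a random iterated linear function system can be sampled with the standard "chaos game": repeatedly apply a randomly chosen affine contraction and record the orbit. The sketch below uses three maps x -> x/2 + v whose attractor is the Sierpinski right triangle (a standard example chosen for illustration, not the systems of the paper); the dyadic box addresses mirror the hierarchical self-similarity discussed above, and box counting recovers the fractal dimension log 3 / log 2.

```python
import math
import random

def chaos_game(maps, n_points=200000, seed=0):
    """Sample attractor points of the IFS x -> x/2 + v by random iteration."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for i in range(n_points + 100):     # first 100 iterates discarded as transient
        a, b = rng.choice(maps)
        x, y = 0.5 * x + a, 0.5 * y + b
        if i >= 100:
            pts.append((x, y))
    return pts

# Translation vectors whose attractor is the Sierpinski right triangle.
sierpinski = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5)]
pts = chaos_game(sierpinski)

# Box-counting at dyadic scale 2**-k: each occupied box is one level-k
# address in the hierarchical (ultrametric-like) coding of the attractor.
k = 5
boxes = {(int(x * 2**k), int(y * 2**k)) for x, y in pts}
dim = math.log(len(boxes)) / (k * math.log(2))   # exact value: log 3 / log 2 ~ 1.585
```

The disconnectedness is visible directly: no sampled point ever lands in the open central square x > 1/2, y > 1/2, which is removed at the first level of the hierarchy.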
Internal Waves and Wave Attractors in Enceladus' Subsurface Ocean
NASA Astrophysics Data System (ADS)
van Oers, A. M.; Maas, L. R.; Vermeersen, B. L. A.
2016-12-01
One of the most peculiar features on Saturn's moon Enceladus is its so-called tiger stripe pattern at the geologically active South Polar Terrain (SPT), as first observed in detail by the Cassini spacecraft in early 2005. It is generally assumed that the four almost parallel surface lines that constitute this pattern are faults in the icy surface overlying a confined salty water reservoir. In 2013, we formulated the original idea [Vermeersen et al., AGU Fall Meeting 2013, abstract #P53B-1848] that the tiger stripe pattern is formed and maintained by induced, tidally and rotationally driven, wave-attractor motions in the ocean underneath the icy surface of the tiger-stripe region. Such wave-attractor motions are observed in water tank experiments in laboratories on Earth and in numerical experiments [Maas et al., Nature, 338, 557-561, 1997; Drijfhout and Maas, J. Phys. Oceanogr., 37, 2740-2763, 2007; Hazewinkel et al., Phys. Fluids, 22, 107102, 2010]. Numerical simulations show the persistence of wave attractors for a range of ocean shapes and stratifications. The intensification of the wave field near the locations of the surface reflections of wave attractors has been numerically and experimentally confirmed. We measured the forces a wave attractor exerts on a solid surface near a reflection point. These reflection points would correspond to the locations of the tiger stripes. Combining experiments and numerical simulations, we conclude that (1) wave attractors can exist in Enceladus' subsurface sea, (2) their shape can be matched to the tiger stripes, (3) the wave attractors cause a localized force at the water-ice boundaries, (4) this force could have been large enough to contribute to fracturing the ice, and (5) the wave attractors localize energy (and particles) and cause dissipation along their paths, helping explain Enceladus' enigmatic heat output at the tiger stripes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xiaojun; School of Mathematics and Statistics, Tianshui Normal University, Tianshui 741001; Hong, Ling, E-mail: hongling@mail.xjtu.edu.cn
Global bifurcations include sudden changes in chaotic sets due to crises. There are three types of crises defined by Grebogi et al. [Physica D 7, 181 (1983)]: boundary crisis, interior crisis, and metamorphosis. In this paper, by means of the extended generalized cell mapping (EGCM), boundary and interior crises of a fractional-order Duffing system are studied as one of the system parameters or the fractional derivative order is varied. It is found that a crisis can be generally defined as a collision between a chaotic basic set and a basic set, either periodic or chaotic, to cause a sudden discontinuous change in chaotic sets. Here chaotic sets involve three different kinds: a chaotic attractor, a chaotic saddle on a fractal basin boundary, and a chaotic saddle in the interior of a basin and disjoint from the attractor. A boundary crisis results from the collision of a periodic (or chaotic) attractor with a chaotic (or regular) saddle in the fractal (or smooth) boundary. In such a case, the attractor, together with its basin of attraction, is suddenly destroyed as the control parameter passes through a critical value, leaving behind a chaotic saddle in the place of the original attractor and saddle after the crisis. An interior crisis happens when an unstable chaotic set in the basin of attraction collides with a periodic attractor, which causes the appearance of a new chaotic attractor, while the original attractor and the unstable chaotic set are converted to the part of the chaotic attractor after the crisis. These results further demonstrate that the EGCM is a powerful tool to reveal the mechanism of crises in fractional-order systems.
Analytic proof of the existence of the Lorenz attractor in the extended Lorenz model
NASA Astrophysics Data System (ADS)
Ovsyannikov, I. I.; Turaev, D. V.
2017-01-01
We give an analytic (free of computer assistance) proof of the existence of a classical Lorenz attractor for an open set of parameter values of the Lorenz model in the form of Yudovich-Morioka-Shimizu. The proof is based on detection of a homoclinic butterfly with a zero saddle value and rigorous verification of one of the Shilnikov criteria for the birth of the Lorenz attractor; we also supply a proof for this criterion. The results are applied in order to give an analytic proof for the existence of a robust, pseudohyperbolic strange attractor (the so-called discrete Lorenz attractor) for an open set of parameter values in a 4-parameter family of 3D Henon-like diffeomorphisms.
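The classical Lorenz attractor whose existence is proved analytically above can, of course, also be observed numerically. As an informal illustration only (numerics is precisely what the paper's proof avoids), the sketch below integrates the standard Lorenz system with RK4 at the classical parameters and exhibits the attractor's two hallmarks: trajectories remain bounded while nearby trajectories separate rapidly.

```python
def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the classical Lorenz system."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, s, dt):
    """One fourth-order Runge-Kutta step for an autonomous ODE."""
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

dt, n_steps = 0.01, 3000               # integrate to t = 30
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-6, 1.0, 1.0)             # initial condition perturbed by 1e-6 in x
max_sep, max_norm = 0.0, 0.0
for _ in range(n_steps):
    a = rk4_step(lorenz_rhs, a, dt)
    b = rk4_step(lorenz_rhs, b, dt)
    sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)
    max_norm = max(max_norm, max(abs(c) for c in a))
```

The separation between the two trajectories grows from 1e-6 to the size of the attractor itself within a few dozen time units, while both orbits stay confined to a bounded region: sensitive dependence on a bounded invariant set.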
Hidden Attractors in a Model of a Bubble Contrast Agent Oscillating Near an Elastic Wall
NASA Astrophysics Data System (ADS)
Garashchuk, Ivan; Sinelshchikov, Dmitry; Kudryashov, Nikolay
2018-02-01
A model describing the dynamics of a spherical gas bubble in a compressible viscous liquid is studied. The bubble is oscillating close to an elastic wall of finite thickness under the influence of an external pressure field which simulates a contrast agent oscillating close to a blood vessel wall. Here we investigate numerically the coexistence of chaotic and periodic attractors in this model. One of the tools applied for seeking coexisting attractors is the perpetual points method. This method can be helpful for localizing coexisting attractors, occurring in various physically realistic ranges of variation of the control parameters. We provide some examples of coexisting attractors to demonstrate the importance of the multistability problem for the applications.
Iconic memory and parietofrontal network: fMRI study using temporal integration.
Saneyoshi, Ayako; Niimi, Ryosuke; Suetsugu, Tomoko; Kaminaga, Tatsuro; Yokosawa, Kazuhiko
2011-08-03
We investigated the neural basis of iconic memory using functional magnetic resonance imaging. The parietofrontal network of selective attention is reportedly relevant to readout from iconic memory. We adopted a temporal integration task that requires iconic memory but not selective attention. The results showed that the task activated the parietofrontal network, confirming that the network is involved in readout from iconic memory. We further tested a condition in which temporal integration was performed by visual short-term memory but not by iconic memory. However, no brain region revealed higher activation for temporal integration by iconic memory than for temporal integration by visual short-term memory. This result suggested that there is no localized brain region specialized for iconic memory per se.
Plastic modulation of episodic memory networks in the aging brain with cognitive decline.
Bai, Feng; Yuan, Yonggui; Yu, Hui; Zhang, Zhijun
2016-07-15
Social-cognitive processing has been posited to underlie general functions such as episodic memory. Episodic memory impairment is a recognized hallmark of amnestic mild cognitive impairment (aMCI), which carries a high risk for dementia. Three canonical networks, for self-referential processing, executive control processing and salience processing, have distinct roles in episodic memory retrieval. It remains unclear whether and how these sub-networks of the episodic memory retrieval system are affected in aMCI. This task-state fMRI study constructed systems-level episodic memory retrieval sub-networks in 28 aMCI patients and 23 controls using two computational approaches: a multiple region-of-interest based approach and a voxel-level functional connectivity-based approach. These approaches produced remarkably similar findings: the self-referential processing network made critical contributions to episodic memory retrieval in aMCI. More conspicuous alterations in self-referential processing of the episodic memory retrieval network were identified in aMCI. In order to complete a given episodic memory retrieval task, increased cooperation between the self-referential processing network and other sub-networks was mobilized in aMCI. Self-referential processing mediates the cooperation of the episodic memory retrieval sub-networks; it may help to achieve neural plasticity and may contribute to the prevention and treatment of dementia. Copyright © 2016 Elsevier B.V. All rights reserved.
Detecting changes in forced climate attractors with Wasserstein distance
NASA Astrophysics Data System (ADS)
Robin, Yoann; Yiou, Pascal; Naveau, Philippe
2017-07-01
The climate system can be described by a dynamical system and its associated attractor. The dynamics of this attractor depends on the external forcings that influence the climate. Such forcings can affect the mean values or variances, but regions of the attractor that are seldom visited can also be affected. It is an important challenge to measure how the climate attractor responds to different forcings. Currently, the Euclidean distance or similar measures like the Mahalanobis distance have been favored to measure discrepancies between two climatic situations. Those distances do not have a natural built-in mechanism to take into account the attractor dynamics. In this paper, we argue that a Wasserstein distance, stemming from optimal transport theory, offers an efficient and practical way to discriminate between dynamical systems. After treating a toy example, we explore how the Wasserstein distance can be applied and interpreted to detect non-autonomous dynamics in a Lorenz system driven by seasonal cycles and a warming trend.
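In one dimension, the Wasserstein-1 distance between two equal-size empirical samples reduces to the mean absolute difference of their order statistics: the optimal transport plan simply matches sorted values. The toy sketch below (a stand-in for the climate fields of the paper, using logistic-map "attractor" samples purely for illustration) shows how the distance recovers a pure forcing shift exactly and also distinguishes two genuinely different attractors.

```python
def wasserstein_1d(xs, ys):
    """W1 distance between two equal-size 1-D empirical samples."""
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

def orbit(r, n=5000, x0=0.3):
    """Long orbit of the logistic map, standing in for an attractor sample."""
    xs, x = [], x0
    for _ in range(n + 100):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs[100:]                       # drop the transient

control = orbit(3.9)
shifted = [v + 0.5 for v in control]      # a pure shift of the same attractor
forced = orbit(3.99)                      # a genuinely different attractor

d_shift = wasserstein_1d(control, shifted)   # equals the shift, 0.5
d_force = wasserstein_1d(control, forced)    # strictly positive
```

Unlike a comparison of means, the distance is sensitive to how the whole invariant measure is rearranged, including the seldom-visited regions of the attractor emphasized in the abstract; for multivariate fields the paper's setting requires the full optimal-transport formulation rather than this sorted-sample shortcut.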
Philippe, Frederick L; Koestner, Richard; Lecours, Serge; Beaulieu-Pelletier, Genevieve; Bois, Katy
2011-12-01
The present research examined the role of autobiographical memory networks on negative emotional experiences. Results from 2 studies found support for an active but also discriminant role of autobiographical memories and their related networked memories on negative emotions. In addition, in line with self-determination theory, thwarting of the psychological needs for competence, autonomy, and relatedness was found to be the critical component of autobiographical memory affecting negative emotional experiences. Study 1 revealed that need thwarting in a specific autobiographical memory network related to the theme of loss was positively associated with depressive negative emotions, but not with other negative emotions. Study 2 showed within a prospective design a differential predictive validity between 2 autobiographical memory networks (an anger-related vs. a guilt-related memory) on situational anger reactivity with respect to unfair treatment. All of these results held after controlling for neuroticism (Studies 1 and 2), self-control (Study 2), and for the valence (Study 1) and emotions (Study 2) found in the measured autobiographical memory network. These findings highlight the ongoing emotional significance of representations of need thwarting in autobiographical memory networks. (c) 2011 APA, all rights reserved.
Information Processing in Cognition Process and New Artificial Intelligent Systems
NASA Astrophysics Data System (ADS)
Zheng, Nanning; Xue, Jianru
In this chapter, we discuss, in depth, visual information processing and a new artificial intelligent (AI) system that is based upon cognitive mechanisms. The relationship between a general model of intelligent systems and cognitive mechanisms is described, and in particular we explore visual information processing with selective attention. We also discuss a methodology for studying the new AI system and propose some important basic research issues that have emerged in the intersecting fields of cognitive science and information science. To this end, a new scheme for associative memory and a new architecture for an AI system with attractors of chaos are addressed.
A Mathematical Model of Demand-Supply Dynamics with Collectability and Saturation Factors
NASA Astrophysics Data System (ADS)
Li, Y. Charles; Yang, Hong
We introduce a mathematical model of the dynamics of demand and supply incorporating collectability and saturation factors. Our analysis shows that when the fluctuation of the determinants of demand and supply is strong enough, there is chaos in the demand-supply dynamics. Our numerical simulation shows that such a chaos is not an attractor (i.e., the dynamics does not approach the chaos); instead a periodic attractor (of period 3 under the Poincaré period map) exists near the chaos and coexists with another periodic attractor (of period 1 under the Poincaré period map) near the market equilibrium. Outside the basins of attraction of the two periodic attractors, the dynamics approaches infinity, indicating market irrational exuberance or flash crash. The period-3 attractor represents the product's market cycle of growth and recession, while the period-1 attractor near the market equilibrium represents the regular fluctuation of the product's market. Thus our model captures more market phenomena beyond Marshall's market equilibrium. When the fluctuation of the determinants of demand and supply is strong enough, a three-leaf danger zone exists where the basins of attraction of all attractors intertwine and fractal basin boundaries are formed. Small perturbations in the danger zone can lead to very different attractors. That is, small perturbations in the danger zone can cause the market to experience oscillation near market equilibrium, a large growth and recession cycle, or irrational exuberance or flash crash.
Non-linear principal component analysis applied to Lorenz models and to North Atlantic SLP
NASA Astrophysics Data System (ADS)
Russo, A.; Trigo, R. M.
2003-04-01
A non-linear generalisation of Principal Component Analysis (PCA), denoted Non-Linear Principal Component Analysis (NLPCA), is introduced and applied to the analysis of three data sets. Non-Linear Principal Component Analysis allows for the detection and characterisation of low-dimensional non-linear structure in multivariate data sets. This method is implemented using a 5-layer feed-forward neural network introduced originally in the chemical engineering literature (Kramer, 1991). The method is described and details of its implementation are addressed. Non-Linear Principal Component Analysis is first applied to a data set sampled from the Lorenz attractor (1963). It is found that the NLPCA approximations are more representative of the data than are the corresponding PCA approximations. The same methodology was applied to the less known Lorenz attractor (1984); however, the results obtained were not as good as those attained with the famous `Butterfly' attractor. Further work with this model is underway in order to assess whether NLPCA techniques can be more representative of the data characteristics than are the corresponding PCA approximations. The application of NLPCA to relatively `simple' dynamical systems, such as those proposed by Lorenz, is well understood. However, the application of NLPCA to a large climatic data set is much more challenging. Here, we have applied NLPCA to the sea level pressure (SLP) field for the entire North Atlantic area, and the results show a slight increase in the associated explained variance. Finally, directions for future work are presented.
On the Connectedness of Attractors for Dynamical Systems
NASA Astrophysics Data System (ADS)
Gobbino, Massimo; Sardella, Mirko
1997-01-01
For a dynamical system on a connected metric space X, the global attractor (when it exists) is connected provided that either the semigroup is time-continuous or X is locally connected. Moreover, there exists an example of a dynamical system on a connected metric space which admits a disconnected global attractor.
Flattening Property and the Existence of Global Attractors in Banach Space
NASA Astrophysics Data System (ADS)
Aris, Naimah; Maharani, Sitti; Jusmawati, Massalesse; Nurwahyu, Budi
2018-03-01
This paper analyses the existence of a global attractor in an infinite-dimensional system using the flattening property. We first show the existence of the global attractor in a complete metric space using the ω-limit compactness concept together with measure of non-compactness methods. We then show that ω-limit compactness is equivalent to the flattening property in a Banach space. If we can prove that there exists an absorbing set in the system and that the flattening property holds, then the global attractor exists in the system.
NASA Astrophysics Data System (ADS)
d'Onofrio, Alberto; Caravagna, Giulio; de Franciscis, Sebastiano
2018-02-01
In this work we consider, from a statistical mechanics point of view, the effects of bounded stochastic perturbations of the protein decay rate for a bistable biomolecular network module. Namely, we consider perturbations of the protein decay/binding rate constant (DBRC) in a circuit modeling the positive feedback of a transcription factor (TF) on its own synthesis. The DBRC models both the spontaneous degradation of the TF and its binding to other unknown biomolecular factors or drugs. We show that bounded perturbations of the DBRC preserve the positivity of the parameter value (and also its limited variation), and induce effects of interest. First, the noise amplitude induces a first-order phase transition. This is of interest since the system under study neither has spatial components nor is composed of multiple interacting networks. In particular, we observe that the system passes from two stochastic attractors to a unique one, and vice versa. This behavior is different from noise-induced transitions (also termed phenomenological bifurcations), where a unique stochastic attractor changes its shape depending on the values of a parameter. Moreover, we observe irreversible jumps as a consequence of the above-mentioned phase transition. We show that the illustrated mechanism holds for general models with the same deterministic hysteresis bifurcation structure. Finally, we illustrate the possible implications of our findings for the intracellular pharmacodynamics of drugs delivered by continuous infusion.
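A minimal sketch of such a module (all parameter values and the noise model below are illustrative assumptions, not taken from the paper): a TF activating its own synthesis through a Hill term, with the decay/binding rate modulated by a bounded noise of the sine-of-Wiener-process type, which keeps the rate strictly positive. The deterministic circuit is bistable.

```python
import math
import random

def simulate(x0, amp=0.0, seed=1, dt=0.01, n=20000,
             k=2.0, K=1.0, b=0.05, d0=1.0):
    """Euler integration of dx/dt = k*x^2/(K^2 + x^2) + b - d(t)*x, with
    bounded noise d(t) = d0*(1 + amp*sin(W_t)), |amp| < 1 (illustrative model)."""
    rng = random.Random(seed)
    x, w = x0, 0.0
    traj = []
    for _ in range(n):
        d = d0 * (1.0 + amp * math.sin(w))   # stays in [d0*(1-amp), d0*(1+amp)]
        x += dt * (k * x * x / (K * K + x * x) + b - d * x)
        w += math.sqrt(dt) * rng.gauss(0.0, 1.0)   # Wiener process driving the noise
        traj.append(x)
    return traj

low = simulate(0.1)[-1]          # deterministic run settling on the low state
high = simulate(2.0)[-1]         # deterministic run settling on the high state
noisy = simulate(2.0, amp=0.5)   # bounded-noise run: rate stays in [0.5, 1.5]
```

With amp = 0 the two runs settle on distinct stable states (roughly 0.06 and 1.3 for these parameters), confirming bistability; with amp = 0.5 the decay rate wanders within [0.5, 1.5], so the perturbed parameter never loses positivity and the protein level stays positive and bounded, consistent with the boundedness emphasized in the abstract.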
Long-lasting desynchronization in rat hippocampal slice induced by coordinated reset stimulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tass, P. A.; Barnikol, U. B.; Department of Stereotaxic and Functional Neurosurgery, University of Cologne, D-50931 Cologne
2009-07-15
In computational models it has been shown that appropriate stimulation protocols may reshape the connectivity pattern of neural or oscillator networks with synaptic plasticity in a way that the network learns or unlearns strong synchronization. The underlying mechanism is that the network is shifted from one attractor to another, causing long-lasting stimulation effects that persist after the cessation of stimulation. Here we study long-lasting effects of multisite electrical stimulation in a rat hippocampal slice rendered epileptic by magnesium withdrawal. We show that desynchronizing coordinated reset stimulation causes a long-lasting desynchronization between hippocampal neuronal populations, together with a widespread decrease in the amplitude of the epileptiform activity. In contrast, periodic stimulation induces a long-lasting increase in both synchronization and amplitude.
From network heterogeneities to familiarity detection and hippocampal memory management
Wang, Jane X.; Poe, Gina; Zochowski, Michal
2009-01-01
Hippocampal-neocortical interactions are key to the rapid formation of novel associative memories in the hippocampus and consolidation to long term storage sites in the neocortex. We investigated the role of network correlates during information processing in hippocampal-cortical networks. We found that changes in the intrinsic network dynamics due to the formation of structural network heterogeneities alone act as a dynamical and regulatory mechanism for stimulus novelty and familiarity detection, thereby controlling memory management in the context of memory consolidation. This network dynamic, coupled with an anatomically established feedback between the hippocampus and the neocortex, recovered heretofore unexplained properties of neural activity patterns during memory management tasks which we observed during sleep in multiunit recordings from behaving animals. Our simple dynamical mechanism shows an experimentally matched progressive shift of memory activation from the hippocampus to the neocortex and thus provides the means to achieve an autonomous off-line progression of memory consolidation. PMID:18999453
Mnemonic training reshapes brain networks to support superior memory
Dresler, Martin; Shirer, William R.; Konrad, Boris N.; Müller, Nils C.J.; Wagner, Isabella C.; Fernández, Guillén; Czisch, Michael; Greicius, Michael D.
2017-01-01
Memory skills differ strongly across the general population; however, little is known about the brain characteristics supporting superior memory performance. Here, we assess the functional brain network organization of 23 of the world's most successful memory athletes and matched controls with fMRI during both a task-free resting-state baseline and active memory encoding. We demonstrate that, in a group of naïve controls, functional connectivity changes induced by six weeks of mnemonic training were correlated with the network organization that distinguishes athletes from controls. During rest, this effect was mainly driven by connections between, rather than within, the visual, medial temporal lobe, and default mode networks, whereas during the task it was driven by connectivity within these networks. Similarity with memory athlete connectivity patterns predicted memory improvements up to 4 months after training. In conclusion, mnemonic training drives distributed rather than regional changes, reorganizing the brain's functional network organization to enable superior memory performance. PMID:28279356
Stabilizing embedology: Geometry-preserving delay-coordinate maps
NASA Astrophysics Data System (ADS)
Eftekhari, Armin; Yap, Han Lun; Wakin, Michael B.; Rozell, Christopher J.
2018-02-01
Delay-coordinate mapping is an effective and widely used technique for reconstructing and analyzing the dynamics of a nonlinear system based on time-series outputs. The efficacy of delay-coordinate mapping has long been supported by Takens' embedding theorem, which guarantees that delay-coordinate maps use the time-series output to provide a reconstruction of the hidden state space that is a one-to-one embedding of the system's attractor. While this topological guarantee ensures that distinct points in the reconstruction correspond to distinct points in the original state space, it does not characterize the quality of this embedding or illuminate how the specific parameters affect the reconstruction. In this paper, we extend Takens' result by establishing conditions under which delay-coordinate mapping is guaranteed to provide a stable embedding of a system's attractor. Beyond only preserving the attractor topology, a stable embedding preserves the attractor geometry by ensuring that distances between points in the state space are approximately preserved. In particular, we find that delay-coordinate mapping stably embeds an attractor of a dynamical system if the stable rank of the system is large enough to be proportional to the dimension of the attractor. The stable rank reflects the relation between the sampling interval and the number of delays in delay-coordinate mapping. Our theoretical findings give guidance to choosing system parameters, echoing the tradeoff between irrelevancy and redundancy that has been heuristically investigated in the literature. Our initial result is stated for attractors that are smooth submanifolds of Euclidean space, with extensions provided for the case of strange attractors.
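The construction itself is easy to state in code. The sketch below is a generic delay-embedding helper, not the authors' implementation; the cosine signal and the lag are arbitrary choices for illustration:

```python
import numpy as np

def delay_embed(series, num_delays, lag):
    """Map a scalar series s(t) to vectors [s(t), s(t-lag), ..., s(t-(m-1)*lag)]."""
    m = num_delays
    N = len(series) - (m - 1) * lag
    # Column k holds the series delayed by k*lag samples.
    return np.column_stack([series[(m - 1 - k) * lag:(m - 1 - k) * lag + N]
                            for k in range(m)])

# A pure cosine observed from a circular "attractor"; with a lag near a
# quarter period, two delay coordinates reconstruct the circle geometry.
t = np.linspace(0.0, 20.0 * np.pi, 4000)
X = delay_embed(np.cos(t), num_delays=2, lag=100)
```

Here the reconstruction is nearly isometric to the original circle; a poorly chosen lag (e.g. lag=1) would collapse the points onto a thin diagonal strip, which is precisely the geometric degradation that a stability (geometry-preserving) condition is meant to rule out.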
Sun, Felicia W; Stepanovic, Michael R; Andreano, Joseph; Barrett, Lisa Feldman; Touroutoglou, Alexandra; Dickerson, Bradford C
2016-09-14
Decline in cognitive skills, especially in memory, is often viewed as part of "normal" aging. Yet some individuals "age better" than others. Building on prior research showing that cortical thickness in one brain region, the anterior midcingulate cortex, is preserved in older adults with memory performance abilities equal to or better than those of people 20-30 years younger (i.e., "superagers"), we examined the structural integrity of two large-scale intrinsic brain networks in superaging: the default mode network, typically engaged during memory encoding and retrieval tasks, and the salience network, typically engaged during attention, motivation, and executive function tasks. We predicted that superagers would have preserved cortical thickness in critical nodes in these networks. We defined superagers (60-80 years old) based on their performance compared to young adults (18-32 years old) on the California Verbal Learning Test Long Delay Free Recall test. We found regions within the networks of interest where the cerebral cortex of superagers was thicker than that of typical older adults, and where superagers were anatomically indistinguishable from young adults; hippocampal volume was also preserved in superagers. Within the full group of older adults, thickness of a number of regions, including the anterior temporal cortex, rostral medial prefrontal cortex, and anterior midcingulate cortex, correlated with memory performance, as did the volume of the hippocampus. These results indicate older adults with youthful memory abilities have youthful brain regions in key paralimbic and limbic nodes of the default mode and salience networks that support attentional, executive, and mnemonic processes subserving memory function. Memory performance typically declines with age, as does cortical structural integrity, yet some older adults maintain youthful memory. 
We tested the hypothesis that superagers (older individuals with youthful memory performance) would exhibit preserved neuroanatomy in key brain networks subserving memory. We found that superagers not only perform similarly to young adults on memory testing, they also do not show the typical patterns of brain atrophy in certain regions. These regions are contained largely within two major intrinsic brain networks: the default mode network, implicated in memory encoding, storage, and retrieval, and the salience network, associated with attention and executive processes involved in encoding and retrieval. Preserved neuroanatomical integrity in these networks is associated with better memory performance among older adults. Copyright © 2016 Sun, Stepanovic et al.
Random attractor of non-autonomous stochastic Boussinesq lattice system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Min, E-mail: zhaomin1223@126.com; Zhou, Shengfan, E-mail: zhoushengfan@yahoo.com
2015-09-15
In this paper, we first consider the existence of a tempered random attractor for a second-order non-autonomous stochastic lattice dynamical system of nonlinear Boussinesq equations affected by time-dependent coupled coefficients, deterministic forces, and multiplicative white noise. Then, we establish the upper semicontinuity of random attractors as the intensity of the noise approaches zero.
NASA Astrophysics Data System (ADS)
Denis-le Coarer, Florian; Quirce, Ana; Valle, Angel; Pesquera, Luis; Rodríguez, Miguel A.; Panajotov, Krassimir; Sciamanna, Marc
2018-03-01
We present experimental and theoretical results on noise-induced attractor hopping between dynamical states found in a single-transverse-mode vertical-cavity surface-emitting laser (VCSEL) subject to parallel optical injection. These transitions involve dynamical states with different polarizations of the light emitted by the VCSEL. We report an experimental map identifying, in the injected power-frequency detuning plane, regions where attractor hopping between two, or even three, different states occurs. The transitions between these behaviors are characterized using residence time distributions. We find multistability regions with heavy-tailed residence time distributions that follow a power law with exponent -1.83 ± 0.17. Between these regions we find coherence enhancement of noise-induced attractor hopping, in which transitions between states occur regularly. Simulation results show that frequency detuning variations and spontaneous emission noise play a role in causing switching between attractors. We also find attractor hopping between chaotic states with different polarization properties. In this case, simulation results show that the spontaneous emission noise inherent to the VCSEL is enough to induce this hopping.
Complex network structure influences processing in long-term and short-term memory.
Vitevitch, Michael S; Chan, Kit Ying; Roodenrys, Steven
2012-07-01
Complex networks describe how entities in systems interact; the structure of such networks is argued to influence processing. One measure of network structure, the clustering coefficient C, measures the extent to which neighbors of a node are also neighbors of each other. Previous psycholinguistic experiments found that the C of phonological word-forms influenced retrieval from the mental lexicon (that portion of long-term memory dedicated to language) during the on-line recognition and production of spoken words. In the present study we examined how network structure influences other retrieval processes in long- and short-term memory. In a false-memory task (examining long-term memory), participants falsely recognized more words with low than high C. In a recognition memory task (examining veridical memories in long-term memory), participants correctly recognized more words with low than high C. However, participants in a serial recall task (examining redintegration in short-term memory) recalled lists comprised of high-C words more accurately than lists comprised of low-C words. These results demonstrate that network structure influences cognitive processes associated with several forms of memory, including lexical, long-term, and short-term memory.
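As a reminder of the metric involved, the local clustering coefficient C of a node is the fraction of pairs of its neighbors that are themselves connected. A minimal pure-Python sketch (the tiny word graph is invented for illustration, not taken from the study's stimuli):

```python
def clustering(adj, v):
    """Local clustering coefficient of node v in an undirected graph given
    as an adjacency dict: connected neighbor pairs / possible pairs."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i, u in enumerate(nbrs)
                for w in nbrs[i + 1:] if w in adj[u])
    return 2.0 * links / (k * (k - 1))

# Toy phonological neighborhood: edges join similar word-forms.
adj = {"cat": ["bat", "hat", "cot"],
       "bat": ["cat", "hat"],
       "hat": ["cat", "bat"],
       "cot": ["cat"]}
c = clustering(adj, "cat")  # bat-hat linked; bat-cot and hat-cot are not -> 1/3
```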
Free-Energy-Based Design Policy for Robust Network Control against Environmental Fluctuation.
Iwai, Takuya; Kominami, Daichi; Murata, Masayuki; Yomo, Tetsuya
2015-01-01
Bioinspired network control is a promising approach for realizing robust network control. It relies on a probabilistic mechanism, composed of positive and negative feedback, that allows the system to eventually stabilize on the best solution. When the best solution fails due to environmental fluctuation, the system cannot maintain its function until it finds another solution. To prevent this temporary loss of function, the system should prepare several candidate solutions and stochastically select an available one from among them. However, most bioinspired network controls are not designed with this issue in mind. In this paper, we propose a thermodynamics-based design policy that allows systems to retain an appropriate degree of randomness depending on the degree of environmental fluctuation, which prepares the system for the occurrence of environmental fluctuation. Furthermore, we verify the design policy through simulation experiments with multipath routing based on an attractor selection model.
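The attractor-selection idea can be sketched as follows. In this toy version, the winner-take-all form of the deterministic term, the noise level, and all constants are assumptions for illustration, not the authors' model: a high "activity" signal makes the deterministic dynamics dominate and lock onto one path, while low activity lets noise drive a random search for an alternative:

```python
import numpy as np

def step(x, activity, rng, dt=0.1, noise=0.3):
    """One Euler step of a toy attractor-selection rule: the deterministic
    term reinforces the currently best path, scaled by 'activity'; the
    noise term dominates whenever activity is low."""
    target = (x == x.max()).astype(float)          # attractor: current best path
    x = x + activity * (target - x) * dt \
          + noise * np.sqrt(dt) * rng.standard_normal(x.size)
    return np.clip(x, 0.0, 1.5)

rng = np.random.default_rng(1)
x = rng.random(3)                 # preference levels for three candidate paths
for _ in range(500):              # high activity: the system selects one path
    x = step(x, activity=2.0, rng=rng)
```

Dropping activity toward zero leaves only the noise term, so the preferences wander until some path again performs well and activity (fed back from measured performance in the full model) rises; that stochastic re-selection is what the design policy tunes.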
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Huys, Otti, E-mail: otti.dhuys@phy.duke.edu; Haynes, Nicholas D.; Lohmann, Johannes
Autonomous Boolean networks are commonly used to model the dynamics of gene regulatory networks and allow for the prediction of stable dynamical attractors. However, most models do not account for time delays along the network links and noise, which are crucial features of real biological systems. Concentrating on two paradigmatic motifs, the toggle switch and the repressilator, we develop an experimental testbed that explicitly includes both inter-node time delays and noise using digital logic elements on field-programmable gate arrays. We observe transients that last millions to billions of characteristic time scales and scale exponentially with the time delays between nodes, a phenomenon known as super-transient scaling. We develop a hybrid model that includes time delays along network links and allows for stochastic variation in the delays. Using this model, we explain the observed super-transient scaling of both motifs and recreate the experimentally measured transient distributions.
Chimeras and clusters in networks of hyperbolic chaotic oscillators
NASA Astrophysics Data System (ADS)
Cano, A. V.; Cosenza, M. G.
2017-03-01
We show that chimera states, where differentiated subsets of synchronized and desynchronized dynamical elements coexist, can emerge in networks of hyperbolic chaotic oscillators subject to global interactions. As local dynamics we employ Lozi maps, which possess hyperbolic chaotic attractors. We consider a globally coupled system of these maps and use two statistical quantities to describe its collective behavior: the average fraction of elements belonging to clusters and the average standard deviation of state variables. Chimera states, clusters, complete synchronization, and incoherence are thus characterized on the space of parameters of the system. We find that chimera states are related to the formation of clusters in the system. In addition, we show that chimera states arise for a sufficiently long range of interactions in nonlocally coupled networks of these maps. Our results reveal that, under some circumstances, hyperbolicity does not impede the formation of chimera states in networks of coupled chaotic systems, as had previously been hypothesized.
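The basic numerical object is easy to reproduce. Below is a minimal sketch of globally coupled Lozi maps using a standard mean-field coupling form; the Lozi parameters are the classic values and the coupling strengths are assumed for illustration, not necessarily those of the paper. The standard deviation of the states serves as a crude incoherence measure:

```python
import numpy as np

A, B = 1.7, 0.5  # classic Lozi parameters: a hyperbolic chaotic attractor

def lozi(x, y):
    return 1.0 - A * np.abs(x) + y, B * x

def iterate(eps, N=100, steps=2000, seed=0):
    """Globally (mean-field) coupled Lozi maps with coupling strength eps."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-0.1, 0.1, N)
    y = rng.uniform(-0.1, 0.1, N)
    for _ in range(steps):
        fx, fy = lozi(x, y)
        x = (1.0 - eps) * fx + eps * fx.mean()
        y = (1.0 - eps) * fy + eps * fy.mean()
    return x

sigma_weak = iterate(eps=0.05).std()    # weak coupling: incoherent spread
sigma_strong = iterate(eps=0.8).std()   # strong coupling: synchronization
```

Intermediate coupling is where cluster and chimera-like behavior lives; the paper's two order parameters (cluster fraction and average standard deviation) refine this crude std-based distinction.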
Leavitt, Victoria M; Wylie, Glenn R; Girgis, Peter A; DeLuca, John; Chiaravalloti, Nancy D
2014-09-01
Identifying effective behavioral treatments to improve memory in persons with learning and memory impairment is a primary goal for neurorehabilitation researchers. Memory deficits are the most common cognitive symptom in multiple sclerosis (MS), and hold negative professional and personal consequences for people who are often in the prime of their lives when diagnosed. A 10-session behavioral treatment, the modified Story Memory Technique (mSMT), was studied in a randomized, placebo-controlled clinical trial. Behavioral improvements and increased fMRI activation were shown after treatment. Here, connectivity within the neural networks underlying memory function was examined with resting-state functional connectivity (RSFC) in a subset of participants from the clinical trial. We hypothesized that the treatment would result in increased integrity of connections within two primary memory networks of the brain, the hippocampal memory network, and the default network (DN). Seeds were placed in left and right hippocampus, and the posterior cingulate cortex. Increased connectivity was found between left hippocampus and cortical regions specifically involved in memory for visual imagery, as well as among critical hubs of the DN. These results represent the first evidence for efficacy of a behavioral intervention to impact the integrity of neural networks subserving memory functions in persons with MS.
Dopamine D1 signaling organizes network dynamics underlying working memory.
Roffman, Joshua L; Tanner, Alexandra S; Eryilmaz, Hamdi; Rodriguez-Thompson, Anais; Silverstein, Noah J; Ho, New Fei; Nitenson, Adam Z; Chonde, Daniel B; Greve, Douglas N; Abi-Dargham, Anissa; Buckner, Randy L; Manoach, Dara S; Rosen, Bruce R; Hooker, Jacob M; Catana, Ciprian
2016-06-01
Local prefrontal dopamine signaling supports working memory by tuning pyramidal neurons to task-relevant stimuli. Enabled by simultaneous positron emission tomography-magnetic resonance imaging (PET-MRI), we determined whether neuromodulatory effects of dopamine scale to the level of cortical networks and coordinate their interplay during working memory. Among network territories, mean cortical D1 receptor densities differed substantially but were strongly interrelated, suggesting cross-network regulation. Indeed, mean cortical D1 density predicted working memory-emergent decoupling of the frontoparietal and default networks, which respectively manage task-related and internal stimuli. In contrast, striatal D1 predicted opposing effects within these two networks but no between-network effects. These findings specifically link cortical dopamine signaling to network crosstalk that redirects cognitive resources to working memory, echoing neuromodulatory effects of D1 signaling on the level of cortical microcircuits.
Vanishing of local non-Gaussianity in canonical single field inflation
NASA Astrophysics Data System (ADS)
Bravo, Rafael; Mooij, Sander; Palma, Gonzalo A.; Pradenas, Bastián
2018-05-01
We study the production of observable primordial local non-Gaussianity in two opposite regimes of canonical single field inflation: attractor (standard single field slow-roll inflation) and non-attractor (ultra slow-roll inflation). In the attractor regime, the standard derivation of the bispectrum's squeezed limit using comoving coordinates gives the well-known Maldacena consistency relation f_NL = (5/12)(1 - n_s). On the other hand, in the non-attractor regime, the squeezed limit exhibits a substantial violation of this relation, given by f_NL = 5/2. In this work we argue that, independently of whether inflation is attractor or non-attractor, the size of the observable primordial local non-Gaussianity is predicted to be f_NL^obs = 0 (a result that was already understood to hold in the case of attractor models). To show this, we follow the use of the so-called Conformal Fermi Coordinates (CFC), recently introduced in the literature. These coordinates parametrize the local environment of inertial observers in a perturbed FRW spacetime, allowing one to identify and compute gauge-invariant quantities, such as n-point correlation functions. Concretely, we find that during inflation, after all the modes have exited the horizon, the squeezed limit of the 3-point correlation function of curvature perturbations vanishes in the CFC frame, regardless of the inflationary regime. We argue that such a cancellation should persist after inflation ends.
Cooperation in memory-based prisoner's dilemma game on interdependent networks
NASA Astrophysics Data System (ADS)
Luo, Chao; Zhang, Xiaolin; Liu, Hong; Shao, Rui
2016-05-01
Memory, or so-called experience, normally plays an important role in guiding human behavior in the real world and is essential for rational decisions made by individuals. Hence, when the evolutionary behaviors of players with bounded rationality are investigated, it is reasonable to assume that players in the system have limited memory. Besides, in order to unravel the intricate variability of complex systems in the real world and reach a highly integrative understanding of their dynamics, interdependent networks have in recent years received increasing attention as a comprehensive network structure. In this article, the evolution of cooperation in the memory-based prisoner's dilemma game (PDG) on interdependent networks composed of two coupled square lattices is studied. Herein, all or some of the players are endowed with finite memory, and we focus on the mutual influence of the memory effect and interdependent network reciprocity on cooperation in the spatial PDG. We show that the density of cooperation can be significantly promoted within an optimal region of memory length and interdependence strength. Furthermore, distinguished by whether they have memory ability or external links, each kind of player on the networks exhibits distinct evolutionary behavior. Our work could be helpful for understanding the emergence and maintenance of cooperation under the evolution of memory-based players on interdependent networks.
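To make the "memory-based" ingredient concrete, here is a deliberately stripped-down, single-lattice sketch (not the paper's coupled two-lattice model): players on a periodic square lattice keep a decaying average of the payoffs earned by each of their two strategies over roughly the last M rounds and re-adopt whichever strategy their memory rates higher. The payoff values, the memory rule, and the synchronous update scheme are all illustrative assumptions:

```python
import numpy as np

B_TEMPT = 1.3   # temptation payoff of the weak prisoner's dilemma (assumed)
M = 5           # memory length (assumed)

def payoffs(S):
    """Total PDG payoff of each site against its four neighbors on a
    periodic lattice. S[i, j] = 1 for a cooperator, 0 for a defector."""
    total = np.zeros_like(S, dtype=float)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nbr = np.roll(S, shift, axis=(0, 1))
        total += np.where(S == 1,
                          np.where(nbr == 1, 1.0, 0.0),        # C meets C / D
                          np.where(nbr == 1, B_TEMPT, 0.0))    # D meets C / D
    return total

rng = np.random.default_rng(0)
S = rng.integers(0, 2, (20, 20))
mem_c = np.zeros(S.shape)   # remembered payoff while cooperating
mem_d = np.zeros(S.shape)   # remembered payoff while defecting
for _ in range(100):
    P = payoffs(S)
    mem_c = np.where(S == 1, mem_c + (P - mem_c) / M, mem_c)
    mem_d = np.where(S == 0, mem_d + (P - mem_d) / M, mem_d)
    S = (mem_c >= mem_d).astype(int)    # re-adopt the better-remembered strategy
frac_coop = S.mean()
```

The paper's actual model adds the second lattice and an interdependence strength coupling the two; the point of the sketch is only how a finite-memory payoff record, rather than the last round alone, can drive strategy updates.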
A simplified computational memory model from information processing.
Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang
2016-11-23
This paper proposes a computational model of memory from an information-processing perspective. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent neurons or brain cortices on the basis of biology and graph theory, and we develop an intra-modular network with a modeling algorithm that maps nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular structure. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening with information-processing algorithms. The theoretical analysis and the simulation results show that the model accords with memory phenomena from an information-processing view.
Bifurcation from an invariant to a non-invariant attractor
NASA Astrophysics Data System (ADS)
Mandal, D.
2016-12-01
Switching dynamical systems are very common in many areas of physics and engineering. We consider a piecewise linear map that periodically switches among two or more different functional forms. We show that in such systems it is possible to have a border collision bifurcation in which the system transits from an invariant attractor to a non-invariant attractor.
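A minimal sketch of such a switching system (the particular slopes and offsets are arbitrary illustrative values, not taken from the paper): a piecewise linear map with a border at x = 0 whose parameters alternate periodically between two functional forms:

```python
def pw_linear(x, a, b, mu):
    """Piecewise linear map with a border at x = 0."""
    return a * x + mu if x < 0.0 else b * x + mu

def orbit(x0, phases, n=2000):
    """Iterate a map that cycles periodically through the functional
    forms in 'phases' (one (a, b, mu) triple per phase)."""
    x, xs = x0, []
    for i in range(n):
        a, b, mu = phases[i % len(phases)]
        x = pw_linear(x, a, b, mu)
        xs.append(x)
    return xs

# Two alternating contractive forms: the orbit settles on a period-2
# attractor with one point on each side of the border.
xs = orbit(0.1, [(0.5, -0.5, 0.2), (0.4, -0.6, -0.3)])
```

Sweeping one offset mu moves the periodic points toward the border at x = 0; the parameter value at which a point of the attractor touches the border is where a border collision bifurcation of the kind studied in the paper can occur.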
Dynamic Neural Networks Supporting Memory Retrieval
St. Jacques, Peggy L.; Kragel, Philip A.; Rubin, David C.
2011-01-01
How do separate neural networks interact to support complex cognitive processes such as remembrance of the personal past? Autobiographical memory (AM) retrieval recruits a consistent pattern of activation that potentially comprises multiple neural networks. However, it is unclear how such large-scale neural networks interact and are modulated by properties of the memory retrieval process. In the present functional MRI (fMRI) study, we combined independent component analysis (ICA) and dynamic causal modeling (DCM) to understand the neural networks supporting AM retrieval. ICA revealed four task-related components consistent with the previous literature: 1) Medial Prefrontal Cortex (PFC) Network, associated with self-referential processes, 2) Medial Temporal Lobe (MTL) Network, associated with memory, 3) Frontoparietal Network, associated with strategic search, and 4) Cingulooperculum Network, associated with goal maintenance. DCM analysis revealed that the medial PFC network drove activation within the system, consistent with the importance of this network to AM retrieval. Additionally, memory accessibility and recollection uniquely altered connectivity between these neural networks. Recollection modulated the influence of the medial PFC on the MTL network during elaboration, suggesting that greater connectivity among subsystems of the default network supports greater re-experience. In contrast, memory accessibility modulated the influence of frontoparietal and MTL networks on the medial PFC network, suggesting that ease of retrieval involves greater fluency among the multiple networks contributing to AM. These results show the integration between neural networks supporting AM retrieval and the modulation of network connectivity by behavior. PMID:21550407
Non-linguistic Conditions for Causativization as a Linguistic Attractor.
Nichols, Johanna
2017-01-01
An attractor, in complex systems theory, is any state that is more easily or more often entered or acquired than departed or lost; attractor states therefore accumulate more members than non-attractors, other things being equal. In the context of language evolution, linguistic attractors include sounds, forms, and grammatical structures that are prone to be selected when sociolinguistics and language contact make it possible for speakers to choose between competing forms. The reasons why an element is an attractor are linguistic (auditory salience, ease of processing, paradigm structure, etc.), but the factors that make selection possible and propagate selected items through the speech community are non-linguistic. This paper uses the consonants in personal pronouns to show what makes for an attractor and how selection and diffusion work, then presents a survey of several language families and areas showing that the derivational morphology of pairs of verbs like fear and frighten, or Turkish korkmak 'fear, be afraid' and korkutmak 'frighten, scare', or Finnish istua 'sit' and istuttaa 'seat (someone)', or Spanish sentarse 'sit down' and sentar 'seat (someone)' is susceptible to selection. Specifically, the Turkish and Finnish pattern, in which 'seat' is derived from 'sit' by the addition of a suffix, is an attractor and a favored target of selection. This selection occurs chiefly in sociolinguistic contexts of what is defined here as linguistic symbiosis, where languages mingle in speech, which in turn is favored by certain demographic, sociocultural, and environmental factors here termed frontier conditions. Evidence is surveyed from northern Eurasia, the Caucasus, North and Central America, and the Pacific, and from both modern and ancient languages, to raise the hypothesis that frontier conditions and symbiosis favor causativization.
NASA Astrophysics Data System (ADS)
Takamatsu, Atsuko
2006-11-01
Three-oscillator systems were constructed experimentally from plasmodia of the true slime mold Physarum polycephalum, an oscillatory amoeba-like unicellular organism, and their spatio-temporal patterns were investigated. Three typical spatio-temporal patterns were found: rotation (R), partial in-phase (PI), and partial anti-phase with double frequency (PA). In pattern R, phase differences between adjacent oscillators were almost 120°. In pattern PI, two oscillators were in phase and the third oscillator was in anti-phase against the two. In pattern PA, two oscillators were in anti-phase and the third oscillator showed frequency-doubled oscillation with small amplitude. In fact each pattern is not perfectly stable but quasi-stable, and, interestingly, the system shows spontaneous switching among the multiple quasi-stable patterns. Statistical analyses revealed a characteristic feature of the residence time of each pattern: the histograms have a Gamma-like distribution, but with a sharp peak and a tail on the long-period side. This suggests that the attractor of this system has a complex structure composed of at least three types of sub-attractors: a “Gamma attractor”, involving several Poisson processes; a “deterministic attractor”, in which the residence time is deterministic; and a “stable attractor”, in which each pattern is stable. When the coupling strength was small, only the Gamma attractor was observed, and switching among patterns R, PI, and PA almost always occurred via an asynchronous pattern named O. A conjecture is as follows: internal/external noise exposes the patterns R, PI, and PA coexisting around bifurcation points, which is observed as the Gamma attractor. As the coupling strength increases, the deterministic attractor appears, followed by the stable attractor, always accompanied by the Gamma attractor. The switching behavior could thus be caused by the persistent presence of the Gamma attractor.
Self-attraction into spinning eigenstates of a mobile wave source by its emission back-reaction
NASA Astrophysics Data System (ADS)
Labousse, Matthieu; Perrard, Stéphane; Couder, Yves; Fort, Emmanuel
2016-10-01
The back-reaction of a radiated wave on the emitting source is a general problem. In the most general case, back-reaction on moving wave sources depends on their whole history. Here we study a model system in which a pointlike source is piloted by its own memory-endowed wave field. Such a situation is implemented experimentally using a self-propelled droplet bouncing on a vertically vibrated liquid bath and driven by the waves it generates along its trajectory. The droplet and its associated wave field form an entity having an intrinsic dual particle-wave character. The wave field encodes in its interference structure the past trajectory of the droplet. In the present article we show that this object can self-organize into a spinning state in which the droplet possesses an orbiting motion without any external interaction. The rotation is driven by the wave-mediated attractive interaction of the droplet with its own past. The resulting "memory force" is investigated and characterized experimentally, numerically, and theoretically. Orbiting with a radius of curvature close to half a wavelength is shown to be a memory-induced dynamical attractor for the droplet's motion.
Hippocampal functional connectivity and episodic memory in early childhood
Riggins, Tracy; Geng, Fengji; Blankenship, Sarah L.; Redcay, Elizabeth
2016-01-01
Episodic memory relies on a distributed network of brain regions, with the hippocampus playing a critical and irreplaceable role. Few studies have examined how changes in this network contribute to episodic memory development early in life. The present study addressed this gap by examining relations between hippocampal functional connectivity and episodic memory in 4- and 6-year-old children (n=40). Results revealed similar hippocampal functional connectivity between age groups, which included lateral temporal regions, precuneus, and multiple parietal and prefrontal regions, and functional specialization along the longitudinal axis. Despite these similarities, developmental differences were also observed. Specifically, 3 (of 4) regions within the hippocampal memory network were positively associated with episodic memory in 6-year-old children, but negatively associated with episodic memory in 4-year-old children. In contrast, all 3 regions outside the hippocampal memory network were negatively associated with episodic memory in older children, but positively associated with episodic memory in younger children. These interactions are interpreted within an interactive specialization framework and suggest the hippocampus becomes functionally integrated with cortical regions that are part of the hippocampal memory network in adults and functionally segregated from regions unrelated to memory in adults, both of which are associated with age-related improvements in episodic memory ability. PMID:26900967
Hippocampal functional connectivity and episodic memory in early childhood.
Riggins, Tracy; Geng, Fengji; Blankenship, Sarah L; Redcay, Elizabeth
2016-06-01
Episodic memory relies on a distributed network of brain regions, with the hippocampus playing a critical and irreplaceable role. Few studies have examined how changes in this network contribute to episodic memory development early in life. The present study addressed this gap by examining relations between hippocampal functional connectivity and episodic memory in 4- and 6-year-old children (n=40). Results revealed similar hippocampal functional connectivity between age groups, which included lateral temporal regions, precuneus, and multiple parietal and prefrontal regions, and functional specialization along the longitudinal axis. Despite these similarities, developmental differences were also observed. Specifically, 3 (of 4) regions within the hippocampal memory network were positively associated with episodic memory in 6-year-old children, but negatively associated with episodic memory in 4-year-old children. In contrast, all 3 regions outside the hippocampal memory network were negatively associated with episodic memory in older children, but positively associated with episodic memory in younger children. These interactions are interpreted within an interactive specialization framework and suggest the hippocampus becomes functionally integrated with cortical regions that are part of the hippocampal memory network in adults and functionally segregated from regions unrelated to memory in adults, both of which are associated with age-related improvements in episodic memory ability. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Memory and betweenness preference in temporal networks induced from time series
NASA Astrophysics Data System (ADS)
Weng, Tongfeng; Zhang, Jie; Small, Michael; Zheng, Rui; Hui, Pan
2017-02-01
We construct temporal networks from time series by unfolding the temporal information into an additional topological dimension of the networks. This allows us to introduce memory entropy analysis to unravel the memory effect within the considered signal. We find distinct patterns in the entropy growth rate of the aggregate network at different memory scales for time series with different dynamics, ranging from white noise, 1/f noise, and autoregressive processes to periodic and chaotic dynamics. Interestingly, for a chaotic time series, an exponential scaling emerges in the memory entropy analysis. We demonstrate that the memory exponent can successfully characterize bifurcation phenomena and differentiate the human cardiac system in healthy and pathological states. Moreover, we show that betweenness preference analysis of these temporal networks can further characterize dynamical systems and separate distinct electrocardiogram recordings. Our work explores the memory effect and betweenness preference in temporal networks constructed from time series data, providing a new perspective for understanding the underlying dynamical systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saiki, Yoshitaka, E-mail: yoshi.saiki@r.hit-u.ac.jp; Yamada, Michio; Chian, Abraham C.-L.
The unstable periodic orbits (UPOs) embedded in a chaotic attractor after an attractor merging crisis (MC) are classified into three subsets, and employed to reconstruct chaotic saddles in the Kuramoto-Sivashinsky equation. It is shown that in the post-MC regime, the two chaotic saddles evolved from the two coexisting chaotic attractors before crisis can be reconstructed from the UPOs embedded in the pre-MC chaotic attractors. The reconstruction also involves the detection of the mediating UPO responsible for the crisis, and the UPOs created after crisis that fill the gap regions of the chaotic saddles. We show that the gap UPOs originate from saddle-node, period-doubling, and pitchfork bifurcations inside the periodic windows in the post-MC chaotic region of the bifurcation diagram. The chaotic attractor in the post-MC regime is found to be the closure of gap UPOs.
Multistability in Chua's circuit with two stable node-foci
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, B. C.; Wang, N.; Xu, Q.
2016-04-15
Using only a one-stage op-amp-based negative impedance converter realization, a simplified Chua's diode with positive outer-segment slope is introduced, from which an improved Chua's circuit realization with a simpler circuit structure is designed. The improved Chua's circuit has an identical mathematical model but a completely different nonlinearity from the classical Chua's circuit; from it, multiple attractors, including coexisting point attractors, a limit cycle, a double-scroll chaotic attractor, and coexisting chaotic spiral attractors, are numerically simulated and experimentally captured. Furthermore, with dimensionless Chua's equations, the dynamical properties of the Chua's system are studied, including equilibria and stability, phase portraits, bifurcation diagrams, Lyapunov exponent spectra, and attraction basins. The results indicate that the system has two symmetric stable nonzero node-foci in global adjusting-parameter regions and exhibits the unusual and striking dynamical behavior of multiple attractors with multistability.
Saiki, Yoshitaka; Yamada, Michio; Chian, Abraham C-L; Miranda, Rodrigo A; Rempel, Erico L
2015-10-01
The unstable periodic orbits (UPOs) embedded in a chaotic attractor after an attractor merging crisis (MC) are classified into three subsets, and employed to reconstruct chaotic saddles in the Kuramoto-Sivashinsky equation. It is shown that in the post-MC regime, the two chaotic saddles evolved from the two coexisting chaotic attractors before crisis can be reconstructed from the UPOs embedded in the pre-MC chaotic attractors. The reconstruction also involves the detection of the mediating UPO responsible for the crisis, and the UPOs created after crisis that fill the gap regions of the chaotic saddles. We show that the gap UPOs originate from saddle-node, period-doubling, and pitchfork bifurcations inside the periodic windows in the post-MC chaotic region of the bifurcation diagram. The chaotic attractor in the post-MC regime is found to be the closure of gap UPOs.
Oyarzún, Javiera P; Morís, Joaquín; Luque, David; de Diego-Balaguer, Ruth; Fuentemilla, Lluís
2017-08-09
System memory consolidation is conceptualized as an active process whereby newly encoded memory representations are strengthened through selective memory reactivation during sleep. However, our learning experience is highly overlapping in content (i.e., shares common elements), and memories of these events are organized in an intricate network of overlapping associated events. It remains to be explored whether and how selective memory reactivation during sleep has an impact on these overlapping memories acquired during awake time. Here, we test in a group of adult women and men the prediction that selective memory reactivation during sleep entails the reactivation of associated events and that this may lead the brain to adaptively regulate whether these associated memories are strengthened or pruned from memory networks on the basis of their relative associative strength with the shared element. Our findings demonstrate the existence of efficient regulatory neural mechanisms governing how complex memory networks are shaped during sleep as a function of their associative memory strength. SIGNIFICANCE STATEMENT Numerous studies have demonstrated that system memory consolidation is an active, selective, and sleep-dependent process in which only subsets of new memories become stabilized through their reactivation. However, the learning experience is highly overlapping in content, and thus events are encoded in an intricate network of related memories. It remains to be explored whether and how memory reactivation has an impact on overlapping memories acquired during awake time. Here, we show that sleep memory reactivation promotes strengthening and weakening of overlapping memories based on their associative memory strength. These results suggest the existence of an efficient regulatory neural mechanism that avoids the formation of cluttered memory representations of multiple events and promotes stabilization of complex memory networks.
Copyright © 2017 the authors 0270-6474/17/377748-11$15.00/0.
Low-dimensional attractor for neural activity from local field potentials in optogenetic mice
Oprisan, Sorinel A.; Lynn, Patrick E.; Tompa, Tamas; Lavin, Antonieta
2015-01-01
We used optogenetic mice to investigate possible nonlinear responses of the medial prefrontal cortex (mPFC) local network to light stimuli delivered by a 473 nm laser through an optical fiber. Every 2 s, a brief 10 ms light pulse was applied and the local field potentials (LFPs) were recorded with a 10 kHz sampling rate. The experiment was repeated 100 times, and we only retained and analyzed data from six animals that showed stable and repeatable responses to optical stimulation. The presence of nonlinearity in our data was checked using the null hypothesis that the data were linearly correlated in the temporal domain, but were random otherwise. For each trial, 100 surrogate data sets were generated, and both time reversal asymmetry and false nearest neighbor (FNN) statistics were used as discriminating statistics for the null hypothesis. We found that nonlinearity is present in all LFP data. The first 0.5 s of each 2 s LFP recording was dominated by the transient response of the network. For each trial, we used the last 1.5 s of steady activity to measure the phase resetting induced by the brief 10 ms light stimulus. After correcting the LFPs for the effect of phase resetting, additional preprocessing was carried out using dendrograms to identify "similar" groups among LFP trials. We found that the steady dynamics of the mPFC in response to light stimuli could be reconstructed in a three-dimensional phase space with topologically similar "8"-shaped attractors across different animals. Our results also open the possibility of designing a low-dimensional model for optical stimulation of the mPFC local network. PMID:26483665
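The surrogate-data test described above can be illustrated with a simplified sketch. Here the surrogates are plain shuffles (testing an IID null) rather than the correlation-preserving surrogates used in the study, the time-reversal-asymmetry statistic is one common third-order choice, and the sawtooth example and all parameter values are illustrative assumptions, not the LFP data:

```python
import random

def time_reversal_asymmetry(x, tau=1):
    """Third-order statistic that vanishes for time-reversible (e.g. linear
    Gaussian) processes: the mean of (x[t+tau] - x[t])**3."""
    diffs = [(x[t + tau] - x[t]) ** 3 for t in range(len(x) - tau)]
    return sum(diffs) / len(diffs)

def surrogate_test(x, n_surr=100, seed=0):
    """Fraction of shuffle surrogates whose |statistic| reaches that of the
    data. Shuffling tests an IID null; the study used surrogates that also
    preserve linear autocorrelation, which is a stricter null."""
    rng = random.Random(seed)
    t0 = abs(time_reversal_asymmetry(x))
    count = 0
    for _ in range(n_surr):
        s = x[:]
        rng.shuffle(s)   # destroys temporal structure, keeps the distribution
        if abs(time_reversal_asymmetry(s)) >= t0:
            count += 1
    return count / n_surr  # small value -> reject the null (nonlinearity)

# A sawtooth (slow rise, fast fall) is strongly time-asymmetric:
x = [(t % 10) / 10.0 for t in range(500)]
p = surrogate_test(x)
```

For the sawtooth, the statistic is dominated by the rare large negative drops, so shuffled surrogates essentially never match it and the null is rejected.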
Hippocampal Network Modularity Is Associated With Relational Memory Dysfunction in Schizophrenia.
Avery, Suzanne N; Rogers, Baxter P; Heckers, Stephan
2018-05-01
Functional dysconnectivity has been proposed as a major pathophysiological mechanism for cognitive dysfunction in schizophrenia. The hippocampus is a focal point of dysconnectivity in schizophrenia, with decreased hippocampal functional connectivity contributing to the marked memory deficits observed in patients. Normal memory function relies on the interaction of complex corticohippocampal networks. However, only recent technological advances have enabled the large-scale exploration of functional networks with accuracy and precision. We investigated the modularity of hippocampal resting-state functional networks in a sample of 45 patients with schizophrenia spectrum disorders and 38 healthy control subjects. Modularity was calculated for two distinct functional networks: a core hippocampal-medial temporal lobe cortex network and an extended hippocampal-cortical network. As hippocampal function differs along its longitudinal axis, follow-up analyses examined anterior and posterior networks separately. To explore effects of resting network function on behavior, we tested associations between modularity and relational memory ability. Age, sex, handedness, and parental education were similar between groups. Network modularity was lower in schizophrenia patients, especially in the posterior hippocampal network. Schizophrenia patients also showed markedly lower relational memory ability compared with control subjects. We found a distinct brain-behavior relationship in schizophrenia that differed from control subjects by network and anterior/posterior division: while relational memory in control subjects was associated with anterior hippocampal-cortical modularity, schizophrenia patients showed an association with posterior hippocampal-medial temporal lobe cortex network modularity. Our findings support a model of abnormal resting-state corticohippocampal network coherence in schizophrenia, which may contribute to relational memory deficits.
Copyright © 2018 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
A simplified computational memory model from information processing
Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang
2016-01-01
This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network built by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent neurons or brain cortices based on biological and graph theories, and an intra-modular network is developed with a modeling algorithm that maps nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening with information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information-processing view. PMID:27876847
Szathmáry, E
2000-01-01
Replicators of interest in chemistry, biology and culture are briefly surveyed from a conceptual point of view. Systems with limited heredity have only a limited evolutionary potential because the number of available types is too low. Chemical cycles, such as the formose reaction, are holistic replicators since replication is not based on the successive addition of modules. Replicator networks consisting of catalytic molecules (such as reflexively autocatalytic sets of proteins, or reproducing lipid vesicles) are hypothetical ensemble replicators, and their functioning rests on attractors of their dynamics. Ensemble replicators suffer from the paradox of specificity: while their abstract feasibility seems to require a high number of molecular types, the harmful effect of side reactions calls for a small system size. No satisfactory solution to this problem is known. Phenotypic replicators do not pass on their genotypes, only some aspects of the phenotype are transmitted. Phenotypic replicators with limited heredity include genetic membranes, prions and simple memetic systems. Memes in human culture are unlimited hereditary, phenotypic replicators, based on language. The typical path of evolution goes from limited to unlimited heredity, and from attractor-based to modular (digital) replicators. PMID:11127914
Szathmáry, E
2000-11-29
Replicators of interest in chemistry, biology and culture are briefly surveyed from a conceptual point of view. Systems with limited heredity have only a limited evolutionary potential because the number of available types is too low. Chemical cycles, such as the formose reaction, are holistic replicators since replication is not based on the successive addition of modules. Replicator networks consisting of catalytic molecules (such as reflexively autocatalytic sets of proteins, or reproducing lipid vesicles) are hypothetical ensemble replicators, and their functioning rests on attractors of their dynamics. Ensemble replicators suffer from the paradox of specificity: while their abstract feasibility seems to require a high number of molecular types, the harmful effect of side reactions calls for a small system size. No satisfactory solution to this problem is known. Phenotypic replicators do not pass on their genotypes, only some aspects of the phenotype are transmitted. Phenotypic replicators with limited heredity include genetic membranes, prions and simple memetic systems. Memes in human culture are unlimited hereditary, phenotypic replicators, based on language. The typical path of evolution goes from limited to unlimited heredity, and from attractor-based to modular (digital) replicators.
NASA Astrophysics Data System (ADS)
Sakata, Katsumi; Ohyanagi, Hajime; Sato, Shinji; Nobori, Hiroya; Hayashi, Akiko; Ishii, Hideshi; Daub, Carsten O.; Kawai, Jun; Suzuki, Harukazu; Saito, Toshiyuki
2015-02-01
We present a system-wide transcriptional network structure that controls cell types in the context of expression pattern transitions that correspond to cell type transitions. Co-expression based analyses uncovered a system-wide, ladder-like transcription factor cluster structure composed of nearly 1,600 transcription factors in a human transcriptional network. Computer simulations based on a transcriptional regulatory model deduced from the system-wide, ladder-like transcription factor cluster structure reproduced expression pattern transitions when human THP-1 myelomonocytic leukaemia cells cease proliferation and differentiate under phorbol myristate acetate stimulation. The behaviour of MYC, a reprogramming Yamanaka factor that was suggested to be essential for induced pluripotent stem cells during dedifferentiation, could be interpreted based on the transcriptional regulation predicted by the system-wide, ladder-like transcription factor cluster structure. This study introduces a novel system-wide structure to transcriptional networks that provides new insights into network topology.
Analyzing neuronal networks using discrete-time dynamics
NASA Astrophysics Data System (ADS)
Ahn, Sungwoo; Smith, Brian H.; Borisyuk, Alla; Terman, David
2010-05-01
We develop mathematical techniques for analyzing detailed Hodgkin-Huxley like models for excitatory-inhibitory neuronal networks. Our strategy for studying a given network is to first reduce it to a discrete-time dynamical system. The discrete model is considerably easier to analyze, both mathematically and computationally, and parameters in the discrete model correspond directly to parameters in the original system of differential equations. While these networks arise in many important applications, a primary focus of this paper is to better understand mechanisms that underlie temporally dynamic responses in early processing of olfactory sensory information. The models presented here exhibit several properties that have been described for olfactory codes in an insect’s Antennal Lobe. These include transient patterns of synchronization and decorrelation of sensory inputs. By reducing the model to a discrete system, we are able to systematically study how properties of the dynamics, including the complex structure of the transients and attractors, depend on factors related to connectivity and the intrinsic and synaptic properties of cells within the network.
Toward a Unified Sub-symbolic Computational Theory of Cognition
Butz, Martin V.
2016-01-01
This paper proposes how various disciplinary theories of cognition may be combined into a unifying, sub-symbolic, computational theory of cognition. The following theories are considered for integration: psychological theories, including the theory of event coding, event segmentation theory, the theory of anticipatory behavioral control, and concept development; artificial intelligence and machine learning theories, including reinforcement learning and generative artificial neural networks; and theories from theoretical and computational neuroscience, including predictive coding and free energy-based inference. In the light of such a potential unification, it is discussed how abstract cognitive, conceptualized knowledge and understanding may be learned from actively gathered sensorimotor experiences. The unification rests on the free energy-based inference principle, which essentially implies that the brain builds a predictive, generative model of its environment. Neural activity-oriented inference causes the continuous adaptation of the currently active predictive encodings. Neural structure-oriented inference causes the longer term adaptation of the developing generative model as a whole. Finally, active inference strives for maintaining internal homeostasis, causing goal-directed motor behavior. To learn abstract, hierarchical encodings, however, it is proposed that free energy-based inference needs to be enhanced with structural priors, which bias cognitive development toward the formation of particular, behaviorally suitable encoding structures. As a result, it is hypothesized how abstract concepts can develop from, and thus how they are structured by and grounded in, sensorimotor experiences. Moreover, it is sketched out how symbol-like thought can be generated by a temporarily active set of predictive encodings, which constitute a distributed neural attractor in the form of an interactive free-energy minimum.
The activated, interactive network attractor essentially characterizes the semantics of a concept or a concept composition, such as an actual or imagined situation in our environment. Temporal successions of attractors then encode unfolding semantics, which may be generated by a behavioral or mental interaction with an actual or imagined situation in our environment. Implications, further predictions, possible verification, and falsifications, as well as potential enhancements into a fully spelled-out unified theory of cognition are discussed at the end of the paper. PMID:27445895
ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra
2011-01-01
Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. 
ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817
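The attractor-identification task the ADAM abstract describes can be made concrete with a brute-force sketch. ADAM itself converts models to polynomial dynamical systems and solves polynomial equations; the exhaustive state-space enumeration below, with a made-up 3-gene network, only illustrates what synchronous Boolean-network attractors are, and scales exponentially in the number of variables:

```python
from itertools import product

def find_attractors(update, n):
    """Exhaustively find the attractors of a synchronous Boolean network.

    update: function mapping a state tuple of n bits to the next state.
    Returns a set of attractors, each a frozenset of states on the cycle.
    """
    attractors = set()
    for start in product((0, 1), repeat=n):
        seen = {}
        state, step = start, 0
        while state not in seen:
            seen[state] = step
            state, step = update(state), step + 1
        # the trajectory is a "rho": states visited at or after the first
        # visit of the repeated state form the attractor cycle
        cycle_start = seen[state]
        cycle = frozenset(s for s, t in seen.items() if t >= cycle_start)
        attractors.add(cycle)
    return attractors

# Hypothetical toy network: x1' = x2, x2' = x1, x3' = x1 AND x2
def toy_update(s):
    x1, x2, x3 = s
    return (x2, x1, x1 & x2)

atts = find_attractors(toy_update, 3)
# yields two fixed points, (0,0,0) and (1,1,1), and one 2-cycle
```

Fixed points are attractors of cycle length one; the polynomial-algebra approach finds them by solving f(x) = x over GF(2) instead of enumerating all 2^n states.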
Short-Term Memory in Orthogonal Neural Networks
NASA Astrophysics Data System (ADS)
White, Olivia L.; Lee, Daniel D.; Sompolinsky, Haim
2004-04-01
We study the ability of linear recurrent networks obeying discrete time dynamics to store long temporal sequences that are retrievable from the instantaneous state of the network. We calculate this temporal memory capacity for both distributed shift register and random orthogonal connectivity matrices. We show that the memory capacity of these networks scales with system size.
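The distributed shift-register architecture analyzed in this abstract can be sketched minimally as a noiseless delay line, in which the recent input history (up to the network size) remains readable from the instantaneous network state; the paper's noise analysis and random orthogonal connectivity matrices are omitted here:

```python
def shift_register_step(state, inp):
    """One discrete-time step of an N-neuron delay line: each neuron copies
    its upstream neighbor, and neuron 0 reads the external input."""
    return [inp] + state[:-1]

def run(inputs, n):
    """Drive an n-neuron delay line with a sequence of inputs and return
    the final instantaneous state."""
    state = [0.0] * n
    for u in inputs:
        state = shift_register_step(state, u)
    return state

# Feed a 4-sample signal into a 6-neuron line, then two silent steps.
seq = [1.0, 2.0, 3.0, 4.0]
state = run(seq + [0.0, 0.0], 6)
# The instantaneous state still holds the whole sequence, time-reversed
# and shifted by the delay: state == [0.0, 0.0, 4.0, 3.0, 2.0, 1.0]
```

Once a sample is shifted past neuron n-1 it is lost, which is the finite temporal memory capacity that the paper quantifies as scaling with system size.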
Identification of a Functional Connectome for Long-Term Fear Memory in Mice
Wheeler, Anne L.; Teixeira, Cátia M.; Wang, Afra H.; Xiong, Xuejian; Kovacevic, Natasa; Lerch, Jason P.; McIntosh, Anthony R.; Parkinson, John; Frankland, Paul W.
2013-01-01
Long-term memories are thought to depend upon the coordinated activation of a broad network of cortical and subcortical brain regions. However, the distributed nature of this representation has made it challenging to define the neural elements of the memory trace, and lesion and electrophysiological approaches provide only a narrow window into what is appreciated to be a much more global network. Here we used a global mapping approach to identify networks of brain regions activated following recall of long-term fear memories in mice. Analysis of Fos expression across 84 brain regions allowed us to identify regions that were co-active following memory recall. These analyses revealed that the functional organization of long-term fear memories depends on memory age and is altered in mutant mice that exhibit premature forgetting. Most importantly, these analyses indicate that long-term memory recall engages a network that has a distinct thalamic-hippocampal-cortical signature. This network is concurrently integrated and segregated and therefore has small-world properties, and contains hub-like regions in the prefrontal cortex and thalamus that may play privileged roles in memory expression. PMID:23300432
Theory of correlation in a network with synaptic depression
NASA Astrophysics Data System (ADS)
Igarashi, Yasuhiko; Oizumi, Masafumi; Okada, Masato
2012-01-01
Synaptic depression affects not only the mean responses of neurons but also the correlation of response variability in neural populations. Although previous studies have constructed a theory of correlation in a spiking neuron model by using the mean-field theory framework, synaptic depression has not been taken into consideration. We expanded the previous theoretical framework in this study to spiking neuron models with short-term synaptic depression. On the basis of this theory we analytically calculated neural correlations in a ring attractor network with Mexican-hat-type connectivity, which was used as a model of the primary visual cortex. The results revealed that synaptic depression reduces neural correlation, which could be beneficial for sensory coding. Furthermore, our study opens the way for theoretical studies on the effect of interaction change on the linear response function in large stochastic networks.
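The Mexican-hat-type connectivity used in the ring attractor model above can be sketched as a weight matrix with local excitation and broader inhibition as a function of angular distance on the ring; the functional form (difference of Gaussians) is standard for such models, and all parameter values below are illustrative assumptions, not taken from the paper:

```python
import math

def mexican_hat_ring(n, a_exc=1.0, a_inh=0.5, s_exc=0.2, s_inh=0.6):
    """Weight matrix for a ring of n neurons: narrow excitation minus
    broader inhibition (difference of Gaussians) in angular distance.
    Parameter values are illustrative only."""
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # shortest angular distance between neurons i and j (radians)
            d = abs(i - j) * 2 * math.pi / n
            d = min(d, 2 * math.pi - d)
            w[i][j] = (a_exc * math.exp(-d**2 / (2 * s_exc**2))
                       - a_inh * math.exp(-d**2 / (2 * s_inh**2)))
    return w

W = mexican_hat_ring(64)
# Nearby neurons excite each other while distant ones inhibit:
# W[0][1] > 0 and W[0][32] < 0
```

This local-excitation/lateral-inhibition profile is what lets a localized activity bump form anywhere on the ring, giving the continuous attractor used to model orientation tuning in the primary visual cortex.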
BoolNet--an R package for generation, reconstruction and analysis of Boolean networks.
Müssel, Christoph; Hopfensitz, Martin; Kestler, Hans A
2010-05-15
As the study of information processing in living cells moves from individual pathways to complex regulatory networks, mathematical models and simulation become indispensable tools for analyzing the complex behavior of such networks and can provide deep insights into the functioning of cells. The dynamics of gene expression, for example, can be modeled with Boolean networks (BNs). These are mathematical models of low complexity, but have the advantage of being able to capture essential properties of gene-regulatory networks. However, current implementations of BNs only focus on different sub-aspects of this model and do not allow for a seamless integration into existing preprocessing pipelines. BoolNet efficiently integrates methods for synchronous, asynchronous and probabilistic BNs. This includes reconstructing networks from time series, generating random networks, robustness analysis via perturbation, Markov chain simulations, and identification and visualization of attractors. The package BoolNet is freely available from the R project at http://cran.r-project.org/ or http://www.informatik.uni-ulm.de/ni/mitarbeiter/HKestler/boolnet/ under Artistic License 2.0. hans.kestler@uni-ulm.de Supplementary data are available at Bioinformatics online.
NASA Technical Reports Server (NTRS)
Huang, S.; Ingber, D. E.
2000-01-01
Development of characteristic tissue patterns requires that individual cells be switched locally between different phenotypes or "fates": while one cell may proliferate, its neighbors may differentiate or die. Recent studies have revealed that local switching between these different gene programs is controlled through interplay between soluble growth factors, insoluble extracellular matrix molecules, and mechanical forces which produce cell shape distortion. Although the precise molecular basis remains unknown, shape-dependent control of cell growth and function appears to be mediated by tension-dependent changes in the actin cytoskeleton. However, the question remains: how can a generalized physical stimulus, such as cell distortion, activate the same set of genes and signaling proteins that are triggered by molecules which bind to specific cell surface receptors? In this article, we use computer simulations based on dynamic Boolean networks to show that the different cell fates that a particular cell can exhibit may represent a preprogrammed set of common end programs or "attractors" which self-organize within the cell's regulatory networks. In this type of dynamic network model of information processing, generalized stimuli (e.g., mechanical forces) and specific molecular cues elicit signals which follow different trajectories, but eventually converge onto one of a small set of common end programs (growth, quiescence, differentiation, apoptosis, etc.). In other words, if cells use this type of information processing system, then control of cell function would involve selection of preexisting (latent) behavioral modes of the cell, rather than instruction by specific binding molecules. Importantly, the results of the computer simulation closely mimic experimental data obtained with living endothelial cells.
The major implication of this finding is that current methods used for analysis of cell function that rely on characterization of linear signaling pathways or clusters of genes with common activity profiles may overlook the most critical features of cellular information processing which normally determine how signal specificity is established and maintained in living cells. Copyright 2000 Academic Press.
Social Transmission of False Memory in Small Groups and Large Networks.
Maswood, Raeya; Rajaram, Suparna
2018-05-21
Sharing information and memories is a key feature of social interactions, making social contexts important for developing and transmitting accurate memories and also false memories. False memory transmission can have wide-ranging effects, including shaping personal memories of individuals as well as collective memories of a network of people. This paper reviews a collection of key findings and explanations in cognitive research on the transmission of false memories in small groups. It also reviews the emerging experimental work on larger networks and collective false memories. Given the reconstructive nature of memory, the abundance of misinformation in everyday life, and the variety of social structures in which people interact, an understanding of transmission of false memories has both scientific and societal implications. © 2018 Cognitive Science Society, Inc.
Understanding the Role of Chaos Theory in Military Decision Making
2009-01-01
Because chaos is bounded, planners can create allowances for system noise. The existence of strange and normal chaotic attractors helps explain why system turbulence is uneven or concentrated around specific solution regions. Finally, the ... give a better understanding of the implications of chaos: sensitivity to initial conditions, strange attractors, and constants of motion. By showing the ...
Spreading activation in nonverbal memory networks.
Foster, Paul S; Wakefield, Candias; Pryjmak, Scott; Roosa, Katelyn M; Branch, Kaylei K; Drago, Valeria; Harrison, David W; Ruff, Ronald
2017-09-01
Theories of spreading activation primarily involve semantic memory networks. However, the existence of separate verbal and visuospatial memory networks suggests that spreading activation may also occur in visuospatial memory networks. The purpose of the present investigation was to explore this possibility. Specifically, this study sought to create and describe the design frequency corpus and to determine whether this measure of visuospatial spreading activation was related to right hemisphere functioning and to spreading activation in verbal memory networks. We used word frequencies taken from the Controlled Oral Word Association Test and design frequencies taken from the Ruff Figural Fluency Test as measures of verbal and visuospatial spreading activation, respectively. Average word and design frequencies were then correlated with measures of left and right cerebral functioning. The results indicated a significant relationship between performance on a test of right posterior functioning (Block Design) and design frequency, and a significant negative relationship between spreading activation in semantic memory networks and design frequency. These findings supported our hypotheses. Further research will need to be conducted to examine whether spreading activation exists in visuospatial memory networks, as well as the parameters that might modulate this spreading activation, such as the influence of neurotransmitters.
Park, Hae-Jeong; Chun, Ji-Won; Park, Bumhee; Park, Haeil; Kim, Joong Il; Lee, Jong Doo; Kim, Jae-Jin
2011-05-01
Although blind people depend heavily on working memory to manage daily life without visual information, it is not yet clear whether their working memory processing involves functional reorganization of the memory-related cortical network. To explore functional reorganization of the cortical network that supports various types of working memory processes in the early blind, we investigated activation differences between 2-back tasks and 0-back tasks using fMRI in 10 congenitally blind subjects and 10 sighted subjects. We used three types of stimulus sequences: words for a verbal task, pitches for a non-verbal task, and sound locations for a spatial task. Compared to the sighted, the blind showed additional activations in the occipital lobe for all types of stimulus sequences for working memory and more significant deactivation in the posterior cingulate cortex of the default mode network. The blind had increased effective connectivity from the default mode network to the left parieto-frontal network and from the occipital cortex to the right parieto-frontal network during the 2-back tasks relative to the 0-back tasks. These findings suggest not only cortical plasticity of the occipital cortex but also reorganization of the cortical network for the executive control of working memory.
Memory Dynamics in Cross-linked Actin Networks
NASA Astrophysics Data System (ADS)
Scheff, Danielle; Majumdar, Sayantan; Gardel, Margaret
Cells demonstrate the remarkable ability to adapt to mechanical stimuli through rearrangement of the actin cytoskeleton, a cross-linked network of actin filaments. In addition to its importance in cell biology, understanding this mechanical response provides strategies for creation of novel materials. A recent study has demonstrated that applied stress can encode mechanical memory in these networks through changes in network geometry, which gives rise to anisotropic shear response. Under later shear, the network is stiffer in the direction of the previously applied stress. However, the dynamics behind the encoding of this memory are unknown. To address this question, we explore the effect of varying either the rigidity of the cross-linkers or the length of actin filament on the time scales required for both memory encoding and over which it later decays. While previous experiments saw only a long-lived memory, initial results suggest another mechanism where memories relax relatively quickly. Overall, our study is crucial for understanding the process by which an external stress can impact network arrangement and thus the dynamics of memory formation.
Properties of a memory network in psychology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wedemann, Roseli S.; Donangelo, Raul; Carvalho, Luis A. V. de
We have previously described neurotic psychopathology and psychoanalytic working-through by an associative memory mechanism, based on a neural network model, where memory was modelled by a Boltzmann machine (BM). Since brain neural topology is selectively structured, we simulated known microscopic mechanisms that control synaptic properties, showing that the network self-organizes to a hierarchical, clustered structure. Here, we show some statistical mechanical properties of the complex networks which result from this self-organization. They indicate that a generalization of the BM may be necessary to model memory.
The Art of Grid Fields: Geometry of Neuronal Time
Shilnikov, Andrey L.; Maurer, Andrew Porter
2016-01-01
The discovery of grid cells in the entorhinal cortex has both elucidated our understanding of spatial representations in the brain and germinated a large number of theoretical models regarding the mechanisms of these cells’ striking spatial firing characteristics. These models cross multiple neurobiological levels that include intrinsic membrane resonance, dendritic integration, after-hyperpolarization characteristics and attractor dynamics. Despite the breadth of these models, parallels can, to our knowledge, be drawn between grid fields and other temporal dynamics observed in nature, much of which was described by Art Winfree and colleagues long before the initial description of grid fields. Through theoretical and mathematical investigations of oscillators, in a wide array of media far removed from the neurobiology of grid cells, Art Winfree provided a substantial body of research with significant and profound similarities. These theories offer specific inferences into the biological mechanisms and extraordinary resemblances across phenomena. Therefore, this manuscript provides a novel interpretation of the phenomenon of grid fields, from the perspective of coupled oscillators, postulating that grid fields are the spatial representation of phase resetting curves in the brain. In contrast to prior models of grid cells, the current manuscript provides a sketch by which a small network of neurons, each with oscillatory components, can operate to form grid cells, perhaps providing a unique hybrid between the competing attractor neural network and oscillatory interference models. The intention of this new interpretation of the data is to encourage novel testable hypotheses. PMID:27013981
Gyurko, David M; Soti, Csaba; Stetak, Attila; Csermely, Peter
2014-05-01
During the last decade, network approaches became a powerful tool to describe protein structure and dynamics. Here, we first describe the protein structure networks of molecular chaperones, then characterize chaperone-containing sub-networks of interactomes called chaperone-networks or chaperomes. We review the role of molecular chaperones in the short-term adaptation of cellular networks in response to stress, and in long-term adaptation, discussing their putative functions in the regulation of evolvability. We provide a general overview of possible network mechanisms of adaptation, learning and memory formation. We propose that changes of network rigidity play a key role in learning and memory formation processes. Flexible network topology provides a 'learning-competent' state. Here, networks may have much less pronounced modular boundaries than locally rigid, highly modular networks, where the learnt information has already been consolidated in a memory formation process. Since modular boundaries are efficient filters of information, in the 'learning-competent' state information filtering may be much weaker than after memory formation. This mechanism restricts high information transfer to the 'learning-competent' state. After memory formation, modular boundary-induced segregation and information filtering protect the stored information. The flexible networks of young organisms are generally in a 'learning-competent' state. On the contrary, the locally rigid networks of old organisms have lost their 'learning-competent' state, but store and protect their learnt information efficiently. We anticipate that the above mechanism may operate at the level of both protein-protein interaction and neuronal networks.
Paucity of attractors in nonlinear systems driven with complex signals.
Pethel, Shawn D; Blakely, Jonathan N
2011-04-01
We study the probability of multistability in a quadratic map driven repeatedly by a random signal of length N, where N is taken as a measure of the signal complexity. We first establish analytically that the number of coexisting attractors is bounded above by N. We then numerically estimate the probability p of a randomly chosen signal resulting in a multistable response as a function of N. Interestingly, with increasing drive signal complexity the system exhibits a paucity of attractors. That is, almost any drive signal beyond a certain complexity level will result in a single attractor response (p=0). This mechanism may play a role in allowing sensitive multistable systems to respond consistently to external influences.
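The driven-map experiment can be sketched numerically. In this minimal version the map parameter, drive amplitude, and sampling grid are our assumptions, not the paper's; with these mild values the driven map is uniformly contracting, so the sweep finds a single attractor, consistent with the "paucity" result:

```python
import random

# Quadratic map x -> a - x^2 + 0.1*d_n, driven by a length-N random signal
# repeated periodically. Multistability would mean that different initial
# conditions settle onto distinct asymptotic responses to the same drive.
def response(x0, drive, a=0.3, cycles=2000):
    x = x0
    for _ in range(cycles):
        for d in drive:
            x = a - x * x + 0.1 * d
    return round(x, 6)   # the state at a fixed drive phase labels the attractor

random.seed(1)
N = 4
drive = [random.uniform(-1, 1) for _ in range(N)]

# Sweep a grid of initial conditions and count distinct asymptotic responses;
# the paper's analytic result bounds this count above by N.
finals = {response(i / 50 - 1, drive) for i in range(101)}
print(len(finals))
```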
Stevens, Alexander A.; Tappon, Sarah C.; Garg, Arun; Fair, Damien A.
2012-01-01
Background Cognitive abilities, such as working memory, differ among people; however, individuals also vary in their own day-to-day cognitive performance. One potential source of cognitive variability may be fluctuations in the functional organization of neural systems. The degree to which the organization of these functional networks is optimized may relate to the effective cognitive functioning of the individual. Here we specifically examine how changes in the organization of large-scale networks measured via resting state functional connectivity MRI and graph theory track changes in working memory capacity. Methodology/Principal Findings Twenty-two participants performed a test of working memory capacity and then underwent resting-state fMRI. Seventeen subjects repeated the protocol three weeks later. We applied graph theoretic techniques to measure network organization on 34 brain regions of interest (ROI). Network modularity, which measures the level of integration and segregation across sub-networks, and small-worldness, which measures global network connection efficiency, both predicted individual differences in memory capacity; however, only modularity predicted intra-individual variation across the two sessions. Partial correlations controlling for the component of working memory that was stable across sessions revealed that modularity was almost entirely associated with the variability of working memory at each session. Analyses of specific sub-networks and individual circuits were unable to consistently account for working memory capacity variability. Conclusions/Significance The results suggest that the intrinsic functional organization of an a priori defined cognitive control network measured at rest provides substantial information about actual cognitive performance. 
The association of network modularity to the variability in an individual's working memory capacity suggests that the organization of this network into high connectivity within modules and sparse connections between modules may reflect effective signaling across brain regions, perhaps through the modulation of signal or the suppression of the propagation of noise. PMID:22276205
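The modularity measure referred to above can be made concrete with a small sketch (the toy graph and the two-module partition are our own, not from the study): Q sums, over modules, the fraction of edges inside the module minus the squared fraction of edge endpoints attached to it.

```python
from collections import defaultdict

def modularity(edges, partition):
    """Newman modularity Q of an undirected graph under a node partition."""
    m = len(edges)
    e = defaultdict(float)   # e[c]: fraction of edges with both ends in module c
    a = defaultdict(float)   # a[c]: fraction of edge endpoints attached to c
    for u, v in edges:
        cu, cv = partition[u], partition[v]
        a[cu] += 1 / (2 * m)
        a[cv] += 1 / (2 * m)
        if cu == cv:
            e[cu] += 1 / m
    return sum(e[c] - a[c] ** 2 for c in a)

# Two 4-node cliques joined by a single bridge edge: dense within-module
# connectivity and sparse between-module connectivity give a high Q.
clique = lambda ns: [(u, v) for i, u in enumerate(ns) for v in ns[i + 1:]]
edges = clique([0, 1, 2, 3]) + clique([4, 5, 6, 7]) + [(3, 4)]
part = {i: 0 if i < 4 else 1 for i in range(8)}
q = modularity(edges, part)
print(q)  # 12/13 - 1/2: high Q, reflecting dense modules and a sparse bridge
```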
Electronic implementation of associative memory based on neural network models
NASA Technical Reports Server (NTRS)
Moopenn, A.; Lambe, John; Thakoor, A. P.
1987-01-01
An electronic embodiment of a neural network based associative memory in the form of a binary connection matrix is described. The nature of false memory errors, their effect on the information storage capacity of binary connection matrix memories, and a novel technique to eliminate such errors with the help of asymmetrical extra connections are discussed. The stability of the matrix memory system incorporating a unique local inhibition scheme is analyzed in terms of local minimization of an energy function. The memory's stability, dynamic behavior, and recall capability are investigated using a 32-'neuron' electronic neural network memory with a 1024-programmable binary connection matrix.
NASA Astrophysics Data System (ADS)
Ohlson Timoudas, Thomas
2017-12-01
Let Φ be a quasi-periodically forced quadratic map, where the rotation constant ω is a Diophantine irrational. A strange non-chaotic attractor (SNA) is an invariant (under Φ) attracting graph of a nowhere continuous measurable function ψ from the circle T to [0, 1]. This paper investigates how a smooth attractor degenerates into a strange one as a parameter is varied.
Timing of transients: quantifying reaching times and transient behavior in complex systems
NASA Astrophysics Data System (ADS)
Kittel, Tim; Heitzig, Jobst; Webster, Kevin; Kurths, Jürgen
2017-08-01
In dynamical systems, one may ask how long it takes for a trajectory to reach the attractor, i.e. how long it spends in the transient phase. Although for a single trajectory the mathematically precise answer may be infinity, it still makes sense to compare different trajectories and quantify which of them approaches the attractor earlier. In this article, we categorize several problems of quantifying such transient times. To treat them, we propose two metrics, area under distance curve and regularized reaching time, that capture two complementary aspects of transient dynamics. The first, area under distance curve, is the distance of the trajectory to the attractor integrated over time. It measures which trajectories are ‘reluctant’, i.e. stay distant from the attractor for long, or ‘eager’ to approach it right away. Regularized reaching time, on the other hand, quantifies the additional time (positive or negative) that a trajectory starting at a chosen initial condition needs to approach the attractor as compared to some reference trajectory. A positive or negative value means that it approaches the attractor by this much ‘earlier’ or ‘later’ than the reference, respectively. We demonstrate their substantial potential for application with multiple paradigmatic examples, uncovering new features.
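Both metrics can be sketched for the linear system dx/dt = -x, whose attractor is x* = 0 (explicit Euler discretization; the step size, horizon, and epsilon-neighbourhood are our choices, not the paper's):

```python
import math

def trajectory(x0, dt=1e-3, t_max=20.0):
    """Euler-integrate dx/dt = -x from x0."""
    xs, x = [], x0
    for _ in range(int(t_max / dt)):
        xs.append(x)
        x += -x * dt
    return xs

def area_under_distance(xs, dt=1e-3):
    """Distance to the attractor integrated over time: large for 'reluctant'
    trajectories, small for 'eager' ones."""
    return sum(abs(x) for x in xs) * dt

def reaching_time(xs, eps=1e-2, dt=1e-3):
    """First time the trajectory enters the eps-neighbourhood of the attractor."""
    for i, x in enumerate(xs):
        if abs(x) < eps:
            return i * dt
    return math.inf

# Regularized reaching time: extra time needed relative to a reference trajectory.
x_ref, x0 = 1.0, 2.0
regularized = reaching_time(trajectory(x0)) - reaching_time(trajectory(x_ref))
# For dx/dt = -x both have closed forms: the area equals |x0|, and the
# regularized reaching time equals ln(x0 / x_ref), independent of eps.
print(area_under_distance(trajectory(2.0)), regularized)
```

The eps-independence of the regularized quantity is exactly what makes it well defined even though the raw reaching time diverges as eps shrinks.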
NASA Astrophysics Data System (ADS)
Yu, Yue; Zhang, Zhengdi; Han, Xiujing
2018-03-01
In this work, we aim to demonstrate the novel routes to periodic and chaotic bursting, i.e., the different bursting dynamics via delayed pitchfork bifurcations around stable attractors, in the classical controlled Lü system. First, by computing the corresponding characteristic polynomial, we determine where some critical values about bifurcation behaviors appear in the Lü system. Moreover, the transition mechanism among different stable attractors has been introduced including homoclinic-type connections or chaotic attractors. Secondly, taking advantage of the above analytical results, we carry out a study of the mechanism for bursting dynamics in the Lü system with slowly periodic variation of certain control parameter. A distinct delayed supercritical pitchfork bifurcation behavior can be discussed when the control item passes through bifurcation points periodically. This delayed dynamical behavior may terminate at different parameter areas, which leads to different spiking modes around different stable attractors (equilibriums, limit cycles, or chaotic attractors). In particular, the chaotic attractor may appear by Shilnikov connections or chaos boundary crisis, which leads to the occurrence of impressive chaotic bursting oscillations. Our findings enrich the study of bursting dynamics and deepen the understanding of some similar sorts of delayed bursting phenomena. Finally, some numerical simulations are included to illustrate the validity of our study.
Wang, Kang; Gu, Huaxi; Yang, Yintang; Wang, Kun
2015-08-10
With the number of cores increasing, there is an emerging need for a high-bandwidth low-latency interconnection network, serving core-to-memory communication. In this paper, aiming at the goal of simultaneous access to multi-rank memory, we propose an optical interconnection network for core-to-memory communication. In the proposed network, the wavelength usage is delicately arranged so that cores can communicate with different ranks at the same time and broadcast for flow control can be achieved. A distributed memory controller architecture that works in a pipeline mode is also designed for efficient optical communication and transaction address processes. The scaling method and wavelength assignment for the proposed network are investigated. Compared with traditional electronic bus-based core-to-memory communication, the simulation results based on the PARSEC benchmark show that the bandwidth enhancement and latency reduction are apparent.
[Extraction and recognition of attractors in three-dimensional Lorenz plot].
Hu, Min; Jang, Chengfan; Wang, Suxia
2018-02-01
The Lorenz plot (LP) method, which gives a global view of long-time electrocardiogram signals, is an efficient and simple visualization tool for analyzing cardiac arrhythmias, and the morphologies and positions of the extracted attractors may reveal the underlying mechanisms of the onset and termination of arrhythmias. However, automatic diagnosis has so far been impossible for lack of a method for extracting the attractors. We present here a methodology for attractor extraction and recognition based upon homogeneous statistical properties of the location parameters of scatter points in the three-dimensional LP (3DLP), which is constructed from three successive RR intervals as the X, Y and Z axes of a Cartesian coordinate system. Validation experiments were performed on a group of RR-interval time series and tag data with frequent unifocal premature complexes exported from a 24-hour Holter system. The results showed that this method is highly effective not only for the extraction of attractors, but also for their automatic recognition by location parameters such as the azimuth of the peak-frequency point (APF) of eccentric attractors after stereographic projection of the 3DLP along the space diagonal. Besides, the APF is a powerful index for the differential diagnosis of atrial and ventricular extrasystoles. Additional experiments proved that this method is also applicable to several other arrhythmias. Moreover, there are extremely close relationships between the 3DLP and two-dimensional LPs, which indicates that any conventional achievement of LPs could be transplanted into the 3DLP. Integrating this method into conventional long-time electrocardiogram monitoring and analysis systems would have broad application prospects.
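The geometric construction can be sketched as follows; the projection basis, bin width, eccentricity threshold, and toy RR series are our assumptions, not the paper's. Each three successive RR intervals (in ms) give one 3DLP point, and azimuths are measured in the plane perpendicular to the space diagonal (1, 1, 1):

```python
import math
from collections import Counter

def azimuths(rr, min_radius=50.0):
    """Azimuth (degrees) of each 3DLP point after projection along the space
    diagonal; points too close to the diagonal (regular beats) are dropped."""
    u = (1 / math.sqrt(2), -1 / math.sqrt(2), 0.0)               # orthonormal basis
    v = (1 / math.sqrt(6), 1 / math.sqrt(6), -2 / math.sqrt(6))  # of the plane
    out = []
    for x, y, z in zip(rr, rr[1:], rr[2:]):
        a = x * u[0] + y * u[1] + z * u[2]
        b = x * v[0] + y * v[1] + z * v[2]
        if math.hypot(a, b) > min_radius:      # keep only eccentric points
            out.append(math.degrees(math.atan2(b, a)) % 360)
    return out

def apf(rr, bin_deg=10):
    """Azimuth of the most populated angular sector (peak-frequency azimuth)."""
    bins = Counter(int(az // bin_deg) for az in azimuths(rr))
    return bins.most_common(1)[0][0] * bin_deg

# Regular 800 ms beats interrupted by premature beats (400 ms) with
# compensatory pauses (1200 ms) populate eccentric attractors off the diagonal.
rr = [800, 800, 800, 400, 1200, 800, 800, 400, 1200, 800, 800, 800]
print(sorted(set(round(az) for az in azimuths(rr))), apf(rr))
```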
Associative memory in phasing neuron networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nair, Niketh S; Bochove, Erik J.; Braiman, Yehuda
2014-01-01
We studied pattern formation in a network of coupled Hindmarsh-Rose model neurons and introduced a new model for associative memory retrieval using networks of Kuramoto oscillators. Hindmarsh-Rose neural networks can exhibit a rich set of collective dynamics that can be controlled by their connectivity. Specifically, we showed an instance of Hebb's rule in which spiking was correlated with network topology. Based on this, we presented a simple model of associative memory in coupled phase oscillators.
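A minimal sketch of phase-oscillator associative memory in this spirit (the coupling rule, sizes, and noise level are our choices, not the paper's): a binary pattern is stored as a phase configuration (0 or pi) through Hebbian couplings K_ij = xi_i * xi_j / n, and a corrupted version of the pattern relaxes back toward the stored one.

```python
import math, random

random.seed(2)
n = 16
xi = [random.choice([-1, 1]) for _ in range(n)]            # stored binary pattern
K = [[xi[i] * xi[j] / n for j in range(n)] for i in range(n)]  # Hebbian couplings

# Cue: the pattern's phase configuration with small noise and three flipped bits.
theta = [(0.0 if s == 1 else math.pi) + random.gauss(0, 0.1) for s in xi]
for i in random.sample(range(n), 3):
    theta[i] += math.pi

# Integrate dtheta_i/dt = sum_j K_ij sin(theta_j - theta_i) with explicit Euler.
dt = 0.05
for _ in range(2000):
    d = [sum(K[i][j] * math.sin(theta[j] - theta[i]) for j in range(n))
         for i in range(n)]
    theta = [t + dt * g for t, g in zip(theta, d)]

# Decode phases back to bits and measure the overlap with the stored pattern.
recalled = [1 if math.cos(t) > 0 else -1 for t in theta]
overlap = sum(p * r for p, r in zip(xi, recalled)) / n
print(overlap)
```

An overlap of 1.0 means perfect retrieval (up to the global phase, which the sine coupling leaves free).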
Spreading activation in emotional memory networks and the cumulative effects of somatic markers.
Foster, Paul S; Hubbard, Tyler; Campbell, Ransom W; Poole, Jonathan; Pridmore, Michael; Bell, Chris; Harrison, David W
2017-06-01
The theory of spreading activation proposes that the activation of a semantic memory node may spread along bidirectional associative links to other related nodes. Although this theory was originally proposed to explain semantic memory networks, a similar process may be said to exist with episodic or emotional memory networks. The Somatic Marker hypothesis proposes that remembering an emotional memory activates the somatic sensations associated with the memory. An integration of these two models suggests that as spreading activation in emotional memory networks increases, a greater number of associated somatic markers would become activated. This process would then result in greater changes in physiological functioning. We sought to investigate this possibility by having subjects recall words associated with sad and happy memories, in addition to a neutral condition. The average ages of the memories and the number of word memories recalled were then correlated with measures of heart rate and skin conductance. The results indicated significant positive correlations between the number of happy word memories and heart rate (r = .384, p = .022) and between the average ages of the sad memories and skin conductance (r = .556, p = .001). Unexpectedly, a significant negative relationship was found between the number of happy word memories and skin conductance (r = -.373, p = .025). The results provide partial support for our hypothesis, indicating that increasing spreading activation in emotional memory networks activates an increasing number of somatic markers and this is then reflected in greater physiological activity at the time of recalling the memories.
[Cognitive advantages of the third age: a neural network model of brain aging].
Karpenko, M P; Kachalova, L M; Budilova, E V; Terekhin, A T
2009-01-01
We consider a neural network model of age-related cognitive changes in the aging brain, based on a Hopfield network with a sigmoid neuron activation function. Age enters the activation function as a parameter in the denominator of the exponential rate, which makes it possible to take into account the weakening of interneuronal links actually observed in the aging brain. Analysis of the properties of the Lyapunov function associated with the network shows that, as the age parameter increases, its relief becomes smoother and the number of local minima (network attractors) decreases. As a result, the network gets stuck less often in the nearest local minima of the Lyapunov function and reaches a global minimum corresponding to the most effective solution of the cognitive task. It is reasonable to assume that similar changes really occur in the aging brain. Phenomenologically, these changes can manifest as the emergence in aged people of a cognitive quality such as wisdom, i.e., the ability to find optimal decisions in difficult, controversial situations, to abstract from secondary aspects, and to see the problem as a whole.
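The attractor-merging effect can be sketched numerically (the network size, stored patterns, and the way age enters the activation are our assumptions): a continuous Hopfield network with activation tanh(x / age), so larger 'age' weakens the effective interneuronal gain, and the number of distinct attractors reached from random initial states drops as age grows.

```python
import math, random

def settle(W, age, x, steps=400, dt=0.1):
    """Euler-integrate dx_i/dt = -x_i + sum_j W_ij tanh(x_j / age) to a
    fixed point and return it, rounded so equal attractors compare equal."""
    n = len(x)
    for _ in range(steps):
        s = [math.tanh(v / age) for v in x]
        x = [v + dt * (-v + sum(W[i][j] * s[j] for j in range(n)))
             for i, v in enumerate(x)]
    return tuple(round(v, 2) for v in x)

random.seed(0)
n = 8
patterns = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(3)]
W = [[0.0 if i == j else sum(p[i] * p[j] for p in patterns) / n
      for j in range(n)] for i in range(n)]   # Hebbian weights, zero diagonal

counts = {}
for age in (0.5, 10.0):
    finals = {settle(W, age, [random.uniform(-1, 1) for _ in range(n)])
              for _ in range(80)}
    counts[age] = len(finals)    # number of distinct attractors reached
print(counts)
```

At a small age value many attractors coexist; at a large one the effective gain falls below threshold and only the single global minimum survives.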
Ergodic properties of spiking neuronal networks with delayed interactions
NASA Astrophysics Data System (ADS)
Palmigiano, Agostina; Wolf, Fred
The dynamical stability of neuronal networks, and the possibility of chaotic dynamics in the brain, pose profound questions about the mechanisms underlying perception. Here we advance the tractability of large neuronal networks of exactly solvable neuronal models with delayed pulse-coupled interactions. Pulse-coupled delayed systems with an infinite-dimensional phase space can be studied in equivalent systems of fixed and finite degrees of freedom by introducing a delayer variable for each neuron. A Jacobian of the equivalent system can be obtained analytically and evaluated numerically. We find that, depending on the action potential onset rapidness and the level of heterogeneities, the asynchronous irregular regime characteristic of balanced state networks loses stability with increasing delays to either a slow synchronous irregular or a fast synchronous irregular state. In networks of neurons with slow action potential onset, the transition to collective oscillations leads to an increase of the exponential rate of divergence of nearby trajectories and of the entropy production rate of the chaotic dynamics. The attractor dimension, instead of increasing linearly with increasing delay as reported in many other studies, decreases until eventually the network reaches full synchrony.
Jaeger, Johannes; Crombach, Anton
2012-01-01
We propose an approach to evolutionary systems biology which is based on reverse engineering of gene regulatory networks and in silico evolutionary simulations. We infer regulatory parameters for gene networks by fitting computational models to quantitative expression data. This allows us to characterize the regulatory structure and dynamical repertoire of evolving gene regulatory networks with a reasonable amount of experimental and computational effort. We use the resulting network models to identify those regulatory interactions that are conserved, and those that have diverged between different species. Moreover, we use the models obtained by data fitting as starting points for simulations of evolutionary transitions between species. These simulations enable us to investigate whether such transitions are random, or whether they show stereotypical series of regulatory changes which depend on the structure and dynamical repertoire of an evolving network. Finally, we present a case study-the gap gene network in dipterans (flies, midges, and mosquitoes)-to illustrate the practical application of the proposed methodology, and to highlight the kind of biological insights that can be gained by this approach.
Visuospatial working memory in very preterm and term born children--impact of age and performance.
Mürner-Lavanchy, I; Ritter, B C; Spencer-Smith, M M; Perrig, W J; Schroth, G; Steinlin, M; Everts, R
2014-07-01
Working memory is crucial for meeting the challenges of daily life and performing academic tasks, such as reading or arithmetic. Very preterm born children are at risk of low working memory capacity. The aim of this study was to examine the visuospatial working memory network of school-aged preterm children and to determine the effect of age and performance on the neural working memory network. Working memory was assessed in 41 very preterm born children and 36 term born controls (aged 7-12 years) using functional magnetic resonance imaging (fMRI) and neuropsychological assessment. While preterm children and controls showed equal working memory performance, preterm children showed less involvement of the right middle frontal gyrus, but higher fMRI activation in superior frontal regions than controls. The younger and low-performing preterm children presented an atypical working memory network, whereas the older high-performing preterm children recruited a working memory network similar to the controls. Results suggest that younger and low-performing preterm children show signs of less neural efficiency in frontal brain areas. With increasing age and performance, compensational mechanisms seem to occur, so that in preterm children the typical visuospatial working memory network is established by the age of 12 years. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate
2015-01-01
Walking animals, like insects, can effectively perform complex behaviors with little neural computing. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. 
Consequently, it can successfully explore and navigate in complex environments. We first tested our approach in a physical simulation environment and then applied it to our real biomechanical walking robot AMOSII, which has 19 DOFs, to adaptively avoid obstacles and navigate in the real world.
`Unlearning' has a stabilizing effect in collective memories
NASA Astrophysics Data System (ADS)
Hopfield, J. J.; Feinstein, D. I.; Palmer, R. G.
1983-07-01
Crick and Mitchison [1] have presented a hypothesis for the functional role of dream sleep involving an `unlearning' process. We have independently carried out mathematical and computer modelling of learning and `unlearning' in a collective neural network of 30-1,000 neurones. The model network has a content-addressable memory or `associative memory' which allows it to learn and store many memories. A particular memory can be evoked in its entirety when the network is stimulated by any adequate-sized subpart of the information of that memory [2]. But different memories of the same size are not equally easy to recall. Also, when memories are learned, spurious memories are also created and can be evoked. Applying an `unlearning' process, similar to the learning processes but with a reversed sign and starting from a noise input, enhances the performance of the network in accessing real memories and in minimizing spurious ones. Although our model was not motivated by higher nervous function, our system displays behaviours which are strikingly parallel to those needed for the hypothesized role of `unlearning' in rapid eye movement (REM) sleep.
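The learning/unlearning procedure described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: standard Hebbian outer-product weights, asynchronous sign updates, and network size, pattern count, unlearning rate, and trial count chosen only for demonstration (none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 4                               # neurons, stored patterns (illustrative)

# Hebbian "learning": W = (1/N) sum_p x_p x_p^T, with zero diagonal.
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def settle(W, s, sweeps=50):
    """Asynchronous +/-1 updates until the state stops changing."""
    s = s.copy()
    for _ in range(sweeps):
        prev = s.copy()
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
        if np.array_equal(s, prev):
            break
    return s

def unlearn(W, eps=0.01, trials=20):
    """'Unlearning': settle from a noise input, then apply the Hebbian
    rule with a reversed sign, as described in the abstract."""
    W = W.copy()
    for _ in range(trials):
        s = settle(W, rng.choice([-1, 1], size=N))
        W -= eps * np.outer(s, s) / N
        np.fill_diagonal(W, 0.0)
    return W

W2 = unlearn(W)
cue = patterns[0].copy()
cue[:8] *= -1                              # corrupt 8 of 64 bits of a stored memory
recalled = settle(W2, cue)                 # the stored pattern should be recovered
```

After unlearning, stored memories are still recalled from partial cues; the point of the procedure is that spurious fixed points (the states most often reached from noise) are weakened preferentially.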
Dopamine D1 signaling organizes network dynamics underlying working memory
Roffman, Joshua L.; Tanner, Alexandra S.; Eryilmaz, Hamdi; Rodriguez-Thompson, Anais; Silverstein, Noah J.; Ho, New Fei; Nitenson, Adam Z.; Chonde, Daniel B.; Greve, Douglas N.; Abi-Dargham, Anissa; Buckner, Randy L.; Manoach, Dara S.; Rosen, Bruce R.; Hooker, Jacob M.; Catana, Ciprian
2016-01-01
Local prefrontal dopamine signaling supports working memory by tuning pyramidal neurons to task-relevant stimuli. Enabled by simultaneous positron emission tomography–magnetic resonance imaging (PET-MRI), we determined whether neuromodulatory effects of dopamine scale to the level of cortical networks and coordinate their interplay during working memory. Among network territories, mean cortical D1 receptor densities differed substantially but were strongly interrelated, suggesting cross-network regulation. Indeed, mean cortical D1 density predicted working memory–emergent decoupling of the frontoparietal and default networks, which respectively manage task-related and internal stimuli. In contrast, striatal D1 predicted opposing effects within these two networks but no between-network effects. These findings specifically link cortical dopamine signaling to network crosstalk that redirects cognitive resources to working memory, echoing neuromodulatory effects of D1 signaling on the level of cortical microcircuits. PMID:27386561
Eyre, Harris A; Acevedo, Bianca; Yang, Hongyu; Siddarth, Prabha; Van Dyk, Kathleen; Ercoli, Linda; Leaver, Amber M; Cyr, Natalie St; Narr, Katherine; Baune, Bernhard T; Khalsa, Dharma S; Lavretsky, Helen
2016-01-01
No study has explored the effect of yoga on cognitive decline and resting-state functional connectivity. This study explored the relationship between performance on memory tests and resting-state functional connectivity before and after a yoga intervention versus active control for subjects with mild cognitive impairment (MCI). Participants (≥55 y) with MCI were randomized to receive a yoga intervention or active "gold-standard" control (i.e., memory enhancement training (MET)) for 12 weeks. Resting-state functional magnetic resonance imaging was used to map correlations between brain networks and memory performance changes over time. Default mode networks (DMN), language and superior parietal networks were chosen as networks of interest to analyze the association with changes in verbal and visuospatial memory performance. Fourteen yoga and 11 MET participants completed the study. The yoga group demonstrated a statistically significant improvement in depression and visuospatial memory. We observed improved verbal memory performance correlated with increased connectivity between the DMN and frontal medial cortex, pregenual anterior cingulate cortex, right middle frontal cortex, posterior cingulate cortex, and left lateral occipital cortex. Improved verbal memory performance positively correlated with increased connectivity between the language processing network and the left inferior frontal gyrus. Improved visuospatial memory performance correlated inversely with connectivity between the superior parietal network and the medial parietal cortex. Yoga may be as effective as MET in improving functional connectivity in relation to verbal memory performance. These findings should be confirmed in larger prospective studies.
A revised limbic system model for memory, emotion and behaviour.
Catani, Marco; Dell'acqua, Flavio; Thiebaut de Schotten, Michel
2013-09-01
Emotion, memories and behaviour emerge from the coordinated activities of regions connected by the limbic system. Here, we propose an update of the limbic model based on the seminal work of Papez, Yakovlev and MacLean. In the revised model we identify three distinct but partially overlapping networks: (i) the hippocampal-diencephalic and parahippocampal-retrosplenial network, dedicated to memory and spatial orientation; (ii) the temporo-amygdala-orbitofrontal network, for the integration of visceral sensation and emotion with semantic memory and behaviour; and (iii) the default-mode network, involved in autobiographical memories and introspective self-directed thinking. The three networks share cortical nodes that are emerging as principal hubs in connectomic analysis. This revised network model of the limbic system reconciles recent functional imaging findings with anatomical accounts of clinical disorders commonly associated with limbic pathology. Copyright © 2013 Elsevier Ltd. All rights reserved.
Recurrent Network models of sequence generation and memory
Rajan, Kanaka; Harvey, Christopher D; Tank, David W
2016-01-01
Sequential activation of neurons is a common feature of network activity during a variety of behaviors, including working memory and decision making. Previous network models for sequences and memory emphasized specialized architectures in which a principled mechanism is pre-wired into their connectivity. Here, we demonstrate that starting from random connectivity and modifying a small fraction of connections, a largely disordered recurrent network can produce sequences and implement working memory efficiently. We use this process, called Partial In-Network training (PINning), to model and match cellular-resolution imaging data from the posterior parietal cortex during a virtual memory-guided two-alternative forced choice task [Harvey, Coen and Tank, 2012]. Analysis of the connectivity reveals that sequences propagate by the cooperation between recurrent synaptic interactions and external inputs, rather than through feedforward or asymmetric connections. Together, our results suggest that neural sequences may emerge through learning from largely unstructured network architectures. PMID:26971945
Coexisting multiple attractors and riddled basins of a memristive system.
Wang, Guangyi; Yuan, Fang; Chen, Guanrong; Zhang, Yu
2018-01-01
In this paper, a new memristor-based chaotic system is designed, analyzed, and implemented. Multistability, multiple attractors, and complex riddled basins are observed from the system, which are investigated along with other dynamical behaviors such as equilibrium points and their stabilities, symmetrical bifurcation diagrams, and sustained chaotic states. With different sets of system parameters, the system can also generate various multi-scroll attractors. Finally, the system is realized by experimental circuits.
A Search for Strange Attractors in the Saturation of Middle Atmosphere Gravity Waves
1990-09-01
Fraser, A. M., and H. L. Swinney, 1986: Independent coordinates for strange attractors from mutual information. Phys. Rev. A, 33, 1134-1140. ...vectors implies that the two are linearly independent. However, data characterized by a strange attractor are usually highly nonlinear, thus making ...noise in this data set. The degree of autocorrelation and the lack of general independence as determined from the mutual information also reduces the
DOE Office of Scientific and Technical Information (OSTI.GOV)
Márquez, Bicky A., E-mail: bmarquez@ivic.gob.ve; Suárez-Vargas, José J., E-mail: jjsuarez@ivic.gob.ve; Ramírez, Javier A.
2014-09-01
Controlled transitions between a hierarchy of n-scroll attractors are investigated in a nonlinear optoelectronic oscillator. Using the system's feedback strength as a control parameter, the transition from Van der Pol-like attractors to 6-scroll attractors is demonstrated experimentally; in general, the scheme can produce an arbitrary number of scrolls. The complexity of every state is characterized by Lyapunov exponents and autocorrelation coefficients.
Dynamical behavior for the three-dimensional generalized Hasegawa-Mima equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Ruifeng; Guo Boling; Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing 100088
2007-01-15
The long-time behavior of solutions of the three-dimensional generalized Hasegawa-Mima [Phys. Fluids 21, 87 (1978)] equations with a dissipation term is considered. The global attractor problem for the three-dimensional generalized Hasegawa-Mima equations with periodic boundary conditions is studied. Applying the method of uniform a priori estimates, the existence of the global attractor is proven and its dimension is estimated.
Renormalization group independence of Cosmological Attractors
NASA Astrophysics Data System (ADS)
Fumagalli, Jacopo
2017-06-01
The large class of inflationary models known as α- and ξ-attractors gives identical cosmological predictions at tree level (at leading order in inverse powers of the number of e-folds). Working with the renormalization group improved action, we show that these predictions are robust under quantum corrections. This means that for all the models considered the inflationary parameters (ns, r) are (nearly) independent of the renormalization group flow. The result follows once the field dependence of the renormalization scale, fixed by demanding that the leading log correction vanish, satisfies a quite generic condition. In Higgs inflation (a particular ξ-attractor) this is indeed the case; in the more general attractor models it is still ensured by the renormalizability of the theory in the effective field theory sense.
NASA Astrophysics Data System (ADS)
Lien, C.-H.; Vaidyanathan, S.; Sambas, A.; Sukono; Mamat, M.; Sanjaya, W. S. M.; Subiyanto
2018-03-01
A new 3-D two-scroll chaotic attractor with three quadratic nonlinearities is investigated in this paper. First, the qualitative and dynamical properties of the new two-scroll chaotic system are described in terms of phase portraits, equilibrium points, Lyapunov exponents, Kaplan-Yorke dimension, dissipativity, etc. We show that the new two-scroll dissipative chaotic system has three unstable equilibrium points. As an engineering application, global chaos control of the new two-scroll chaotic system with unknown system parameters is designed via adaptive feedback control and Lyapunov stability theory. Furthermore, an electronic circuit realization of the new chaotic attractor is presented in detail to confirm the feasibility of the theoretical two-scroll chaotic attractor model.
A study of roll attractor and wing rock of delta wings at high angles of attack
NASA Technical Reports Server (NTRS)
Niranjana, T.; Rao, D. M.; Pamadi, Bandu N.
1993-01-01
Wing rock is a high-angle-of-attack dynamic phenomenon of limit-cycle motion, predominantly in roll, and is one of the limitations on the combat effectiveness of fighter aircraft. A roll attractor is the steady-state (equilibrium) trim angle (phi(sub trim)) attained by a free-to-roll model held at some angle of attack and released from rest at a given initial roll (bank) angle (phi(sub O)). Multiple roll attractors are attained at different trim angles depending on the initial roll angle. The test facility (Vigyan's low-speed wind tunnel) and the experimental work are presented here, along with mathematical modelling of the roll attractor phenomenon and an analysis comparing predictions with experimental data.
Burianová, Hana; Ciaramelli, Elisa; Grady, Cheryl L; Moscovitch, Morris
2012-11-15
The objective of this study was to examine the functional connectivity of brain regions active during cued and uncued recognition memory to test the idea that distinct networks would underlie these memory processes, as predicted by the attention-to-memory (AtoM) hypothesis. The AtoM hypothesis suggests that dorsal parietal cortex (DPC) allocates effortful top-down attention to memory retrieval during cued retrieval, whereas ventral parietal cortex (VPC) mediates spontaneous bottom-up capture of attention by memory during uncued retrieval. To identify networks associated with these two processes, we conducted a functional connectivity analysis of a left DPC and a left VPC region, both identified by a previous analysis of task-related regional activations. We hypothesized that the two parietal regions would be functionally connected with distinct neural networks, reflecting their engagement in the differential mnemonic processes. We found two spatially dissociated networks that overlapped only in the precuneus. During cued trials, DPC was functionally connected with dorsal attention areas, including the superior parietal lobules, right precuneus, and premotor cortex, as well as relevant memory areas, such as the left hippocampus and the middle frontal gyri. During uncued trials, VPC was functionally connected with ventral attention areas, including the supramarginal gyrus, cuneus, and right fusiform gyrus, as well as the parahippocampal gyrus. In addition, activity in the DPC network was associated with faster response times for cued retrieval. This is the first study to show a dissociation of the functional connectivity of posterior parietal regions during episodic memory retrieval, characterized by a top-down AtoM network involving DPC and a bottom-up AtoM network involving VPC. Copyright © 2012 Elsevier Inc. All rights reserved.
The Future of Memory: Remembering, Imagining, and the Brain
Schacter, Daniel L.; Addis, Donna Rose; Hassabis, Demis; Martin, Victoria C.; Spreng, R. Nathan; Szpunar, Karl K.
2013-01-01
During the past few years, there has been a dramatic increase in research examining the role of memory in imagination and future thinking. This work has revealed striking similarities between remembering the past and imagining or simulating the future, including the finding that a common brain network underlies both memory and imagination. Here we discuss a number of key points that have emerged during recent years, focusing in particular on the importance of distinguishing between temporal and non-temporal factors in analyses of memory and imagination, the nature of differences between remembering the past and imagining the future, the identification of component processes that comprise the default network supporting memory-based simulations, and the finding that this network can couple flexibly with other networks to support complex goal-directed simulations. This growing area of research has broadened our conception of memory by highlighting the many ways in which memory supports adaptive functioning. PMID:23177955
Analytical estimation of the correlation dimension of integer lattices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lacasa, Lucas, E-mail: l.lacasa@qmul.ac.uk; Gómez-Gardeñes, Jesús, E-mail: gardenes@gmail.com; Departamento de Fisica de la Materia Condensada, Universidad de Zaragoza, Zaragoza
2014-12-01
Recently [L. Lacasa and J. Gómez-Gardeñes, Phys. Rev. Lett. 110, 168703 (2013)], a fractal dimension has been proposed to characterize the geometric structure of networks. This measure is an extension to graphs of the so-called correlation dimension, originally proposed by Grassberger and Procaccia to describe the geometry of strange attractors in dissipative chaotic systems. The calculation of the correlation dimension of a graph is based on the local information retrieved from a random walker navigating the network. In this contribution, we study this quantity for some limiting synthetic spatial networks and obtain analytical results in agreement with the previously reported numerics. In particular, we show that, up to first order, the correlation dimension β of integer lattices ℤ^d coincides with the Hausdorff dimension of their coarsely equivalent Euclidean spaces, β = d.
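The Grassberger-Procaccia correlation dimension that this graph measure extends is easy to state concretely. A minimal sketch for the classical setting (a point cloud rather than a graph; point count and radii are illustrative choices):

```python
import numpy as np

def correlation_sum(points, r):
    """Grassberger-Procaccia correlation sum C(r): the fraction of
    distinct point pairs closer than r."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(len(points), k=1)
    return np.mean(d[iu] < r)

# C(r) ~ r^beta for small r, so the slope of log C against log r
# estimates the correlation dimension beta.
rng = np.random.default_rng(1)
pts = rng.random((1500, 2))                # uniform points in the unit square
r1, r2 = 0.02, 0.08
slope = (np.log(correlation_sum(pts, r2)) -
         np.log(correlation_sum(pts, r1))) / np.log(r2 / r1)
# For a planar set the estimate comes out close to 2.
```

For a strange attractor one would feed in trajectory points (after discarding transients) instead of uniform samples; the graph version replaces the metric balls with random-walker statistics.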
From globally coupled maps to complex-systems biology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaneko, Kunihiko, E-mail: kaneko@complex.c.u-tokyo.ac.jp
Studies of globally coupled maps, introduced as a network of chaotic dynamics, are briefly reviewed with an emphasis on novel concepts therein, which are universal in high-dimensional dynamical systems. They include clustering of synchronized oscillations, hierarchical clustering, chimera of synchronization and desynchronization, partition complexity, prevalence of Milnor attractors, chaotic itinerancy, and collective chaos. The degrees of freedom necessary for high dimensionality are proposed to equal the number in which the combinatorial exceeds the exponential. Future analysis of high-dimensional dynamical systems with regard to complex-systems biology is briefly discussed.
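The globally coupled maps reviewed here have a compact standard form. A minimal sketch of Kaneko-style globally coupled logistic-type maps; the parameter values below are illustrative assumptions chosen only to land in the fully synchronized (coherent) phase rather than the clustered or chaotic-itinerancy regimes discussed in the review:

```python
import numpy as np

def gcm_step(x, eps, a=1.8):
    """One step of globally coupled maps:
    x_i' = (1 - eps) f(x_i) + (eps/N) sum_j f(x_j), with f(x) = 1 - a x^2."""
    fx = 1.0 - a * x**2
    return (1.0 - eps) * fx + eps * fx.mean()

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=100)           # N = 100 map elements
for _ in range(2000):
    x = gcm_step(x, eps=0.6)               # strong coupling: coherent phase
# Count synchronized clusters (elements equal to within 1e-6);
# strong coupling collapses all elements onto one cluster.
clusters = len(np.unique(np.round(x, 6)))
```

Sweeping eps downward from this value is how the clustering, hierarchical-clustering, and desynchronized phases in the review are typically explored.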
Equilibrium paths analysis of materials with rheological properties by using the chaos theory
NASA Astrophysics Data System (ADS)
Bednarek, Paweł; Rządkowski, Jan
2018-01-01
Numerical equilibrium-path analysis of a material with random rheological properties using standard procedures and specialist computer programs was not successful. A proper solution for the analysed heuristic model of the material was obtained using elements of chaos theory and neural networks. The paper discusses the mathematical basis of the computer programs used and elaborates the properties of the attractor employed in the analysis. Results of the numerical analysis are presented in both numerical and graphical form for the procedures used.
Default Mode Network Interference in Mild Traumatic Brain Injury – A Pilot Resting State Study
Sours, Chandler; Zhuo, Jiachen; Janowich, Jacqueline; Aarabi, Bizhan; Shanmuganathan, Kathirkamanthan; Gullapalli, Rao P
2013-01-01
In this study we investigated the functional connectivity in 23 Mild TBI (mTBI) patients with and without memory complaints using resting state fMRI in the sub-acute stage of injury as well as a group of control participants. Results indicate that mTBI patients with memory complaints performed significantly worse than patients without memory complaints on tests assessing memory from the Automated Neuropsychological Assessment Metrics (ANAM). Altered functional connectivity was observed between the three groups between the default mode network (DMN) and the nodes of the task positive network (TPN). Altered functional connectivity was also observed between both the TPN and DMN and nodes associated with the Salience Network (SN). Following mTBI there is a reduction in anti-correlated networks for both those with and without memory complaints for the DMN, but only a reduction in the anti-correlated network in mTBI patients with memory complaints for the TPN. Furthermore, an increased functional connectivity between the TPN and SN appears to be associated with reduced performance on memory assessments. Overall the results suggest that a disruption in the segregation of the DMN and the TPN at rest may be mediated through both a direct pathway of increased FC between various nodes of the TPN and DMN, and through an indirect pathway that links the TPN and DMN through nodes of the SN. This disruption between networks may cause a detrimental impact on memory functioning following mTBI, supporting the Default Mode Interference Hypothesis in the context of mTBI related memory deficits. PMID:23994210
2013-03-01
Networks of Memories. Simon Dennis and Mikhail Belkin, Ohio State University. Final report AFRL-OSR-VA-TR-2013-0131 (grant FA95500910614; PI: Simon Dennis), March 2013.
Self-organization and solution of shortest-path optimization problems with memristive networks
NASA Astrophysics Data System (ADS)
Pershin, Yuriy V.; Di Ventra, Massimiliano
2013-07-01
We show that memristive networks, namely networks of resistors with memory, can efficiently solve shortest-path optimization problems. Indeed, the presence of memory (time nonlocality) promotes self-organization of the network into the shortest possible path(s). We introduce a network entropy function to characterize the self-organized evolution, show the solution of the shortest-path problem, and demonstrate the healing property of the solution path. Finally, we provide an algorithm to solve the traveling salesman problem. Similar considerations apply to networks of memcapacitors and meminductors, and networks with memory in various dimensions.
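The self-organization idea can be illustrated with a toy adaptive-conductance network. This is a Physarum-style reinforcement sketch, not the authors' memristor model: edges whose conductance grows with the current they carry (and decays otherwise) progressively concentrate the flow onto the lower-resistance, i.e. shorter, source-sink path. The graph, update rule, and rates are all illustrative assumptions:

```python
import numpy as np

# Toy graph: source node 0 to sink node 3 via a short path 0-1-3 (2 edges)
# and a long path 0-2-4-3 (3 edges).
edges = [(0, 1), (1, 3), (0, 2), (2, 4), (4, 3)]
short_path, long_path = [0, 1], [2, 3, 4]      # edge indices per path
n_nodes = 5
g = np.ones(len(edges))                        # unit initial conductances

def edge_currents(g):
    """Inject unit current at node 0, extract it at node 3 (grounded):
    solve the conductance Laplacian for node potentials, return |I| per edge."""
    L = np.zeros((n_nodes, n_nodes))
    for (a, b), gi in zip(edges, g):
        L[a, a] += gi; L[b, b] += gi
        L[a, b] -= gi; L[b, a] -= gi
    rhs = np.zeros(n_nodes); rhs[0] = 1.0
    keep = [0, 1, 2, 4]                        # every node except the grounded sink
    v = np.zeros(n_nodes)
    v[keep] = np.linalg.solve(L[np.ix_(keep, keep)], rhs[keep])
    return np.array([gi * abs(v[a] - v[b]) for (a, b), gi in zip(edges, g)])

# "Memory" rule: conductance decays but grows with the current the edge
# carries, so the shorter path is reinforced at the expense of the longer one.
for _ in range(50):
    g = 0.5 * g + edge_currents(g)

short_g, long_g = g[short_path].mean(), g[long_path].mean()
```

After the loop, essentially all the conductance (and hence all the current) sits on the two-edge path; starving an edge of current, as a "cut" would, lets the same rule regrow an alternative path, which is the healing property in miniature.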
Computation in Dynamically Bounded Asymmetric Systems
Rutishauser, Ueli; Slotine, Jean-Jacques; Douglas, Rodney
2015-01-01
Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable ‘expansion’ dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems. PMID:25617645
Huang, Sui
2012-02-01
The Neo-Darwinian concept of natural selection is plausible when one assumes a straightforward causation of phenotype by genotype. However, such simple 1:1 mapping must now give place to the modern concepts of gene regulatory networks and gene expression noise. Both can, in the absence of genetic mutations, jointly generate a diversity of inheritable randomly occupied phenotypic states that could also serve as a substrate for natural selection. This form of epigenetic dynamics challenges Neo-Darwinism. It needs to incorporate the non-linear, stochastic dynamics of gene networks. A first step is to consider the mathematical correspondence between gene regulatory networks and Waddington's metaphoric 'epigenetic landscape', which actually represents the quasi-potential function of global network dynamics. It explains the coexistence of multiple stable phenotypes within one genotype. The landscape's topography with its attractors is shaped by evolution through mutational re-wiring of regulatory interactions - offering a link between genetic mutation and sudden, broad evolutionary changes. Copyright © 2012 WILEY Periodicals, Inc.
Predicting epileptic seizures from scalp EEG based on attractor state analysis.
Chu, Hyunho; Chung, Chun Kee; Jeong, Woorim; Cho, Kwang-Hyun
2017-05-01
Epilepsy is the second most common disease of the brain. Epilepsy makes it difficult for patients to live a normal life because it is difficult to predict when seizures will occur. In this regard, if seizures could be predicted a reasonable period of time before their occurrence, epilepsy patients could take precautions against them and improve their safety and quality of life. In this paper, we investigate a novel seizure precursor based on attractor state analysis for seizure prediction. We analyze the transition process from normal to seizure attractor state and investigate a precursor phenomenon seen before reaching the seizure attractor state. From the result of an analysis, we define a quantified spectral measure in scalp EEG for seizure prediction. From scalp EEG recordings, the Fourier coefficients of six EEG frequency bands are extracted, and the defined spectral measure is computed based on the coefficients for each half-overlapped 20-second-long window. The computed spectral measure is applied to seizure prediction using a low-complexity methodology. Within scalp EEG, we identified an early-warning indicator before an epileptic seizure occurs. Getting closer to the bifurcation point that triggers the transition from normal to seizure state, the power spectral density of low frequency bands of the perturbation of an attractor in the EEG showed a relative increase. A low-complexity seizure prediction algorithm using this feature was evaluated, using ∼583 h of scalp EEG in which 143 seizures in 16 patients were recorded. With the test dataset, the proposed method showed high sensitivity (86.67%) with a false prediction rate of 0.367 h⁻¹ and an average prediction time of 45.3 min. A novel seizure prediction method using scalp EEG, based on attractor state analysis, shows potential for application with real epilepsy patients.
This is the first study in which the seizure-precursor phenomenon of an epileptic seizure is investigated based on attractor-based analysis of the macroscopic dynamics of the brain. With the scalp EEG, we first propose use of a spectral feature identified for seizure prediction, in which the dynamics of an attractor are excluded, and only the perturbation dynamics from the attractor are considered. Copyright © 2017 Elsevier B.V. All rights reserved.
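The windowing and band-power step the abstract describes (Fourier coefficients of six EEG frequency bands over half-overlapped 20-second windows) can be sketched generically. The band edges, sampling rate, and synthetic test signal below are illustrative assumptions; the paper's actual spectral measure built on top of these powers is not reproduced here:

```python
import numpy as np

fs = 256                                   # sampling rate in Hz (assumed)
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "low-gamma": (30, 60), "high-gamma": (60, 100)}

def band_powers(window, fs):
    """Power in each frequency band of one window, from its Fourier coefficients."""
    freqs = np.fft.rfftfreq(len(window), d=1/fs)
    psd = np.abs(np.fft.rfft(window))**2 / len(window)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in bands.items()}

def windows(signal, fs, length_s=20):
    """Half-overlapped windows of length_s seconds."""
    n = length_s * fs
    for start in range(0, len(signal) - n + 1, n // 2):
        yield signal[start:start + n]

# Demo on a synthetic trace: a 10 Hz (alpha-band) tone plus weak noise,
# whose band powers should be dominated by alpha.
rng = np.random.default_rng(3)
t = np.arange(0, 60, 1/fs)
eeg = np.sin(2*np.pi*10*t) + 0.1*rng.standard_normal(len(t))
first = band_powers(next(windows(eeg, fs)), fs)
```

A per-window feature vector like `first` is the kind of input from which a low-complexity early-warning indicator (here, a relative rise of the low-frequency bands) can be computed online.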
Resonator Memories And Optical Novelty Filters
NASA Astrophysics Data System (ADS)
Anderson, Dana Z.; Erie, Marie C.
1987-05-01
Optical resonators having holographic elements are potential candidates for storing information that can be accessed through content-addressable or associative recall. Closely related to the resonator memory is the optical novelty filter, which can detect the differences between a test object and a set of reference objects. We discuss implementations of these devices using continuous optical media such as photorefractive materials. The discussion is framed in the context of neural network models. There are both formal and qualitative similarities between the resonator memory and optical novelty filter and network models. Mode competition arises in the theory of the resonator memory, much as it does in some network models. We show that the role of the phenomena of "daydreaming" in the real-time programmable optical resonator is very much akin to the role of "unlearning" in neural network memories. The theory of programming the real-time memory for a single mode is given in detail. This leads to a discussion of the optical novelty filter. Experimental results for the resonator memory, the real-time programmable memory, and the optical tracking novelty filter are reviewed. We also point to several issues that need to be addressed in order to implement more formal models of neural networks.
On the dynamics of approximating schemes for dissipative nonlinear equations
NASA Technical Reports Server (NTRS)
Jones, Donald A.
1993-01-01
Since one can rarely write down the analytical solutions to nonlinear dissipative partial differential equations (PDEs), it is important to understand whether, and in what sense, the behavior of approximating schemes for these equations reflects the true dynamics of the original equations. Further, because standard error estimates between approximations of the true solutions coming from spectral methods - finite difference or finite element schemes, for example - and the exact solutions grow exponentially in time, this analysis provides little value in understanding the infinite-time behavior of a given approximating scheme. The notion of the global attractor has been useful in quantifying the infinite-time behavior of dissipative PDEs, such as the Navier-Stokes equations. Loosely speaking, the global attractor is all that remains of a sufficiently large bounded set in phase space mapped infinitely forward in time under the evolution of the PDE. Though the attractor has been shown to have some nice properties - it is compact, connected, and finite-dimensional, for example - it is in general quite complicated. Nevertheless, the global attractor gives a way to understand how the infinite-time behavior of approximating schemes such as those coming from a finite difference, finite element, or spectral method relates to that of the original PDE. Indeed, one can often show that such approximations also have a global attractor. We therefore only need to understand how the structure of the attractor for the PDE behaves under approximation. This is by no means a trivial task. Several interesting results have been obtained in this direction. However, we will not go into the details. We mention here that approximations generally lose information about the system no matter how accurate they are. There are examples showing that certain parts of the attractor may be lost under arbitrarily small perturbations of the original equations.
Attractors of three-dimensional fast-rotating Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Trahe, Markus
The three-dimensional (3-D) rotating Navier-Stokes equations describe the dynamics of rotating, incompressible, viscous fluids. In this work, they are considered with smooth, time-independent forces, and the original statements implied by the classical "Taylor-Proudman Theorem" of geophysics are rigorously proved. It is shown that fully developed turbulence of 3-D fast-rotating fluids is essentially characterized by turbulence of two-dimensional (2-D) fluids in terms of numbers of degrees of freedom. In this context, the 3-D nonlinear "resonant limit equations", which arise in a nonlinear averaging process as the rotation frequency Ω → infinity, are studied, and optimal (2-D-type) upper bounds for fractal box and Hausdorff dimensions of the global attractor as well as upper bounds for box dimensions of exponential attractors are determined. Then, the convergence of exponential attractors for the full 3-D rotating Navier-Stokes equations to exponential attractors for the resonant limit equations as Ω → infinity in the sense of full Hausdorff-metric distances is established. This provides upper and lower semi-continuity of exponential attractors with respect to the rotation frequency and implies that the number of degrees of freedom (attractor dimension) of 3-D fast-rotating fluids is close to that of 2-D fluids. Finally, the algebraic-geometric structure of the Poincaré curves, which control the resonances and small-divisor estimates for partial differential equations, is further investigated; the 3-D nonlinear limit resonant operators are characterized by three-wave interactions governed by these curves. A new canonical transformation between those curves is constructed, with far-reaching consequences for the density of the latter.
Spiking neural network simulation: memory-optimal synaptic event scheduling.
Stewart, Robert D; Gurney, Kevin N
2011-06-01
Spiking neural network simulations incorporating variable transmission delays require synaptic events to be scheduled prior to delivery. Conventional methods have memory requirements that scale with the total number of synapses in a network. We introduce novel scheduling algorithms for both discrete and continuous event delivery, where the memory requirement scales instead with the number of neurons. Superior algorithmic performance is demonstrated using large-scale, benchmarking network simulations.
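The neuron-scaling idea can be illustrated with a delay-ring scheduler: pending spikes are recorded per source neuron in time slots, and per-synapse targets are looked up from the connectivity table only at delivery time, so nothing is ever queued per synapse. This is a hedged sketch of the general technique, not the authors' exact algorithms:

```python
class SpikeScheduler:
    """Delay-ring scheduler: memory scales with neurons x max_delay,
    not with the total synapse count (a sketch of the idea only)."""

    def __init__(self, max_delay):
        self.max_delay = max_delay
        # One slot per time step in the ring; each holds spiking neuron ids.
        self.ring = [set() for _ in range(max_delay + 1)]
        self.t = 0

    def schedule(self, neuron, delay):
        """Record that `neuron` spiked; its spike arrives after `delay` steps."""
        assert 1 <= delay <= self.max_delay
        self.ring[(self.t + delay) % len(self.ring)].add(neuron)

    def advance(self):
        """Advance one time step; return source neurons whose spikes arrive now.
        Targets would be resolved from the connectivity table at this point."""
        self.t += 1
        slot = self.t % len(self.ring)
        due, self.ring[slot] = self.ring[slot], set()
        return due
```

A full simulator would, on each `advance`, fan the due spikes out to postsynaptic targets using the stored connectivity, paying the per-synapse cost only transiently at delivery.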
Zhou, Caigen; Zeng, Xiaoqin; Luo, Chaomin; Zhang, Huaguang
In this paper, local bipolar auto-associative memories are presented based on discrete recurrent neural networks with a class of gain-type activation function. The weight parameters of the neural networks are acquired from a set of inequalities, without a learning procedure. Global exponential stability criteria are established to ensure the accuracy of the restored patterns in the presence of time delays and external inputs. The proposed methodology is capable of effectively overcoming spurious memory patterns while achieving memory capacity. The effectiveness, robustness, and fault-tolerant capability are validated by simulated experiments.
Yue, Yuan; Miao, Pengcheng; Xie, Jianhua; Grebogi, Celso
2016-11-01
Quasiperiodic chaos (QC), which is a combination of quasiperiodic sets and a chaotic set, is uncovered in the six-dimensional Poincaré map of a symmetric three-degree-of-freedom vibro-impact system. Accompanied by a symmetry-restoring bifurcation, this QC is the consequence of a novel intermittency that occurs among two conjugate quasiperiodic sets and a chaotic set. The six-dimensional Poincaré map P is the 2-fold composition of another virtual implicit map Q, yielding the symmetry of the system. Map Q can capture two conjugate attractors, which are at the core of the dynamics of the vibro-impact system. Three types of symmetry-restoring bifurcations are analyzed in detail. First, if two conjugate chaotic attractors join together, the chaos-chaos intermittency induced by an attractor-merging crisis takes place. Second, if two conjugate quasiperiodic sets are suddenly embedded in a chaotic one, QC is induced by a new intermittency among the three attractors. Third, if two conjugate quasiperiodic attractors connect with each other directly, they merge to form a single symmetric quasiperiodic one. For the second case, the new intermittency is caused by the collision of two conjugate quasiperiodic attractors with an unstable symmetric limit set. As the iteration number is increased, the largest finite-time Lyapunov exponent of the QC does not converge to a constant but fluctuates in the positive region.
Chaos and generalised multistability in a mesoscopic model of the electroencephalogram
NASA Astrophysics Data System (ADS)
Dafilis, Mathew P.; Frascoli, Federico; Cadusch, Peter J.; Liley, David T. J.
2009-06-01
We present evidence for chaos and generalised multistability in a mesoscopic model of the electroencephalogram (EEG). Two limit cycle attractors and one chaotic attractor were found to coexist in a two-dimensional plane of the ten-dimensional volume of initial conditions. The chaotic attractor was found to have a moderate value of the largest Lyapunov exponent (3.4 s⁻¹, base e) with an associated Kaplan-Yorke (Lyapunov) dimension of 2.086. There are two different limit cycles appearing in conjunction with this particular chaotic attractor: one multiperiodic low-amplitude limit cycle whose largest spectral peak is within the alpha band (8-13 Hz) of the EEG; and another multiperiodic large-amplitude limit cycle which may correspond to epilepsy. The cause of the coexistence of these structures is explained with a one-parameter bifurcation analysis. Each attractor has a basin of differing complexity: the large-amplitude limit cycle has a basin relatively uncomplicated in its structure while the small-amplitude limit cycle and chaotic attractor each have much more finely structured basins of attraction, but none of the basin boundaries appear to be fractal. The basins of attraction for the chaotic and small-amplitude limit cycle dynamics apparently reside within each other. We briefly discuss the implications of these findings in the context of theoretical attempts to understand the dynamics of brain function and behaviour.
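The Kaplan-Yorke (Lyapunov) dimension quoted above is computed from the Lyapunov spectrum sorted in descending order as D_KY = j + (λ1 + ... + λj)/|λ(j+1)|, where j is the largest index for which the partial sum of exponents is non-negative. A minimal sketch; the third exponent below is chosen merely to reproduce a dimension of 2.086 and is not taken from the paper:

```python
def kaplan_yorke_dimension(lyap):
    """Kaplan-Yorke dimension from a list of Lyapunov exponents."""
    exps = sorted(lyap, reverse=True)
    partial = 0.0
    for j, lam in enumerate(exps):
        if partial + lam < 0:
            # j exponents keep the sum non-negative; interpolate into the next.
            return j + partial / abs(lam)
        partial += lam
    return float(len(exps))  # sum never goes negative: dimension = phase-space dim

# Illustrative 3-D flow spectrum (largest exponent 3.4 s^-1 as in the abstract;
# the zero and negative exponents are assumed, not reported values):
d_ky = kaplan_yorke_dimension([3.4, 0.0, -39.5])
```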
A Systems Approach to Stress, Stressors and Resilience in Humans
Oken, Barry S.; Chamine, Irina; Wakeland, Wayne
2014-01-01
The paper focuses on the biology of stress and resilience and their biomarkers in humans from the system science perspective. A stressor pushes the physiological system away from its baseline state towards a lower utility state. The physiological system may return towards the original state in one attractor basin but may be shifted to a state in another, lower utility attractor basin. While some physiological changes induced by stressors may benefit health, there is often a chronic wear and tear cost due to implementing changes to enable the return of the system to its baseline state and maintain itself in the high utility baseline attractor basin following repeated perturbations. This cost, also called allostatic load, is the utility reduction associated with both a change in state and with alterations in the attractor basin that affect system responses following future perturbations. This added cost can increase the time course of the return to baseline or the likelihood of moving into a different attractor basin following a perturbation. Opposite to this is the system’s resilience which influences its ability to return to the high utility attractor basin following a perturbation by increasing the likelihood and/or speed of returning to the baseline state following a stressor. This review paper is a qualitative systematic review; it covers areas most relevant for moving the stress and resilience field forward from a more quantitative and neuroscientific perspective. PMID:25549855
Engström, Maria; Landtblom, Anne-Marie; Karlsson, Thomas
2013-01-01
Despite the interest in the neuroimaging of working memory, little is still known about the neurobiology of complex working memory in tasks that require simultaneous manipulation and storage of information. In addition to the central executive network, we assumed that the recently described salience network [involving the anterior insular cortex (AIC) and the anterior cingulate cortex (ACC)] might be of particular importance to working memory tasks that require complex, effortful processing. Method: Healthy participants (n = 26) and participants suffering from working memory problems related to the Kleine–Levin syndrome (KLS) (a specific form of periodic idiopathic hypersomnia; n = 18) participated in the study. Participants were further divided into a high- and low-capacity group, according to performance on a working memory task (listening span). In a functional magnetic resonance imaging (fMRI) study, participants were administered the reading span complex working memory task tapping cognitive effort. Principal findings: The fMRI-derived blood oxygen level dependent (BOLD) signal was modulated by (1) effort in both the central executive and the salience network and (2) capacity in the salience network in that high performers evidenced a weaker BOLD signal than low performers. In the salience network there was a dichotomy between the left and the right hemisphere; the right hemisphere elicited a steeper increase of the BOLD signal as a function of increasing effort. There was also a stronger functional connectivity within the central executive network because of increased task difficulty. Conclusion: The ability to allocate cognitive effort in complex working memory is contingent upon focused resources in the executive and in particular the salience network. 
Individual capacity during the complex working memory task is related to activity in the salience (but not the executive) network so that high-capacity participants evidence a lower signal and possibly hence a larger dynamic response. PMID:23616756
The CRISP theory of hippocampal function in episodic memory
Cheng, Sen
2013-01-01
Over the past four decades, a “standard framework” has emerged to explain the neural mechanisms of episodic memory storage. This framework has been instrumental in driving hippocampal research forward and now dominates the design and interpretation of experimental and theoretical studies. It postulates that cortical inputs drive plasticity in the recurrent cornu ammonis 3 (CA3) synapses to rapidly imprint memories as attractor states in CA3. Here we review a range of experimental studies and argue that the evidence against the standard framework is mounting, notwithstanding the considerable evidence in its support. We propose CRISP as an alternative theory to the standard framework. CRISP is based on Context Reset by dentate gyrus (DG), Intrinsic Sequences in CA3, and Pattern completion in cornu ammonis 1 (CA1). Compared to previous models, CRISP uses a radically different mechanism for storing episodic memories in the hippocampus. Neural sequences are intrinsic to CA3, and inputs are mapped onto these intrinsic sequences through synaptic plasticity in the feedforward projections of the hippocampus. Hence, CRISP does not require plasticity in the recurrent CA3 synapses during the storage process. As in other theories, DG and CA1 play supporting roles; however, their functions in CRISP have distinct implications. For instance, CA1 performs pattern completion in the absence of CA3, and DG contributes to episodic memory retrieval, increasing the speed, precision, and robustness of retrieval. We present the conceptual theory, discuss its implications for experimental results, and suggest testable predictions. It appears that CRISP not only accounts for those experimental results that are consistent with the standard framework, but also for results that are at odds with the standard framework. We therefore suggest that CRISP is a viable, and perhaps superior, theory for the hippocampal function in episodic memory. PMID:23653597
On some dynamical chameleon systems
NASA Astrophysics Data System (ADS)
Burkin, I. M.; Kuznetsova, O. I.
2018-03-01
It is now well known that dynamical systems can be categorized into systems with self-excited attractors and systems with hidden attractors. A self-excited attractor has a basin of attraction that is associated with an unstable equilibrium, while a hidden attractor has a basin of attraction that does not intersect with small neighborhoods of any equilibrium points. Hidden attractors play an important role in engineering applications because they allow unexpected and potentially disastrous responses to perturbations in a structure like a bridge or an airplane wing. In addition, complex behaviors of chaotic systems have been applied in various areas, ranging from image watermarking, audio encryption, asymmetric color pathological image encryption, and chaotic masking communication to random number generation. Recently, researchers have discovered the so-called “chameleon systems”. These systems were so named because they demonstrate self-excited or hidden oscillations depending on the values of their parameters. The present paper offers a simple algorithm for synthesizing one-parameter chameleon systems. The authors trace the evolution of the Lyapunov exponents and the Kaplan-Yorke dimension of such systems as the parameter changes.
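Tracing a Lyapunov exponent as a parameter varies, as done above, can be illustrated on a simple discrete map. The chameleon systems in the paper are continuous flows; the logistic map below is a generic stand-in showing how the largest exponent is estimated as the average log-derivative along an orbit:

```python
import numpy as np

def largest_lyapunov_logistic(r, n=5000, burn=500, x0=0.3):
    """Largest Lyapunov exponent of x -> r*x*(1-x), as the mean of
    log|f'(x)| = log|r*(1-2x)| along the orbit (after a transient)."""
    x = x0
    acc = 0.0
    for i in range(burn + n):
        x = r * x * (1 - x)
        if i >= burn:
            acc += np.log(abs(r * (1 - 2 * x)) + 1e-300)  # guard against log(0)
    return acc / n
```

Sweeping `r` and plotting the returned exponent reproduces the familiar picture: negative values on periodic windows, positive values in chaotic regimes (e.g. close to ln 2 at r = 4).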
Eyre, Harris A.; Acevedo, Bianca; Yang, Hongyu; Siddarth, Prabha; Van Dyk, Kathleen; Ercoli, Linda; Leaver, Amber M.; St. Cyr, Natalie; Narr, Katherine; Baune, Bernhard T.; Khalsa, Dharma S.; Lavretsky, Helen
2016-01-01
Background: No study has explored the effect of yoga on cognitive decline and resting-state functional connectivity. Objectives: This study explored the relationship between performance on memory tests and resting-state functional connectivity before and after a yoga intervention versus an active control for subjects with mild cognitive impairment (MCI). Methods: Participants (≥55 y) with MCI were randomized to receive a yoga intervention or an active “gold-standard” control (i.e., memory enhancement training (MET)) for 12 weeks. Resting-state functional magnetic resonance imaging was used to map correlations between brain networks and memory performance changes over time. The default mode network (DMN), language, and superior parietal networks were chosen as networks of interest to analyze the association with changes in verbal and visuospatial memory performance. Results: Fourteen yoga and 11 MET participants completed the study. The yoga group demonstrated a statistically significant improvement in depression and visuospatial memory. We observed improved verbal memory performance correlated with increased connectivity between the DMN and the frontal medial cortex, pregenual anterior cingulate cortex, right middle frontal cortex, posterior cingulate cortex, and left lateral occipital cortex. Improved verbal memory performance positively correlated with increased connectivity between the language processing network and the left inferior frontal gyrus. Improved visuospatial memory performance correlated inversely with connectivity between the superior parietal network and the medial parietal cortex. Conclusion: Yoga may be as effective as MET in improving functional connectivity in relation to verbal memory performance. These findings should be confirmed in larger prospective studies. PMID:27060939
Mnemonic convergence in social networks: The emergent properties of cognition at a collective level.
Coman, Alin; Momennejad, Ida; Drach, Rae D; Geana, Andra
2016-07-19
The development of shared memories, beliefs, and norms is a fundamental characteristic of human communities. These emergent outcomes are thought to occur owing to a dynamic system of information sharing and memory updating, which fundamentally depends on communication. Here we report results on the formation of collective memories in laboratory-created communities. We manipulated conversational network structure in a series of real-time, computer-mediated interactions in fourteen 10-member communities. The results show that mnemonic convergence, measured as the degree of overlap among community members' memories, is influenced by both individual-level information-processing phenomena and by the conversational social network structure created during conversational recall. By studying laboratory-created social networks, we show how large-scale social phenomena (i.e., collective memory) can emerge out of microlevel local dynamics (i.e., mnemonic reinforcement and suppression effects). The social-interactionist approach proposed herein points to optimal strategies for spreading information in social networks and provides a framework for measuring and forging collective memories in communities of individuals.
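The degree of overlap among community members' memories, used above to measure mnemonic convergence, can be quantified in a minimal way as mean pairwise Jaccard overlap between recalled item sets. This is a generic overlap index assumed for illustration, not necessarily the paper's exact measure:

```python
from itertools import combinations

def mnemonic_convergence(recalls):
    """Mean pairwise Jaccard overlap between members' recalled item sets."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 1.0
    pairs = list(combinations(recalls, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Identical recalls -> 1.0; fully disjoint recalls -> 0.0.
```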
Non-BPS attractors in 5d and 6d extended supergravity
NASA Astrophysics Data System (ADS)
Andrianopoli, L.; Ferrara, S.; Marrani, A.; Trigiante, M.
2008-05-01
We connect the attractor equations of a certain class of N=2, d=5 supergravities with their (1,0), d=6 counterparts, by relating the moduli space of non-BPS d=5 black hole/black string attractors to the moduli space of extremal dyonic black string d=6 non-BPS attractors. For d=5 real special symmetric spaces and for N=4,6,8 theories, we explicitly compute the flat directions of the black object potential corresponding to vanishing eigenvalues of its Hessian matrix. In the case N=4, we study the relation to the (2,0), d=6 theory. We finally describe the embedding of the N=2, d=5 magic models in N=8, d=5 supergravity as well as the interconnection among the corresponding charge orbits.
Strange attractors in weakly turbulent Couette-Taylor flow
NASA Technical Reports Server (NTRS)
Brandstater, A.; Swinney, Harry L.
1987-01-01
An experiment is conducted on the transition from quasi-periodic to weakly turbulent flow of a fluid contained between concentric cylinders with the inner cylinder rotating and the outer cylinder at rest. Power spectra, phase-space portraits, and circle maps obtained from velocity time-series data indicate that the nonperiodic behavior observed is deterministic, that is, it is described by strange attractors. Various problems that arise in computing the dimension of strange attractors constructed from experimental data are discussed and it is shown that these problems impose severe requirements on the quantity and accuracy of data necessary for determining dimensions greater than about 5. In the present experiment the attractor dimension increases from 2 at the onset of turbulence to about 4 at a Reynolds number 50 percent above the onset of turbulence.
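Dimension estimates of the kind discussed above are commonly obtained from a time-delay embedding of the velocity series and the correlation sum C(r) (the Grassberger-Procaccia approach): the slope of log C(r) versus log r over a scaling range estimates the attractor dimension. A minimal sketch, not the paper's specific procedure:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a scalar series (Takens reconstruction)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def correlation_sum(points, r):
    """Fraction of distinct point pairs closer than r: C(r)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    m = len(points)
    return (np.sum(d < r) - m) / (m * (m - 1))  # subtract the m self-pairs

def correlation_dimension(points, r1, r2):
    """Slope of log C(r) vs log r between two radii in the scaling range."""
    c1, c2 = correlation_sum(points, r1), correlation_sum(points, r2)
    return (np.log(c2) - np.log(c1)) / (np.log(r2) - np.log(r1))
```

For points sampled from a circle (a one-dimensional attractor), the estimate comes out near 1; the abstract's point is that reliable estimates above dimension ~5 demand far more, and far cleaner, data than this toy case.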
Extended write combining using a write continuation hint flag
Chen, Dong; Gara, Alan; Heidelberger, Philip; Ohmacht, Martin; Vranas, Pavlos
2013-06-04
A computing apparatus reduces the amount of processing in a network computing system. The system includes a network system device of a receiving node for receiving electronic messages comprising data; the electronic messages are transmitted from a sending node. The network system device determines when more data of a specific electronic message is being transmitted. A memory device stores the electronic message data and communicates with the network system device. A memory subsystem communicates with the memory device; the memory subsystem stores a portion of the electronic message when more data of the specific message will be received, and the buffer combines the portion with later-received data and moves the data to the memory device for accessible storage.
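The flow described above, hold a partial message while more data is expected, combine it with later fragments, then commit a single write, can be sketched as follows. Names and structure are illustrative only, not the patent's implementation:

```python
class WriteCombiningBuffer:
    """Sketch of write combining guided by a 'more data coming' hint:
    partial messages stay in the subsystem buffer; only the fully
    combined message is written to memory."""

    def __init__(self):
        self.partial = {}  # message id -> bytes accumulated so far
        self.memory = {}   # committed (fully combined) messages

    def receive(self, msg_id, data, more_coming):
        buf = self.partial.pop(msg_id, b"") + data
        if more_coming:
            self.partial[msg_id] = buf   # defer: avoid a partial memory write
        else:
            self.memory[msg_id] = buf    # one combined write to memory
```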
Holonomy Attractor Connecting Spaces of Different Curvature Responsible for ``Anomalies''
NASA Astrophysics Data System (ADS)
Binder, Bernd
2009-03-01
In this lecture paper we derive Magic Angle Precession (MAP) from first geometric principles. MAP can arise in situations where precession is multiply related to spin, linearly by time or distance (dynamic phase, rolling, Gauss law) and transcendentally by the holonomy loop path (geometric phase). With linear spin-precession coupling, gyroscopes can be spun up and down to very high frequencies via low frequency holonomy control induced by external accelerations, which provides for extreme coupling strengths or "anomalies" that can be tested by the powerball or gyrotwister device. Geometrically, a gyroscopic manifold with spherical metric is tangentially aligned to a precession wave channel with conic or hyperbolic metric (like the relativistic Thomas precession). Transporting triangular spin/precession vector relations across the tangential boundary of contact with SO(3) Lorentz symmetry, we get extreme vector currents near the attractor fixed points in precession phase space, where spin currents remain intact while crossing the contact boundaries between regions of different curvature signature (-1, 0, +1). The problem can be geometrically solved by considering a curvature invariant triangular condition, which holds on surfaces with different curvature that are in contact and locally parallel. In this case two out of three angles are identical, whereas the third angle is different due to holonomy. If we require that the side length ratios corresponding to these angles are invariant, we get a geodesic chaotic attractor, which is a cosine map cos(x) ∼ Mx in parameter space providing for fixed points, limit cycle bifurcations, and singularities. The situation could be quite natural and common in the context of vector currents in curved spacetime and gauge theories.
MAP could even be part of the electromagnetic interaction, where the electric charge is the geometric U(1) precession spin current and gauge potential with magnetic effects given by extra rotations under the SO(3). MAP can be extended to a neural network, where the synaptic connection of the holonomy attractor is just the mathematical condition adjusting and bridging spaces with positive (spherical) and negative (hyperbolic) curvature allowing for lossless/supra spin currents. Another strategy is to look for existing spin/precession anomalies and corresponding nonlinear holonomy conditions at the most fundamental level from the quark level to the cosmic scale. In these scenarios the geodesic attractor could control holonomy and curvature near the fixed points. It was proposed in 2002 that this should happen with electrons in atomic orbits showing a Berry phase part of the Rydberg or Sommerfeld fine structure constant and in 2003 that this effect could be responsible for (in)stabilities in the nuclear range and in superconductors. In 2008 it was shown that the attractor is part of the chaotic mechanical dynamics successfully at work in the Gyro-twister fitness device, and in 2007-2009 that there could be some deep relevance to "anomalies" in many scenarios even on the cosmic scales. Thus, we will point to and discuss some possible future applications that could be utilized for metric engineering: generating artificial holonomy and curvature (DC effect) for propulsion, or forcing holonomy waves (AC effect) in hyperbolic space-time, which are just gravitational waves interesting for communication.
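The fixed-point behavior behind a cosine map can be illustrated with the simplest member of the family: iterating x → cos(x) converges to the unique solution of x = cos(x) (the Dottie number, ≈ 0.739085). The paper's map cos(x) ∼ Mx carries an extra parameter M that produces the bifurcations and singularities described above; this parameter-free sketch only shows the fixed-point attraction:

```python
import math

def cosine_fixed_point(x0=1.0, tol=1e-12, max_iter=1000):
    """Iterate x -> cos(x) until the update is below tol.
    Converges for any real start because |d/dx cos(x)| < 1 near the fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = math.cos(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```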
Woodward, Alexander; Froese, Tom; Ikegami, Takashi
2015-02-01
The state space of a conventional Hopfield network typically exhibits many different attractors, of which only a small subset satisfies constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimization is also operative in real brains. Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model. In addition, by using this spiking neural network to emulate a Hopfield network with Hebbian learning, we attempt to make a connection between rate-based and temporal-coding-based neural systems. Although further work is required to make this model more realistic, it already suggests that the efficacy of the self-optimizing process is independent of the simplifying assumptions of a conventional Hopfield network. We also discuss natural and cultural processes that could be responsible for occasional alteration of neural firing patterns in actual brains. Copyright © 2014 Elsevier Ltd. All rights reserved.
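The conventional Hopfield network with Hebbian learning referred to above can be sketched compactly: store ±1 patterns with the outer-product rule, then recover a stored pattern from a corrupted cue by asynchronous threshold updates. This is the textbook baseline the paper builds on, not its self-optimizing or spiking extension:

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_weights(patterns):
    """Hebbian outer-product rule; zero diagonal (no self-connections)."""
    W = sum(np.outer(p, p) for p in patterns) / len(patterns)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, sweeps=5):
    """Asynchronous +-1 updates; the state descends into the nearest attractor."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s
```

Storing two orthogonal patterns and flipping a few bits of one produces a cue that falls back into the original pattern's basin of attraction.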
Stochastic Representation of Chaos using Terminal Attractors
NASA Technical Reports Server (NTRS)
Zak, Michail
2005-01-01
A nonlinear version of the Liouville equation based upon terminal attractors is proposed for describing post-instability motions of dynamical systems with exponential divergence of trajectories, such as those leading to chaos and turbulence. As a result, the post-instability motions are represented by expectations, variances, and higher moments of the state variables as functions of time. The proposed approach can be applied to conservative chaos, in particular the n-body problem, as well as to dissipative systems, in particular chaotic attractors and turbulence.
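The defining property of a terminal attractor (see the related abstracts above) is finite-time convergence, obtained by violating the Lipschitz condition at the fixed point. A standard one-dimensional illustration is dx/dt = -x^(1/3): the right-hand side is not Lipschitz at x = 0, and the solution x(t) = (x0^(2/3) - (2/3)t)^(3/2) reaches zero exactly at t* = (3/2) x0^(2/3), whereas the regular system dx/dt = -x only approaches zero asymptotically. A simple Euler integration confirms the arrival time:

```python
def integrate_terminal(x0, dt=1e-4, t_max=5.0):
    """Euler integration of dx/dt = -x**(1/3), a terminal attractor at x = 0.
    Returns the time at which the trajectory actually reaches the attractor."""
    x, t = x0, 0.0
    while t < t_max and x > 0.0:
        x = max(x - dt * x ** (1.0 / 3.0), 0.0)  # clamp: no overshoot past 0
        t += dt
    return t

# Closed form: t* = 1.5 * x0**(2/3); e.g. x0 = 1 arrives at t* = 1.5.
```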
Inflaton fragmentation in E models of cosmological α-attractors
NASA Astrophysics Data System (ADS)
Hasegawa, Fuminori; Hong, Jeong-Pyong
2018-04-01
Cosmological α-attractors are observationally favored due to the asymptotic flatness of the potential. Since this flatness induces negative pressure, the coherent oscillation of the inflaton field can fragment into quasistable localized objects called I-balls (or "oscillons"). We investigated the possibility of I-ball formation in E models of α-attractors. Using linear analysis and lattice simulations, we found that the instability grows sufficiently against the cosmic expansion and the inflaton actually fragments into I-balls for α ≲ 10⁻³.
Episodic Memory Retrieval Benefits from a Less Modular Brain Network Organization
2017-01-01
Most complex cognitive tasks require the coordinated interplay of multiple brain networks, but the act of retrieving an episodic memory may place especially heavy demands for communication between the frontoparietal control network (FPCN) and the default mode network (DMN), two networks that do not strongly interact with one another in many task contexts. We applied graph theoretical analysis to task-related fMRI functional connectivity data from 20 human participants and found that global brain modularity—a measure of network segregation—is markedly reduced during episodic memory retrieval relative to closely matched analogical reasoning and visuospatial perception tasks. Individual differences in modularity were correlated with memory task performance, such that lower modularity levels were associated with a lower false alarm rate. Moreover, the FPCN and DMN showed significantly elevated coupling with each other during the memory task, which correlated with the global reduction in brain modularity. Both networks also strengthened their functional connectivity with the hippocampus during the memory task. Together, these results provide a novel demonstration that reduced modularity is conducive to effective episodic retrieval, which requires close collaboration between goal-directed control processes supported by the FPCN and internally oriented self-referential processing supported by the DMN. SIGNIFICANCE STATEMENT Modularity, an index of the degree to which nodes of a complex system are organized into discrete communities, has emerged as an important construct in the characterization of brain connectivity dynamics. We provide novel evidence that the modularity of the human brain is reduced when individuals engage in episodic memory retrieval, relative to other cognitive tasks, and that this state of lower modularity is associated with improved memory performance. 
We propose a neural systems mechanism for this finding where the nodes of the frontoparietal control network and default mode network strengthen their interaction with one another during episodic retrieval. Such across-network communication likely facilitates effective access to internally generated representations of past event knowledge. PMID:28242796
Episodic Memory Retrieval Benefits from a Less Modular Brain Network Organization.
Westphal, Andrew J; Wang, Siliang; Rissman, Jesse
2017-03-29
Most complex cognitive tasks require the coordinated interplay of multiple brain networks, but the act of retrieving an episodic memory may place especially heavy demands for communication between the frontoparietal control network (FPCN) and the default mode network (DMN), two networks that do not strongly interact with one another in many task contexts. We applied graph theoretical analysis to task-related fMRI functional connectivity data from 20 human participants and found that global brain modularity-a measure of network segregation-is markedly reduced during episodic memory retrieval relative to closely matched analogical reasoning and visuospatial perception tasks. Individual differences in modularity were correlated with memory task performance, such that lower modularity levels were associated with a lower false alarm rate. Moreover, the FPCN and DMN showed significantly elevated coupling with each other during the memory task, which correlated with the global reduction in brain modularity. Both networks also strengthened their functional connectivity with the hippocampus during the memory task. Together, these results provide a novel demonstration that reduced modularity is conducive to effective episodic retrieval, which requires close collaboration between goal-directed control processes supported by the FPCN and internally oriented self-referential processing supported by the DMN. SIGNIFICANCE STATEMENT Modularity, an index of the degree to which nodes of a complex system are organized into discrete communities, has emerged as an important construct in the characterization of brain connectivity dynamics. We provide novel evidence that the modularity of the human brain is reduced when individuals engage in episodic memory retrieval, relative to other cognitive tasks, and that this state of lower modularity is associated with improved memory performance. 
We propose a neural systems mechanism for this finding where the nodes of the frontoparietal control network and default mode network strengthen their interaction with one another during episodic retrieval. Such across-network communication likely facilitates effective access to internally generated representations of past event knowledge. Copyright © 2017 the authors 0270-6474/17/373523-09$15.00/0.
Tanimizu, Toshiyuki; Kenney, Justin W; Okano, Emiko; Kadoma, Kazune; Frankland, Paul W; Kida, Satoshi
2017-04-12
Social recognition memory is an essential and basic component of social behavior that is used to discriminate familiar and novel animals/humans. Previous studies have shown the importance of several brain regions for social recognition memories; however, the mechanisms underlying the consolidation of social recognition memory at the molecular and anatomic levels remain unknown. Here, we show a brain network necessary for the generation of social recognition memory in mice. A mouse genetic study showed that cAMP-responsive element-binding protein (CREB)-mediated transcription is required for the formation of social recognition memory. Importantly, significant inductions of the CREB target immediate-early genes c-fos and Arc were observed in the hippocampus (CA1 and CA3 regions), medial prefrontal cortex (mPFC), anterior cingulate cortex (ACC), and amygdala (basolateral region) when social recognition memory was generated. Pharmacological experiments using a microinfusion of the protein synthesis inhibitor anisomycin showed that protein synthesis in these brain regions is required for the consolidation of social recognition memory. These findings suggested that social recognition memory is consolidated through the activation of CREB-mediated gene expression in the hippocampus/mPFC/ACC/amygdala. Network analyses suggested that these four brain regions show functional connectivity with other brain regions and, more importantly, that the hippocampus functions as a hub to integrate brain networks and generate social recognition memory, whereas the ACC and amygdala are important for coordinating brain activity when social interaction is initiated by connecting with other brain regions. We have found that a brain network composed of the hippocampus/mPFC/ACC/amygdala is required for the consolidation of social recognition memory. SIGNIFICANCE STATEMENT Here, we identify brain networks composed of multiple brain regions for the consolidation of social recognition memory. 
We found that social recognition memory is consolidated through CREB-mediated gene expression in the hippocampus, medial prefrontal cortex, anterior cingulate cortex (ACC), and amygdala. Importantly, network analyses based on c-fos expression suggest that the functional connectivity of these four brain regions with other brain regions increases with time spent in social investigation, leading to the generation of brain networks that consolidate social recognition memory. Furthermore, our findings suggest that the hippocampus functions as a hub to integrate brain networks and generate social recognition memory, whereas the ACC and amygdala are important for coordinating brain activity when social interaction is initiated by connecting with other brain regions. Copyright © 2017 the authors 0270-6474/17/374103-14$15.00/0.
Neuroanatomy of episodic and semantic memory in humans: a brief review of neuroimaging studies.
García-Lázaro, Haydée G; Ramirez-Carmona, Rocio; Lara-Romero, Ruben; Roldan-Valadez, Ernesto
2012-01-01
One of the most basic functions in every individual and species is memory. Memory is the process by which information is saved as knowledge and retained for further use as needed. Learning is a neurobiological phenomenon by which we acquire certain information from the outside world and is a precursor to memory. Memory consists of the capacity to encode, store, consolidate, and retrieve information. Recently, memory has been defined as a network of connections whose function is primarily to facilitate the long-lasting persistence of learned environmental cues. In this review, we present a brief description of the current classifications of memory networks with a focus on episodic memory and its anatomical substrate. We also present a brief review of the anatomical basis of memory systems and the most commonly used neuroimaging methods to assess memory, illustrated with magnetic resonance imaging images depicting the hippocampus, temporal lobe, and hippocampal formation, which are the main brain structures participating in memory networks.
Cognitive Mapping Based on Conjunctive Representations of Space and Movement
Zeng, Taiping; Si, Bailu
2017-01-01
It is a challenge to build robust simultaneous localization and mapping (SLAM) system in dynamical large-scale environments. Inspired by recent findings in the entorhinal–hippocampal neuronal circuits, we propose a cognitive mapping model that includes continuous attractor networks of head-direction cells and conjunctive grid cells to integrate velocity information by conjunctive encodings of space and movement. Visual inputs from the local view cells in the model provide feedback cues to correct drifting errors of the attractors caused by the noisy velocity inputs. We demonstrate the mapping performance of the proposed cognitive mapping model on an open-source dataset of 66 km car journey in a 3 km × 1.6 km urban area. Experimental results show that the proposed model is robust in building a coherent semi-metric topological map of the entire urban area using a monocular camera, even though the image inputs contain various changes caused by different light conditions and terrains. The results in this study could inspire both neuroscience and robotic research to better understand the neural computational mechanisms of spatial cognition and to build robust robotic navigation systems in large-scale environments. PMID:29213234
Predicting Physical Time Series Using Dynamic Ridge Polynomial Neural Networks
Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir
2014-01-01
Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called Dynamic Ridge Polynomial Neural Network that combines the properties of higher order and recurrent neural networks for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks. PMID:25157950
Predicting physical time series using dynamic ridge polynomial neural networks.
Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir
2014-01-01
Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called Dynamic Ridge Polynomial Neural Network that combines the properties of higher order and recurrent neural networks for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks.
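The Lorenz attractor used above as one of the four benchmark signals can be generated with a standard fourth-order Runge-Kutta integration. The classical parameters (σ = 10, ρ = 28, β = 8/3), initial state, and step size below are conventional choices, not the paper's exact preprocessing:

```python
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One RK4 step of the Lorenz system."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(shift(state, k1, dt / 2))
    k3 = f(shift(state, k2, dt / 2))
    k4 = f(shift(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Generate the x-component as a chaotic benchmark-style time series.
state, series = (1.0, 1.0, 1.0), []
for _ in range(5000):
    state = lorenz_step(state, 0.01)
    series.append(state[0])
```

The resulting series oscillates irregularly between the two lobes of the attractor, which is what makes it a demanding prediction benchmark.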
Statistical Properties of Lorenz-like Flows, Recent Developments and Perspectives
NASA Astrophysics Data System (ADS)
Araujo, Vitor; Galatolo, Stefano; Pacifico, Maria José
We comment on the mathematical results about the statistical behavior of the Lorenz equations and their attractor, and more generally on the class of singular hyperbolic systems. The mathematical theory of such systems turned out to be surprisingly difficult. It is remarkable that a rigorous proof of the existence of the Lorenz attractor was presented only around the year 2000 with a computer-assisted proof together with an extension of the hyperbolic theory developed to encompass attractors robustly containing equilibria. We present some of the main results on the statistical behavior of such systems. We show that for attractors of three-dimensional flows, robust chaotic behavior is equivalent to the existence of certain hyperbolic structures, known as singular-hyperbolicity. These structures, in turn, are associated with the existence of physical measures: in low dimensions, robust chaotic behavior for flows ensures the existence of a physical measure. We then give more details on recent results on the dynamics of singular-hyperbolic (Lorenz-like) attractors: (1) there exists an invariant foliation whose leaves are forward contracted by the flow (and further properties which are useful to understand the statistical properties of the dynamics); (2) there exists a positive Lyapunov exponent at every orbit; (3) there is a unique physical measure whose support is the whole attractor and which is the equilibrium state with respect to the center-unstable Jacobian; (4) this measure is exact dimensional; (5) the induced measure on a suitable family of cross-sections has exponential decay of correlations for Lipschitz observables with respect to a suitable Poincaré return time map; (6) the hitting time associated with Lorenz-like attractors satisfies a logarithm law; (7) the geometric Lorenz flow satisfies the Almost Sure Invariance Principle (ASIP) and the Central Limit Theorem (CLT); (8) the rate of decay of large deviations for the volume measure on the ergodic basin of
a geometric Lorenz attractor is exponential; (9) a class of geometric Lorenz flows exhibits robust exponential decay of correlations; (10) all geometric Lorenz flows are rapidly mixing and their time-1 map satisfies both ASIP and CLT.
Changes in Brain Network Efficiency and Working Memory Performance in Aging
Stanley, Matthew L.; Simpson, Sean L.; Dagenbach, Dale; Lyday, Robert G.; Burdette, Jonathan H.; Laurienti, Paul J.
2015-01-01
Working memory is a complex psychological construct referring to the temporary storage and active processing of information. We used functional connectivity brain network metrics quantifying local and global efficiency of information transfer for predicting individual variability in working memory performance on an n-back task in both young (n = 14) and older (n = 15) adults. Individual differences in both local and global efficiency during the working memory task were significant predictors of working memory performance in addition to age (and an interaction between age and global efficiency). Decreases in local efficiency during the working memory task were associated with better working memory performance in both age cohorts. In contrast, increases in global efficiency were associated with much better working memory performance for young participants; however, increases in global efficiency were associated with a slight decrease in working memory performance for older participants. Individual differences in local and global efficiency during resting-state sessions were not significant predictors of working memory performance. Significant group whole-brain functional network decreases in local efficiency also were observed during the working memory task compared to rest, whereas no significant differences were observed in network global efficiency. These results are discussed in relation to recently developed models of age-related differences in working memory. PMID:25875001
Changes in brain network efficiency and working memory performance in aging.
Stanley, Matthew L; Simpson, Sean L; Dagenbach, Dale; Lyday, Robert G; Burdette, Jonathan H; Laurienti, Paul J
2015-01-01
Working memory is a complex psychological construct referring to the temporary storage and active processing of information. We used functional connectivity brain network metrics quantifying local and global efficiency of information transfer for predicting individual variability in working memory performance on an n-back task in both young (n = 14) and older (n = 15) adults. Individual differences in both local and global efficiency during the working memory task were significant predictors of working memory performance in addition to age (and an interaction between age and global efficiency). Decreases in local efficiency during the working memory task were associated with better working memory performance in both age cohorts. In contrast, increases in global efficiency were associated with much better working memory performance for young participants; however, increases in global efficiency were associated with a slight decrease in working memory performance for older participants. Individual differences in local and global efficiency during resting-state sessions were not significant predictors of working memory performance. Significant group whole-brain functional network decreases in local efficiency also were observed during the working memory task compared to rest, whereas no significant differences were observed in network global efficiency. These results are discussed in relation to recently developed models of age-related differences in working memory.
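The global efficiency metric used in these studies has a standard graph-theoretic definition: the average inverse shortest-path length over all node pairs. A minimal unweighted sketch follows; the actual fMRI analyses operate on thresholded functional connectivity matrices, which this toy example does not reproduce:

```python
from collections import deque

def shortest_paths(adj, src):
    """BFS distances from src in an unweighted graph (dict node -> set of neighbors)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(adj):
    """Average of 1/d(i, j) over all ordered node pairs; unreachable pairs contribute 0."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for u in nodes:
        d = shortest_paths(adj, u)
        total += sum(1.0 / d[v] for v in nodes if v != u and v in d)
    return total / (n * (n - 1))

# Complete graph on 4 nodes: every pair at distance 1, so efficiency is 1.0.
k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
# Path graph 0-1-2-3: longer routes lower the efficiency (13/18 here).
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```

Local efficiency is defined analogously, as the global efficiency of each node's neighborhood subgraph averaged over nodes.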
Newton, Allen T; Morgan, Victoria L; Rogers, Baxter P; Gore, John C
2011-10-01
Interregional correlations between blood oxygen level dependent (BOLD) magnetic resonance imaging (fMRI) signals in the resting state have been interpreted as measures of connectivity across the brain. Here we investigate whether such connectivity in the working memory and default mode networks is modulated by changes in cognitive load. Functional connectivity was measured in a steady-state verbal identity N-back task for three different conditions (N = 1, 2, and 3) as well as in the resting state. We found that as cognitive load increases, the functional connectivity within both the working memory and the default mode networks increases. To test whether functional connectivity between the working memory and the default mode networks changed, we constructed maps of functional connectivity to the working memory network as a whole and found that increasingly negative correlations emerged in a dorsal region of the posterior cingulate cortex. These results provide further evidence that low frequency fluctuations in BOLD signals reflect variations in neural activity and suggest an interaction between the default mode network and other cognitive networks. Copyright © 2010 Wiley-Liss, Inc.
Still searching for the engram.
Eichenbaum, Howard
2016-09-01
For nearly a century, neurobiologists have searched for the engram-the neural representation of a memory. Early studies showed that the engram is widely distributed both within and across brain areas and is supported by interactions among large networks of neurons. Subsequent research has identified engrams that support memory within dedicated functional systems for habit learning and emotional memory, but the engram for declarative memories has been elusive. Nevertheless, recent years have brought progress from molecular biological approaches that identify neurons and networks that are necessary and sufficient to support memory, and from recording approaches and population analyses that characterize the information coded by large neural networks. These new directions offer the promise of revealing the engrams for episodic and semantic memories.
SUSTAIN: a network model of category learning.
Love, Bradley C; Medin, Douglas L; Gureckis, Todd M
2004-04-01
SUSTAIN (Supervised and Unsupervised STratified Adaptive Incremental Network) is a model of how humans learn categories from examples. SUSTAIN initially assumes a simple category structure. If simple solutions prove inadequate and SUSTAIN is confronted with a surprising event (e.g., it is told that a bat is a mammal instead of a bird), SUSTAIN recruits an additional cluster to represent the surprising event. Newly recruited clusters are available to explain future events and can themselves evolve into prototypes-attractors-rules. SUSTAIN's discovery of category substructure is affected not only by the structure of the world but by the nature of the learning task and the learner's goals. SUSTAIN successfully extends category learning models to studies of inference learning, unsupervised learning, category construction, and contexts in which identification learning is faster than classification learning.
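SUSTAIN's cluster-recruitment principle can be caricatured in a few lines: predict with the nearest cluster, nudge it toward the item when its label is right, and recruit a new cluster on a surprising (misclassified) item. This sketch omits SUSTAIN's attentional tuning and activation functions; the learning rate, the data, and all names below are illustrative, not the published model:

```python
def nearest(clusters, x):
    """Index of the cluster whose center is closest to x (squared Euclidean)."""
    return min(range(len(clusters)),
               key=lambda i: sum((c - xi) ** 2 for c, xi in zip(clusters[i][0], x)))

def train(items, lr=0.1):
    """Nudge the winning cluster toward correctly classified items;
    recruit a new cluster for each surprising (misclassified) item."""
    clusters = []
    for x, label in items:
        if clusters:
            i = nearest(clusters, x)
            center, cluster_label = clusters[i]
            if cluster_label == label:
                clusters[i] = (tuple(c + lr * (xi - c) for c, xi in zip(center, x)),
                               label)
                continue
        clusters.append((x, label))   # surprising event -> new cluster
    return clusters

def predict(clusters, x):
    return clusters[nearest(clusters, x)][1]

data = [((0.0, 0.0), "bird"), ((0.1, 0.2), "bird"),
        ((1.0, 1.0), "mammal"), ((0.9, 1.1), "mammal")]
model = train(data)
```

Starting from a single cluster and growing only when surprised is what lets the model begin with a simple category structure and elaborate it on demand.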
Neiman, Tal; Loewenstein, Yonatan
2013-01-23
In free operant experiments, subjects alternate at will between targets that yield rewards stochastically. Behavior in these experiments is typically characterized by (1) an exponential distribution of stay durations, (2) matching of the relative time spent at a target to its relative share of the total number of rewards, and (3) adaptation after a change in the reward rates that can be very fast. The neural mechanism underlying these regularities is largely unknown. Moreover, current decision-making neural network models typically aim at explaining behavior in discrete-time experiments in which a single decision is made once in every trial, making these models hard to extend to the more natural case of free operant decisions. Here we show that a model based on attractor dynamics, in which transitions are induced by noise and preference is formed via covariance-based synaptic plasticity, can account for the characteristics of behavior in free operant experiments. We compare a specific instance of such a model, in which two recurrently excited populations of neurons compete for higher activity, to the behavior of rats responding on two levers for rewarding brain stimulation on a concurrent variable interval reward schedule (Gallistel et al., 2001). We show that the model is consistent with the rats' behavior, and in particular, with the observed fast adaptation to matching behavior. Further, we show that the neural model can be reduced to a behavioral model, and we use this model to deduce a novel "conservation law," which is consistent with the behavior of the rats.
Zhai, Tian-Ye; Shao, Yong-Cong; Xie, Chun-Ming; Ye, En-Mao; Zou, Feng; Fu, Li-Ping; Li, Wen-Jun; Chen, Gang; Chen, Guang-Yu; Zhang, Zheng-Guo; Li, Shi-Jiang; Yang, Zheng
2014-01-01
Converging evidence suggests that addiction can be considered a disease of aberrant learning and memory with impulsive decision-making. In the past decades, numerous studies have demonstrated that drug addiction involves multiple memory systems, such as classical conditioned drug memory, instrumental learning memory and habitual learning memory. However, most of these studies have focused on the contributions of non-declarative memory, and declarative memory has largely been neglected in research on addiction. Motivated by a recent finding that the hippocampus, a core region of declarative memory, biases the decision-making process by spreading associated reward values throughout memory based on past experiences, our present study focused on the hippocampus. By applying seed-based network analysis to resting-state functional MRI datasets with the hippocampus as the seed, we tested how the intrinsic hippocampal memory network is altered in drug addiction, and examined how the functional connectivity strength within the altered hippocampal network correlated with the behavioral index 'impulsivity'. Our results demonstrated that the HD group showed enhanced coherence between the hippocampus, which represents the declarative memory system, and the non-declarative reward-guided learning memory system, and also showed an attenuated intrinsic functional link between the hippocampus and the top-down control system, compared to the CN group. This alteration was further found to have behavioral significance for the index 'impulsivity' measured with the Barratt Impulsiveness Scale (BIS). These results provide insights into the mechanism of declarative memory underlying impulsive behavior in drug addiction. PMID:25008351
Electronic device aspects of neural network memories
NASA Technical Reports Server (NTRS)
Lambe, J.; Moopenn, A.; Thakoor, A. P.
1985-01-01
The basic issues related to the electronic implementation of the neural network model (NNM) for content addressable memories are examined. A brief introduction to the principles of the NNM is followed by an analysis of the information storage of the neural network in the form of a binary connection matrix and the recall capability of such matrix memories based on a hardware simulation study. In addition, materials and device architecture issues involved in the future realization of such networks in VLSI-compatible ultrahigh-density memories are considered. A possible space application of such devices would be in the area of large-scale information storage without mechanical devices.
Memory Network For Distributed Data Processors
NASA Technical Reports Server (NTRS)
Bolen, David; Jensen, Dean; Millard, ED; Robinson, Dave; Scanlon, George
1992-01-01
The Universal Memory Network (UMN) is a modular, digital data-communication system enabling computers with differing bus architectures to share 32-bit-wide data between locations up to 3 km apart with less than one millisecond of latency. It makes it possible to design sophisticated real-time and near-real-time data-processing systems without data-transfer "bottlenecks". This enterprise network permits transmission of a volume of data equivalent to an encyclopedia each second. Facilities benefiting from the Universal Memory Network include telemetry stations, simulation facilities, power plants, and large laboratories, or any facility sharing very large volumes of data. The main hub of the UMN is a reflection center that connects smaller hubs called Shared Memory Interfaces.
Identifying major depressive disorder using Hurst exponent of resting-state brain networks.
Wei, Maobin; Qin, Jiaolong; Yan, Rui; Li, Haoran; Yao, Zhijian; Lu, Qing
2013-12-30
Resting-state functional magnetic resonance imaging (fMRI) studies of major depressive disorder (MDD) have revealed abnormalities of functional connectivity within or among the resting-state networks. They provide valuable insight into the pathological mechanisms of depression. However, few reports have examined the "long-term memory" of fMRI signals. This study investigated the "long-term memory" of resting-state networks by calculating their Hurst exponents to identify depressed patients from healthy controls. Resting-state networks were extracted from fMRI data of 20 MDD and 20 matched healthy control subjects. The Hurst exponent of each network was estimated by rescaled range (R/S) analysis for further discriminant analysis. 95% of depressed patients and 85% of healthy controls were correctly classified by a Support Vector Machine with an accuracy of 90%. The right fronto-parietal and default mode network constructed a deficit network (lower memory and more irregularity in MDD), while the left fronto-parietal, ventromedial prefrontal and salience network belonged to an excess network (longer memory in MDD), suggesting these dysfunctional networks may be related to a portion of the complex of emotional and cognitive disturbances. The abnormal "long-term memory" of resting-state networks associated with depression may provide a new possibility towards the exploration of the pathophysiological mechanisms of MDD. © 2013 Elsevier Ireland Ltd. All rights reserved.
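The Hurst exponent estimation referred to above can be sketched with a basic rescaled range (R/S) procedure: compute the range of cumulative mean-adjusted deviations divided by the standard deviation within windows of increasing size, then take the slope of log(R/S) against log(window size). The window sizes and test signals below are illustrative, not the study's preprocessing pipeline:

```python
import numpy as np

def hurst_rs(series, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate the Hurst exponent by rescaled range (R/S) analysis."""
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            dev = np.cumsum(w - w.mean())      # cumulative mean-adjusted deviations
            r = dev.max() - dev.min()          # range
            s = w.std()                        # scale
            if s > 0:
                rs_values.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_values)))
    slope, _ = np.polyfit(log_n, log_rs, 1)    # H is the log-log slope
    return slope

rng = np.random.default_rng(42)
h_noise = hurst_rs(rng.standard_normal(2048))            # white noise: H near 0.5
h_walk = hurst_rs(np.cumsum(rng.standard_normal(2048)))  # random walk: strongly persistent
```

H above 0.5 indicates long memory (persistence), H below 0.5 anti-persistence, which is the property the study compares across resting-state networks.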
On the control of the chaotic attractors of the 2-d Navier-Stokes equations.
Smaoui, Nejib; Zribi, Mohamed
2017-03-01
The control problem of the chaotic attractors of the two dimensional (2-d) Navier-Stokes (N-S) equations is addressed in this paper. First, the Fourier Galerkin method based on a reduced-order modelling approach developed by Chen and Price is applied to the 2-d N-S equations to construct a fifth-order system of nonlinear ordinary differential equations (ODEs). The dynamics of the fifth-order system were studied by analyzing the system's attractor for different values of the Reynolds number, Re. Then, control laws are proposed to drive the states of the ODE system to a desired attractor. Finally, an adaptive controller is designed to synchronize two reduced order ODE models having different Reynolds numbers and starting from different initial conditions. Simulation results indicate that the proposed control schemes work well.
NASA Technical Reports Server (NTRS)
Nese, Jon M.
1989-01-01
A dynamical systems approach is used to quantify the instantaneous and time-averaged predictability of a low-order moist general circulation model. Specifically, the effects on predictability of incorporating an active ocean circulation, implementing annual solar forcing, and asynchronously coupling the ocean and atmosphere are evaluated. The predictability and structure of the model attractors is compared using the Lyapunov exponents, the local divergence rates, and the correlation, fractal, and Lyapunov dimensions. The Lyapunov exponents measure the average rate of growth of small perturbations on an attractor, while the local divergence rates quantify phase-spatial variations of predictability. These local rates are exploited to efficiently identify and distinguish subtle differences in predictability among attractors. In addition, the predictability of monthly averaged and yearly averaged states is investigated by using attractor reconstruction techniques.
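The Lyapunov exponents used above measure the average exponential growth rate of small perturbations on an attractor. For a one-dimensional map they reduce to the orbit average of log|f'(x)|; the logistic map at r = 4, whose exact exponent is ln 2, makes a compact numerical check (a sketch of the general technique, not the circulation model used in the study):

```python
import math

def lyapunov_logistic(r=4.0, x0=0.2, n_iter=100_000, discard=1000):
    """Average log-derivative along an orbit of the logistic map
    x -> r * x * (1 - x); for r = 4 the exact value is ln 2."""
    x = x0
    for _ in range(discard):               # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        deriv = abs(r * (1 - 2 * x))       # |f'(x)|
        total += math.log(max(deriv, 1e-300))  # guard against underflow at x = 1/2
        x = r * x * (1 - x)
    return total / n_iter

lam = lyapunov_logistic()
```

A positive value confirms exponential divergence of nearby trajectories; the reciprocal 1/λ sets the timescale over which predictability is lost.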
Long-time behavior for suspension bridge equations with time delay
NASA Astrophysics Data System (ADS)
Park, Sun-Hye
2018-04-01
In this paper, we consider suspension bridge equations with time delay of the form u_{tt}(x,t) + Δ^2 u(x,t) + k u^+(x,t) + a_0 u_t(x,t) + a_1 u_t(x, t-τ) + f(u(x,t)) = g(x). Many researchers have studied well-posedness, decay rates of energy, and the existence of attractors for suspension bridge equations without delay effects. But, as far as we know, there is no work on suspension bridge equations with time delay. In addition, there are not many studies on attractors for other delayed systems. Thus we first establish well-posedness for suspension bridge equations with time delay, and then show the existence of global attractors and the finite dimensionality of the attractors by constructing energy functionals related to the norm of the phase space of our problem.
Design and implementation of grid multi-scroll fractional-order chaotic attractors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Liping, E-mail: lip-chenhut@126.com; Pan, Wei; Wu, Ranchao
2016-08-15
This paper proposes a novel approach for generating multi-scroll chaotic attractors in multiple directions for fractional-order (FO) systems. The stair nonlinear function series and the saturated nonlinear function are combined to extend the equilibrium points with index 2 in a new FO linear system. With the help of the stability theory of FO systems, the stability of its equilibrium points is analyzed, and the chaotic behaviors are validated through phase portraits, Lyapunov exponents, and Poincaré sections. Choosing the order 0.96 as an example, a circuit for generating 2-D grid multi-scroll chaotic attractors is designed, and 2-D grid FO attractors with up to 9 × 9 scrolls are observed. Numerical simulations and circuit experimental results show that the method is feasible and the designed circuit is correct.
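The saturated nonlinear function series used to multiply the equilibria can be sketched in a few lines. The slope, plateau spacing, and segment counts below are illustrative assumptions, not the parameters of the paper, and the fractional-order dynamics themselves are omitted:

```python
def sat(x, delta=0.5):
    """Basic saturated function: linear with slope 1/delta on
    [-delta, delta], clipped to the plateaus -1 and +1 outside."""
    return max(-1.0, min(1.0, x / delta))

def sat_series(x, p=2, q=2, spacing=2.0, delta=0.5):
    """Sum of shifted saturated functions: a staircase with p plateaus
    on the negative side and q on the positive side. Substituting such
    a staircase into a linear system creates a grid of equilibria,
    each of which can seed one scroll of the attractor."""
    return sum(sat(x - i * spacing, delta) for i in range(-p, q + 1))

# sampled at plateau centres, the staircase steps by 2 per level
print([sat_series(v) for v in (-5.0, -3.0, -1.0, 1.0, 3.0, 5.0)])
# → [-5.0, -3.0, -1.0, 1.0, 3.0, 5.0]
```

Each flat segment of `sat_series` pins one equilibrium of the driven linear system, which is how the number of scrolls in each direction is controlled.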
Disease-induced mortality in density-dependent discrete-time S-I-S epidemic models.
Franke, John E; Yakubu, Abdul-Aziz
2008-12-01
The dynamics of simple discrete-time epidemic models without disease-induced mortality are typically characterized by global transcritical bifurcation. We prove that in corresponding models with disease-induced mortality, a tiny number of infectious individuals can drive an otherwise persistent population to extinction. Our model with disease-induced mortality supports multiple attractors. In addition, we use a Ricker recruitment function in an SIS model and obtain a three-component discrete Hopf (Neimark-Sacker) cycle attractor coexisting with a fixed-point attractor. The basin boundaries of the coexisting attractors are fractal in nature, and the example exhibits sensitive dependence of the long-term disease dynamics on initial conditions. Furthermore, we show that in contrast to corresponding models without disease-induced mortality, the disease-free state dynamics do not drive the disease dynamics.
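The Ricker recruitment function at the heart of this model can be explored with a short iteration. The parameter values below are hypothetical and chosen only to show the stable and oscillatory regimes that underlie the richer SIS dynamics; this is not the full epidemic model of the paper.

```python
import math

def ricker(n, r, k=1.0):
    """Ricker recruitment: N -> N * exp(r * (1 - N/K))."""
    return n * math.exp(r * (1.0 - n / k))

def iterate(n0, r, steps=500):
    n = n0
    for _ in range(steps):
        n = ricker(n, r)
    return n

# For r < 2 the positive fixed point N = K is stable ...
print(iterate(0.3, 1.5))   # ≈ 1.0
# ... while larger r produces period-doubling and chaos, the route
# by which coexisting attractors can arise once infection is added.
print(iterate(0.3, 2.7))
```

Sensitivity to initial conditions in the chaotic regime is what makes the fractal basin boundaries described in the abstract possible.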
Understanding health system reform - a complex adaptive systems perspective.
Sturmberg, Joachim P; O'Halloran, Di M; Martin, Carmel M
2012-02-01
Everyone wants a sustainable, well-functioning health system. However, this notion has different meanings for policy makers and funders than for clinicians and patients: the former perceive public policy and economic constraints, the latter clinical or patient-centred strategies, as the means to achieving a desired outcome. Theoretical development and critical analysis of a complex health system model. We introduce the concept of the health care vortex as a metaphor by which to understand the complex adaptive nature of health systems, and the degree to which their behaviour is predetermined by their 'shared values' or attractors. We contrast the likely functions and outcomes of a health system with a people-centred attractor and one with a financial attractor. This analysis suggests a shift in the system's attractor is fundamental to progressing health reform thinking. © 2012 Blackwell Publishing Ltd.
On the control of the chaotic attractors of the 2-d Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Smaoui, Nejib; Zribi, Mohamed
2017-03-01
The control problem of the chaotic attractors of the two-dimensional (2-d) Navier-Stokes (N-S) equations is addressed in this paper. First, the Fourier Galerkin method, based on a reduced-order modelling approach developed by Chen and Price, is applied to the 2-d N-S equations to construct a fifth-order system of nonlinear ordinary differential equations (ODEs). The dynamics of the fifth-order system are studied by analyzing the system's attractor for different values of the Reynolds number, Re. Then, control laws are proposed to drive the states of the ODE system to a desired attractor. Finally, an adaptive controller is designed to synchronize two reduced-order ODE models having different Reynolds numbers and starting from different initial conditions. Simulation results indicate that the proposed control schemes work well.
Predicting atmospheric states from local dynamical properties of the underlying attractor
NASA Astrophysics Data System (ADS)
Faranda, Davide; Rodrigues, David; Alvarez-Castro, M. Carmen; Messori, Gabriele; Yiou, Pascal
2017-04-01
Mid-latitude flows are characterized by chaotic dynamics and recurring patterns hinting at the existence of an atmospheric attractor. In 1963 Lorenz described this object as "the collection of all states that the system can assume or approach again and again, as opposed to those that it will ultimately avoid" and analyzed a low-dimensional system describing a convective dynamics whose attractor has the shape of a butterfly. Since then, many studies have tried to find the equivalent of the Lorenz butterfly in the complex atmospheric dynamics. Most of these studies focused on determining the average dimension D of the attractor, i.e. the number of degrees of freedom sufficient to describe the atmospheric circulation. However, obtaining reliable estimates of D has proved challenging. Moreover, D does not provide information on transient atmospheric motions, such as those leading to weather extremes. Using recent developments in dynamical systems theory, we show that such motions can be classified through instantaneous rather than average properties of the attractor. The instantaneous properties are uniquely determined by the instantaneous dimension and stability. Their extreme values correspond to specific atmospheric patterns and match extreme weather occurrences. We further show the existence of a significant correlation between the time series of instantaneous stability and dimension and the mean spread of sea-level pressure fields in an operational ensemble weather forecast at lead times of over two weeks. Instantaneous properties of the attractor therefore provide an efficient way of evaluating and informing operational weather forecasts.
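Instantaneous (local) dimensions of this kind are typically estimated via extreme value theory: distances from a reference state are log-transformed, and the local dimension is the reciprocal of the mean exceedance over a high threshold. A minimal sketch on the Hénon map, which stands in here for atmospheric data as an assumption for illustration:

```python
import math

def henon_orbit(n, a=1.4, b=0.3, x=0.1, y=0.1, transient=1000):
    """Generate n points on the Hénon attractor after a transient."""
    pts = []
    for i in range(n + transient):
        x, y = 1.0 - a * x * x + y, b * x
        if i >= transient:
            pts.append((x, y))
    return pts

def local_dimension(pts, ref, quantile=0.98):
    """EVT estimate of the local attractor dimension at `ref`:
    d = 1 / (mean excess of g = -log(distance) above a high threshold)."""
    g = sorted(-math.log(math.hypot(p[0] - ref[0], p[1] - ref[1]))
               for p in pts if p != ref)
    thresh = g[int(quantile * len(g))]
    excess = [v - thresh for v in g if v > thresh]
    return 1.0 / (sum(excess) / len(excess))

pts = henon_orbit(20000)
# a positive local dimension, of the order of the attractor dimension
print(local_dimension(pts, pts[0]))
```

The estimate fluctuates from point to point; it is precisely these fluctuations that the abstract links to specific atmospheric patterns and extremes.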
Concentration and limit behaviors of stationary measures
NASA Astrophysics Data System (ADS)
Huang, Wen; Ji, Min; Liu, Zhenxin; Yi, Yingfei
2018-04-01
In this paper, we study limit behaviors of stationary measures of the Fokker-Planck equations associated with a system of ordinary differential equations perturbed by a class of multiplicative noise including additive white noise case. As the noises are vanishing, various results on the invariance and concentration of the limit measures are obtained. In particular, we show that if the noise perturbed systems admit a uniform Lyapunov function, then the stationary measures form a relatively sequentially compact set whose weak∗-limits are invariant measures of the unperturbed system concentrated on its global attractor. In the case that the global attractor contains a strong local attractor, we further show that there exists a family of admissible multiplicative noises with respect to which all limit measures are actually concentrated on the local attractor; and on the contrary, in the presence of a strong local repeller in the global attractor, there exists a family of admissible multiplicative noises with respect to which no limit measure can be concentrated on the local repeller. Moreover, we show that if there is a strongly repelling equilibrium in the global attractor, then limit measures with respect to typical families of multiplicative noises are always concentrated away from the equilibrium. As applications of these results, an example of stochastic Hopf bifurcation and an example with non-decomposable ω-limit sets are provided. Our study is closely related to the problem of noise stability of compact invariant sets and invariant measures of the unperturbed system.
Altered Effective Connectivity of Hippocampus-Dependent Episodic Memory Network in mTBI Survivors
2016-01-01
Traumatic brain injuries (TBIs) are generally recognized to affect episodic memory. However, less is known regarding how external force alters the way functionally connected brain structures of the episodic memory system interact. To address this issue, we adopted an effective-connectivity-based analysis, namely the multivariate Granger causality approach, to explore causal interactions within the brain network of interest. Results showed that TBI induced increased bilateral and decreased ipsilateral effective connectivity in the episodic memory network in comparison with that of normal controls. Moreover, the left anterior superior temporal gyrus (aSTG, the concept forming hub), left hippocampus (the personal experience binding hub), and left parahippocampal gyrus (the contextual association hub) were no longer network hubs in TBI survivors, who compensated for hippocampal deficits by relying more on the right hippocampus (underlying perceptual memory) and the right medial frontal gyrus (MeFG) in the anterior prefrontal cortex (PFC). We postulated that the overrecruitment of the right anterior PFC caused dysfunction of the strategic component of episodic memory, leading to the deteriorating episodic memory observed in mTBI survivors. Our findings also suggested that the pattern of brain network changes in TBI survivors has similar functional consequences to normal aging. PMID:28074162
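Granger causality of the kind used here asks whether the past of one signal improves prediction of another beyond that signal's own past. A pairwise, one-lag sketch on synthetic data with an assumed unidirectional coupling (the study itself uses the multivariate form; numpy is assumed available):

```python
import numpy as np

def granger_1lag(x, y):
    """One-lag Granger statistic: log ratio of residual variances
    between predicting y[t] from y[t-1] only (restricted) and from
    (y[t-1], x[t-1]) (full). Larger values => x helps predict y."""
    yt, y1, x1 = y[1:], y[:-1], x[:-1]
    ones = np.ones_like(y1)
    # restricted model: y[t] ~ c + a*y[t-1]
    A_r = np.column_stack([ones, y1])
    res_r = yt - A_r @ np.linalg.lstsq(A_r, yt, rcond=None)[0]
    # full model: y[t] ~ c + a*y[t-1] + b*x[t-1]
    A_f = np.column_stack([ones, y1, x1])
    res_f = yt - A_f @ np.linalg.lstsq(A_f, yt, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):          # x drives y with a one-step delay
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

print(granger_1lag(x, y) > granger_1lag(y, x))  # True
```

The directional asymmetry of the statistic is what lets effective-connectivity analyses assign arrows, rather than mere correlations, between regions.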
Vecchio, F; Miraglia, F; Quaranta, D; Granata, G; Romanello, R; Marra, C; Bramanti, P; Rossini, P M
2016-03-01
Functional brain abnormalities, including memory loss, are found to be associated with pathological changes in connectivity and network neural structures. Alzheimer's disease (AD) interferes with memory formation from the molecular level, to synaptic functions, to neural network organization. Here, we determined whether brain connectivity of resting-state networks correlates with memory in patients affected by AD and in subjects with mild cognitive impairment (MCI). One hundred and forty-four subjects were recruited: 70 AD (MMSE Mini Mental State Evaluation 21.4), 50 MCI (MMSE 25.2) and 24 healthy subjects (MMSE 29.8). An undirected and weighted cortical brain network was built to evaluate graph core measures and obtain Small World parameters. eLORETA lagged linear connectivity, as extracted from electroencephalogram (EEG) signals, was used to weight the network. A high statistical correlation between Small World and memory performance was found: namely, the higher the Small World characteristic in the EEG gamma frequency band during the resting state, the better the performance in short-term memory as evaluated by the digit span tests. Such a Small World pattern might represent a biomarker of working memory impairment in older people in both physiological and pathological conditions. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
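The two graph measures behind a Small World characterization, clustering coefficient and characteristic path length, can be computed directly. A sketch on a toy ring lattice (a stand-in for the weighted EEG network, which would additionally carry connectivity weights; this is an illustrative assumption):

```python
from collections import deque

def clustering(adj):
    """Average clustering coefficient: the fraction of each node's
    neighbour pairs that are themselves connected, averaged over nodes."""
    total = 0.0
    for v, nbrs in adj.items():
        nbrs = list(nbrs)
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def path_length(adj):
    """Characteristic path length: mean shortest-path distance, via BFS."""
    dists = []
    for src in adj:
        seen = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen[v] = seen[u] + 1
                    q.append(v)
        dists += [d for node, d in seen.items() if node != src]
    return sum(dists) / len(dists)

# ring lattice of 12 nodes, each linked to 2 neighbours on either side
n = 12
adj = {i: {(i + d) % n for d in (-2, -1, 1, 2)} for i in range(n)}
print(clustering(adj), path_length(adj))  # high clustering, modest path length
```

A network is "small-world" when its clustering stays high, as in a lattice, while its path length is short, as in a random graph; the Small World index compares both quantities against random-graph baselines.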
Encoding mechano-memories in filamentous-actin networks
NASA Astrophysics Data System (ADS)
Majumdar, Sayantan; Foucard, Louis; Levine, Alex; Gardel, Margaret L.
History-dependent adaptation is a central feature of learning and memory. Incorporating such features into 'adaptable materials' that can modify their mechanical properties in response to external cues remains an outstanding challenge in materials science. Here, we study a novel mechanism of mechano-memory in cross-linked F-actin networks, the essential determinants of the mechanical behavior of eukaryotic cells. We find that the non-linear mechanical response of such networks can be reversibly programmed through induction of mechano-memories. In particular, the direction, magnitude, and duration of previously applied shear stresses can be encoded into the network architecture. The 'memory' of the forcing history is long-lived, but it can be erased by force applied in the opposite direction. These results demonstrate that F-actin networks can encode analog read-write mechano-memories which can be used for adaptation to mechanical stimuli. We further show that the mechano-memory arises from changes in the nematic order of the constituent filaments. Our results suggest a new mechanism of mechanical sensing in eukaryotic cells and provide a strategy for designing a novel class of materials. S.M. acknowledges U. Chicago MRSEC for support through a Kadanoff-Rice fellowship.
Rizvi, Sanam Shahla; Chung, Tae-Sun
2010-01-01
Flash memory has become a widespread storage medium for modern wireless devices because of its effective characteristics like non-volatility, small size, light weight, fast access speed, shock resistance, high reliability and low power consumption. Sensor nodes are highly resource-constrained in terms of limited processing speed, runtime memory, persistent storage, communication bandwidth and finite energy. Therefore, for wireless sensor networks supporting sense, store, merge and send schemes, an efficient and reliable file system is highly required with consideration of sensor node constraints. In this paper, we propose a novel log-structured external NAND flash memory based file system, called Proceeding to Intelligent service oriented memorY Allocation for flash based data centric Sensor devices in wireless sensor networks (PIYAS). This is the extended version of our previously proposed PIYA [1]. The main goals of the PIYAS scheme are to achieve instant mounting and a reduced SRAM footprint by keeping the memory mapping information very small, and to provide high query response throughput by allocating memory to the sensor data according to network business rules. The scheme intelligently samples and stores the raw data and provides high in-network data availability by keeping the aggregate data for a longer period of time than any other scheme has done before. We propose effective garbage collection and wear-leveling schemes as well. The experimental results show that PIYAS is an optimized memory management scheme allowing high performance for wireless sensor networks.
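A log-structured design of this kind never overwrites flash pages in place: every update is appended, and garbage collection reclaims space by copying only live records forward. A toy sketch of the idea (PIYAS itself is far more elaborate; the class, method names, and policy below are invented for illustration):

```python
class TinyLogStore:
    """Toy log-structured key-value store: writes are append-only,
    as on NAND flash, and garbage collection rewrites only the
    latest (live) version of each key into a compacted log."""

    def __init__(self):
        self.log = []                    # sequence of (key, value) records

    def put(self, key, value):
        self.log.append((key, value))    # never overwrite in place

    def get(self, key):
        for k, v in reversed(self.log):  # newest record wins
            if k == key:
                return v
        return None

    def gc(self):
        live = {}
        for k, v in self.log:            # last write per key survives
            live[k] = v
        self.log = list(live.items())    # stale versions are reclaimed

store = TinyLogStore()
store.put("t1", 20.5)
store.put("t1", 21.0)   # the first "t1" record is now dead space
store.put("t2", 19.8)
store.gc()
print(len(store.log), store.get("t1"))  # 2 21.0
```

Append-only writes avoid costly in-place page updates and, combined with wear-leveling, spread erasures evenly across flash blocks.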
K-chameleon and the coincidence problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei Hao; Cai Ronggen; Institute of Theoretical Physics, Chinese Academy of Sciences, P.O. Box 2735, Beijing 100080
2005-02-15
In this paper we present a hybrid model of k-essence and chameleon, named k-chameleon. In this model, due to the chameleon mechanism, a direct strong coupling between the k-chameleon field and matter (cold dark matter and baryons) is allowed. In the radiation-dominated epoch, the interaction between the k-chameleon field and background matter can be neglected; the behavior of the k-chameleon is therefore the same as that of the ordinary k-essence. After the onset of matter domination, the strong coupling between the k-chameleon and matter dramatically changes the result of the ordinary k-essence. We find that during the matter-dominated epoch, only two kinds of attractors may exist: one is the familiar K attractor and the other is a completely new one, dubbed the C attractor. Once the Universe is attracted into the C attractor, the fractional energy densities of the k-chameleon Ω_φ and dust matter Ω_m are fixed and comparable, and the Universe undergoes a power-law accelerated expansion. One can adjust the model so that the K attractor does not appear. Thus, the k-chameleon model provides a natural solution to the cosmological coincidence problem.
Intermittent control of coexisting attractors.
Liu, Yang; Wiercigroch, Marian; Ing, James; Pavlovskaia, Ekaterina
2013-06-28
This paper proposes a new control method applicable for a class of non-autonomous dynamical systems that naturally exhibit coexisting attractors. The central idea is based on knowledge of a system's basins of attraction, with control actions being applied intermittently in the time domain when the actual trajectory satisfies a proximity constraint with regard to the desired trajectory. This intermittent control uses an impulsive force to perturb one of the system attractors in order to switch the system response onto another attractor. This is carried out by bringing the perturbed state into the desired basin of attraction. The method has been applied to control both smooth and non-smooth systems, with the Duffing and impact oscillators used as examples. The strength of the intermittent control force is also considered, and a constrained intermittent control law is introduced to investigate the effect of limited control force on the efficiency of the controller. It is shown that increasing the duration of the control action and/or the number of control actuations allows one to successfully switch between the stable attractors using a lower control force. Numerical and experimental results are presented to demonstrate the effectiveness of the proposed method.
Hidden hyperchaos and electronic circuit application in a 5D self-exciting homopolar disc dynamo
NASA Astrophysics Data System (ADS)
Wei, Zhouchao; Moroz, Irene; Sprott, J. C.; Akgul, Akif; Zhang, Wei
2017-03-01
We report on the finding of hidden hyperchaos in a 5D extension to a known 3D self-exciting homopolar disc dynamo. The hidden hyperchaos is identified through three positive Lyapunov exponents under the condition that the proposed model has just two stable equilibrium states in certain regions of parameter space. The new 5D hyperchaotic self-exciting homopolar disc dynamo has multiple attractors, including point attractors, limit cycles, quasi-periodic dynamics, hidden chaos or hyperchaos, as well as coexisting attractors. We use numerical integrations to create the phase plane trajectories, produce bifurcation diagrams, and compute Lyapunov exponents to verify the hidden attractors. Because no unstable equilibria exist in two parameter regions, the system exhibits multistability with six kinds of complex dynamic behaviors. To the best of our knowledge, this feature has not been previously reported in any other high-dimensional system. Moreover, the 5D hyperchaotic system has been simulated using a specially designed electronic circuit and viewed on an oscilloscope, thereby confirming the results of the numerical integrations. Both the Matlab and oscilloscope outputs produce similar phase portraits. Such real-time implementations represent a new type of hidden attractor with important consequences for engineering applications.
Sourcing dark matter and dark energy from α-attractors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, Swagat S.; Sahni, Varun; Shtanov, Yuri, E-mail: swagat@iucaa.in, E-mail: varun@iucaa.in, E-mail: shtanov@bitp.kiev.ua
In [1], Kallosh and Linde drew attention to a new family of superconformal inflationary potentials, subsequently called α-attractors [2]. The α-attractor family can interpolate between a large class of inflationary models. It also has an important theoretical underpinning within the framework of supergravity. We demonstrate that the α-attractors have an even wider appeal, since they may describe dark matter and perhaps even dark energy. The dark matter associated with the α-attractors, which we call α-dark matter (αDM), shares many of the attractive features of fuzzy dark matter, with V(φ) = ½m²φ², while having none of its drawbacks. Like fuzzy dark matter, αDM can have a large Jeans length, which could resolve the cusp-core and substructure problems faced by standard cold dark matter. αDM also has an appealing tracker property which enables it to converge to the late-time dark matter asymptote, ⟨w⟩ ≅ 0, from a wide range of initial conditions. It thus avoids the enormous fine-tuning problems faced by the m²φ² potential in describing dark matter.
Mnemonic convergence in social networks: The emergent properties of cognition at a collective level
Coman, Alin; Momennejad, Ida; Drach, Rae D.; Geana, Andra
2016-01-01
The development of shared memories, beliefs, and norms is a fundamental characteristic of human communities. These emergent outcomes are thought to occur owing to a dynamic system of information sharing and memory updating, which fundamentally depends on communication. Here we report results on the formation of collective memories in laboratory-created communities. We manipulated conversational network structure in a series of real-time, computer-mediated interactions in fourteen 10-member communities. The results show that mnemonic convergence, measured as the degree of overlap among community members’ memories, is influenced by both individual-level information-processing phenomena and by the conversational social network structure created during conversational recall. By studying laboratory-created social networks, we show how large-scale social phenomena (i.e., collective memory) can emerge out of microlevel local dynamics (i.e., mnemonic reinforcement and suppression effects). The social-interactionist approach proposed herein points to optimal strategies for spreading information in social networks and provides a framework for measuring and forging collective memories in communities of individuals. PMID:27357678
Generalized memory associativity in a network model for the neuroses
NASA Astrophysics Data System (ADS)
Wedemann, Roseli S.; Donangelo, Raul; de Carvalho, Luís A. V.
2009-03-01
We review concepts introduced in earlier work, where a neural network mechanism describes some mental processes in neurotic pathology and psychoanalytic working-through, as associative memory functioning, according to the findings of Freud. We developed a complex network model, where modules corresponding to sensorial and symbolic memories interact, representing unconscious and conscious mental processes. The model illustrates Freud's idea that consciousness is related to symbolic and linguistic memory activity in the brain. We have introduced a generalization of the Boltzmann machine to model memory associativity. Model behavior is illustrated with simulations and some of its properties are analyzed with methods from statistical mechanics.
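Associative memory of the kind generalized in this work is classically modeled by a Hopfield network, in which stored patterns are attractors of the dynamics; the Boltzmann-machine generalization adds stochastic units, which this deterministic sketch omits:

```python
def store(patterns):
    """Hebbian weights: w[i][j] = (1/n) * sum over patterns of
    p[i]*p[j], with a zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, sweeps=5):
    """Deterministic asynchronous updates: the state descends into
    the nearest stored attractor (associative retrieval)."""
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

pattern = [1, 1, -1, -1, 1, -1, 1, -1]
w = store([pattern])
cue = list(pattern)
cue[0] = -cue[0]                  # corrupt one bit of the memory
print(recall(w, cue) == pattern)  # True: the full pattern is retrieved
```

Retrieval from a corrupted cue is the "memory associativity" being generalized; in the network model above it corresponds to a trajectory falling into a stored attractor's basin.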
Nanophotonic rare-earth quantum memory with optically controlled retrieval
NASA Astrophysics Data System (ADS)
Zhong, Tian; Kindem, Jonathan M.; Bartholomew, John G.; Rochman, Jake; Craiciu, Ioana; Miyazono, Evan; Bettinelli, Marco; Cavalli, Enrico; Verma, Varun; Nam, Sae Woo; Marsili, Francesco; Shaw, Matthew D.; Beyer, Andrew D.; Faraon, Andrei
2017-09-01
Optical quantum memories are essential elements in quantum networks for long-distance distribution of quantum entanglement. Scalable development of quantum network nodes requires on-chip qubit storage functionality with control of the readout time. We demonstrate a high-fidelity nanophotonic quantum memory based on a mesoscopic neodymium ensemble coupled to a photonic crystal cavity. The nanocavity enables >95% spin polarization for efficient initialization of the atomic frequency comb memory and time bin-selective readout through an enhanced optical Stark shift of the comb frequencies. Our solid-state memory is integrable with other chip-scale photon source and detector devices for multiplexed quantum and classical information processing at the network nodes.
Cortex and Memory: Emergence of a New Paradigm
ERIC Educational Resources Information Center
Fuster, Joaquin M.
2009-01-01
Converging evidence from humans and nonhuman primates is obliging us to abandon conventional models in favor of a radically different, distributed-network paradigm of cortical memory. Central to the new paradigm is the concept of memory network or cognit--that is, a memory or an item of knowledge defined by a pattern of connections between neuron…
Still searching for the engram
Eichenbaum, Howard
2016-01-01
For nearly a century neurobiologists have searched for the engram - the neural representation of a memory. Early studies showed that the engram is widely distributed both within and across brain areas and is supported by interactions among large networks of neurons. Subsequent research has identified engrams that support memory within dedicated functional systems for habit learning and emotional memory, but the engram for declarative memories has been elusive. Nevertheless, recent years have brought progress from molecular biological approaches that identify neurons and networks that are necessary and sufficient to support memory, and from recording approaches and population analyses that characterize the information coded by large neural networks. These new directions offer the promise of revealing the engrams for episodic and semantic memories. PMID:26944423
Poly(Capro-Lactone) Networks as Actively Moving Polymers
NASA Astrophysics Data System (ADS)
Meng, Yuan
Shape-memory polymers (SMPs), as a subset of actively moving polymers, form an exciting class of materials that can store and recover elastic deformation energy upon application of an external stimulus. Although engineering of SMPs has now led to robust materials that can memorize multiple temporary shapes and can be triggered by various stimuli such as heat, light, moisture, or applied magnetic fields, further commercialization of SMPs is still constrained by the material's inability to store large elastic energy, as well as its inherent one-way shape-change nature. This thesis develops a series of model semi-crystalline shape-memory networks that exhibit ultra-high energy storage capacity with accurately tunable triggering temperature; by introducing a second competing network, or by reconfiguring the existing network in a strained state, configurational chain bias can be effectively locked in, giving rise to two-way shape actuators that, in the absence of an external load, elongate upon cooling and reversibly contract upon heating. We found that well-defined network architecture plays an essential role in strain-induced crystallization and in the performance of cold-drawn shape-memory polymers. Model networks with uniform molecular weight between crosslinks, and specified functionality of each net-point, result in tougher, more elastic materials with a high degree of crystallinity and outstanding shape-memory properties. The thermal behavior of the model networks can be finely tuned by introducing non-crystalline small-molecule linkers that effectively frustrate the crystallization of the network strands. This results in shape-memory networks that are ultra-sensitive to heat, as deformed materials can be efficiently triggered to revert to their permanent state upon mere exposure to body temperature.
We also coupled the same reaction used to create the model network with conventional free-radical polymerization to prepare a dual-cure "double network" that behaves as a true thermal "actuator". This approach places sub-chains under different degrees of configurational bias within the network to exploit the material's propensity to undergo stress-induced crystallization. Reconfiguration of model shape-memory networks containing photo-sensitive linkages can also be employed to program two-way actuators. Chain reshuffling of a partially reconfigurable network is initiated upon exposure to light under specific strains. Interesting photo-induced creep and stress-relaxation behaviors were demonstrated and explained on the basis of a novel transient network model we derived. In summary, delicate manipulation of shape-memory network architectures addresses critical issues constraining the application of this type of functional polymer material. Strategies developed in this thesis may provide new opportunities in the field of shape-memory polymers.