Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting
Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut
2016-01-01
Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network. PMID:27959927
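The core idea of feeding the previous forecast error back as an additional network input can be illustrated with a deliberately simplified sketch. This is not the authors' ridge polynomial architecture (no higher-order pi-sigma units); the single tanh unit, toy series, lag count, and learning rate are illustrative assumptions only:

```python
# Simplified illustration of error feedback in a one-step-ahead forecaster:
# the previous prediction error is appended to the lagged inputs.
# NOT the RPNN-EF architecture, just the feedback idea in miniature.
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(0.1 * np.arange(500)) + 0.05 * rng.standard_normal(500)

lags, lr = 3, 0.01
w = 0.1 * rng.standard_normal(lags + 2)      # weights for lags, error feedback, bias

for epoch in range(20):
    prev_err = 0.0
    for t in range(lags, len(series)):
        x = np.concatenate([series[t - lags:t], [prev_err, 1.0]])
        y_hat = np.tanh(w @ x)               # forecast of series[t]
        err = series[t] - y_hat              # this error is fed back at step t + 1
        w += lr * err * (1.0 - y_hat**2) * x # online gradient step
        prev_err = err
```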
Method for neural network control of motion using real-time environmental feedback
NASA Technical Reports Server (NTRS)
Buckley, Theresa M. (Inventor)
1997-01-01
A method of motion control for robotics and other automatically controlled machinery using a neural network controller with real-time environmental feedback. The method is illustrated with a two-finger robotic hand having proximity sensors and force sensors that provide environmental feedback signals. The neural network controller is taught to control the robotic hand through training sets using back-propagation methods. The training sets are created by recording the control signals and the feedback signal as the robotic hand or a simulation of the robotic hand is moved through a representative grasping motion. The data recorded is divided into discrete increments of time and the feedback data is shifted out of phase with the control signal data so that the feedback signal data lag one time increment behind the control signal data. The modified data is presented to the neural network controller as a training set. The time lag introduced into the data allows the neural network controller to account for the temporal component of the robotic motion. Thus trained, the neural network controlled robotic hand is able to grasp a wide variety of different objects by generalizing from the training sets.
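A minimal sketch of the training-set construction described above, pairing each recorded control command with feedback from the previous time increment. The array names, shapes, and random placeholder recordings are illustrative assumptions, not details from the patent:

```python
# Build (input, target) pairs in which sensor feedback lags the control
# signals by one time increment. Placeholder data stand in for the recorded
# grasping motion.
import numpy as np

T, n_ctrl, n_fb = 200, 4, 6                 # time steps, control dims, sensor dims
controls = np.random.rand(T, n_ctrl)        # recorded control signals
feedback = np.random.rand(T, n_fb)          # recorded proximity/force readings

# Input at step t: previous command plus feedback from step t-1;
# target at step t: the command actually issued at step t.
inputs = np.hstack([controls[:-1], feedback[:-1]])   # shape (T-1, n_ctrl + n_fb)
targets = controls[1:]                               # shape (T-1, n_ctrl)
```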
An Investigation of the Application of Artificial Neural Networks to Adaptive Optics Imaging Systems
1991-12-01
neural network and the feedforward neural network studied is the single layer perceptron artificial neural network. The recurrent artificial neural network input ... features are the wavefront sensor slope outputs and neighboring actuator feedback commands. The feedforward artificial neural network input ...
Neural dynamic programming and its application to control systems
NASA Astrophysics Data System (ADS)
Seong, Chang-Yun
There are few general practical feedback control methods for nonlinear MIMO (multi-input-multi-output) systems, although such methods exist for their linear counterparts. Neural Dynamic Programming (NDP) is proposed as a practical design method of optimal feedback controllers for nonlinear MIMO systems. NDP is an offspring of both neural networks and optimal control theory. In optimal control theory, the optimal solution to any nonlinear MIMO control problem may be obtained from the Hamilton-Jacobi-Bellman equation (HJB) or the Euler-Lagrange equations (EL). The two sets of equations provide the same solution in different forms: EL leads to a sequence of optimal control vectors, called Feedforward Optimal Control (FOC); HJB yields a nonlinear optimal feedback controller, called Dynamic Programming (DP). DP produces an optimal solution that can reject disturbances and uncertainties as a result of feedback. Unfortunately, computation and storage requirements associated with DP solutions can be problematic, especially for high-order nonlinear systems. This dissertation presents an approximate technique for solving the DP problem based on neural network techniques that provides many of the performance benefits (e.g., optimality and feedback) of DP and benefits from the numerical properties of neural networks. We formulate neural networks to approximate optimal feedback solutions whose existence DP justifies. We show the conditions under which NDP closely approximates the optimal solution. Finally, we introduce the learning operator characterizing the learning process of the neural network in searching the optimal solution. The analysis of the learning operator provides not only a fundamental understanding of the learning process in neural networks but also useful guidelines for selecting the number of weights of the neural network. As a result, NDP finds---with a reasonable amount of computation and storage---the optimal feedback solutions to nonlinear MIMO control problems that would be very difficult to solve with DP. NDP was demonstrated on several applications such as the lateral autopilot logic for a Boeing 747, the minimum fuel control of a double-integrator plant with bounded control, the backward steering of a two-trailer truck, and the set-point control of a two-link robot arm.
Modeling neural circuits in Parkinson's disease.
Psiha, Maria; Vlamos, Panayiotis
2015-01-01
Parkinson's disease (PD) is caused by abnormal neural activity of the basal ganglia, which are connected to the cerebral cortex at the brain surface through complex neural circuits. For a better understanding of the pathophysiological mechanisms of PD, it is important to identify the underlying PD neural circuits, and to pinpoint the precise nature of the crucial aberrations in these circuits. In this paper, the general architecture of a hybrid Multilayer Perceptron (MLP) network for modeling the neural circuits in PD is presented. The main idea of the proposed approach is to divide the parkinsonian neural circuitry system into three discrete subsystems: the external stimuli subsystem, the life-threatening events subsystem, and the basal ganglia subsystem. The proposed model, which includes the key roles of brain neural circuits in PD, is based on both feedback and feedforward neural networks. Specifically, a three-layer MLP neural network with feedback in the second layer was designed. The feedback in the second layer of this model simulates the dopaminergic modulatory effect of the pars compacta on the striatum.
Cummine, Jacqueline; Cribben, Ivor; Luu, Connie; Kim, Esther; Bahktiari, Reyhaneh; Georgiou, George; Boliek, Carol A
2016-05-01
The neural circuitry associated with language processing is complex and dynamic. Graphical models are useful for studying complex neural networks as this method provides information about unique connectivity between regions within the context of the entire network of interest. Here, the authors explored the neural networks during covert reading to determine the role of feedforward and feedback loops in covert speech production. Brain activity of skilled adult readers was assessed in real word and pseudoword reading tasks with functional MRI (fMRI). The authors provide evidence for activity coherence in the feedforward system (inferior frontal gyrus-supplementary motor area) during real word reading and in the feedback system (supramarginal gyrus-precentral gyrus) during pseudoword reading. Graphical models provided evidence of an extensive, highly connected, neural network when individuals read real words that relied on coordination of the feedforward system. In contrast, when individuals read pseudowords the authors found a limited/restricted network that relied on coordination of the feedback system. Together, these results underscore the importance of considering multiple pathways and articulatory loops during language tasks and provide evidence for a print-to-speech neural network. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Quantized Synchronization of Chaotic Neural Networks With Scheduled Output Feedback Control.
Wan, Ying; Cao, Jinde; Wen, Guanghui
In this paper, the synchronization problem of master-slave chaotic neural networks with remote sensors, quantization process, and communication time delays is investigated. The information communication channel between the master chaotic neural network and slave chaotic neural network consists of several remote sensors, with each sensor able to access only partial knowledge of output information of the master neural network. At each sampling instant, each sensor updates its own measurement and only one sensor is scheduled to transmit its latest information to the controller's side in order to update the control inputs for the slave neural network. Thus, such a communication process and control strategy are much more energy-saving compared with the traditional point-to-point scheme. Sufficient conditions for the output feedback control gain matrix, allowable length of sampling intervals, and upper bound of network-induced delays are derived to ensure the quantized synchronization of master-slave chaotic neural networks. Lastly, Chua's circuit system and a 4-D Hopfield neural network are simulated to validate the effectiveness of the main results.
SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.
Zenke, Friedemann; Ganguli, Surya
2018-06-01
A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
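The credit-assignment comparison described above can be illustrated in a much simpler rate-based two-layer network (not the spiking SuperSpike model itself): hidden-layer errors are formed either with the transpose of the output weights (symmetric feedback) or with a fixed random matrix (feedback alignment). The sizes, data, and learning rate are illustrative assumptions:

```python
# Symmetric vs. fixed random feedback for propagating output errors to a
# hidden layer, sketched in a plain rate-based network (not a spiking model).
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out, lr = 20, 40, 5, 0.05
W1 = 0.1 * rng.standard_normal((n_hid, n_in))
W2 = 0.1 * rng.standard_normal((n_out, n_hid))
B = 0.1 * rng.standard_normal((n_out, n_hid))    # fixed random feedback weights

def train_step(x, y, feedback="symmetric"):
    """One gradient-like update; `feedback` selects the credit-assignment path."""
    global W1, W2
    h = np.tanh(W1 @ x)
    y_hat = W2 @ h
    e = y - y_hat                                # output error
    fb = W2.T @ e if feedback == "symmetric" else B.T @ e
    W2 += lr * np.outer(e, h)
    W1 += lr * np.outer(fb * (1.0 - h**2), x)    # hidden update via chosen feedback
    return float(np.mean(e**2))
```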
NASA Astrophysics Data System (ADS)
Pfeil, Thomas; Jordan, Jakob; Tetzlaff, Tom; Grübl, Andreas; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz
2016-04-01
High-level brain function, such as memory, classification, or reasoning, can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear subthreshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with nonlinear, conductance-based synapses. Emulations of these networks on the analog neuromorphic-hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. Our results confirm that shared-input correlations are actively suppressed by inhibitory feedback also in highly heterogeneous networks exhibiting broad, heavy-tailed firing-rate distributions. In line with former studies, cell heterogeneities reduce shared-input correlations. Overall, however, correlations in the recurrent system can increase with the level of heterogeneity as a consequence of diminished effective negative feedback.
Generalized Adaptive Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Tawel, Raoul
1993-01-01
Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.
Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.; Munhall, Kevin G.; Cusack, Rhodri; Johnsrude, Ingrid S.
2013-01-01
The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multi-voxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was employed to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared to during passive listening. One network of regions appears to encode an ‘error signal’ irrespective of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a fronto-temporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Taken together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems. PMID:23467350
NASA Astrophysics Data System (ADS)
Kim, Nakwan
Utilizing the universal approximation property of neural networks, we develop several novel approaches to neural network-based adaptive output feedback control of nonlinear systems, and illustrate these approaches for several flight control applications. In particular, we address the problem of non-affine systems and eliminate the fixed point assumption present in earlier work. All of the stability proofs are carried out in a form that eliminates an algebraic loop in the neural network implementation. An approximate input/output feedback linearizing controller is augmented with a neural network using input/output sequences of the uncertain system. These approaches permit adaptation to both parametric uncertainty and unmodeled dynamics. All physical systems also have control position and rate limits, which may either deteriorate performance or cause instability for a sufficiently high control bandwidth. Here we apply a method for protecting an adaptive process from the effects of input saturation and time delays, known as "pseudo control hedging". This method was originally developed for the state feedback case, and we provide a stability analysis that extends its domain of applicability to the case of output feedback. The approach is illustrated by the design of a pitch-attitude flight control system for a linearized model of an R-50 experimental helicopter, and by the design of a pitch-rate control system for a 58-state model of a flexible aircraft consisting of rigid body dynamics coupled with actuator and flexible modes. A new approach to augmentation of an existing linear controller is introduced. It is especially useful when there is limited information concerning the plant model, and the existing controller. The approach is applied to the design of an adaptive autopilot for a guided munition. Design of a neural network adaptive control that ensures asymptotically stable tracking performance is also addressed.
Decorrelation of Neural-Network Activity by Inhibitory Feedback
Einevoll, Gaute T.; Diesmann, Markus
2012-01-01
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II). PMID:23133368
Catic, Aida; Gurbeta, Lejla; Kurtovic-Kozaric, Amina; Mehmedbasic, Senad; Badnjevic, Almir
2018-02-13
The usage of Artificial Neural Networks (ANNs) for genome-enabled classifications and establishing genome-phenotype correlations has been investigated more extensively over the past few years. The reason for this is that ANNs are good approximators of complex functions, so classification can be performed without the need for an explicitly defined input-output model. This engineering tool can be applied for optimization of existing methods for disease/syndrome classification. Cytogenetic and molecular analyses are the most frequent tests used in prenatal diagnostics for the early detection of Turner, Klinefelter, Patau, Edwards and Down syndrome. These procedures can be lengthy and repetitive, and often employ invasive techniques, so a robust automated method for classifying and reporting prenatal diagnostics would greatly help the clinicians with their routine work. The database consisted of data collected from 2500 pregnant women who came to the Institute of Gynecology, Infertility and Perinatology "Mehmedbasic" for routine antenatal care between January 2000 and December 2016. During the first trimester, all women underwent a screening test in which values of maternal serum pregnancy-associated plasma protein A (PAPP-A) and free beta human chorionic gonadotropin (β-hCG) were measured. Also, fetal nuchal translucency thickness and the presence or absence of the nasal bone were assessed using ultrasound. The architectures of linear feedforward and feedback neural networks were investigated for various training data distributions and numbers of neurons in the hidden layer. The feedback neural network architecture outperformed the feedforward architecture in predictive ability for all five aneuploidy prenatal syndrome classes. The feedforward neural network with 15 neurons in the hidden layer achieved a classification sensitivity of 92.00%, while the classification sensitivity of the feedback (Elman) neural network was 99.00%. The average accuracy of the feedforward neural network was 89.6%, and that of the feedback network was 98.8%. The results presented in this paper prove that an expert diagnostic system based on neural networks can be efficiently used for classification of the five aneuploidy syndromes covered in this study, based on first-trimester maternal serum screening data, ultrasonographic findings and patient demographics. The developed expert system proved to be simple, robust, and powerful in properly classifying prenatal aneuploidy syndromes.
Neural cryptography with feedback.
Ruttor, Andreas; Kinzel, Wolfgang; Shacham, Lanir; Kanter, Ido
2004-04-01
Neural cryptography is based on a competition between attractive and repulsive stochastic forces. A feedback mechanism is added to neural cryptography which increases the repulsive forces. Using numerical simulations and an analytic approach, the probability of a successful attack is calculated for different model parameters. Scaling laws are derived which show that feedback improves the security of the system. In addition, a network with feedback generates a pseudorandom bit sequence which can be used to encrypt and decrypt a secret message.
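For context, a sketch of the basic tree parity machine key-exchange protocol that the feedback mechanism above extends. The feedback modification itself (reusing output bits to generate inputs) is not implemented here; K, N, L and the Hebbian update are the standard textbook choices, used as assumptions:

```python
# Basic neural key exchange with tree parity machines (TPMs); the paper's
# feedback extension is omitted.
import numpy as np

K, N, L = 3, 100, 3                              # hidden units, inputs each, weight bound
rng = np.random.default_rng(0)
wA = rng.integers(-L, L + 1, size=(K, N))        # party A's secret weights
wB = rng.integers(-L, L + 1, size=(K, N))        # party B's secret weights

def tpm_output(w, x):
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1
    return sigma, int(np.prod(sigma))

for step in range(5000):
    x = rng.choice([-1, 1], size=(K, N))         # public random inputs
    sA, tauA = tpm_output(wA, x)
    sB, tauB = tpm_output(wB, x)
    if tauA == tauB:                             # update only when the outputs agree
        for w, s in ((wA, sA), (wB, sB)):
            mask = s == tauA                     # only hidden units matching the output
            w[mask] = np.clip(w[mask] + tauA * x[mask], -L, L)
    if np.array_equal(wA, wB):                   # identical weights = shared secret key
        print(f"synchronized after {step + 1} rounds")
        break
```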
Jewett, Kathryn A; Christian, Catherine A; Bacos, Jonathan T; Lee, Kwan Young; Zhu, Jiuhe; Tsai, Nien-Pei
2016-03-22
Neural network synchrony is a critical factor in regulating information transmission through the nervous system. Improperly regulated neural network synchrony is implicated in pathophysiological conditions such as epilepsy. Despite the awareness of its importance, the molecular signaling underlying the regulation of neural network synchrony, especially after stimulation, remains largely unknown. In this study, we show that elevation of neuronal activity by the GABA(A) receptor antagonist, Picrotoxin, increases neural network synchrony in primary mouse cortical neuron cultures. The elevation of neuronal activity triggers Mdm2-dependent degradation of the tumor suppressor p53. We show here that blocking the degradation of p53 further enhances Picrotoxin-induced neural network synchrony, while promoting the inhibition of p53 with a p53 inhibitor reduces Picrotoxin-induced neural network synchrony. These data suggest that Mdm2-p53 signaling mediates a feedback mechanism to fine-tune neural network synchrony after activity stimulation. Furthermore, genetically reducing the expression of a direct target gene of p53, Nedd4-2, elevates neural network synchrony basally and occludes the effect of Picrotoxin. Finally, using a kainic acid-induced seizure model in mice, we show that alterations of Mdm2-p53-Nedd4-2 signaling affect seizure susceptibility. Together, our findings elucidate a critical role of Mdm2-p53-Nedd4-2 signaling underlying the regulation of neural network synchrony and seizure susceptibility and reveal potential therapeutic targets for hyperexcitability-associated neurological disorders.
Electronic neural networks for global optimization
NASA Technical Reports Server (NTRS)
Thakoor, A. P.; Moopenn, A. W.; Eberhardt, S.
1990-01-01
An electronic neural network with feedback architecture, implemented in analog custom VLSI, is described. Its application to problems of global optimization for dynamic assignment is discussed. The convergence properties of the neural network hardware are compared with computer simulation results. The neural network's ability to provide optimal or near optimal solutions within only a few neuron time constants, a speed enhancement of several orders of magnitude over conventional search methods, is demonstrated. The effect of noise on the circuit dynamics and the convergence behavior of the neural network hardware is also examined.
NASA Technical Reports Server (NTRS)
Ross, Muriel D.
1991-01-01
The three-dimensional organization of the vestibular macula is under study by computer-assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, highly channeled and distributed modifying, that are connected through feedforward-feedback collaterals and a biasing subcircuit. Computer simulations demonstrate that differences in geometry of the feedback (afferent) collaterals affect the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematic.
Smith, David V; Sip, Kamila E; Delgado, Mauricio R
2015-07-01
Multiple large-scale neural networks orchestrate a wide range of cognitive processes. For example, interoceptive processes related to self-referential thinking have been linked to the default-mode network (DMN); whereas exteroceptive processes related to cognitive control have been linked to the executive-control network (ECN). Although the DMN and ECN have been postulated to exert opposing effects on cognition, it remains unclear how connectivity with these spatially overlapping networks contribute to fluctuations in behavior. While previous work has suggested the medial-prefrontal cortex (MPFC) is involved in behavioral change following feedback, these observations could be linked to interoceptive processes tied to DMN or exteroceptive processes tied to ECN because MPFC is positioned in both networks. To address this problem, we employed independent component analysis combined with dual-regression functional connectivity analysis. Participants made a series of financial decisions framed as monetary gains or losses. In some sessions, participants received feedback from a peer observing their choices; in other sessions, feedback was not provided. Following feedback, framing susceptibility-indexed as the increase in gambling behavior in loss frames compared to gain frames-was heightened in some participants and diminished in others. We examined whether these individual differences were linked to differences in connectivity by contrasting sessions containing feedback against those that did not contain feedback. We found two key results. As framing susceptibility increased, the MPFC increased connectivity with DMN; in contrast, temporal-parietal junction decreased connectivity with the ECN. Our results highlight how functional connectivity patterns with distinct neural networks contribute to idiosyncratic behavioral changes. © 2015 Wiley Periodicals, Inc.
Effect of inhibitory feedback on correlated firing of spiking neural network.
Xie, Jinli; Wang, Zhijie
2013-08-01
Understanding the properties and mechanisms that generate different forms of correlation is critical for determining their role in cortical processing. Research on the retina, visual cortex, sensory cortex, and computational models has suggested that fast correlation with high temporal precision appears consistent with common input, and correlation on a slow time scale likely involves feedback. Based on a feedback spiking neural network model, we investigate the role of inhibitory feedback in shaping correlations on a time scale of 100 ms. Notably, the relationship between the correlation coefficient and inhibitory feedback strength is non-monotonic. Further, computational simulations show how firing rate and oscillatory activity form the basis of the mechanisms underlying this relationship. When the mean firing rate is held fixed, the correlation coefficient increases monotonically with inhibitory feedback, but the correlation coefficient keeps decreasing when the network has no oscillatory activity. Our findings reveal that two opposing effects of the inhibitory feedback on the firing activity of the network contribute to the non-monotonic relationship between the correlation coefficient and the strength of the inhibitory feedback. The inhibitory feedback affects the correlated firing activity by modulating the intensity and regularity of the spike trains. Finally, the non-monotonic relationship is replicated with varying transmission delays and different spatial network structures, demonstrating the universality of the results.
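As a concrete illustration of the quantity being studied, a sketch of a spike-count correlation coefficient computed over 100 ms windows for a pair of surrogate spike trains driven by a common rate fluctuation. The Poisson surrogate data and all parameters are illustrative assumptions, not the paper's network model:

```python
# Pearson correlation of spike counts in 100 ms windows for two spike trains
# sharing a common, slowly fluctuating drive. Surrogate data only.
import numpy as np

rng = np.random.default_rng(0)
dt, T, win = 0.001, 200.0, 0.1                    # s: step, duration, counting window
n_steps, n_win = int(T / dt), int(win / dt)

shared = np.repeat(rng.uniform(5.0, 25.0, n_steps // n_win), n_win)  # common rate (Hz)
spikes = rng.random((2, n_steps)) < shared * dt   # two conditionally independent trains

counts = spikes.reshape(2, -1, n_win).sum(axis=2) # spike counts per 100 ms window
corr = np.corrcoef(counts)[0, 1]
print(f"spike-count correlation over {win * 1e3:.0f} ms windows: {corr:.2f}")
```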
Ding, Xiaoshuai; Cao, Jinde; Zhao, Xuan; Alsaadi, Fuad E
2017-08-01
This paper is concerned with drive-response synchronization for a class of fractional-order bidirectional associative memory neural networks with time delays, as well as in the presence of discontinuous activation functions. The global existence of solutions in the sense of Filippov for such networks is first obtained based on the fixed-point theorem for condensing maps. Then state feedback and impulsive controllers are, respectively, designed to ensure the Mittag-Leffler synchronization of these neural networks, and two new synchronization criteria are obtained, which are expressed in terms of a fractional comparison principle and Razumikhin techniques. Numerical simulations are presented to validate the proposed methodologies.
Adaptive artificial neural network for autonomous robot control
NASA Technical Reports Server (NTRS)
Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.
1992-01-01
The topics are presented in viewgraph form and include: neural network controller for robot arm positioning with visual feedback; initial training of the arm; automatic recovery from cumulative fault scenarios; and error reduction by iterative fine movements.
Anatomically constrained neural network models for the categorization of facial expression
NASA Astrophysics Data System (ADS)
McMenamin, Brenton W.; Assadi, Amir H.
2004-12-01
The ability to recognize facial expression in humans is performed with the amygdala which uses parallel processing streams to identify the expressions quickly and accurately. Additionally, it is possible that a feedback mechanism may play a role in this process as well. Implementing a model with similar parallel structure and feedback mechanisms could be used to improve current facial recognition algorithms for which varied expressions are a source for error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However the use of parallel processing streams significantly improved accuracy over a similar network that did not have parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.
Zheng, Mingwen; Li, Lixiang; Peng, Haipeng; Xiao, Jinghua; Yang, Yixian; Zhang, Yanping; Zhao, Hui
2018-01-01
This paper mainly studies the globally fixed-time synchronization of a class of coupled neutral-type neural networks with mixed time-varying delays via discontinuous feedback controllers. Compared with the traditional neutral-type neural network model, the model in this paper is more general. A class of general discontinuous feedback controllers are designed. With the help of the definition of fixed-time synchronization, the upper right-hand derivative and a defined simple Lyapunov function, some easily verifiable and extensible synchronization criteria are derived to guarantee the fixed-time synchronization between the drive and response systems. Finally, two numerical simulations are given to verify the correctness of the results.
Observer-Based Adaptive Neural Network Control for Nonlinear Systems in Nonstrict-Feedback Form.
Chen, Bing; Zhang, Huaguang; Lin, Chong
2016-01-01
This paper focuses on the problem of adaptive neural network (NN) control for a class of nonlinear nonstrict-feedback systems via output feedback. A novel adaptive NN backstepping output-feedback control approach is first proposed for nonlinear nonstrict-feedback systems. The monotonicity of system bounding functions and the structure character of radial basis function (RBF) NNs are used to overcome the difficulties that arise from nonstrict-feedback structure. A state observer is constructed to estimate the immeasurable state variables. By combining adaptive backstepping technique with approximation capability of radial basis function NNs, an output-feedback adaptive NN controller is designed through backstepping approach. It is shown that the proposed controller guarantees semiglobal boundedness of all the signals in the closed-loop systems. Two examples are used to illustrate the effectiveness of the proposed approach.
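The radial basis function (RBF) approximator assumed in such adaptive designs can be sketched as follows; the fixed centers, widths, and the offline least-squares fit below are illustrative assumptions (in the control scheme the output weights would instead be adapted online by the backstepping-based laws):

```python
# Gaussian RBF network f_hat(x) = W^T phi(x), fitted here offline to an
# unknown smooth nonlinearity for illustration only.
import numpy as np

centers = np.linspace(-2.0, 2.0, 9)                   # fixed RBF centers
sigma = 0.5                                           # common width

def phi(x):
    """Gaussian basis vector for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * sigma ** 2))

xs = np.linspace(-2.0, 2.0, 200)
f = xs * np.sin(xs)                                   # stand-in for the unknown function
Phi = np.stack([phi(x) for x in xs])
W, *_ = np.linalg.lstsq(Phi, f, rcond=None)           # ideal weights (approximation)

print(np.max(np.abs(Phi @ W - f)))                    # small residual approximation error
```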
Fast temporal neural learning using teacher forcing
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Bahren, Jacob (Inventor)
1992-01-01
A neural network is trained to output a time dependent target vector defined over a predetermined time interval in response to a time dependent input vector defined over the same time interval by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
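A hedged sketch of the idea described above, in a simple discrete-time recurrent network rather than the patent's formulation: a fraction of the instantaneous output error is injected back into the network state, and that fraction is annealed toward zero over training. The architecture, the injection path through the readout weights, the toy signals, and the instantaneous weight update are all illustrative assumptions:

```python
# Corrective error feedback ("teacher forcing") with an annealed feedback gain,
# sketched for a small driven recurrent network with a trained linear readout.
import numpy as np

rng = np.random.default_rng(0)
T, n_h, lr, epochs = 200, 30, 0.02, 300
t_axis = np.linspace(0.0, 4.0 * np.pi, T)
u = np.cos(t_axis)                                   # time-dependent input vector
target = np.sin(t_axis)                              # time-dependent target vector

W = rng.standard_normal((n_h, n_h))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))            # recurrent weights, spectral radius 0.9
w_in = rng.standard_normal(n_h)
w_out = np.zeros(n_h)

for epoch in range(epochs):
    beta = 1.0 - epoch / epochs                      # error-feedback gain, annealed to 0
    h = np.zeros(n_h)
    for t in range(T):
        y = w_out @ h                                # network output
        err = target[t] - y                          # output error
        w_out += lr * err * h                        # instantaneous readout update
        # corrective feedback: part of the error is injected back into the state
        h = np.tanh(W @ h + w_in * u[t] + beta * err * w_out)
```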
Wang, Dongshu; Huang, Lihong; Tang, Longkun
2015-08-01
This paper is concerned with the synchronization dynamics of a class of delayed neural networks with discontinuous neuron activations. Continuous and discontinuous state feedback controllers are designed such that the neural network model can realize exponential complete synchronization, in view of functional differential inclusion theory, the Lyapunov functional method and inequality techniques. The newly proposed results are easy to verify and are also applicable to neural networks with continuous activations. Finally, some numerical examples show the applicability and effectiveness of our main results.
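A hedged numerical sketch of drive-response (master-slave) synchronization under linear state feedback, using a small network with smooth activations and no delay rather than the delayed, discontinuous-activation model analyzed in the paper; the weights, gain, and initial conditions are illustrative:

```python
# Master-slave synchronization of two identical networks via the linear
# state-feedback controller u = -k (y - x); with k large enough relative to
# the coupling weights, the synchronization error decays exponentially.
import numpy as np

rng = np.random.default_rng(2)
n, dt, k = 4, 0.01, 8.0                      # network size, Euler step, feedback gain
W = rng.standard_normal((n, n))              # connection weights

def f(z, u=0.0):
    return -z + W @ np.tanh(z) + u           # Hopfield-type network dynamics

x = rng.standard_normal(n)                   # drive (master) state
y = rng.standard_normal(n)                   # response (slave) state

for _ in range(5000):
    u = -k * (y - x)                         # state feedback controller
    x = x + dt * f(x)
    y = y + dt * f(y, u)

print(np.linalg.norm(y - x))                 # synchronization error, essentially zero
```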
Neural Network Classifies Teleoperation Data
NASA Technical Reports Server (NTRS)
Fiorini, Paolo; Giancaspro, Antonio; Losito, Sergio; Pasquariello, Guido
1994-01-01
Prototype artificial neural network, implemented in software, identifies phases of telemanipulator tasks in real time by analyzing feedback signals from force sensors on manipulator hand. Prototype is early, subsystem-level product of continuing effort to develop automated system that assists in training and supervising human control operator: provides symbolic feedback (e.g., warnings of impending collisions or evaluations of performance) to operator in real time during successive executions of same task. Also simplifies transition between teleoperation and autonomous modes of telerobotic system.
Tong, Shaocheng; Wang, Tong; Li, Yongming; Zhang, Huaguang
2014-06-01
This paper discusses the problem of adaptive neural network output feedback control for a class of stochastic nonlinear strict-feedback systems. The systems considered involve unknown nonlinear uncertainties, unknown dead-zones and unmodeled dynamics, and their state variables are not directly measured. In this paper, neural networks (NNs) are employed to approximate the unknown nonlinear uncertainties, and the dead-zone is represented as a time-varying system with a bounded disturbance. An NN state observer is designed to estimate the unmeasured states. Based on both the backstepping design technique and a stochastic small-gain theorem, a robust adaptive NN output feedback control scheme is developed. It is proved that all the variables involved in the closed-loop system are input-state-practically stable in probability, and are robust to the unmodeled dynamics. Meanwhile, the observer errors and the output of the system can be regulated to a small neighborhood of the origin by selecting appropriate design parameters. Simulation examples are also provided to illustrate the effectiveness of the proposed approach.
Neural Networks for Rapid Design and Analysis
NASA Technical Reports Server (NTRS)
Sparks, Dean W., Jr.; Maghami, Peiman G.
1998-01-01
Artificial neural networks have been employed for rapid and efficient dynamics and control analysis of flexible systems. Specifically, feedforward neural networks are designed to approximate nonlinear dynamic components over prescribed input ranges, and are used in simulations as a means to speed up the overall time response analysis process. To capture the recursive nature of dynamic components with artificial neural networks, recurrent networks, which use state feedback with the appropriate number of time delays as inputs to the networks, are employed. Once properly trained, neural networks can give very good approximations to nonlinear dynamic components, and by their judicious use in simulations, allow the analyst the potential to speed up the analysis process considerably. To illustrate this potential speed up, an existing simulation model of a spacecraft reaction wheel system is executed, first conventionally, and then with an artificial neural network in place.
A biologically inspired neural network for dynamic programming.
Francelin Romero, R A; Kacpryzk, J; Gomide, F
2001-12-01
An artificial neural network with a two-layer feedback topology and generalized recurrent neurons, for solving nonlinear discrete dynamic optimization problems, is developed. A direct method to assign the weights of neural networks is presented. The method is based on Bellman's Optimality Principle and on the interchange of information which occurs during the synaptic chemical processing among neurons. The neural-network-based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of the neural networks; further, it reduces the severity of computational problems that can occur in conventional methods. Some illustrative application examples are presented to show how this approach works, including the shortest path and fuzzy decision-making problems.
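For reference, the Bellman optimality principle that such a network encodes in its weights can be written out directly as value iteration for a small shortest-path problem. This is a plain dynamic-programming illustration, not the neural implementation; the graph is an illustrative assumption:

```python
# Value iteration for a shortest-path problem: V(n) = min over successors m
# of cost(n, m) + V(m), the Bellman backup referenced above.
import math

graph = {                       # node -> {successor: edge cost}
    "A": {"B": 2, "C": 5},
    "B": {"C": 1, "D": 4},
    "C": {"D": 1},
    "D": {},
}
goal = "D"
V = {n: (0.0 if n == goal else math.inf) for n in graph}

for _ in range(len(graph)):     # enough sweeps for values to settle
    for n, nbrs in graph.items():
        if n != goal and nbrs:
            V[n] = min(c + V[m] for m, c in nbrs.items())

print(V)                        # e.g. V["A"] == 4.0 via the path A-B-C-D
```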
Hu, Jin; Zeng, Chunna
2017-02-01
The complex-valued Cohen-Grossberg neural network is a special kind of complex-valued neural network. In this paper, the synchronization problem of a class of complex-valued Cohen-Grossberg neural networks with known and unknown parameters is investigated. By using Lyapunov functionals and the adaptive control method based on parameter identification, some adaptive feedback schemes are proposed to achieve synchronization exponentially between the drive and response systems. The results obtained in this paper have extended and improved some previous works on adaptive synchronization of Cohen-Grossberg neural networks. Finally, two numerical examples are given to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Implementation of pulse-coupled neural networks in a CNAPS environment.
Kinser, J M; Lindblad, T
1999-01-01
Pulse-coupled neural networks (PCNNs) are biologically inspired algorithms very well suited for image/signal preprocessing. While several analog implementations have been proposed, we suggest a digital implementation in an existing environment, the connected network of adapted processors system (CNAPS). The reason for this is twofold. First, CNAPS is a commercially available chip which has been used for several neural-network implementations. Second, the PCNN is, in almost all applications, a very efficient component of a system requiring subsequent and additional processing. This may include gating, Fourier transforms, neural classifiers, data mining, etc., with or without feedback to the PCNN.
Wei, Xile; Zhang, Danhong; Lu, Meili; Wang, Jiang; Yu, Haitao; Che, Yanqiu
2015-01-01
This paper presents the endogenous electric field in chemically or electrically coupled synaptic networks, aiming to study the role of endogenous field feedback in signal propagation in neural systems. It shows that the feedback of endogenous fields to network activities can reduce the required energy of the noise and enhance the transmission of input signals in hybrid coupled populations. As a common and important nonsynaptic interaction among neurons, the endogenous field feedback can not only promote the detectability of exogenous weak signals in hybrid coupled neural populations but also enhance the robustness of the detectability against noise. Furthermore, with increasing field coupling strength, the endogenous field feedback is conducive to stochastic resonance by facilitating the transition of cluster activities from the no-spiking to the spiking region. Distinct from synaptic coupling, the endogenous field feedback can act as an internal driving force to boost the population activities, similar to noise. Thus, it can help to transmit exogenous weak signals within the network in the absence of a noise drive via stochastic-like resonance.
NASA Astrophysics Data System (ADS)
Virkar, Yogesh S.; Shew, Woodrow L.; Restrepo, Juan G.; Ott, Edward
2016-10-01
Learning and memory are acquired through long-lasting changes in synapses. In the simplest models, such synaptic potentiation typically leads to runaway excitation, but in reality there must exist processes that robustly preserve overall stability of the neural system dynamics. How is this accomplished? Various approaches to this basic question have been considered. Here we propose a particularly compelling and natural mechanism for preserving stability of learning neural systems. This mechanism is based on the global processes by which metabolic resources are distributed to the neurons by glial cells. Specifically, we introduce and study a model composed of two interacting networks: a model neural network interconnected by synapses that undergo spike-timing-dependent plasticity; and a model glial network interconnected by gap junctions that diffusively transport metabolic resources among the glia and, ultimately, to neural synapses where they are consumed. Our main result is that the biophysical constraints imposed by diffusive transport of metabolic resources through the glial network can prevent runaway growth of synaptic strength, both during ongoing activity and during learning. Our findings suggest a previously unappreciated role for glial transport of metabolites in the feedback control stabilization of neural network dynamics during learning.
NASA Astrophysics Data System (ADS)
Wang, W.; Wang, D.; Peng, Z. H.
2017-09-01
Without assuming that the communication topologies among the neural network (NN) weights are undirected or that the states of each agent are measurable, cooperative learning NN output feedback control is addressed for uncertain nonlinear multi-agent systems with identical structures in strict-feedback form. By establishing directed communication topologies among NN weights to share their learned knowledge, NNs with cooperative learning laws are employed to identify the uncertainties. By designing NN-based κ-filter observers to estimate the unmeasurable states, a new cooperative learning output feedback control scheme is proposed to guarantee that the system outputs can track nonidentical reference signals with bounded tracking errors. A simulation example is given to demonstrate the effectiveness of the theoretical results.
Wu, Yuanyuan; Cao, Jinde; Li, Qingbo; Alsaedi, Ahmed; Alsaadi, Fuad E
2017-01-01
This paper deals with the finite-time synchronization problem for a class of uncertain coupled switched neural networks under asynchronous switching. By constructing appropriate Lyapunov-like functionals and using the average dwell time technique, some sufficient criteria are derived to guarantee the finite-time synchronization of considered uncertain coupled switched neural networks. Meanwhile, the asynchronous switching feedback controller is designed to finite-time synchronize the concerned networks. Finally, two numerical examples are introduced to show the validity of the main results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Balanced cortical microcircuitry for spatial working memory based on corrective feedback control.
Lim, Sukbin; Goldman, Mark S
2014-05-14
A hallmark of working memory is the ability to maintain graded representations of both the spatial location and amplitude of a memorized stimulus. Previous work has identified a neural correlate of spatial working memory in the persistent maintenance of spatially specific patterns of neural activity. How such activity is maintained by neocortical circuits remains unknown. Traditional models of working memory maintain analog representations of either the spatial location or the amplitude of a stimulus, but not both. Furthermore, although most previous models require local excitation and lateral inhibition to maintain spatially localized persistent activity stably, the substrate for lateral inhibitory feedback pathways is unclear. Here, we suggest an alternative model for spatial working memory that is capable of maintaining analog representations of both the spatial location and amplitude of a stimulus, and that does not rely on long-range feedback inhibition. The model consists of a functionally columnar network of recurrently connected excitatory and inhibitory neural populations. When excitation and inhibition are balanced in strength but offset in time, drifts in activity trigger spatially specific negative feedback that corrects memory decay. The resulting networks can temporally integrate inputs at any spatial location, are robust against many commonly considered perturbations in network parameters, and, when implemented in a spiking model, generate irregular neural firing characteristic of that observed experimentally during persistent activity. This work suggests balanced excitatory-inhibitory memory circuits implementing corrective negative feedback as a substrate for spatial working memory. Copyright © 2014 the authors 0270-6474/14/346790-17$15.00/0.
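The corrective-feedback idea can be illustrated with a deliberately stripped-down two-population rate model (an illustration only, not the spatially structured model of this record): recurrent excitation and feedback inhibition are matched in strength but act on different timescales, here with the excitatory loop assumed slower than the inhibitory one, so the net feedback becomes a slow drift-correcting signal and a transient stimulus leaves persistent activity. All parameter values below are illustrative assumptions.

```python
import numpy as np

tau_E, tau_I = 0.100, 0.010        # s; slower effective excitatory than inhibitory kinetics
W_EI, W_IE = 3.0, 3.0              # feedback inhibition loop; balance: W_EE - 1 == W_EI * W_IE
dt, T = 1e-4, 3.0

def simulate(W_EE):
    rE = rI = 0.0
    trace = []
    for step in range(int(T / dt)):
        t = step * dt
        inp = 1.0 if 0.1 < t < 0.2 else 0.0            # brief stimulus to be remembered
        drE = (-rE + W_EE * rE - W_EI * rI + inp) / tau_E
        drI = (-rI + W_IE * rE) / tau_I
        rE += dt * drE
        rI += dt * drI
        trace.append(rE)
    return np.array(trace)

balanced = simulate(10.0)   # tuned, balanced network: activity persists after the stimulus
unbalanced = simulate(5.0)  # weaker excitation: the memory decays quickly
print("rE at t=3s  balanced: %.3f  unbalanced: %.3f" % (balanced[-1], unbalanced[-1]))
```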
Noise in Neural Networks: Thresholds, Hysteresis, and Neuromodulation of Signal-To-Noise
NASA Astrophysics Data System (ADS)
Keeler, James D.; Pichler, Elgar E.; Ross, John
1989-03-01
We study a neural-network model including Gaussian noise, higher-order neuronal interactions, and neuromodulation. For a first-order network, there is a threshold in the noise level (phase transition) above which the network displays only disorganized behavior and critical slowing down near the noise threshold. The network can tolerate more noise if it has higher-order feedback interactions, which also lead to hysteresis and multistability in the network dynamics. The signal-to-noise ratio can be adjusted in a biological neural network by neuromodulators such as norepinephrine. Comparisons are made to experimental results and further investigations are suggested to test the effects of hysteresis and neuromodulation in pattern recognition and learning. We propose that norepinephrine may "quench" the neural patterns of activity to enhance the ability to learn details.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meier, E.; Morgan, M. J.; Biedron, S. G.
2009-01-01
This paper describes the implementation of a neural network hybrid controller for energy stabilization at the Australian Synchrotron Linac. The structure of the controller consists of a neural network (NNET) feed forward control, augmented by a conventional Proportional-Integral (PI) feedback controller to ensure stability of the system. The system is provided with past states of the machine in order to predict its future state, and therefore apply appropriate feed forward control. The NNET is able to cancel multiple frequency jitter in real-time. When it is not performing optimally due to jitter changes, the system can successfully be augmented by the PI controller to attenuate the remaining perturbations. With a view to controlling the energy and bunch length at the FERMI@Elettra Free Electron Laser (FEL), the present study considers a neural network hybrid feed forward-feedback type of control to rectify limitations related to feedback systems, such as poor response for high jitter frequencies or limited bandwidth, while ensuring robustness of control. The Australian Synchrotron Linac is equipped with a beam position monitor (BPM) provided by Sincrotrone Trieste from a former transport line, allowing energy measurements and energy control experiments. The present study will consequently focus on correcting energy jitter induced by variations in klystron phase and voltage.
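A minimal sketch of the hybrid feed-forward/feedback structure described above: a predictor trained on past samples supplies the feed-forward term, and a PI loop acts on the residual. Here a linear least-squares predictor stands in for the NNET, a synthetic slow sinusoid plus noise stands in for the accelerator jitter, and the lag count, gains and the way the residual error is scored are all illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt = 2000, 1.0
jitter = 0.5 * np.sin(2 * np.pi * 0.01 * np.arange(N)) + 0.05 * rng.standard_normal(N)

# fit a linear one-step-ahead predictor on past samples (stand-in for the NNET)
LAGS = 8
X = np.column_stack([jitter[i:N - LAGS + i] for i in range(LAGS)])
y = jitter[LAGS:]
coef, *_ = np.linalg.lstsq(X[: N // 2 - LAGS], y[: N // 2 - LAGS], rcond=None)

Kp, Ki = 0.4, 0.05
integ, prev_resid, err_log = 0.0, 0.0, []
for k in range(N // 2, N):
    predicted = jitter[k - LAGS:k] @ coef            # feed-forward term from past shots
    correction = predicted + Kp * prev_resid + Ki * integ
    resid = jitter[k] - correction                   # error remaining after correction
    integ += resid * dt
    prev_resid = resid
    err_log.append(resid)

print("rms error, hybrid correction: %.4f   uncorrected jitter: %.4f"
      % (np.std(err_log), np.std(jitter[N // 2:])))
```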
Zou, An-Min; Dev Kumar, Krishna; Hou, Zeng-Guang
2010-09-01
This paper investigates the problem of output feedback attitude control of an uncertain spacecraft. Two robust adaptive output feedback controllers based on Chebyshev neural networks (CNN) termed adaptive neural networks (NN) controller-I and adaptive NN controller-II are proposed for the attitude tracking control of spacecraft. The four-parameter representations (quaternion) are employed to describe the spacecraft attitude for global representation without singularities. The nonlinear reduced-order observer is used to estimate the derivative of the spacecraft output, and the CNN is introduced to further improve the control performance through approximating the spacecraft attitude motion. The implementation of the basis functions of the CNN used in the proposed controllers depends only on the desired signals, and the smooth robust compensator using the hyperbolic tangent function is employed to counteract the CNN approximation errors and external disturbances. The adaptive NN controller-II can efficiently avoid the over-estimation problem (i.e., the bound of the CNNs output is much larger than that of the approximated unknown function, and hence, the control input may be very large) existing in the adaptive NN controller-I. Both adaptive output feedback controllers using CNN can guarantee that all signals in the resulting closed-loop system are uniformly ultimately bounded. For performance comparisons, the standard adaptive controller using the linear parameterization of spacecraft attitude motion is also developed. Simulation studies are presented to show the advantages of the proposed CNN-based output feedback approach over the standard adaptive output feedback approach.
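The Chebyshev basis on which such CNN approximators are built can be generated with the standard three-term recurrence T0(x) = 1, T1(x) = x, T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x). The helper below is a generic sketch; the function name and the sample "desired signal" are our own, not taken from the paper.

```python
import numpy as np

def chebyshev_basis(x, order):
    """Return [T0(x), T1(x), ..., T_order(x)] for a scalar or array x."""
    x = np.asarray(x, dtype=float)
    T = [np.ones_like(x), x]
    for n in range(1, order):
        T.append(2.0 * x * T[n] - T[n - 1])   # three-term recurrence
    return np.stack(T[:order + 1])

# example: basis evaluated on a desired attitude signal sampled on [-1, 1]
q_desired = np.linspace(-1.0, 1.0, 5)
print(chebyshev_basis(q_desired, order=3))
```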
Guarneri, Paolo; Rocca, Gianpiero; Gobbi, Massimiliano
2008-09-01
This paper deals with the simulation of the tire/suspension dynamics by using recurrent neural networks (RNNs). RNNs are derived from the multilayer feedforward neural networks, by adding feedback connections between output and input layers. The optimal network architecture derives from a parametric analysis based on the optimal tradeoff between network accuracy and size. The neural network can be trained with experimental data obtained in the laboratory from simulated road profiles (cleats). The results obtained from the neural network demonstrate good agreement with the experimental results over a wide range of operation conditions. The NN model can be effectively applied as a part of vehicle system model to accurately predict elastic bushings and tire dynamics behavior. Although the neural network model, as a black-box model, does not provide a good insight of the physical behavior of the tire/suspension system, it is a useful tool for assessing vehicle ride and noise, vibration, harshness (NVH) performance due to its good computational efficiency and accuracy.
Autoshaping and automaintenance: a neural-network approach.
Burgos, José E
2007-07-01
This article presents an interpretation of autoshaping, and positive and negative automaintenance, based on a neural-network model. The model makes no distinction between operant and respondent learning mechanisms, and takes into account knowledge of hippocampal and dopaminergic systems. Four simulations were run, each one using an A-B-A design and four instances of feedforward architectures. In A, networks received a positive contingency between inputs that simulated a conditioned stimulus (CS) and an input that simulated an unconditioned stimulus (US). Responding was simulated as an output activation that was neither elicited by nor required for the US. B was an omission-training procedure. Response directedness was defined as sensory feedback from responding, simulated as a dependence of other inputs on responding. In Simulation 1, the phenomena were simulated with a fully connected architecture and maximally intense response feedback. The other simulations used a partially connected architecture without competition between CS and response feedback. In Simulation 2, a maximally intense feedback resulted in substantial autoshaping and automaintenance. In Simulation 3, eliminating response feedback interfered substantially with autoshaping and automaintenance. In Simulation 4, intermediate autoshaping and automaintenance resulted from an intermediate response feedback. Implications for the operant-respondent distinction and the behavior-neuroscience relation are discussed.
Autoshaping and Automaintenance: A Neural-Network Approach
Burgos, José E
2007-01-01
This article presents an interpretation of autoshaping, and positive and negative automaintenance, based on a neural-network model. The model makes no distinction between operant and respondent learning mechanisms, and takes into account knowledge of hippocampal and dopaminergic systems. Four simulations were run, each one using an A-B-A design and four instances of feedforward architectures. In A, networks received a positive contingency between inputs that simulated a conditioned stimulus (CS) and an input that simulated an unconditioned stimulus (US). Responding was simulated as an output activation that was neither elicited by nor required for the US. B was an omission-training procedure. Response directedness was defined as sensory feedback from responding, simulated as a dependence of other inputs on responding. In Simulation 1, the phenomena were simulated with a fully connected architecture and maximally intense response feedback. The other simulations used a partially connected architecture without competition between CS and response feedback. In Simulation 2, a maximally intense feedback resulted in substantial autoshaping and automaintenance. In Simulation 3, eliminating response feedback interfered substantially with autoshaping and automaintenance. In Simulation 4, intermediate autoshaping and automaintenance resulted from an intermediate response feedback. Implications for the operant–respondent distinction and the behavior–neuroscience relation are discussed. PMID:17725055
Loop Mirror Laser Neural Network with a Fast Liquid-Crystal Display
NASA Astrophysics Data System (ADS)
Mos, Evert C.; Schleipen, Jean J. H. B.; de Waardt, Huug; Khoe, Djan G. D.
1999-07-01
In our laser neural network (LNN) all-optical threshold action is obtained by application of controlled optical feedback to a laser diode. Here an extended experimental LNN is presented with as many as 32 neurons and 12 inputs. In the setup we use a fast liquid-crystal display to implement an optical matrix vector multiplier. This display, based on ferroelectric liquid-crystal material, enables us to present 125 training examples per second to the LNN. To maximize the optical feedback efficiency of the setup, a loop mirror is introduced. We use a delta-rule learning algorithm to train the network to perform a number of functions toward the application area of telecommunication data switching.
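A software sketch of delta-rule training for a single thresholded matrix-vector multiplier layer of comparable size (12 inputs, 32 neurons); the logistic threshold, the synthetic task and the learning rate are illustrative assumptions and do not model the optical hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, lr = 12, 32, 0.1
W = 0.1 * rng.standard_normal((n_out, n_in))

def act(z):                      # smooth stand-in for the laser threshold nonlinearity
    return 1.0 / (1.0 + np.exp(-z))

# toy training set: binary inputs, targets given by a hidden linear-threshold rule
X = rng.integers(0, 2, (200, n_in)).astype(float)
M = rng.standard_normal((n_in, n_out))
T = (X @ M > 0).astype(float)

for epoch in range(300):
    for x, t in zip(X, T):
        y = act(W @ x)                                  # matrix-vector multiply + threshold
        W += lr * np.outer((t - y) * y * (1 - y), x)    # delta-rule weight update

print("mean abs error after training: %.3f" % np.mean(np.abs(act(X @ W.T) - T)))
```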
Finite-Time Stabilization and Adaptive Control of Memristor-Based Delayed Neural Networks.
Wang, Leimin; Shen, Yi; Zhang, Guodong
Finite-time stability problem has been a hot topic in control and system engineering. This paper deals with the finite-time stabilization issue of memristor-based delayed neural networks (MDNNs) via two control approaches. First, in order to realize the stabilization of MDNNs in finite time, a delayed state feedback controller is proposed. Then, a novel adaptive strategy is applied to the delayed controller, and finite-time stabilization of MDNNs can also be achieved by using the adaptive control law. Some easily verified algebraic criteria are derived to ensure the stabilization of MDNNs in finite time, and the estimation of the settling time functional is given. Moreover, several finite-time stability results as our special cases for both memristor-based neural networks (MNNs) without delays and neural networks are given. Finally, three examples are provided for the illustration of the theoretical results.
Liu, Jianbo; Khalil, Hassan K; Oweiss, Karim G
2011-10-01
In bi-directional brain-machine interfaces (BMIs), precisely controlling the delivery of microstimulation, both in space and in time, is critical to continuously modulate the neural activity patterns that carry information about the state of the brain-actuated device to sensory areas in the brain. In this paper, we investigate the use of neural feedback to control the spatiotemporal firing patterns of neural ensembles in a model of the thalamocortical pathway. Control of pyramidal (PY) cells in the primary somatosensory cortex (S1) is achieved based on microstimulation of thalamic relay cells through multiple-input multiple-output (MIMO) feedback controllers. This closed loop feedback control mechanism is achieved by simultaneously varying the stimulation parameters across multiple stimulation electrodes in the thalamic circuit based on continuous monitoring of the difference between reference patterns and the evoked responses of the cortical PY cells. We demonstrate that it is feasible to achieve a desired level of performance by controlling the firing activity pattern of a few "key" neural elements in the network. Our results suggest that neural feedback could be an effective method to facilitate the delivery of information to the cortex to substitute lost sensory inputs in cortically controlled BMIs.
Method of gear fault diagnosis based on EEMD and improved Elman neural network
NASA Astrophysics Data System (ADS)
Zhang, Qi; Zhao, Wei; Xiao, Shungen; Song, Mengmeng
2017-05-01
Fault information for gear defects such as cracks and wear is usually difficult to diagnose because the fault signatures are weak, so a gear fault diagnosis method based on the fusion of EEMD and an improved Elman neural network is proposed. A number of IMF components are obtained by decomposing the denoised fault signals with EEMD, and the pseudo IMF components are eliminated by using the correlation coefficient method to obtain the effective IMF components. The energy characteristic value of each effective component is calculated as the input feature of the Elman neural network; the improved Elman neural network extends the standard network by adding a feedback factor. Fault data for normal, broken-tooth, cracked, and worn gears were collected in the field and analyzed with the proposed diagnostic method. The results show that, compared with the standard Elman neural network, the improved Elman neural network achieves higher diagnostic efficiency.
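A sketch of the feature-extraction stage this abstract describes, written for the case where the EEMD decomposition has already produced an array of IMFs (for example from a package such as PyEMD); the correlation threshold, the synthetic signal and the fake IMFs are illustrative assumptions. The resulting normalized energy vector would then be fed to the (improved) Elman network.

```python
import numpy as np

def effective_imf_energy_features(signal, imfs, corr_threshold=0.3):
    """Keep IMFs correlated with the raw signal and return their normalized energies."""
    feats = []
    for imf in imfs:
        corr = np.corrcoef(signal, imf)[0, 1]
        if abs(corr) >= corr_threshold:          # drop pseudo-IMF components
            feats.append(np.sum(imf ** 2))       # energy of the effective component
    feats = np.asarray(feats, dtype=float)
    return feats / feats.sum()                   # normalized energy feature vector

# toy usage with a synthetic signal and hand-made "IMFs"
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 50 * t) + 0.4 * np.sin(2 * np.pi * 120 * t)
imfs = np.vstack([np.sin(2 * np.pi * 50 * t),
                  0.4 * np.sin(2 * np.pi * 120 * t),
                  0.01 * np.random.default_rng(0).standard_normal(t.size)])
print(effective_imf_energy_features(signal, imfs))
```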
Optimal exponential synchronization of general chaotic delayed neural networks: an LMI approach.
Liu, Meiqin
2009-09-01
This paper investigates the optimal exponential synchronization problem of general chaotic neural networks with or without time delays by virtue of Lyapunov-Krasovskii stability theory and the linear matrix inequality (LMI) technique. This general model, which is the interconnection of a linear delayed dynamic system and a bounded static nonlinear operator, covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks (CNNs), bidirectional associative memory (BAM) networks, and recurrent multilayer perceptrons (RMLPs) with or without delays. Using the drive-response concept, time-delay feedback controllers are designed to synchronize two identical chaotic neural networks as quickly as possible. The control design equations are shown to be a generalized eigenvalue problem (GEVP) which can be easily solved by various convex optimization algorithms to determine the optimal control law and the optimal exponential synchronization rate. Detailed comparisons with existing results are made and numerical simulations are carried out to demonstrate the effectiveness of the established synchronization laws.
Qi, Donglian; Liu, Meiqin; Qiu, Meikang; Zhang, Senlin
2010-08-01
This brief studies exponential H(infinity) synchronization of a class of general discrete-time chaotic neural networks with external disturbance. On the basis of the drive-response concept and H(infinity) control theory, and using Lyapunov-Krasovskii (or Lyapunov) functional, state feedback controllers are established to not only guarantee exponentially stable synchronization between two general chaotic neural networks with or without time delays, but also reduce the effect of external disturbance on the synchronization error to a minimal H(infinity) norm constraint. The proposed controllers can be obtained by solving the convex optimization problems represented by linear matrix inequalities. Most discrete-time chaotic systems with or without time delays, such as Hopfield neural networks, cellular neural networks, bidirectional associative memory networks, recurrent multilayer perceptrons, Cohen-Grossberg neural networks, Chua's circuits, etc., can be transformed into this general chaotic neural network so that the H(infinity) synchronization controller can be designed in a unified way. Finally, some illustrative examples with their simulations have been utilized to demonstrate the effectiveness of the proposed methods.
Allam, Ahmed M; Abbas, Hazem M
2010-12-01
Neural cryptography deals with the problem of "key exchange" between two neural networks using the mutual learning concept. The two networks exchange their outputs (in bits) and the key between the two communicating parties is eventually represented in the final learned weights, when the two networks are said to be synchronized. Security of neural synchronization is put at risk if an attacker is capable of synchronizing with any of the two parties during the training process. Therefore, diminishing the probability of such a threat improves the reliability of exchanging the output bits through a public channel. The synchronization with feedback algorithm is one of the existing algorithms that enhances the security of neural cryptography. This paper proposes three new algorithms to enhance the mutual learning process. They mainly depend on disrupting the attacker confidence in the exchanged outputs and input patterns during training. The first algorithm is called "Do not Trust My Partner" (DTMP), which relies on one party sending erroneous output bits, with the other party being capable of predicting and correcting this error. The second algorithm is called "Synchronization with Common Secret Feedback" (SCSFB), where inputs are kept partially secret and the attacker has to train its network on input patterns that are different from the training sets used by the communicating parties. The third algorithm is a hybrid technique combining the features of the DTMP and SCSFB. The proposed approaches are shown to outperform the synchronization with feedback algorithm in the time needed for the parties to synchronize.
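For readers unfamiliar with the underlying protocol, the following is a sketch of plain mutual learning between two tree parity machines, the baseline that the algorithms above build on (DTMP and SCSFB themselves are not reproduced here). K, N, L, the hebbian rule and the tie-breaking are conventional, illustrative choices.

```python
import numpy as np

K, N, L = 3, 10, 3
rng = np.random.default_rng(0)

def output(W, X):
    sigma = np.sign(np.sum(W * X, axis=1))
    sigma[sigma == 0] = -1          # break ties
    return sigma, int(np.prod(sigma))

def hebbian(W, X, sigma, tau):
    for k in range(K):
        if sigma[k] == tau:         # update only hidden units that agree with the output
            W[k] = np.clip(W[k] + sigma[k] * X[k], -L, L)

A = rng.integers(-L, L + 1, (K, N))
B = rng.integers(-L, L + 1, (K, N))

steps = 0
while not np.array_equal(A, B):
    X = rng.choice([-1, 1], size=(K, N))
    sA, tauA = output(A, X)
    sB, tauB = output(B, X)
    if tauA == tauB:                # exchange output bits; learn only when they agree
        hebbian(A, X, sA, tauA)
        hebbian(B, X, sB, tauB)
    steps += 1

print("synchronized after", steps, "exchanged inputs; key prefix:", A.flatten()[:10])
```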
Chen, Weisheng
2009-07-01
This paper focuses on the problem of adaptive neural network tracking control for a class of discrete-time pure-feedback systems with unknown control direction under amplitude and rate actuator constraints. Two novel state-feedback and output-feedback dynamic control laws are established where the function tanh(.) is employed to solve the saturation constraint problem. Implicit function theorem and mean value theorem are exploited to deal with non-affine variables that are used as actual control. Radial basis function neural networks are used to approximate the desired input function. Discrete Nussbaum gain is used to estimate the unknown sign of control gain. The uniform boundedness of all closed-loop signals is guaranteed. The tracking error is proved to converge to a small residual set around the origin. A simulation example is provided to illustrate the effectiveness of control schemes proposed in this paper.
Adaptive nonlinear polynomial neural networks for control of boundary layer/structural interaction
NASA Technical Reports Server (NTRS)
Parker, B. Eugene, Jr.; Cellucci, Richard L.; Abbott, Dean W.; Barron, Roger L.; Jordan, Paul R., III; Poor, H. Vincent
1993-01-01
The acoustic pressures developed in a boundary layer can interact with an aircraft panel to induce significant vibration in the panel. Such vibration is undesirable due to the aerodynamic drag and structure-borne cabin noises that result. The overall objective of this work is to develop effective and practical feedback control strategies for actively reducing this flow-induced structural vibration. This report describes the results of initial evaluations using polynomial, neural network-based, feedback control to reduce flow induced vibration in aircraft panels due to turbulent boundary layer/structural interaction. Computer simulations are used to develop and analyze feedback control strategies to reduce vibration in a beam as a first step. The key differences between this work and that going on elsewhere are as follows: that turbulent and transitional boundary layers represent broadband excitation and thus present a more complex stochastic control scenario than that of narrow band (e.g., laminar boundary layer) excitation; and secondly, that the proposed controller structures are adaptive nonlinear infinite impulse response (IIR) polynomial neural network, as opposed to the traditional adaptive linear finite impulse response (FIR) filters used in most studies to date. The controllers implemented in this study achieved vibration attenuation of 27 to 60 dB depending on the type of boundary layer established by laminar, turbulent, and intermittent laminar-to-turbulent transitional flows. Application of multi-input, multi-output, adaptive, nonlinear feedback control of vibration in aircraft panels based on polynomial neural networks appears to be feasible today. Plans are outlined for Phase 2 of this study, which will include extending the theoretical investigation conducted in Phase 1 and verifying the results in a series of laboratory experiments involving both beam and plate models.
Bidirectional neural interface: Closed-loop feedback control for hybrid neural systems.
Chou, Zane; Lim, Jeffrey; Brown, Sophie; Keller, Melissa; Bugbee, Joseph; Broccard, Frédéric D; Khraiche, Massoud L; Silva, Gabriel A; Cauwenberghs, Gert
2015-01-01
Closed-loop neural prostheses enable bidirectional communication between the biological and artificial components of a hybrid system. However, a major challenge in this field is the limited understanding of how these components, the two separate neural networks, interact with each other. In this paper, we propose an in vitro model of a closed-loop system that allows for easy experimental testing and modification of both biological and artificial network parameters. The interface closes the system loop in real time by stimulating each network based on recorded activity of the other network, within preset parameters. As a proof of concept we demonstrate that the bidirectional interface is able to establish and control network properties, such as synchrony, in a hybrid system of two neural networks significantly more effectively than the same system without the interface or with unidirectional alternatives. This success holds promise for the application of closed-loop systems in neural prostheses, brain-machine interfaces, and drug testing.
Long, Lijun; Zhao, Jun
2015-07-01
This paper investigates the problem of adaptive neural tracking control via output-feedback for a class of switched uncertain nonlinear systems without the measurements of the system states. The unknown control signals are approximated directly by neural networks. A novel adaptive neural control technique for the problem studied is set up by exploiting the average dwell time method and backstepping. A switched filter and different update laws are designed to reduce the conservativeness caused by adoption of a common observer and a common update law for all subsystems. The proposed controllers of subsystems guarantee that all closed-loop signals remain bounded under a class of switching signals with average dwell time, while the output tracking error converges to a small neighborhood of the origin. As an application of the proposed design method, adaptive output feedback neural tracking controllers for a mass-spring-damper system are constructed.
A neural network controller of a flotation process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durao, F.; Cortez, L.
1995-12-31
The dynamic control of a froth flotation section is simulated through a neural network feedback controller, trained in order to stabilize the concentrate metal grade and recovery by applying random step changes to the feed metal grade. The results of the application example show that this controller seems to be sufficiently robust and a good alternative to handle a non-linear process.
Multitask neurovision processor with extensive feedback and feedforward connections
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1991-11-01
A multi-task neuro-vision processor which performs a variety of information processing operations associated with the early stages of biological vision is presented. The network architecture of this neuro-vision processor, called the positive-negative (PN) neural processor, is loosely based on the neural activity fields exhibited by thalamic and cortical nervous tissue layers. The computational operation performed by the processor arises from the strength of the recurrent feedback among the numerous positive and negative neural computing units. By adjusting the feedback connections it is possible to generate diverse dynamic behavior that may be used for short-term visual memory (STVM), spatio-temporal filtering (STF), and pulse frequency modulation (PFM). The information attributes that are to be processed may be regulated by modifying the feedforward connections from the signal space to the neural processor.
Hammer, Rubi; Tennekoon, Michael; Cooke, Gillian E; Gayda, Jessica; Stein, Mark A; Booth, James R
2015-08-01
We tested the interactive effect of feedback and reward on visuospatial working memory in children with ADHD. Seventeen boys with ADHD and 17 Normal Control (NC) boys underwent functional magnetic resonance imaging (fMRI) while performing four visuospatial 2-back tasks that required monitoring the spatial location of letters presented on a display. Tasks varied in reward size (large; small) and feedback availability (no-feedback; feedback). While the performance of NC boys was high in all conditions, boys with ADHD exhibited higher performance (similar to those of NC boys) only when they received feedback associated with large-reward. Performance pattern in both groups was mirrored by neural activity in an executive function neural network comprised of few distinct frontal brain regions. Specifically, neural activity in the left and right middle frontal gyri of boys with ADHD became normal-like only when feedback was available, mainly when feedback was associated with large-reward. When feedback was associated with small-reward, or when large-reward was expected but feedback was not available, boys with ADHD exhibited altered neural activity in the medial orbitofrontal cortex and anterior insula. This suggests that contextual support normalizes activity in executive brain regions in children with ADHD, which results in improved working memory. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Synchronization of fractional-order complex-valued neural networks with time delay.
Bao, Haibo; Park, Ju H; Cao, Jinde
2016-09-01
This paper deals with the problem of synchronization of fractional-order complex-valued neural networks with time delays. By means of linear delay feedback control and a fractional-order inequality, sufficient conditions are obtained to guarantee the synchronization of the drive-response systems. Numerical simulations are provided to show the effectiveness of the obtained results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Application of dynamic recurrent neural networks in nonlinear system identification
NASA Astrophysics Data System (ADS)
Du, Yun; Wu, Xueli; Sun, Huiqin; Zhang, Suying; Tian, Qiang
2006-11-01
An adaptive identification method using a simple dynamic recurrent neural network (SRNN) for nonlinear dynamic systems is presented in this paper. Because feeding back the inner states of a dynamic network to describe the nonlinear characteristics of a system reflects the system dynamics more directly, the method derives a recursive prediction error (RPE) learning algorithm for the SRNN and improves the algorithm by simplifying the topological structure of the recursion layer so that it carries no weight values. The simulation results indicate that this kind of neural network can be used in real-time control owing to its fewer weights, simpler learning algorithm, faster identification, and higher model precision. It avoids the intricate training algorithms and slow convergence caused by the complicated topological structure of the usual dynamic recurrent neural network.
Li, Xuanying; Li, Xiaotong; Hu, Cheng
2017-12-01
In this paper, without transforming the second order inertial neural networks into the first order differential systems by some variable substitutions, asymptotic stability and synchronization for a class of delayed inertial neural networks are investigated. Firstly, a new Lyapunov functional is constructed to directly propose the asymptotic stability of the inertial neural networks, and some new stability criteria are derived by means of Barbalat Lemma. Additionally, by designing a new feedback control strategy, the asymptotic synchronization of the addressed inertial networks is studied and some effective conditions are obtained. To reduce the control cost, an adaptive control scheme is designed to realize the asymptotic synchronization. It is noted that the dynamical behaviors of inertial neural networks are directly analyzed in this paper by constructing some new Lyapunov functionals, this is totally different from the traditional reduced-order variable substitution method. Finally, some numerical simulations are given to demonstrate the effectiveness of the derived theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gradient calculations for dynamic recurrent neural networks: a survey.
Pearlmutter, B A
1995-01-01
Surveys learning algorithms for recurrent neural networks with hidden units and puts the various techniques into a common framework. The author discusses fixed point learning algorithms, namely recurrent backpropagation and deterministic Boltzmann machines, and nonfixed point algorithms, namely backpropagation through time, Elman's history cutoff, and Jordan's output feedback architecture. Forward propagation, an on-line technique that uses adjoint equations, and variations thereof, are also discussed. In many cases, the unified presentation leads to generalizations of various sorts. The author discusses advantages and disadvantages of temporally continuous neural networks in contrast to clocked ones, and continues with some "tricks of the trade" for training, using, and simulating continuous time and recurrent neural networks. The author presents some simulations, and at the end, addresses issues of computational complexity and learning speed.
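Of the techniques surveyed here, backpropagation through time is the most widely used; a compact numpy sketch for a tanh recurrent network trained on a toy running-mean task follows. Network sizes, the task and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_h, T, lr = 1, 16, 20, 0.02
Wxh = 0.3 * rng.standard_normal((n_h, n_in))
Whh = 0.1 * rng.standard_normal((n_h, n_h))
Why = 0.3 * rng.standard_normal((1, n_h))

for it in range(2001):
    x = rng.standard_normal((T, n_in))
    target = np.cumsum(x[:, 0]) / np.arange(1, T + 1)        # running mean of the input
    # forward pass, storing hidden states for the backward sweep
    h = np.zeros((T + 1, n_h))
    y = np.zeros(T)
    for t in range(T):
        h[t + 1] = np.tanh(Wxh @ x[t] + Whh @ h[t])
        y[t] = (Why @ h[t + 1])[0]
    # backward pass through time (squared-error loss)
    dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
    dh_next = np.zeros(n_h)
    for t in reversed(range(T)):
        dy = y[t] - target[t]
        dWhy += dy * h[t + 1][None, :]
        dh = dy * Why[0] + dh_next
        dz = (1.0 - h[t + 1] ** 2) * dh                      # backprop through tanh
        dWxh += np.outer(dz, x[t])
        dWhh += np.outer(dz, h[t])
        dh_next = Whh.T @ dz
    for W, dW in ((Wxh, dWxh), (Whh, dWhh), (Why, dWhy)):
        W -= lr * dW / T
    if it % 500 == 0:
        print("iter %4d  mse %.4f" % (it, np.mean((y - target) ** 2)))
```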
Effects of Oxytocin and Vasopressin on Preferential Brain Responses to Negative Social Feedback.
Gozzi, Marta; Dashow, Erica M; Thurm, Audrey; Swedo, Susan E; Zink, Caroline F
2017-06-01
Receiving negative social feedback can be detrimental to emotional, cognitive, and physical well-being, and fear of negative social feedback is a prominent feature of mental illnesses that involve social anxiety. A large body of evidence has implicated the neuropeptides oxytocin and vasopressin in the modulation of human neural activity underlying social cognition, including negative emotion processing; however, the influence of oxytocin and vasopressin on neural activity elicited during negative social evaluation remains unknown. Here 21 healthy men underwent functional magnetic resonance imaging in a double-blind, placebo-controlled, crossover design to determine how intranasally administered oxytocin and vasopressin modulated neural activity when receiving negative feedback on task performance from a study investigator. We found that under placebo, a preferential response to negative social feedback compared with positive social feedback was evoked in brain regions putatively involved in theory of mind (temporoparietal junction), pain processing (anterior insula and supplementary motor area), and identification of emotionally important visual cues in social perception (right fusiform). These activations weakened with oxytocin and vasopressin administration such that neural responses to receiving negative social feedback were not significantly greater than positive social feedback. Our results show effects of both oxytocin and vasopressin on the brain network involved in negative social feedback, informing the possible use of a pharmacological approach targeting these regions in multiple disorders with impairments in social information processing.
A feedback model of visual attention.
Spratling, M W; Johnson, M H
2004-03-01
Feedback connections are a prominent feature of cortical anatomy and are likely to have a significant functional role in neural information processing. We present a neural network model of cortical feedback that successfully simulates neurophysiological data associated with attention. In this domain, our model can be considered a more detailed, and biologically plausible, implementation of the biased competition model of attention. However, our model is more general as it can also explain a variety of other top-down processes in vision, such as figure/ground segmentation and contextual cueing. This model thus suggests that a common mechanism, involving cortical feedback pathways, is responsible for a range of phenomena and provides a unified account of currently disparate areas of research.
Abdurahman, Abdujelil; Jiang, Haijun; Rahman, Kaysar
2015-12-01
This paper deals with the problem of function projective synchronization for a class of memristor-based Cohen-Grossberg neural networks with time-varying delays. Based on the theory of differential equations with discontinuous right-hand side, some novel criteria are obtained to realize the function projective synchronization of addressed networks by combining open loop control and linear feedback control. As some special cases, several control strategies are given to ensure the realization of complete synchronization, anti-synchronization and the stabilization of the considered memristor-based Cohen-Grossberg neural network. Finally, a numerical example and its simulations are provided to demonstrate the effectiveness of the obtained results.
Expected Number of Fixed Points in Boolean Networks with Arbitrary Topology.
Mori, Fumito; Mochizuki, Atsushi
2017-07-14
Boolean network models describe genetic, neural, and social dynamics in complex networks, where the dynamics depend generally on network topology. Fixed points in a genetic regulatory network are typically considered to correspond to cell types in an organism. We prove that the expected number of fixed points in a Boolean network, with Boolean functions drawn from probability distributions that are not required to be uniform or identical, is one, and is independent of network topology if only a feedback arc set satisfies a stochastic neutrality condition. We also demonstrate that the expected number is increased by the predominance of positive feedback in a cycle.
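The result is easy to check numerically: for small random Boolean networks whose truth tables have independently drawn, biased (non-uniform) entries, the number of fixed points averages to one irrespective of topology. The sketch below is an illustrative Monte Carlo check, not the paper's proof; the network size, in-degree and bias are arbitrary choices.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def count_fixed_points(n, k, bias):
    """Random n-node network; each node reads k random inputs through a random
    Boolean function whose outputs are 1 with probability `bias`."""
    inputs = [rng.choice(n, size=k, replace=False) for _ in range(n)]
    tables = [rng.random(2 ** k) < bias for _ in range(n)]       # random truth tables
    count = 0
    for state in itertools.product((0, 1), repeat=n):
        s = np.array(state)
        nxt = [int(tables[i][int("".join(map(str, s[inputs[i]])), 2)]) for i in range(n)]
        count += int(np.array_equal(nxt, s))
    return count

samples = [count_fixed_points(n=8, k=3, bias=0.3) for _ in range(300)]
print("mean number of fixed points:", np.mean(samples))   # expected to be close to 1
```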
Network interactions underlying mirror feedback in stroke: A dynamic causal modeling study.
Saleh, Soha; Yarossi, Mathew; Manuweera, Thushini; Adamovich, Sergei; Tunik, Eugene
2017-01-01
Mirror visual feedback (MVF) is potentially a powerful tool to facilitate recovery of disordered movement and stimulate activation of under-active brain areas due to stroke. The neural mechanisms underlying MVF have therefore been a focus of recent inquiry. Although it is known that sensorimotor areas can be activated via mirror feedback, the network interactions driving this effect remain unknown. The aim of the current study was to fill this gap by using dynamic causal modeling to test the interactions between regions in the frontal and parietal lobes that may be important for modulating the activation of the ipsilesional motor cortex during mirror visual feedback of unaffected hand movement in stroke patients. Our intent was to distinguish between two theoretical neural mechanisms that might mediate ipsilateral activation in response to mirror-feedback: transfer of information between bilateral motor cortices versus recruitment of regions comprising an action observation network which in turn modulate the motor cortex. In an event-related fMRI design, fourteen chronic stroke subjects performed goal-directed finger flexion movements with their unaffected hand while observing real-time visual feedback of the corresponding (veridical) or opposite (mirror) hand in virtual reality. Among 30 plausible network models that were tested, the winning model revealed significant mirror feedback-based modulation of the ipsilesional motor cortex arising from the contralesional parietal cortex, in a region along the rostral extent of the intraparietal sulcus. No winning model was identified for the veridical feedback condition. We discuss our findings in the context of supporting the latter hypothesis, that mirror feedback-based activation of motor cortex may be attributed to engagement of a contralateral (contralesional) action observation network. These findings may have important implications for identifying putative cortical areas, which may be targeted with non-invasive brain stimulation as a means of potentiating the effects of mirror training.
Li, Jiarong; Jiang, Haijun; Hu, Cheng; Yu, Zhiyong
2018-03-01
This paper is devoted to the exponential synchronization, finite time synchronization, and fixed-time synchronization of Cohen-Grossberg neural networks (CGNNs) with discontinuous activations and time-varying delays. A discontinuous feedback controller and a novel adaptive feedback controller are designed to realize global exponential synchronization, finite time synchronization and fixed-time synchronization by adjusting the values of the parameters ω in the controller. Furthermore, the settling time of the fixed-time synchronization derived in this paper is less conservative and more accurate. Finally, some numerical examples are provided to show the effectiveness and flexibility of the results derived in this paper. Copyright © 2018 Elsevier Ltd. All rights reserved.
Neural networks for feedback feedforward nonlinear control systems.
Parisini, T; Zoppoli, R
1994-01-01
This paper deals with the problem of designing feedback feedforward control strategies to drive the state of a dynamic system (in general, nonlinear) so as to track any desired trajectory joining the points of given compact sets, while minimizing a certain cost function (in general, nonquadratic). Due to the generality of the problem, conventional methods are difficult to apply. Thus, an approximate solution is sought by constraining control strategies to take on the structure of multilayer feedforward neural networks. After discussing the approximation properties of neural control strategies, a particular neural architecture is presented, which is based on what has been called the "linear-structure preserving principle". The original functional problem is then reduced to a nonlinear programming one, and backpropagation is applied to derive the optimal values of the synaptic weights. Recursive equations to compute the gradient components are presented, which generalize the classical adjoint system equations of N-stage optimal control theory. Simulation results related to nonlinear nonquadratic problems show the effectiveness of the proposed method.
Reduced Order Adaptive Controllers for Distributed Parameter Systems
2005-09-01
Topics include neural network adaptive output feedback control for intensive care unit sedation and intraoperative anesthesia, and neural network control of depth of anesthesia for noncardiac surgery [C3, J15], extending [C8, J9, J10]; modelling and vibration control; and work on neural adaptive control for intensive care unit sedation and operating room hypnosis submitted to a Special Issue of the SIAM Journal on Control and Optimization.
Wang, Leimin; Zeng, Zhigang; Ge, Ming-Feng; Hu, Junhao
2018-05-02
This paper deals with the stabilization problem of memristive recurrent neural networks with inertial items, discrete delays, bounded and unbounded distributed delays. First, for inertial memristive recurrent neural networks (IMRNNs) with second-order derivatives of states, an appropriate variable substitution method is invoked to transfer IMRNNs into a first-order differential form. Then, based on nonsmooth analysis theory, several algebraic criteria are established for the global stabilizability of IMRNNs under proposed feedback control, where the cases with both bounded and unbounded distributed delays are successfully addressed. Finally, the theoretical results are illustrated via the numerical simulations. Copyright © 2018 Elsevier Ltd. All rights reserved.
de Lamare, Rodrigo C; Sampaio-Neto, Raimundo
2008-11-01
A space-time adaptive decision feedback (DF) receiver using recurrent neural networks (RNNs) is proposed for joint equalization and interference suppression in direct-sequence code-division multiple-access (DS-CDMA) systems equipped with antenna arrays. The proposed receiver structure employs dynamically driven RNNs in the feedforward section for equalization and multiaccess interference (MAI) suppression and a finite impulse response (FIR) linear filter in the feedback section for performing interference cancellation. A data selective gradient algorithm, based upon the set-membership (SM) design framework, is proposed for the estimation of the coefficients of RNN structures and is applied to the estimation of the parameters of the proposed neural receiver structure. Simulation results show that the proposed techniques achieve significant performance gains over existing schemes.
Du, Jialu; Hu, Xin; Liu, Hongbo; Chen, C L Philip
2015-11-01
This paper develops an adaptive robust output feedback control scheme for dynamically positioned ships with unavailable velocities and unknown dynamic parameters in an unknown time-variant disturbance environment. The controller is designed by incorporating the high-gain observer and radial basis function (RBF) neural networks in vectorial backstepping method. The high-gain observer provides the estimations of the ship position and heading as well as velocities. The RBF neural networks are employed to compensate for the uncertainties of ship dynamics. The adaptive laws incorporating a leakage term are designed to estimate the weights of RBF neural networks and the bounds of unknown time-variant environmental disturbances. In contrast to the existing results of dynamic positioning (DP) controllers, the proposed control scheme relies only on the ship position and heading measurements and does not require a priori knowledge of the ship dynamics and external disturbances. By means of Lyapunov functions, it is theoretically proved that our output feedback controller can control a ship's position and heading to the arbitrarily small neighborhood of the desired target values while guaranteeing that all signals in the closed-loop DP control system are uniformly ultimately bounded. Finally, simulations involving two ships are carried out, and simulation results demonstrate the effectiveness of the proposed control scheme.
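A stripped-down sketch of the RBF-network compensation idea with a leakage term in the adaptive law, of the generic form W_dot = Gamma*(phi(x)*e - sigma*W), applied here to a scalar first-order plant rather than the full ship model; the plant, gains, basis centers and the unknown nonlinearity are all illustrative assumptions.

```python
import numpy as np

centers = np.linspace(-2, 2, 15)
width = 0.4
Gamma, sigma_leak = 5.0, 0.01
W = np.zeros_like(centers)

def phi(x):
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))   # Gaussian RBF features

def unknown_dynamics(x):          # the uncertainty the network must compensate
    return 0.8 * np.sin(2 * x) + 0.3 * x ** 2

dt = 1e-3
x, x_ref = 0.0, 1.0
for step in range(20000):
    e = x - x_ref
    u = -3.0 * e - W @ phi(x)                 # feedback plus NN compensation
    x += dt * (unknown_dynamics(x) + u)       # simple first-order plant x_dot = f(x) + u
    W += dt * Gamma * (phi(x) * e - sigma_leak * W)   # adaptive law with leakage

print("tracking error: %.4f   NN estimate vs truth at x_ref: %.3f / %.3f"
      % (e, W @ phi(x_ref), unknown_dynamics(x_ref)))
```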
Neural network-based optimal adaptive output feedback control of a helicopter UAV.
Nodland, David; Zargarzadeh, Hassan; Jagannathan, Sarangapani
2013-07-01
Helicopter unmanned aerial vehicles (UAVs) are widely used for both military and civilian operations. Because the helicopter UAVs are underactuated nonlinear mechanical systems, high-performance controller design for them presents a challenge. This paper introduces an optimal controller design via an output feedback for trajectory tracking of a helicopter UAV, using a neural network (NN). The output-feedback control system utilizes the backstepping methodology, employing kinematic and dynamic controllers and an NN observer. The online approximator-based dynamic controller learns the infinite-horizon Hamilton-Jacobi-Bellman equation in continuous time and calculates the corresponding optimal control input by minimizing a cost function, forward-in-time, without using the value and policy iterations. Optimal tracking is accomplished by using a single NN utilized for the cost function approximation. The overall closed-loop system stability is demonstrated using Lyapunov analysis. Finally, simulation results are provided to demonstrate the effectiveness of the proposed control design for trajectory tracking.
Reward-based training of recurrent neural networks for cognitive and value-based tasks
Song, H Francis; Yang, Guangyu R; Wang, Xiao-Jing
2017-01-01
Trained neural network models, which exhibit features of neural activity recorded from behaving animals, may provide insights into the circuit mechanisms of cognitive functions through systematic analysis of network activity and connectivity. However, in contrast to the graded error signals commonly used to train networks through supervised learning, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when optimal behavior depends on an animal’s internal judgment of confidence or subjective preferences. Here, we implement reward-based training of recurrent neural networks in which a value network guides learning by using the activity of the decision network to predict future reward. We show that such models capture behavioral and electrophysiological findings from well-known experimental paradigms. Our work provides a unified framework for investigating diverse cognitive and value-based computations, and predicts a role for value representation that is essential for learning, but not executing, a task. DOI: http://dx.doi.org/10.7554/eLife.21492.001 PMID:28084991
Neural dynamic optimization for control systems. I. Background.
Seong, C Y; Widrow, B
2001-01-01
The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper mainly describes the background and motivations for the development of NDO, while the two other subsequent papers of this topic present the theory of NDO and demonstrate the method with several applications including control of autonomous vehicles and of a robot arm, respectively.
Neural dynamic optimization for control systems.III. Applications.
Seong, C Y; Widrow, B
2001-01-01
For pt.II. see ibid., p. 490-501. The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper demonstrates NDO with several applications including control of autonomous vehicles and of a robot-arm, while the two other companion papers of this topic describe the background for the development of NDO and present the theory of the method, respectively.
Neural dynamic optimization for control systems.II. Theory.
Seong, C Y; Widrow, B
2001-01-01
The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper mainly describes the theory of NDO, while the two other companion papers of this topic explain the background for the development of NDO and demonstrate the method with several applications including control of autonomous vehicles and of a robot arm, respectively.
Chaotic simulated annealing by a neural network with a variable delay: design and application.
Chen, Shyan-Shiou
2011-10-01
In this paper, we have three goals: the first is to delineate the advantages of a variably delayed system, the second is to find a more intuitive Lyapunov function for a delayed neural network, and the third is to design a delayed neural network for a quadratic cost function. For delayed neural networks, most researchers construct a Lyapunov function based on the linear matrix inequality (LMI) approach. However, that approach is not intuitive. We provide an alternative candidate Lyapunov function for a delayed neural network. On the other hand, if we are first given a quadratic cost function, we can construct a delayed neural network by suitably dividing the second-order term into two parts: a self-feedback connection weight and a delayed connection weight. To demonstrate the advantage of a variably delayed neural network, we propose a transiently chaotic neural network with variable delay and show numerically that the model should possess a better searching ability than Chen-Aihara's model, Wang's model, and Zhao's model. We discuss both the chaotic and the convergent phases. During the chaotic phase, we simply present bifurcation diagrams for a single neuron with a constant delay and with a variable delay. We show that the variably delayed model possesses the stochastic property and chaotic wandering. During the convergent phase, we not only provide a novel Lyapunov function for neural networks with a delay (the Lyapunov function is independent of the LMI approach) but also establish a correlation between the Lyapunov function for a delayed neural network and an objective function for the traveling salesman problem. © 2011 IEEE
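The division of the quadratic cost's second-order term mentioned above can be sketched directly: assign the diagonal of the Hessian to self-feedback weights and the off-diagonal part to the delayed connections, so that the network's drive reduces to the cost gradient once the delayed state catches up with the current one. The matrices, step size and one-step delay handling below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# quadratic cost E(x) = 0.5 * x'Qx + b'x
Q = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
b = np.array([-1.0, 0.5, 0.2])

D = np.diag(np.diag(Q))        # self-feedback connection weights
W = Q - D                      # delayed connection weights (off-diagonal couplings)

x = np.array([0.3, -0.1, 0.2])
grad = Q @ x + b
split = D @ x + W @ x + b      # equals the gradient when the delayed state equals x(t)
print("decomposition matches gradient:", np.allclose(grad, split))

# gradient-like descent in which the off-diagonal part acts through a one-step delay
x_prev = x.copy()
for _ in range(200):
    x_new = x - 0.1 * (D @ x + W @ x_prev + b)
    x_prev, x = x, x_new
print("delayed-gradient iterate:", np.round(x, 4))
print("exact minimizer:        ", np.round(np.linalg.solve(Q, -b), 4))
```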
Flight control with adaptive critic neural network
NASA Astrophysics Data System (ADS)
Han, Dongchen
2001-10-01
In this dissertation, the adaptive critic neural network technique is applied to solve complex nonlinear system control problems. Based on dynamic programming, the adaptive critic neural network can embed the optimal solution into a neural network. Though trained off-line, the neural network forms a real-time feedback controller. Because of its general interpolation properties, the neurocontroller has inherent robustness. The problems solved here are an agile missile control for U.S. Air Force and a midcourse guidance law for U.S. Navy. In the first three papers, the neural network was used to control an air-to-air agile missile to implement a minimum-time heading-reverse in a vertical plane corresponding to the following conditions: a system without constraint, a system with control inequality constraint, and a system with state inequality constraint. While the agile missile is a one-dimensional problem, the midcourse guidance law is the first test-bed for a multiple-dimensional problem. In the fourth paper, the neurocontroller is synthesized to guide a surface-to-air missile to a fixed final condition, and to a flexible final condition from a variable initial condition. In order to evaluate the adaptive critic neural network approach, the numerical solutions for these cases are also obtained by solving a two-point boundary value problem with a shooting method. All of the results showed that the adaptive critic neural network could solve complex nonlinear system control problems.
Li, Yongming; Tong, Shaocheng
The problem of active fault-tolerant control (FTC) is investigated for the large-scale nonlinear systems in nonstrict-feedback form. The nonstrict-feedback nonlinear systems considered in this paper consist of unstructured uncertainties, unmeasured states, unknown interconnected terms, and actuator faults (e.g., bias fault and gain fault). A state observer is designed to solve the unmeasurable state problem. Neural networks (NNs) are used to identify the unknown lumped nonlinear functions so that the problems of unstructured uncertainties and unknown interconnected terms can be solved. By combining the adaptive backstepping design principle with the combination Nussbaum gain function property, a novel NN adaptive output-feedback FTC approach is developed. The proposed FTC controller can guarantee that all signals in all subsystems are bounded, and the tracking errors for each subsystem converge to a small neighborhood of zero. Finally, numerical results of practical examples are presented to further demonstrate the effectiveness of the proposed control strategy.
A neural circuitry that emphasizes spinal feedback generates diverse behaviours of human locomotion
Song, Seungmoon; Geyer, Hartmut
2015-01-01
Neural networks along the spinal cord contribute substantially to generating locomotion behaviours in humans and other legged animals. However, the neural circuitry involved in this spinal control remains unclear. We here propose a specific circuitry that emphasizes feedback integration over central pattern generation. The circuitry is based on neurophysiologically plausible muscle-reflex pathways that are organized in 10 spinal modules realizing limb functions essential to legged systems in stance and swing. These modules are combined with a supraspinal control layer that adjusts the desired foot placements and selects the leg that is to transition into swing control during double support. Using physics-based simulation, we test the proposed circuitry in a neuromuscular human model that includes neural transmission delays, musculotendon dynamics and compliant foot–ground contacts. We find that the control network is sufficient to compose steady and transitional 3-D locomotion behaviours including walking and running, acceleration and deceleration, slope and stair negotiation, turning, and deliberate obstacle avoidance. The results suggest feedback integration to be functionally more important than central pattern generation in human locomotion across behaviours. In addition, the proposed control architecture may serve as a guide in the search for the neurophysiological origin and circuitry of spinal control in humans. PMID:25920414
Cross-entropy optimization for neuromodulation.
Brar, Harleen K; Yunpeng Pan; Mahmoudi, Babak; Theodorou, Evangelos A
2016-08-01
This study presents a reinforcement learning approach for the optimization of the proportional-integral gains of the feedback controller represented in a computational model of epilepsy. The chaotic oscillator model provides a feedback control systems view of the dynamics of an epileptic brain with an internal feedback controller representative of the natural seizure suppression mechanism within the brain circuitry. Normal and pathological brain activity is simulated in this model by adjusting the feedback gain values of the internal controller. With insufficient gains, the internal controller cannot provide enough feedback to the brain dynamics, causing an increase in correlation between different brain sites. This increase in synchronization results in the destabilization of the brain dynamics, which is representative of an epileptic seizure. To provide compensation for an insufficient internal controller, an external controller is designed using a proportional-integral feedback control strategy. A cross-entropy optimization algorithm is applied to the chaotic oscillator network model to learn the optimal feedback gains for the external controller instead of hand-tuning the gains to provide sufficient control to the pathological brain and prevent seizure generation. The correlation between the dynamics of neural activity within different brain sites is calculated for experimental data to show similar dynamics of epileptic neural activity as simulated by the network of chaotic oscillators.
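To make the optimization loop concrete, the following minimal sketch applies the cross-entropy method to proportional-integral gains. The plant here is a toy first-order system rather than the chaotic oscillator network from the study, and the cost function, population size, and elite fraction are illustrative assumptions.

```python
import numpy as np

def pi_tracking_cost(gains, setpoint=1.0, dt=0.01, steps=500):
    """Integrated squared tracking error of a toy first-order plant under PI control."""
    kp, ki = gains
    x, integ, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = setpoint - x
        integ += err * dt
        u = kp * err + ki * integ          # PI control law
        x += dt * (-x + u)                 # assumed plant: dx/dt = -x + u
        cost += err ** 2 * dt
    return cost

def cross_entropy_pi(n_iter=30, pop=50, elite_frac=0.2, seed=0):
    """Cross-entropy method: sample gains, keep the elites, refit the sampling distribution."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.array([1.0, 1.0]), np.array([2.0, 2.0])
    n_elite = int(pop * elite_frac)
    for _ in range(n_iter):
        samples = np.abs(rng.normal(mu, sigma, size=(pop, 2)))   # keep gains positive
        costs = np.array([pi_tracking_cost(s) for s in samples])
        elites = samples[np.argsort(costs)[:n_elite]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu

print("estimated PI gains (kp, ki):", cross_entropy_pi())
```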
Multi-layer neural networks for robot control
NASA Technical Reports Server (NTRS)
Pourboghrat, Farzad
1989-01-01
Two neural learning controller designs for manipulators are considered. The first design is based on a neural inverse-dynamics system. The second is the combination of the first one with a neural adaptive state feedback system. Both types of controllers enable the manipulator to perform any given task very well after a period of training and to do other untrained tasks satisfactorily. The second design also enables the manipulator to compensate for unpredictable perturbations.
Space shuttle main engine fault detection using neural networks
NASA Technical Reports Server (NTRS)
Bishop, Thomas; Greenwood, Dan; Shew, Kenneth; Stevenson, Fareed
1991-01-01
A method for on-line Space Shuttle Main Engine (SSME) anomaly detection and fault typing using a feedback neural network is described. The method involves the computation of features representing time-variance of SSME sensor parameters, using historical test case data. The network is trained, using backpropagation, to recognize a set of fault cases. The network is then able to diagnose new fault cases correctly. An essential element of the training technique is the inclusion of randomly generated data along with the real data, in order to span the entire input space of potential non-nominal data.
2012-01-01
Background: Synchronized bursting activity (SBA) is a remarkable dynamical behavior in both ex vivo and in vivo neural networks. Investigations of the underlying structural characteristics associated with SBA are crucial to understanding the system-level regulatory mechanism of neural network behaviors. Results: In this study, artificial pulsed neural networks were established using spike response models to capture fundamental dynamics of large scale ex vivo cortical networks. Network simulations with synaptic parameter perturbations showed the following two findings. (i) In a network with an excitatory ratio (ER) of 80-90%, its connective ratio (CR) was within a range of 10-30% when the occurrence of SBA reached the highest expectation. This result was consistent with the experimental observation in ex vivo neuronal networks, which were reported to possess a matured inhibitory synaptic ratio of 10-20% and a CR of 10-30%. (ii) No SBA occurred when a network did not contain any all-positive-interaction feedback loop (APFL) motif. In a neural network containing APFLs, the number of APFLs presented an optimal range corresponding to the maximal occurrence of SBA, which was very similar to the optimal CR. Conclusions: In a neural network, the evolutionarily selected CR (10-30%) optimizes the occurrence of SBA, and the APFL serves as a pivotal network motif required to maximize the occurrence of SBA. PMID:22462685
Neural Networks For Demodulation Of Phase-Modulated Signals
NASA Technical Reports Server (NTRS)
Altes, Richard A.
1995-01-01
Hopfield neural networks proposed for demodulating quadrature phase-shift-keyed (QPSK) signals carrying digital information. Networks solve nonlinear integral equations that prior demodulation circuits cannot solve. Consists of set of N operational amplifiers connected in parallel, with weighted feedback from output terminal of each amplifier to input terminals of other amplifiers. Used to solve signal processing problems. Implemented as analog very-large-scale integrated circuit that achieves rapid convergence. Alternatively, implemented as digital simulation of such circuit. Also used to improve phase estimation performance over that of phase-locked loop.
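As a point of reference for the feedback structure described above, here is a minimal discrete Hopfield-style network in software: N units with symmetric weighted feedback and no self-connections, relaxing to a stored pattern. It is only an illustration of Hopfield dynamics with arbitrary bipolar patterns, not the analog VLSI demodulator or the QPSK integral equations themselves.

```python
import numpy as np

# Minimal discrete Hopfield network: weighted feedback among N units,
# illustrating the kind of parallel relaxation the analog circuit performs.
rng = np.random.default_rng(1)
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1]])
N = patterns.shape[1]

# Hebbian outer-product weights with zero self-feedback
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0.0)

def recall(state, sweeps=10):
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):              # asynchronous updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = patterns[0].copy()
noisy[[1, 4]] *= -1                               # corrupt two bits
print("recovered:", recall(noisy))
print("target   :", patterns[0])
```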
Electronic neural network for solving traveling salesman and similar global optimization problems
NASA Technical Reports Server (NTRS)
Thakoor, Anilkumar P. (Inventor); Moopenn, Alexander W. (Inventor); Duong, Tuan A. (Inventor); Eberhardt, Silvio P. (Inventor)
1993-01-01
This invention is a novel high-speed neural network based processor for solving the 'traveling salesman' and other global optimization problems. It comprises a novel hybrid architecture employing a binary synaptic array whose embodiment incorporates the fixed rules of the problem, such as the number of cities to be visited. The array is prompted by analog voltages representing variables such as distances. The processor incorporates two interconnected feedback networks, each of which solves part of the problem independently and simultaneously, yet which exchange information dynamically.
Ehret, Phillip J; Monroe, Brian M; Read, Stephen J
2015-05-01
We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory. © 2014 by the Society for Personality and Social Psychology, Inc.
Cai, Zuowei; Huang, Lihong; Zhang, Lingling
2015-05-01
This paper investigates the problem of exponential synchronization of time-varying delayed neural networks with discontinuous neuron activations. Under the extended Filippov differential inclusion framework, by designing discontinuous state-feedback controller and using some analytic techniques, new testable algebraic criteria are obtained to realize two different kinds of global exponential synchronization of the drive-response system. Moreover, we give the estimated rate of exponential synchronization which depends on the delays and system parameters. The obtained results extend some previous works on synchronization of delayed neural networks not only with continuous activations but also with discontinuous activations. Finally, numerical examples are provided to show the correctness of our analysis via computer simulations. Our method and theoretical results have a leading significance in the design of synchronized neural network circuits involving discontinuous factors and time-varying delays. Copyright © 2015 Elsevier Ltd. All rights reserved.
Back-propagation learning of infinite-dimensional dynamical systems.
Tokuda, Isao; Tokunaga, Ryuji; Aihara, Kazuyuki
2003-10-01
This paper presents numerical studies of applying back-propagation learning to a delayed recurrent neural network (DRNN). The DRNN is a continuous-time recurrent neural network having time delayed feedbacks and the back-propagation learning is to teach spatio-temporal dynamics to the DRNN. Since the time-delays make the dynamics of the DRNN infinite-dimensional, the learning algorithm and the learning capability of the DRNN are different from those of the ordinary recurrent neural network (ORNN) having no time-delays. First, two types of learning algorithms are developed for a class of DRNNs. Then, using chaotic signals generated from the Mackey-Glass equation and the Rössler equations, learning capability of the DRNN is examined. Comparing the learning algorithms, learning capability, and robustness against noise of the DRNN with those of the ORNN and time delay neural network, advantages as well as disadvantages of the DRNN are investigated.
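Since the Mackey-Glass delay-differential equation is used here (and in several other entries) as a benchmark, the short sketch below generates the series by simple Euler integration. The parameter values (tau = 17, beta = 0.2, gamma = 0.1, n = 10) are the commonly used benchmark settings, assumed rather than taken from this particular paper.

```python
import numpy as np

def mackey_glass(n_steps=2000, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    """Euler integration of dx/dt = beta*x(t-tau)/(1 + x(t-tau)^n) - gamma*x(t)."""
    delay = int(tau / dt)
    x = np.full(n_steps + delay, x0)          # constant history before t = 0
    for t in range(delay, n_steps + delay - 1):
        x_tau = x[t - delay]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x[t])
    return x[delay:]

series = mackey_glass()
print(series[:5])
```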
NASA Astrophysics Data System (ADS)
Hardinata, Lingga; Warsito, Budi; Suparti
2018-05-01
The complexity of bankruptcy makes accurate bankruptcy prediction models difficult to achieve. Various prediction models have been developed to improve the accuracy of bankruptcy predictions. Machine learning has been widely used for prediction because of its adaptive capabilities. Artificial Neural Networks (ANNs) are a machine learning approach that has proven able to perform inference tasks such as prediction and classification, especially in data mining. In this paper, we propose the implementation of Jordan Recurrent Neural Networks (JRNN) to classify and predict corporate bankruptcy based on financial ratios. The feedback interconnections in JRNN enable the network to retain important information, allowing it to work more effectively. The result analysis showed that JRNN works very well in bankruptcy prediction, with an average success rate of 81.3785%.
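To illustrate the feedback interconnection that distinguishes a Jordan network, here is a minimal forward-pass sketch in which the previous output is copied into context units and concatenated with the next input. The layer sizes, random weights, and placeholder "financial ratio" inputs are assumptions for illustration; this is not the trained model from the paper.

```python
import numpy as np

class JordanRNN:
    """Minimal Jordan network: the previous output is fed back as context input."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.1, size=(n_hidden, n_in + n_out))  # inputs + context
        self.W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))
        self.context = np.zeros(n_out)

    def step(self, x):
        z = np.concatenate([x, self.context])        # feedback-augmented input
        h = np.tanh(self.W_in @ z)
        y = 1.0 / (1.0 + np.exp(-self.W_out @ h))    # probability-like output
        self.context = y                             # store output for the next step
        return y

net = JordanRNN(n_in=5, n_hidden=8, n_out=1)
for ratios in np.random.default_rng(1).normal(size=(3, 5)):  # placeholder financial ratios
    print(net.step(ratios))
```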
Effects of Response-Driven Feedback in Computer Science Learning
ERIC Educational Resources Information Center
Fernandez Aleman, J. L.; Palmer-Brown, D.; Jayne, C.
2011-01-01
This paper presents the results of a project on generating diagnostic feedback for guided learning in a first-year course on programming and a Master's course on software quality. An online multiple-choice questions (MCQs) system is integrated with neural network-based data analysis. Findings about how students use the system suggest that the…
Tong, Shao Cheng; Li, Yong Ming; Zhang, Hua-Guang
2011-07-01
In this paper, two adaptive neural network (NN) decentralized output feedback control approaches are proposed for a class of uncertain nonlinear large-scale systems with immeasurable states and unknown time delays. Using NNs to approximate the unknown nonlinear functions, an NN state observer is designed to estimate the immeasurable states. By combining the adaptive backstepping technique with decentralized control design principle, an adaptive NN decentralized output feedback control approach is developed. In order to overcome the problem of "explosion of complexity" inherent in the proposed control approach, the dynamic surface control (DSC) technique is introduced into the first adaptive NN decentralized control scheme, and a simplified adaptive NN decentralized output feedback DSC approach is developed. It is proved that the two proposed control approaches can guarantee that all the signals of the closed-loop system are semi-globally uniformly ultimately bounded, and the observer errors and the tracking errors converge to a small neighborhood of the origin. Simulation results are provided to show the effectiveness of the proposed approaches.
Finite-time synchronization for memristor-based neural networks with time-varying delays.
Abdurahman, Abdujelil; Jiang, Haijun; Teng, Zhidong
2015-09-01
A memristive network exhibits state-dependent switching behaviors due to the physical properties of the memristor, which is an ideal tool to mimic the functionalities of the human brain. In this paper, finite-time synchronization is considered for a class of memristor-based neural networks with time-varying delays. Based on the theory of differential equations with discontinuous right-hand side, several new sufficient conditions ensuring the finite-time synchronization of memristor-based chaotic neural networks are obtained by using analysis techniques, a finite-time stability theorem, and by adding a suitable feedback controller. Besides, the upper bounds of the settling time of synchronization are estimated. Finally, a numerical example is given to show the effectiveness and feasibility of the obtained results. Copyright © 2015 Elsevier Ltd. All rights reserved.
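The role of the feedback controller in drive-response synchronization can be sketched generically as below: a response network receives an error-feedback input u = -k e and its state converges toward the drive network's state. This is only a plain linear-feedback illustration on a small smooth Hopfield-type network; the paper's memristive, state-dependent dynamics and its finite-time controller are not reproduced, and all weights and constants are arbitrary.

```python
import numpy as np

# Generic drive-response synchronization sketch: the response network tracks the
# drive network through a simple error-feedback controller u = -k * e.
rng = np.random.default_rng(0)
n, dt, k = 3, 0.01, 8.0
A = rng.normal(scale=1.5, size=(n, n))   # assumed connection weights
f = np.tanh

x = rng.normal(size=n)                   # drive state
y = rng.normal(size=n)                   # response state
for _ in range(2000):
    e = y - x
    u = -k * e                           # feedback controller
    x = x + dt * (-x + A @ f(x))
    y = y + dt * (-y + A @ f(y) + u)
print("final synchronization error:", np.linalg.norm(y - x))
```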
Temporal neural networks and transient analysis of complex engineering systems
NASA Astrophysics Data System (ADS)
Uluyol, Onder
A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the presentation of different time scales through incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
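The gamma memory underlying the LOGF neuron can be sketched as a cascade of leaky taps: each tap is a first-order filter of the previous one, x_k(t) = (1 - mu) x_k(t-1) + mu x_{k-1}(t-1). The snippet below implements just this memory structure on a toy sine input; the full LOGF neuron, its spatial weights, and the Backpropagation-Through-Time training are not reproduced, and the memory order and mu are arbitrary choices.

```python
import numpy as np

def gamma_memory(signal, order=4, mu=0.6):
    """Gamma memory taps: x_k(t) = (1-mu)*x_k(t-1) + mu*x_{k-1}(t-1), with x_0(t) = u(t)."""
    taps = np.zeros((len(signal), order + 1))
    for t, u in enumerate(signal):
        prev = taps[t - 1] if t > 0 else np.zeros(order + 1)
        taps[t, 0] = u
        for k in range(1, order + 1):
            taps[t, k] = (1.0 - mu) * prev[k] + mu * prev[k - 1]
    return taps   # columns are progressively smoothed/delayed versions of the input

u = np.sin(np.linspace(0, 6 * np.pi, 200))
memory = gamma_memory(u)
print(memory[-1])   # the most recent tap values would feed a static neuron layer
```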
Evolving autonomous learning in cognitive networks.
Sheneman, Leigh; Hintze, Arend
2017-12-01
There are two common approaches for optimizing the performance of a machine: genetic algorithms and machine learning. A genetic algorithm is applied over many generations whereas machine learning works by applying feedback until the system meets a performance threshold. These methods have been previously combined, particularly in artificial neural networks using an external objective feedback mechanism. We adapt this approach to Markov Brains, which are evolvable networks of probabilistic and deterministic logic gates. Prior to this work, Markov Brains could only adapt from one generation to the next, so we introduce feedback gates that augment their ability to learn during their lifetime. We show that Markov Brains can incorporate these feedback gates in such a way that they do not rely on an external objective feedback signal, but instead can generate internal feedback that is then used to learn. This results in a more biologically accurate model of the evolution of learning, which will enable us to study the interplay between evolution and learning and could be another step towards autonomously learning machines.
Estimation of Heavy Metals Contamination in the Soil of Zaafaraniya City Using the Neural Network
NASA Astrophysics Data System (ADS)
Ghazi, Farah F.
2018-05-01
The aim of this paper is to estimate heavy metal contamination in soils, which can be used to determine the rate of environmental contamination, using a new technique based on the design of a feedback neural network as an accurate alternative approach. The network is simulated to estimate the concentrations of Cadmium (Cd), Nickel (Ni), Lead (Pb), Zinc (Zn) and Copper (Cu). To show the accuracy and efficiency of the suggested design, we applied the technique in Al-Zafaraniyah in Baghdad city. The results of this paper show that the suggested networks can be successfully applied to the rapid and accurate estimation of heavy metal concentrations.
Wang, Leimin; Shen, Yi; Zhang, Guodong
2016-10-01
This paper is concerned with the synchronization problem for a class of switched neural networks (SNNs) with time-varying delays. First, a new crucial lemma which includes and extends the classical exponential stability theorem is constructed. Then, by using the lemma, new algebraic criteria of ψ-type synchronization (synchronization with a general decay rate) for SNNs are established via the designed nonlinear feedback control. The ψ-type synchronization, which is formulated in a general framework, is obtained by introducing a ψ-type function. It contains exponential synchronization, polynomial synchronization, and other kinds of synchronization as special cases. The results of this paper are general, and they also complement and extend some previous results. Finally, numerical simulations are carried out to demonstrate the effectiveness of the obtained results.
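For orientation, one common way the general-decay notion is formalized in the synchronization literature is sketched below; the exact definition and conditions used in this paper may differ, so this should be read as an illustrative assumption rather than the authors' statement.

```latex
% Sketch of a general-decay (psi-type) synchronization bound on the
% drive-response error e(t) = y(t) - x(t):
\[
  \|e(t)\| \;\le\; M \,\bigl(\psi(t)\bigr)^{-\varepsilon}, \qquad t \ge 0,
\]
% with constants M > 0 and \varepsilon > 0, and \psi a nondecreasing function
% with \psi(0) = 1 and \psi(t) \to \infty. Choosing \psi(t) = e^{t} recovers
% exponential synchronization, while \psi(t) = 1 + t gives polynomial
% synchronization.
```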
SPANNER: A Self-Repairing Spiking Neural Network Hardware Architecture.
Liu, Junxiu; Harkin, Jim; Maguire, Liam P; McDaid, Liam J; Wade, John J
2018-04-01
Recent research has shown that a type of glial cell, the astrocyte, underpins a self-repair mechanism in the human brain, where spiking neurons provide direct and indirect feedback to presynaptic terminals. This feedback modulates the synaptic transmission probability of release (PR). When synaptic faults occur, the neuron becomes silent or near silent due to the low PR of its synapses; the PRs of the remaining healthy synapses are then increased by the indirect feedback from the astrocyte cell. In this paper, a novel hardware architecture of Self-rePAiring spiking Neural NEtwoRk (SPANNER) is proposed, which mimics this self-repairing capability of the human brain. This paper demonstrates that the hardware can self-detect and self-repair synaptic faults without the conventional components for fault detection and fault repair. Experimental results show that SPANNER can maintain the system performance with fault densities of up to 40%, and, more importantly, SPANNER has only a 20% performance degradation when the self-repairing architecture is significantly damaged at a fault density of 80%.
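A toy numerical sketch of the self-repair idea is given below: when a fraction of synapses fault (their PR drops to zero), the PRs of the remaining healthy synapses are scaled up so the neuron's expected drive is restored. The scaling rule, weights, and fault density are assumptions chosen for illustration; this is not the SPANNER hardware repair rule.

```python
import numpy as np

# Toy illustration of astrocyte-style self-repair: faulty synapses lose their
# release probability, and the remaining healthy synapses are boosted so the
# neuron's expected input drive is restored.
rng = np.random.default_rng(0)
n_syn = 10
pr = np.full(n_syn, 0.5)                     # initial release probabilities
weights = rng.uniform(0.5, 1.0, n_syn)
target_drive = np.sum(weights * pr)          # expected drive when healthy

faulty = rng.choice(n_syn, size=4, replace=False)
pr[faulty] = 0.0                             # 40% fault density

healthy = np.setdiff1d(np.arange(n_syn), faulty)
scale = target_drive / np.sum(weights[healthy] * pr[healthy])
pr[healthy] = np.clip(pr[healthy] * scale, 0.0, 1.0)   # indirect-feedback boost

print("restored drive:", np.sum(weights * pr), "target:", target_drive)
```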
DCS-Neural-Network Program for Aircraft Control and Testing
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
2006-01-01
A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (derived from a relatively simple training algorithm that implements a Delaunay triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.
Yu, Zhaoxu; Li, Shugang; Yu, Zhaosheng; Li, Fangfei
2018-04-01
This paper investigates the problem of output feedback adaptive stabilization for a class of nonstrict-feedback stochastic nonlinear systems with both unknown backlashlike hysteresis and unknown control directions. A new linear state transformation is applied to the original system, and then, control design for the new system becomes feasible. By combining the neural network's (NN's) parameterization, variable separation technique, and Nussbaum gain function method, an input-driven observer-based adaptive NN control scheme, which involves only one parameter to be updated, is developed for such systems. All closed-loop signals are bounded in probability and the error signals remain semiglobally bounded in the fourth moment (or mean square). Finally, the effectiveness and the applicability of the proposed control design are verified by two simulation examples.
Buitrago, Jaime; Asfour, Shihab
2017-01-01
Short-term load forecasting is crucial for the operations planning of an electrical grid. Forecasting the next 24 h of electrical load in a grid allows operators to plan and optimize their resources. The purpose of this study is to develop a more accurate short-term load forecasting method utilizing non-linear autoregressive artificial neural networks (ANN) with exogenous multi-variable input (NARX). The proposed implementation of the network is new: the neural network is trained in open-loop using actual load and weather data, and then, the network is placed in closed-loop to generate a forecast using the predicted load as the feedback input. Unlike the existing short-term load forecasting methods using ANNs, the proposed method uses its own output as the input in order to improve the accuracy, thus effectively implementing a feedback loop for the load, making it less dependent on external data. Using the proposed framework, mean absolute percent errors in the forecast in the order of 1% have been achieved, which is a 30% improvement on the average error using feedforward ANNs, ARMAX and state space methods, which can result in large savings by avoiding commissioning of unnecessary power plants. Finally, the New England electrical load data are used to train and validate the forecast prediction.
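The open-loop/closed-loop idea can be sketched with a tiny stand-in model: fit on actual lagged load plus an exogenous input (open loop), then roll the model forward feeding its own predictions back as the lagged load (closed loop). A linear AR-X model and synthetic sinusoidal "load" and "temperature" stand in for the NARX neural network and the New England data, so treat every number here as a placeholder.

```python
import numpy as np

# Open-loop fit on actual lagged load, then closed-loop forecasting where the
# model's own predictions are fed back in place of measured load.
rng = np.random.default_rng(0)
T, lags = 600, 24
temp = 10 + 5 * np.sin(2 * np.pi * np.arange(T) / 24)            # exogenous input
load = 100 + 20 * np.sin(2 * np.pi * np.arange(T) / 24) + rng.normal(0, 1, T)

# Open-loop training: inputs are actual lagged load plus temperature
X = np.array([np.concatenate([load[t - lags:t], [temp[t]]]) for t in range(lags, T)])
y = load[lags:T]
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

# Closed-loop forecasting: predictions are appended to the history and reused
history = list(load[-lags:])
forecast = []
for h in range(24):
    x = np.concatenate([history[-lags:], [temp[h]], [1.0]])       # periodic temp reused
    yhat = float(x @ coef)
    forecast.append(yhat)
    history.append(yhat)                                          # feedback of predicted load
print(np.round(forecast, 1))
```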
Motorized CPM/CAM physiotherapy device with sliding-mode Fuzzy Neural Network control loop.
Ho, Hung-Jung; Chen, Tien-Chi
2009-11-01
Continuous passive motion (CPM) and controllable active motion (CAM) physiotherapy devices promote rehabilitation of damaged joints. This paper presents a computerized CPM/CAM system that obviates the need for mechanical resistance devices such as springs. The system is controlled by a computer which performs sliding-mode Fuzzy Neural Network (FNN) calculations online. CAM-type resistance force is generated by the active performance of an electric motor which is controlled so as to oppose the motion of the patient's leg. A force sensor under the patient's foot on the device pedal provides data for feedback in a sliding-mode FNN control loop built around the motor. Via an active impedance control feedback system, the controller drives the motor to behave similarly to a damped spring by generating and controlling the amplitude and direction of the pedal force in relation to the patient's leg. Experiments demonstrate the high sensitivity and speed of the device. The PC-based feedback nature of the control loop means that sophisticated auto-adaptable CPM/CAM custom-designed physiotherapy becomes possible. The computer base also allows extensive data recording, data analysis and network-connected remote patient monitoring.
Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate
2015-01-01
Walking animals, like insects, with little neural computing can effectively perform complex behaviors. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. The versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a many degrees-of-freedom (DOFs) walking robot is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate in complex environments. We first tested our approach in a physical simulation environment and then applied it to our real biomechanical walking robot AMOSII with 19 DOFs to adaptively avoid obstacles and navigate in the real world. PMID:26528176
Demiral, Şükrü Barış; Golosheykin, Simon; Anokhin, Andrey P
2017-05-01
Detection and evaluation of the mismatch between the intended and actually obtained result of an action (reward prediction error) is an integral component of adaptive self-regulation of behavior. Extensive human and animal research has shown that evaluation of action outcome is supported by a distributed network of brain regions in which the anterior cingulate cortex (ACC) plays a central role, and the integration of distant brain regions into a unified feedback-processing network is enabled by long-range phase synchronization of cortical oscillations in the theta band. Neural correlates of feedback processing are associated with individual differences in normal and abnormal behavior, however, little is known about the role of genetic factors in the cerebral mechanisms of feedback processing. Here we examined genetic influences on functional cortical connectivity related to prediction error in young adult twins (age 18, n=399) using event-related EEG phase coherence analysis in a monetary gambling task. To identify prediction error-specific connectivity pattern, we compared responses to loss and gain feedback. Monetary loss produced a significant increase of theta-band synchronization between the frontal midline region and widespread areas of the scalp, particularly parietal areas, whereas gain resulted in increased synchrony primarily within the posterior regions. Genetic analyses showed significant heritability of frontoparietal theta phase synchronization (24 to 46%), suggesting that individual differences in large-scale network dynamics are under substantial genetic control. We conclude that theta-band synchronization of brain oscillations related to negative feedback reflects genetically transmitted differences in the neural mechanisms of feedback processing. To our knowledge, this is the first evidence for genetic influences on task-related functional brain connectivity assessed using direct real-time measures of neuronal synchronization. Copyright © 2016 Elsevier B.V. All rights reserved.
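As a generic illustration of how theta-band phase synchronization between two channels can be quantified, the sketch below band-pass filters two synthetic signals, extracts instantaneous phase with the Hilbert transform, and computes the mean resultant length of the phase difference (a phase-locking index) averaged over trials. The filter settings and synthetic data are assumptions; the authors' exact event-related coherence pipeline is not reproduced.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Theta-band phase-locking index between two synthetic channels, averaged over
# trials. Illustration only: filter order, band edges, and data are placeholders.
fs, n_trials, n_samp = 250, 50, 500
rng = np.random.default_rng(0)
b, a = butter(4, [4, 8], btype="bandpass", fs=fs)

plv_sum = 0.0
for _ in range(n_trials):
    theta = np.sin(2 * np.pi * 6 * np.arange(n_samp) / fs + rng.uniform(0, 2 * np.pi))
    frontal = theta + 0.5 * rng.normal(size=n_samp)
    parietal = np.roll(theta, 10) + 0.5 * rng.normal(size=n_samp)   # lagged copy
    ph1 = np.angle(hilbert(filtfilt(b, a, frontal)))
    ph2 = np.angle(hilbert(filtfilt(b, a, parietal)))
    plv_sum += np.abs(np.mean(np.exp(1j * (ph1 - ph2))))
print("mean theta phase-locking index:", plv_sum / n_trials)
```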
Functional model of biological neural networks.
Lo, James Ting-Ho
2010-12-01
A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieval, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.
Lakshmanan, Shanmugam; Prakash, Mani; Lim, Chee Peng; Rakkiyappan, Rajan; Balasubramaniam, Pagavathigounder; Nahavandi, Saeid
2018-01-01
In this paper, synchronization of an inertial neural network with time-varying delays is investigated. Based on the variable transformation method, we transform the second-order differential equations into the first-order differential equations. Then, using suitable Lyapunov-Krasovskii functionals and Jensen's inequality, the synchronization criteria are established in terms of linear matrix inequalities. Moreover, a feedback controller is designed to attain synchronization between the master and slave models, and to ensure that the error model is globally asymptotically stable. Numerical examples and simulations are presented to indicate the effectiveness of the proposed method. Besides that, an image encryption algorithm is proposed based on the piecewise linear chaotic map and the chaotic inertial neural network. The chaotic signals obtained from the inertial neural network are utilized for the encryption process. Statistical analyses are provided to evaluate the effectiveness of the proposed encryption algorithm. The results ascertain that the proposed encryption algorithm is efficient and reliable for secure communication applications.
Suzuki, Ikurou; Sugio, Yoshihiro; Moriguchi, Hiroyuki; Jimbo, Yasuhiko; Yasuda, Kenji
2004-07-01
Control over the spatial distribution of individual neurons and the pattern of the neural network provides an important tool for studying information processing pathways during neural network formation. Moreover, knowledge of the direction of synaptic connections between cells in each neural network can provide detailed information on the relationship between forward and feedback signaling. We have developed a method for topographical control of the direction of synaptic connections within a living neuronal network using a new type of individual-cell-based on-chip cell-cultivation system with an agarose microchamber array (AMCA). The advantages of this system include the possibility to control the positions and number of cultured cells as well as flexible control of the direction of elongation of axons through stepwise melting of narrow grooves. Such micrometer-order microchannels are obtained by photo-thermal etching of agarose where a portion of the gel is melted with a 1064-nm infrared laser beam. Using this system, we created a neural network from individual rat hippocampal cells. We were able to control elongation of individual axons during cultivation (from cells contained within the AMCA) by non-destructive stepwise photo-thermal etching. We have demonstrated the potential of our on-chip AMCA cell cultivation system for the controlled development of individual cell-based neural networks.
Orhan, A Emin; Ma, Wei Ji
2017-07-26
Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.
NASA Astrophysics Data System (ADS)
Kwon, Chung-Jin; Kim, Sung-Joong; Han, Woo-Young; Min, Won-Kyoung
2005-12-01
The rotor position and speed estimation of a permanent-magnet synchronous motor (PMSM) was dealt with. By measuring the phase voltages and currents of the PMSM drive, two diagonally recurrent neural network (DRNN) based observers, a neural current observer and a neural velocity observer, were developed. The DRNN, which has self-feedback of the hidden neurons, ensures that its outputs contain the whole past information of the system even if its inputs are only the present states and inputs of the system. Thus the structure of the DRNN may be simpler than that of feedforward and fully recurrent neural networks. If the backpropagation method is used for training the DRNN, the problem of slow convergence arises. In order to reduce this problem, a recursive prediction error (RPE) based learning method for the DRNN was presented. The simulation results show that the proposed approach gives a good estimation of rotor speed and position, and RPE-based training requires a shorter computation time compared to backpropagation-based training.
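The "diagonal" structure means each hidden neuron feeds back only to itself rather than to the whole hidden layer. The sketch below shows such a layer's forward pass with placeholder sizes and random weights (two inputs standing in for measured phase quantities, two outputs standing in for speed and position); the RPE training procedure is not reproduced.

```python
import numpy as np

class DiagonalRNN:
    """Diagonal recurrent layer: each hidden neuron has self-feedback only."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.2, size=(n_hidden, n_in))
        self.w_d = rng.uniform(0.1, 0.9, size=n_hidden)   # diagonal self-feedback gains
        self.W_out = rng.normal(scale=0.2, size=(n_out, n_hidden))
        self.h = np.zeros(n_hidden)

    def step(self, x):
        self.h = np.tanh(self.W_in @ x + self.w_d * self.h)   # self-feedback only
        return self.W_out @ self.h

net = DiagonalRNN(n_in=2, n_hidden=6, n_out=2)   # placeholder observer dimensions
for x in np.random.default_rng(1).normal(size=(3, 2)):
    print(net.step(x))
```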
H∞ output tracking control of discrete-time nonlinear systems via standard neural network models.
Liu, Meiqin; Zhang, Senlin; Chen, Haiyang; Sheng, Weihua
2014-10-01
This brief proposes an output tracking control for a class of discrete-time nonlinear systems with disturbances. A standard neural network model is used to represent discrete-time nonlinear systems whose nonlinearity satisfies the sector conditions. H∞ control performance for the closed-loop system including the standard neural network model, the reference model, and state feedback controller is analyzed using Lyapunov-Krasovskii stability theorem and linear matrix inequality (LMI) approach. The H∞ controller, of which the parameters are obtained by solving LMIs, guarantees that the output of the closed-loop system closely tracks the output of a given reference model well, and reduces the influence of disturbances on the tracking error. Three numerical examples are provided to show the effectiveness of the proposed H∞ output tracking design approach.
Finite time synchronization of memristor-based Cohen-Grossberg neural networks with mixed delays.
Chen, Chuan; Li, Lixiang; Peng, Haipeng; Yang, Yixian
2017-01-01
Finite time synchronization, which means synchronization can be achieved in a settling time, is desirable in some practical applications. However, most of the published results on finite time synchronization don't include delays or only include discrete delays. In view of the fact that distributed delays inevitably exist in neural networks, this paper aims to investigate the finite time synchronization of memristor-based Cohen-Grossberg neural networks (MCGNNs) with both discrete delay and distributed delay (mixed delays). By means of a simple feedback controller and novel finite time synchronization analysis methods, several new criteria are derived to ensure the finite time synchronization of MCGNNs with mixed delays. The obtained criteria are very concise and easy to verify. Numerical simulations are presented to demonstrate the effectiveness of our theoretical results.
Jagannathan, Sarangapani; He, Pingan
2008-12-01
In this paper, a suite of adaptive neural network (NN) controllers is designed to deliver a desired tracking performance for the control of an unknown, second-order, nonlinear discrete-time system expressed in nonstrict feedback form. In the first approach, two feedforward NNs are employed in the controller with tracking error as the feedback variable whereas in the adaptive critic NN architecture, three feedforward NNs are used. In the adaptive critic architecture, two action NNs produce virtual and actual control inputs, respectively, whereas the third critic NN approximates certain strategic utility function and its output is employed for tuning action NN weights in order to attain the near-optimal control action. Both the NN control methods present a well-defined controller design and the noncausal problem in discrete-time backstepping design is avoided via NN approximation. A comparison between the controller methodologies is highlighted. The stability analysis of the closed-loop control schemes is demonstrated. The NN controller schemes do not require an offline learning phase and the NN weights can be initialized at zero or random. Results show that the performance of the proposed controller schemes is highly satisfactory while meeting the closed-loop stability.
Wang, Huanqing; Liu, Peter Xiaoping; Li, Shuai; Wang, Ding
2017-08-29
This paper presents the development of an adaptive neural controller for a class of nonlinear systems with unmodeled dynamics and immeasurable states. An observer is designed to estimate system states. The structure consistency of virtual control signals and the variable partition technique are combined to overcome the difficulties appearing in a nonlower triangular form. An adaptive neural output-feedback controller is developed based on the backstepping technique and the universal approximation property of the radial basis function (RBF) neural networks. By using the Lyapunov stability analysis, the semiglobal uniform ultimate boundedness of all signals within the closed-loop system is guaranteed. The simulation results show that the controlled system converges quickly, and all the signals are bounded. This paper is novel in at least two aspects: 1) an output-feedback control strategy is developed for a class of nonlower triangular nonlinear systems with unmodeled dynamics and 2) the nonlinear disturbances and their bounds are the functions of all states, which is in a more general form than existing results.
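The controller above leans on the universal approximation property of RBF networks, i.e. f(x) is approximated by W'S(x) with Gaussian basis functions S(x). The sketch below fits such an approximator to a toy one-dimensional nonlinearity by least squares; the centers, widths, and target function are illustrative assumptions, and the online adaptive-law tuning used in the paper is not shown.

```python
import numpy as np

# RBF approximation f(x) ~ W^T S(x) with Gaussian basis functions, the kind of
# universal approximator the backstepping design relies on.
def gaussian_basis(x, centers, width=0.5):
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

rng = np.random.default_rng(0)
x_train = np.linspace(-2, 2, 200)
f_true = lambda x: np.sin(2 * x) + 0.3 * x ** 2        # stand-in for the unknown nonlinearity
centers = np.linspace(-2, 2, 15)

S = gaussian_basis(x_train, centers)
W, *_ = np.linalg.lstsq(S, f_true(x_train), rcond=None)  # "ideal" weights, fit offline here

x_test = np.array([-1.3, 0.0, 1.7])
print("approx:", gaussian_basis(x_test, centers) @ W)
print("true  :", f_true(x_test))
```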
Integration of Online Parameter Identification and Neural Network for In-Flight Adaptive Control
NASA Technical Reports Server (NTRS)
Hageman, Jacob J.; Smith, Mark S.; Stachowiak, Susan
2003-01-01
An indirect adaptive system has been constructed for robust control of an aircraft with uncertain aerodynamic characteristics. This system consists of a multilayer perceptron pre-trained neural network, online stability and control derivative identification, a dynamic cell structure online learning neural network, and a model following control system based on the stochastic optimal feedforward and feedback technique. The pre-trained neural network and model following control system have been flight-tested, but the online parameter identification and online learning neural network are new additions used for in-flight adaptation of the control system model. A description of the modification and integration of these two stand-alone software packages into the complete system in preparation for initial flight tests is presented. Open-loop results using both simulation and flight data, as well as closed-loop performance of the complete system in a nonlinear, six-degree-of-freedom, flight validated simulation, are analyzed. Results show that this online learning system, in contrast to the nonlearning system, has the ability to adapt to changes in aerodynamic characteristics in a real-time, closed-loop, piloted simulation, resulting in improved flying qualities.
Testolin, Alberto; Zorzi, Marco
2016-01-01
Connectionist models can be characterized within the more general framework of probabilistic graphical models, which allow complex statistical distributions involving a large number of interacting variables to be described efficiently. This integration allows building more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions to investigate neuropsychological disorders within this approach. Though further efforts are required in order to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage. PMID:27468262
A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242)
Almusawi, Ahmed R. J.; Dülger, L. Canan; Kapucu, Sadettin
2016-01-01
This paper presents a novel inverse kinematics solution for robotic arm based on artificial neural network (ANN) architecture. The motion of robotic arm is controlled by the kinematics of ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of the feedback of current joint angles configuration of robotic arm as well as the desired position and orientation in the input pattern of neural network, while the traditional ANN has only the desired position and orientation of the end effector in the input pattern of neural network. In this paper, a six DOF Denso robotic arm with a gripper is controlled by ANN. The comprehensive experimental results proved the applicability and the efficiency of the proposed approach in robotic motion control. The inclusion of current configuration of joint angles in ANN significantly increased the accuracy of ANN estimation of the joint angles output. The new controller design has advantages over the existing techniques for minimizing the position error in unconventional tasks and increasing the accuracy of ANN in estimation of robot's joint angles. PMID:27610129
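The key input-pattern idea, concatenating the desired end-effector pose with the current joint angles before feeding the network, can be sketched as below. The MLP is untrained, with placeholder layer sizes and random weights; it only shows the data flow of the feedback-augmented input, not the trained controller for the Denso VP6242.

```python
import numpy as np

# Sketch of the feedback-augmented input pattern: desired end-effector pose
# concatenated with the CURRENT joint angles before the forward pass.
rng = np.random.default_rng(0)
n_joints, pose_dim, n_hidden = 6, 6, 32        # pose: x, y, z, roll, pitch, yaw
W1 = rng.normal(scale=0.1, size=(n_hidden, pose_dim + n_joints))
W2 = rng.normal(scale=0.1, size=(n_joints, n_hidden))

def predict_joint_angles(desired_pose, current_joints):
    x = np.concatenate([desired_pose, current_joints])   # feedback term included
    return W2 @ np.tanh(W1 @ x)

desired_pose = np.array([0.3, 0.1, 0.4, 0.0, np.pi / 2, 0.0])
current_joints = np.zeros(n_joints)
print(predict_joint_angles(desired_pose, current_joints))
```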
Strategies influence neural activity for feedback learning across child and adolescent development.
Peters, Sabine; Koolschijn, P Cédric M P; Crone, Eveline A; Van Duijvenvoorde, Anna C K; Raijmakers, Maartje E J
2014-09-01
Learning from feedback is an important aspect of executive functioning that shows profound improvements during childhood and adolescence. This is accompanied by neural changes in the feedback-learning network, which includes pre-supplementary motor area (pre-SMA)/anterior cingulate cortex (ACC), dorsolateral prefrontal cortex (DLPFC), superior parietal cortex (SPC), and the basal ganglia. However, there can be considerable differences within age ranges in performance that are ascribed to differences in strategy use. This is problematic for traditional approaches of analyzing developmental data, in which age groups are assumed to be homogenous in strategy use. In this study, we used latent variable models to investigate if underlying strategy groups could be detected for a feedback-learning task and whether there were differences in neural activation patterns between strategies. In a sample of 268 participants between ages 8 and 25 years, we observed four underlying strategy groups, which cut across age groups and varied in the optimality of executive functioning. These strategy groups also differed in neural activity during learning; especially the most optimal performing group showed more activity in DLPFC, SPC and pre-SMA/ACC compared to the other groups. However, age differences remained an important contributor to neural activation, even when correcting for strategy. These findings contribute to the debate of age versus performance predictors of neural development, and highlight the importance of studying individual differences in strategy use when studying development. Copyright © 2014 Elsevier Ltd. All rights reserved.
Algorithm for Training a Recurrent Multilayer Perceptron
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Rais, Omar T.; Menon, Sunil K.; Atiya, Amir F.
2004-01-01
An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for optimal performance in predicting the behavior of a complex, dynamic, and noisy system multiple time steps into the future. [An RMLP is a computational neural network with self-feedback and cross-talk (both delayed by one time step) among neurons in hidden layers]. Like other neural-network-training algorithms, this algorithm adjusts network biases and synaptic-connection weights according to a gradient-descent rule. The distinguishing feature of this algorithm is a combination of global feedback (the use of predictions as well as the current output value in computing the gradient at each time step) and recursiveness. The recursive aspect of the algorithm lies in the inclusion of the gradient of predictions at each time step with respect to the predictions at the preceding time step; this recursion enables the RMLP to learn the dynamics. It has been conjectured that carrying the recursion to even earlier time steps would enable the RMLP to represent a noisier, more complex system.
Chartier, Sylvain; Proulx, Robert
2005-11-01
This paper presents a new unsupervised attractor neural network, which, contrary to optimal linear associative memory models, is able to develop nonbipolar attractors as well as bipolar attractors. Moreover, the model is able to develop less spurious attractors and has a better recall performance under random noise than any other Hopfield type neural network. Those performances are obtained by a simple Hebbian/anti-Hebbian online learning rule that directly incorporates feedback from a specific nonlinear transmission rule. Several computer simulations show the model's distinguishing properties.
Wang, Leimin; Shen, Yi; Sheng, Yin
2016-04-01
This paper is concerned with the finite-time robust stabilization of delayed neural networks (DNNs) in the presence of discontinuous activations and parameter uncertainties. By using the nonsmooth analysis and control theory, a delayed controller is designed to realize the finite-time robust stabilization of DNNs with discontinuous activations and parameter uncertainties, and the upper bound of the settling time functional for stabilization is estimated. Finally, two examples are provided to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Fei, Juntao; Lu, Cheng
2018-04-01
In this paper, an adaptive sliding mode control system using a double loop recurrent neural network (DLRNN) structure is proposed for a class of nonlinear dynamic systems. A new three-layer RNN is proposed to approximate unknown dynamics with two different kinds of feedback loops where the firing weights and output signal calculated in the last step are stored and used as the feedback signals in each feedback loop. Since the new structure has combined the advantages of internal feedback NN and external feedback NN, it can acquire the internal state information while the output signal is also captured, thus the new designed DLRNN can achieve better approximation performance compared with the regular NNs without feedback loops or the regular RNNs with a single feedback loop. The new proposed DLRNN structure is employed in an equivalent controller to approximate the unknown nonlinear system dynamics, and the parameters of the DLRNN are updated online by adaptive laws to get favorable approximation performance. To investigate the effectiveness of the proposed controller, the designed adaptive sliding mode controller with the DLRNN is applied to a -axis microelectromechanical system gyroscope to control the vibrating dynamics of the proof mass. Simulation results demonstrate that the proposed methodology can achieve good tracking property, and the comparisons of the approximation performance between radial basis function NN, RNN, and DLRNN show that the DLRNN can accurately estimate the unknown dynamics with a fast speed while the internal states of DLRNN are more stable.
A comparison between HMLP and HRBF for attitude control.
Fortuna, L; Muscato, G; Xibilia, M G
2001-01-01
In this paper the problem of controlling the attitude of a rigid body, such as a Spacecraft, in three-dimensional space is approached by introducing two new control strategies developed in hypercomplex algebra. The proposed approaches are based on two parallel controllers, both derived in quaternion algebra. The first is a feedback controller of the proportional derivative (PD) type, while the second is a feedforward controller, which is implemented either by means of a hypercomplex multilayer perceptron (HMLP) neural network or by means of a hypercomplex radial basis function (HRBF) neural network. Several simulations show the performance of the two approaches. The results are also compared with a classical PD controller and with an adaptive controller, showing the improvements obtained by using neural networks, especially when an external disturbance acts on the rigid body. In particular the HMLP network gave better results when considering trajectories not presented during the learning phase.
A neural network controller for automated composite manufacturing
NASA Technical Reports Server (NTRS)
Lichtenwalner, Peter F.
1994-01-01
At McDonnell Douglas Aerospace (MDA), an artificial neural network based control system has been developed and implemented to control laser heating for the fiber placement composite manufacturing process. This neurocontroller learns an approximate inverse model of the process on-line to provide performance that improves with experience and exceeds that of conventional feedback control techniques. When untrained, the control system behaves as a proportional plus integral (PI) controller. However, after learning from experience, the neural network feedforward control module provides control signals that greatly improve temperature tracking performance. Faster convergence to new temperature set points and reduced temperature deviation due to changing feed rate have been demonstrated on the machine. A Cerebellar Model Articulation Controller (CMAC) network is used for inverse modeling because of its rapid learning performance. This control system is implemented in an IBM-compatible 386 PC with an A/D board interface to the machine.
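As a rough illustration of why a CMAC learns quickly, here is a minimal one-dimensional CMAC (overlapping tilings with an LMS update). It is not the MDA neurocontroller, which learned an inverse process model online; all sizes, input ranges and rates below are arbitrary choices for the sketch. The point is that each update touches only the handful of active tiles, which is what makes the learning fast.

```python
import numpy as np

class CMAC:
    """Minimal 1-D CMAC: several offset tilings, table lookup, LMS update."""
    def __init__(self, n_tilings=8, n_tiles=32, x_min=0.0, x_max=1.0, beta=0.5):
        self.n_tilings, self.n_tiles = n_tilings, n_tiles
        self.x_min = x_min
        self.width = (x_max - x_min) / n_tiles
        self.offsets = np.linspace(0.0, self.width, n_tilings, endpoint=False)
        self.w = np.zeros((n_tilings, n_tiles + 1))   # +1 row for the shifted top tile
        self.beta = beta                              # LMS learning rate

    def _active(self, x):
        idx = ((x - self.x_min + self.offsets) / self.width).astype(int)
        return np.clip(idx, 0, self.n_tiles)

    def predict(self, x):
        return self.w[np.arange(self.n_tilings), self._active(x)].sum()

    def update(self, x, target):
        idx = self._active(x)
        err = target - self.predict(x)
        self.w[np.arange(self.n_tilings), idx] += self.beta * err / self.n_tilings

rng = np.random.default_rng(0)
net = CMAC()
for _ in range(3):                                    # a few passes over random samples
    for x in rng.uniform(0.0, 1.0, 300):
        net.update(x, np.sin(2 * np.pi * x))
print(net.predict(0.25))                              # close to sin(pi/2) = 1
```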
An annealed chaotic maximum neural network for bipartite subgraph problem.
Wang, Jiahai; Tang, Zheng; Wang, Ronglong
2004-04-01
In this paper, based on the maximum neural network, we propose a new parallel algorithm for the bipartite subgraph problem that can help the maximum neural network escape from local minima by incorporating transient chaotic neurodynamics. The goal of the bipartite subgraph problem, which is NP-complete, is to remove the minimum number of edges in a given graph such that the remaining graph is bipartite. Lee et al. presented a parallel algorithm using the maximum neural model (winner-take-all neuron model) for this NP-complete problem. The maximum neural model always guarantees a valid solution and greatly reduces the search space without a heavy parameter-tuning burden. However, the model tends to converge to a local minimum easily because it is based on the steepest descent method. By adding negative self-feedback to the maximum neural network, we propose a new parallel algorithm that introduces richer and more flexible chaotic dynamics and can prevent the network from getting stuck at local minima. After the chaotic dynamics vanishes, the proposed algorithm is fundamentally governed by gradient descent dynamics and usually converges to a stable equilibrium point. The proposed algorithm thus has the advantages of both the maximum neural network and chaotic neurodynamics. A large number of instances have been simulated to verify the proposed algorithm. The simulation results show that our algorithm finds optimum or near-optimum solutions for the bipartite subgraph problem and is superior to the best existing parallel algorithms.
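A minimal sketch of the general scheme described here: one winner-take-all (maximum) neuron pair per vertex, a gradient term that penalizes edges inside the same group, and a negative self-feedback whose strength is annealed away. The parameter values, annealing schedule and step size are assumptions, not the authors' settings.

```python
import numpy as np

def bipartite_subgraph(A, steps=400, z0=0.10, beta=0.995, I0=0.5, seed=0):
    """Winner-take-all ("maximum") neurons with annealed negative self-feedback."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    u = rng.uniform(-0.1, 0.1, size=(n, 2))   # internal states, two neurons per vertex
    v = np.zeros((n, 2))
    z = z0                                    # self-feedback (transient chaos) strength
    for _ in range(steps):
        winners = u.argmax(axis=1)            # the maximum neuron of each vertex fires
        v[:] = 0.0
        v[np.arange(n), winners] = 1.0
        grad = A @ v                          # same-group neighbour counts (energy gradient)
        u += -0.01 * grad - z * (v - I0)      # descent term plus annealed self-feedback
        z *= beta
    groups = u.argmax(axis=1)
    removed = [(i, j) for i in range(n) for j in range(i + 1, n)
               if A[i, j] and groups[i] == groups[j]]
    return groups, removed

# A triangle needs one edge removed to become bipartite.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
groups, removed = bipartite_subgraph(A)
print("partition:", groups, " edges to remove:", removed)
```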
Liu, Zongcheng; Dong, Xinmin; Xue, Jianping; Li, Hongbo; Chen, Yong
2016-09-01
This brief addresses the adaptive control problem for a class of pure-feedback systems with nonaffine functions that may be nondifferentiable. Without using the mean value theorem, the difficulty of control design for pure-feedback systems is overcome by modeling the nonaffine functions appropriately. With the help of neural network approximators, an adaptive neural controller is developed by combining the dynamic surface control (DSC) and minimal learning parameter (MLP) techniques. The key features of our approach are as follows: first, the restrictive assumptions on the partial derivatives of the nonaffine functions are removed; second, the DSC technique is used to avoid "the explosion of complexity" in the backstepping design, and the number of adaptive parameters is reduced significantly using the MLP technique; third, smooth robust compensators are employed to circumvent the influence of approximation errors and disturbances. Furthermore, it is proved that all the signals in the closed-loop system are semiglobally uniformly ultimately bounded. Finally, simulation results are provided to demonstrate the effectiveness of the designed method.
Homeostatic Scaling of Excitability in Recurrent Neural Networks
Remme, Michiel W. H.; Wadman, Wytse J.
2012-01-01
Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which neurons reside. However, most neurons are embedded in recurrent networks, which require a delicate balance between excitation and inhibition to maintain network stability. This balance could be disrupted when neurons independently adjust their intrinsic excitability. Here, we study the functioning of activity-dependent homeostatic scaling of intrinsic excitability (HSE) in a recurrent neural network. Using both simulations of a recurrent network consisting of excitatory and inhibitory neurons that implement HSE, and a mean-field description of adapting excitatory and inhibitory populations, we show that the stability of such adapting networks critically depends on the relationship between the adaptation time scales of both neuron populations. In a stable adapting network, HSE can keep all neurons functioning within their dynamic range, while the network is undergoing several (patho)physiologically relevant types of plasticity, such as persistent changes in external drive, changes in connection strengths, or the loss of inhibitory cells from the network. However, HSE cannot prevent the unstable network dynamics that result when, due to such plasticity, recurrent excitation in the network becomes too strong compared to feedback inhibition. This suggests that keeping a neural network in a stable and functional state requires the coordination of distinct homeostatic mechanisms that operate not only by adjusting neural excitability, but also by controlling network connectivity. PMID:22570604
Intelligent robust tracking control for a class of uncertain strict-feedback nonlinear systems.
Chang, Yeong-Chan
2009-02-01
This paper addresses the problem of designing robust tracking controls for a large class of strict-feedback nonlinear systems involving plant uncertainties and external disturbances. The input and virtual-input weighting matrices are perturbed by bounded time-varying uncertainties. An adaptive fuzzy-based (or neural-network-based) dynamic feedback tracking controller is developed such that all the states and signals of the closed-loop system are bounded and the trajectory tracking error is as small as possible. First, adaptive approximators with linearly parameterized models are designed, and a partitioned procedure with respect to the developed adaptive approximators is proposed such that the implementation of the fuzzy (or neural network) basis functions depends only on the state variables and not on the tuned approximation parameters. The design is then extended to nonlinearly parameterized adaptive approximators. Consequently, the intelligent robust tracking control schemes developed in this paper possess the properties of computational simplicity and easy implementation. Finally, simulation examples are presented to demonstrate the effectiveness of the proposed control algorithms.
Biomimetic Hybrid Feedback Feedforward Neural-Network Learning Control.
Pan, Yongping; Yu, Haoyong
2017-06-01
This brief presents a biomimetic hybrid feedback feedforward neural-network learning control (NNLC) strategy inspired by the human motor learning control mechanism for a class of uncertain nonlinear systems. The control structure includes a proportional-derivative controller acting as a feedback servo machine and a radial-basis-function (RBF) NN acting as a feedforward predictive machine. Under the sufficient constraints on control parameters, the closed-loop system achieves semiglobal practical exponential stability, such that an accurate NN approximation is guaranteed in a local region along recurrent reference trajectories. Compared with the existing NNLC methods, the novelties of the proposed method include: 1) the implementation of an adaptive NN control to guarantee plant states being recurrent is not needed, since recurrent reference signals rather than plant states are utilized as NN inputs, which greatly simplifies the analysis and synthesis of the NNLC and 2) the domain of NN approximation can be determined a priori by the given reference signals, which leads to an easy construction of the RBF-NNs. Simulation results have verified the effectiveness of this approach.
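A minimal sketch of the feedback-feedforward structure described above: a PD servo plus an RBF network whose inputs are the recurrent reference signals rather than the plant states, with the network weights adapted from the tracking error. The plant, gains, basis centers and adaptation law below are illustrative assumptions, not the brief's exact design.

```python
import numpy as np

def rbf(x, centers, width):
    return np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * width ** 2))

# 1-DOF plant with unknown dynamics:  x_dd = u + d(x, x_dot)
unknown = lambda x, xd: -2.0 * np.sin(x) - 0.5 * xd

dt, T = 0.001, 20.0
kp, kd = 25.0, 10.0                     # PD feedback ("servo machine") gains
gamma = 5.0                             # adaptation gain for the NN weights
centers = np.array([[np.sin(c), np.cos(c)] for c in np.linspace(0, 2 * np.pi, 25)])
W = np.zeros(25)                        # RBF output weights (feedforward part)

x = xd = 0.0
for k in range(int(T / dt)):
    t = k * dt
    r, rd, rdd = np.sin(t), np.cos(t), -np.sin(t)     # recurrent reference trajectory
    e, ed = r - x, rd - xd
    phi = rbf(np.array([r, rd]), centers, width=0.5)  # NN fed by the reference only
    u = (W @ phi + rdd) + (kp * e + kd * ed)          # feedforward + PD feedback
    xdd = u + unknown(x, xd)                          # plant step (Euler)
    x, xd = x + dt * xd, xd + dt * xdd
    s = ed + 1.0 * e                                  # combined tracking error
    W += dt * gamma * phi * s                         # adapt the feedforward weights
print("tracking error at the end of the run:", abs(e))
```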
The predictive roles of neural oscillations in speech motor adaptability.
Sengupta, Ranit; Nasir, Sazzad M
2016-06-01
The human speech system exhibits remarkable flexibility by adapting to alterations in speaking environments. While it is believed that speech motor adaptation under altered sensory feedback involves rapid reorganization of speech motor networks, the mechanisms by which different brain regions communicate and coordinate their activity to mediate adaptation remain unknown, and explanations of outcome differences in adaptation remain largely elusive. In this study, under the paradigm of altered auditory feedback with continuous EEG recordings, the differential roles of oscillatory neural processes in motor speech adaptability were investigated. The predictive capacities of different EEG frequency bands were assessed, and it was found that theta-, beta-, and gamma-band activities during speech planning and production contained significant and reliable information about motor speech adaptability. It was further observed that these bands do not work independently but interact with each other, suggesting an underlying brain network operating across hierarchically organized frequency bands to support motor speech adaptation. These results provide novel insights into both learning and disorders of speech using time-frequency analysis of neural oscillations. Copyright © 2016 the American Physiological Society.
Finite-time synchronization of fractional-order memristor-based neural networks with time delays.
Velmurugan, G; Rakkiyappan, R; Cao, Jinde
2016-01-01
In this paper, we consider the problem of finite-time synchronization of a class of fractional-order memristor-based neural networks (FMNNs) with time delays and investigate it in detail. By using the Laplace transform, the generalized Gronwall inequality, Mittag-Leffler functions and a linear feedback control technique, some new sufficient conditions are derived to ensure the finite-time synchronization of the addressed FMNNs with fractional order α in the ranges 1<α<2 and 0<α<1. Results from the theory of fractional-order differential equations with discontinuous right-hand sides are used to investigate the problem under consideration. The derived results extend some previous related work on memristor-based neural networks. Finally, three numerical examples are presented to show the effectiveness of our proposed theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bu, Xiangwei; Wu, Xiaoyan; Tian, Mingyan; Huang, Jiaqi; Zhang, Rui; Ma, Zhen
2015-09-01
In this paper, an adaptive neural controller is developed for a constrained flexible air-breathing hypersonic vehicle (FAHV) based on a high-order tracking differentiator (HTD). Using a functional decomposition methodology, the dynamic model is decomposed into a velocity subsystem and an altitude subsystem. For the velocity subsystem, a dynamic-inversion-based neural controller is constructed. By introducing the HTD to adaptively estimate the newly defined states generated in the process of model transformation, a novel neural altitude controller, considerably simpler than those derived from back-stepping, is designed based on the normal output-feedback form instead of the strict-feedback formulation. Based on a minimal-learning-parameter scheme, only two neural networks with two adaptive parameters are needed for neural approximation. In particular, a novel auxiliary system is explored to deal with the problem of control input constraints. Finally, simulation results are presented to test the effectiveness of the proposed control strategy in the presence of system uncertainties and actuator constraints. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Delayed excitatory and inhibitory feedback shape neural information transmission
NASA Astrophysics Data System (ADS)
Chacron, Maurice J.; Longtin, André; Maler, Leonard
2005-11-01
Feedback circuitry with conduction and synaptic delays is ubiquitous in the nervous system. Yet the effects of delayed feedback on sensory processing of natural signals are poorly understood. This study explores the consequences of delayed excitatory and inhibitory feedback inputs on the processing of sensory information. We show, through numerical simulations and theory, that excitatory and inhibitory feedback can alter the firing frequency response of stochastic neurons in opposite ways by creating dynamical resonances, which in turn lead to information resonances (i.e., increased information transfer for specific ranges of input frequencies). The resonances are created at the expense of decreased information transfer in other frequency ranges. Using linear response theory for stochastically firing neurons, we explain how feedback signals shape the neural transfer function for a single neuron as a function of network size. We also find that balanced excitatory and inhibitory feedback can further enhance information tuning while maintaining a constant mean firing rate. Finally, we apply this theory to in vivo experimental data from weakly electric fish in which the feedback loop can be opened. We show that it qualitatively predicts the observed effects of inhibitory feedback. Our study of feedback excitation and inhibition reveals a possible mechanism by which optimal processing may be achieved over selected frequency ranges.
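A toy version of this setting, assuming a noisy leaky integrate-and-fire neuron in place of the paper's model: the neuron receives its own recent spikes back after a fixed delay, with either excitatory or inhibitory sign, while being driven by a periodic stimulus, and we report firing rate and phase locking to that stimulus. Every parameter value here is an assumption for illustration only.

```python
import numpy as np

def lif_delayed_feedback(g_fb, delay=0.030, window=0.005, f_stim=40.0,
                         T=20.0, dt=1e-4, tau=0.01, v_th=1.0, noise=0.3, seed=1):
    """Noisy LIF neuron receiving its own recent spikes back after `delay` seconds
    with strength g_fb (>0 excitatory, <0 inhibitory)."""
    rng = np.random.default_rng(seed)
    n, d, w = int(T / dt), int(delay / dt), int(window / dt)
    spikes = np.zeros(n, dtype=bool)
    v, spike_times = 0.0, []
    for k in range(n):
        t = k * dt
        past = spikes[max(0, k - d - w):k - d] if k > d else ()
        i_fb = g_fb * np.count_nonzero(past)                    # delayed self-feedback
        i_in = 1.2 + 0.3 * np.sin(2 * np.pi * f_stim * t)       # bias + periodic stimulus
        v += dt / tau * (-v + i_in + i_fb) + noise * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:
            v = 0.0
            spikes[k] = True
            spike_times.append(t)
    ts = np.array(spike_times)
    rate = ts.size / T
    locking = np.abs(np.mean(np.exp(2j * np.pi * f_stim * ts)))  # phase locking to stimulus
    return rate, locking

for g in (-1.0, 0.0, 1.0):           # inhibitory, none, excitatory delayed feedback
    r, vs = lif_delayed_feedback(g)
    print(f"g_fb={g:+.1f}  rate={r:5.1f} Hz  locking={vs:.2f}")
```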
Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian
2018-02-01
This paper proposes a combined Virtual Reference Feedback Tuning-Q-learning model-free control approach, which tunes nonlinear static state feedback controllers to achieve output model reference tracking in an optimal control framework. The novel iterative Batch Fitted Q-learning strategy uses two neural networks to represent the value function (critic) and the controller (actor), and it is referred to as a mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach. Learning convergence of Q-learning schemes generally depends, among other settings, on the efficient exploration of the state-action space. Handcrafting test signals for efficient exploration is difficult even for input-output stable unknown processes. Virtual Reference Feedback Tuning can ensure that an initial stabilizing controller is learned from few input-output data, and this controller can then be used to collect substantially more input-state data in a controlled mode, in a constrained environment, by compensating the process dynamics. These data are used to learn significantly superior nonlinear state feedback neural network controllers for model reference tracking, using the proposed Batch Fitted Q-learning iterative tuning strategy, which motivates the original combination of the two techniques. The mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach is experimentally validated for water level control of a multi-input multi-output nonlinear constrained coupled two-tank system. Discussions on the observed control behavior are offered. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
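To make the Batch Fitted Q-learning step concrete, here is a stripped-down fitted Q iteration on a batch of pre-collected transitions. It uses a linear-in-features critic and a discretized action set instead of the paper's neural networks and VRFT-collected data; the plant, cost and feature vector are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
A_SET = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])         # discretized control set
gamma = 0.95

step = lambda s, a: 0.9 * s + 0.5 * a                  # plant, unknown to the learner
cost = lambda s, a: s ** 2 + 0.1 * a ** 2
phi = lambda s, a: np.stack([np.ones_like(s), s, a, s ** 2, a ** 2, s * a], axis=-1)

# Batch of transitions collected beforehand with exploratory inputs.
s = rng.uniform(-3.0, 3.0, 2000)
a = rng.choice(A_SET, 2000)
s_next, c = step(s, a), cost(s, a)

theta = np.zeros(6)                                    # critic parameters
for _ in range(60):                                    # fitted Q iteration on the batch
    q_next = np.stack([phi(s_next, np.full_like(s_next, ai)) @ theta
                       for ai in A_SET]).min(axis=0)
    theta, *_ = np.linalg.lstsq(phi(s, a), c + gamma * q_next, rcond=None)

def policy(state):                                     # greedy "actor"
    q = [phi(np.array([state]), np.array([ai])) @ theta for ai in A_SET]
    return A_SET[int(np.argmin(q))]

x = 2.5
for _ in range(10):
    x = step(x, policy(x))
print("state after 10 greedy steps:", x)               # regulated toward zero
```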
Liu, Meiqin; Zhang, Senlin
2008-10-01
A unified neural network model termed the standard neural network model (SNNM) is advanced. Based on the robust L2-gain (i.e. robust H∞ performance) analysis of the SNNM with external disturbances, a state-feedback control law is designed for the SNNM to stabilize the closed-loop system and eliminate the effect of external disturbances. The control design constraints are shown to be a set of linear matrix inequalities (LMIs), which can be solved easily by various convex optimization algorithms (e.g. interior-point algorithms) to determine the control law. Most discrete-time recurrent neural networks (RNNs) and discrete-time nonlinear systems modelled by neural networks or Takagi-Sugeno (T-S) fuzzy models can be transformed into SNNMs, so that robust H∞ performance analysis or robust H∞ controller synthesis can be carried out in a unified SNNM framework. Finally, some examples are presented to illustrate the wide applicability of the SNNMs to nonlinear systems, and the proposed approach is compared with related methods reported in the literature.
Buzaev, Igor Vyacheslavovich; Plechev, Vladimir Vyacheslavovich; Nikolaeva, Irina Evgenievna; Galimova, Rezida Maratovna
2016-09-01
A continuous, uninterrupted feedback system is an essential part of any well-organized system. We propose the aLYNX concept: the use of an artificial intelligence algorithm or a neural network model in a decision-making system to avoid possible mistakes and to remind doctors to review their tactics once more in selected cases. The aLYNX system includes a registry with significant factors, decisions and results; a machine learning process based on these registry data; and the use of the machine learning results as an adviser. We show the possibility of building a computer adviser with a neural network model for choosing between coronary artery bypass grafting (CABG) and percutaneous coronary intervention (PCI) in order to achieve a higher 5-year survival rate in patients with angina, based on the experience of 5107 patients. The neural network was trained on 4679 patients who achieved 5-year survival; among them, 2390 underwent PCI and 2289 CABG. After training, the correlation coefficient (r) of the network was 0.74 for training, 0.67 for validation, 0.71 for test and 0.73 overall. The trained network was then simulated on the two groups of patients with known 5-year outcome. The disagreement rate between the neural network model and the heart team was higher in the deceased patient group than in the survivor group [20.3% (87/428) vs. 16.8% (787/4679), P = 0.065]. The study shows the possibility of building a computer adviser with a neural network model for choosing between CABG and PCI in order to achieve a higher 5-year survival rate in patients with angina.
Reward-Modulated Hebbian Plasticity as Leverage for Partially Embodied Control in Compliant Robotics
Burms, Jeroen; Caluwaerts, Ken; Dambre, Joni
2015-01-01
In embodied computation (or morphological computation), part of the complexity of motor control is offloaded to the body dynamics. We demonstrate that a simple Hebbian-like learning rule can be used to train systems with (partial) embodiment, and can be extended outside of the scope of traditional neural networks. To this end, we apply the learning rule to optimize the connection weights of recurrent neural networks with different topologies and for various tasks. We then apply this learning rule to a simulated compliant tensegrity robot by optimizing static feedback controllers that directly exploit the dynamics of the robot body. This leads to partially embodied controllers, i.e., hybrid controllers that naturally integrate the computations that are performed by the robot body into a neural network architecture. Our results demonstrate the universal applicability of reward-modulated Hebbian learning. Furthermore, they demonstrate the robustness of systems trained with the learning rule. This study strengthens our belief that compliant robots should or can be seen as computational units, instead of dumb hardware that needs a complex controller. This link between compliant robotics and neural networks is also the main reason for our search for simple universal learning rules for both neural networks and robotics. PMID:26347645
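A minimal sketch of a reward-modulated Hebbian update on a linear readout: exploration noise is injected at the output, and the weight change is the product of the presynaptic activity, the injected noise, and the deviation of the scalar reward from its running average. The task and all constants are assumptions; the paper applies the same principle to recurrent networks and a simulated tensegrity robot.

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_out = 20, 3
W_target = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)   # unknown task mapping
W = np.zeros((n_out, n_in))                                      # learned weights

eta, sigma = 0.05, 0.1
r_bar = 0.0                          # running-average reward (the "modulation" baseline)
for trial in range(5000):
    x = rng.standard_normal(n_in)                   # presynaptic activity
    noise = sigma * rng.standard_normal(n_out)      # exploration in the output
    y = W @ x + noise
    reward = -np.sum((y - W_target @ x) ** 2)       # task-defined scalar reward
    # Reward-modulated Hebbian update: (reward deviation) x (output noise) x (presynaptic)
    W += eta * (reward - r_bar) * np.outer(noise, x)
    r_bar += 0.05 * (reward - r_bar)
print("remaining weight error:", np.linalg.norm(W - W_target))
print("target weight norm    :", np.linalg.norm(W_target))
```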
Neuromorphic Learning From Noisy Data
NASA Technical Reports Server (NTRS)
Merrill, Walter C.; Troudet, Terry
1993-01-01
Two reports present numerical study of performance of feedforward neural network trained by back-propagation algorithm in learning continuous-valued mappings from data corrupted by noise. Two types of noise considered: plant noise which affects dynamics of controlled process and data-processing noise, which occurs during analog processing and digital sampling of signals. Study performed with view toward use of neural networks as neurocontrollers to substitute for, or enhance, performances of human experts in controlling mechanical devices in presence of sensor and actuator noise and to enhance performances of more-conventional digital feedback electronic process controllers in noisy environments.
NASA Astrophysics Data System (ADS)
Tiwari, Shivendra N.; Padhi, Radhakant
2018-01-01
Following the philosophy of adaptive optimal control, a neural network-based state feedback optimal control synthesis approach is presented in this paper. First, accounting for a nominal system model, a single network adaptive critic (SNAC) based multi-layered neural network (called NN1) is synthesised offline. Next, another linear-in-weight neural network (called NN2) is trained online and augmented to NN1 in such a manner that their combined output represents the desired optimal costate for the actual plant. To do this, the nominal model needs to be updated online to adapt to the actual plant, which is done by synthesising yet another linear-in-weight neural network (called NN3) online. Training of NN3 is done by utilising the error information between the nominal and actual states and carrying out the necessary Lyapunov stability analysis using a Sobolev-norm-based Lyapunov function. This helps in training NN2 successfully to capture the required optimal relationship. The overall architecture is named 'Dynamically Re-optimised Single Network Adaptive Critic (DR-SNAC)'. Numerical results for two motivating illustrative problems are presented, including comparison studies with the closed-form solution for one problem, which clearly demonstrate the effectiveness and benefit of the proposed approach.
Civier, Oren; Tasko, Stephen M.; Guenther, Frank H.
2010-01-01
This paper investigates the hypothesis that stuttering may result in part from impaired readout of feedforward control of speech, which forces persons who stutter (PWS) to produce speech with a motor strategy that is weighted too much toward auditory feedback control. Over-reliance on feedback control leads to production errors which, if they grow large enough, can cause the motor system to “reset” and repeat the current syllable. This hypothesis is investigated using computer simulations of a “neurally impaired” version of the DIVA model, a neural network model of speech acquisition and production. The model’s outputs are compared to published acoustic data from PWS’ fluent speech, and to combined acoustic and articulatory movement data collected from the dysfluent speech of one PWS. The simulations mimic the errors observed in the PWS subject’s speech, as well as the repairs of these errors. Additional simulations were able to account for enhancements of fluency gained by slowed/prolonged speech and masking noise. Together these results support the hypothesis that many dysfluencies in stuttering are due to a bias away from feedforward control and toward feedback control. PMID:20831971
Coordinated three-dimensional motion of the head and torso by dynamic neural networks.
Kim, J; Hemami, H
1998-01-01
The problem of trajectory tracking control of a three-dimensional (3D) model of the human upper torso and head is considered. The torso and the head are modeled as two rigid bodies connected at one point, and the Newton-Euler method is used to derive the nonlinear differential equations that govern the motion of the system. The two-link system is driven by six pairs of muscle-like actuators that possess physiologically inspired alpha-like and gamma-like inputs, and spindle-like and Golgi-tendon-organ-like outputs. These outputs are utilized as reflex feedback for stability and stiffness control, in a long-loop feedback for the purpose of estimating the state of the system (somesthesis), and as part of the input to the controller. Ideal delays of different durations are included in the feedforward and feedback paths of the system to emulate such delays encountered in physiological systems. Dynamical neural networks are trained to learn effective control of the desired maneuvers of the system. The feasibility of the controller is demonstrated by computer simulation of the successful execution of the desired maneuvers. This work demonstrates the capabilities of neural circuits in controlling highly nonlinear systems with multiple delays in their feedforward and feedback paths. The ultimate long-range goal of this research is toward understanding the working of the central nervous system in controlling movement. It is an interdisciplinary effort relying on mechanics, biomechanics, neuroscience, system theory, physiology and anatomy, and its short-range relevance to rehabilitation must be noted.
Similar brain networks for detecting visuo-motor and visuo-proprioceptive synchrony.
Balslev, Daniela; Nielsen, Finn A; Lund, Torben E; Law, Ian; Paulson, Olaf B
2006-05-15
The ability to recognize feedback from own movement as opposed to the movement of someone else is important for motor control and social interaction. The neural processes involved in feedback recognition are incompletely understood. Two competing hypotheses have been proposed: the stimulus is compared with either (a) the proprioceptive feedback or with (b) the motor command and if they match, then the external stimulus is identified as feedback. Hypothesis (a) predicts that the neural mechanisms or brain areas involved in distinguishing self from other during passive and active movement are similar, whereas hypothesis (b) predicts that they are different. In this fMRI study, healthy subjects saw visual cursor movement that was either synchronous or asynchronous with their active or passive finger movements. The aim was to identify the brain areas where the neural activity depended on whether the visual stimulus was feedback from own movement and to contrast the functional activation maps for active and passive movement. We found activity increases in the right temporoparietal cortex in the condition with asynchronous relative to synchronous visual feedback from both active and passive movements. However, no statistically significant difference was found between these sets of activated areas when the active and passive movement conditions were compared. With a posterior probability of 0.95, no brain voxel had a contrast effect above 0.11% of the whole-brain mean signal. These results do not support the hypothesis that recognition of visual feedback during active and passive movement relies on different brain areas.
Distributed Adaptive Neural Control for Stochastic Nonlinear Multiagent Systems.
Wang, Fang; Chen, Bing; Lin, Chong; Li, Xuehua
2016-11-14
In this paper, a consensus tracking problem of nonlinear multiagent systems is investigated under a directed communication topology. All the followers are modeled by stochastic nonlinear systems in nonstrict-feedback form, where the nonlinearities and stochastic disturbance terms are totally unknown. Based on the structural characteristic of neural networks (in Lemma 4), a novel distributed adaptive neural control scheme is put forward. The proposed control method not only effectively handles unknown nonlinearities in nonstrict-feedback systems, but also copes with the interactions among agents and the coupling terms. Based on the stochastic Lyapunov functional method, it is shown that all the signals of the closed-loop system are bounded in probability and that all followers' outputs converge to a neighborhood of the output of the leader. Finally, the effectiveness of the control method is demonstrated by a numerical example.
Neural signatures of trust in reciprocity: a coordinate-based meta-analysis
Bellucci, Gabriele; Chernyak, Sergey V.; Goodyear, Kimberly; Eickhoff, Simon B.; Krueger, Frank
2017-01-01
Trust in reciprocity (TR) is defined as the risky decision to invest valued resources in another party with the hope of mutual benefit. Several fMRI studies have investigated the neural correlates of TR in one-shot and multi-round versions of the investment game (IG). However, an overall characterization of the underlying neural networks remains elusive. Here, we employed a coordinate-based meta-analysis (activation likelihood estimation method, 30 papers) to investigate consistent brain activations in each of the IG stages (i.e., the trust, reciprocity and feedback stage). Our results showed consistent activations in the anterior insula (AI) during trust decisions in the one-shot IG and decisions to reciprocate in the multi-round IG, likely related to representations of aversive feelings. Moreover, decisions to reciprocate also consistently engaged the intraparietal sulcus, probably involved in evaluations of the reciprocity options. On the contrary, trust decisions in the multi-round IG consistently activated the ventral striatum, likely associated with reward prediction error signals. Finally, the dorsal striatum was found consistently recruited during the feedback stage of the multi-round IG, likely related to reinforcement learning. In conclusion, our results indicate different neural networks underlying trust, reciprocity and feedback learning. These findings suggest that although decisions to trust and reciprocate may elicit aversive feelings likely evoked by the uncertainty about the decision outcomes and the pressing requirements of social standards, multiple interactions allow people to build interpersonal trust for cooperation via a learning mechanism by which they arguably learn to distinguish trustworthy from untrustworthy partners. PMID:27859899
Neural signatures of trust in reciprocity: A coordinate-based meta-analysis.
Bellucci, Gabriele; Chernyak, Sergey V; Goodyear, Kimberly; Eickhoff, Simon B; Krueger, Frank
2017-03-01
Trust in reciprocity (TR) is defined as the risky decision to invest valued resources in another party with the hope of mutual benefit. Several fMRI studies have investigated the neural correlates of TR in one-shot and multiround versions of the investment game (IG). However, an overall characterization of the underlying neural networks remains elusive. Here, a coordinate-based meta-analysis was employed (activation likelihood estimation method, 30 articles) to investigate consistent brain activations in each of the IG stages (i.e., the trust, reciprocity and feedback stage). Results showed consistent activations in the anterior insula (AI) during trust decisions in the one-shot IG and decisions to reciprocate in the multiround IG, likely related to representations of aversive feelings. Moreover, decisions to reciprocate also consistently engaged the intraparietal sulcus, probably involved in evaluations of the reciprocity options. On the contrary, trust decisions in the multiround IG consistently activated the ventral striatum, likely associated with reward prediction error signals. Finally, the dorsal striatum was found consistently recruited during the feedback stage of the multiround IG, likely related to reinforcement learning. In conclusion, our results indicate different neural networks underlying trust, reciprocity, and feedback learning. These findings suggest that although decisions to trust and reciprocate may elicit aversive feelings likely evoked by the uncertainty about the decision outcomes and the pressing requirements of social standards, multiple interactions allow people to build interpersonal trust for cooperation via a learning mechanism by which they arguably learn to distinguish trustworthy from untrustworthy partners. Hum Brain Mapp 38:1233-1248, 2017. © 2016 Wiley Periodicals, Inc.
Neural coding in graphs of bidirectional associative memories.
Bouchain, A David; Palm, Günther
2012-01-24
In the last years we have developed large neural network models for the realization of complex cognitive tasks in a neural network architecture that resembles the network of the cerebral cortex. We have used networks of several cortical modules that contain two populations of neurons (one excitatory, one inhibitory). The excitatory populations in these so-called "cortical networks" are organized as a graph of Bidirectional Associative Memories (BAMs), where edges of the graph correspond to BAMs connecting two neural modules and nodes of the graph correspond to excitatory populations with associative feedback connections (and inhibitory interneurons). The neural code in each of these modules consists essentially of the firing pattern of the excitatory population, where mainly it is the subset of active neurons that codes the contents to be represented. The overall activity can be used to distinguish different properties of the patterns that are represented which we need to distinguish and control when performing complex tasks like language understanding with these cortical networks. The most important pattern properties or situations are: exactly fitting or matching input, incomplete information or partially matching pattern, superposition of several patterns, conflicting information, and new information that is to be learned. We show simple simulations of these situations in one area or module and discuss how to distinguish these situations based on the overall internal activation of the module. This article is part of a Special Issue entitled "Neural Coding". Copyright © 2011 Elsevier B.V. All rights reserved.
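A single edge of such a graph is just a classical Bidirectional Associative Memory. The sketch below stores two pattern pairs Hebbianly and recalls a pair from a corrupted probe by bouncing activity between the two layers; the pattern sizes and contents are arbitrary examples, not the cortical-network simulations of the article.

```python
import numpy as np

# Hetero-associative pairs (x in {-1,+1}^6, y in {-1,+1}^4) stored in one BAM.
x_pats = np.array([[1, -1, 1, -1, 1, -1],
                   [1, 1, 1, -1, -1, -1]])
y_pats = np.array([[1, 1, -1, -1],
                   [-1, 1, -1, 1]])

W = sum(np.outer(x, y) for x, y in zip(x_pats, y_pats))   # Hebbian storage

def bam_recall(x, W, iters=10):
    """Bidirectional recall: bounce activity between the two layers."""
    y = np.sign(x @ W)
    for _ in range(iters):
        x = np.sign(W @ y)
        y = np.sign(x @ W)
    return x, y

probe = x_pats[0].copy()
probe[0] *= -1                                 # one flipped bit in the first x pattern
x_rec, y_rec = bam_recall(probe, W)
print(x_rec, y_rec)                            # matches the first stored pair
```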
Neural network evaluation of tokamak current profiles for real time control
NASA Astrophysics Data System (ADS)
Wróblewski, Dariusz
1997-02-01
Active feedback control of the current profile, requiring real-time determination of the current profile parameters, is envisioned for tokamaks operating in enhanced confinement regimes. The distribution of toroidal current in a tokamak is now routinely evaluated based on external (magnetic probes, flux loops) and internal (motional Stark effect) measurements of the poloidal magnetic field. However, the analysis involves reconstruction of magnetohydrodynamic equilibrium and is too intensive computationally to be performed in real time. In the present study, a neural network is used to provide a mapping from the magnetic measurements (internal and external) to selected parameters of the safety factor profile. The single-pass, feedforward calculation of output of a trained neural network is very fast, making this approach particularly suitable for real-time applications. The network was trained on a large set of simulated equilibrium data for the DIII-D tokamak. The database encompasses a large variety of current profiles including the hollow current profiles important for reversed central shear operation. The parameters of safety factor profile (a quantity related to the current profile through the magnetic field tilt angle) estimated by the neural network include central safety factor, q0, minimum value of q, qmin, and the location of qmin. Very good performance of the trained neural network both for simulated test data and for experimental data is demonstrated.
Neural network evaluation of tokamak current profiles for real time control (abstract)
NASA Astrophysics Data System (ADS)
Wróblewski, Dariusz
1997-01-01
Active feedback control of the current profile, requiring real-time determination of the current profile parameters, is envisioned for tokamaks operating in enhanced confinement regimes. The distribution of toroidal current in a tokamak is now routinely evaluated based on external (magnetic probes, flux loops) and internal (motional Stark effect) measurements of the poloidal magnetic field. However, the analysis involves reconstruction of magnetohydrodynamic equilibrium and is too intensive computationally to be performed in real time. In the present study, a neural network is used to provide a mapping from the magnetic measurements (internal and external) to selected parameters of the safety factor profile. The single-pass, feedforward calculation of output of a trained neural network is very fast, making this approach particularly suitable for real-time applications. The network was trained on a large set of simulated equilibrium data for the DIII-D tokamak. The database encompasses a large variety of current profiles including the hollow current profiles important for reversed central shear operation. The parameters of safety factor profile (a quantity related to the current profile through the magnetic field tilt angle) estimated by the neural network include central safety factor, q0, minimum value of q, qmin, and the location of qmin. Very good performance of the trained neural network both for simulated test data and for experimental data is demonstrated.
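The real-time argument in both records rests on the fact that, once trained, the network output is a single matrix-vector pass. The sketch below trains a small feedforward network by plain batch backpropagation on synthetic data standing in for the simulated equilibrium database (the input dimension, hidden size and target mapping are all invented), and then exposes the fast single-pass predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 40 "magnetic measurements" -> 3 safety-factor parameters.
n_in, n_hidden, n_out, n_samples = 40, 30, 3, 5000
X = rng.standard_normal((n_samples, n_in))
true_map = rng.standard_normal((n_in, n_out)) / np.sqrt(n_in)
Y = np.tanh(X @ true_map)                      # invented nonlinear target mapping

W1 = 0.1 * rng.standard_normal((n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal((n_hidden, n_out)); b2 = np.zeros(n_out)
lr = 0.5

for epoch in range(1000):                      # plain batch backpropagation
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    err = P - Y
    dW2 = H.T @ err / n_samples
    dH = (err @ W2.T) * (1.0 - H ** 2)
    dW1 = X.T @ dH / n_samples
    W2 -= lr * dW2; b2 -= lr * err.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * dH.mean(axis=0)

def predict(x):
    # Single feedforward pass: two small matrix products, fast enough for real time.
    return np.tanh(x @ W1 + b1) @ W2 + b2

print("training RMSE:", np.sqrt(np.mean((predict(X) - Y) ** 2)))
```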
NASA Astrophysics Data System (ADS)
Ferrari, F. A. S.; Viana, R. L.; Reis, A. S.; Iarosz, K. C.; Caldas, I. L.; Batista, A. M.
2018-04-01
The cerebral cortex plays a key role in complex cortical functions. It can be divided into areas according to their function (motor, sensory and association areas). In this paper, the cerebral cortex is described as a network of networks (the cortex network); each cortical area is taken to be composed of a network with the small-world property (a cortical network). The neurons are assumed to have bursting properties, with dynamics described by the Rulkov model. We study the phase synchronization of the cortex network and of the cortical networks. In our simulations, we verify that synchronization in the cortex network is not homogeneous. Furthermore, we focus on the suppression of neural phase synchronization, since synchronization can be related to undesired and pathological abnormal rhythms in the brain. For this reason, we consider delayed feedback control to suppress the synchronization. We show that delayed feedback control is efficient in suppressing synchronous behavior in our network model when an appropriate signal intensity and time delay are chosen.
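A reduced sketch of the ingredients named here: a globally coupled (rather than small-world) population of Rulkov bursting maps with slight parameter heterogeneity, plus an optional Pyragas-style delayed-feedback term on the fast variable. All parameter values are assumptions; whether the feedback suppresses synchrony depends on the chosen intensity and delay, as the paper discusses.

```python
import numpy as np

def rulkov_network(eps_fb=0.0, tau=100, N=50, steps=20000, g=0.02,
                   sigma=-1.0, mu=0.001, seed=0):
    """Globally coupled Rulkov bursting maps with an optional delayed-feedback
    term eps_fb * (x(n - tau) - x(n)) added to the fast variable."""
    rng = np.random.default_rng(seed)
    alpha = rng.uniform(4.1, 4.3, N)       # slight heterogeneity across neurons
    x = rng.uniform(-1.5, -0.5, N)
    y = np.full(N, -3.0)
    hist = np.tile(x, (tau + 1, 1))        # ring buffer holding the delayed fast variable
    spread = []
    for n in range(steps):
        fb = eps_fb * (hist[n % (tau + 1)] - x)
        x_new = alpha / (1.0 + x ** 2) + y + g * (x.mean() - x) + fb
        y = y - mu * (x - sigma)           # slow variable
        x = x_new
        hist[n % (tau + 1)] = x
        if n > steps // 2:
            spread.append(x.std())         # cross-neuron spread (low = synchronized)
    return float(np.mean(spread))

print("mean spread, no control:      ", rulkov_network(eps_fb=0.0))
print("mean spread, delayed feedback:", rulkov_network(eps_fb=0.2))
```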
Model-Free Adaptive Control for Unknown Nonlinear Zero-Sum Differential Game.
Zhong, Xiangnan; He, Haibo; Wang, Ding; Ni, Zhen
2018-05-01
In this paper, we present a new model-free globalized dual heuristic dynamic programming (GDHP) approach for discrete-time nonlinear zero-sum game problems. First, an online learning algorithm is proposed based on the GDHP method to solve the Hamilton-Jacobi-Isaacs equation associated with the optimal regulation control problem. By shifting the definition of the performance index backward one step, the requirement for the system dynamics, or an identifier, is relaxed in the proposed method. Then, three neural networks are established to approximate the optimal saddle-point feedback control law, the disturbance law, and the performance index, respectively. The explicit updating rules for these three neural networks are provided based on the data generated during online learning along the system trajectories. The stability analysis in terms of the neural network approximation errors is discussed based on the Lyapunov approach. Finally, two simulation examples are provided to show the effectiveness of the proposed method.
Movement decoupling control for two-axis fast steering mirror
NASA Astrophysics Data System (ADS)
Wang, Rui; Qiao, Yongming; Lv, Tao
2017-02-01
A two-axis fast steering mirror based on flexure hinges and piezoelectric actuators is a complex system with time-varying, uncertain and strongly coupled dynamics, and it is extremely difficult to achieve high-precision decoupling control with the traditional PID method. Using the feedback-error-learning method, an inverse hysteresis model of the piezoceramic actuators was established, based on an inner-product dynamic neural network that captures their nonlinear and non-smooth behavior. To further improve actuation precision, a movement decoupling control method was proposed that combines this piezoceramic inverse model with adaptive control based on two dynamic neural networks. The experimental results indicate that, with the two-dynamic-neural-network adaptive movement decoupling control algorithm, the static relative error is reduced from 4.44% to 0.30% and the static coupling degree from 12.71% to 0.60%, while the dynamic relative error is reduced from 13.92% to 2.85% and the dynamic coupling degree from 2.63% to 1.17%.
Two neural network algorithms for designing optimal terminal controllers with open final time
NASA Technical Reports Server (NTRS)
Plumer, Edward S.
1992-01-01
Multilayer neural networks, trained by the backpropagation through time algorithm (BPTT), have been used successfully as state-feedback controllers for nonlinear terminal control problems. Current BPTT techniques, however, are not able to deal systematically with open final-time situations such as minimum-time problems. Two approaches which extend BPTT to open final-time problems are presented. In the first, a neural network learns a mapping from initial-state to time-to-go. In the second, the optimal number of steps for each trial run is found using a line-search. Both methods are derived using Lagrange multiplier techniques. This theoretical framework is used to demonstrate that the derived algorithms are direct extensions of forward/backward sweep methods used in N-stage optimal control. The two algorithms are tested on a Zermelo problem and the resulting trajectories compare favorably to optimal control results.
Adaptive online inverse control of a shape memory alloy wire actuator using a dynamic neural network
NASA Astrophysics Data System (ADS)
Mai, Huanhuan; Song, Gangbing; Liao, Xiaofeng
2013-01-01
Shape memory alloy (SMA) actuators exhibit severe hysteresis, a nonlinear behavior, which complicates control strategies and limits their applications. This paper presents a new approach to controlling an SMA actuator through an adaptive inverse model based controller that consists of a dynamic neural network (DNN) identifier, a copy dynamic neural network (CDNN) feedforward term and a proportional (P) feedback action. Unlike fixed hysteresis models used in most inverse controllers, the proposed one uses a DNN to identify online the relationship between the applied voltage to the actuator and the displacement (the inverse model). Even without a priori knowledge of the SMA hysteresis and without pre-training, the proposed controller can precisely control the SMA wire actuator in various tracking tasks by identifying online the inverse model of the SMA actuator. Experiments were conducted, and experimental results demonstrated real-time modeling capabilities of DNN and the performance of the adaptive inverse controller.
Some Problems of Queues with Feedback.
1978-11-01
Queues with feedback occur in computer networks, production networks, street traffic networks, neural networks, and the like. [The remainder of this scanned abstract, including a Laplace-Stieltjes transform derivation, is not legible.]
Smooth function approximation using neural networks.
Ferrari, Silvia; Stengel, Robert F
2005-01-01
An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated to the network adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
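One simple instance of the linear-system view of training (a sketch only, not the paper's four algorithms): fix the input-side weights, evaluate the sigmoidal hidden responses on the batch, and obtain the output weights from the resulting linear weight equation by least squares. When the number of hidden nodes matches the number of samples and the response matrix is well conditioned, the fit is exact to numerical precision; all data and sizes below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Batch data: a smooth scalar function of two inputs (stand-in for real data).
X = rng.uniform(-1.0, 1.0, (40, 2))
u = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])     # target outputs

# Hidden layer with fixed (here random) input weights W and biases d.
n_hidden = 40                       # one node per sample allows exact matching
W = rng.standard_normal((n_hidden, 2))
d = rng.standard_normal(n_hidden)
S = np.tanh(X @ W.T + d)            # matrix of hidden responses, shape (40, 40)

# Output weights follow from the *linear* weight equation  S v = u.
v, *_ = np.linalg.lstsq(S, u, rcond=None)

print("max training-set error:", np.abs(S @ v - u).max())
```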
Vibration control of building structures using self-organizing and self-learning neural networks
NASA Astrophysics Data System (ADS)
Madan, Alok
2005-11-01
Past research in artificial intelligence establishes that artificial neural networks (ANN) are effective and efficient computational processors for performing a variety of tasks including pattern recognition, classification, associative recall, combinatorial problem solving, adaptive control, multi-sensor data fusion, noise filtering and data compression, modelling and forecasting. The paper presents a potentially feasible approach for training ANN in active control of earthquake-induced vibrations in building structures without the aid of teacher signals (i.e. target control forces). A counter-propagation neural network is trained to output the control forces that are required to reduce the structural vibrations in the absence of any feedback on the correctness of the output control forces (i.e. without any information on the errors in output activations of the network). The present study shows that, in principle, the counter-propagation network (CPN) can learn from the control environment to compute the required control forces without the supervision of a teacher (unsupervised learning). Simulated case studies are presented to demonstrate the feasibility of implementing the unsupervised learning approach in ANN for effective vibration control of structures under the influence of earthquake ground motions. The proposed learning methodology obviates the need for developing a mathematical model of structural dynamics or training a separate neural network to emulate the structural response for implementation in practice.
Xu, Bin; Yang, Daipeng; Shi, Zhongke; Pan, Yongping; Chen, Badong; Sun, Fuchun
2017-09-25
This paper investigates the online recorded data-based composite neural control of uncertain strict-feedback systems using the backstepping framework. In each step of the virtual control design, a neural network (NN) is employed for uncertainty approximation. In previous works, most designs aim directly at system stability and ignore how well the NN actually works as an approximator. In this paper, to enhance the learning ability, a novel prediction error signal is constructed to provide additional correction information for the NN weight update using online recorded data. In this way, the neural approximation precision is greatly improved and the convergence speed can be faster. Furthermore, a sliding mode differentiator is employed to approximate the derivative of the virtual control signal, so that the complex analysis of the backstepping design can be avoided. The closed-loop stability is rigorously established, and the boundedness of the tracking error is guaranteed. Simulations of hypersonic flight dynamics show that the proposed approach exhibits better tracking performance.
Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.
Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus
2017-01-01
Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.
Theory of feedback controlled brain stimulations for Parkinson's disease
NASA Astrophysics Data System (ADS)
Sanzeni, A.; Celani, A.; Tiana, G.; Vergassola, M.
2016-01-01
Limb tremor and other debilitating symptoms caused by the neurodegenerative Parkinson's disease are currently treated by administering drugs and by fixed-frequency deep brain stimulation. The latter interferes directly with the brain dynamics by delivering electrical impulses to neurons in the subthalamic nucleus. While deep brain stimulation has shown therapeutic benefits in many instances, its mechanism is still unclear. Since its understanding could lead to improved protocols of stimulation and feedback control, we have studied a mathematical model of the many-body neural network dynamics controlling the dynamics of the basal ganglia. On the basis of the results obtained from the model, we propose a new procedure of active stimulation, that depends on the feedback of the network and that respects the constraints imposed by existing technology. We show by numerical simulations that the new protocol outperforms the standard ones for deep brain stimulation and we suggest future experiments that could further improve the feedback procedure.
NASA Astrophysics Data System (ADS)
Li, Shanshan; Zhang, Guoshan; Wang, Jiang; Chen, Yingyuan; Deng, Bin
2018-02-01
This paper proposes that a modified two-compartment Pinsky-Rinzel (PR) neuron model can be used to develop a simple form of central pattern generator (CPG). The CPG, a 'half-center oscillator', is constructed from two PR neurons coupled by delayed inhibitory chemical synapses. Several key properties of the PR model relevant to CPG operation are studied and shown to meet the requirements of a CPG. Using this simple CPG network, we first study the relationship between the rhythmic output and key factors, including ambient noise, sensory feedback signals, the morphological character of a single neuron, and the coupling delay time. We demonstrate that noise of appropriate intensity can enhance synchronization between the two coupled neurons, and that different output rhythms of the CPG network can be entrained by sensory feedback signals. We also show that the morphology of a single neuron has a strong effect on the output rhythm: the phase synchronization index decreases as the difference in the morphology parameter increases. By adjusting the coupling delay time, the CPG can be driven into complete in-phase synchronization or an antiphase state. These simulation results show the feasibility of the PR neuron model as a building block for a CPG, as well as the emergent behaviors of this particular CPG.
Civier, Oren; Tasko, Stephen M; Guenther, Frank H
2010-09-01
This paper investigates the hypothesis that stuttering may result in part from impaired readout of feedforward control of speech, which forces persons who stutter (PWS) to produce speech with a motor strategy that is weighted too much toward auditory feedback control. Over-reliance on feedback control leads to production errors which if they grow large enough, can cause the motor system to "reset" and repeat the current syllable. This hypothesis is investigated using computer simulations of a "neurally impaired" version of the DIVA model, a neural network model of speech acquisition and production. The model's outputs are compared to published acoustic data from PWS' fluent speech, and to combined acoustic and articulatory movement data collected from the dysfluent speech of one PWS. The simulations mimic the errors observed in the PWS subject's speech, as well as the repairs of these errors. Additional simulations were able to account for enhancements of fluency gained by slowed/prolonged speech and masking noise. Together these results support the hypothesis that many dysfluencies in stuttering are due to a bias away from feedforward control and toward feedback control. The reader will be able to (a) describe the contribution of auditory feedback control and feedforward control to normal and stuttered speech production, (b) summarize the neural modeling approach to speech production and its application to stuttering, and (c) explain how the DIVA model accounts for enhancements of fluency gained by slowed/prolonged speech and masking noise.
Processing speed in recurrent visual networks correlates with general intelligence.
Jolij, Jacob; Huisman, Danielle; Scholte, Steven; Hamel, Ronald; Kemner, Chantal; Lamme, Victor A F
2007-01-08
Studies on the neural basis of general fluid intelligence strongly suggest that a smarter brain processes information faster. Different brain areas, however, are interconnected by both feedforward and feedback projections. Whether both types of connections or only one of the two types are faster in smarter brains remains unclear. Here we show, by measuring visual evoked potentials during a texture discrimination task, that general fluid intelligence shows a strong correlation with processing speed in recurrent visual networks, while there is no correlation with speed of feedforward connections. The hypothesis that a smarter brain runs faster may need to be refined: a smarter brain's feedback connections run faster.
Zhou, Miaolei; Zhang, Qi; Wang, Jingyuan
2014-01-01
As a new type of smart material, magnetic shape memory alloy has the advantages of a fast response frequency and outstanding strain capability in the field of microdrive and microposition actuators. The hysteresis nonlinearity in magnetic shape memory alloy actuators, however, limits system performance and further application. Here we propose a feedforward-feedback hybrid control method to improve control precision and mitigate the effects of the hysteresis nonlinearity of magnetic shape memory alloy actuators. First, hysteresis nonlinearity compensation for the magnetic shape memory alloy actuator is implemented by establishing a feedforward controller which is an inverse hysteresis model based on Krasnosel'skii-Pokrovskii operator. Secondly, the paper employs the classical Proportion Integration Differentiation feedback control with feedforward control to comprise the hybrid control system, and for further enhancing the adaptive performance of the system and improving the control accuracy, the Radial Basis Function neural network self-tuning Proportion Integration Differentiation feedback control replaces the classical Proportion Integration Differentiation feedback control. Utilizing self-learning ability of the Radial Basis Function neural network obtains Jacobian information of magnetic shape memory alloy actuator for the on-line adjustment of parameters in Proportion Integration Differentiation controller. Finally, simulation results show that the hybrid control method proposed in this paper can greatly improve the control precision of magnetic shape memory alloy actuator and the maximum tracking error is reduced from 1.1% in the open-loop system to 0.43% in the hybrid control system. PMID:24828010
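A compact sketch of the feedback half of such a scheme: an RBF network identifies the plant online, and its estimated input-output Jacobian drives gradient-style self-tuning of incremental PID gains. The plant below is a smooth placeholder (the real actuator has hysteresis and would sit behind the Krasnosel'skii-Pokrovskii feedforward compensator), and all rates, gains and centers are assumed values.

```python
import numpy as np

# Smooth stand-in plant (not the hysteretic MSMA actuator of the paper).
plant = lambda y, u: 0.6 * y + 0.4 * u + 0.1 * np.sin(y)

# RBF identifier: input [u, y], Gaussian hidden units, linear output weights.
b = 1.5
centers = np.array([[cu, cy] for cu in (-1.0, 0.0, 1.0) for cy in (-1.0, 0.0, 1.0)])
w = np.zeros(len(centers))

kp, ki, kd = 0.3, 0.2, 0.05            # PID gains, tuned online
eta_pid, eta_id = 0.05, 0.2            # adaptation rates (assumed values)
y = u = e1 = e2 = 0.0
errs = []
for k in range(2000):
    r = 0.5 * np.sin(2 * np.pi * k / 200)                  # reference trajectory
    e = r - y
    du = kp * (e - e1) + ki * e + kd * (e - 2 * e1 + e2)   # incremental PID
    u = float(np.clip(u + du, -2.0, 2.0))
    y_next = plant(y, u)
    # Identifier step and Jacobian estimate dy/du from the RBF model.
    h = np.exp(-np.sum((np.array([u, y]) - centers) ** 2, axis=1) / (2 * b ** 2))
    w += eta_id * (y_next - w @ h) * h
    jac = np.sum(w * h * (centers[:, 0] - u)) / b ** 2
    # Gradient-style self-tuning of the gains through the identified Jacobian.
    kp += eta_pid * e * jac * (e - e1)
    ki += eta_pid * e * jac * e
    kd += eta_pid * e * jac * (e - 2 * e1 + e2)
    y, e2, e1 = y_next, e1, e
    errs.append(abs(e))
print("gains:", round(kp, 3), round(ki, 3), round(kd, 3),
      "  mean |e| over the last 200 steps:", round(float(np.mean(errs[-200:])), 4))
```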
Application of Adaptive Autopilot Designs for an Unmanned Aerial Vehicle
NASA Technical Reports Server (NTRS)
Shin, Yoonghyun; Calise, Anthony J.; Motter, Mark A.
2005-01-01
This paper summarizes the application of two adaptive approaches to autopilot design, and presents an evaluation and comparison of the two approaches in simulation for an unmanned aerial vehicle. One approach employs two-stage dynamic inversion and the other employs feedback dynamic inversion based on a command augmentation system. Both are augmented with neural-network-based adaptive elements. The approaches permit adaptation to both parametric uncertainty and unmodeled dynamics, and incorporate a method that permits adaptation during periods of control saturation. Simulation results for an FQM-117B radio-controlled miniature aerial vehicle are presented to illustrate the performance of the neural-network-based adaptation.
Spontaneous scale-free structure in adaptive networks with synchronously dynamical linking
NASA Astrophysics Data System (ADS)
Yuan, Wu-Jie; Zhou, Jian-Fang; Li, Qun; Chen, De-Bao; Wang, Zhen
2013-08-01
Inspired by the anti-Hebbian learning rule in neural systems, we study how the feedback from dynamical synchronization shapes network structure by adding new links. Through extensive numerical simulations, we find that an adaptive network spontaneously forms scale-free structure, as confirmed in many real systems. Moreover, the adaptive process produces two nontrivial power-law behaviors of deviation strength from mean activity of the network and negative degree correlation, which exists widely in technological and biological networks. Importantly, these scalings are robust to variation of the adaptive network parameters, which may have meaningful implications in the scale-free formation and manipulation of dynamical networks. Our study thus suggests an alternative adaptive mechanism for the formation of scale-free structure with negative degree correlation, which means that nodes of high degree tend to connect, on average, with others of low degree and vice versa. The relevance of the results to structure formation and dynamical property in neural networks is briefly discussed as well.
NASA Astrophysics Data System (ADS)
Zargarzadeh, H.; Nodland, David; Thotla, V.; Jagannathan, S.; Agarwal, S.
2012-06-01
Unmanned Aerial Vehicles (UAVs) are versatile aircraft with many applications, including the potential for use to detect unintended electromagnetic emissions from electronic devices. A particular area of recent interest has been helicopter unmanned aerial vehicles. Because of the nature of these helicopters' dynamics, high-performance controller design for them presents a challenge. This paper introduces an optimal controller design via output feedback control for trajectory tracking of a helicopter UAV using a neural network (NN). The output-feedback control system utilizes the backstepping methodology, employing kinematic, virtual, and dynamic controllers and an observer. Optimal tracking is accomplished with a single NN utilized for cost function approximation. The controller positions the helicopter, which is equipped with an antenna, such that the antenna can detect unintended emissions. The overall closed-loop system stability with the proposed controller is demonstrated by using Lyapunov analysis. Finally, results are provided to demonstrate the effectiveness of the proposed control design for positioning the helicopter for unintended emissions detection.
Mendes, César S; Bartos, Imre; Akay, Turgay; Márka, Szabolcs; Mann, Richard S
2013-01-01
Coordinated walking in vertebrates and multi-legged invertebrates such as Drosophila melanogaster requires a complex neural network coupled to sensory feedback. An understanding of this network will benefit from systems such as Drosophila that have the ability to genetically manipulate neural activities. However, the fly's small size makes it challenging to analyze walking in this system. In order to overcome this limitation, we developed an optical method coupled with high-speed imaging that allows the tracking and quantification of gait parameters in freely walking flies with high temporal and spatial resolution. Using this method, we present a comprehensive description of many locomotion parameters, such as gait, tarsal positioning, and intersegmental and left-right coordination for wild type fruit flies. Surprisingly, we find that inactivation of sensory neurons in the fly's legs, to block proprioceptive feedback, led to deficient step precision, but interleg coordination and the ability to execute a tripod gait were unaffected. DOI: http://dx.doi.org/10.7554/eLife.00231.001 PMID:23326642
Li, Yongming; Tong, Shaocheng
2017-06-28
In this paper, a decentralized adaptive control scheme based on neural networks (NNs) with prescribed performance is proposed for uncertain switched nonstrict-feedback interconnected nonlinear systems. It is assumed that the nonlinear interconnection terms and nonlinear functions of the concerned systems are unknown, and that the switching signals are unknown and arbitrary. A linear state estimator is constructed to solve the problem of unmeasured states. The NNs are employed to approximate the unknown interconnection terms and nonlinear functions. A new output feedback decentralized control scheme is developed by using the adaptive backstepping design technique. The control design problem of nonlinear interconnected switched systems with unknown switching signals can be solved by the proposed scheme, and only one tuning parameter is needed for each subsystem. The proposed scheme ensures that all variables of the control systems are semi-globally uniformly ultimately bounded and that the tracking errors converge to a small residual set with the prescribed performance bound. The effectiveness of the proposed control approach is verified by simulation results.
Output feedback control of a quadrotor UAV using neural networks.
Dierks, Travis; Jagannathan, Sarangapani
2010-01-01
In this paper, a new nonlinear controller for a quadrotor unmanned aerial vehicle (UAV) is proposed using neural networks (NNs) and output feedback. The assumption on the availability of UAV dynamics is not always practical, especially in an outdoor environment. Therefore, in this work, an NN is introduced to learn the complete dynamics of the UAV online, including uncertain nonlinear terms like aerodynamic friction and blade flapping. Although a quadrotor UAV is underactuated, a novel NN virtual control input scheme is proposed which allows all six degrees of freedom (DOF) of the UAV to be controlled using only four control inputs. Furthermore, an NN observer is introduced to estimate the translational and angular velocities of the UAV, and an output feedback control law is developed in which only the position and the attitude of the UAV are considered measurable. It is shown using Lyapunov theory that the position, orientation, and velocity tracking errors, the virtual control and observer estimation errors, and the NN weight estimation errors for each NN are all semiglobally uniformly ultimately bounded (SGUUB) in the presence of bounded disturbances and NN functional reconstruction errors while simultaneously relaxing the separation principle. The effectiveness of proposed output feedback control scheme is then demonstrated in the presence of unknown nonlinear dynamics and disturbances, and simulation results are included to demonstrate the theoretical conjecture.
A bilateral cortical network responds to pitch perturbations in speech feedback
Kort, Naomi S.; Nagarajan, Srikantan S.; Houde, John F.
2014-01-01
Auditory feedback is used to monitor and correct for errors in speech production, and one of the clearest demonstrations of this is the pitch perturbation reflex. During ongoing phonation, speakers respond rapidly to shifts of the pitch of their auditory feedback, altering their pitch production to oppose the direction of the applied pitch shift. In this study, we examine the timing of activity within a network of brain regions thought to be involved in mediating this behavior. To isolate auditory feedback processing relevant for motor control of speech, we used magnetoencephalography (MEG) to compare neural responses to speech onset and to transient (400ms) pitch feedback perturbations during speaking with responses to identical acoustic stimuli during passive listening. We found overlapping, but distinct bilateral cortical networks involved in monitoring speech onset and feedback alterations in ongoing speech. Responses to speech onset during speaking were suppressed in bilateral auditory and left ventral supramarginal gyrus/posterior superior temporal sulcus (vSMG/pSTS). In contrast, during pitch perturbations, activity was enhanced in bilateral vSMG/pSTS, bilateral premotor cortex, right primary auditory cortex, and left higher order auditory cortex. We also found speaking-induced delays in responses to both unaltered and altered speech in bilateral primary and secondary auditory regions, the left vSMG/pSTS and right premotor cortex. The network dynamics reveal the cortical processing involved in both detecting the speech error and updating the motor plan to create the new pitch output. These results implicate vSMG/pSTS as critical in both monitoring auditory feedback and initiating rapid compensation to feedback errors. PMID:24076223
Forbes, Chad E; Leitner, Jordan B
2014-10-01
Stereotype threat, a situational pressure individuals experience when they fear confirming a negative group stereotype, engenders a cascade of physiological stress responses, negative appraisals, and performance monitoring processes that tax working memory resources necessary for optimal performance. Less is known, however, about how stereotype threat biases attentional processing in response to performance feedback, and how such attentional biases may undermine performance. Women received feedback on math problems in stereotype threatening compared to stereotype-neutral contexts while continuous EEG activity was recorded. Findings revealed that stereotype threatened women elicited larger midline P100 ERPs, increased phase locking between anterior cingulate cortex and dorsolateral prefrontal cortex (two regions integral for attentional processes), and increased power in left fusiform gyrus in response to negative feedback compared to positive feedback and women in stereotype-neutral contexts. Increased power in left fusiform gyrus in response to negative feedback predicted underperformance on the math task among stereotype threatened women only. Women in stereotype-neutral contexts exhibited the opposite trend. Findings suggest that in stereotype threatening contexts, neural networks integral for attention and working memory are biased toward negative, stereotype confirming feedback at very early speeds of information processing. This bias, in turn, plays a role in undermining performance. Copyright © 2014 Elsevier B.V. All rights reserved.
Real-Time Adaptive Color Segmentation by Neural Networks
NASA Technical Reports Server (NTRS)
Duong, Tuan A.
2004-01-01
Artificial neural networks that would utilize the cascade error projection (CEP) algorithm have been proposed as means of autonomous, real-time, adaptive color segmentation of images that change with time. In the original intended application, such a neural network would be used to analyze digitized color video images of terrain on a remote planet as viewed from an uninhabited spacecraft approaching the planet. During descent toward the surface of the planet, information on the segmentation of the images into differently colored areas would be updated adaptively in real time to capture changes in contrast, brightness, and resolution, all in an effort to identify a safe and scientifically productive landing site and provide control feedback to steer the spacecraft toward that site. Potential terrestrial applications include monitoring images of crops to detect insect invasions and monitoring of buildings and other facilities to detect intruders. The CEP algorithm is reliable and is well suited to implementation in very-large-scale integrated (VLSI) circuitry. It was chosen over other neural-network learning algorithms because it is better suited to realtime learning: It provides a self-evolving neural-network structure, requires fewer iterations to converge and is more tolerant to low resolution (that is, fewer bits) in the quantization of neural-network synaptic weights. Consequently, a CEP neural network learns relatively quickly, and the circuitry needed to implement it is relatively simple. Like other neural networks, a CEP neural network includes an input layer, hidden units, and output units (see figure). As in other neural networks, a CEP network is presented with a succession of input training patterns, giving rise to a set of outputs that are compared with the desired outputs. Also as in other neural networks, the synaptic weights are updated iteratively in an effort to bring the outputs closer to target values. A distinctive feature of the CEP neural network and algorithm is that each update of synaptic weights takes place in conjunction with the addition of another hidden unit, which then remains in place as still other hidden units are added on subsequent iterations. For a given training pattern, the synaptic weight between (1) the inputs and the previously added hidden units and (2) the newly added hidden unit is updated by an amount proportional to the partial derivative of a quadratic error function with respect to the synaptic weight. The synaptic weight between the newly added hidden unit and each output unit is given by a more complex function that involves the errors between the outputs and their target values, the transfer functions (hyperbolic tangents) of the neural units, and the derivatives of the transfer functions.
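The constructive growth described above (each weight update accompanied by the addition of a hidden unit) can be illustrated with a generic NumPy sketch. This is not the CEP algorithm or its VLSI implementation; the toy regression target, learning rates, and unit count are assumptions, and the update is a plain gradient step on a quadratic error for the newly added unit only:

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 50)[:, None]
t = np.sin(3.0 * x[:, 0])                        # toy 1-D regression target

features = np.hstack([x, np.ones((50, 1))])      # inputs plus a bias column
w_out = np.linalg.lstsq(features, t, rcond=None)[0]

for unit in range(4):                            # add hidden units one at a time
    resid = t - features @ w_out
    v = rng.normal(scale=0.5, size=features.shape[1])   # incoming weights of the new unit
    a = 0.0                                      # its provisional output weight
    for step in range(3000):                     # gradient descent on 0.5*||resid - a*h||^2
        h = np.tanh(features @ v)
        e = resid - a * h
        grad_a = (e @ h) / len(t)
        grad_v = (features.T @ (a * e * (1.0 - h ** 2))) / len(t)
        a += 0.5 * grad_a
        v += 0.5 * grad_v
    # Freeze the unit; its output becomes an extra input for later units (cascade).
    features = np.hstack([features, np.tanh(features @ v)[:, None]])
    w_out = np.linalg.lstsq(features, t, rcond=None)[0]
    rmse = np.sqrt(np.mean((t - features @ w_out) ** 2))
    print(f"hidden units: {unit + 1}, training RMSE: {rmse:.4f}")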
Neuronal networks with NMDARs and lateral inhibition implement winner-takes-all
Shoemaker, Patrick A.
2015-01-01
A neural circuit that relies on the electrical properties of NMDA synaptic receptors is shown by numerical and theoretical analysis to be capable of realizing the winner-takes-all function, a powerful computational primitive that is often attributed to biological nervous systems. This biophysically-plausible model employs global lateral inhibition in a simple feedback arrangement. As its inputs increase, high-gain and then bi- or multi-stable equilibrium states may be assumed in which there is significant depolarization of a single neuron and hyperpolarization or very weak depolarization of other neurons in the network. The state of the winning neuron conveys analog information about its input. The winner-takes-all characteristic depends on the nonmonotonic current-voltage relation of NMDA receptor ion channels, as well as neural thresholding, and the gain and nature of the inhibitory feedback. Dynamical regimes vary with input strength. Fixed points may become unstable as the network enters a winner-takes-all regime, which can lead to entrained oscillations. Under some conditions, oscillatory behavior can be interpreted as winner-takes-all in nature. Stable winner-takes-all behavior is typically recovered as inputs increase further, but with still larger inputs, the winner-takes-all characteristic is ultimately lost. Network stability may be enhanced by biologically plausible mechanisms. PMID:25741276
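A minimal rate-model sketch of winner-takes-all through global inhibitory feedback is given below. It deliberately omits the NMDA-receptor current-voltage nonlinearity that the paper identifies as essential, so it only illustrates the lateral-inhibition part of the circuit; all constants are assumed:

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

inputs = np.array([1.0, 1.2, 0.9, 1.5, 1.1])     # the fourth neuron gets the largest drive
n = len(inputs)
r = np.zeros(n)                                  # excitatory firing rates
g = 0.0                                          # pooled (global) inhibitory activity
tau_e, tau_i, w_inh, theta, dt = 20.0, 5.0, 5.0, 0.2, 0.1   # assumed constants (ms)

for step in range(5000):
    r += dt / tau_e * (-r + relu(inputs - w_inh * g - theta))
    g += dt / tau_i * (-g + r.sum())             # global inhibitory feedback

print("steady-state rates:", np.round(r, 3))     # only the most strongly driven unit stays active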
Neural node network and model, and method of teaching same
Parlos, A.G.; Atiya, A.F.; Fernandez, B.; Tsai, W.K.; Chong, K.T.
1995-12-26
The present invention is a fully connected feed forward network that includes at least one hidden layer. The hidden layer includes nodes in which the output of the node is fed back to that node as an input with a unit delay produced by a delay device occurring in the feedback path (local feedback). Each node within each layer also receives a delayed output (crosstalk) produced by a delay unit from all the other nodes within the same layer. The node performs a transfer function operation based on the inputs from the previous layer and the delayed outputs. The network can be implemented as analog or digital or within a general purpose processor. Two teaching methods can be used: (1) back propagation of weight calculation that includes the local feedback and the crosstalk or (2) more preferably a feed forward gradient descent which immediately follows the output computations and which also includes the local feedback and the crosstalk. Subsequent to the gradient propagation, the weights can be normalized, thereby preventing convergence to a local optimum. Education of the network can be incremental both on and off-line. An educated network is suitable for modeling and controlling dynamic nonlinear systems and time series systems and predicting the outputs as well as hidden states and parameters. The educated network can also be further educated during on-line processing. 21 figs.
Neural node network and model, and method of teaching same
Parlos, Alexander G.; Atiya, Amir F.; Fernandez, Benito; Tsai, Wei K.; Chong, Kil T.
1995-01-01
The present invention is a fully connected feed forward network that includes at least one hidden layer 16. The hidden layer 16 includes nodes 20 in which the output of the node is fed back to that node as an input with a unit delay produced by a delay device 24 occurring in the feedback path 22 (local feedback). Each node within each layer also receives a delayed output (crosstalk) produced by a delay unit 36 from all the other nodes within the same layer 16. The node performs a transfer function operation based on the inputs from the previous layer and the delayed outputs. The network can be implemented as analog or digital or within a general purpose processor. Two teaching methods can be used: (1) back propagation of weight calculation that includes the local feedback and the crosstalk or (2) more preferably a feed forward gradient descent which immediately follows the output computations and which also includes the local feedback and the crosstalk. Subsequent to the gradient propagation, the weights can be normalized, thereby preventing convergence to a local optimum. Education of the network can be incremental both on and off-line. An educated network is suitable for modeling and controlling dynamic nonlinear systems and time series systems and predicting the outputs as well as hidden states and parameters. The educated network can also be further educated during on-line processing.
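The hidden layer described in both records, in which every node receives its own unit-delayed output (local feedback) plus the delayed outputs of all other nodes in the layer (crosstalk), can be written compactly. The NumPy sketch below shows only a forward pass with random placeholder weights; neither of the patented teaching methods is implemented:

import numpy as np

rng = np.random.default_rng(3)
n_in, n_hid, n_out, T = 3, 8, 1, 20

W_in = rng.normal(scale=0.5, size=(n_hid, n_in))
W_rec = rng.normal(scale=0.3, size=(n_hid, n_hid))   # diagonal = local feedback, off-diagonal = crosstalk
W_out = rng.normal(scale=0.5, size=(n_out, n_hid))

x_seq = rng.normal(size=(T, n_in))                   # an arbitrary input sequence
h_prev = np.zeros(n_hid)                             # unit-delayed layer outputs

outputs = []
for x in x_seq:
    h = np.tanh(W_in @ x + W_rec @ h_prev)           # node transfer function
    outputs.append(W_out @ h)
    h_prev = h                                       # one-step delay in the feedback path

print(np.array(outputs).ravel())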
Neural control of magnetic suspension systems
NASA Technical Reports Server (NTRS)
Gray, W. Steven
1993-01-01
The purpose of this research program is to design, build and test (in cooperation with NASA personnel from the NASA Langley Research Center) neural controllers for two different small air-gap magnetic suspension systems. The general objective of the program is to study neural network architectures for the purpose of control in an experimental setting and to demonstrate the feasibility of the concept. The specific objectives of the research program are: (1) to demonstrate through simulation and experimentation the feasibility of using neural controllers to stabilize a nonlinear magnetic suspension system; (2) to investigate through simulation and experimentation the performance of neural controllers designs under various types of parametric and nonparametric uncertainty; (3) to investigate through simulation and experimentation various types of neural architectures for real-time control with respect to performance and complexity; and (4) to benchmark in an experimental setting the performance of neural controllers against other types of existing linear and nonlinear compensator designs. To date, the first one-dimensional, small air-gap magnetic suspension system has been built, tested and delivered to the NASA Langley Research Center. The device is currently being stabilized with a digital linear phase-lead controller. The neural controller hardware is under construction. Two different neural network paradigms are under consideration, one based on hidden layer feedforward networks trained via back propagation and one based on using Gaussian radial basis functions trained by analytical methods related to stability conditions. Some advanced nonlinear control algorithms using feedback linearization and sliding mode control are in simulation studies.
Simulation tests of the optimization method of Hopfield and Tank using neural networks
NASA Technical Reports Server (NTRS)
Paielli, Russell A.
1988-01-01
The method proposed by Hopfield and Tank for using the Hopfield neural network with continuous valued neurons to solve the traveling salesman problem is tested by simulation. Several researchers have apparently been unable to successfully repeat the numerical simulation documented by Hopfield and Tank. However, as suggested to the author by Adams, it appears that the reason for those difficulties is that a key parameter value is reported erroneously (by four orders of magnitude) in the original paper. When a reasonable value is used for that parameter, the network performs generally as claimed. Additionally, a new method of using feedback to control the input bias currents to the amplifiers is proposed and successfully tested. This eliminates the need to set the input currents by trial and error.
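For reference, a compact NumPy sketch of the continuous Hopfield-Tank dynamics for a small traveling salesman instance is given below. The parameter values are illustrative defaults, not the corrected value alluded to in the abstract, and convergence to a valid tour is not guaranteed for every seed:

import numpy as np

rng = np.random.default_rng(4)
n = 5                                              # cities
city = rng.random((n, 2))
d = np.linalg.norm(city[:, None, :] - city[None, :, :], axis=-1)

A, B, C, D_w, u0, tau, dt = 500.0, 500.0, 200.0, 500.0, 0.02, 1.0, 1e-6
u = 0.02 * np.arctanh(2.0 / n - 1.0) + 0.002 * rng.standard_normal((n, n))

for step in range(50000):
    V = 0.5 * (1.0 + np.tanh(u / u0))              # V[x, i]: city x at tour position i
    row = V.sum(axis=1, keepdims=True) - V         # same city, other positions
    col = V.sum(axis=0, keepdims=True) - V         # same position, other cities
    glob = V.sum() - n
    neigh = np.roll(V, -1, axis=1) + np.roll(V, 1, axis=1)
    dE = A * row + B * col + C * glob + D_w * (d @ neigh)
    u += dt * (-u / tau - dE)

print(np.round(V, 2))                              # a valid tour has a single 1 per row and per column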
Clarke, Aaron M.; Herzog, Michael H.; Francis, Gregory
2014-01-01
Experimentalists tend to classify models of visual perception as being either local or global, and involving either feedforward or feedback processing. We argue that these distinctions are not as helpful as they might appear, and we illustrate these issues by analyzing models of visual crowding as an example. Recent studies have argued that crowding cannot be explained by purely local processing, but that instead, global factors such as perceptual grouping are crucial. Theories of perceptual grouping, in turn, often invoke feedback connections as a way to account for their global properties. We examined three types of crowding models that are representative of global processing models, and two of which employ feedback processing: a model based on Fourier filtering, a feedback neural network, and a specific feedback neural architecture that explicitly models perceptual grouping. Simulations demonstrate that crucial empirical findings are not accounted for by any of the models. We conclude that empirical investigations that reject a local or feedforward architecture offer almost no constraints for model construction, as there are an uncountable number of global and feedback systems. We propose that the identification of a system as being local or global and feedforward or feedback is less important than the identification of a system's computational details. Only the latter information can provide constraints on model development and promote quantitative explanations of complex phenomena. PMID:25374554
Feedback Enhances Feedforward Figure-Ground Segmentation by Changing Firing Mode
Supèr, Hans; Romeo, August
2011-01-01
In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback the firing pattern changed into a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses resulting in a stronger figure-ground signal. Such a push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogeneous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward projecting neurons. PMID:21738747
Dordek, Yedidyah; Soudry, Daniel; Meir, Ron; Derdikman, Dori
2016-01-01
Many recent models study the downstream projection from grid cells to place cells, while recent data have pointed out the importance of the feedback projection. We thus asked how grid cells are affected by the nature of the input from the place cells. We propose a single-layer neural network with feedforward weights connecting place-like input cells to grid cell outputs. Place-to-grid weights are learned via a generalized Hebbian rule. The architecture of this network highly resembles neural networks used to perform Principal Component Analysis (PCA). Both numerical results and analytic considerations indicate that if the components of the feedforward neural network are non-negative, the output converges to a hexagonal lattice. Without the non-negativity constraint, the output converges to a square lattice. Consistent with experiments, grid spacing ratio between the first two consecutive modules is ∼1.4. Our results express a possible linkage between place cell to grid cell interactions and PCA. DOI: http://dx.doi.org/10.7554/eLife.10094.001 PMID:26952211
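The place-to-grid learning rule referred to above is a generalized Hebbian rule; the sketch below implements Sanger's generalized Hebbian algorithm with an optional non-negativity constraint on the weights. The input statistics are synthetic Gaussian placeholders rather than modeled place-cell activity, so no hexagonal structure should be expected from this toy:

import numpy as np

rng = np.random.default_rng(5)
n_in, n_out, eta = 20, 3, 2e-3

mix = rng.normal(size=(n_in, n_in))        # synthetic correlated inputs, a placeholder
cov = mix @ mix.T / n_in                   # for place-like input statistics

def train(non_negative, n_steps=50000):
    W = 0.1 * rng.normal(size=(n_out, n_in))
    for _ in range(n_steps):
        x = rng.multivariate_normal(np.zeros(n_in), cov)
        y = W @ x
        # Sanger's generalized Hebbian rule: dW = eta * (y x^T - LT[y y^T] W)
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        if non_negative:
            W = np.maximum(W, 0.0)         # non-negativity constraint on the weights
    return W

W_pca = train(non_negative=False)
eigvecs = np.linalg.eigh(cov)[1][:, ::-1]  # principal directions, descending variance
align = np.abs(W_pca @ eigvecs[:, :n_out]) / np.linalg.norm(W_pca, axis=1, keepdims=True)
print("|cosine| between learned rows and top principal components:\n", np.round(align, 2))
print("non-negative run, smallest weight:", train(non_negative=True).min())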
Impulsive stabilization and impulsive synchronization of discrete-time delayed neural networks.
Chen, Wu-Hua; Lu, Xiaomei; Zheng, Wei Xing
2015-04-01
This paper investigates the problems of impulsive stabilization and impulsive synchronization of discrete-time delayed neural networks (DDNNs). Two types of DDNNs with stabilizing impulses are studied. By introducing the time-varying Lyapunov functional to capture the dynamical characteristics of discrete-time impulsive delayed neural networks (DIDNNs) and by using a convex combination technique, new exponential stability criteria are derived in terms of linear matrix inequalities. The stability criteria for DIDNNs are independent of the size of time delay but rely on the lengths of impulsive intervals. With the newly obtained stability results, sufficient conditions on the existence of linear-state feedback impulsive controllers are derived. Moreover, a novel impulsive synchronization scheme for two identical DDNNs is proposed. The novel impulsive synchronization scheme allows synchronizing two identical DDNNs with unknown delays. Simulation results are given to validate the effectiveness of the proposed criteria of impulsive stabilization and impulsive synchronization of DDNNs. Finally, an application of the obtained impulsive synchronization result for two identical chaotic DDNNs to a secure communication scheme is presented.
Bakkum, Douglas J.; Gamblen, Philip M.; Ben-Ary, Guy; Chao, Zenas C.; Potter, Steve M.
2007-01-01
Here, we and others describe an unusual neurorobotic project, a merging of art and science called MEART, the semi-living artist. We built a pneumatically actuated robotic arm to create drawings, as controlled by a living network of neurons from rat cortex grown on a multi-electrode array (MEA). Such embodied cultured networks formed a real-time closed-loop system which could now behave and receive electrical stimulation as feedback on its behavior. We used MEART and simulated embodiments, or animats, to study the network mechanisms that produce adaptive, goal-directed behavior. This approach to neural interfacing will help instruct the design of other hybrid neural-robotic systems we call hybrots. The interfacing technologies and algorithms developed have potential applications in responsive deep brain stimulation systems and for motor prosthetics using sensory components. In a broader context, MEART educates the public about neuroscience, neural interfaces, and robotics. It has paved the way for critical discussions on the future of bio-art and of biotechnology. PMID:18958276
The neural circuit and synaptic dynamics underlying perceptual decision-making
NASA Astrophysics Data System (ADS)
Liu, Feng
2015-03-01
Decision-making with several choice options is central to cognition. To elucidate the neural mechanisms of multiple-choice motion discrimination, we built a continuous recurrent network model to represent a local circuit in the lateral intraparietal area (LIP). The network is composed of pyramidal cells and interneurons, which are directionally tuned. All neurons are reciprocally connected, and the synaptic connectivity strength is heterogeneous. Specifically, we assume two types of inhibitory connectivity to pyramidal cells: opposite-feature and similar-feature inhibition. The model accounted for both physiological and behavioral data from monkey experiments. The network is endowed with slow excitatory reverberation, which subserves the buildup and maintenance of persistent neural activity, and predominant feedback inhibition, which underlies the winner-take-all competition and attractor dynamics. The opposite-feature and similar-feature inhibition have different effects on decision-making, and only their combination allows for a categorical choice among 12 alternatives. Together, our work highlights the importance of structured synaptic inhibition in multiple-choice decision-making processes.
Signal processing and neural network toolbox and its application to failure diagnosis and prognosis
NASA Astrophysics Data System (ADS)
Tu, Fang; Wen, Fang; Willett, Peter K.; Pattipati, Krishna R.; Jordan, Eric H.
2001-07-01
Many systems are comprised of components equipped with self-testing capability; however, if the system is complex involving feedback and the self-testing itself may occasionally be faulty, tracing faults to a single or multiple causes is difficult. Moreover, many sensors are incapable of reliable decision-making on their own. In such cases, a signal processing front-end that can match inference needs will be very helpful. The work is concerned with providing an object-oriented simulation environment for signal processing and neural network-based fault diagnosis and prognosis. In the toolbox, we implemented a wide range of spectral and statistical manipulation methods such as filters, harmonic analyzers, transient detectors, and multi-resolution decomposition to extract features for failure events from data collected by data sensors. Then we evaluated multiple learning paradigms for general classification, diagnosis and prognosis. The network models evaluated include Restricted Coulomb Energy (RCE) Neural Network, Learning Vector Quantization (LVQ), Decision Trees (C4.5), Fuzzy Adaptive Resonance Theory (FuzzyArtmap), Linear Discriminant Rule (LDR), Quadratic Discriminant Rule (QDR), Radial Basis Functions (RBF), Multiple Layer Perceptrons (MLP) and Single Layer Perceptrons (SLP). Validation techniques, such as N-fold cross-validation and bootstrap techniques, are employed for evaluating the robustness of network models. The trained networks are evaluated for their performance using test data on the basis of percent error rates obtained via cross-validation, time efficiency, generalization ability to unseen faults. Finally, the usage of neural networks for the prediction of residual life of turbine blades with thermal barrier coatings is described and the results are shown. The neural network toolbox has also been applied to fault diagnosis in mixed-signal circuits.
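The kind of N-fold cross-validated comparison described above can be sketched with scikit-learn on synthetic data, as below. Standard library classifiers stand in for some of the listed paradigms (MLP, decision tree, LDA/QDA); RCE, LVQ, and FuzzyArtmap have no scikit-learn equivalents and are omitted, and the data are not the turbine-blade or circuit data of the paper:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)   # synthetic "fault feature" data

models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "LDR": LinearDiscriminantAnalysis(),
    "QDR": QuadraticDiscriminantAnalysis(),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)           # 10-fold cross-validation
    print(f"{name:15s} error rate: {100 * (1 - scores.mean()):.1f}%")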
Hypersonic Vehicle Trajectory Optimization and Control
NASA Technical Reports Server (NTRS)
Balakrishnan, S. N.; Shen, J.; Grohs, J. R.
1997-01-01
Two classes of neural networks have been developed for the study of hypersonic vehicle trajectory optimization and control. The first one is called an 'adaptive critic'. The uniqueness and main features of this approach are that: (1) they need no external training; (2) they allow variability of initial conditions; and (3) they can serve as feedback control. This is used to solve a 'free final time' two-point boundary value problem that maximizes the mass at the rocket burn-out while satisfying the pre-specified burn-out conditions in velocity, flightpath angle, and altitude. The second neural network is a recurrent network. An interesting feature of this network formulation is that when its inputs are the coefficients of the dynamics and control matrices, the network outputs are the Kalman sequences (with a quadratic cost function); the same network is also used for identifying the coefficients of the dynamics and control matrices. Consequently, we can use it to control a system whose parameters are uncertain. Numerical results are presented which illustrate the potential of these methods.
Sensory-motor interactions for vocal pitch monitoring in non-primary human auditory cortex.
Greenlee, Jeremy D W; Behroozmand, Roozbeh; Larson, Charles R; Jackson, Adam W; Chen, Fangxiang; Hansen, Daniel R; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A
2013-01-01
The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from auditory cortex of 10 human subjects while they vocalized and received brief downward (-100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded when subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70-150 Hz) range, in focal areas of non-primary auditory cortex on superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. From these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies may be involved in this process, and modulation of AEP and high gamma responses imply that such modulatory effects may affect different cortical generators within distinctive functional networks that drive voice production and control.
Sensory-Motor Interactions for Vocal Pitch Monitoring in Non-Primary Human Auditory Cortex
Larson, Charles R.; Jackson, Adam W.; Chen, Fangxiang; Hansen, Daniel R.; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A.
2013-01-01
The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from auditory cortex of 10 human subjects while they vocalized and received brief downward (−100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded when subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70–150 Hz) range, in focal areas of non-primary auditory cortex on superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. From these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies may be involved in this process, and modulation of AEP and high gamma responses imply that such modulatory effects may affect different cortical generators within distinctive functional networks that drive voice production and control. PMID:23577157
Learning and optimization with cascaded VLSI neural network building-block chips
NASA Technical Reports Server (NTRS)
Duong, T.; Eberhardt, S. P.; Tran, M.; Daud, T.; Thakoor, A. P.
1992-01-01
To demonstrate the versatility of the building-block approach, two neural network applications were implemented on cascaded analog VLSI chips. Weights were implemented using 7-b multiplying digital-to-analog converter (MDAC) synapse circuits, with 31 x 32 and 32 x 32 synapses per chip. A novel learning algorithm compatible with analog VLSI was applied to the two-input parity problem. The algorithm combines dynamically evolving architecture with limited gradient-descent backpropagation for efficient and versatile supervised learning. To implement the learning algorithm in hardware, synapse circuits were paralleled for additional quantization levels. The hardware-in-the-loop learning system allocated 2-5 hidden neurons for parity problems. Also, a 7 x 7 assignment problem was mapped onto a cascaded 64-neuron fully connected feedback network. In 100 randomly selected problems, the network found optimal or good solutions in most cases, with settling times in the range of 7-100 microseconds.
Towards autonomous neuroprosthetic control using Hebbian reinforcement learning.
Mahmoudi, Babak; Pohlmeyer, Eric A; Prins, Noeline W; Geng, Shijia; Sanchez, Justin C
2013-12-01
Our goal was to design an adaptive neuroprosthetic controller that could learn the mapping from neural states to prosthetic actions and automatically adjust adaptation using only a binary evaluative feedback as a measure of desirability/undesirability of performance. Hebbian reinforcement learning (HRL) in a connectionist network was used for the design of the adaptive controller. The method combines the efficiency of supervised learning with the generality of reinforcement learning. The convergence properties of this approach were studied using both closed-loop control simulations and open-loop simulations that used primate neural data from robot-assisted reaching tasks. The HRL controller was able to perform classification and regression tasks using its episodic and sequential learning modes, respectively. In our experiments, the HRL controller quickly achieved convergence to an effective control policy, followed by robust performance. The controller also automatically stopped adapting the parameters after converging to a satisfactory control policy. Additionally, when the input neural vector was reorganized, the controller resumed adaptation to maintain performance. By estimating an evaluative feedback directly from the user, the HRL control algorithm may provide an efficient method for autonomous adaptation of neuroprosthetic systems. This method may enable the user to teach the controller the desired behavior using only a simple feedback signal.
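The core idea of a binary evaluative signal gating a Hebbian-style update can be illustrated with the toy below. It is a generic reward-modulated update on a synthetic classification task, not the authors' HRL controller with its episodic and sequential modes or its primate neural data:

import numpy as np

rng = np.random.default_rng(6)
n_in, n_act, eta = 10, 2, 0.05

W_true = rng.normal(size=(n_act, n_in))          # defines which action is "correct"
W = np.zeros((n_act, n_in))                      # the network being trained

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

correct = []
for trial in range(3000):
    x = rng.normal(size=n_in)                    # stand-in for a decoded neural state
    target = int(np.argmax(W_true @ x))
    p = softmax(W @ x)
    action = rng.choice(n_act, p=p)              # exploratory action selection
    r = 1.0 if action == target else -1.0        # binary evaluative feedback only
    post = np.eye(n_act)[action]                 # activity of the chosen action unit
    W += eta * r * np.outer(post - p, x)         # reward-gated Hebbian-style update
    correct.append(r > 0)

print("accuracy over first 300 trials:", np.mean(correct[:300]))
print("accuracy over last 300 trials: ", np.mean(correct[-300:]))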
Barman, Adriana; Richter, Sylvia; Soch, Joram; Deibele, Anna; Richter, Anni; Assmann, Anne; Wüstenberg, Torsten; Walter, Henrik; Seidenbecher, Constanze I.
2015-01-01
Autism spectrum disorder refers to a neurodevelopmental condition primarily characterized by deficits in social cognition and behavior. Subclinically, autistic features are supposed to be present in healthy humans and can be quantified using the Autism Quotient (AQ). Here, we investigated a potential relationship between AQ and neural correlates of social and monetary reward processing, using functional magnetic resonance imaging in young, healthy participants. In an incentive delay task with either monetary or social reward, reward anticipation elicited increased ventral striatal activation, which was more pronounced during monetary reward anticipation. Anticipation of social reward elicited activation in the default mode network (DMN), a network previously implicated in social processing. Social reward feedback was associated with bilateral amygdala and fusiform face area activation. The relationship between AQ and neural correlates of social reward processing varied in a gender-dependent manner. In women and, to a lesser extent in men, higher AQ was associated with increased posterior DMN activation during social reward anticipation. During feedback, we observed a negative correlation of AQ and right amygdala activation in men only. Our results suggest that social reward processing might constitute an endophenotype for autism-related traits in healthy humans that manifests in a gender-specific way. PMID:25944965
Cai, Zuowei; Huang, Lihong; Guo, Zhenyuan; Zhang, Lingling; Wan, Xuting
2015-08-01
This paper is concerned with the periodic synchronization problem for a general class of delayed neural networks (DNNs) with discontinuous neuron activation. One of the purposes is to analyze the problem of periodic orbits. To do so, we introduce new tools including inequality techniques and Kakutani's fixed point theorem of set-valued maps to derive the existence of periodic solution. Another purpose is to design a switching state-feedback control for realizing global exponential synchronization of the drive-response network system with periodic coefficients. Unlike the previous works on periodic synchronization of neural network, both the neuron activations and controllers in this paper are allowed to be discontinuous. Moreover, owing to the occurrence of delays in neuron signal, the neural network model is described by the functional differential equation. So we introduce extended Filippov-framework to deal with the basic issues of solutions for discontinuous DNNs. Finally, two examples and simulation experiments are given to illustrate the proposed method and main results which have an important instructional significance in the design of periodic synchronized DNNs circuits involving discontinuous or switching factors. Copyright © 2015 Elsevier Ltd. All rights reserved.
Dulla, Chris G.; Coulter, Douglas A.; Ziburkus, Jokubas
2015-01-01
Complex circuitry with feed-forward and feed-back systems regulate neuronal activity throughout the brain. Cell biological, electrical, and neurotransmitter systems enable neural networks to process and drive the entire spectrum of cognitive, behavioral, and motor functions. Simultaneous orchestration of distinct cells and interconnected neural circuits relies on hundreds, if not thousands, of unique molecular interactions. Even single molecule dysfunctions can be disrupting to neural circuit activity, leading to neurological pathology. Here, we sample our current understanding of how molecular aberrations lead to disruptions in networks using three neurological pathologies as exemplars: epilepsy, traumatic brain injury (TBI), and Alzheimer’s disease (AD). Epilepsy provides a window into how total destabilization of network balance can occur. TBI is an abrupt physical disruption that manifests in both acute and chronic neurological deficits. Last, in AD progressive cell loss leads to devastating cognitive consequences. Interestingly, all three of these neurological diseases are interrelated. The goal of this review, therefore, is to identify molecular changes that may lead to network dysfunction, elaborate on how altered network activity and circuit structure can contribute to neurological disease, and suggest common threads that may lie at the heart of molecular circuit dysfunction. PMID:25948650
Dulla, Chris G; Coulter, Douglas A; Ziburkus, Jokubas
2016-06-01
Complex circuitry with feed-forward and feed-back systems regulate neuronal activity throughout the brain. Cell biological, electrical, and neurotransmitter systems enable neural networks to process and drive the entire spectrum of cognitive, behavioral, and motor functions. Simultaneous orchestration of distinct cells and interconnected neural circuits relies on hundreds, if not thousands, of unique molecular interactions. Even single molecule dysfunctions can be disrupting to neural circuit activity, leading to neurological pathology. Here, we sample our current understanding of how molecular aberrations lead to disruptions in networks using three neurological pathologies as exemplars: epilepsy, traumatic brain injury (TBI), and Alzheimer's disease (AD). Epilepsy provides a window into how total destabilization of network balance can occur. TBI is an abrupt physical disruption that manifests in both acute and chronic neurological deficits. Last, in AD progressive cell loss leads to devastating cognitive consequences. Interestingly, all three of these neurological diseases are interrelated. The goal of this review, therefore, is to identify molecular changes that may lead to network dysfunction, elaborate on how altered network activity and circuit structure can contribute to neurological disease, and suggest common threads that may lie at the heart of molecular circuit dysfunction. © The Author(s) 2015.
Hybrid neural network for density limit disruption prediction and avoidance on J-TEXT tokamak
NASA Astrophysics Data System (ADS)
Zheng, W.; Hu, F. R.; Zhang, M.; Chen, Z. Y.; Zhao, X. Q.; Wang, X. L.; Shi, P.; Zhang, X. L.; Zhang, X. Q.; Zhou, Y. N.; Wei, Y. N.; Pan, Y.; J-TEXT team
2018-05-01
Increasing the plasma density is one of the key methods in achieving an efficient fusion reaction. High-density operation is one of the hot topics in tokamak plasmas. Density limit disruptions remain an important issue for safe operation. An effective density limit disruption prediction and avoidance system is the key to avoiding density limit disruptions in long pulse steady state operations. An artificial neural network has been developed for the prediction of density limit disruptions on the J-TEXT tokamak. The neural network has been improved from a simple multi-layer design to a hybrid two-stage structure. The first stage is a custom network which uses time series diagnostics as inputs to predict plasma density, and the second stage is a three-layer feedforward neural network to predict the probability of density limit disruptions. It is found that the hybrid neural network structure, combined with radiation profile information as an input, can significantly improve the prediction performance, especially the average warning time (T_warn). In particular, T_warn is eight times better than that in previous work (Wang et al 2016 Plasma Phys. Control. Fusion 58 055014) (from 5 ms to 40 ms). The success rate for density limit disruptive shots is above 90%, while the false alarm rate for other shots is below 10%. Based on the density limit disruption prediction system and the real-time density feedback control system, the on-line density limit disruption avoidance system has been implemented on the J-TEXT tokamak.
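The two-stage structure (a first network mapping diagnostic time series to a density estimate, feeding a three-layer feedforward classifier that outputs a disruption probability) can be sketched as follows. The data, the "radiation profile" surrogate, and the layer sizes are synthetic placeholders, not the J-TEXT implementation:

import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(7)
n_shots, win = 2000, 16

diag = rng.normal(size=(n_shots, win))                   # windows of a diagnostic time series
density = diag[:, -8:].mean(axis=1) + 0.1 * rng.normal(size=n_shots)
disrupt = (density + 0.2 * rng.normal(size=n_shots) > 0.2).astype(int)   # toy "density limit"

# Stage 1: custom network mapping the raw time-series window to a density estimate.
stage1 = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
stage1.fit(diag[:1500], density[:1500])
pred_density = stage1.predict(diag)

# Stage 2: three-layer feedforward net (one hidden layer, counting input and output
# layers) fed with the predicted density plus a crude radiation-profile surrogate.
rad_profile = diag[:, :4]                                # stand-in for profile information
features = np.column_stack([pred_density, rad_profile])
stage2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
stage2.fit(features[:1500], disrupt[:1500])

proba = stage2.predict_proba(features[1500:])[:, 1]      # disruption probability; alarm above a threshold
print("held-out accuracy:", (proba.round().astype(int) == disrupt[1500:]).mean())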
Implications of Behavioral Architecture for the Evolution of Self-Organized Division of Labor
Duarte, A.; Scholtens, E.; Weissing, F. J.
2012-01-01
Division of labor has been studied separately from a proximate self-organization and an ultimate evolutionary perspective. We aim to bring together these two perspectives. So far this has been done by choosing a behavioral mechanism a priori and considering the evolution of the properties of this mechanism. Here we use artificial neural networks to allow for a more open architecture. We study whether emergent division of labor can evolve in two different network architectures; a simple feedforward network, and a more complex network that includes the possibility of self-feedback from previous experiences. We focus on two aspects of division of labor; worker specialization and the ratio of work performed for each task. Colony fitness is maximized by both reducing idleness and achieving a predefined optimal work ratio. Our results indicate that architectural constraints play an important role for the outcome of evolution. With the simplest network, only genetically determined specialization is possible. This imposes several limitations on worker specialization. Moreover, in order to minimize idleness, networks evolve a biased work ratio, even when an unbiased work ratio would be optimal. By adding self-feedback to the network we increase the network's flexibility and worker specialization evolves under a wider parameter range. Optimal work ratios are more easily achieved with the self-feedback network, but still provide a challenge when combined with worker specialization. PMID:22457609
Decoupling control of vehicle chassis system based on neural network inverse system
NASA Astrophysics Data System (ADS)
Wang, Chunyan; Zhao, Wanzhong; Luan, Zhongkai; Gao, Qi; Deng, Ke
2018-06-01
Steering and suspension are two important subsystems affecting the handling stability and riding comfort of the chassis system. In order to avoid the interference and coupling of the control channels between active front steering (AFS) and active suspension subsystems (ASS), this paper presents a composite decoupling control method, which consists of a neural network inverse system and a robust controller. The neural network inverse system is composed of a static neural network with several integrators and state feedback of the original chassis system to approach the inverse system of the nonlinear systems. The existence of the inverse system for the chassis system is proved by the reversibility derivation of Interactor algorithm. The robust controller is based on the internal model control (IMC), which is designed to improve the robustness and anti-interference of the decoupled system by adding a pre-compensation controller to the pseudo linear system. The results of the simulation and vehicle test show that the proposed decoupling controller has excellent decoupling performance, which can transform the multivariable system into a number of single input and single output systems, and eliminate the mutual influence and interference. Furthermore, it has satisfactory tracking capability and robust performance, which can improve the comprehensive performance of the chassis system.
On the Role of Sensory Feedbacks in Rowat–Selverston CPG to Improve Robot Legged Locomotion
Amrollah, Elmira; Henaff, Patrick
2010-01-01
This paper presents the use of a Rowat–Selverston-type central pattern generator (CPG) to control locomotion. It focuses on the role of afferent exteroceptive and proprioceptive signals in the dynamic phase synchronization of CPG-controlled legged robots. The sensorimotor neural network architecture is evaluated to control a two-joint planar robot leg that slips on a rail. The closed loop between the CPG and the mechanical system then allows the modulation of rhythmic patterns and the effect of the sensing loop via sensory neurons to be studied during the locomotion task. First, simulations show that the proposed architecture easily allows modulation of the rhythmic patterns of the leg, and therefore of the velocity of the robot. Second, simulations show that sensory feedback from foot/ground contact of the leg makes the hip velocity smoother and larger. The results show that the Rowat–Selverston-type CPG with sensory feedback is an effective choice for building adaptive neural CPGs for legged robots. PMID:21228904
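The sketch below uses a Hopf limit-cycle oscillator with an additive periodic "ground contact" afferent as a generic stand-in for a sensory-driven CPG unit; it is not the Rowat–Selverston cell model used in the paper. It only illustrates how a feedback signal can pull the rhythm of an oscillator toward the afferent frequency:

import numpy as np

def simulate(K, f_fb=1.15, mu=1.0, omega=2.0 * np.pi, dt=1e-3, T=30.0):
    x, y = 1.0, 0.0
    n = int(T / dt)
    xs = np.empty(n)
    for i in range(n):
        t = i * dt
        feedback = K * np.cos(2.0 * np.pi * f_fb * t)    # idealized contact afferent
        r2 = x * x + y * y
        dx = (mu - r2) * x - omega * y + feedback
        dy = (mu - r2) * y + omega * x
        x, y = x + dt * dx, y + dt * dy
        xs[i] = x
    tail = xs[n // 3:]                                   # discard the transient
    crossings = np.sum((tail[:-1] < 0) & (tail[1:] >= 0))
    return (T - T / 3.0) / crossings                     # mean period over the tail

print(f"period without feedback: {simulate(K=0.0):.3f} s")   # near the natural 1.0 s period
print(f"period with feedback:    {simulate(K=3.0):.3f} s")   # pulled toward the 1/1.15 s afferent period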
Observer-Based Adaptive Fault-Tolerant Tracking Control of Nonlinear Nonstrict-Feedback Systems.
Wu, Chengwei; Liu, Jianxing; Xiong, Yongyang; Wu, Ligang
2017-06-28
This paper studies an output-based adaptive fault-tolerant control problem for nonlinear systems with nonstrict-feedback form. Neural networks are utilized to identify the unknown nonlinear characteristics in the system. An observer and a general fault model are constructed to estimate the unavailable states and describe the fault, respectively. Adaptive parameters are constructed to overcome the difficulties in the design process for nonstrict-feedback systems. Meanwhile, dynamic surface control technique is introduced to avoid the problem of ''explosion of complexity''. Furthermore, based on adaptive backstepping control method, an output-based adaptive neural tracking control strategy is developed for the considered system against actuator fault, which can ensure that all the signals in the resulting closed-loop system are bounded, and the system output signal can be regulated to follow the response of the given reference signal with a small error. Finally, the simulation results are provided to validate the effectiveness of the control strategy proposed in this paper.
Heikkinen, Hanna; Sharifian, Fariba; Vigario, Ricardo; Vanni, Simo
2015-07-01
The blood oxygenation level-dependent (BOLD) response has been strongly associated with neuronal activity in the brain. However, some neuronal tuning properties are consistently different from the BOLD response. We studied the spatial extent of neural and hemodynamic responses in the primary visual cortex, where the BOLD responses spread and interact over much longer distances than the small receptive fields of individual neurons would predict. Our model shows that a feedforward-feedback loop between V1 and a higher visual area can account for the observed spread of the BOLD response. In particular, anisotropic landing of inputs to compartmental neurons was necessary to account for the BOLD signal spread, while retaining realistic spiking responses. Our work shows that simple dendrites can separate tuning at the synapses and at the action potential output, thus bridging the BOLD signal to the neural receptive fields with high fidelity. Copyright © 2015 the American Physiological Society.
Adaptive Neural Tracking Control for Switched High-Order Stochastic Nonlinear Systems.
Zhao, Xudong; Wang, Xinyong; Zong, Guangdeng; Zheng, Xiaolong
2017-10-01
This paper deals with adaptive neural tracking control design for a class of switched high-order stochastic nonlinear systems with unknown uncertainties and arbitrary deterministic switching. The issues considered are: 1) completely unknown uncertainties; 2) stochastic disturbances; and 3) a high-order nonstrict-feedback system structure. The considered mathematical models can represent many practical systems in engineering. By exploiting the approximation ability of neural networks, the common stochastic Lyapunov function method, and an improved adding-a-power-integrator technique, an adaptive state feedback controller with multiple adaptive laws is systematically designed for the systems. Subsequently, a controller with only two adaptive laws is proposed to solve the problem of overparameterization. Under the designed controllers, all the signals in the closed-loop system are bounded-input bounded-output stable in probability, and the system output can almost surely track the target trajectory within a specified bounded error. Finally, simulation results are presented to show the effectiveness of the proposed approaches.
Ferrante, Simona; Pedrocchi, Alessandra; Iannò, Marco; De Momi, Elena; Ferrarin, Maurizio; Ferrigno, Giancarlo
2004-01-01
This study falls within the ambit of research on functional electrical stimulation for the design of rehabilitation training for spinal cord injured patients. In this context, a crucial issue is the control of the stimulation parameters in order to optimize the patterns of muscle activation and to increase the duration of the exercises. An adaptive control system (NEURADAPT) based on artificial neural networks (ANNs) was developed to control the knee joint in accordance with desired trajectories by stimulating the quadriceps muscles. This strategy includes an inverse neural model of the stimulated limb in the feedforward line and a neural network trained on-line in the feedback loop. NEURADAPT was compared with a linear closed-loop proportional-integral-derivative (PID) controller and with a model-based neural controller (NEUROPID). Experiments on two subjects (one healthy and one paraplegic) show the good performance of NEURADAPT, which is able to reduce the time lag introduced by the PID controller. In addition, control systems based on ANN techniques do not require complicated calibration procedures at the beginning of each experimental session. After the initial learning phase, the ANN, thanks to its generalization capacity, is able to cope with a certain range of variability in skeletal muscle properties.
GA-based fuzzy reinforcement learning for control of a magnetic bearing system.
Lin, C T; Jou, C P
2000-01-01
This paper proposes a TD (temporal difference) and GA (genetic algorithm)-based reinforcement (TDGAR) learning method and applies it to the control of a real magnetic bearing system. The TDGAR learning scheme is a new hybrid GA, which integrates the TD prediction method and the GA to perform the reinforcement learning task. The TDGAR learning system is composed of two integrated feedforward networks. One neural network acts as a critic network to guide the learning of the other network (the action network) which determines the outputs (actions) of the TDGAR learning system. The action network can be a normal neural network or a neural fuzzy network. Using the TD prediction method, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network uses the GA to adapt itself according to the internal reinforcement signal. The key concept of the TDGAR learning scheme is to formulate the internal reinforcement signal as the fitness function for the GA such that the GA can evaluate the candidate solutions (chromosomes) regularly, even during periods without external feedback from the environment. This enables the GA to proceed to new generations regularly without waiting for the arrival of the external reinforcement signal. This can usually accelerate the GA learning since a reinforcement signal may only be available at a time long after a sequence of actions has occurred in the reinforcement learning problem. The proposed TDGAR learning system has been used to control an active magnetic bearing (AMB) system in practice. A systematic design procedure is developed to achieve successful integration of all the subsystems including magnetic suspension, mechanical structure, and controller training. The results show that the TDGAR learning scheme can successfully find a neural controller or a neural fuzzy controller for a self-designed magnetic bearing system.
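A loosely analogous sketch, not the authors' implementation: a tiny action network whose weights are evolved by a genetic algorithm using a critic's value prediction as the fitness signal, in the spirit of formulating the internal reinforcement as the GA fitness. The critic here is a fixed random network standing in for a TD-trained one, there is no environment loop, and all dimensions and rates are arbitrary.

```python
import numpy as np

# Illustrative TD+GA-style sketch: evolve action-network weights with the
# critic's predicted value ("internal reinforcement") as the fitness function.

rng = np.random.default_rng(0)
STATE_DIM, HIDDEN, POP = 4, 8, 20

def unpack(theta):
    """Split a flat weight vector into the two layers of the action network."""
    w1 = theta[:STATE_DIM * HIDDEN].reshape(STATE_DIM, HIDDEN)
    w2 = theta[STATE_DIM * HIDDEN:].reshape(HIDDEN, 1)
    return w1, w2

def action(theta, s):
    w1, w2 = unpack(theta)
    return np.tanh(np.tanh(s @ w1) @ w2)          # scalar action in [-1, 1]

critic_w = rng.normal(size=(STATE_DIM + 1, 1))    # stand-in for a TD-trained critic
def internal_reinforcement(s, a):
    """Critic's predicted value of (state, action): the GA fitness."""
    return float(np.tanh(np.concatenate([s, a]) @ critic_w))

n_params = STATE_DIM * HIDDEN + HIDDEN
pop = rng.normal(scale=0.5, size=(POP, n_params))
state = rng.normal(size=STATE_DIM)

for _ in range(50):
    fitness = np.array([internal_reinforcement(state, action(p, state)) for p in pop])
    parents = pop[np.argsort(fitness)][-POP // 2:]                    # keep the better half
    children = parents + rng.normal(scale=0.1, size=parents.shape)    # mutate
    pop = np.vstack([parents, children])
print("best predicted internal reinforcement:", fitness.max())
```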
A symbolic/subsymbolic interface protocol for cognitive modeling
Simen, Patrick; Polk, Thad
2009-01-01
Researchers studying complex cognition have grown increasingly interested in mapping symbolic cognitive architectures onto subsymbolic brain models. Such a mapping seems essential for understanding cognition under all but the most extreme viewpoints (namely, that cognition consists exclusively of digitally implemented rules; or instead, involves no rules whatsoever). Making this mapping reduces to specifying an interface between symbolic and subsymbolic descriptions of brain activity. To that end, we propose parameterization techniques for building cognitive models as programmable, structured, recurrent neural networks. Feedback strength in these models determines whether their components implement classically subsymbolic neural network functions (e.g., pattern recognition), or instead, logical rules and digital memory. These techniques support the implementation of limited production systems. Though inherently sequential and symbolic, these neural production systems can exploit principles of parallel, analog processing from decision-making models in psychology and neuroscience to explain the effects of brain damage on problem solving behavior. PMID:20711520
Yoo, Sung Jin; Park, Jin Bae; Choi, Yoon Ho
2008-10-01
In this paper, we propose a new robust output feedback control approach for flexible-joint electrically driven (FJED) robots via the observer dynamic surface design technique. The proposed method only requires position measurements of the FJED robots. To estimate the link and actuator velocity information of the FJED robots with model uncertainties, we develop an adaptive observer using self-recurrent wavelet neural networks (SRWNNs). The SRWNNs are used to approximate model uncertainties in both robot (link) dynamics and actuator dynamics, and all their weights are trained online. Based on the designed observer, the link position tracking controller using the estimated states is induced from the dynamic surface design procedure. Therefore, the proposed controller can be designed more simply than the observer backstepping controller. From the Lyapunov stability analysis, it is shown that all signals in a closed-loop adaptive system are uniformly ultimately bounded. Finally, the simulation results on a three-link FJED robot are presented to validate the good position tracking performance and robustness of the proposed control system against payload uncertainties and external disturbances.
Dynamical Motor Control Learned with Deep Deterministic Policy Gradient.
Shi, Haibo; Sun, Yaoru; Li, Jie
2018-01-01
Conventional models of motor control exploit the spatial representation of the controlled system to generate control commands. Typically, the control command is gained with the feedback state of a specific instant in time, which behaves like an optimal regulator or spatial filter to the feedback state. Yet, recent neuroscience studies found that the motor network may constitute an autonomous dynamical system and the temporal patterns of the control command can be contained in the dynamics of the motor network, that is, the dynamical system hypothesis (DSH). Inspired by these findings, here we propose a computational model that incorporates this neural mechanism, in which the control command could be unfolded from a dynamical controller whose initial state is specified with the task parameters. The model is trained in a trial-and-error manner in the framework of deep deterministic policy gradient (DDPG). The experimental results show that the dynamical controller successfully learns the control policy for arm reaching movements, while the analysis of the internal activities of the dynamical controller provides the computational evidence to the DSH of the neural coding in motor cortices. PMID:29666634
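A hedged sketch of the dynamical-system-hypothesis idea summarized above, not the trained DDPG model itself: an autonomous recurrent controller whose initial hidden state is set by the task parameters and whose free-running dynamics unfold the time-varying motor command. All weights below are random placeholders rather than learned values.

```python
import numpy as np

# Autonomous dynamical controller: the task parameters only set the initial
# state; the command sequence then unfolds from the internal dynamics, with no
# per-step sensory feedback.

rng = np.random.default_rng(1)
HIDDEN, N_CMD, T_STEPS = 32, 2, 100

W_rec = rng.normal(scale=1.0 / np.sqrt(HIDDEN), size=(HIDDEN, HIDDEN))
W_out = rng.normal(scale=0.1, size=(HIDDEN, N_CMD))
W_task = rng.normal(scale=1.0, size=(2, HIDDEN))   # maps task params -> initial state

def unfold_commands(task_params):
    """Roll out the autonomous controller for T_STEPS steps."""
    h = np.tanh(task_params @ W_task)              # task sets the initial condition
    commands = []
    for _ in range(T_STEPS):
        h = np.tanh(h @ W_rec)                     # internal dynamics only
        commands.append(h @ W_out)                 # read out the motor command
    return np.array(commands)

target_direction = np.array([np.cos(0.3), np.sin(0.3)])   # hypothetical reach target
u = unfold_commands(target_direction)
print(u.shape)   # (100, 2): a full temporal command pattern from one initial state
```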
Coding the presence of visual objects in a recurrent neural network of visual cortex.
Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard
2007-01-01
Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedmann, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.
Emergence of Slow Collective Oscillations in Neural Networks with Spike-Timing Dependent Plasticity
NASA Astrophysics Data System (ADS)
Mikkelsen, Kaare; Imparato, Alberto; Torcini, Alessandro
2013-05-01
The collective dynamics of excitatory pulse coupled neurons with spike-timing dependent plasticity is studied. The introduction of spike-timing dependent plasticity induces persistent irregular oscillations between strongly and weakly synchronized states, reminiscent of brain activity during slow-wave sleep. We explain the oscillations by a mechanism, the Sisyphus Effect, caused by a continuous feedback between the synaptic adjustments and the coherence in the neural firing. Due to this effect, the synaptic weights have oscillating equilibrium values, and this prevents the system from relaxing into a stationary macroscopic state.
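For concreteness, a minimal pair-based STDP rule of the kind that drives the synaptic adjustments described above; the amplitudes and time constants are illustrative, not those of the study.

```python
import numpy as np

# Pair-based spike-timing dependent plasticity: potentiation when the
# presynaptic spike precedes the postsynaptic spike, depression otherwise,
# with exponential time windows.

A_PLUS, A_MINUS = 0.01, 0.012      # learning amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants (ms)

def stdp_dw(dt_ms):
    """Weight change for a pre/post spike pair with dt = t_post - t_pre (ms)."""
    if dt_ms > 0:                                   # pre before post -> potentiate
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)
    else:                                           # post before pre -> depress
        return -A_MINUS * np.exp(dt_ms / TAU_MINUS)

w = 0.5
for dt in [5.0, -3.0, 12.0, -30.0]:                 # a few example spike pairings
    w = np.clip(w + stdp_dw(dt), 0.0, 1.0)          # hard bounds on the weight
print(f"weight after pairings: {w:.3f}")
```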
NASA Astrophysics Data System (ADS)
Dumedah, Gift; Walker, Jeffrey P.; Chik, Li
2014-07-01
Soil moisture information is critically important for water management operations including flood forecasting, drought monitoring, and groundwater recharge estimation. While an accurate and continuous record of soil moisture is required for these applications, the available soil moisture data, in practice, is typically fraught with missing values. There is a wide range of methods available for infilling hydrologic variables, but a thorough inter-comparison between statistical methods and artificial neural networks has not been made. This study examines 5 statistical methods including monthly averages, weighted Pearson correlation coefficient, a method based on temporal stability of soil moisture, and a weighted merging of the three methods, together with a method based on the concept of rough sets. Additionally, 9 artificial neural networks are examined, broadly categorized into feedforward, dynamic, and radial basis networks. These 14 infilling methods were used to estimate missing soil moisture records and subsequently validated against known values for 13 soil moisture monitoring stations for three different soil layer depths in the Yanco region in southeast Australia. The evaluation results show that the top three highest performing methods are the nonlinear autoregressive neural network, rough sets method, and monthly replacement. A high estimation accuracy (root mean square error (RMSE) of about 0.03 m/m) was found in the nonlinear autoregressive network, due to its regression-based dynamic structure, which allows feedback connections through discrete-time estimation. An equally high accuracy (0.05 m/m RMSE) in the rough sets procedure illustrates the important role of temporal persistence of soil moisture, with the capability to account for different soil moisture conditions.
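A rough sketch of gap infilling with a nonlinear autoregressive estimator, using synthetic data instead of the Yanco records and a small MLP standing in for the study's autoregressive neural network; the lag length and gap location are arbitrary choices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Infill a gap in a soil moisture series by predicting each missing value from
# a window of the previous values (recursive nonlinear autoregression).

rng = np.random.default_rng(2)
t = np.arange(2000)
sm = 0.25 + 0.05 * np.sin(2 * np.pi * t / 365) + 0.01 * rng.normal(size=t.size)

LAGS = 7
X = np.column_stack([sm[i:i - LAGS] for i in range(LAGS)])   # past LAGS values
y = sm[LAGS:]                                                # value to predict

gap = slice(1500, 1530)                                      # pretend these are missing
train_mask = np.ones(y.size, dtype=bool)
train_mask[gap.start - LAGS:gap.stop - LAGS] = False

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[train_mask], y[train_mask])

filled = sm.copy()
for i in range(gap.start, gap.stop):                         # recursive infilling
    filled[i] = model.predict(filled[i - LAGS:i][None, :])[0]

rmse = np.sqrt(np.mean((filled[gap] - sm[gap]) ** 2))
print(f"RMSE over the synthetic gap: {rmse:.4f}")
```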
A recurrent self-organizing neural fuzzy inference network.
Juang, C F; Lin, C T
1999-01-01
A recurrent self-organizing neural fuzzy inference network (RSONFIN) is proposed in this paper. The RSONFIN is inherently a recurrent multilayered connectionist network for realizing the basic elements and functions of dynamic fuzzy inference, and may be considered to be constructed from a series of dynamic fuzzy rules. The temporal relations embedded in the network are built by adding some feedback connections representing the memory elements to a feedforward neural fuzzy network. Each weight as well as node in the RSONFIN has its own meaning and represents a special element in a fuzzy rule. There are no hidden nodes (i.e., no membership functions and fuzzy rules) initially in the RSONFIN. They are created on-line via concurrent structure identification (the construction of dynamic fuzzy if-then rules) and parameter identification (the tuning of the free parameters of membership functions). The structure learning together with the parameter learning forms a fast learning algorithm for building a small, yet powerful, dynamic neural fuzzy network. Two major characteristics of the RSONFIN can thus be seen: 1) the recurrent property of the RSONFIN makes it suitable for dealing with temporal problems and 2) no predetermination, like the number of hidden nodes, must be given, since the RSONFIN can find its optimal structure and parameters automatically and quickly. Moreover, to reduce the number of fuzzy rules generated, a flexible input partition method, the aligned clustering-based algorithm, is proposed. Various simulations on temporal problems are done and performance comparisons with some existing recurrent networks are also made. Efficiency of the RSONFIN is verified from these results.
Hellyer, Peter John; Clopath, Claudia; Kehagia, Angie A; Turkheimer, Federico E; Leech, Robert
2017-08-01
In recent years, there have been many computational simulations of spontaneous neural dynamics. Here, we describe a simple model of spontaneous neural dynamics that controls an agent moving in a simple virtual environment. These dynamics generate interesting brain-environment feedback interactions that rapidly destabilize neural and behavioral dynamics demonstrating the need for homeostatic mechanisms. We investigate roles for homeostatic plasticity both locally (local inhibition adjusting to balance excitatory input) as well as more globally (regional "task negative" activity that compensates for "task positive", sensory input in another region) balancing neural activity and leading to more stable behavior (trajectories through the environment). Our results suggest complementary functional roles for both local and macroscale mechanisms in maintaining neural and behavioral dynamics and a novel functional role for macroscopic "task-negative" patterns of activity (e.g., the default mode network).
Fault-tolerant nonlinear adaptive flight control using sliding mode online learning.
Krüger, Thomas; Schnetter, Philipp; Placzek, Robin; Vörsmann, Peter
2012-08-01
An expanded nonlinear model inversion flight control strategy using sliding mode online learning for neural networks is presented. The proposed control strategy is implemented for a small unmanned aircraft system (UAS). This class of aircraft is very susceptible to nonlinear effects such as atmospheric turbulence, model uncertainties and, of course, system failures. These systems therefore make a sensible testbed for evaluating fault-tolerant, adaptive flight control strategies. Within this work the concept of feedback linearization is combined with feedforward neural networks to compensate for inversion errors and other nonlinear effects. Backpropagation-based adaptation laws for the network weights are used for online training. Within these adaptation laws the standard gradient descent backpropagation algorithm is augmented with the concept of sliding mode control (SMC). Implemented as a learning algorithm, this nonlinear control strategy treats the neural network as a controlled system and allows a stable, dynamic calculation of the learning rates. While considering the system's stability, this robust online learning method therefore offers a higher speed of convergence, especially in the presence of external disturbances. The SMC-based flight controller is tested and compared with the standard gradient descent backpropagation algorithm in the presence of system failures. Copyright © 2012 Elsevier Ltd. All rights reserved.
Ebner, Marc; Hameroff, Stuart
2011-01-01
Cognitive brain functions, for example, sensory perception, motor control and learning, are understood as computation by axonal-dendritic chemical synapses in networks of integrate-and-fire neurons. Cognitive brain functions may occur either consciously or nonconsciously (on "autopilot"). Conscious cognition is marked by gamma synchrony EEG, mediated largely by dendritic-dendritic gap junctions, sideways connections in input/integration layers. Gap-junction-connected neurons define a sub-network within a larger neural network. A theoretical model (the "conscious pilot") suggests that as gap junctions open and close, a gamma-synchronized subnetwork, or zone moves through the brain as an executive agent, converting nonconscious "auto-pilot" cognition to consciousness, and enhancing computation by coherent processing and collective integration. In this study we implemented sideways "gap junctions" in a single-layer artificial neural network to perform figure/ground separation. The set of neurons connected through gap junctions form a reconfigurable resistive grid or sub-network zone. In the model, outgoing spikes are temporally integrated and spatially averaged using the fixed resistive grid set up by neurons of similar function which are connected through gap-junctions. This spatial average, essentially a feedback signal from the neuron's output, determines whether particular gap junctions between neurons will open or close. Neurons connected through open gap junctions synchronize their output spikes. We have tested our gap-junction-defined sub-network in a one-layer neural network on artificial retinal inputs using real-world images. Our system is able to perform figure/ground separation where the laterally connected sub-network of neurons represents a perceived object. Even though we only show results for visual stimuli, our approach should generalize to other modalities. The system demonstrates a moving sub-network zone of synchrony, within which the contents of perception are represented and contained. This mobile zone can be viewed as a model of the neural correlate of consciousness in the brain. PMID:22046178
NASA Astrophysics Data System (ADS)
Li, Chengcheng; Li, Yuefeng; Wang, Guanglin
2017-07-01
The work presented in this paper seeks to address the tracking problem for uncertain continuous nonlinear systems with external disturbances. The objective is to obtain a model that uses a reference-based output feedback tracking control law. The control scheme is based on neural networks and a linear difference inclusion (LDI) model, and a PDC structure and H∞ performance criterion are used to attenuate external disturbances. The stability of the whole closed-loop model is investigated using the well-known quadratic Lyapunov function. The key principles of the proposed approach are as follows: neural networks are first used to approximate nonlinearities, to enable a nonlinear system to then be represented as a linearised LDI model. An LMI (linear matrix inequality) formula is obtained for uncertain and disturbed linear systems. This formula enables a solution to be obtained through an interior point optimisation method for some nonlinear output tracking control problems. Finally, simulations and comparisons are provided on two practical examples to illustrate the validity and effectiveness of the proposed method.
Ding, Xiaoshuai; Cao, Jinde; Alsaedi, Ahmed; Alsaadi, Fuad E; Hayat, Tasawar
2017-06-01
This paper is concerned with fixed-time synchronization for a class of complex-valued neural networks in the presence of discontinuous activation functions and parameter uncertainties. Fixed-time synchronization not only requires that the considered master-slave system achieves synchronization within a finite time segment, but also that there is a uniform upper bound on such time intervals for all initial synchronization errors. To accomplish fixed-time synchronization, a novel feedback control procedure is designed for the slave neural networks. By means of Filippov discontinuity theory and Lyapunov stability theory, sufficient conditions are established for the selection of control parameters that guarantee synchronization within a fixed time, and an upper bound on the settling time is obtained which can be tuned to predefined values independently of the initial conditions. Additionally, criteria for a modified controller guaranteeing fixed-time anti-synchronization are derived for the same system. An example is included to illustrate the proposed methodologies. Copyright © 2017 Elsevier Ltd. All rights reserved.
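As illustrative background (a standard scalar prototype, not the specific controller derived in the paper), fixed-time convergence is commonly obtained from feedback terms containing both a sub-linear and a super-linear power of the synchronization error \(e\):

\[
\dot e \;=\; -k_1\,\mathrm{sig}^{\alpha}(e) \;-\; k_2\,\mathrm{sig}^{\beta}(e), \qquad 0<\alpha<1<\beta,\;\; k_1,k_2>0,
\]

with \(\mathrm{sig}^{p}(e)=|e|^{p}\,\mathrm{sign}(e)\). The settling time then admits the initial-condition-independent bound

\[
T \;\le\; \frac{1}{k_1\,(1-\alpha)} \;+\; \frac{1}{k_2\,(\beta-1)},
\]

which is what allows the convergence time to be pre-assigned by tuning the gains, independently of the initial synchronization error.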
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)
2001-01-01
A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system, based on a neural network model of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model, where the correct sensitivities can be evaluated analytically.
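A hedged sketch of the general recipe, with a numerically integrated Lorenz trajectory standing in for the observations: fit a neural one-step-ahead model and estimate instantaneous multivariate sensitivities from its Jacobian, here by finite differences. Network size and step sizes are arbitrary choices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

def lorenz_step(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One explicit Euler step of the Lorenz system."""
    dx = np.array([s * (x[1] - x[0]),
                   x[0] * (r - x[2]) - x[1],
                   x[0] * x[1] - b * x[2]])
    return x + dt * dx

# Generate a trajectory and train a one-step-ahead neural model x_{t+1} = f(x_t).
X = [np.array([1.0, 1.0, 1.0])]
for _ in range(5000):
    X.append(lorenz_step(X[-1]))
X = np.array(X)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
model.fit(X[:-1], X[1:])

def sensitivities(x, eps=1e-3):
    """Finite-difference Jacobian of the learned map at state x."""
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = eps
        J[:, j] = (model.predict((x + e)[None]) - model.predict((x - e)[None]))[0] / (2 * eps)
    return J

print(np.round(sensitivities(X[100]), 3))   # instantaneous pairwise sensitivities
```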
Decentralized Adaptive Neural Output-Feedback DSC for Switched Large-Scale Nonlinear Systems.
Lijun Long; Jun Zhao
2017-04-01
In this paper, for a class of switched large-scale uncertain nonlinear systems with unknown control coefficients and unmeasurable states, a switched-dynamic-surface-based decentralized adaptive neural output-feedback control approach is developed. The proposed approach extends the classical dynamic surface control (DSC) technique from the nonswitched case to the switched case by designing switched first-order filters, which overcomes the problem of multiple "explosion of complexity." Also, a dual common coordinate transformation of all subsystems is exploited to avoid the individual coordinate transformations for subsystems that are required when applying the backstepping recursive design scheme. Nussbaum-type functions are utilized to handle the unknown control coefficients, and a switched neural network observer is constructed to estimate the unmeasurable states. Combining the average dwell time method with backstepping and the DSC technique, decentralized adaptive neural controllers for the subsystems are explicitly designed. It is proved that the approach guarantees semiglobal uniform ultimate boundedness of all the signals in the closed-loop system under a class of switching signals with average dwell time, and that the tracking errors converge to a small neighborhood of the origin. A two-inverted-pendulums system is provided to demonstrate the effectiveness of the proposed method.
Soft tissue deformation modelling through neural dynamics-based reaction-diffusion mechanics.
Zhang, Jinao; Zhong, Yongmin; Gu, Chengfan
2018-05-30
Soft tissue deformation modelling forms the basis of development of surgical simulation, surgical planning and robotic-assisted minimally invasive surgery. This paper presents a new methodology for modelling of soft tissue deformation based on reaction-diffusion mechanics via neural dynamics. The potential energy stored in soft tissues due to a mechanical load to deform tissues away from their rest state is treated as the equivalent transmembrane potential energy, and it is distributed in the tissue masses in the manner of reaction-diffusion propagation of nonlinear electrical waves. The reaction-diffusion propagation of mechanical potential energy and nonrigid mechanics of motion are combined to model soft tissue deformation and its dynamics, both of which are further formulated as the dynamics of cellular neural networks to achieve real-time computational performance. The proposed methodology is implemented with a haptic device for interactive soft tissue deformation with force feedback. Experimental results demonstrate that the proposed methodology exhibits nonlinear force-displacement relationship for nonlinear soft tissue deformation. Homogeneous, anisotropic and heterogeneous soft tissue material properties can be modelled through the inherent physical properties of mass points. Graphical abstract Soft tissue deformation modelling with haptic feedback via neural dynamics-based reaction-diffusion mechanics.
Potential implementation of reservoir computing models based on magnetic skyrmions
NASA Astrophysics Data System (ADS)
Bourianoff, George; Pinna, Daniele; Sitte, Matthias; Everschor-Sitte, Karin
2018-05-01
Reservoir Computing is a type of recurrent neural network commonly used for recognizing and predicting spatio-temporal events, relying on a complex hierarchy of nested feedback loops to generate a memory functionality. The Reservoir Computing paradigm does not require any knowledge of the reservoir topology or node weights for training purposes and can therefore utilize naturally existing networks formed by a wide variety of physical processes. Most prior efforts to implement reservoir computing have focused on utilizing memristor techniques to implement recurrent neural networks. This paper examines the potential of magnetic skyrmion fabrics and the complex current patterns which form in them as an attractive physical instantiation for Reservoir Computing. We argue that their nonlinear dynamical interplay resulting from anisotropic magnetoresistance and spin-torque effects allows for an effective and energy-efficient nonlinear processing of spatio-temporal events with the aim of event recognition and prediction.
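A generic echo state network sketch illustrating the reservoir computing paradigm referred to above, with a software reservoir in place of a skyrmion fabric; only the linear readout is trained, here by ridge regression, while the recurrent reservoir weights stay fixed. Reservoir size, spectral radius, and the toy task are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
N_RES, WASHOUT, RIDGE = 200, 100, 1e-6

W_in = rng.uniform(-0.5, 0.5, size=(N_RES, 1))
W = rng.normal(size=(N_RES, N_RES))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))          # scale spectral radius below 1

# Task: predict the next sample of a noisy sine wave from the current one.
t = np.arange(3000)
u = np.sin(2 * np.pi * t / 50) + 0.05 * rng.normal(size=t.size)
target = u[1:]

states = np.zeros((len(u) - 1, N_RES))
x = np.zeros(N_RES)
for k in range(len(u) - 1):
    x = np.tanh(W @ x + W_in[:, 0] * u[k])         # fixed nonlinear reservoir update
    states[k] = x

S, y = states[WASHOUT:], target[WASHOUT:]
W_out = np.linalg.solve(S.T @ S + RIDGE * np.eye(N_RES), S.T @ y)  # trained readout

pred = states[-500:] @ W_out
print("readout RMSE:", np.sqrt(np.mean((pred - target[-500:]) ** 2)))
```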
Kuo, Ching-Chang; Ha, Thao; Ebbert, Ashley M.; Tucker, Don M.; Dishion, Thomas J.
2017-01-01
Adolescence is a sensitive period for the development of romantic relationships. During this period the maturation of frontolimbic networks is particularly important for the capacity to regulate emotional experiences. In previous research, both functional magnetic resonance imaging (fMRI) and dense array electroencephalography (dEEG) measures have suggested that responses in limbic regions are enhanced in adolescents experiencing social rejection. In the present research, we examined social acceptance and rejection from romantic partners as they engaged in a Chatroom Interact Task. Dual 128-channel dEEG systems were used to record neural responses to acceptance and rejection from both adolescent romantic partners and unfamiliar peers (N = 75). We employed a two-step temporal principal component analysis (PCA) and spatial independent component analysis (ICA) approach to statistically identify the neural components related to social feedback. Results revealed that the early (288 ms) discrimination between acceptance and rejection reflected by the P3a component was significant for the romantic partner but not the unfamiliar peer. In contrast, the later (364 ms) P3b component discriminated between acceptance and rejection for both partners and peers. The two-step approach (PCA then ICA) was better able than either PCA or ICA alone in separating these components of the brain's electrical activity that reflected both temporal and spatial phases of the brain's processing of social feedback. PMID:28620292
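A rough sketch, assuming synthetic data in place of the dEEG recordings and arbitrary component counts, of a two-step temporal-PCA-then-spatial-ICA decomposition of the kind described above, using scikit-learn's PCA and FastICA.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(5)
n_trials, n_time, n_chan = 200, 300, 64
data = rng.normal(size=(n_trials, n_time, n_chan))          # stand-in ERP data

# Step 1: temporal PCA, treating each trial-by-channel waveform as an observation.
waveforms = data.transpose(0, 2, 1).reshape(-1, n_time)     # (trials*channels, time)
pca = PCA(n_components=10)
temporal_scores = pca.fit_transform(waveforms)              # loadings per waveform

# Step 2: spatial ICA on the trial-averaged channel-by-component scores,
# separating spatially distinct generators within the temporal factors.
scores_by_channel = temporal_scores.reshape(n_trials, n_chan, -1)
ica = FastICA(n_components=5, random_state=0, max_iter=1000)
spatial_components = ica.fit_transform(scores_by_channel.mean(axis=0))  # (channels, 5)

print(pca.explained_variance_ratio_.round(3))
print(spatial_components.shape)
```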
Sengupta, Ranit
2015-01-01
Despite recent progress in our understanding of sensorimotor integration in speech learning, a comprehensive framework to investigate its neural basis at behaviorally relevant timescales is lacking. Structural and functional imaging studies in humans have helped us identify brain networks that support speech but fail to capture the precise spatiotemporal coordination within these networks that takes place during speech learning. Here we use neuronal oscillations to investigate interactions within speech motor networks in a paradigm of speech motor adaptation under altered feedback, with continuous recording of EEG, in which subjects adapted to real-time auditory perturbation of a target vowel sound. As subjects adapted to the task, concurrent changes were observed in theta-gamma phase coherence during speech planning at several distinct scalp regions, consistent with the establishment of a feedforward map. In particular, there was an increase in coherence over the central region and a decrease over the fronto-temporal regions, revealing a redistribution of coherence over an interacting network of brain regions that could be a general feature of error-based motor learning. Our findings have implications for understanding the neural basis of speech motor learning and could elucidate how transient breakdown of neuronal communication within speech networks relates to speech disorders. PMID:25632078
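One common way to quantify theta-gamma coupling, sketched here on a synthetic signal with arbitrary band edges rather than the study's analysis pipeline, is to band-pass filter, extract Hilbert phases, and compute a phase-locking value between the theta phase and the phase of the gamma amplitude envelope.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 500.0
rng = np.random.default_rng(6)
t = np.arange(0, 20, 1 / FS)
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 40 * t)      # gamma amplitude rides on theta
eeg = theta + 0.5 * gamma + 0.2 * rng.normal(size=t.size)

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

theta_phase = np.angle(hilbert(bandpass(eeg, 4, 8, FS)))
gamma_env = np.abs(hilbert(bandpass(eeg, 30, 50, FS)))
gamma_env_phase = np.angle(hilbert(bandpass(gamma_env, 4, 8, FS)))

# Phase-locking value between theta phase and gamma-envelope phase.
plv = np.abs(np.mean(np.exp(1j * (theta_phase - gamma_env_phase))))
print(f"theta-gamma phase coherence (PLV): {plv:.2f}")
```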
An artificial neural network model for periodic trajectory generation
NASA Astrophysics Data System (ADS)
Shankar, S.; Gander, R. E.; Wood, H. C.
A neural network model based on biological systems was developed for potential robotic application. The model consists of three interconnected layers of artificial neurons or units: an input layer subdivided into state and plan units, an output layer, and a hidden layer between the two outer layers which serves to implement nonlinear mappings between the input and output activation vectors. Weighted connections are created between the three layers, and learning is effected by modifying these weights. Feedback connections between the output and the input state serve to make the network operate as a finite state machine. The activation vector of the plan units of the input layer emulates the supraspinal commands in biological central pattern generators in that different plan activation vectors correspond to different sequences or trajectories being recalled, even with different frequencies. Three trajectories were chosen for implementation, and learning was accomplished in 10,000 trials. The fault tolerant behavior, adaptiveness, and phase maintenance of the implemented network are discussed.
Adaptive Neural Network Control of a Flapping Wing Micro Aerial Vehicle With Disturbance Observer.
He, Wei; Yan, Zichen; Sun, Changyin; Chen, Yunan
2017-10-01
This paper addresses the attitude and position control of a flapping-wing micro aerial vehicle (FWMAV). Neural network controllers with full-state and output feedback are designed to deal with uncertainties in this complex nonlinear FWMAV dynamic system and to enhance system robustness. Meanwhile, we design disturbance observers which are incorporated into the FWMAV system via feedforward loops to counteract the adverse effects of disturbances. A Lyapunov function is then proposed to prove the stability of the closed-loop system and the semi-global uniform ultimate boundedness of all state variables. Finally, a series of simulation results indicates that the proposed controllers can track desired trajectories well with appropriately selected control gains, and that the designed controllers have potential applications in FWMAVs.
Fixed-time synchronization of memristor-based BAM neural networks with time-varying discrete delay.
Chen, Chuan; Li, Lixiang; Peng, Haipeng; Yang, Yixian
2017-12-01
This paper is devoted to studying the fixed-time synchronization of memristor-based BAM neural networks (MBAMNNs) with discrete delay. Fixed-time synchronization means that synchronization can be achieved in a fixed time for any initial values of the considered systems. In the light of the double-layer structure of MBAMNNs, we design two similar feedback controllers. Based on Lyapunov stability theories, several criteria are established to guarantee that the drive and response MBAMNNs can realize synchronization in a fixed time. In particular, by changing the parameters of controllers, this fixed time can be adjusted to some desired value in advance, irrespective of the initial values of MBAMNNs. Numerical simulations are included to validate the derived results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Li, Zhijun; Su, Chun-Yi
2013-09-01
In this paper, adaptive neural network control is investigated for single-master-multiple-slaves teleoperation in consideration of time delays and input dead-zone uncertainties for multiple mobile manipulators carrying a common object in a cooperative manner. Firstly, concise dynamics of teleoperation systems consisting of a single master robot, multiple coordinated slave robots, and the object are developed in the task space. To handle asymmetric time-varying delays in communication channels and unknown asymmetric input dead zones, the nonlinear dynamics of the teleoperation system are transformed into two subsystems through feedback linearization: local master or slave dynamics including the unknown input dead zones and delayed dynamics for the purpose of synchronization. Then, a model reference neural network control strategy based on linear matrix inequalities (LMI) and adaptive techniques is proposed. The developed control approach ensures that the defined tracking errors converge to zero whereas the coordination internal force errors remain bounded and can be made arbitrarily small. Throughout this paper, stability analysis is performed via explicit Lyapunov techniques under specific LMI conditions. The proposed adaptive neural network control scheme is robust against motion disturbances, parametric uncertainties, time-varying delays, and input dead zones, which is validated by simulation studies.
Brain-wide neuronal dynamics during motor adaptation in zebrafish
Ahrens, Misha B; Li, Jennifer M; Orger, Michael B; Robson, Drew N; Schier, Alexander F; Engert, Florian; Portugues, Ruben
2013-01-01
A fundamental question in neuroscience is how entire neural circuits generate behavior and adapt it to changes in sensory feedback. Here we use two-photon calcium imaging to record activity of large populations of neurons at the cellular level throughout the brain of larval zebrafish expressing a genetically-encoded calcium sensor, while the paralyzed animals interact fictively with a virtual environment and rapidly adapt their motor output to changes in visual feedback. We decompose the network dynamics involved in adaptive locomotion into four types of neural response properties, and provide anatomical maps of the corresponding sites. A subset of these signals occurred during behavioral adjustments and are candidates for the functional elements that drive motor learning. Lesions to the inferior olive indicate a specific functional role for olivocerebellar circuitry in adaptive locomotion. This study enables the analysis of brain-wide dynamics at single-cell resolution during behavior. PMID:22622571
How linear response shaped models of neural circuits and the quest for alternatives.
Herfurth, Tim; Tchumatchenko, Tatjana
2017-10-01
In the past decades, many mathematical approaches to solve complex nonlinear systems in physics have been successfully applied to neuroscience. One of these tools is the concept of linear response functions. However, phenomena observed in the brain emerge from fundamentally nonlinear interactions and feedback loops rather than from a composition of linear filters. Here, we review the successes achieved by applying the linear response formalism to topics, such as rhythm generation and synchrony and by incorporating it into models that combine linear and nonlinear transformations. We also discuss the challenges encountered in the linear response applications and argue that new theoretical concepts are needed to tackle feedback loops and non-equilibrium dynamics which are experimentally observed in neural networks but are outside of the validity regime of the linear response formalism. Copyright © 2017 Elsevier Ltd. All rights reserved.
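For reference, the linear response formalism discussed above describes the modulation of a firing rate \(r(t)\) around its stationary value \(r_0\) as a convolution of the input \(s(t)\) with a fixed response kernel \(K\):

\[
r(t) \;=\; r_0 \;+\; \int_{0}^{\infty} K(\tau)\, s(t-\tau)\, d\tau, \qquad \hat r(\omega) \;=\; \hat K(\omega)\,\hat s(\omega),
\]

where the second form holds in the frequency domain for the fluctuating part. The review's argument is that feedback loops and non-equilibrium dynamics take neural circuits outside the regime in which such a fixed kernel exists.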
Dalgleish, Tim; Walsh, Nicholas D.; Mobbs, Dean; Schweizer, Susanne; van Harmelen, Anne-Laura; Dunn, Barnaby; Dunn, Valerie; Goodyer, Ian; Stretton, Jason
2017-01-01
Social interaction inherently involves the subjective evaluation of cues salient to social inclusion and exclusion. Testifying to the importance of such social cues, parts of the neural system dedicated to the detection of physical pain, the dorsal anterior cingulate cortex (dACC) and anterior insula (AI), have been shown to be equally sensitive to the detection of social pain experienced after social exclusion. However, recent work suggests that this dACC-AI matrix may index any socially pertinent information. We directly tested the hypothesis that the dACC-AI would respond to cues of both inclusion and exclusion, using a novel social feedback fMRI paradigm in a population-derived sample of adolescents. We show that the dACC and left AI are commonly activated by feedback cues of inclusion and exclusion. Our findings suggest that theoretical accounts of the dACC-AI network as a neural alarm system restricted within the social domain to the processing of signals of exclusion require significant revision. PMID:28169323
Optical alignment procedure utilizing neural networks combined with Shack-Hartmann wavefront sensor
NASA Astrophysics Data System (ADS)
Adil, Fatime Zehra; Konukseven, Erhan İlhan; Balkan, Tuna; Adil, Ömer Faruk
2017-05-01
In the design of pilot helmets with night vision capability, a transparent visor is used so as not to limit or block the pilot's sight. The image reflected from the coated part of the visor must coincide with the direct view seen through the nonreflecting regions of the visor. This makes the alignment of the visor halves critical. In essence, this is an alignment problem of two optical parts that are assembled together during the manufacturing process. A Shack-Hartmann wavefront sensor is commonly used for the determination of the misalignments through wavefront measurements, which are quantified in terms of the Zernike polynomials. Although the Zernike polynomials provide very useful feedback about the misalignments, the corrective actions are basically ad hoc. This stems from the fact that there exists no easy inverse relation between the misalignment measurements and the physical causes of the misalignments. This study aims to construct this inverse relation by making use of the expressive power of neural networks for such complex relations. For this purpose, a neural network is designed and trained in MATLAB® to learn which types of misalignment result in which wavefront measurements, quantitatively given by the Zernike polynomials. In this way, manual and iterative alignment processes relying on trial and error are replaced by the trained guesses of a neural network, so the alignment process is reduced to applying the counter-actions indicated by the inferred misalignment causes. Such training requires data containing misalignment and measurement sets in fine detail, which are hard to obtain manually on a physical setup. For that reason, the optical setup is completely modeled in Zemax® software, and Zernike polynomials are generated for misalignments applied in small steps. The performance of the neural network was evaluated on the actual physical setup and found to be promising.
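A hedged sketch of the inverse mapping described above, with a random analytic stand-in for the Zemax-generated training set and arbitrary coefficient counts: a network is trained to map measured Zernike coefficients back to the misalignment parameters that produced them.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
N_SAMPLES, N_MISALIGN, N_ZERNIKE = 5000, 4, 15   # e.g. tilt x/y, decenter x/y (hypothetical)

# Stand-in "optical model": a linear-plus-quadratic map from misalignments to
# Zernike coefficients, replacing the ray-traced data set.
A = rng.normal(size=(N_MISALIGN, N_ZERNIKE))
B = rng.normal(scale=0.1, size=(N_MISALIGN, N_ZERNIKE))
misalign = rng.uniform(-1, 1, size=(N_SAMPLES, N_MISALIGN))      # applied in small steps
zernike = misalign @ A + (misalign ** 2) @ B                     # simulated measurements

inverse_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
inverse_net.fit(zernike[:4000], misalign[:4000])                 # Zernike -> misalignment

pred = inverse_net.predict(zernike[4000:])
rmse = np.sqrt(np.mean((pred - misalign[4000:]) ** 2))
print(f"misalignment reconstruction RMSE: {rmse:.3f}")
# The predicted misalignments indicate which counter-actions to apply on the setup.
```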
Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics.
Heydari, Ali; Balakrishnan, Sivasubramanya N
2013-01-01
To synthesize fixed-final-time control-constrained optimal controllers for discrete-time nonlinear control-affine systems, a single neural network (NN)-based controller called the Finite-horizon Single Network Adaptive Critic is developed in this paper. Inputs to the NN are the current system states and the time-to-go, and the network outputs are the costates that are used to compute optimal feedback control. Control constraints are handled through a nonquadratic cost function. Convergence proofs of: 1) the reinforcement learning-based training method to the optimal solution; 2) the training error; and 3) the network weights are provided. The resulting controller is shown to solve the associated time-varying Hamilton-Jacobi-Bellman equation and provide the fixed-final-time optimal solution. Performance of the new synthesis technique is demonstrated through different examples including an attitude control problem wherein a rigid spacecraft performs a finite-time attitude maneuver subject to control bounds. The new formulation has great potential for implementation since it consists of only one NN with single set of weights and it provides comprehensive feedback solutions online, though it is trained offline.
Center for the Study of Rhythmic Processes.
1987-10-20
Keywords: pattern generators, neural network, spinal cord, mathematical modeling, neuromodulators, regeneration, sensory feedback.
Wang, Zhanshan; Liu, Lei; Wu, Yanming; Zhang, Huaguang
2018-06-01
This paper investigates the problem of optimal fault-tolerant control (FTC) for a class of unknown nonlinear discrete-time systems with actuator faults in the framework of adaptive critic design (ACD). A key feature is an adaptive auxiliary signal for the actuator fault, which is designed to offset the effect of the fault. The considered systems are in strict-feedback form and involve unknown nonlinear functions, which results in a causality problem. To solve this problem, the original nonlinear systems are transformed into a novel system by employing diffeomorphism theory. In addition, action neural networks (ANNs) are utilized to approximate a predefined unknown function in the backstepping design procedure. Combining the strategic utility function and the ACD technique, a reinforcement learning algorithm is proposed to set up an optimal FTC, in which critic neural networks (CNNs) provide an approximate structure of the cost function. This not only guarantees the stability of the systems, but also achieves optimal control performance. Finally, two simulation examples are used to show the effectiveness of the proposed optimal FTC strategy.
Mantziaris, Charalampos; Bockemühl, Till; Holmes, Philip; Borgmann, Anke; Daun, Silvia; Büschges, Ansgar
2017-10-01
To efficiently move around, animals need to coordinate their limbs. Proper, context-dependent coupling among the neural networks underlying leg movement is necessary for generating intersegmental coordination. In the slow-walking stick insect, local sensory information is very important for shaping coordination. However, central coupling mechanisms among segmental central pattern generators (CPGs) may also contribute to this. Here, we analyzed the interactions between contralateral networks that drive the depressor trochanteris muscle of the legs in both isolated and interconnected deafferented thoracic ganglia of the stick insect on application of pilocarpine, a muscarinic acetylcholine receptor agonist. Our results show that depressor CPG activity is only weakly coupled between all segments. Intrasegmental phase relationships differ between the three isolated ganglia, and they are modified and stabilized when ganglia are interconnected. However, the coordination patterns that emerge do not resemble those observed during walking. Our findings are in line with recent studies and highlight the influence of sensory input on coordination in slowly walking insects. Finally, as a direct interaction between depressor CPG networks and contralateral motoneurons could not be observed, we hypothesize that coupling is based on interactions at the level of CPG interneurons. NEW & NOTEWORTHY Maintaining functional interleg coordination is vitally important as animals locomote through changing environments. The relative importance of central mechanisms vs. sensory feedback in this process is not well understood. We analyzed coordination among the neural networks generating leg movements in stick insect preparations lacking phasic sensory feedback. Under these conditions, the networks governing different legs were only weakly coupled. In stick insect, central connections alone are thus insufficient to produce the leg coordination observed behaviorally. Copyright © 2017 the American Physiological Society.
Trunk acceleration for neuroprosthetic control of standing: a pilot study.
Nataraj, Raviraj; Audu, Musa L; Kirsch, Robert F; Triolo, Ronald J
2012-02-01
This pilot study investigated the potential of using trunk acceleration feedback control of center of pressure (COP) against postural disturbances with a standing neuroprosthesis following paralysis. Artificial neural networks (ANNs) were trained to use three-dimensional trunk acceleration as input to predict changes in COP for able-bodied subjects undergoing perturbations during bipedal stance. Correlation coefficients between ANN predictions and actual COP ranged from 0.67 to 0.77. An ANN trained across all subject-normalized data was used to drive feedback control of ankle muscle excitation levels for a computer model representing a standing neuroprosthesis user. Feedback control reduced average upper-body loading during perturbation onset and recovery by 42% and peak loading by 29% compared with optimal, constant excitation. PMID:21975251
Cai, Mingbo; Stetson, Chess; Eagleman, David M.
2012-01-01
When observers experience a constant delay between their motor actions and sensory feedback, their perception of the temporal order between actions and sensations adapts (Stetson et al., 2006). We present here a novel neural model that can explain temporal order judgments (TOJs) and their recalibration. Our model employs three ubiquitous features of neural systems: (1) information pooling, (2) opponent processing, and (3) synaptic scaling. Specifically, the model proposes that different populations of neurons encode different delays between motor-sensory events, the outputs of these populations feed into rivaling neural populations (encoding “before” and “after”), and the activity difference between these populations determines the perceptual judgment. As a consequence of synaptic scaling of input weights, motor acts which are consistently followed by delayed sensory feedback will cause the network to recalibrate its point of subjective simultaneity. The structure of our model raises the possibility that recalibration of TOJs is a temporal analog to the motion aftereffect (MAE). In other words, identical neural mechanisms may be used to make perceptual determinations about both space and time. Our model captures behavioral recalibration results for different numbers of adapting trials and different adapting delays. In line with predictions of the model, we additionally demonstrate that temporal recalibration can last through time, in analogy to storage of the MAE. PMID:23130010
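A minimal numerical sketch of the three ingredients named in the abstract (delay-tuned populations, opponent pooling, synaptic scaling), with made-up tuning widths and scaling rates rather than the paper's fitted parameters: repeated exposure to a constant +100 ms feedback delay shifts the model's point of subjective simultaneity toward that delay.

```python
import numpy as np

delays = np.linspace(-200, 200, 41)                 # preferred delays (ms)
sigma = 60.0                                        # tuning width (illustrative)

def population_response(actual_delay):
    """Gaussian tuning of delay-selective populations to the actual delay."""
    return np.exp(-(delays - actual_delay) ** 2 / (2 * sigma ** 2))

w = np.ones_like(delays)                            # input weights, scaled over time

def judge(actual_delay):
    """Opponent readout: 'after' pool minus 'before' pool."""
    r = w * population_response(actual_delay)
    return r[delays > 0].sum() - r[delays < 0].sum()

def pss():
    """Point of subjective simultaneity: delay where the opponent signal is nearest zero."""
    vals = np.array([judge(d) for d in delays])
    return delays[np.argmin(np.abs(vals))]

print("PSS before adaptation:", pss())
for _ in range(200):                                # adapt to a constant +100 ms delay
    r = population_response(100.0)
    w *= 1.0 / (1.0 + 0.001 * r)                    # synaptic scaling of driven inputs
print("PSS after adaptation:  ", pss())
```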
Nataraj, Raviraj; Audu, Musa L; Kirsch, Robert F; Triolo, Ronald J
2010-12-01
Previous investigations of feedback control of standing after spinal cord injury (SCI) using functional neuromuscular stimulation (FNS) have primarily targeted individual joints. This study assesses the potential efficacy of comprehensive (trunk, hips, knees, and ankles) joint feedback control against postural disturbances using a bipedal, 3-D computer model of SCI stance. Proportional-derivative feedback drove an artificial neural network trained to produce muscle excitation patterns consistent with maximal joint stiffness values achievable about neutral stance given typical SCI muscle properties. Feedback gains were optimized to minimize upper extremity (UE) loading required to stabilize against disturbances. Compared to the baseline case of maximum constant muscle excitations used clinically, the controller reduced UE loading by 55% in resisting external force perturbations and by 84% during simulated one-arm functional tasks. Performance was most sensitive to inaccurate measurements of ankle plantar/dorsiflexion position and hip ab/adduction velocity feedback. In conclusion, comprehensive joint feedback demonstrates potential to markedly improve FNS standing function. However, alternative control structures capable of effective performance with fewer sensor-based feedback parameters may better facilitate clinical usage. PMID:20923741
Neural networks supporting switching, hypothesis testing, and rule application.
Liu, Zhiya; Braunlich, Kurt; Wehe, Hillary S; Seger, Carol A
2015-10-01
We identified dynamic changes in recruitment of neural connectivity networks across three phases of a flexible rule learning and set-shifting task similar to the Wisconsin Card Sort Task: switching, rule learning via hypothesis testing, and rule application. During fMRI scanning, subjects viewed pairs of stimuli that differed across four dimensions (letter, color, size, screen location), chose one stimulus, and received feedback. Subjects were informed that the correct choice was determined by a simple unidimensional rule, for example "choose the blue letter". Once each rule had been learned and correctly applied for 4-7 trials, subjects were cued via either negative feedback or visual cues to switch to learning a new rule. Task performance was divided into three phases: Switching (first trial after receiving the switch cue), hypothesis testing (subsequent trials through the last error trial), and rule application (correct responding after the rule was learned). We used both univariate analysis to characterize activity occurring within specific regions of the brain, and a multivariate method, constrained principal component analysis for fMRI (fMRI-CPCA), to investigate how distributed regions coordinate to subserve different processes. As hypothesized, switching was subserved by a limbic network including the ventral striatum, thalamus, and parahippocampal gyrus, in conjunction with cortical salience network regions including the anterior cingulate and frontoinsular cortex. Activity in the ventral striatum was associated with switching regardless of how switching was cued; visually cued shifts were associated with additional visual cortical activity. After switching, as subjects moved into the hypothesis testing phase, a broad fronto-parietal-striatal network (associated with the cognitive control, dorsal attention, and salience networks) increased in activity. This network was sensitive to rule learning speed, with greater extended activity for the slowest learning speed late in the time course of learning. As subjects shifted from hypothesis testing to rule application, activity in this network decreased and activity in the somatomotor and default mode networks increased. Copyright © 2015 Elsevier Ltd. All rights reserved. PMID:26197092
Computational Models and Emergent Properties of Respiratory Neural Networks
Lindsey, Bruce G.; Rybak, Ilya A.; Smith, Jeffrey C.
2012-01-01
Computational models of the neural control system for breathing in mammals provide a theoretical and computational framework bringing together experimental data obtained from different animal preparations under various experimental conditions. Many of these models were developed in parallel and iteratively with experimental studies and provided predictions guiding new experiments. This data-driven modeling approach has advanced our understanding of respiratory network architecture and neural mechanisms underlying generation of the respiratory rhythm and pattern, including their functional reorganization under different physiological conditions. Models reviewed here vary in neurobiological details and computational complexity and span multiple spatiotemporal scales of respiratory control mechanisms. Recent models describe interacting populations of respiratory neurons spatially distributed within the Bötzinger and pre-Bötzinger complexes and rostral ventrolateral medulla that contain core circuits of the respiratory central pattern generator (CPG). Network interactions within these circuits along with intrinsic rhythmogenic properties of neurons form a hierarchy of multiple rhythm generation mechanisms. The functional expression of these mechanisms is controlled by input drives from other brainstem components, including the retrotrapezoid nucleus and pons, which regulate the dynamic behavior of the core circuitry. The emerging view is that the brainstem respiratory network has rhythmogenic capabilities at multiple levels of circuit organization. This allows flexible, state-dependent expression of different neural pattern-generation mechanisms under various physiological conditions, enabling a wide repertoire of respiratory behaviors. Some models consider control of the respiratory CPG by pulmonary feedback and network reconfiguration during defensive behaviors such as cough. Future directions in modeling of the respiratory CPG are considered. PMID:23687564
Neural substrates of visuomotor learning based on improved feedback control and prediction
Grafton, Scott T.; Schmitt, Paul; Horn, John Van; Diedrichsen, Jörn
2008-01-01
Motor skills emerge from learning feedforward commands as well as improvements in feedback control. These two components of learning were investigated in a compensatory visuomotor tracking task on a trial-by-trial basis. Between-trial learning was characterized with a state-space model to provide smoothed estimates of feedforward and feedback learning, separable from random fluctuations in motor performance and error. The resultant parameters were correlated with brain activity using magnetic resonance imaging. Learning related to the generation of a feedforward command correlated with activity in dorsal premotor cortex, inferior parietal lobule, supplementary motor area and cingulate motor area, supporting a role of these areas in retrieving and executing a predictive motor command. Modulation of feedback control was associated with activity in bilateral posterior superior parietal lobule as well as right ventral premotor cortex. Performance error correlated with activity in a widespread cortical and subcortical network including bilateral parietal, premotor and rostral anterior cingulate cortex as well as the cerebellar cortex. Finally, trial-by-trial changes of kinematics, as measured by mean absolute hand acceleration, correlated with activity in motor cortex and anterior cerebellum. The results demonstrate that incremental, learning-dependent changes can be modeled on a trial-by-trial basis and that neural substrates for feedforward control of novel motor programs are localized to secondary motor areas. PMID:18032069
Neural Correlates of Success and Failure Signals During Neurofeedback Learning.
Radua, Joaquim; Stoica, Teodora; Scheinost, Dustin; Pittenger, Christopher; Hampson, Michelle
2018-05-15
Feedback-driven learning, observed across phylogeny and of clear adaptive value, is frequently operationalized in simple operant conditioning paradigms, but it can be much more complex, driven by abstract representations of success and failure. This study investigates the neural processes involved in processing success and failure during feedback learning, which are not well understood. Data analyzed were acquired during a multisession neurofeedback experiment in which ten participants were presented with, and instructed to modulate, the activity of their orbitofrontal cortex with the aim of decreasing their anxiety. We assessed the regional blood-oxygenation-level-dependent response to the individualized neurofeedback signals of success and failure across twelve functional runs acquired in two different magnetic resonance sessions in each of ten individuals. Neurofeedback signals of failure correlated early during learning with deactivation in the precuneus/posterior cingulate and neurofeedback signals of success correlated later during learning with deactivation in the medial prefrontal/anterior cingulate cortex. The intensity of the latter deactivations predicted the efficacy of the neurofeedback intervention in the reduction of anxiety. These findings indicate a role for regulation of the default mode network during feedback learning, and suggest a higher sensitivity to signals of failure during the early feedback learning and to signals of success subsequently. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
Error mapping controller: a closed loop neuroprosthesis controlled by artificial neural networks.
Pedrocchi, Alessandra; Ferrante, Simona; De Momi, Elena; Ferrigno, Giancarlo
2006-10-09
The design of an optimal neuroprosthesis controller and its clinical use present several challenges. First, the physiological system is characterized by high inter-subject variability and by non-stationary behaviour over time, due to conditioning level and fatigue. Second, routine clinical use currently demands experienced operators to carry out long setting procedures. Therefore, feedback controllers that avoid such procedures are required. The error mapping controller (EMC) proposed here uses artificial neural networks (ANNs) both for the design of an inverse model and of a feedback controller. A neuromuscular model is used to validate the performance of the controllers in simulations. The EMC performance is compared to a Proportional Integral Derivative (PID) controller included in an anti-windup scheme (called PIDAW) and to a controller with an ANN as inverse model and a PID in the feedback loop (NEUROPID). In addition, tests of the EMC robustness in response to variations of the plant parameters and to mechanical disturbances are carried out. The EMC shows improvements with respect to the other controllers in tracking accuracy, capability to prolong exercise by managing fatigue, robustness to parameter variations and resistance to mechanical disturbances. Unlike the other controllers, the EMC is capable of balancing between tracking accuracy and mapping of fatigue during the exercise. In this way, it avoids overstressing muscles and allows a considerable prolongation of the movement. The collection of the training sets does not require any particular experimental setting and can be introduced in routine clinical practice.
Error mapping controller: a closed loop neuroprosthesis controlled by artificial neural networks
Pedrocchi, Alessandra; Ferrante, Simona; De Momi, Elena; Ferrigno, Giancarlo
2006-01-01
Background The design of an optimal neuroprosthesis controller and its clinical use present several challenges. First, the physiological system is characterized by high inter-subject variability and by non-stationary behaviour over time, due to conditioning level and fatigue. Second, routine clinical use currently demands experienced operators to carry out long setting procedures. Therefore, feedback controllers that avoid such procedures are required. Methods The error mapping controller (EMC) proposed here uses artificial neural networks (ANNs) both for the design of an inverse model and of a feedback controller. A neuromuscular model is used to validate the performance of the controllers in simulations. The EMC performance is compared to a Proportional Integral Derivative (PID) controller included in an anti-windup scheme (called PIDAW) and to a controller with an ANN as inverse model and a PID in the feedback loop (NEUROPID). In addition, tests of the EMC robustness in response to variations of the plant parameters and to mechanical disturbances are carried out. Results The EMC shows improvements with respect to the other controllers in tracking accuracy, capability to prolong exercise by managing fatigue, robustness to parameter variations and resistance to mechanical disturbances. Conclusion Unlike the other controllers, the EMC is capable of balancing between tracking accuracy and mapping of fatigue during the exercise. In this way, it avoids overstressing muscles and allows a considerable prolongation of the movement. The collection of the training sets does not require any particular experimental setting and can be introduced in routine clinical practice. PMID:17029636
Computed tomography of x-ray images using neural networks
NASA Astrophysics Data System (ADS)
Allred, Lloyd G.; Jones, Martin H.; Sheats, Matthew J.; Davis, Anthony W.
2000-03-01
Traditional CT reconstruction is done using the technique of Filtered Backprojection (FB). While this technique is widely employed in industrial and medical applications, it is not generally understood that FB has a fundamental flaw. The Gibbs phenomenon states that any Fourier reconstruction will produce errors in the vicinity of all discontinuities, and that the error will equal 28 percent of the discontinuity. A number of years back, one of the authors proposed a biological perception model whereby biological neural networks perceive 3D images from stereo vision. The perception model posits an internal hard-wired neural network which emulates the external physical process. A process is repeated whereby erroneous unknown internal values are used to generate an emulated signal, which is compared to externally sensed data, generating an error signal. Feedback from the error signal is then used to update the erroneous internal values. The process is repeated until the error signal no longer decreases. It was soon realized that the same method could be used to obtain CT from x-rays without having to do Fourier transforms. Neural networks have the additional potential for handling non-linearities and missing data. The technique has been applied to some coral images, collected at the Los Alamos high-energy x-ray facility. The initial images show considerable promise, in some instances showing more detail than the FB images obtained from the same data. Although routine production using this new method would require a massively parallel computer, the method shows promise, especially where refined detail is required.
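The "emulate, compare, feed the error back" loop described above can be written generically as an iterative algebraic reconstruction (a Landweber/SIRT-style update); the projection operator, step size, and synthetic data below are illustrative assumptions and are not the authors' neural network implementation.

import numpy as np

def reconstruct(A, measured, n_iter=300, step=None):
    """Iterative error-feedback reconstruction (Landweber/SIRT-style).
    A        : forward projection operator, shape (n_rays, n_pixels)
    measured : measured projection data, shape (n_rays,)
    The current image estimate is used to emulate the measurement; the residual
    (error signal) is fed back to update the estimate until it stops improving."""
    x = np.zeros(A.shape[1])
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
    for _ in range(n_iter):
        residual = measured - A @ x              # emulated vs. sensed data
        x += step * (A.T @ residual)             # error feedback on internal values
        x = np.clip(x, 0.0, None)                # optional non-negativity constraint
    return x

# tiny synthetic example: a random geometry stands in for real projection data
rng = np.random.default_rng(1)
A = rng.random((60, 25))
true_image = rng.random(25)
sinogram = A @ true_image
estimate = reconstruct(A, sinogram)
print("reconstruction error:", round(float(np.linalg.norm(estimate - true_image)), 4))

Replacing this fixed linear update with a learned, possibly nonlinear mapping is what gives the neural approach its claimed ability to cope with non-linearities and missing data.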
Hybrid feedback feedforward: An efficient design of adaptive neural network control.
Pan, Yongping; Liu, Yiqi; Xu, Bin; Yu, Haoyong
2016-04-01
This paper presents an efficient hybrid feedback feedforward (HFF) adaptive approximation-based control (AAC) strategy for a class of uncertain Euler-Lagrange systems. The control structure includes a proportional-derivative (PD) control term in the feedback loop and a radial-basis-function (RBF) neural network (NN) in the feedforward loop, which mimics the human motor learning control mechanism. In the presence of discontinuous friction, a sigmoid-jump-function NN is incorporated to improve control performance. The major difference of the proposed HFF-AAC design from the traditional feedback AAC (FB-AAC) design is that only desired outputs, rather than both tracking errors and desired outputs, are applied as RBF-NN inputs. Yet, such a slight modification leads to several attractive properties of HFF-AAC, including the convenient choice of an approximation domain, the decrease of the number of RBF-NN inputs, and semiglobal practical asymptotic stability dominated by control gains. Compared with previous HFF-AAC approaches, the proposed approach possesses the following two distinctive features: (i) all above attractive properties are achieved by a much simpler control scheme; (ii) the bounds of plant uncertainties are not required to be known. Consequently, the proposed approach guarantees a minimum configuration of the control structure and a minimum requirement of plant knowledge for the AAC design, which leads to a sharp decrease of implementation cost in terms of hardware selection, algorithm realization and system debugging. Simulation results have demonstrated that the proposed HFF-AAC can perform as well as or even better than the traditional FB-AAC under much simpler control synthesis and much lower computational cost. Copyright © 2015 Elsevier Ltd. All rights reserved.
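The structural point of this abstract, that the RBF network in the feedforward path receives only the desired trajectory while a PD term closes the feedback loop, can be sketched on a single-link toy plant. The plant, gains, basis centers, and adaptation law below are simplified assumptions, not the paper's design or its stability proof.

import numpy as np

def rbf_features(qd, centers, width=0.3):
    """Gaussian RBF features evaluated at the desired output only (the HFF idea)."""
    return np.exp(-((qd - centers) ** 2) / (2.0 * width ** 2))

def hff_control(q, dq, qd, dqd, W, centers, kp=25.0, kd=5.0):
    """PD feedback term plus adaptive RBF feedforward term."""
    e, de = qd - q, dqd - dq
    u = kp * e + kd * de + W @ rbf_features(qd, centers)
    return u, e, de

def adapt(W, qd, e, de, centers, gamma=2.0, dt=1e-3):
    """Gradient-like weight adaptation driven by a filtered tracking error."""
    s = de + 5.0 * e                         # filtered error (lambda = 5, assumed)
    return W + gamma * s * rbf_features(qd, centers) * dt

# single-link toy plant: ddq = u - friction(dq), integrated with Euler steps
centers = np.linspace(-1.0, 1.0, 9)
W = np.zeros(9)
q = dq = 0.0
dt = 1e-3
for k in range(5000):
    t = k * dt
    qd, dqd = np.sin(t), np.cos(t)           # desired trajectory
    u, e, de = hff_control(q, dq, qd, dqd, W, centers)
    W = adapt(W, qd, e, de, centers)
    ddq = u - 2.0 * np.sign(dq) - 1.5 * dq   # discontinuous friction (assumed form)
    dq += ddq * dt
    q += dq * dt
print("final tracking error:", round(abs(e), 4))

Because the feedforward features depend only on the desired trajectory, the approximation domain is known in advance, which is the practical advantage the abstract emphasizes.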
The Brain as an Efficient and Robust Adaptive Learner.
Denève, Sophie; Alemi, Alireza; Bourdoukan, Ralph
2017-06-07
Understanding how the brain learns to compute functions reliably, efficiently, and robustly with noisy spiking activity is a fundamental challenge in neuroscience. Most sensory and motor tasks can be described as dynamical systems and could presumably be learned by adjusting connection weights in a recurrent biological neural network. However, this is greatly complicated by the credit assignment problem for learning in recurrent networks: the contribution of each connection to the global output error cannot be determined from quantities locally accessible to the synapse. Combining tools from adaptive control theory and efficient coding theories, we propose that neural circuits can indeed learn complex dynamic tasks with local synaptic plasticity rules as long as they combine two experimentally established neural mechanisms. First, they should receive top-down feedback driving both their activity and their synaptic plasticity. Second, inhibitory interneurons should maintain a tight balance between excitation and inhibition in the circuit. The resulting networks could learn arbitrary dynamical systems and produce irregular spike trains as variable as those observed experimentally. Yet, this variability in single neurons may hide an extremely efficient and robust computation at the population level. Copyright © 2017 Elsevier Inc. All rights reserved.
A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks
Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo
2015-01-01
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns. PMID:26291608
A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.
Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo
2015-08-01
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.
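The plasticity rule quoted in this abstract can be stated compactly in code. The threshold values, afferent-current strength, learning rate, and network size below are illustrative assumptions chosen so that the sketch is self-contained; they are not the parameters analyzed in the paper.

import numpy as np

def three_threshold_update(W, x, h, lr=0.02,
                           theta_low=-2.0, theta_mid=0.0, theta_high=2.0):
    """One online presentation of a binary pattern x (0/1) with local fields h.
    Only synapses from active inputs (x_j = 1) are modified:
      - no change if the postsynaptic local field is above theta_high or below theta_low,
      - potentiation if theta_mid < h <= theta_high,
      - depression  if theta_low <= h <= theta_mid."""
    in_window = (h >= theta_low) & (h <= theta_high)
    direction = np.where(h > theta_mid, 1.0, -1.0)   # potentiate vs. depress
    dW = lr * np.outer(in_window * direction, x)     # columns with x_j = 0 stay unchanged
    np.fill_diagonal(dW, 0.0)                        # no self-connections
    return W + dW

# toy usage: store a few random binary patterns in a small recurrent network
rng = np.random.default_rng(2)
N, P = 50, 5
W = np.zeros((N, N))
patterns = (rng.random((P, N)) < 0.5).astype(float)
for _ in range(30):                                  # repeated online presentations
    for x in patterns:
        h = W @ x + 1.0 * (2 * x - 1)                # recurrent field plus afferent current
        W = three_threshold_update(W, x, h)

# crude recall check: recurrent fields should align with the stored pattern
aligned = ((W @ patterns[0] > 0) == patterns[0].astype(bool)).sum()
print(aligned, "of", N, "units have fields consistent with pattern 0")

Plasticity stops once the fields of the correct units are pushed comfortably past the outer thresholds, which is what gives the stored patterns their basins of attraction.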
Neural system applied on an invariant industrial character recognition
NASA Astrophysics Data System (ADS)
Lecoeuche, Stephane; Deguillemont, Denis; Dubus, Jean-Paul
1997-04-01
Besides the variety of fonts, character recognition systems for the industrial world are confronted with specific problems such as the variety of supports (metal, wood, paper, ceramics . . .), the variety of marking (printing, engraving, . . .) and the conditions of lighting. We present a system that is able to solve part of this problem. It implements a collaboration between two neural networks. The first network, specialized in vision, allows the system to extract the character from an image. Besides this capability, we have equipped our system with characteristics allowing it to obtain an invariant model of the presented character. Thus, whatever the position, size and orientation of the character during capture, the model presented to the input of the second network will be identical. The second network, thanks to a learning phase, permits us to obtain a character recognition system independent of the type of fonts used. Furthermore, its capabilities of generalization permit us to recognize degraded and/or distorted characters. A feedback loop between the two networks permits the first one to modify the quality of vision. The cooperation between these two networks allows us to recognize characters whatever the support and the marking.
NASA Astrophysics Data System (ADS)
Zheng, Mingwen; Li, Lixiang; Peng, Haipeng; Xiao, Jinghua; Yang, Yixian; Zhang, Yanping; Zhao, Hui
2018-06-01
This paper mainly studies the finite-time stability and synchronization problems of a memristor-based fractional-order fuzzy cellular neural network (MFFCNN). Firstly, we discuss the existence and uniqueness of the Filippov solution of the MFFCNN according to the Banach fixed point theorem and give a sufficient condition for the existence and uniqueness of the solution. Secondly, a sufficient condition to ensure the finite-time stability of the MFFCNN is obtained based on the definition of finite-time stability of the MFFCNN and the Gronwall-Bellman inequality. Thirdly, by designing a simple linear feedback controller, the finite-time synchronization criterion for drive-response MFFCNN systems is derived according to the definition of finite-time synchronization. These sufficient conditions are easy to verify. Finally, two examples are given to show the effectiveness of the proposed results.
Wang, Leimin; Zeng, Zhigang; Hu, Junhao; Wang, Xiaoping
2017-03-01
This paper addresses the controller design problem for global fixed-time synchronization of delayed neural networks (DNNs) with discontinuous activations. To solve this problem, adaptive control and state feedback control laws are designed. Then, based on the two controllers and two lemmas, the error system is proved to be globally asymptotically stable and even fixed-time stable. Moreover, some sufficient and easily checked conditions are derived to guarantee the global synchronization of drive and response systems in fixed time. It is noted that the settling-time functional for fixed-time synchronization is independent of initial conditions. Our fixed-time synchronization results contain the finite-time results as special cases by choosing different values of the two controllers. Finally, theoretical results are supported by numerical simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.
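For context, a generic fixed-time stabilizing feedback of the kind used in this literature (stated here only as an illustration, under the simplifying assumption that the synchronization error obeys $\dot{e}_i = u_i$; it is not necessarily the exact controller of this paper) is

$$u_i = -k_1 e_i - k_2\,\mathrm{sgn}(e_i)\,|e_i|^{\alpha} - k_3\,\mathrm{sgn}(e_i)\,|e_i|^{\beta}, \qquad 0 < \alpha < 1 < \beta,\ k_1 \ge 0,\ k_2, k_3 > 0.$$

Taking $V = |e_i|$ gives $\dot{V} \le -k_2 V^{\alpha} - k_3 V^{\beta}$, and integrating this differential inequality yields the settling-time bound $T \le \tfrac{1}{k_2(1-\alpha)} + \tfrac{1}{k_3(\beta-1)}$, which does not depend on the initial error; this independence from initial conditions is exactly what distinguishes fixed-time from merely finite-time synchronization.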
Structurally Dynamic Spin Market Networks
NASA Astrophysics Data System (ADS)
Horváth, Denis; Kuscsik, Zoltán
An agent-based model of stock price dynamics on a directed evolving complex network is suggested and studied by direct simulation. The stationary regime is maintained as a result of the balance between the extremal dynamics, adaptivity of strategic variables and reconnection rules. The inherent structure of the node agent "brain" is modeled by a recursive neural network with local and global inputs and feedback connections. For specific parametric combinations the complex network displays the small-world phenomenon combined with scale-free behavior. The identification of a local leader (network hub, an agent whose strategies are frequently adapted by its neighbors) is carried out by a repeated random-walk process through the network. The simulations show empirically relevant dynamics of price returns and volatility clustering. Additional emerging aspects of stylized market statistics are Zipfian distributions of fitness.
Mathalon, Daniel H; Sohal, Vikaas S
2015-08-01
Neural oscillations are rhythmic fluctuations over time in the activity or excitability of single neurons, local neuronal populations or "assemblies," and/or multiple regionally distributed neuronal assemblies. Synchronized oscillations among large numbers of neurons are evident in electrocorticographic, electroencephalographic, magnetoencephalographic, and local field potential recordings and are generally understood to depend on inhibition that paces assemblies of excitatory neurons to produce alternating temporal windows of reduced and increased excitability. Synchronization of neural oscillations is supported by the extensive networks of local and long-range feedforward and feedback bidirectional connections between neurons. Here, we review some of the major methods and measures used to characterize neural oscillations, with a focus on gamma oscillations. Distinctions are drawn between stimulus-independent oscillations recorded during resting states or intervals between task events, stimulus-induced oscillations that are time locked but not phase locked to stimuli, and stimulus-evoked oscillations that are both time and phase locked to stimuli. Synchrony of oscillations between recording sites, and between the amplitudes and phases of oscillations of different frequencies (cross-frequency coupling), is described and illustrated. Molecular mechanisms underlying gamma oscillations are also reviewed. Ultimately, understanding the temporal organization of neuronal network activity, including interactions between neural oscillations, is critical for elucidating brain dysfunction in neuropsychiatric disorders.
Slot-like capacity and resource-like coding in a neural model of multiple-item working memory.
Standage, Dominic; Pare, Martin
2018-06-27
For the past decade, research on the storage limitations of working memory has been dominated by two fundamentally different hypotheses. On the one hand, the contents of working memory may be stored in a limited number of `slots', each with a fixed resolution. On the other hand, any number of items may be stored, but with decreasing resolution. These two hypotheses have been invaluable in characterizing the computational structure of working memory, but neither provides a complete account of the available experimental data, nor speaks to the neural basis of the limitations it characterizes. To address these shortcomings, we simulated a multiple-item working memory task with a cortical network model, the cellular resolution of which allowed us to quantify the coding fidelity of memoranda as a function of memory load, as measured by the discriminability, regularity and reliability of simulated neural spiking. Our simulations account for a wealth of neural and behavioural data from human and non-human primate studies, and they demonstrate that feedback inhibition lowers both capacity and coding fidelity. Because the strength of inhibition scales with the number of items stored by the network, increasing this number progressively lowers fidelity until capacity is reached. Crucially, the model makes specific, testable predictions for neural activity on multiple-item working memory tasks.
Peters, Sabine; Van Duijvenvoorde, Anna C K; Koolschijn, P Cédric M P; Crone, Eveline A
2016-06-01
Feedback learning is a crucial skill for cognitive flexibility that continues to develop into adolescence, and is linked to neural activity within a frontoparietal network. Although it is well conceptualized that activity in the frontoparietal network changes during development, there is surprisingly little consensus about the direction of change. Using a longitudinal design (N=208, 8-27 years, two measurements in two years), we investigated developmental trajectories in frontoparietal activity during feedback learning. Our first aim was to test for linear and nonlinear developmental trajectories in dorsolateral prefrontal cortex (DLPFC), superior parietal cortex (SPC), supplementary motor area (SMA) and anterior cingulate cortex (ACC). Second, we tested which factors (task performance, working memory, cortical thickness) explained additional variance in time-related changes in activity besides age. Developmental patterns for activity in DLPFC and SPC were best characterized by a quadratic age function leveling off/peaking in late adolescence. There was a linear increase in SMA and a linear decrease with age in ACC activity. In addition to age, task performance explained variance in DLPFC and SPC activity, whereas cortical thickness explained variance in SMA activity. Together, these findings provide a novel perspective of linear and nonlinear developmental changes in the frontoparietal network during feedback learning. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
Liu, Derong; Wang, Ding; Li, Hongliang
2014-02-01
In this paper, using a neural-network-based online learning optimal control approach, a novel decentralized control strategy is developed to stabilize a class of continuous-time nonlinear interconnected large-scale systems. First, optimal controllers of the isolated subsystems are designed with cost functions reflecting the bounds of interconnections. Then, it is proven that the decentralized control strategy of the overall system can be established by adding appropriate feedback gains to the optimal control policies of the isolated subsystems. Next, an online policy iteration algorithm is presented to solve the Hamilton-Jacobi-Bellman equations related to the optimal control problem. Through constructing a set of critic neural networks, the cost functions can be obtained approximately, followed by the control policies. Furthermore, the dynamics of the estimation errors of the critic networks are verified to be uniformly and ultimately bounded. Finally, a simulation example is provided to illustrate the effectiveness of the present decentralized control scheme.
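The critic-based policy iteration in this abstract can be illustrated for a single isolated scalar subsystem. The plant, cost weights, polynomial critic basis, sample grid, and initial stabilizing policy below are assumptions chosen only to make the sketch self-contained; the paper's decentralized scheme additionally accounts for interconnection bounds and performs the learning online.

import numpy as np

# Scalar nonlinear plant: xdot = f(x) + g(x) * u, cost integrand Q*x^2 + R*u^2.
f = lambda x: x ** 3
g = lambda x: 1.0
Q, R = 1.0, 1.0

# Critic: V(x) ~= w . phi(x) with even polynomial features (an assumed basis).
phi_d = lambda x: np.array([2 * x, 4 * x ** 3, 6 * x ** 5])   # dV/dx features

def evaluate_policy(policy, xs):
    """Least-squares policy evaluation: fit critic weights so the HJB/Lyapunov
    residual Q*x^2 + R*u^2 + dV/dx*(f + g*u) is small on the sample grid."""
    A = np.array([phi_d(x) * (f(x) + g(x) * policy(x)) for x in xs])
    b = np.array([-(Q * x ** 2 + R * policy(x) ** 2) for x in xs])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

xs = np.linspace(-1.0, 1.0, 41)
policy = lambda x: -x - 2 * x ** 3          # an initial stabilizing policy (assumed)
for _ in range(8):                          # policy iteration
    w = evaluate_policy(policy, xs)
    policy = lambda x, w=w: -0.5 / R * g(x) * (w @ phi_d(x))   # policy improvement
print("critic weights:", w.round(3))

Each pass evaluates the current policy by fitting the critic and then improves the policy from the critic's gradient, which is the same evaluate/improve alternation the paper runs with critic neural networks.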
Combined feedforward and feedback control of a redundant, nonlinear, dynamic musculoskeletal system.
Blana, Dimitra; Kirsch, Robert F; Chadwick, Edward K
2009-05-01
A functional electrical stimulation controller is presented that uses a combination of feedforward and feedback for arm control in high-level injury. The feedforward controller generates the muscle activations nominally required for desired movements, and the feedback controller corrects for errors caused by muscle fatigue and external disturbances. The feedforward controller is an artificial neural network (ANN) which approximates the inverse dynamics of the arm. The feedback loop includes a PID controller in series with a second ANN representing the nonlinear properties and biomechanical interactions of muscles and joints. The controller was designed and tested using a two-joint musculoskeletal model of the arm that includes four mono-articular and two bi-articular muscles. Its performance during goal-oriented movements of varying amplitudes and durations showed a tracking error of less than 4 degrees in ideal conditions, and less than 10 degrees even in the case of considerable fatigue and external disturbances.
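The division of labour described in this abstract, an inverse-dynamics network supplying nominal activations with a feedback path correcting for fatigue and disturbances, can be sketched on a one-degree-of-freedom stand-in for the arm. The "muscle", its nominal parameters, the fatigue model, and the PID gains are placeholder assumptions; the study itself uses a two-joint, six-muscle musculoskeletal model and a second ANN in the feedback path.

import numpy as np

def inverse_model_ann(q_des, dq_des, ddq_des):
    """Stand-in for the trained inverse-dynamics ANN: nominal activation for the
    assumed single-DOF dynamics I*ddq + b*dq + k*q = a*u (nominal parameters)."""
    I, b, k, a = 0.05, 0.2, 1.0, 10.0
    return float(np.clip((I * ddq_des + b * dq_des + k * q_des) / a, 0.0, 1.0))

def pid(e, e_int, de, kp=2.0, ki=1.0, kd=0.05):
    """Feedback correction for errors caused by fatigue and disturbances."""
    return kp * e + ki * e_int + kd * de

dt, T = 1e-3, 6.0
q = dq = e_int = e_prev = 0.0
fatigue = 1.0
errors = []
for k in range(int(T / dt)):
    t = k * dt
    q_des = 0.5 * (1.0 - np.cos(t))            # desired joint angle in [0, 1] rad
    dq_des, ddq_des = 0.5 * np.sin(t), 0.5 * np.cos(t)
    e = q_des - q
    e_int += e * dt
    de = (e - e_prev) / dt
    e_prev = e
    u = inverse_model_ann(q_des, dq_des, ddq_des) + pid(e, e_int, de)
    u = float(np.clip(u, 0.0, 1.0))            # muscle activation is bounded
    fatigue = max(0.6, fatigue - 2e-5)         # slow loss of force-generating capacity
    disturbance = 0.3 if 3.0 < t < 3.5 else 0.0
    ddq = (10.0 * fatigue * u - 0.2 * dq - 1.0 * q - disturbance) / 0.05
    dq += ddq * dt
    q += dq * dt
    errors.append(abs(e))
print("peak tracking error after start-up:", round(max(errors[1000:]), 3), "rad")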
Flight Test of an Intelligent Flight-Control System
NASA Technical Reports Server (NTRS)
Davidson, Ron; Bosworth, John T.; Jacobson, Steven R.; Thomson, Michael P.; Jorgensen, Charles C.
2003-01-01
The F-15 Advanced Controls Technology for Integrated Vehicles (ACTIVE) airplane was the test bed for a flight test of an intelligent flight control system (IFCS). This IFCS utilizes a neural network to determine critical stability and control derivatives for a control law, the real-time gains of which are computed by an algorithm that solves the Riccati equation. These derivatives are also used to identify the parameters of a dynamic model of the airplane. The model is used in a model-following portion of the control law, in order to provide specific vehicle handling characteristics. The flight test of the IFCS marks the initiation of the Intelligent Flight Control System Advanced Concept Program (IFCS ACP), which is a collaboration between NASA and Boeing Phantom Works. The goals of the IFCS ACP are to (1) develop the concept of a flight-control system that uses neural-network technology to identify aircraft characteristics to provide optimal aircraft performance, (2) develop a self-training neural network to update estimates of aircraft properties in flight, and (3) demonstrate the aforementioned concepts on the F-15 ACTIVE airplane in flight. The activities of the initial IFCS ACP were divided into three Phases, each devoted to the attainment of a different objective. The objective of Phase I was to develop a pre-trained neural network to store and recall the wind-tunnel-based stability and control derivatives of the vehicle. The objective of Phase II was to develop a neural network that can learn how to adjust the stability and control derivatives to account for failures or modeling deficiencies. The objective of Phase III was to develop a flight control system that uses the neural network outputs as a basis for controlling the aircraft. The flight test of the IFCS was performed in stages. In the first stage, the Phase I version of the pre-trained neural network was flown in a passive mode. The neural network software was running using flight data inputs with the outputs provided to instrumentation only. The IFCS was not used to control the airplane. In another stage of the flight test, the Phase I pre-trained neural network was integrated into a Phase III version of the flight control system. The Phase I pre-trained neural network provided real-time stability and control derivatives to a Phase III controller that was based on a stochastic optimal feedforward and feedback technique (SOFFT). This combined Phase I/III system was operated together with the research flight-control system (RFCS) of the F-15 ACTIVE during the flight test. The RFCS enables the pilot to switch quickly from the experimental-research flight mode back to the safe conventional mode. These initial IFCS ACP flight tests were completed in April 1999. The Phase I/III flight test milestone was to demonstrate, across a range of subsonic and supersonic flight conditions, that the pre-trained neural network could be used to supply real-time aerodynamic stability and control derivatives to the closed-loop optimal SOFFT flight controller. Additional objectives attained in the flight test included (1) flight qualification of a neural-network-based control system; (2) the use of a combined neural-network/closed-loop optimal flight-control system to obtain level-one handling qualities; and (3) demonstration, through variation of control gains, that different handling qualities can be achieved by setting new target parameters.
In addition, data for the Phase II (on-line-learning) neural network were collected, during the use of stacked-frequency-sweep excitation, for post-flight analysis. Initial analysis of these data showed the potential for future flight tests that will incorporate the real-time identification and on-line learning aspects of the IFCS.
Adaptive neural control for a class of nonlinear time-varying delay systems with unknown hysteresis.
Liu, Zhi; Lai, Guanyu; Zhang, Yun; Chen, Xin; Chen, Chun Lung Philip
2014-12-01
This paper investigates the fusion of an unknown-direction hysteresis model with adaptive neural control techniques in the face of time-delayed continuous-time nonlinear systems without strict-feedback form. In contrast with previous works on the hysteresis phenomenon, the direction of the modified Bouc-Wen hysteresis model investigated here is unknown. To reduce the computational burden in the adaptation mechanism, an optimized adaptation method is successfully applied to the control design. Based on the Lyapunov-Krasovskii method, two neural-network-based adaptive control algorithms are constructed to guarantee that all the system states and adaptive parameters remain bounded, and the tracking error converges to an adjustable neighborhood of the origin. Finally, some numerical examples are provided to validate the effectiveness of the proposed control methods.
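For readers unfamiliar with the hysteresis model named above, the classic Bouc-Wen differential model (shown here in its standard form; the paper studies a modified variant) produces rate-independent hysteresis loops from a single internal state:

import numpy as np

def bouc_wen(u, dt=1e-3, alpha=1.0, beta=0.5, gamma=0.5, n=1.0, k=1.0, d=0.6):
    """Classic Bouc-Wen hysteresis: output y = k*u + d*z with internal state z."""
    z, y, u_prev = 0.0, [], u[0]
    for uk in u:
        du = (uk - u_prev) / dt
        dz = alpha * du - beta * abs(du) * (abs(z) ** (n - 1)) * z - gamma * du * (abs(z) ** n)
        z += dz * dt
        y.append(k * uk + d * z)
        u_prev = uk
    return np.array(y)

t = np.arange(0.0, 6.0, 1e-3)
u = np.sin(2 * np.pi * 0.5 * t)          # periodic input traces out a hysteresis loop
y = bouc_wen(u)
print("output range:", round(float(y.min()), 3), "to", round(float(y.max()), 3))

In the paper, the direction (sign) associated with this nonlinearity is unknown and has to be handled by the adaptive design itself.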
Fernández-Alemán, José Luis; López-González, Laura; González-Sequeros, Ofelia; Jayne, Chrisina; López-Jiménez, Juan José; Carrillo-de-Gea, Juan Manuel; Toval, Ambrosio
2016-04-01
This paper presents an empirical study of a formative neural network-based assessment approach by using mobile technology to provide pharmacy students with intelligent diagnostic feedback. An unsupervised learning algorithm was integrated with an audience response system called SIDRA in order to generate states that collect some commonality in responses to questions and add diagnostic feedback for guided learning. A total of 89 pharmacy students enrolled on a Human Anatomy course were taught using two different teaching methods. Forty-four students employed intelligent SIDRA (i-SIDRA), whereas 45 students received the same training but without using i-SIDRA. A statistically significant difference was found between the experimental group (i-SIDRA) and the control group (traditional learning methodology), with T (87) = 6.598, p < 0.001. In four MCQs tests, the difference between the number of correct answers in the first attempt and in the last attempt was also studied. A global effect size of 0.644 was achieved in the meta-analysis carried out. The students expressed satisfaction with the content provided by i-SIDRA and the methodology used during the process of learning anatomy (M = 4.59). The new empirical contribution presented in this paper allows instructors to perform post hoc analyses of each particular student's progress to ensure appropriate training.
Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani
2016-01-01
This paper presents a novel adaptive neural network (NN) control of single-input and single-output uncertain nonlinear discrete-time systems under event sampled NN inputs. In this control scheme, the feedback signals are transmitted, and the NN weights are tuned in an aperiodic manner at the event sampled instants. After reviewing the NN approximation property with event sampled inputs, an adaptive state estimator (SE), consisting of linearly parameterized NNs, is utilized to approximate the unknown system dynamics in an event sampled context. The SE is viewed as a model and its approximated dynamics and the state vector, during any two events, are utilized for the event-triggered controller design. An adaptive event-trigger condition is derived by using both the estimated NN weights and a dead-zone operator to determine the event sampling instants. This condition both facilitates the NN approximation and reduces the transmission of feedback signals. The ultimate boundedness of both the NN weight estimation error and the system state vector is demonstrated through the Lyapunov approach. As expected, during an initial online learning phase, events are observed more frequently. Over time with the convergence of the NN weights, the inter-event times increase, thereby lowering the number of triggered events. These claims are illustrated through the simulation results.
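The event-sampling idea, in which feedback is transmitted and weights are updated only when a state-dependent trigger fires, with a dead-zone suppressing events near the origin, can be sketched generically. The trigger form, dead-zone, gain, and scalar plant below are illustrative assumptions, not the adaptive, NN-weight-dependent condition derived in the paper.

import numpy as np

def event_triggered_run(T=10.0, dt=1e-3, sigma=0.2, deadzone=0.02):
    """Simulate a scalar plant with event-sampled state feedback.
    The controller holds the last transmitted state; a new transmission occurs
    only when the gap |x - x_last| exceeds sigma*|x| + deadzone."""
    x, x_last, events = 1.0, 1.0, 0
    k_gain = 2.0
    for _ in range(int(T / dt)):
        gap = abs(x - x_last)
        if gap > sigma * abs(x) + deadzone:   # event-trigger condition (assumed form)
            x_last = x                        # transmit and update the controller state
            events += 1
        u = -k_gain * x_last                  # control uses the last transmitted sample
        x += (0.5 * x + u) * dt               # unstable plant: xdot = 0.5*x + u
    return x, events

x_final, n_events = event_triggered_run()
print(round(x_final, 4), n_events, "transmissions over 10 s (vs. 10000 periodic samples)")

Consistent with the abstract, the dead-zone trades exact convergence for ultimate boundedness while sharply reducing the number of transmissions relative to periodic sampling.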
Leonard, J L
2000-05-01
Understanding how species-typical movement patterns are organized in the nervous system is a central question in neurobiology. The current explanations involve 'alphabet' models in which an individual neuron may participate in the circuit for several behaviors but each behavior is specified by a specific neural circuit. However, not all of the well-studied model systems fit the 'alphabet' model. The 'equation' model provides an alternative possibility, whereby a system of parallel motor neurons, each with a unique (but overlapping) field of innervation, can account for the production of stereotyped behavior patterns by variable circuits. That is, it is possible for such patterns to arise as emergent properties of a generalized neural network in the absence of feedback, a simple version of a 'self-organizing' behavioral system. Comparison of systems of identified neurons suggest that the 'alphabet' model may account for most observations where CPGs act to organize motor patterns. Other well-known model systems, involving architectures corresponding to feed-forward neural networks with a hidden layer, may organize patterned behavior in a manner consistent with the 'equation' model. Such architectures are found in the Mauthner and reticulospinal circuits, 'escape' locomotion in cockroaches, CNS control of Aplysia gill, and may also be important in the coordination of sensory information and motor systems in insect mushroom bodies and the vertebrate hippocampus. The hidden layer of such networks may serve as an 'internal representation' of the behavioral state and/or body position of the animal, allowing the animal to fine-tune oriented, or particularly context-sensitive, movements to the prevalent conditions. Experiments designed to distinguish between the two models in cases where they make mutually exclusive predictions provide an opportunity to elucidate the neural mechanisms by which behavior is organized in vivo and in vitro. Copyright 2000 S. Karger AG, Basel
A novel constructive-optimizer neural network for the traveling salesman problem.
Saadatmand-Tarzjan, Mahdi; Khademi, Morteza; Akbarzadeh-T, Mohammad-R; Moghaddam, Hamid Abrishami
2007-08-01
In this paper, a novel constructive-optimizer neural network (CONN) is proposed for the traveling salesman problem (TSP). CONN uses a feedback structure similar to Hopfield-type neural networks and a competitive training algorithm similar to the Kohonen-type self-organizing maps (K-SOMs). Consequently, CONN is composed of a constructive part, which grows the tour, and an optimizer part to optimize it. In the training algorithm, an initial tour is created first and introduced to CONN. Then, it is trained in the constructive phase for adding a number of cities to the tour. Next, the training algorithm switches to the optimizer phase for optimizing the current tour by displacing the tour cities. After convergence in this phase, the training algorithm switches to the constructive phase anew and is continued until all cities are added to the tour. Furthermore, we investigate a relationship between the number of TSP cities and the number of cities to be added in each constructive phase. CONN was tested on nine sets of benchmark TSPs from TSPLIB to demonstrate its performance and efficiency. It performed better than several typical neural networks (NNs), including KNIES_TSP_Local, KNIES_TSP_Global, Budinich's SOM, Co-Adaptive Net, and the multivalued Hopfield network, as well as computationally comparable variants of the simulated annealing algorithm, in terms of both CPU time and accuracy. Furthermore, CONN converged considerably faster than expanding SOM and evolved integrated SOM and generated shorter tours compared to KNIES_DECOMPOSE. Although CONN is not yet comparable in terms of accuracy with some sophisticated computationally intensive algorithms, it converges significantly faster than they do. Generally speaking, CONN provides the best compromise between CPU time and accuracy among currently reported NNs for TSP.
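The high-level alternation this abstract describes, growing the tour by a few cities, then optimizing it by displacing cities, and repeating until all cities are inserted, can be imitated with ordinary insertion and relocation heuristics. The sketch below is only that imitation: it contains none of CONN's neural feedback structure or competitive training, and the batch size and instance are arbitrary assumptions.

import numpy as np

def tour_length(tour, D):
    return sum(D[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def cheapest_insertion(tour, city, D):
    """Constructive step: insert one city where it lengthens the tour least."""
    best_pos, best_cost = 0, float("inf")
    for i in range(len(tour)):
        a, b = tour[i], tour[(i + 1) % len(tour)]
        cost = D[a, city] + D[city, b] - D[a, b]
        if cost < best_cost:
            best_pos, best_cost = i + 1, cost
    return tour[:best_pos] + [city] + tour[best_pos:]

def optimize(tour, D):
    """Optimizer step: repeatedly relocate single cities while the tour shortens."""
    improved = True
    while improved:
        improved = False
        for i in range(len(tour)):
            city = tour[i]
            trial = cheapest_insertion(tour[:i] + tour[i + 1:], city, D)
            if tour_length(trial, D) + 1e-12 < tour_length(tour, D):
                tour, improved = trial, True
    return tour

rng = np.random.default_rng(3)
pts = rng.random((40, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
tour, remaining = [0, 1, 2], list(range(3, 40))
while remaining:
    for city in remaining[:5]:          # constructive phase: add a batch of cities
        tour = cheapest_insertion(tour, city, D)
    remaining = remaining[5:]
    tour = optimize(tour, D)            # optimizer phase: displace cities locally
print("tour length:", round(tour_length(tour, D), 3))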
Sensory-Motor Networks Involved in Speech Production and Motor Control: An fMRI Study
Behroozmand, Roozbeh; Shebek, Rachel; Hansen, Daniel R.; Oya, Hiroyuki; Robin, Donald A.; Howard, Matthew A.; Greenlee, Jeremy D.W.
2015-01-01
Speaking is one of the most complex motor behaviors developed to facilitate human communication. The underlying neural mechanisms of speech involve sensory-motor interactions that incorporate feedback information for online monitoring and control of produced speech sounds. In the present study, we adopted an auditory feedback pitch perturbation paradigm and combined it with functional magnetic resonance imaging (fMRI) recordings in order to identify brain areas involved in speech production and motor control. Subjects underwent fMRI scanning while they produced a steady vowel sound /a/ (speaking) or listened to the playback of their own vowel production (playback). During each condition, the auditory feedback from vowel production was either normal (no perturbation) or perturbed by an upward (+600 cents) pitch shift stimulus randomly. Analysis of BOLD responses during speaking (with and without shift) vs. rest revealed activation of a complex network including bilateral superior temporal gyrus (STG), Heschl's gyrus, precentral gyrus, supplementary motor area (SMA), Rolandic operculum, postcentral gyrus and right inferior frontal gyrus (IFG). Performance correlation analysis showed that the subjects produced compensatory vocal responses that significantly correlated with BOLD response increases in bilateral STG and left precentral gyrus. However, during playback, the activation network was limited to cortical auditory areas including bilateral STG and Heschl's gyrus. Moreover, the contrast between speaking vs. playback highlighted a distinct functional network that included bilateral precentral gyrus, SMA, IFG, postcentral gyrus and insula. These findings suggest that speech motor control involves feedback error detection in sensory (e.g. auditory) cortices that subsequently activate motor-related areas for the adjustment of speech parameters during speaking. PMID:25623499
Krishnan, Ananthanarayan; Gandour, Jackson T
2014-12-01
Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergo transformation from early sensory to later cognitive stages of processing in a well coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence that shows enhanced representation of linguistically-relevant pitch dimensions or features at both the brainstem and cortical levels with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch relevant information in the brainstem is more fine-grained spectrotemporally as it reflects sustained neural phase-locking to pitch relevant periodicities contained in the stimulus. In contrast, the cortical pitch relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally-relevant features that contribute optimally to perception. It is our view that long-term experience shapes this adaptive process wherein the top-down connections provide selective gating of inputs to both cortical and subcortical structures to enhance neural responses to specific behaviorally-relevant attributes of the stimulus. A theoretical framework for a neural network is proposed involving coordination between local, feedforward, and feedback components that can account for experience-dependent enhancement of pitch representations at multiple levels of the auditory pathway. The ability to record brainstem and cortical pitch relevant responses concurrently may provide a new window to evaluate the online interplay between feedback, feedforward, and local intrinsic components in the hierarchical processing of pitch relevant information.
Krishnan, Ananthanarayan; Gandour, Jackson T.
2015-01-01
Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergo transformation from early sensory to later cognitive stages of processing in a well coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence that shows enhanced representation of linguistically-relevant pitch dimensions or features at both the brainstem and cortical levels with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch relevant information in the brainstem is more fine-grained spectrotemporally as it reflects sustained neural phase-locking to pitch relevant periodicities contained in the stimulus. In contrast, the cortical pitch relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally-relevant features that contribute optimally to perception. It is our view that long-term experience shapes this adaptive process wherein the top-down connections provide selective gating of inputs to both cortical and subcortical structures to enhance neural responses to specific behaviorally-relevant attributes of the stimulus. A theoretical framework for a neural network is proposed involving coordination between local, feedforward, and feedback components that can account for experience-dependent enhancement of pitch representations at multiple levels of the auditory pathway. The ability to record brainstem and cortical pitch relevant responses concurrently may provide a new window to evaluate the online interplay between feedback, feedforward, and local intrinsic components in the hierarchical processing of pitch relevant information. PMID:25838636
Decade Review (1999-2009): Artificial Intelligence Techniques in Student Modeling
NASA Astrophysics Data System (ADS)
Drigas, Athanasios S.; Argyri, Katerina; Vrettaros, John
Artificial intelligence applications in the educational field became increasingly popular during the last decade (1999-2009), and much relevant research has been conducted. In this paper, we present the most interesting attempts to apply artificial intelligence methods such as fuzzy logic, neural networks, genetic programming and hybrid approaches such as neuro-fuzzy systems and genetic programming neural networks (GPNN) to student modeling. This research trend is a part of every Intelligent Tutoring System and aims at generating and updating a student model in order to adapt learning content to individual needs or to provide reliable assessment and feedback on students' answers. We briefly present the methods used in order to point out their qualities, and then survey the most representative studies of the decade of interest after classifying them according to the principal aim they attempted to serve.
Fast tomographic methods for the tokamak ISTTOK
NASA Astrophysics Data System (ADS)
Carvalho, P. J.; Thomsen, H.; Gori, S.; Toussaint, U. v.; Weller, A.; Coelho, R.; Neto, A.; Pereira, T.; Silva, C.; Fernandes, H.
2008-04-01
The achievement of long-duration, alternating-current discharges on the tokamak ISTTOK requires a real-time plasma position control system. The plasma position determination based on the magnetic probe system has been found to be inadequate during the current inversion due to the reduced plasma current. A tomography diagnostic has therefore been installed to supply the required feedback to the control system. Several tomographic methods are available for soft X-ray or bolometric tomography, among which the Cormack and neural network methods stand out due to their inherent speed of up to 1000 reconstructions per second with currently available technology. This paper discusses the application of these algorithms to fusion devices while comparing the performance and reliability of the results. It has been found that although the Cormack-based inversion proved to be faster, the neural network reconstruction has fewer artifacts and is more accurate.
Wei, Yanling; Park, Ju H.; Karimi, Hamid Reza; Tian, Yu-Chu; Jung, Hoyoul
2018-06-01
Continuous-time semi-Markovian jump neural networks (semi-MJNNs) are those MJNNs whose transition rates are not constant but depend on the random sojourn time. Addressing stochastic synchronization of semi-MJNNs with time-varying delay, an improved stochastic stability criterion is derived in this paper to guarantee stochastic synchronization of the response systems with the drive systems. This is achieved by constructing a semi-Markovian Lyapunov-Krasovskii functional and making use of a novel integral inequality together with the characteristics of cumulative distribution functions. Then, with a linearization procedure, controller synthesis is carried out for stochastic synchronization of the drive-response systems. The desired state-feedback controller gains can be determined by solving a linear matrix inequality-based optimization problem. Simulation studies are carried out to demonstrate the effectiveness and reduced conservatism of the presented approach.
The NASA F-15 Intelligent Flight Control Systems: Generation II
NASA Technical Reports Server (NTRS)
Buschbacher, Mark; Bosworth, John
2006-01-01
The Second Generation (Gen II) control system for the F-15 Intelligent Flight Control System (IFCS) program implements direct adaptive neural networks to demonstrate robust tolerance to faults and failures. The direct adaptive tracking controller integrates learning neural networks (NNs) with a dynamic inversion control law. The term direct adaptive is used because the error between the reference model and the aircraft response is compensated for directly, minimizing the error without regard to its cause. No parameter estimation is needed for this direct adaptive control system. In the Gen II design, the feedback errors are regulated with a proportional-plus-integral (PI) compensator. This basic compensator is augmented with an online NN that changes the system gains via an error-based adaptation law to improve aircraft performance at all times, including normal flight, system failures, mispredicted behavior, or changes in behavior resulting from damage.
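A single-axis caricature of the Gen II structure, a PI compensator acting on the reference-model error augmented by an online network whose weights adapt directly from that error, is sketched below. The reference model, plant, gains, and sigma-modification term are generic assumptions, not the flight-qualified dynamic-inversion design.

import numpy as np

def rbf(x, centers, width=0.5):
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

dt, T = 1e-3, 20.0
centers = np.linspace(-2, 2, 11)
W = np.zeros(11)                 # online NN weights, adapted directly from the error
gamma, sigma = 5.0, 0.01         # adaptation gain and sigma-modification (assumed)
kp, ki = 4.0, 2.0                # PI compensator gains (assumed)

x = x_ref = e_int = 0.0
a_true = -0.3                    # plant xdot = a_true*x + u (dynamics not known exactly)
for k in range(int(T / dt)):
    t = k * dt
    cmd = 1.0 if t > 1.0 else 0.0
    x_ref += (-1.0 * x_ref + 1.0 * cmd) * dt            # first-order reference model
    e = x_ref - x                                        # reference-model tracking error
    e_int += e * dt
    u_pi = kp * e + ki * e_int                           # baseline PI compensator
    u_ad = W @ rbf(x, centers)                           # adaptive NN augmentation
    u = u_pi + u_ad
    W += dt * (gamma * e * rbf(x, centers) - sigma * W)  # direct, error-based adaptation
    x += (a_true * x + u) * dt
print("final reference-model error:", round(abs(x_ref - x), 4))

The augmentation term adapts to whatever error remains, whether it comes from failures or from model mismatch, which is the sense in which the scheme is "direct" adaptive.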
Cascade control of superheated steam temperature with neuro-PID controller.
Zhang, Jianhua; Zhang, Fenfang; Ren, Mifeng; Hou, Guolian; Fang, Fang
2012-11-01
In this paper, an improved cascade control methodology for superheated processes is developed, in which the primary PID controller is implemented by neural networks trained by minimizing an error entropy criterion. The entropy of the tracking error can be estimated recursively by utilizing a receding horizon window technique. The measurable disturbances in superheated processes are fed to the neuro-PID controller in addition to the sequence of tracking errors in the outer-loop control system; hence, feedback control is combined with feedforward control in the proposed neuro-PID controller. The convergence condition of the neural networks is analyzed. The implementation procedure of the proposed cascade control approach is summarized. Compared with a neuro-PID controller trained by minimizing a squared error criterion, the proposed neuro-PID controller trained by minimizing the error entropy criterion may decrease fluctuations of the superheated steam temperature. A simulation example shows the advantages of the proposed method. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
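One standard way to turn an error-entropy criterion into something computable is Renyi's quadratic entropy estimated with Gaussian kernels over a window of recent tracking errors (the so-called information potential). The sketch below is a generic illustration of that estimator rather than the paper's recursive receding-horizon form; the window length W, the kernel width sigma and the error sequence are assumed placeholders.

    import numpy as np

    def quadratic_renyi_entropy(errors, sigma=0.5):
        """Renyi's quadratic entropy H2 = -log V of an error sample, with V the
        Gaussian-kernel information potential (kernel width sigma*sqrt(2))."""
        e = np.asarray(errors, dtype=float)
        diff = e[:, None] - e[None, :]                   # all pairwise error differences
        kernel = np.exp(-diff ** 2 / (4.0 * sigma ** 2)) / np.sqrt(4.0 * np.pi * sigma ** 2)
        return -np.log(kernel.mean())                    # -log of the information potential

    # Receding-horizon use: evaluate the criterion over the most recent W tracking errors.
    W = 50                                               # assumed window length
    tracking_errors = 0.1 * np.random.default_rng(0).standard_normal(200)   # placeholder sequence
    print("H2 over the current window: %.3f" % quadratic_renyi_entropy(tracking_errors[-W:]))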
Adaptive neural network motion control for aircraft under uncertainty conditions
NASA Astrophysics Data System (ADS)
Efremov, A. V.; Tiaglik, M. S.; Tiumentsev, Yu V.
2018-02-01
Motion control of modern and advanced aircraft must be provided under diverse uncertainty conditions. This problem can be solved by using adaptive control laws. We carry out an analysis of the capabilities of these laws for such adaptive schemes as MRAC (Model Reference Adaptive Control) and MPC (Model Predictive Control). In the case of a nonlinear control object, the most efficient solution to the adaptive control problem is the use of neural network technologies. These technologies are suitable for the development of both a control object model and a control law for the object. The approximate nature of the ANN model is taken into account by introducing additional compensating feedback into the control system. The capabilities of adaptive control laws under uncertainty in the source data are considered. We also conduct simulations to assess the contribution of adaptivity to the behavior of the system.
Neural-network dedicated processor for solving competitive assignment problems
NASA Technical Reports Server (NTRS)
Eberhardt, Silvio P. (Inventor)
1993-01-01
A neural-network processor for solving first-order competitive assignment problems consists of a matrix of N x M processing units (PUs), each of which corresponds to the pairing of a first number of elements of R_i with a second number of elements of C_j, wherein limits of the first number are programmed in row control superneurons, and limits of the second number are programmed in column superneurons as MIN and MAX values. The cost (weight) W_ij of the pairings is programmed separately into each PU. For each row and column of PUs, a dedicated constraint superneuron ensures that the number of active neurons within the associated row or column falls within a specified range. Annealing is provided by gradually increasing the PU gain for each row and column, or by increasing positive feedback to each PU, the latter being effective to increase the hysteresis of each PU, or by combining both of these techniques.
Zouari, Farouk; Ibeas, Asier; Boulkroune, Abdesselem; Cao, Jinde; Mehdi Arefi, Mohammad
2018-06-01
This study addresses the issue of the adaptive output tracking control for a category of uncertain nonstrict-feedback delayed incommensurate fractional-order systems in the presence of nonaffine structures, unmeasured pseudo-states, unknown control directions, unknown actuator nonlinearities and output constraints. Firstly, the mean value theorem and the Gaussian error function are introduced to eliminate the difficulties that arise from the nonaffine structures and the unknown actuator nonlinearities, respectively. Secondly, the immeasurable tracking error variables are suitably estimated by constructing a fractional-order linear observer. Thirdly, the neural network, the Razumikhin Lemma, the variable separation approach, and the smooth Nussbaum-type function are used to deal with the uncertain nonlinear dynamics, the unknown time-varying delays, the nonstrict feedback and the unknown control directions, respectively. Fourthly, asymmetric barrier Lyapunov functions are employed to overcome the violation of the output constraints and to tune online the parameters of the adaptive neural controller. Through rigorous analysis, it is proved that the boundedness of all variables in the closed-loop system and the semi-global asymptotic tracking are ensured without transgression of the constraints. The principal contributions of this study can be summarized as follows: (1) based on Caputo's definitions and new lemmas, methods concerning the controllability, observability and stability analysis of integer-order systems are extended to fractional-order ones, (2) the output tracking objective for a relatively large class of uncertain systems is achieved with a simple controller and fewer tuning parameters. Finally, computer-simulation studies from the robotic field are given to demonstrate the effectiveness of the proposed controller. Copyright © 2018 Elsevier Ltd. All rights reserved.
Lindquist, Kristen A.; Adebayo, Morenikeji; Barrett, Lisa Feldman
2016-01-01
Negative stimuli do not only evoke fear or disgust, but can also evoke a state of ‘morbid fascination’ which is an urge to approach and explore a negative stimulus. In the present neuroimaging study, we applied an innovative method to investigate the neural systems involved in typical and atypical conceptualizations of negative images. Participants received false feedback labeling their mental experience as fear, disgust or morbid fascination. This manipulation was successful; participants judged the false feedback correct for 70% of the trials on average. The neuroimaging results demonstrated differential activity within regions in the ‘neural reference space for discrete emotion’ depending on the type of feedback. We found robust differences in the ventrolateral prefrontal cortex, the dorsomedial prefrontal cortex and the lateral orbitofrontal cortex comparing morbid fascination to control feedback. More subtle differences in the dorsomedial prefrontal cortex and the lateral orbitofrontal cortex were also found between morbid fascination feedback and the other emotion feedback conditions. This study is the first to forward evidence about the neural representation of the experimentally unexplored state of morbid fascination. In line with a constructionist framework, our findings suggest that neural resources associated with the process of conceptualization contribute to the neural representation of this state. PMID:26180088
Information processing in echo state networks at the edge of chaos.
Boedecker, Joschka; Obst, Oliver; Lizier, Joseph T; Mayer, N Michael; Asada, Minoru
2012-09-01
We investigate information processing in randomly connected recurrent neural networks. It has been shown previously that the computational capabilities of these networks are maximized when the recurrent layer is close to the border between a stable and an unstable dynamical regime, the so-called edge of chaos. The reasons, however, for this maximized performance are not completely understood. We adopt an information-theoretical framework and are for the first time able to quantify the computational capabilities between elements of these networks directly as they undergo the phase transition to chaos. Specifically, we present evidence that both information transfer and storage in the recurrent layer are maximized close to this phase transition, providing an explanation for why guiding the recurrent layer toward the edge of chaos is computationally useful. As a consequence, our study suggests self-organized ways of improving performance in recurrent neural networks, driven by input data. Moreover, the networks we study share important features with biological systems such as feedback connections and online computation on input streams. A key example is the cerebral cortex, which was shown to also operate close to the edge of chaos. Consequently, the behavior of model systems as studied here is likely to shed light on reasons why biological systems are tuned into this specific regime.
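In echo state network practice, the distance to the edge of chaos is usually controlled by rescaling the random recurrent weight matrix to a chosen spectral radius, with values near 1 sitting close to the stability boundary. The sketch below builds such a reservoir in NumPy and drives it with a random input stream; the sizes, the 0.95 spectral radius and the input scaling are assumed for illustration and are not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n_res, n_in, T = 200, 1, 500                   # reservoir size, input dimension, stream length

    # Random reservoir, rescaled to a spectral radius just below 1 (near the edge of chaos).
    W = rng.standard_normal((n_res, n_res))
    W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

    # Online computation on an input stream: drive the reservoir and collect states.
    u = rng.standard_normal((T, n_in))
    x = np.zeros(n_res)
    states = np.zeros((T, n_res))
    for t in range(T):
        x = np.tanh(W @ x + W_in @ u[t])
        states[t] = x

    # A linear readout (e.g. ridge regression) would normally be trained on `states`.
    print("collected reservoir state matrix:", states.shape)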
Neural responses to maternal criticism in healthy youth
Siegle, Greg J.; Dahl, Ronald E.; Hooley, Jill M.; Silk, Jennifer S.
2015-01-01
Parental criticism can have positive and negative effects on children’s and adolescents’ behavior; yet, it is unclear how youth react to, understand and process parental criticism. We proposed that youth would engage three sets of neural processes in response to parental criticism including the following: (i) activating emotional reactions, (ii) regulating those reactions and (iii) social cognitive processing (e.g. understanding the parent’s mental state). To examine neural processes associated with both emotional and social processing of parental criticism in personally relevant and ecologically valid social contexts, typically developing youth were scanned while they listened to their mother providing critical, praising and neutral statements. In response to maternal criticism, youth showed increased brain activity in affective networks (e.g. subcortical–limbic regions including lentiform nucleus and posterior insula), but decreased activity in cognitive control networks (e.g. dorsolateral prefrontal cortex and caudal anterior cingulate cortex) and social cognitive networks (e.g. temporoparietal junction and posterior cingulate cortex/precuneus). These results suggest that youth may respond to maternal criticism with increased emotional reactivity but decreased cognitive control and social cognitive processing. A better understanding of children’s responses to parental criticism may provide insights into the ways that parental feedback can be modified to be more helpful to behavior and development in youth. PMID:25338632
Stabilization of burn conditions in a thermonuclear reactor using artificial neural networks
NASA Astrophysics Data System (ADS)
Vitela, Javier E.; Martinell, Julio J.
1998-02-01
In this work we develop an artificial neural network (ANN) for the feedback stabilization of a thermonuclear reactor at nearly ignited burn conditions. A volume-averaged zero-dimensional nonlinear model is used to represent the time evolution of the electron density, the relative density of alpha particles and the temperature of the plasma, where a particular scaling law for the energy confinement time, previously used by other authors, was adopted. The control actions include the concurrent modulation of the D-T refuelling rate, the injection of a neutral He-4 beam and the modulation of an auxiliary heating power, all of which are constrained to take values between minimum and maximum levels. For this purpose a feedforward multilayer artificial neural network with sigmoidal activation functions is trained using a back-propagation-through-time technique. Numerical examples are used to illustrate the behaviour of the resulting ANN-dynamical system configuration. It is concluded that the resulting ANN can successfully stabilize the nonlinear model of the thermonuclear reactor at nearly ignited conditions for temperature and density departures significantly far from their nominal operating values. The ANN-dynamical system configuration is shown to be robust with respect to the thermalization time of the alpha particles for perturbations within the region used to train the ANN.
Design strategies for dynamic closed-loop optogenetic neurocontrol in vivo
NASA Astrophysics Data System (ADS)
Bolus, M. F.; Willats, A. A.; Whitmire, C. J.; Rozell, C. J.; Stanley, G. B.
2018-04-01
Objective. Controlling neural activity enables the possibility of manipulating sensory perception, cognitive processes, and body movement, in addition to providing a powerful framework for functionally disentangling the neural circuits that underlie these complex phenomena. Over the last decade, optogenetic stimulation has become an increasingly important and powerful tool for understanding neural circuit function, owing to the ability to target specific cell types and bidirectionally modulate neural activity. To date, most stimulation has been provided in open-loop or in an on/off closed-loop fashion, where previously-determined stimulation is triggered by an event. Here, we describe and demonstrate a design approach for precise optogenetic control of neuronal firing rate modulation using feedback to guide stimulation continuously. Approach. Using the rodent somatosensory thalamus as an experimental testbed for realizing desired time-varying patterns of firing rate modulation, we utilized a moving average exponential filter to estimate firing rate online from single-unit spiking measured extracellularly. This estimate of instantaneous rate served as feedback for a proportional integral (PI) controller, which was designed during the experiment based on a linear-nonlinear Poisson (LNP) model of the neuronal response to light. Main results. The LNP model fit during the experiment enabled robust closed-loop control, resulting in good tracking of sinusoidal and non-sinusoidal targets, and rejection of unmeasured disturbances. Closed-loop control also enabled manipulation of trial-to-trial variability. Significance. Because neuroscientists are faced with the challenge of dissecting the functions of circuit components, the ability to maintain control of a region of interest in spite of changes in ongoing neural activity will be important for disambiguating function within networks. Closed-loop stimulation strategies are ideal for control that is robust to such changes, and the employment of continuous feedback to adjust stimulation in real-time can improve the quality of data collected using optogenetic manipulation.
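A bare-bones version of the loop described above, an exponentially weighted moving-average estimate of firing rate feeding a PI controller that adjusts the light command toward a target rate, can be sketched as follows. The toy Poisson neuron, the gains, the time constant and the target value are all assumed placeholders rather than the calibrated LNP-based design used in the study.

    import numpy as np

    rng = np.random.default_rng(2)
    dt = 0.001                      # 1 ms time step
    tau = 0.1                       # filter time constant in seconds (assumed)
    alpha = dt / tau
    Kp, Ki = 0.002, 0.02            # assumed PI gains
    target_rate = 40.0              # desired firing rate (Hz)

    rate_est, integral, light = 0.0, 0.0, 0.0
    for step in range(5000):        # 5 s of simulated closed-loop operation
        # Toy neuron: Poisson spiking whose rate increases with the light command.
        true_rate = 5.0 + 200.0 * light
        spike = rng.random() < true_rate * dt

        # Exponential moving-average estimate of the instantaneous firing rate (Hz).
        rate_est += alpha * (spike / dt - rate_est)

        # PI controller drives the estimated rate toward the target; light is bounded.
        error = target_rate - rate_est
        integral += error * dt
        light = float(np.clip(Kp * error + Ki * integral, 0.0, 1.0))

    print("final firing-rate estimate: %.1f Hz (target %.1f Hz)" % (rate_est, target_rate))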
Memory feedback PID control for exponential synchronisation of chaotic Lur'e systems
NASA Astrophysics Data System (ADS)
Zhang, Ruimei; Zeng, Deqiang; Zhong, Shouming; Shi, Kaibo
2017-09-01
This paper studies the problem of exponential synchronisation of chaotic Lur'e systems (CLSs) via memory feedback proportional-integral-derivative (PID) control scheme. First, a novel augmented Lyapunov-Krasovskii functional (LKF) is constructed, which can make full use of the information on time delay and activation function. Second, improved synchronisation criteria are obtained by using new integral inequalities, which can provide much tighter bounds than what the existing integral inequalities can produce. In comparison with existing results, in which only proportional control or proportional derivative (PD) control is used, less conservative results are derived for CLSs by PID control. Third, the desired memory feedback controllers are designed in terms of the solution to linear matrix inequalities. Finally, numerical simulations of Chua's circuit and neural network are provided to show the effectiveness and advantages of the proposed results.
Comparison of two reconfigurable N×N interconnects for a recurrent neural network
NASA Astrophysics Data System (ADS)
Berger, Christoph; Collings, Neil; Pourzand, Ali R.; Volkel, Reinnard
1996-11-01
Two different methods of pattern replication (conventional and interlaced fan-out) have been investigated and experimentally tested in a reconfigurable 5×5 optical interconnect. Similar alignment problems due to imaging errors (field curvature) were observed in both systems. We conclude that of the two methods the interlaced fan-out is better suited to avoid these imaging errors, to reduce system size and to implement an optical feedback loop.
Trainable Gene Regulation Networks with Applications to Drosophila Pattern Formation
NASA Technical Reports Server (NTRS)
Mjolsness, Eric
2000-01-01
This chapter will very briefly introduce and review some computational experiments in using trainable gene regulation network models to simulate and understand selected episodes in the development of the fruit fly, Drosophila melanogaster. For details the reader is referred to the papers introduced below. It will then introduce a new gene regulation network model which can describe promoter-level substructure in gene regulation. As described in chapter 2, gene regulation may be thought of as a combination of cis-acting regulation by the extended promoter of a gene (including all regulatory sequences) by way of the transcription complex, and of trans-acting regulation by the transcription factor products of other genes. If we simplify the cis-action by using a phenomenological model which can be tuned to data, such as a unit or other small portion of an artificial neural network, then the full trans-acting interaction between multiple genes during development can be modelled as a larger network which can again be tuned or trained to data. The larger network will in general need to have recurrent (feedback) connections since at least some real gene regulation networks do. This is the basic modeling approach taken, which describes how a set of recurrent neural networks can be used as a modeling language for multiple developmental processes including gene regulation within a single cell, cell-cell communication, and cell division. Such network models have been called "gene circuits", "gene regulation networks", or "genetic regulatory networks", sometimes without distinguishing the models from the actual modeled systems.
van Duijvenvoorde, Anna C. K.; Bakermans-Kranenburg, Marian J.; Crone, Eveline A.
2016-01-01
Negative social feedback often generates aggressive feelings and behavior. Prior studies have investigated the neural basis of negative social feedback, but the underlying neural mechanisms of aggression regulation following negative social feedback remain largely undiscovered. In the current study, participants viewed pictures of peers with feedback (positive, neutral or negative) to the participant’s personal profile. Next, participants responded to the peer feedback by pressing a button, thereby producing a loud noise toward the peer, as an index of aggression. Behavioral analyses showed that negative feedback led to more aggression (longer noise blasts). Conjunction neuroimaging analyses revealed that both positive and negative feedback were associated with increased activity in the medial prefrontal cortex (PFC) and bilateral insula. In addition, more activation in the right dorsal lateral PFC (dlPFC) during negative feedback vs neutral feedback was associated with shorter noise blasts in response to negative social feedback, suggesting a potential role of dlPFC in aggression regulation, or top-down control over affective impulsive actions. This study demonstrates a role of the dlPFC in the regulation of aggressive social behavior. PMID:26755768
Virtual Proprioception for eccentric training.
LeMoyne, Robert; Mastroianni, Timothy
2017-07-01
Wireless inertial sensors enable quantified feedback, which can be applied to evaluate the efficacy of therapy and rehabilitation. In particular, eccentric training offers a beneficial rehabilitation and strength training strategy. Virtual Proprioception for eccentric training applies real-time feedback from a wireless gyroscope platform enabled through a software application for a smartphone. Virtual Proprioception for eccentric training is applied to the eccentric phase of biceps brachii strength training and contrasted with a biceps brachii strength training scenario without feedback. During the operation of Virtual Proprioception for eccentric training, the intent is not to exceed a prescribed gyroscope signal threshold, based on the real-time presentation of the gyroscope signal, in order to promote the eccentric aspect of the strength training endeavor. The experimental trial data are transmitted wirelessly over an Internet connection as an email attachment for remote post-processing. A feature set is derived from the gyroscope signal for machine learning classification of the two scenarios of Virtual Proprioception real-time feedback for eccentric training and eccentric training without feedback. Considerable classification accuracy is achieved through the application of a multilayer perceptron neural network for distinguishing between the Virtual Proprioception real-time feedback for eccentric training and eccentric training without feedback.
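The classification step can be approximated by computing a small feature vector from each gyroscope trial and feeding it to a multilayer perceptron. The sketch below uses synthetic angular-rate traces and an assumed feature set (mean, standard deviation, maximum and range); it only illustrates the shape of such a pipeline, not the authors' actual features or data.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)

    def gyro_features(signal):
        """Summary features of one angular-rate trace (assumed feature set)."""
        return [signal.mean(), signal.std(), signal.max(), signal.max() - signal.min()]

    def make_trial(with_feedback):
        """Synthetic repetition: feedback-guided eccentric reps have a lower peak angular rate."""
        peak = 1.0 if with_feedback else 2.0
        t = np.linspace(0.0, 1.0, 200)
        return peak * np.sin(np.pi * t) + 0.1 * rng.standard_normal(t.size)

    labels = [1] * 50 + [0] * 50                 # 1 = real-time feedback, 0 = no feedback
    X = np.array([gyro_features(make_trial(bool(lbl))) for lbl in labels])
    y = np.array(labels)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    clf.fit(X_tr, y_tr)
    print("held-out classification accuracy: %.2f" % clf.score(X_te, y_te))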
Autaptic effects on synchrony of neurons coupled by electrical synapses
NASA Astrophysics Data System (ADS)
Kim, Youngtae
2017-07-01
In this paper, we numerically study the effects of a special synapse known as an autapse on the synchronization of populations of Morris-Lecar (ML) neurons coupled by electrical synapses. Several configurations of the ML neuronal populations, such as a pair, a ring, or a globally coupled network with and without autapses, are examined. While most of the papers on autaptic effects on synchronization have used networks of neurons with the same spiking rate, we use networks of neurons with different spiking rates. We find that the optimal autaptic coupling strength and the autaptic time delay enhance synchronization in our neural networks. We use phase response curve analysis to explain the enhanced synchronization by autapses. Our findings reveal the important relationship between the intraneuronal feedback loop and the interneuronal coupling.
Yang, Qinmin; Jagannathan, Sarangapani
2012-04-01
In this paper, reinforcement learning state- and output-feedback-based adaptive critic controller designs are proposed by using online approximators (OLAs) for general multi-input multi-output affine unknown nonlinear discrete-time systems in the presence of bounded disturbances. The proposed controller design has two entities, an action network that is designed to produce an optimal signal and a critic network that evaluates the performance of the action network. The critic estimates the cost-to-go function, which is tuned online using recursive equations derived from heuristic dynamic programming. Here, neural networks (NNs) are used both for the action and critic networks, whereas any OLAs, such as radial basis functions, splines, fuzzy logic, etc., can be utilized. For the output-feedback counterpart, an additional NN is designated as the observer to estimate the unavailable system states, and thus, the separation principle is not required. The NN weight tuning laws for the controller schemes are also derived while ensuring uniform ultimate boundedness of the closed-loop system using Lyapunov theory. Finally, the effectiveness of the two controllers is tested in simulation on a pendulum balancing system and a two-link robotic arm system.
Zarate, Jean Mary
2013-01-01
Singing provides a unique opportunity to examine music performance—the musical instrument is contained wholly within the body, thus eliminating the need for creating artificial instruments or tasks in neuroimaging experiments. Here, more than two decades of voice and singing research will be reviewed to give an overview of the sensory-motor control of the singing voice, starting from the vocal tract and leading up to the brain regions involved in singing. Additionally, to demonstrate how sensory feedback is integrated with vocal motor control, recent functional magnetic resonance imaging (fMRI) research on somatosensory and auditory feedback processing during singing will be presented. The relationship between the brain and singing behavior will be explored also by examining: (1) neuroplasticity as a function of various lengths and types of training, (2) vocal amusia due to a compromised singing network, and (3) singing performance in individuals with congenital amusia. Finally, the auditory-motor control network for singing will be considered alongside dual-stream models of auditory processing in music and speech to refine both these theoretical models and the singing network itself. PMID:23761746
Why don't you like me? Midfrontal theta power in response to unexpected peer rejection feedback.
van der Molen, M J W; Dekkers, L M S; Westenberg, P M; van der Veen, F M; van der Molen, M W
2017-02-01
Social connectedness theory posits that the brain processes social rejection as a threat to survival. Recent electrophysiological evidence suggests that midfrontal theta (4-8Hz) oscillations in the EEG provide a window on the processing of social rejection. Here we examined midfrontal theta dynamics (power and inter-trial phase synchrony) during the processing of social evaluative feedback. We employed the Social Judgment paradigm in which 56 undergraduate women (mean age=19.67 years) were asked to communicate their expectancies about being liked vs. disliked by unknown peers. Expectancies were followed by feedback indicating social acceptance vs. rejection. Results revealed a significant increase in EEG theta power to unexpected social rejection feedback. This EEG theta response could be source-localized to brain regions typically reported during activation of the saliency network (i.e., dorsal anterior cingulate cortex, insula, inferior frontal gyrus, frontal pole, and the supplementary motor area). Theta phase dynamics mimicked the behavior of the time-domain averaged feedback-related negativity (FRN) by showing stronger phase synchrony for feedback that was unexpected vs. expected. Theta phase, however, differed from the FRN by also displaying stronger phase synchrony in response to rejection vs. acceptance feedback. Together, this study highlights distinct roles for midfrontal theta power and phase synchrony in response to social evaluative feedback. Our findings contribute to the literature by showing that midfrontal theta oscillatory power is sensitive to social rejection but only when peer rejection is unexpected, and this theta response is governed by a widely distributed neural network implicated in saliency detection and conflict monitoring. Copyright © 2016 Elsevier Inc. All rights reserved.
Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks
Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R.
2015-01-01
The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies. PMID:26496502
Yamashita, Yuichi; Okumura, Tetsu; Okanoya, Kazuo; Tani, Jun
2011-01-01
How the brain learns and generates temporal sequences is a fundamental issue in neuroscience. The production of birdsongs, a process which involves complex learned sequences, provides researchers with an excellent biological model for this topic. The Bengalese finch in particular learns a highly complex song with syntactical structure. The nucleus HVC (HVC), a premotor nucleus within the avian song system, plays a key role in generating the temporal structures of their songs. From lesion studies, the nucleus interfacialis (NIf) projecting to the HVC is considered one of the essential regions that contribute to the complexity of their songs. However, the types of interaction between the HVC and the NIf that can produce complex syntactical songs remain unclear. In order to investigate the function of interactions between the HVC and NIf, we have proposed a neural network model based on previous biological evidence. The HVC is modeled by a recurrent neural network (RNN) that learns to generate temporal patterns of songs. The NIf is modeled as a mechanism that provides auditory feedback to the HVC and generates random noise that feeds into the HVC. The model showed that complex syntactical songs can be replicated by simple interactions between deterministic dynamics of the RNN and random noise. In the current study, the plausibility of the model is tested by the comparison between the changes in the songs of actual birds induced by pharmacological inhibition of the NIf and the changes in the songs produced by the model resulting from modification of parameters representing NIf functions. The efficacy of the model demonstrates that the changes of songs induced by pharmacological inhibition of the NIf can be interpreted as a trade-off between the effects of noise and the effects of feedback on the dynamics of the RNN of the HVC. These facts suggest that the current model provides a convincing hypothesis for the functional role of NIf–HVC interaction. PMID:21559065
Boumans, Tiny; Gobes, Sharon M. H.; Poirier, Colline; Theunissen, Frederic E.; Vandersmissen, Liesbeth; Pintjens, Wouter; Verhoye, Marleen; Bolhuis, Johan J.; Van der Linden, Annemie
2008-01-01
Background Male songbirds learn their songs from an adult tutor when they are young. A network of brain nuclei known as the ‘song system’ is the likely neural substrate for sensorimotor learning and production of song, but the neural networks involved in processing the auditory feedback signals necessary for song learning and maintenance remain unknown. Determining which regions show preferential responsiveness to the bird's own song (BOS) is of great importance because neurons sensitive to self-generated vocalisations could mediate this auditory feedback process. Neurons in the song nuclei and in a secondary auditory area, the caudal medial mesopallium (CMM), show selective responses to the BOS. The aim of the present study is to investigate the emergence of BOS selectivity within the network of primary auditory sub-regions in the avian pallium. Methods and Findings Using blood oxygen level-dependent (BOLD) fMRI, we investigated neural responsiveness to natural and manipulated self-generated vocalisations and compared the selectivity for BOS and conspecific song in different sub-regions of the thalamo-recipient area Field L. Zebra finch males were exposed to conspecific song, BOS and to synthetic variations on BOS that differed in spectro-temporal and/or modulation phase structure. We found significant differences in the strength of BOLD responses between regions L2a, L2b and CMM, but no inter-stimuli differences within regions. In particular, we have shown that the overall signal strength to song and synthetic variations thereof was different within two sub-regions of Field L2: zone L2a was significantly more activated compared to the adjacent sub-region L2b. Conclusions Based on our results we suggest that unlike nuclei in the song system, sub-regions in the primary auditory pallium do not show selectivity for the BOS, but appear to show different levels of activity with exposure to any sound according to their place in the auditory processing stream. PMID:18781203
From network heterogeneities to familiarity detection and hippocampal memory management
Wang, Jane X.; Poe, Gina; Zochowski, Michal
2009-01-01
Hippocampal-neocortical interactions are key to the rapid formation of novel associative memories in the hippocampus and consolidation to long term storage sites in the neocortex. We investigated the role of network correlates during information processing in hippocampal-cortical networks. We found that changes in the intrinsic network dynamics due to the formation of structural network heterogeneities alone act as a dynamical and regulatory mechanism for stimulus novelty and familiarity detection, thereby controlling memory management in the context of memory consolidation. This network dynamic, coupled with an anatomically established feedback between the hippocampus and the neocortex, recovered heretofore unexplained properties of neural activity patterns during memory management tasks which we observed during sleep in multiunit recordings from behaving animals. Our simple dynamical mechanism shows an experimentally matched progressive shift of memory activation from the hippocampus to the neocortex and thus provides the means to achieve an autonomous off-line progression of memory consolidation. PMID:18999453
Padhi, Radhakant; Unnikrishnan, Nishant; Wang, Xiaohua; Balakrishnan, S N
2006-12-01
Even though dynamic programming offers an optimal control solution in a state feedback form, the method is overwhelmed by computational and storage requirements. Approximate dynamic programming implemented with an Adaptive Critic (AC) neural network structure has evolved as a powerful alternative technique that obviates the need for excessive computations and storage requirements in solving optimal control problems. In this paper, an improvement to the AC architecture, called the "Single Network Adaptive Critic (SNAC)", is presented. This approach is applicable to a wide class of nonlinear systems where the optimal control (stationary) equation can be explicitly expressed in terms of the state and costate variables. The selection of this terminology is guided by the fact that it eliminates the use of one neural network (namely the action network) that is part of a typical dual network AC setup. As a consequence, the SNAC architecture offers three potential advantages: a simpler architecture, a lower computational load and elimination of the approximation error associated with the eliminated network. In order to demonstrate these benefits and the control synthesis technique using SNAC, two problems have been solved with the AC and SNAC approaches and their computational performances are compared. One of these problems is a real-life Micro-Electro-Mechanical Systems (MEMS) problem, which demonstrates that the SNAC technique is applicable to complex engineering systems.
Sensory-motor networks involved in speech production and motor control: an fMRI study.
Behroozmand, Roozbeh; Shebek, Rachel; Hansen, Daniel R; Oya, Hiroyuki; Robin, Donald A; Howard, Matthew A; Greenlee, Jeremy D W
2015-04-01
Speaking is one of the most complex motor behaviors developed to facilitate human communication. The underlying neural mechanisms of speech involve sensory-motor interactions that incorporate feedback information for online monitoring and control of produced speech sounds. In the present study, we adopted an auditory feedback pitch perturbation paradigm and combined it with functional magnetic resonance imaging (fMRI) recordings in order to identify brain areas involved in speech production and motor control. Subjects underwent fMRI scanning while they produced a steady vowel sound /a/ (speaking) or listened to the playback of their own vowel production (playback). During each condition, the auditory feedback from vowel production was either normal (no perturbation) or perturbed by an upward (+600 cents) pitch-shift stimulus randomly. Analysis of BOLD responses during speaking (with and without shift) vs. rest revealed activation of a complex network including bilateral superior temporal gyrus (STG), Heschl's gyrus, precentral gyrus, supplementary motor area (SMA), Rolandic operculum, postcentral gyrus and right inferior frontal gyrus (IFG). Performance correlation analysis showed that the subjects produced compensatory vocal responses that significantly correlated with BOLD response increases in bilateral STG and left precentral gyrus. However, during playback, the activation network was limited to cortical auditory areas including bilateral STG and Heschl's gyrus. Moreover, the contrast between speaking vs. playback highlighted a distinct functional network that included bilateral precentral gyrus, SMA, IFG, postcentral gyrus and insula. These findings suggest that speech motor control involves feedback error detection in sensory (e.g. auditory) cortices that subsequently activate motor-related areas for the adjustment of speech parameters during speaking. Copyright © 2015 Elsevier Inc. All rights reserved.
Takeoka, Aya; Vollenweider, Isabel; Courtine, Grégoire; Arber, Silvia
2014-12-18
Spinal cord injuries alter motor function by disconnecting neural circuits above and below the lesion, rendering sensory inputs a primary source of direct external drive to neuronal networks caudal to the injury. Here, we studied mice lacking functional muscle spindle feedback to determine the role of this sensory channel in gait control and locomotor recovery after spinal cord injury. High-resolution kinematic analysis of intact mutant mice revealed proficient execution in basic locomotor tasks but poor performance in a precision task. After injury, wild-type mice spontaneously recovered basic locomotor function, whereas mice with deficient muscle spindle feedback failed to regain control over the hindlimb on the lesioned side. Virus-mediated tracing demonstrated that mutant mice exhibit defective rearrangements of descending circuits projecting to deprived spinal segments during recovery. Our findings reveal an essential role for muscle spindle feedback in directing basic locomotor recovery and facilitating circuit reorganization after spinal cord injury. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian
2017-04-01
This paper proposes the combination of two model-free controller tuning techniques, namely linear virtual reference feedback tuning (VRFT) and nonlinear state-feedback Q-learning, referred to as a new mixed VRFT-Q learning approach. VRFT is first used to find a stabilising feedback controller using input-output experimental data from the process in a model reference tracking setting. Reinforcement Q-learning is next applied in the same setting using input-state experimental data collected under perturbed VRFT to ensure good exploration. The Q-learning controller learned with a batch fitted Q iteration algorithm uses two neural networks, one for the Q-function estimator and one for the controller, respectively. The VRFT-Q learning approach is validated on position control of a two-degrees-of-motion open-loop stable multi-input multi-output (MIMO) aerodynamic system (AS). Extensive simulations for the two independent control channels of the MIMO AS show that the Q-learning controllers clearly improve performance over the VRFT controllers.
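For the Q-learning half of the scheme, batch fitted Q iteration repeatedly regresses a Q-function onto bootstrapped targets built from logged transitions. The sketch below shows the generic algorithm on a hypothetical one-dimensional tracking task with a discretized action grid and a neural-network regressor; the dynamics, reward and network sizes are assumptions and are unrelated to the aerodynamic system used in the paper.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    gamma, n_iterations = 0.9, 10
    actions = np.linspace(-1.0, 1.0, 11)            # discretized candidate actions

    # Hypothetical logged transitions (s, a, r, s'): a noisy first-order plant
    # regulated toward zero, with reward penalizing the next-state deviation.
    N = 1000
    S = rng.uniform(-2.0, 2.0, N)
    A = rng.choice(actions, N)
    S_next = 0.9 * S + 0.5 * A + 0.05 * rng.standard_normal(N)
    R = -S_next ** 2

    q = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=300, random_state=0)
    q.fit(np.column_stack([S, A]), R)               # initialize Q with immediate rewards

    for _ in range(n_iterations):
        # Bellman targets: r + gamma * max over a' of Q(s', a') on the action grid.
        grid = np.column_stack([np.repeat(S_next, actions.size),
                                np.tile(actions, N)])
        q_next = q.predict(grid).reshape(N, actions.size).max(axis=1)
        q.fit(np.column_stack([S, A]), R + gamma * q_next)

    # The "controller" is the greedy policy of the learned Q-function.
    s = 1.5
    q_vals = q.predict(np.column_stack([np.full(actions.size, s), actions]))
    print("greedy action at s = 1.5:", actions[np.argmax(q_vals)])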
Nikolaev, Anton; Zheng, Lei; Wardill, Trevor J; O'Kane, Cahir J; de Polavieja, Gonzalo G; Juusola, Mikko
2009-01-01
Retinal networks must adapt constantly to best present the ever-changing visual world to the brain. Here we test the hypothesis that adaptation is a result of different mechanisms at several synaptic connections within the network. In a companion paper (Part I), we showed that adaptation in the photoreceptors (R1-R6) and large monopolar cells (LMC) of the Drosophila eye improves sensitivity to under-represented signals in seconds by enhancing both the amplitude and frequency distribution of LMCs' voltage responses to repeated naturalistic contrast series. In this paper, we show that such adaptation needs both the light-mediated conductance and feedback-mediated synaptic conductance. A faulty feedforward pathway in histamine receptor mutant flies speeds up the LMC output, mimicking extreme light adaptation. A faulty feedback pathway from L2 LMCs to photoreceptors slows down the LMC output, mimicking dark adaptation. These results underline the importance of network adaptation for efficient coding, and as a mechanism for selectively regulating the size and speed of signals in neurons. We suggest that the concerted action of many different mechanisms and neural connections is responsible for adaptation to visual stimuli. Further, our results demonstrate the need for detailed circuit reconstructions like that of the Drosophila lamina, to understand how networks process information.
An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks
Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen
2016-01-01
Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. Besides, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms the traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as is also demonstrated by the comprehensive experimental results in this paper. PMID:27044001
A neural network model of ventriloquism effect and aftereffect.
Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro
2012-01-01
Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. The main results are: i) the less localized stimulus is strongly biased toward the more localized stimulus and not vice versa; ii) the amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two reciprocally interconnected unimodal layers may explain the ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding the neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.
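The core computational claim, namely that residual auditory activity at the visual location plus excitatory inter-layer connections biases the decoded sound position toward the visual stimulus, can be illustrated with a drastically simplified steady-state rate model. The tuning widths, the coupling strength and the single feedback pass below are assumptions for illustration and not the published model equations.

    import numpy as np

    deg = np.arange(0, 181, dtype=float)            # spatial axis in degrees

    def population(center, width, gain=1.0):
        """Gaussian population activity centred on a stimulus position."""
        return gain * np.exp(-(deg - center) ** 2 / (2.0 * width ** 2))

    def decode(activity):
        """Centre-of-mass readout of the encoded position."""
        return np.sum(deg * activity) / np.sum(activity)

    aud_pos, vis_pos = 90.0, 100.0                  # spatially discrepant stimuli
    aud = population(aud_pos, width=20.0)           # auditory layer: broad tuning (low reliability)
    vis = population(vis_pos, width=4.0)            # visual layer: sharp tuning (high reliability)

    # One excitatory pass from the visual layer onto the auditory layer.
    w_va = 0.6                                      # assumed cross-modal coupling strength
    aud_biased = aud + w_va * vis

    print("decoded sound position, auditory alone  : %.1f deg" % decode(aud))
    print("decoded sound position, with visual input: %.1f deg" % decode(aud_biased))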
Li, Yang; Oku, Makito; He, Guoguang; Aihara, Kazuyuki
2017-04-01
In this study, a method is proposed that eliminates spiral waves in a locally connected chaotic neural network (CNN) under some simplified conditions, using a dynamic phase space constraint (DPSC) as a control method. In this method, a control signal is constructed from the feedback internal states of the neurons to detect phase singularities based on their amplitude reduction, before modulating a threshold value to truncate the refractory internal states of the neurons and terminate the spirals. Simulations showed that with appropriate parameter settings, the network was directed from a spiral wave state into either a plane wave (PW) state or a synchronized oscillation (SO) state, where the control vanished automatically and left the original CNN model unaltered. Each type of state had a characteristic oscillation frequency, where spiral wave states had the highest, and the intra-control dynamics was dominated by low-frequency components, thereby indicating slow adjustments to the state variables. In addition, the PW-inducing and SO-inducing control processes were distinct, where the former generally had longer durations but smaller average proportions of affected neurons in the network. Furthermore, variations in the control parameter allowed partial selectivity of the control results, which were accompanied by modulation of the control processes. The results of this study broaden the applicability of DPSC to chaos control and they may also facilitate the utilization of locally connected CNNs in memory retrieval and the exploration of traveling wave dynamics in biological neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sarikaya, Duygu; Corso, Jason J; Guru, Khurshid A
2017-07-01
Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill level of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human-robot collaborative surgeries. We propose a solution to the tool detection and localization open problem in RAS video understanding, using a strictly computer vision approach and recent advances in deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To the best of our knowledge, this approach will be the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a region proposal network (RPN) and a multimodal two-stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results, with an average precision of 91% and a mean computation time of 0.1 s per test frame, indicate that our approach is superior to conventionally used methods for medical imaging while also emphasizing the benefits of using RPN for precision and efficiency. We also introduce a new data set, ATLAS Dione, for RAS video understanding. Our data set provides video data of ten surgeons from Roswell Park Cancer Institute, Buffalo, NY, USA, performing six different surgical tasks on the daVinci Surgical System (dVSS) with annotations of robotic tools per frame.
Predicting Reading and Mathematics from Neural Activity for Feedback Learning
ERIC Educational Resources Information Center
Peters, Sabine; Van der Meulen, Mara; Zanolie, Kiki; Crone, Eveline A.
2017-01-01
Although many studies use feedback learning paradigms to study the process of learning in laboratory settings, little is known about their relevance for real-world learning settings such as school. In a large developmental sample (N = 228, 8-25 years), we investigated whether performance and neural activity during a feedback learning task…
Neural Correlates of the Lombard Effect in Primate Auditory Cortex
Eliades, Steven J.
2012-01-01
Speaking is a sensory-motor process that involves constant self-monitoring to ensure accurate vocal production. Self-monitoring of vocal feedback allows rapid adjustment to correct perceived differences between intended and produced vocalizations. One important behavior in vocal feedback control is a compensatory increase in vocal intensity in response to noise masking during vocal production, commonly referred to as the Lombard effect. This behavior requires mechanisms for continuously monitoring auditory feedback during speaking. However, the underlying neural mechanisms are poorly understood. Here we show that when marmoset monkeys vocalize in the presence of masking noise that disrupts vocal feedback, the compensatory increase in vocal intensity is accompanied by a shift in auditory cortex activity toward neural response patterns seen during vocalizations under normal feedback condition. Furthermore, we show that neural activity in auditory cortex during a vocalization phrase predicts vocal intensity compensation in subsequent phrases. These observations demonstrate that the auditory cortex participates in self-monitoring during the Lombard effect, and may play a role in the compensation of noise masking during feedback-mediated vocal control. PMID:22855821
Integration of photoactive and electroactive components with vertical cavity surface emitting lasers
Bryan, R.P.; Esherick, P.; Jewell, J.L.; Lear, K.L.; Olbright, G.R.
1997-04-29
A monolithically integrated optoelectronic device is provided which integrates a vertical cavity surface emitting laser and either a photosensitive or an electrosensitive device either as input or output to the vertical cavity surface emitting laser either in parallel or series connection. Both vertical and side-by-side arrangements are disclosed, and optical and electronic feedback means are provided. Arrays of these devices can be configured to enable optical computing and neural network applications. 9 figs.
Integration of photoactive and electroactive components with vertical cavity surface emitting lasers
Bryan, Robert P.; Esherick, Peter; Jewell, Jack L.; Lear, Kevin L.; Olbright, Gregory R.
1997-01-01
A monolithically integrated optoelectronic device is provided which integrates a vertical cavity surface emitting laser and either a photosensitive or an electrosensitive device either as input or output to the vertical cavity surface emitting laser either in parallel or series connection. Both vertical and side-by-side arrangements are disclosed, and optical and electronic feedback means are provided. Arrays of these devices can be configured to enable optical computing and neural network applications.
Prediction of Sym-H index by NARX neural network from IMF and solar wind data
NASA Astrophysics Data System (ADS)
Cai, L.; Ma, S.-Y.; Liu, R.-S.; Schlegel, K.; Zhou, Y.-L.; Luehr, H.
2009-04-01
Similar to Dst, the Sym-H index is also an indicator of magnetic storm intensity, but with the distinct advantage of higher time resolution. In this study an artificial neural network (ANN) of the Nonlinear Auto Regressive with eXogenous inputs (NARX) type has been developed to predict, for the first time, the Sym-H index from solar wind and IMF parameters. In total 73 great storm events during 1998 to 2006 are used, out of which 67 are selected to train the network, while the other 6 samples, including 2 super-storms, are kept for testing. The newly developed NARX model shows much better capability than the usual BP and Elman networks in Sym-H prediction. When using IMF Bz, By and total B with a history length of 90 minutes, along with solar wind proton density Np and velocity Vsw, as the original external inputs of the ANN to predict the Sym-H index one hour later, the cross-correlation between NARX network predicted and Kyoto observed Sym-H is 0.95 for the 6 test storms as a whole, and even as high as 0.95 and 0.98 respectively for the two super-storms. This excellent performance of the NARX model can mainly be attributed to a feedback from the output neuron, with a suitable length of about 120 min., to the external input. It is this feedback that allows the ring current status to be properly brought into effect in the prediction of the storm-time Sym-H index by our NARX network. Furthermore, different parameter combinations with different history lengths (70 to 120 min.) for IMF and solar wind data as external inputs are examined, along with different numbers of hidden neurons. It is found that the NARX network with 10 hidden units and with a 100 min. history of Bz, Np and Vsw as external inputs provides the best results in Sym-H prediction. In addition, efforts have also been made to predict Sym-H a longer time ahead, showing that the NARX network can predict the Sym-H index 180 min. ahead with a correlation coefficient of 0.94 between predicted and observed Sym-H and an RMSE of less than 19 nT for the 6 test samples.
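The NARX structure described above amounts to feeding the network a window of past exogenous drivers (IMF and solar wind parameters) together with a window of its own past output. A minimal, generic sketch of that input construction with a neural-network regressor is given below on synthetic placeholder series; the window lengths loosely mirror the 90-120 min values quoted in the abstract, but the data, sampling and architecture are not those of the actual model, and the output history fed back here is the observed series (the usual series-parallel training mode).

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    T = 3000                                        # minutes of synthetic 1-min data
    exog = rng.standard_normal((T, 5))              # stand-ins for Bz, By, |B|, Np, Vsw

    # Toy ring-current-like target: slow decay driven by one of the inputs.
    sym_h = np.zeros(T)
    for t in range(1, T):
        sym_h[t] = 0.99 * sym_h[t - 1] - 2.0 * max(exog[t, 0], 0.0)

    L_x, L_y, horizon = 90, 120, 60                 # driver lag, output-feedback lag, lead time (min)
    X, y = [], []
    for t in range(max(L_x, L_y), T - horizon):
        past_exog = exog[t - L_x:t].ravel()         # history window of the five drivers
        past_out = sym_h[t - L_y:t]                 # fed-back (observed) output history
        X.append(np.concatenate([past_exog, past_out]))
        y.append(sym_h[t + horizon])                # value to predict `horizon` minutes ahead
    X, y = np.array(X), np.array(y)

    split = int(0.8 * len(X))
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    print("test correlation: %.2f" % np.corrcoef(pred, y[split:])[0, 1])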
A neural network based artificial vision system for licence plate recognition.
Draghici, S
1997-02-01
This paper presents a neural network based artificial vision system able to analyze the image of a car given by a camera, locate the registration plate and recognize the registration number of the car. The paper describes in detail various practical problems encountered in implementing this particular application and the solutions used to solve them. The main features of the system presented are: controlled stability-plasticity behavior, controlled reliability threshold, both off-line and on-line learning, self-assessment of the output reliability and high reliability based on high-level multiple feedback. The system has been designed using a modular approach. Sub-modules can be upgraded and/or substituted independently, thus making the system potentially suitable for a large variety of vision applications. The OCR engine was designed as an interchangeable plug-in module. This allows the user to choose an OCR engine which is suited to the particular application and to upgrade it easily in the future. At present, there are several versions of this OCR engine. One of them is based on a fully connected feedforward artificial neural network with sigmoidal activation functions. This network can be trained with various training algorithms such as error backpropagation. An alternative OCR engine is based on the constraint-based decomposition (CBD) training architecture. The system has shown the following average performance on real-world data: successful plate location and segmentation about 99%, successful character recognition about 98% and successful recognition of complete registration plates about 80%.
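As a stand-in for the sigmoidal feedforward OCR engine mentioned above, the sketch below trains a small fully connected network with logistic activations on scikit-learn's bundled digit images. It only illustrates the kind of plug-in character classifier meant here, not the authors' network, training data or the constraint-based decomposition variant.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # 8x8 digit images serve as a toy stand-in for segmented plate characters.
    digits = load_digits()
    X_tr, X_te, y_tr, y_te = train_test_split(
        digits.data / 16.0, digits.target, test_size=0.25, random_state=0)

    # Fully connected feedforward network with sigmoidal (logistic) hidden units,
    # trained with a gradient-based variant of error back-propagation.
    ocr = MLPClassifier(hidden_layer_sizes=(64,), activation="logistic",
                        max_iter=1000, random_state=0)
    ocr.fit(X_tr, y_tr)
    print("character recognition accuracy: %.3f" % ocr.score(X_te, y_te))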
NASA Astrophysics Data System (ADS)
Hsiao, Feng-Hsiag
2016-10-01
In this study, a novel approach via improved genetic algorithm (IGA)-based fuzzy observer is proposed to realise exponential optimal H∞ synchronisation and secure communication in multiple time-delay chaotic (MTDC) systems. First, an original message is inserted into the MTDC system. Then, a neural-network (NN) model is employed to approximate the MTDC system. Next, a linear differential inclusion (LDI) state-space representation is established for the dynamics of the NN model. Based on this LDI state-space representation, this study proposes a delay-dependent exponential stability criterion derived in terms of Lyapunov's direct method, thus ensuring that the trajectories of the slave system approach those of the master system. Subsequently, the stability condition of this criterion is reformulated into a linear matrix inequality (LMI). Due to GA's random global optimisation search capabilities, the lower and upper bounds of the search space can be set so that the GA will seek better fuzzy observer feedback gains, accelerating feedback gain-based synchronisation via the LMI-based approach. IGA, which exhibits better performance than traditional GA, is used to synthesise a fuzzy observer to not only realise the exponential synchronisation, but also achieve optimal H∞ performance by minimizing the disturbance attenuation level and recovering the transmitted message. Finally, a numerical example with simulations is given in order to demonstrate the effectiveness of our approach.
Control system of hexacopter using color histogram footprint and convolutional neural network
NASA Astrophysics Data System (ADS)
Ruliputra, R. N.; Darma, S.
2017-07-01
The development of unmanned aerial vehicles (UAVs) has been growing rapidly in recent years. Logical reasoning implemented in the program algorithms is needed to make a smart system. By using visual input from a camera, a UAV is able to fly autonomously by detecting a target. However, outdoor use introduces a weakness: changes in the environment can alter the target's color intensity. The color histogram footprint overcomes this problem because it divides color intensity into separate bins, making detection tolerant to slight changes in color intensity. Template matching compares the detection result with a template of the reference image to determine the target position, which is used to position the vehicle in the middle of the target with visual feedback control based on a Proportional-Integral-Derivative (PID) controller. The color histogram footprint method localizes the target by calculating the back projection of its histogram. It has an average success rate of 77% from a distance of 1 meter. The vehicle can position itself in the middle of the target by using visual feedback control with an average positioning time of 73 seconds. Once the hexacopter is in the middle of the target, a Convolutional Neural Network (CNN) classifies a number contained in the target image to determine a task depending on the classified number: landing, yawing, or return to launch. The recognition result shows an optimum success rate of 99.2%.
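A minimal OpenCV sketch of the two building blocks named above, histogram back projection for localisation and a PID correction for visual feedback control, could look like this; the bin counts, gains, and the template_bgr image are illustrative assumptions, not values from the paper.

```python
# Colour-histogram-footprint localisation plus a PID position correction.
# Assumes BGR frames from the camera and a BGR template of the target.
import cv2
import numpy as np

def backproject_target(frame_bgr, template_bgr):
    hsv_f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hsv_t = cv2.cvtColor(template_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_t], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    backproj = cv2.calcBackProject([hsv_f], [0, 1], hist, [0, 180, 0, 256], scale=1)
    m = cv2.moments(backproj)
    cx = m['m10'] / (m['m00'] + 1e-9)   # target centroid (x)
    cy = m['m01'] / (m['m00'] + 1e-9)   # target centroid (y)
    return cx, cy

def pid_step(error, state, kp=0.4, ki=0.01, kd=0.1, dt=0.05):
    """One PID update toward centering the target; `state` carries (integral, prev_error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, (integral, error)
```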
Neural dynamics underlying emotional transmissions between individuals
Levit-Binnun, Nava; Hendler, Talma; Lerner, Yulia
2017-01-01
Emotional experiences are frequently shaped by the emotional responses of co-present others. Research has shown that people constantly monitor and adapt to the incoming social–emotional signals, even without face-to-face interaction. And yet, the neural processes underlying such emotional transmissions have not been directly studied. Here, we investigated how the human brain processes emotional cues which arrive from another, co-attending individual. We presented continuous emotional feedback to participants who viewed a movie in the scanner. Participants in the social group (but not in the control group) believed that the feedback was coming from another person who was co-viewing the same movie. We found that social–emotional feedback significantly affected the neural dynamics both in the core affect and in the medial pre-frontal regions. Specifically, the response time-courses in those regions exhibited increased similarity across recipients and increased neural alignment with the timeline of the feedback in the social compared with control group. Taken in conjunction with previous research, this study suggests that emotional cues from others shape the neural dynamics across the whole neural continuum of emotional processing in the brain. Moreover, it demonstrates that interpersonal neural alignment can serve as a neural mechanism through which affective information is conveyed between individuals. PMID:28575520
Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.
Burbank, Kendra S
2015-12-01
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
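As a purely schematic illustration of the mirrored-STDP idea stated above (temporally opposed STDP windows at feedforward versus feedback synapses), consider the following toy sketch; the amplitudes, time constant, and the mirrored_update helper are illustrative assumptions rather than the paper's spiking-network implementation.

```python
# Toy sketch of mirrored STDP (mSTDP). Parameter values are illustrative only.
import numpy as np

def stdp(dt, a_plus=0.005, a_minus=0.005, tau=20.0):
    """Standard pair-based STDP: potentiation when the presynaptic spike
    precedes the postsynaptic one (dt = t_post - t_pre > 0), in ms."""
    return a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)

def mirrored_update(t_input, t_hidden, w_ff, w_fb, lr=1.0):
    """The feedforward synapse input->hidden uses the standard window on
    dt = t_hidden - t_input; the reciprocal feedback synapse hidden->input
    uses the temporally opposed (mirrored) window, which reduces to the same
    condition on dt, so both weights receive matching updates and the
    feedforward and feedback weight matrices stay approximately transposed."""
    dw = stdp(t_hidden - t_input)
    return w_ff + lr * dw, w_fb + lr * dw

# Example: an input spike 5 ms before a hidden spike potentiates both the
# feedforward and the reciprocal feedback synapse by the same amount.
# w_ff, w_fb = mirrored_update(t_input=10.0, t_hidden=15.0, w_ff=0.2, w_fb=0.2)
```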
Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons
Burbank, Kendra S.
2015-01-01
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field’s Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks. PMID:26633645
Remembering forward: Neural correlates of memory and prediction in human motor adaptation
Scheidt, Robert A; Zimbelman, Janice L; Salowitz, Nicole M G; Suminski, Aaron J; Mosier, Kristine M; Houk, James; Simo, Lucia
2011-01-01
We used functional MR imaging (FMRI), a robotic manipulandum and systems identification techniques to examine neural correlates of predictive compensation for spring-like loads during goal-directed wrist movements in neurologically-intact humans. Although load changed unpredictably from one trial to the next, subjects nevertheless used sensorimotor memories from recent movements to predict and compensate upcoming loads. Prediction enabled subjects to adapt performance so that the task was accomplished with minimum effort. Population analyses of functional images revealed a distributed, bilateral network of cortical and subcortical activity supporting predictive load compensation during visual target capture. Cortical regions - including prefrontal, parietal and hippocampal cortices - exhibited trial-by-trial fluctuations in BOLD signal consistent with the storage and recall of sensorimotor memories or “states” important for spatial working memory. Bilateral activations in associative regions of the striatum demonstrated temporal correlation with the magnitude of kinematic performance error (a signal that could drive reward-optimizing reinforcement learning and the prospective scaling of previously learned motor programs). BOLD signal correlations with load prediction were observed in the cerebellar cortex and red nuclei (consistent with the idea that these structures generate adaptive fusimotor signals facilitating cancellation of expected proprioceptive feedback, as required for conditional feedback adjustments to ongoing motor commands and feedback error learning). Analysis of single subject images revealed that predictive activity was at least as likely to be observed in more than one of these neural systems as in just one. We conclude therefore that motor adaptation is mediated by predictive compensations supported by multiple, distributed, cortical and subcortical structures. PMID:21840405
Feedback Synthesizes Neural Codes for Motion.
Clarke, Stephen E; Maler, Leonard
2017-05-08
In senses as diverse as vision, hearing, touch, and the electrosense, sensory neurons receive bottom-up input from the environment, as well as top-down input from feedback loops involving higher brain regions [1-4]. Through connectivity with local inhibitory interneurons, these feedback loops can exert both positive and negative control over fundamental aspects of neural coding, including bursting [5, 6] and synchronous population activity [7, 8]. Here we show that a prominent midbrain feedback loop synthesizes a neural code for motion reversal in the hindbrain electrosensory ON- and OFF-type pyramidal cells. This top-down mechanism generates an accurate bidirectional encoding of object position, despite the inability of the electrosensory afferents to generate a consistent bottom-up representation [9, 10]. The net positive activity of this midbrain feedback is additionally regulated through a hindbrain feedback loop, which reduces stimulus-induced bursting and also dampens the ON and OFF cell responses to interfering sensory input [11]. We demonstrate that synthesis of motion representations and cancellation of distracting signals are mediated simultaneously by feedback, satisfying an accepted definition of spatial attention [12]. The balance of excitatory and inhibitory feedback establishes a "focal" distance for optimized neural coding, whose connection to a classic motion-tracking behavior provides new insight into the computational roles of feedback and active dendrites in spatial localization [13, 14]. Copyright © 2017 Elsevier Ltd. All rights reserved.
Resquín, Francisco; Gonzalez-Vargas, Jose; Ibáñez, Jaime; Brunetti, Fernando; Pons, José Luis
2016-01-01
Hybrid robotic systems represent a novel research field, where functional electrical stimulation (FES) is combined with a robotic device for rehabilitation of motor impairment. Under this approach, the design of robust FES controllers still remains an open challenge. In this work, we aimed at developing a learning FES controller to assist in the performance of reaching movements in a simple hybrid robotic system setting. We implemented a Feedback Error Learning (FEL) control strategy consisting of a feedback PID controller and a feedforward controller based on a neural network. A passive exoskeleton complemented the FES controller by compensating the effects of gravity. We carried out experiments with healthy subjects to validate the performance of the system. Results show that the FEL control strategy is able to adjust the FES intensity to track the desired trajectory accurately without the need of a previous mathematical model. PMID:27990245
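A minimal sketch of the Feedback Error Learning structure described above, a PID feedback controller plus a neural-network feedforward term trained online with the feedback command as its error signal, might look as follows; the plant interface, gains, network size, and the use of scikit-learn's MLPRegressor are assumptions for illustration.

```python
# Feedback Error Learning (FEL) sketch: u = u_ff(NN) + u_fb(PID); the network
# is trained online toward u_ff + u_fb, so the feedback term shrinks as the
# feedforward model improves. Gains and state features are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

class FELController:
    def __init__(self):
        self.nn = MLPRegressor(hidden_layer_sizes=(20,), solver='sgd',
                               learning_rate_init=0.01)
        self.integral, self.prev_err, self.trained = 0.0, 0.0, False

    def command(self, reference, measurement, dt=0.01, kp=2.0, ki=0.5, kd=0.1):
        err = reference - measurement
        self.integral += err * dt
        u_fb = kp * err + ki * self.integral + kd * (err - self.prev_err) / dt
        self.prev_err = err
        x = np.array([[reference, measurement]])
        u_ff = float(self.nn.predict(x)[0]) if self.trained else 0.0
        self.nn.partial_fit(x, [u_ff + u_fb])  # feedback command = training error signal
        self.trained = True
        return u_ff + u_fb                     # total FES intensity command
```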
Faghihi, Faramarz; Moustafa, Ahmed A.
2015-01-01
Information processing in the hippocampus begins by transferring spiking activity of the entorhinal cortex (EC) into the dentate gyrus (DG). Activity patterns in the EC are separated by the DG, which therefore plays an important role in hippocampal functions including memory. The structural and physiological parameters of these neural networks enable the hippocampus to efficiently encode the large number of inputs that animals receive and process in their lifetime. The neural encoding capacity of the DG depends on the encoding efficiency of its single neurons and on its pattern separation efficiency. In this study, encoding by the DG is modeled such that single-neuron and pattern separation efficiency are measured using simulations over different parameter values. For this purpose, a probabilistic model of single-neuron efficiency is presented to study the role of structural and physiological parameters. The known numbers of neurons in the EC and the DG are used to construct a neural network based on the electrophysiological features of DG granule cells. Separated inputs, represented as activated EC neurons with different firing probabilities, are presented to the DG. For different connectivity rates between the EC and DG, the pattern separation efficiency of the DG is measured. The results show that in the absence of feedback inhibition on the DG neurons, the DG demonstrates low separation efficiency and high firing frequency. Feedback inhibition can increase separation efficiency while resulting in very low single-neuron encoding efficiency in the DG and very low firing frequency of DG neurons (sparse spiking). This work presents a mechanistic explanation for experimental observations in the hippocampus, in combination with theoretical measures. Moreover, the model predicts a critical role for impaired inhibitory neurons in schizophrenia, where deficiency in pattern separation of the DG has been observed. PMID:25859189
Xie, Jiaheng; Liu, Xiao; Dajun Zeng, Daniel
2018-01-01
Recent years have seen increased worldwide popularity of e-cigarette use. However, the risks of e-cigarettes are underexamined. Most e-cigarette adverse event studies have achieved low detection rates due to limited subject sample sizes in the experiments and surveys. Social media provides a large data repository of consumers' e-cigarette feedback and experiences, which are useful for e-cigarette safety surveillance. However, it is difficult to automatically interpret the informal and nontechnical consumer vocabulary about e-cigarettes in social media. This issue hinders the use of social media content for e-cigarette safety surveillance. Recent developments in deep neural network methods have shown promise for named entity extraction from noisy text. Motivated by these observations, we aimed to design a deep neural network approach to extract e-cigarette safety information in social media. Our deep neural language model utilizes word embedding as the representation of text input and recognizes named entity types with the state-of-the-art Bidirectional Long Short-Term Memory (Bi-LSTM) Recurrent Neural Network. Our Bi-LSTM model achieved the best performance compared to 3 baseline models, with a precision of 94.10%, a recall of 91.80%, and an F-measure of 92.94%. We identified 1591 unique adverse events and 9930 unique e-cigarette components (ie, chemicals, flavors, and devices) from our research testbed. Although the conditional random field baseline model had slightly better precision than our approach, our Bi-LSTM model achieved much higher recall, resulting in the best F-measure. Our method can be generalized to extract medical concepts from social media for other medical applications. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
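A compact sketch of the kind of Bi-LSTM sequence tagger described above (word embeddings in, per-token entity labels out) is shown below; the vocabulary size, tag set, layer sizes, and training arrays are placeholders rather than the authors' configuration, and pretrained embeddings and CRF-style decoding are omitted.

```python
# Hedged Bi-LSTM named-entity tagger sketch using tf.keras.
import tensorflow as tf

VOCAB_SIZE, EMBED_DIM, N_TAGS = 20000, 100, 5  # illustrative sizes

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),                       # word embeddings
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(N_TAGS, activation='softmax')),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# token_id_sequences: (n_posts, max_len) padded token ids
# tag_id_sequences:   (n_posts, max_len) per-token entity labels
# model.fit(token_id_sequences, tag_id_sequences, epochs=5, batch_size=32)
```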
Li, YuHui; Jin, FeiTeng
2017-01-01
The inversion design approach is a very useful tool for the complex multiple-input-multiple-output nonlinear systems to implement the decoupling control goal, such as the airplane model and spacecraft model. In this work, the flight control law is proposed using the neural-based inversion design method associated with the nonlinear compensation for a general longitudinal model of the airplane. First, the nonlinear mathematic model is converted to the equivalent linear model based on the feedback linearization theory. Then, the flight control law integrated with this inversion model is developed to stabilize the nonlinear system and relieve the coupling effect. Afterwards, the inversion control combined with the neural network and nonlinear portion is presented to improve the transient performance and attenuate the uncertain effects on both external disturbances and model errors. Finally, the simulation results demonstrate the effectiveness of this controller. PMID:29410680
Banis, Stella; Geerligs, Linda; Lorist, Monicque M.
2014-01-01
Sex-specific prevalence rates in mental and physical disorders may be partly explained by sex differences in physiological stress responses. Neural networks that might be involved are those underlying feedback processing. Aim of the present EEG study was to investigate whether acute stress alters feedback processing, and whether stress effects differ between men and women. Male and female participants performed a gambling task, in a control and a stress condition. Stress was induced by exposing participants to a noise stressor. Brain activity was analyzed using both event-related potential and time-frequency analyses, measuring the feedback-related negativity (FRN) and feedback-related changes in theta and beta oscillatory power, respectively. While the FRN and feedback-related theta power were similarly affected by stress induction in both sexes, feedback-related beta power depended on the combination of stress induction condition and sex. FRN amplitude and theta power increases were smaller in the stress relative to the control condition in both sexes, demonstrating that acute noise stress impairs performance monitoring irrespective of sex. However, in the stress but not in the control condition, early lower beta-band power increases were larger for men than women, indicating that stress effects on feedback processing are partly sex-dependent. Our findings suggest that sex-specific effects on feedback processing may comprise a factor underlying sex-specific stress responses. PMID:24755943
Upper Torso Control for HOAP-2 Using Neural Networks
NASA Technical Reports Server (NTRS)
Sandoval, Steven P.
2005-01-01
Humanoid robots have physical builds and motion patterns similar to those of humans. Not only does this provide a suitable operating environment for the humanoid, but it also opens up many research doors on how humans function. The overall objective is to replace humans operating in unsafe environments. A first target application is assembly of structures for future lunar-planetary bases. The initial development platform is a Fujitsu HOAP-2 humanoid robot. The goal of the project is to demonstrate the capability of a HOAP-2 to autonomously construct a cubic frame using provided tubes and joints. This task will require the robot to identify several items, pick them up, transport them to the build location, then properly assemble the structure. The ability to grasp and assemble the pieces will require improved motor control and the addition of tactile feedback sensors. In recent years, learning-based control has become more and more popular; to implement this method we will be using the Adaptive Neural Fuzzy Inference System (ANFIS). When using neural networks for control, no complex models of the system must be constructed in advance; only input/output relationships are required to model the system.
Optimal control of nonlinear continuous-time systems in strict-feedback form.
Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani
2015-10-01
This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.
Neural mechanisms underlying auditory feedback control of speech
Reilly, Kevin J.; Guenther, Frank H.
2013-01-01
The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech, and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 135 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech. PMID:18035557
Chikara, Rupesh K; Chang, Erik C; Lu, Yi-Chen; Lin, Dar-Shong; Lin, Chin-Teng; Ko, Li-Wei
2018-01-01
A reward or punishment can modulate motivation and emotions, which in turn affect cognitive processing. The present simultaneous functional magnetic resonance imaging-electroencephalography study examines neural mechanisms of response inhibition under the influence of a monetary reward or punishment by implementing a modified stop-signal task in a virtual battlefield scenario. The participants were instructed to play as snipers who open fire at a terrorist target but withhold shooting in the presence of a hostage. The participants performed the task under three different feedback conditions in counterbalanced order: a reward condition where each successfully withheld response added a bonus (i.e., positive feedback) to the startup credit, a punishment condition where each failure in stopping deduced a penalty (i.e., negative feedback), and a no-feedback condition where response outcome had no consequences and served as a control setting. Behaviorally both reward and punishment conditions led to significantly down-regulated inhibitory function in terms of the critical stop-signal delay. As for the neuroimaging results, increased activities were found for the no-feedback condition in regions previously reported to be associated with response inhibition, including the right inferior frontal gyrus and the pre-supplementary motor area. Moreover, higher activation of the lingual gyrus, posterior cingulate gyrus (PCG) and inferior parietal lobule were found in the reward condition, while stronger activation of the precuneus gyrus was found in the punishment condition. The positive feedback was also associated with stronger changes of delta, theta, and alpha synchronization in the PCG than were the negative or no-feedback conditions. These findings depicted the intertwining relationship between response inhibition and motivation networks.
PDF Signaling Is an Integral Part of the Drosophila Circadian Molecular Oscillator.
Mezan, Shaul; Feuz, Jean Daniel; Deplancke, Bart; Kadener, Sebastian
2016-10-11
Circadian clocks generate 24-hr rhythms in physiology and behavior. Despite numerous studies, it is still uncertain how circadian rhythms emerge from their molecular and neural constituents. Here, we demonstrate a tight connection between the molecular and neuronal circadian networks. Using fluorescent transcriptional reporters in a Drosophila ex vivo brain culture system, we identified a reciprocal negative regulation between the master circadian regulator CLK and expression of pdf, the main circadian neuropeptide. We show that PDF feedback is required for maintaining normal oscillation pattern in CLK-driven transcription. Interestingly, we found that CLK and neuronal firing suppresses pdf transcription, likely through a common pathway involving the transcription factors DHR38 and SR, establishing a direct link between electric activity and the circadian system. In sum, our work provides evidence for the existence of an uncharacterized CLK-PDF feedback loop that tightly wraps together the molecular oscillator with the circadian neuronal network in Drosophila. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
Foong, Shaohui; Sun, Zhenglong
2016-08-12
In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
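To make the PCA-then-ANN mapping concrete, a minimal sketch using scikit-learn is given below; the number of retained components, network size, and the B_readings/positions arrays are assumptions for illustration, not the configuration used in the paper.

```python
# PCA as a dimensionality-reducing front end followed by an MLP that regresses
# position from the reduced multi-sensor magnetic field measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

# B_readings: (n_samples, 9) field measurements from the 9-sensor network
# positions:  (n_samples,) known actuator positions used for training
field_to_position = make_pipeline(PCA(n_components=4),
                                  MLPRegressor(hidden_layer_sizes=(20, 20),
                                               max_iter=5000))
# field_to_position.fit(B_readings, positions)
# estimated = field_to_position.predict(new_readings)   # used for feedback control
```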
Combined contributions of feedforward and feedback inputs to bottom-up attention
Khorsand, Peyman; Moore, Tirin; Soltani, Alireza
2015-01-01
In order to deal with a large amount of information carried by visual inputs entering the brain at any given point in time, the brain swiftly uses the same inputs to enhance processing in one part of visual field at the expense of the others. These processes, collectively called bottom-up attentional selection, are assumed to solely rely on feedforward processing of the external inputs, as it is implied by the nomenclature. Nevertheless, evidence from recent experimental and modeling studies points to the role of feedback in bottom-up attention. Here, we review behavioral and neural evidence that feedback inputs are important for the formation of signals that could guide attentional selection based on exogenous inputs. Moreover, we review results from a modeling study elucidating mechanisms underlying the emergence of these signals in successive layers of neural populations and how they depend on feedback from higher visual areas. We use these results to interpret and discuss more recent findings that can further unravel feedforward and feedback neural mechanisms underlying bottom-up attention. We argue that while it is descriptively useful to separate feedforward and feedback processes underlying bottom-up attention, these processes cannot be mechanistically separated into two successive stages as they occur at almost the same time and affect neural activity within the same brain areas using similar neural mechanisms. Therefore, understanding the interaction and integration of feedforward and feedback inputs is crucial for better understanding of bottom-up attention. PMID:25784883
Cingulate, Frontal and Parietal Cortical Dysfunction in Attention-Deficit/Hyperactivity Disorder
Bush, George
2011-01-01
Functional and structural neuroimaging have identified abnormalities of the brain that are likely to contribute to the neuropathophysiology of attention-deficit/hyperactivity disorder (ADHD). In particular, hypofunction of the brain regions comprising the cingulo-frontal-parietal (CFP) cognitive-attention network have been consistently observed across studies. These are major components of neural systems that are relevant to ADHD, including cognitive/attention networks, motor systems and reward/feedback-based processing systems. Moreover, these areas interact with other brain circuits that have been implicated in ADHD, such as the “default mode” resting state network. ADHD imaging data related to CFP network dysfunction will be selectively highlighted here to help facilitate its integration with the other information presented in this special issue. Together, these reviews will help shed light on the neurobiology of ADHD. PMID:21489409
The ventral visual pathway: an expanded neural framework for the processing of object quality.
Kravitz, Dwight J; Saleem, Kadharbatcha S; Baker, Chris I; Ungerleider, Leslie G; Mishkin, Mortimer
2013-01-01
Since the original characterization of the ventral visual pathway, our knowledge of its neuroanatomy, functional properties, and extrinsic targets has grown considerably. Here we synthesize this recent evidence and propose that the ventral pathway is best understood as a recurrent occipitotemporal network containing neural representations of object quality both utilized and constrained by at least six distinct cortical and subcortical systems. Each system serves its own specialized behavioral, cognitive, or affective function, collectively providing the raison d'être for the ventral visual pathway. This expanded framework contrasts with the depiction of the ventral visual pathway as a largely serial staged hierarchy culminating in singular object representations and more parsimoniously incorporates attentional, contextual, and feedback effects. Published by Elsevier Ltd.
Panda, Priyadarshini; Roy, Kaushik
2017-01-01
Synaptic plasticity, the foundation for learning and memory formation in the human brain, manifests in various forms. Here, we combine standard spike-timing-correlation-based Hebbian plasticity with a non-Hebbian synaptic decay mechanism for training a recurrent spiking neural model to generate sequences. We show that including the adaptive decay of synaptic weights alongside standard STDP helps learn stable contextual dependencies between temporal sequences, while reducing the strong attractor states that emerge in recurrent models due to feedback loops. Furthermore, we show that the combined learning scheme substantially suppresses the chaotic activity in the recurrent model, thereby enhancing its ability to generate sequences consistently even in the presence of perturbations. PMID:29311774
A neural based intelligent flight control system for the NASA F-15 flight research aircraft
NASA Technical Reports Server (NTRS)
Urnes, James M.; Hoy, Stephen E.; Ladage, Robert N.; Stewart, James
1993-01-01
A flight control concept that can identify aircraft stability properties and continually optimize the aircraft flying qualities has been developed by McDonnell Aircraft Company under a contract with the NASA-Dryden Flight Research Facility. This flight concept, termed the Intelligent Flight Control System, utilizes Neural Network technology to identify the host aircraft stability and control properties during flight, and uses this information to design, on-line, the control system feedback gains to provide continuous optimum flight response. This self-repairing capability can provide high performance flight maneuvering response throughout large flight envelopes, such as needed for the National Aerospace Plane. Moreover, achieving this response early in the vehicle's development schedule will save cost.
Spatiotemporal neural network dynamics for the processing of dynamic facial expressions.
Sato, Wataru; Kochiyama, Takanori; Uono, Shota
2015-07-24
The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150-200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300-350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual-motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions.
A Telescopic Binary Learning Machine for Training Neural Networks.
Brunato, Mauro; Battiti, Roberto
2017-03-01
This paper proposes a new algorithm based on multiscale stochastic local search with binary representation for training neural networks [binary learning machine (BLM)]. We study the effects of neighborhood evaluation strategies, the effect of the number of bits per weight and that of the maximum weight range used for mapping binary strings to real values. Following this preliminary investigation, we propose a telescopic multiscale version of local search, where the number of bits is increased in an adaptive manner, leading to a faster search and to local minima of better quality. An analysis related to adapting the number of bits in a dynamic way is presented. The control on the number of bits, which happens in a natural manner in the proposed method, is effective to increase the generalization performance. The learning dynamics are discussed and validated on a highly nonlinear artificial problem and on real-world tasks in many application domains; BLM is finally applied to a problem requiring either feedforward or recurrent architectures for feedback control.
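The core binary-weight local search idea can be sketched as follows; this is a rough illustration under assumed parameters (bit depth, weight range, sweep count), not the BLM algorithm with its neighborhood evaluation strategies and adaptive telescoping.

```python
# Binary-encoded weights searched by single bit flips: weights are n_bits-bit
# integers mapped into [-w_max, w_max]; a flip is kept only if it lowers the loss.
import numpy as np

def decode(bits, n_bits, w_max):
    ints = bits.reshape(-1, n_bits).dot(1 << np.arange(n_bits)[::-1])
    return (ints / (2**n_bits - 1)) * 2 * w_max - w_max

def bit_flip_search(loss_fn, n_weights, n_bits=4, w_max=5.0, sweeps=20, rng=None):
    rng = rng or np.random.default_rng(0)
    bits = rng.integers(0, 2, n_weights * n_bits)
    best = loss_fn(decode(bits, n_bits, w_max))
    for _ in range(sweeps):
        for i in rng.permutation(bits.size):   # one sweep of single-bit moves
            bits[i] ^= 1
            trial = loss_fn(decode(bits, n_bits, w_max))
            if trial < best:
                best = trial                   # keep the improving flip
            else:
                bits[i] ^= 1                   # revert the rejected flip
    return decode(bits, n_bits, w_max), best
```

In a telescopic variant along the lines described, the weights returned at a coarse bit depth would seed a new search at a larger n_bits, so the resolution of the representation grows as the search narrows in.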
Jerath, Ravinder; Crawford, Molly W.; Barnes, Vernon A.
2015-01-01
The Global Workspace Theory and Information Integration Theory are two of the most currently accepted consciousness models; however, these models do not address many aspects of conscious experience. We compare these models to our previously proposed consciousness model in which the thalamus fills-in processed sensory information from corticothalamic feedback loops within a proposed 3D default space, resulting in the recreation of the internal and external worlds within the mind. This 3D default space is composed of all cells of the body, which communicate via gap junctions and electrical potentials to create this unified space. We use 3D illustrations to explain how both visual and non-visual sensory information may be filled-in within this dynamic space, creating a unified seamless conscious experience. This neural sensory memory space is likely generated by baseline neural oscillatory activity from the default mode network, other salient networks, brainstem, and reticular activating system. PMID:26379573
Spatiotemporal neural network dynamics for the processing of dynamic facial expressions
Sato, Wataru; Kochiyama, Takanori; Uono, Shota
2015-01-01
The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150–200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300–350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual–motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions. PMID:26206708
Peng, Xiao; Wu, Huaiqin; Song, Ka; Shi, Jiaxin
2017-10-01
This paper is concerned with the global Mittag-Leffler synchronization and the synchronization in finite time for fractional-order neural networks (FNNs) with discontinuous activations and time delays. Firstly, the properties with respect to Mittag-Leffler convergence and convergence in finite time, which play a critical role in the investigation of the global synchronization of FNNs, are developed, respectively. Secondly, the novel state-feedback controller, which includes time delays and discontinuous factors, is designed to realize the synchronization goal. By applying the fractional differential inclusion theory, inequality analysis technique and the proposed convergence properties, the sufficient conditions to achieve the global Mittag-Leffler synchronization and the synchronization in finite time are addressed in terms of linear matrix inequalities (LMIs). In addition, the upper bound of the setting time of the global synchronization in finite time is explicitly evaluated. Finally, two examples are given to demonstrate the validity of the proposed design method and theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.
A new simple ∞OH neuron model as a biologically plausible principal component analyzer.
Jankovic, M V
2003-01-01
A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented. A simple neuron model is analyzed. A dynamic neural model, which contains both feed-forward and feedback connections between the input and the output, has been adopted. The proposed learning algorithm could be more correctly named self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule, in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but instead to the presynaptic and averaged value of postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. The usually accepted additional decaying terms for the stabilization of the original Hebbian rule are avoided. Implementation of the basic Hebbian scheme would not lead to unrealistic growth of the synaptic strengths, thanks to the adopted network structure.
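A toy numpy illustration of the described update rule (weight change proportional to presynaptic activity and an averaged postsynaptic activity, with no explicit decay term) is given below; the learning rate, averaging constant, and synthetic input statistics are arbitrary, and the feed-forward/feedback network structure that the paper uses to bound the weights is deliberately omitted.

```python
# Modified Hebbian update: dw ∝ (averaged postsynaptic activity) * (presynaptic input).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 4)) @ np.diag([3.0, 1.0, 0.5, 0.2])  # stationary inputs,
                                                                 # dominant variance on axis 0
w = rng.normal(size=4) * 0.1
y_avg, eta, alpha = 0.0, 1e-4, 0.05
for x in X:
    y = w @ x                                  # postsynaptic activity
    y_avg = (1 - alpha) * y_avg + alpha * y    # running average of postsynaptic activity
    w += eta * y_avg * x                       # modified Hebbian update (no decay term)

# print(w / np.linalg.norm(w))  # tends to align with the dominant input direction
```

Run long enough on stationary inputs, the normalized weight vector tends to align with the input's dominant (principal) direction, which is the behaviour the abstract describes; boundedness of the raw weights relies on the adopted network structure, not shown in this toy.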
De, Suvranu; Deo, Dhannanjay; Sankaranarayanan, Ganesh; Arikatla, Venkata S.
2012-01-01
Background While an update rate of 30 Hz is considered adequate for real time graphics, a much higher update rate of about 1 kHz is necessary for haptics. Physics-based modeling of deformable objects, especially when large nonlinear deformations and complex nonlinear material properties are involved, at these very high rates is one of the most challenging tasks in the development of real time simulation systems. While some specialized solutions exist, there is no general solution for arbitrary nonlinearities. Methods In this work we present PhyNNeSS - a Physics-driven Neural Networks-based Simulation System - to address this long-standing technical challenge. The first step is an off-line pre-computation step in which a database is generated by applying carefully prescribed displacements to each node of the finite element models of the deformable objects. In the next step, the data is condensed into a set of coefficients describing neurons of a Radial Basis Function network (RBFN). During real-time computation, these neural networks are used to reconstruct the deformation fields as well as the interaction forces. Results We present realistic simulation examples from interactive surgical simulation with real time force feedback. As an example, we have developed a deformable human stomach model and a Penrose-drain model used in the Fundamentals of Laparoscopic Surgery (FLS) training tool box. Conclusions A unique computational modeling system has been developed that is capable of simulating the response of nonlinear deformable objects in real time. The method distinguishes itself from previous efforts in that a systematic physics-based pre-computational step allows training of neural networks which may be used in real time simulations. We show, through careful error analysis, that the scheme is scalable, with the accuracy being controlled by the number of neurons used in the simulation. PhyNNeSS has been integrated into SoFMIS (Software Framework for Multimodal Interactive Simulation) for general use. PMID:22629108
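The offline-train / online-evaluate split described above can be sketched with a plain Gaussian RBF network fit by least squares; the center selection, widths, and array shapes below are illustrative assumptions, not the PhyNNeSS implementation.

```python
# Offline: fit a Gaussian RBF expansion to a precomputed database mapping
# prescribed displacements to FEM responses. Online: evaluate it cheaply.
import numpy as np

def rbf_design(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

def train_rbfn(X_db, Y_db, n_centers=50, width=0.05, rng=None):
    """Offline step: X_db = prescribed nodal displacements, Y_db = precomputed responses."""
    rng = rng or np.random.default_rng(0)
    centers = X_db[rng.choice(len(X_db), n_centers, replace=False)]
    Phi = rbf_design(X_db, centers, width)
    W, *_ = np.linalg.lstsq(Phi, Y_db, rcond=None)   # linear output weights
    return centers, W

def evaluate_rbfn(x_query, centers, W, width=0.05):
    """Online step: cheap enough, in principle, for a ~1 kHz haptics loop."""
    return rbf_design(np.atleast_2d(x_query), centers, W.shape and width) @ W if False else \
           rbf_design(np.atleast_2d(x_query), centers, width) @ W
```

In practice the accuracy of such a reconstruction is controlled by the number of centers (neurons), in line with the scalability argument made in the abstract.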
1992-09-01
finding an inverse plant such as was done by Bertrand [BD91] and by Levin, Gewirtzman and Inbar in a binary-type inverse controller [LGI91], to self-tuning ... gain robust control. 2) Self-oscillating adaptive controller. 3) Gain scheduling. 4) Self-tuning. 5) Model-reference adaptive systems. Although the ... of multidimensional systems [CS88] as well as aircraft [HG90]. The self-oscillating method is also a feedback-based mechanism, utilizing a relay in the
Sunlight, Sea Ice, and the Ice Albedo Feedback in a Changing Arctic Sea Ice Cover
2015-11-30
information from the PIOMAS model [J. Zhang], melt pond coverage from MODIS [Rösel et al., 2012], and ice-age estimates [Maslanik et al., 2011] to ... determined from MODIS satellite data using an artificial neural network, Cryosph., 6(2), 431–446, doi:10.5194/tc-6-431-2012. ... from MODIS, and ice-age estimates to this dataset. We have used this extended dataset to build a climatology of the partitioning of solar heat between
Neural correlates of anticipation and processing of performance feedback in social anxiety.
Heitmann, Carina Y; Peterburs, Jutta; Mothes-Lasch, Martin; Hallfarth, Marlit C; Böhme, Stephanie; Miltner, Wolfgang H R; Straube, Thomas
2014-12-01
Fear of negative evaluation, such as negative social performance feedback, is the core symptom of social anxiety. The present study investigated the neural correlates of anticipation and perception of social performance feedback in social anxiety. High (HSA) and low (LSA) socially anxious individuals were asked to give a speech on a personally relevant topic and received standardized but appropriate expert performance feedback in a succeeding experimental session in which neural activity was measured during anticipation and presentation of negative and positive performance feedback concerning the speech performance, or a neutral feedback-unrelated control condition. HSA compared to LSA subjects reported greater anxiety during anticipation of negative feedback. Functional magnetic resonance imaging results showed deactivation of medial prefrontal brain areas during anticipation of negative feedback relative to the control and the positive condition, and medial prefrontal and insular hyperactivation during presentation of negative as well as positive feedback in HSA compared to LSA subjects. The results indicate distinct processes underlying feedback processing during anticipation and presentation of feedback in HSA as compared to LSA individuals. In line with the role of the medial prefrontal cortex in self-referential information processing and the insula in interoception, social anxiety seems to be associated with lower self-monitoring during feedback anticipation, and an increased self-focus and interoception during feedback presentation, regardless of feedback valence. © 2014 Wiley Periodicals, Inc.
van Schie, C C; Chiu, C D; Rombouts, S A R B; Heiser, W J; Elzinga, B M
2018-02-27
The way we view ourselves may play an important role in our responses to interpersonal interactions. In this study, we investigate how feedback valence, consistency of feedback with self-knowledge, and global self-esteem influence affective and neural responses to social feedback. Participants (N = 46) with a wide range of self-esteem levels performed the social feedback task in an MRI scanner. Negative, intermediate and positive feedback was provided, supposedly by another person based on a personal interview. Participants rated their mood and the applicability of the feedback to the self. Trial-by-trial analyses of neural and affective responses are used to incorporate the applicability of individual feedback words. Lower self-esteem related to low mood, especially after receiving non-applicable negative feedback. Higher self-esteem related to increased PCC and precuneus activation (i.e., self-referential processing) for applicable negative feedback. Lower self-esteem related to decreased mPFC, insula, ACC and PCC activation (i.e., self-referential processing) during positive feedback and decreased TPJ activation (i.e., other-referential processing) for applicable positive feedback. Self-esteem and consistency of feedback with self-knowledge appear to guide our affective and neural responses to social feedback. This may be highly relevant for the interpersonal problems faced by individuals with low self-esteem and negative self-views.
Hoshino, Osamu
2006-12-01
Although the anatomical and physiological details of cortical interneurons are well understood, little is known about how they contribute to ongoing spontaneous neuronal activity, which could have a great impact on subsequent neuronal information processing. Simulating a cortical neural network model of an early sensory area, we investigated whether and how two distinct types of inhibitory interneurons, namely fast-spiking interneurons with narrow axonal arbors and slow-spiking interneurons with wide axonal arbors, have a spatiotemporal influence on the ongoing activity of principal cells and on subsequent cognitive information processing. In the model, dynamic cell assemblies, or population activation of principal cells, expressed information about specific sensory features. Within cell assemblies, fast-spiking interneurons exert a feedback inhibitory effect on principal cells. Between cell assemblies, slow-spiking interneurons exert a lateral inhibitory effect on principal cells. Here, we show that these interneurons keep the network at a subthreshold level for action potential generation in the ongoing state, whereby the reaction time of principal cells to sensory stimulation can be accelerated. We suggest that appropriately timed inhibition mediated by fast-spiking and slow-spiking interneurons allows the network to remain near threshold for rapid responses to input.
Mejias, Jorge F; Payeur, Alexandre; Selin, Erik; Maler, Leonard; Longtin, André
2014-01-01
The control of input-to-output mappings, or gain control, is one of the main strategies used by neural networks for the processing and gating of information. Using a spiking neural network model, we studied the gain control induced by a form of inhibitory feedforward circuitry (also known as "open-loop feedback") which has been experimentally observed in a cerebellum-like structure in weakly electric fish. We found, both analytically and numerically, that this network displays three different regimes of gain control: subtractive, divisive, and non-monotonic. Subtractive gain control was obtained when noise was very low in the network. It was also possible to change from divisive to non-monotonic gain control by simply modulating the strength of the feedforward inhibition, which may be achieved via long-term synaptic plasticity. The particular case of divisive gain control has been previously observed in vivo in weakly electric fish. These gain control regimes were robust to the presence of temporal delays in the inhibitory feedforward pathway, which were found to linearize the input-to-output mappings (or f-I curves) via a novel variability-increasing mechanism. Our findings highlight the feedforward-induced gain control analyzed here as a highly versatile mechanism of information gating in the brain.
Samarasinghe, S; Ling, H
In this paper, we show how to extend our previously proposed novel continuous-time Recurrent Neural Network (RNN) approach, which retains the advantage of continuous dynamics offered by Ordinary Differential Equations (ODEs) while enabling parameter estimation through adaptation, to larger signalling networks using a modular approach. Specifically, the signalling network is decomposed into several sub-models based on important temporal events in the network. Each sub-model is represented by the proposed RNN and trained using data generated from the corresponding ODE model. Trained sub-models are assembled into a whole-system RNN which is then subjected to system dynamics and sensitivity analyses. The concept is illustrated by application to the G1/S transition in the cell cycle using the Iwamoto et al. (2008) ODE model. We decomposed the G1/S network into 3 sub-models: (i) E2F transcription factor release; (ii) the E2F and CycE positive feedback loop for elevating cyclin levels; and (iii) E2F and CycA negative feedback to degrade E2F. The trained sub-models accurately represented system dynamics and parameters were in good agreement with the ODE model. The whole-system RNN, however, revealed a couple of parameters contributing to compounding errors due to feedback and required refinement of sub-model 2. These related to the reversible reaction between CycE/CDK2 and its inhibitor p27. The revised whole-system RNN model very accurately matched the dynamics of the ODE system. Local sensitivity analysis of the whole-system model further revealed the dominant influence of the above two parameters in perturbing the G1/S transition, giving support to a recent hypothesis that the release of the inhibitor p27 from the Cyc/CDK complex triggers cell cycle stage transition. To make the model useful in a practical setting, we modified each RNN sub-model with a time relay switch to accommodate input data at larger intervals (≈20 min; the original model used data at 30 s or less) and retrained them, which produced parameters and protein concentrations similar to those of the original RNN system. The results thus demonstrated the reliability of the proposed RNN method for modelling relatively large networks by modularisation for practical settings. Advantages of the method are its ability to represent accurate continuous system dynamics and the ease of: parameter estimation through training with data from a practical setting; model analysis (40% faster than ODE); fine-tuning of parameters when more data are available; sub-model extension when new elements and/or interactions come to light; and model expansion through the addition of sub-models. Copyright © 2017 Elsevier B.V. All rights reserved.
2018-01-01
During active behaviours like running, swimming, whisking or sniffing, motor actions shape sensory input and sensory percepts guide future motor commands. Ongoing cycles of sensory and motor processing constitute a closed-loop feedback system which is central to motor control and, it has been argued, for perceptual processes. This closed-loop feedback is mediated by brainwide neural circuits but how the presence of feedback signals impacts on the dynamics and function of neurons is not well understood. Here we present a simple theory suggesting that closed-loop feedback between the brain/body/environment can modulate neural gain and, consequently, change endogenous neural fluctuations and responses to sensory input. We support this theory with modeling and data analysis in two vertebrate systems. First, in a model of rodent whisking we show that negative feedback mediated by whisking vibrissa can suppress coherent neural fluctuations and neural responses to sensory input in the barrel cortex. We argue this suppression provides an appealing account of a brain state transition (a marked change in global brain activity) coincident with the onset of whisking in rodents. Moreover, this mechanism suggests a novel signal detection mechanism that selectively accentuates active, rather than passive, whisker touch signals. This mechanism is consistent with a predictive coding strategy that is sensitive to the consequences of motor actions rather than the difference between the predicted and actual sensory input. We further support the theory by re-analysing previously published two-photon data recorded in zebrafish larvae performing closed-loop optomotor behaviour in a virtual swim simulator. We show, as predicted by this theory, that the degree to which each cell contributes in linking sensory and motor signals well explains how much its neural fluctuations are suppressed by closed-loop optomotor behaviour. More generally we argue that our results demonstrate the dependence of neural fluctuations, across the brain, on closed-loop brain/body/environment interactions strongly supporting the idea that brain function cannot be fully understood through open-loop approaches alone. PMID:29342146
Buckley, Christopher L; Toyoizumi, Taro
2018-01-01
During active behaviours like running, swimming, whisking or sniffing, motor actions shape sensory input and sensory percepts guide future motor commands. Ongoing cycles of sensory and motor processing constitute a closed-loop feedback system which is central to motor control and, it has been argued, for perceptual processes. This closed-loop feedback is mediated by brainwide neural circuits but how the presence of feedback signals impacts on the dynamics and function of neurons is not well understood. Here we present a simple theory suggesting that closed-loop feedback between the brain/body/environment can modulate neural gain and, consequently, change endogenous neural fluctuations and responses to sensory input. We support this theory with modeling and data analysis in two vertebrate systems. First, in a model of rodent whisking we show that negative feedback mediated by whisking vibrissa can suppress coherent neural fluctuations and neural responses to sensory input in the barrel cortex. We argue this suppression provides an appealing account of a brain state transition (a marked change in global brain activity) coincident with the onset of whisking in rodents. Moreover, this mechanism suggests a novel signal detection mechanism that selectively accentuates active, rather than passive, whisker touch signals. This mechanism is consistent with a predictive coding strategy that is sensitive to the consequences of motor actions rather than the difference between the predicted and actual sensory input. We further support the theory by re-analysing previously published two-photon data recorded in zebrafish larvae performing closed-loop optomotor behaviour in a virtual swim simulator. We show, as predicted by this theory, that the degree to which each cell contributes in linking sensory and motor signals well explains how much its neural fluctuations are suppressed by closed-loop optomotor behaviour. More generally we argue that our results demonstrate the dependence of neural fluctuations, across the brain, on closed-loop brain/body/environment interactions strongly supporting the idea that brain function cannot be fully understood through open-loop approaches alone.
NASA Astrophysics Data System (ADS)
Ortiz, M.; Pinales, J. C.; Graber, H. C.; Wilkinson, J.; Lund, B.
2016-02-01
Melt ponds on sea ice play a significant and complex role in the thermodynamics of the Marginal Ice Zone (MIZ). Ponding reduces the sea ice's ability to reflect sunlight and, in consequence, exacerbates the positive albedo feedback cycle. In order to understand how melt ponds work and their effect on the heat uptake of sea ice, we must first quantify ponds through their seasonal evolution. A semi-supervised neural network three-class learning scheme using gradient descent backpropagation with momentum and an adaptive learning rate is applied to classify melt ponds/melt areas in the Beaufort Sea region. The network uses high-resolution panchromatic satellite images from the MEDEA program, which are collocated with autonomous platform arrays from the Marginal Ice Zone Program, including ice mass-balance buoys, Arctic weather stations and wave buoys. The goal of the study is to capture the spatial variation of melt onset and freeze-up of the ponds within the MIZ, and to gather ponding statistics such as size and concentration. The innovation of this work comes from training the neural network as the melt ponds evolve over time, making the machine learning algorithm time-dependent, which has not been done previously. We will achieve this by analyzing the image histograms, quantifying changes in the minimum and maximum intensities, and linking this to textural variation information in the imagery. We will compare the evolution of the melt ponds against several different array sites on the sea ice to explore whether there are spatial differences among the separated platforms in the MIZ.
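The classifier itself is not reproducible here, but the named training scheme can be sketched on stand-in data. The following Python sketch uses random patch histograms in place of the MEDEA imagery; the three surface classes, the momentum term, and the rule for growing or shrinking the learning rate are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(10)

# Stand-in training set: 16-bin intensity histograms of image patches, labelled
# open water (0), melt pond (1) or bare ice (2) according to where the mass sits.
N, BINS, CLASSES = 600, 16, 3
X = rng.random((N, BINS))
y = np.argmax([X[:, :5].sum(1), X[:, 5:11].sum(1), X[:, 11:].sum(1)], axis=0)
Y = np.eye(CLASSES)[y]                               # one-hot targets

W = rng.standard_normal((BINS, CLASSES)) * 0.1       # softmax classifier weights
velocity = np.zeros_like(W)
lr, momentum, prev_loss = 0.1, 0.9, np.inf

for epoch in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    loss = -np.mean(np.sum(Y * np.log(p + 1e-12), axis=1))
    grad = X.T @ (p - Y) / N
    velocity = momentum * velocity - lr * grad       # gradient descent with momentum ...
    W += velocity
    # ... and an adaptive learning rate: grow the step while the loss falls, shrink it otherwise.
    lr = min(lr * 1.05, 1.0) if loss < prev_loss else lr * 0.7
    prev_loss = loss

print("final cross-entropy:", round(float(loss), 3))
```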
Predicting reading and mathematics from neural activity for feedback learning.
Peters, Sabine; Van der Meulen, Mara; Zanolie, Kiki; Crone, Eveline A
2017-01-01
Although many studies use feedback learning paradigms to study the process of learning in laboratory settings, little is known about their relevance for real-world learning settings such as school. In a large developmental sample (N = 228, 8-25 years), we investigated whether performance and neural activity during a feedback learning task predicted reading and mathematics performance 2 years later. The results indicated that feedback learning performance predicted both reading and mathematics performance. Activity during feedback learning in left superior dorsolateral prefrontal cortex (DLPFC) predicted reading performance, whereas activity in presupplementary motor area/anterior cingulate cortex (pre-SMA/ACC) predicted mathematical performance. Moreover, left superior DLPFC and pre-SMA/ACC activity predicted unique variance in reading and mathematics ability over behavioral testing of feedback learning performance alone. These results provide valuable insights into the relationship between laboratory-based learning tasks and learning in school settings, and the value of neural assessments for prediction of school performance over behavioral testing alone. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Bhaskar, A. T.; Vichare, G.
2017-12-01
Here, an attempt is made to develop a prediction model for the SYMH and ASYH geomagnetic indices using an Artificial Neural Network (ANN). The SYMH and ASYH indices represent the longitudinally symmetric and asymmetric components of the ring current. The ring current state depends on its past conditions; therefore, it is necessary to consider its history for prediction. To account for this effect, a Nonlinear Autoregressive Network with eXogenous inputs (NARX) is implemented. This network considers an input history of 30 minutes and output feedback of 120 minutes. Solar wind parameters, mainly velocity, density, and the interplanetary magnetic field, are used as inputs. SYMH and ASYH indices during geomagnetic storms of 1998-2013 with minimum SYMH < -85 nT are used as the targets for training two independent networks. We present the prediction of SYMH and ASYH indices during 9 geomagnetic storms of solar cycle 24, including the largest recent storm, which occurred on St. Patrick's Day 2015. The present prediction model reproduces the entire time profile of the SYMH and ASYH indices, along with small variations of 10-30 minutes, to a good extent within the noise level, indicating a significant contribution of interplanetary sources and the past state of the magnetosphere. However, during the main phase of major storms, residuals (observed-modeled) are found to be large, suggesting the influence of internal factors such as magnetospheric processes.
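A minimal NumPy sketch of the NARX-style feedback structure described above is given below, using toy solar-wind inputs. The 30-minute input history and 120-minute output feedback windows follow the abstract, while the single hidden layer, its size, and the random (untrained) weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy solar-wind drivers at 1-minute cadence: velocity, density, IMF component.
T = 500
drivers = rng.standard_normal((T, 3))

IN_LAG, FB_LAG, HIDDEN = 30, 120, 16    # 30 min of inputs, 120 min of output feedback

# Randomly initialised single-hidden-layer NARX network (training omitted for brevity).
n_features = IN_LAG * drivers.shape[1] + FB_LAG
W1 = rng.standard_normal((n_features, HIDDEN)) * 0.1
W2 = rng.standard_normal(HIDDEN) * 0.1

def narx_step(driver_hist, output_hist):
    """One prediction step from lagged exogenous inputs and fed-back past outputs."""
    x = np.concatenate([driver_hist.ravel(), output_hist])
    h = np.tanh(x @ W1)
    return h @ W2

# Closed-loop prediction: the network's own past outputs are fed back as inputs.
pred = np.zeros(T)
for t in range(FB_LAG, T):
    pred[t] = narx_step(drivers[t - IN_LAG:t], pred[t - FB_LAG:t])

print(pred[-5:])
```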
Neural correlates of prosocial peer influence on public goods game donations during adolescence.
Van Hoorn, Jorien; Van Dijk, Eric; Güroğlu, Berna; Crone, Eveline A
2016-06-01
A unique feature of adolescent social re-orientation is heightened sensitivity to peer influence when taking risks. However, positive peer influence effects are not yet well understood. The present fMRI study tested a novel hypothesis by examining neural correlates of prosocial peer influence on donation decisions in adolescence. Participants (age 12-16 years; N = 61) made decisions in anonymous groups about the allocation of tokens between themselves and the group in a public goods game. Two spectator groups of same-age peers (in fact youth actors) were allegedly online during some of the decisions. The task had a within-subjects design with three conditions: (1) Evaluation: spectators evaluated decisions with likes for large donations to the group; (2) Spectator: spectators were present but no evaluative feedback was displayed; and (3) Alone: no spectators or feedback. Results showed that prosocial behavior increased in the presence of peers, and even more when participants received evaluative feedback from peers. Peer presence resulted in enhanced activity in several social brain regions, including the medial prefrontal cortex, temporoparietal junction (TPJ), precuneus, and superior temporal sulcus. TPJ activity correlated with donations, which suggests similar networks for prosocial behavior and sensitivity to peers. These findings highlight the importance of peers in fostering prosocial development throughout adolescence. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Learning from ISS-modular adaptive NN control of nonlinear strict-feedback systems.
Wang, Cong; Wang, Min; Liu, Tengfei; Hill, David J
2012-10-01
This paper studies learning from adaptive neural control (ANC) for a class of nonlinear strict-feedback systems with unknown affine terms. To achieve the purpose of learning, a simple input-to-state stability (ISS) modular ANC method is first presented to ensure the boundedness of all the signals in the closed-loop system and the convergence of tracking errors in finite time. Subsequently, it is proven that learning with the proposed stable ISS-modular ANC can be achieved. The cascade structure and unknown affine terms of the considered systems make it very difficult to achieve learning using existing methods. To overcome these difficulties, the stable closed-loop system in the control process is decomposed into a series of linear time-varying (LTV) perturbed subsystems with the appropriate state transformation. Using a recursive design, the partial persistent excitation condition for the radial basis function neural network (NN) is established, which guarantees exponential stability of LTV perturbed subsystems. Consequently, accurate approximation of the closed-loop system dynamics is achieved in a local region along recurrent orbits of closed-loop signals, and learning is implemented during a closed-loop feedback control process. The learned knowledge is reused to achieve stability and an improved performance, thereby avoiding the tremendous repeated training process of NNs. Simulation studies are given to demonstrate the effectiveness of the proposed method.
Wang, Cheng-Te; Lee, Chung-Ting; Wang, Xiao-Jing; Lo, Chung-Chuan
2013-01-01
Recent physiological studies have shown that neurons in various regions of the central nervous system continuously receive noisy excitatory and inhibitory synaptic inputs in a balanced and covaried fashion. While this balanced synaptic input (BSI) is typically described in terms of maintaining the stability of neural circuits, a number of experimental and theoretical studies have suggested that BSI plays a proactive role in brain functions such as top-down modulation for executive control. Two issues have remained unclear in this picture. First, given the noisy nature of neuronal activities in neural circuits, how do the modulatory effects change if the top-down control implements BSI with different ratios between inhibition and excitation? Second, how is a top-down BSI realized via only excitatory long-range projections in the neocortex? To address the first issue, we systematically tested how the inhibition/excitation ratio affects the accuracy and reaction times of a spiking neural circuit model of perceptual decision. We defined an energy function to characterize the network dynamics, and found that different ratios modulate the energy function of the circuit differently and form two distinct functional modes. To address the second issue, we tested BSI with long-distance projection to inhibitory neurons that are either feedforward or feedback, depending on whether these inhibitory neurons do or do not receive inputs from local excitatory cells, respectively. We found that BSI occurs in both cases. Furthermore, when relying on feedback inhibitory neurons, through the recurrent interactions inside the circuit, BSI dynamically and automatically speeds up the decision by gradually reducing its inhibitory component in the course of a trial when a decision process takes too long. PMID:23626812
Chen, Liang; Xue, Wei; Tokuda, Naoyuki
2010-08-01
In many pattern classification/recognition applications of artificial neural networks, an object to be classified is represented by a fixed-size 2-dimensional array of uniform type, which corresponds to the cells of a 2-dimensional grid of the same size. A general neural network structure, called an undistricted neural network, which takes all the elements in the array as inputs, could be used for problems such as these. However, a districted neural network can be used to reduce the training complexity. A districted neural network usually consists of two levels of sub-neural networks. Each of the lower level neural networks, called a regional sub-neural network, takes the elements in a region of the array as its inputs and is expected to output a temporary class label, called an individual opinion, based on the partial information of the entire array. The higher level neural network, called an assembling sub-neural network, uses the outputs (opinions) of regional sub-neural networks as inputs, and by consensus derives the label decision for the object. Each of the sub-neural networks can be trained separately and thus the training is less expensive. The regional sub-neural networks can be trained and run in parallel and independently; therefore, a high speed can be achieved. We prove theoretically in this paper, using a simple model, that a districted neural network is actually more stable than an undistricted neural network in noisy environments. We conjecture that the result is valid for all neural networks. This theory is verified by experiments involving gender classification and human face recognition. We conclude that a districted neural network is highly recommended for neural network applications in recognition or classification of 2-dimensional array patterns in highly noisy environments. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
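The two-level districted structure can be sketched as follows. The 16 x 16 input array, the 4 x 4 regions, and the untrained random weights are illustrative assumptions; the point is only how regional opinions are assembled into a consensus decision.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(n_in, n_out, hidden=8):
    """A tiny untrained two-layer perceptron returned as a closure (weights are random)."""
    W1 = rng.standard_normal((n_in, hidden)) * 0.1
    W2 = rng.standard_normal((hidden, n_out)) * 0.1
    return lambda x: np.tanh(np.tanh(x @ W1) @ W2)

# A 16x16 input array split into a 4x4 grid of 4x4 regions.
REGION, GRID = 4, 4
regional_nets = [mlp(REGION * REGION, 1) for _ in range(GRID * GRID)]
assembling_net = mlp(GRID * GRID, 2)          # final two-class decision

def districted_forward(image):
    opinions = []
    for r in range(GRID):
        for c in range(GRID):
            patch = image[r*REGION:(r+1)*REGION, c*REGION:(c+1)*REGION].ravel()
            opinions.append(regional_nets[r*GRID + c](patch)[0])   # individual opinion
    return assembling_net(np.array(opinions))                      # consensus label scores

print(districted_forward(rng.standard_normal((16, 16))))
```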
Asymmetric interjoint feedback contributes to postural control of redundant multi-link systems
NASA Astrophysics Data System (ADS)
Bunderson, Nathan E.; Ting, Lena H.; Burkholder, Thomas J.
2007-09-01
Maintaining the postural configuration of a limb such as an arm or leg is a fundamental neural control task that involves the coordination of multiple linked body segments. Biological systems are known to use a complex network of inter- and intra-joint feedback mechanisms arising from muscles, spinal reflexes and higher neuronal structures to stabilize the limbs. While previous work has shown that a small amount of asymmetric heterogenic feedback contributes to the behavior of these systems, a satisfactory functional explanation for this non-conservative feedback structure has not been put forth. We hypothesized that an asymmetric multi-joint control strategy would confer both an energetic and stability advantage in maintaining endpoint position of a kinematically redundant system. We tested this hypothesis by using optimal control models incorporating symmetric versus asymmetric feedback with the goal of maintaining the endpoint location of a kinematically redundant, planar limb. Asymmetric feedback improved endpoint control performance of the limb by 16%, reduced energetic cost by 21% and increased interjoint coordination by 40% compared to the symmetric feedback system. The overall effect of the asymmetry was that proximal joint motion resulted in greater torque generation at distal joints than vice versa. The asymmetric organization is consistent with heterogenic stretch reflex gains measured experimentally. We conclude that asymmetric feedback has a functionally relevant role in coordinating redundant degrees of freedom to maintain the position of the hand or foot.
Interaction in Spoken Word Recognition Models: Feedback Helps.
Magnuson, James S; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D
2018-01-01
Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as with it. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.
van Schie, Charlotte C; Chiu, Chui-De; Rombouts, Serge A R B; Heiser, Willem J; Elzinga, Bernet M
2018-01-01
The way we view ourselves may play an important role in our responses to interpersonal interactions. In this study, we investigate how feedback valence, consistency of feedback with self-knowledge and global self-esteem influence affective and neural responses to social feedback. Participants (N = 46) with a wide range of self-esteem levels performed the social feedback task in an MRI scanner. Negative, intermediate and positive feedback was provided, supposedly by another person based on a personal interview. Participants rated their mood and the applicability of the feedback to the self. Trial-by-trial analyses of neural and affective responses were used to incorporate the applicability of individual feedback words. Lower self-esteem was related to lower mood, especially after receiving non-applicable negative feedback. Higher self-esteem was related to increased posterior cingulate cortex and precuneus activation (i.e. self-referential processing) for applicable negative feedback. Lower self-esteem was related to decreased medial prefrontal cortex, insula, anterior cingulate cortex and posterior cingulate cortex activation (i.e. self-referential processing) during positive feedback and decreased temporoparietal junction activation (i.e. other-referential processing) for applicable positive feedback. Self-esteem and consistency of feedback with self-knowledge appear to guide our affective and neural responses to social feedback. This may be highly relevant for the interpersonal problems that individuals with low self-esteem and negative self-views face. PMID:29490088
Using artificial neural networks to constrain the halo baryon fraction during reionization
NASA Astrophysics Data System (ADS)
Sullivan, David; Iliev, Ilian T.; Dixon, Keri L.
2018-01-01
Radiative feedback from stars and galaxies has been proposed as a potential solution to many of the tensions with simplistic galaxy formation models based on Λ cold dark matter, such as the faint end of the ultraviolet (UV) luminosity function. The total energy budget of radiation could exceed that of galactic winds and supernovae combined, which has driven the development of sophisticated algorithms that evolve both the radiation field and the hydrodynamical response of gas simultaneously, in a cosmological context. We probe self-feedback on galactic scales using the adaptive mesh refinement, radiative transfer, hydrodynamics, and N-body code RAMSES-RT. Unlike previous studies which assume a homogeneous UV background, we self-consistently evolve both the radiation field and gas to constrain the halo baryon fraction during cosmic reionization. We demonstrate that the characteristic halo mass with mean baryon fraction half the cosmic mean, Mc(z), shows very little variation as a function of mass-weighted ionization fraction. Furthermore, we find that the inclusion of metal cooling and the ability to resolve scales small enough for self-shielding to become efficient leads to a significant drop in Mc when compared to recent studies. Finally, we develop an artificial neural network that is capable of predicting the baryon fraction of haloes based on recent tidal interactions, gas temperature, and mass-weighted ionization fraction. Such a model can be applied to any reionization history, and trivially incorporated into semi-analytical models of galaxy formation.
Schweighofer, N; Spoelstra, J; Arbib, M A; Kawato, M
1998-01-01
The cerebellum is essential for the control of multijoint movements; when the cerebellum is lesioned, the performance error is greater than the summed errors produced by single joints. In the companion paper (Schweighofer et al., 1998), a functional anatomical model for visually guided arm movement was proposed. The model comprised a basic feedforward/feedback controller with realistic transmission delays and was connected to a two-link, six-muscle, planar arm. In the present study, we examined the role of the cerebellum in reaching movements by embedding a novel, detailed cerebellar neural network in this functional control model. This allowed us to derive realistic cerebellar inputs and to assess the role of the cerebellum in learning to control the arm. This cerebellar network learned the part of the inverse dynamics of the arm not provided by the basic feedforward/feedback controller. Despite realistically low inferior olive firing rates and noisy mossy fibre inputs, the model could reduce the error between intended and planned movements. The responses of the different cell groups were comparable to those of biological cell groups. In particular, the modelled Purkinje cells exhibited directional tuning after learning, and the parallel fibres, due to their length, provide Purkinje cells with the input required for this coordination task. The inferior olive responses contained two different components; the earlier response, locked to movement onset, was always present, whereas the later response disappeared after learning. These results support the theory that the cerebellum is involved in motor learning.
The Role of Corticostriatal Systems in Speech Category Learning
Yi, Han-Gyol; Maddox, W. Todd; Mumford, Jeanette A.; Chandrasekaran, Bharath
2016-01-01
One of the most difficult category learning problems for humans is learning nonnative speech categories. While feedback-based category training can enhance speech learning, the mechanisms underlying these benefits are unclear. In this functional magnetic resonance imaging study, we investigated neural and computational mechanisms underlying feedback-dependent speech category learning in adults. Positive feedback activated a large corticostriatal network including the dorsolateral prefrontal cortex, inferior parietal lobule, middle temporal gyrus, caudate, putamen, and the ventral striatum. Successful learning was contingent upon the activity of domain-general category learning systems: the fast-learning reflective system, involving the dorsolateral prefrontal cortex that develops and tests explicit rules based on the feedback content, and the slow-learning reflexive system, involving the putamen in which the stimuli are implicitly associated with category responses based on the reward value in feedback. Computational modeling of response strategies revealed significant use of reflective strategies early in training and greater use of reflexive strategies later in training. Reflexive strategy use was associated with increased activation in the putamen. Our results demonstrate a critical role for the reflexive corticostriatal learning system as a function of response strategy and proficiency during speech category learning. Keywords: category learning, fMRI, corticostriatal systems, speech, putamen PMID:25331600
Wu, Yen-Chi; Lee, Kyu-Sun; Song, Yan; Gehrke, Stephan; Lu, Bingwei
2017-05-01
Notch (N) signaling is central to the self-renewal of neural stem cells (NSCs) and other tissue stem cells. Its deregulation compromises tissue homeostasis and contributes to tumorigenesis and other diseases. How N regulates stem cell behavior in health and disease is not well understood. Here we show that N regulates the bantam (ban) microRNA to impact cell growth, a process key to NSC maintenance and particularly relied upon by tumor-forming cancer stem cells. Notch signaling directly regulates ban expression at the transcriptional level, and ban in turn feeds back to regulate N activity through negative regulation of the Notch inhibitor Numb. This feedback regulatory mechanism helps maintain the robustness of N signaling activity and NSC fate. Moreover, we show that a Numb-Myc axis mediates the effects of ban on nucleolar and cellular growth independently of or downstream of N. Our results highlight intricate transcriptional as well as translational control mechanisms and feedback regulation in the N signaling network, with important implications for NSC biology and cancer biology.
Motion video compression system with neural network having winner-take-all function
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi (Inventor); Sheu, Bing J. (Inventor)
1997-01-01
A motion video data system includes a compression system comprising an image compressor and a correlative image decompressor whose input is connected to the output of the image compressor. A feedback summing node has one input connected to the output of the image decompressor and feeds a picture memory. Apparatus compares the image stored in the picture memory with a received input image, deduces the pixels that differ between the stored and received images, retrieves from the picture memory a partial image consisting of those pixels only, and applies the partial image to another input of the feedback summing node, so that an updated decompressed image is produced at the output of the feedback summing node. A subtraction node has one input connected to receive the received image and another input connected to receive the partial image, so as to generate a difference image; the image compressor takes this difference image as its input and thereby produces a compressed difference image at its output.
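A minimal sketch of this kind of closed-loop differential coding is shown below. Coarse quantisation stands in for the actual (neural-network-based) image compressor, and the array sizes are illustrative assumptions; the point is the feedback loop that keeps the encoder's picture memory in step with what a decoder would reconstruct.

```python
import numpy as np

def compress(block, step=8):
    """Stand-in for the image compressor: coarse quantisation of the difference image."""
    return np.round(block / step).astype(np.int8)

def decompress(code, step=8):
    """Stand-in for the correlative image decompressor."""
    return code.astype(np.float64) * step

def encode_sequence(frames):
    """Closed-loop differential coding with a picture memory updated by feedback summation."""
    picture_memory = np.zeros_like(frames[0], dtype=np.float64)
    codes = []
    for frame in frames:
        difference = frame - picture_memory                 # subtraction node
        code = compress(difference)                         # compressed difference image
        codes.append(code)
        picture_memory = picture_memory + decompress(code)  # feedback summing node
    return codes

rng = np.random.default_rng(2)
frames = [rng.uniform(0, 255, (4, 4)) for _ in range(3)]
print(encode_sequence(frames)[0])
```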
Neural networks for aircraft control
NASA Technical Reports Server (NTRS)
Linse, Dennis
1990-01-01
Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.
Kortink, Elise D; Weeda, Wouter D; Crowley, Michael J; Gunther Moor, Bregtje; van der Molen, Melle J W
2018-06-01
Monitoring social threat is essential for maintaining healthy social relationships, and recent studies suggest a neural alarm system that governs our response to social rejection. Frontal-midline theta (4-8 Hz) oscillatory power might act as a neural correlate of this system by being sensitive to unexpected social rejection. Here, we examined whether frontal-midline theta is modulated by individual differences in personality constructs sensitive to social disconnection. In addition, we examined the sensitivity of feedback-related brain potentials (i.e., the feedback-related negativity and P3) to social feedback. Sixty-five undergraduate female participants (mean age = 19.69 years) participated in the Social Judgment Paradigm, a fictitious peer-evaluation task in which participants provided expectancies about being liked/disliked by peer strangers. Thereafter, they received feedback signaling social acceptance/rejection. A community structure analysis was employed to delineate personality profiles in our data. Results provided evidence of two subgroups: one group scored high on attachment-related anxiety and fear of negative evaluation, whereas the other group scored high on attachment-related avoidance and low on fear of negative evaluation. In both groups, unexpected rejection feedback yielded a significant increase in theta power. The feedback-related negativity was sensitive to unexpected feedback, regardless of valence, and was largest for unexpected rejection feedback. The feedback-related P3 was significantly enhanced in response to expected social acceptance feedback. Together, these findings confirm the sensitivity of frontal midline theta oscillations to the processing of social threat, and suggest that this alleged neural alarm system behaves similarly in individuals that differ in personality constructs relevant to social evaluation.
The ventral visual pathway: An expanded neural framework for the processing of object quality
Kravitz, Dwight J.; Saleem, Kadharbatcha S.; Baker, Chris I.; Ungerleider, Leslie G.; Mishkin, Mortimer
2012-01-01
Since the original characterization of the ventral visual pathway, our knowledge of its neuroanatomy, functional properties, and extrinsic targets has grown considerably. Here we synthesize this recent evidence and propose that the ventral pathway is best understood as a recurrent occipitotemporal network containing neural representations of object quality both utilized and constrained by at least six distinct cortical and subcortical systems. Each system serves its own specialized behavioral, cognitive, or affective function, collectively providing the raison d'être for the ventral visual pathway. This expanded framework contrasts with the depiction of the ventral visual pathway as a largely serial staged hierarchy that culminates in singular object representations for utilization mainly by ventrolateral prefrontal cortex and, more parsimoniously than this account, incorporates attentional, contextual, and feedback effects. PMID:23265839
Evaluative-feedback stimuli selectively activate the self-related brain area: an fMRI study.
Pan, Xiaohong; Hu, Yang; Li, Lei; Li, Jianqi
2009-11-06
Evaluative feedback, which occurs throughout our daily life, generally contains a subjective appraisal of one's specific abilities and personality characteristics besides objective right-or-wrong information. Traditional psychological research has shown it to be important in building up one's self-concept; however, the neural basis underlying its cognitive processing remains unclear. The present neuroimaging study examined the mechanism of evaluative feedback processing at the neural level. Nineteen healthy Chinese subjects participated in this experiment and completed the time-estimation task, trying to better their performance according to four types of feedback, namely positive evaluative and performance feedback as well as negative evaluative and performance feedback. Neuroimaging findings showed that evaluative rather than performance feedback induced increased activity mainly distributed in the cortical midline structures (CMS), including the medial prefrontal cortex (BA 8/9)/anterior cingulate cortex (ACC, BA 20), the precuneus (BA 7/31) adjacent to the posterior cingulate gyrus (PCC, BA 23) of both hemispheres, as well as the right inferior parietal lobule (BA 40). This provides evidence that evaluative feedback may significantly elicit self-related processing in the brain. In addition, our results revealed that more brain areas, particularly some self-related neural substrates, were activated by positive evaluative feedback compared with negative evaluative feedback. In sum, this study suggests that evaluative feedback is closely linked with self-concept processing, which distinguishes it from performance feedback.
Kobza, Stefan; Ferrea, Stefano; Schnitzler, Alfons; Pollok, Bettina; Südmeyer, Martin; Bellebaum, Christian
2012-01-01
Feedback to both actively performed and observed behaviour allows adaptation of future actions. Positive feedback leads to increased activity of dopamine neurons in the substantia nigra, whereas dopamine neuron activity is decreased following negative feedback. Dopamine level reduction in unmedicated Parkinson's Disease patients has been shown to lead to a negative learning bias, i.e. enhanced learning from negative feedback. Recent findings suggest that the neural mechanisms of active and observational learning from feedback might differ, with the striatum playing a less prominent role in observational learning. Therefore, it was hypothesized that unmedicated Parkinson's Disease patients would show a negative learning bias only in active but not in observational learning. In a between-group design, 19 Parkinson's Disease patients and 40 healthy controls engaged in either an active or an observational probabilistic feedback-learning task. For both tasks, transfer phases aimed to assess the bias to learn better from positive or negative feedback. As expected, actively learning patients showed a negative learning bias, whereas controls learned better from positive feedback. In contrast, no difference between patients and controls emerged for observational learning, with both groups showing better learning from positive feedback. These findings add to neural models of reinforcement-learning by suggesting that dopamine-modulated input to the striatum plays a minor role in observational learning from feedback. Future research will have to elucidate the specific neural underpinnings of observational learning.
Marzullo, Timothy Charles; Lehmkuhle, Mark J; Gage, Gregory J; Kipke, Daryl R
2010-04-01
Closed-loop neural interface technology that combines neural ensemble decoding with simultaneous electrical microstimulation feedback is hypothesized to improve deep brain stimulation techniques, neuromotor prosthetic applications, and epilepsy treatment. Here we describe our iterative results in a rat model of a sensory and motor neurophysiological feedback control system. Three rats were chronically implanted with microelectrode arrays in both the motor and visual cortices. The rats were subsequently trained over a period of weeks to modulate their motor cortex ensemble unit activity upon delivery of intra-cortical microstimulation (ICMS) of the visual cortex in order to receive a food reward. Rats were given continuous feedback via visual cortex ICMS during the response periods that was representative of the motor cortex ensemble dynamics. Analysis revealed that the feedback provided the animals with indicators of the behavioral trials. At the hardware level, this preparation provides a tractable test model for improving the technology of closed-loop neural devices.
Automatic Adaptation to Fast Input Changes in a Time-Invariant Neural Circuit
Bharioke, Arjun; Chklovskii, Dmitri B.
2015-01-01
Neurons must faithfully encode signals that can vary over many orders of magnitude despite having only limited dynamic ranges. For a correlated signal, this dynamic range constraint can be relieved by subtracting away components of the signal that can be predicted from the past, a strategy known as predictive coding, which relies on learning the input statistics. However, the statistics of natural input signals can also vary over very short time scales, e.g., following saccades across a visual scene. To maintain a reduced transmission cost for signals with rapidly varying statistics, neuronal circuits implementing predictive coding must also rapidly adapt their properties. Experimentally, in different sensory modalities, sensory neurons have shown such adaptations within 100 ms of an input change. Here, we show first that linear neurons connected in a feedback inhibitory circuit can implement predictive coding. We then show that adding a rectification nonlinearity to such a feedback inhibitory circuit allows it to automatically adapt and approximate the performance of an optimal linear predictive coding network, over a wide range of inputs, while keeping its underlying temporal and synaptic properties unchanged. We demonstrate that the resulting changes to the linearized temporal filters of this nonlinear network match the fast adaptations observed experimentally in different sensory modalities, in different vertebrate species. Therefore, the nonlinear feedback inhibitory network can provide automatic adaptation to fast varying signals, maintaining the dynamic range necessary for accurate neuronal transmission of natural inputs. PMID:26247884
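A minimal sketch of the kind of feedback inhibitory loop discussed above follows: a unit subtracts a gain-scaled, one-step-delayed copy of its own output from a temporally correlated input, with an optional rectification. The AR(1) input, the gain value, and the one-step delay are illustrative assumptions, and the automatic adaptation analysed in the paper is not reproduced; the sketch only shows that the feedback suppresses the predictable (low-frequency) component of the signal, reducing output variance.

```python
import numpy as np

rng = np.random.default_rng(3)

# Temporally correlated AR(1) input signal (its low-frequency part is predictable).
T, rho = 2000, 0.95
s = np.zeros(T)
for t in range(1, T):
    s[t] = rho * s[t - 1] + rng.standard_normal()

def feedback_inhibition(signal, gain, rectified=False):
    """Unit output = input minus a gain-scaled, one-step-delayed copy of its own output."""
    y, inhib = np.zeros_like(signal), 0.0
    for t in range(len(signal)):
        y[t] = signal[t] - gain * inhib
        if rectified:
            y[t] = max(y[t], 0.0)          # rectification nonlinearity
        inhib = y[t]                       # delayed feedback for the next step
    return y

for label, out in [("open loop", s),
                   ("linear feedback", feedback_inhibition(s, 0.9)),
                   ("rectified feedback", feedback_inhibition(s, 0.9, rectified=True))]:
    print(f"{label:18s} variance = {out.var():.2f}")
```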
Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex
Procyk, Emmanuel; Dominey, Peter Ford
2016-01-01
Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to representing diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity, which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining spatio-temporal processing of reservoirs, and input-driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity of the dorsal anterior cingulate cortex of monkeys, which revealed similar network dynamics. We argue that reservoir computing is a pertinent framework to model local cortical dynamics and their contribution to higher cognitive function. PMID:27286251
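A minimal echo-state-style sketch of a reservoir with a trained readout that is fed back into the network is given below. The reservoir size, spectral radius, ridge penalty, and the toy "context" target (a running average of the input) are illustrative assumptions rather than the model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

N, T = 200, 1000
# Random recurrent reservoir scaled to spectral radius < 1 (echo-state property).
W = rng.standard_normal((N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.standard_normal((N, 1))
W_fb = rng.standard_normal((N, 1)) * 0.5       # feedback from the trained context neuron

u = rng.integers(0, 2, T).astype(float)                 # toy input stream
target = np.convolve(u, np.ones(5) / 5, mode="same")    # "context": recent input history

# Collect reservoir states, teacher-forcing the feedback signal during training.
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    fb = target[t - 1] if t > 0 else 0.0
    x = np.tanh(W @ x + W_in[:, 0] * u[t] + W_fb[:, 0] * fb)
    X[t] = x

# Train the linear readout (the "feedback neuron") by ridge regression.
ridge = 1e-3
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ target)
print("training MSE:", np.mean((X @ W_out - target) ** 2))
```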
Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.
2015-01-01
The answer to the question of how the brain incorporates sensory feedback and links it with motor function to achieve goal-directed movement during vocalization remains unclear. We investigated the mechanisms of voice pitch motor control by examining the spectro-temporal dynamics of EEG signals when non-musicians (NM), relative pitch (RP), and absolute pitch (AP) musicians maintained vocalizations of a vowel sound and received randomized ± 100 cents pitch-shift stimuli in their auditory feedback. We identified a phase-synchronized (evoked) fronto-central activation within the theta band (5–8 Hz) that temporally overlapped with compensatory vocal responses to pitch-shifted auditory feedback and was significantly stronger in RP and AP musicians compared with non-musicians. A second component involved a non-phase-synchronized (induced) frontal activation within the delta band (1–4 Hz) that emerged at approximately 1 s after the stimulus onset. The delta activation was significantly stronger in the NM compared with RP and AP groups and correlated with the pitch rebound error (PRE), indicating the degree to which subjects failed to re-adjust their voice pitch to baseline after the stimulus offset. We propose that the evoked theta is a neurophysiological marker of enhanced pitch processing in musicians and reflects mechanisms by which humans incorporate auditory feedback to control their voice pitch. We also suggest that the delta activation reflects adaptive neural processes by which vocal production errors are monitored and used to update the state of sensory-motor networks for driving subsequent vocal behaviors. This notion is corroborated by our findings showing that larger PREs were associated with greater delta band activity in the NM compared with RP and AP groups. These findings provide new insights into the neural mechanisms of auditory feedback processing for vocal pitch motor control. PMID:25873858
Time Series Neural Network Model for Part-of-Speech Tagging Indonesian Language
NASA Astrophysics Data System (ADS)
Tanadi, Theo
2018-03-01
Part-of-speech tagging (POS tagging) is an important task in natural language processing. Many methods have been used for this task, including neural networks. This paper models a neural network that performs POS tagging. A time series neural network is modelled to solve the problems that a basic neural network faces when attempting to do POS tagging. In order to enable the neural network to take text data as input, the text data is first clustered using Brown clustering, resulting in a binary dictionary that the neural network can use. To further improve the accuracy of the neural network, other features such as the POS tags, suffixes, and affixes of previous words are also fed to the neural network.
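A minimal sketch of such a pipeline, feeding each word's Brown-cluster binary code together with the previously predicted tag back into the network, is shown below. The four-word Indonesian example, the 4-bit codes, the tag set, and the untrained random weights are illustrative assumptions, so the predicted tags are arbitrary; only the data flow is the point.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy Brown-clustering output: each word maps to a fixed-length binary code.
brown_code = {"saya": "0010", "makan": "1101", "nasi": "0111", "goreng": "0110"}
TAGS = ["NOUN", "VERB", "PRON", "ADJ"]

def features(word, prev_tag_idx):
    """Binary cluster code + one-hot of the previously predicted tag (the time-series feedback)."""
    code = np.array([int(b) for b in brown_code[word]], dtype=float)
    prev = np.zeros(len(TAGS)); prev[prev_tag_idx] = 1.0
    return np.concatenate([code, prev])

# Untrained illustrative network: 8 inputs -> 8 hidden -> 4 tag scores.
W1 = rng.standard_normal((8, 8)) * 0.3
W2 = rng.standard_normal((8, len(TAGS))) * 0.3

def tag_sentence(words):
    prev, out = 0, []
    for w in words:
        scores = np.tanh(features(w, prev) @ W1) @ W2
        prev = int(np.argmax(scores))          # feed the chosen tag forward in time
        out.append(TAGS[prev])
    return out

print(tag_sentence(["saya", "makan", "nasi", "goreng"]))
```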
Behavioral and neural effects of congruency of visual feedback during short-term motor learning.
Ossmy, Ori; Mukamel, Roy
2018-05-15
Visual feedback can facilitate or interfere with movement execution. Here, we describe behavioral and neural mechanisms by which the congruency of visual feedback during physical practice of a motor skill modulates subsequent performance gains. 18 healthy subjects learned to execute rapid sequences of right hand finger movements during fMRI scans either with or without visual feedback. Feedback consisted of a real-time, movement-based display of virtual hands that was either congruent (right virtual hand movement), or incongruent (left virtual hand movement yoked to the executing right hand). At the group level, right hand performance gains following training with congruent visual feedback were significantly higher relative to training without visual feedback. Conversely, performance gains following training with incongruent visual feedback were significantly lower. Interestingly, across individual subjects these opposite effects correlated. Activation in the Supplementary Motor Area (SMA) during training corresponded to individual differences in subsequent performance gains. Furthermore, functional coupling of SMA with visual cortices predicted individual differences in behavior. Our results demonstrate that some individuals are more sensitive than others to congruency of visual feedback during short-term motor learning and that neural activation in SMA correlates with such inter-individual differences. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Jiang, Guo-Qing; Xu, Jing; Wei, Jun
2018-04-01
Two algorithms based on machine learning neural networks, the shallow learning (S-L) and deep learning (D-L) algorithms, are proposed that can potentially be used in atmosphere-only typhoon forecast models to provide flow-dependent typhoon-induced sea surface temperature cooling (SSTC) for improving typhoon predictions. The major challenge of existing SSTC algorithms in forecast models is how to accurately predict SSTC induced by an upcoming typhoon, which requires information not only from historical data but more importantly also from the target typhoon itself. The S-L algorithm consists of a single layer of neurons with mixed atmospheric and oceanic factors. Such a structure is found to be unable to represent correctly the physical typhoon-ocean interaction. It tends to produce an unstable SSTC distribution, for which any perturbations may lead to changes in both SSTC pattern and strength. The D-L algorithm extends the neural network to a 4 × 5 neuron matrix with atmospheric and oceanic factors being separated in different layers of neurons, so that the machine learning can determine the roles of atmospheric and oceanic factors in shaping the SSTC. Therefore, it produces a stable crescent-shaped SSTC distribution, with its large-scale pattern determined mainly by atmospheric factors (e.g., winds) and small-scale features by oceanic factors (e.g., eddies). Sensitivity experiments reveal that the D-L algorithm reduces maximum wind intensity errors by 60-70% for four case study simulations, compared to their atmosphere-only model runs.
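The structural contrast between the two architectures can be sketched as below. The factor counts, layer sizes, and random weights are illustrative assumptions, and the paper's exact 4 × 5 neuron arrangement is not reproduced; the point is only that the shallow variant mixes atmospheric and oceanic factors in one layer, while the deep variant keeps them in separate branches before merging.

```python
import numpy as np

rng = np.random.default_rng(5)

def dense(n_in, n_out):
    return rng.standard_normal((n_in, n_out)) * 0.1

# Toy predictors for one forecast point: 3 atmospheric factors (e.g. wind components)
# and 2 oceanic factors (e.g. mixed-layer depth, eddy signal).
atmos = rng.standard_normal(3)
ocean = rng.standard_normal(2)

# Shallow (S-L-like) variant: one mixed layer of neurons sees all factors at once.
W_mix, w_out_s = dense(5, 8), dense(8, 1)
sstc_shallow = np.tanh(np.concatenate([atmos, ocean]) @ W_mix) @ w_out_s

# Deep (D-L-like) variant: atmospheric and oceanic factors are processed in
# separate layers before being merged, so their roles can be shaped independently.
W_a, W_o = dense(3, 4), dense(2, 4)
W_merge, w_out_d = dense(8, 5), dense(5, 1)
h = np.concatenate([np.tanh(atmos @ W_a), np.tanh(ocean @ W_o)])
sstc_deep = np.tanh(h @ W_merge) @ w_out_d

print(sstc_shallow[0], sstc_deep[0])
```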
Subramaniam, Karuna; Hooker, Christine I; Biagianti, Bruno; Fisher, Melissa; Nagarajan, Srikantan; Vinogradov, Sophia
2015-01-01
Amotivation in schizophrenia is a central predictor of poor functioning, and is thought to occur due to deficits in anticipating future rewards, suggesting that impairments in anticipating pleasure can contribute to functional disability in schizophrenia. In healthy comparison (HC) participants, reward anticipation is associated with activity in frontal-striatal networks. By contrast, schizophrenia (SZ) participants show hypoactivation within these frontal-striatal networks during this motivated anticipatory brain state. Here, we examined neural activation in SZ and HC participants during the anticipatory phase of stimuli that predicted immediate upcoming reward and punishment, and during the feedback/outcome phase, in relation to trait measures of hedonic pleasure and real-world functional capacity. SZ patients showed hypoactivation in ventral striatum during reward anticipation. Additionally, we found distinct differences between HC and SZ groups in their association between reward-related immediate anticipatory neural activity and their reported experience of pleasure. HC participants recruited reward-related regions in striatum that significantly correlated with subjective consummatory pleasure, while SZ patients revealed activation in attention-related regions, such as the IPL, which correlated with consummatory pleasure and functional capacity. These findings may suggest that SZ patients activate compensatory attention processes during anticipation of immediate upcoming rewards, which likely contribute to their functional capacity in daily life.
Real-time functional magnetic resonance imaging neurofeedback in motor neurorehabilitation.
Linden, David E J; Turner, Duncan L
2016-08-01
Recent developments in functional magnetic resonance imaging (fMRI) have catalyzed a new field of translational neuroscience. Using fMRI to monitor aspects of task-related changes in neural activation or brain connectivity, investigators can offer feedback of simple or complex neural signals/patterns back to the participant on a quasi-real-time basis [real-time-fMRI-based neurofeedback (rt-fMRI-NF)]. Here, we introduce some background methodology of the new developments in this field and give a perspective on how they may be used in neurorehabilitation in the future. rt-fMRI-NF has been used to promote self-regulation of activity in several brain regions and networks. In addition, and unlike other noninvasive techniques, rt-fMRI-NF can access specific subcortical regions and in principle any region that can be monitored using fMRI, including the cerebellum, brainstem and spinal cord. In Parkinson's disease and stroke, rt-fMRI-NF has been demonstrated to alter neural activity after the self-regulation training was completed and to modify specific behaviours. In the future, rt-fMRI-NF could be exploited to induce neuroplasticity in brain networks that are involved in certain neurological conditions. However, the use of rt-fMRI-NF in randomized, controlled clinical trials is currently in its infancy.
The Role of Competitive Inhibition and Top-Down Feedback in Binding during Object Recognition
Wyatte, Dean; Herd, Seth; Mingus, Brian; O’Reilly, Randall
2012-01-01
How does the brain bind together visual features that are processed concurrently by different neurons into a unified percept suitable for processes such as object recognition? Here, we describe how simple, commonly accepted principles of neural processing can interact over time to solve the brain’s binding problem. We focus on mechanisms of neural inhibition and top-down feedback. Specifically, we describe how inhibition creates competition among neural populations that code different features, effectively suppressing irrelevant information, and thus minimizing illusory conjunctions. Top-down feedback contributes to binding in a similar manner, but by reinforcing relevant features. Together, inhibition and top-down feedback contribute to a competitive environment that ensures only the most appropriate features are bound together. We demonstrate this overall proposal using a biologically realistic neural model of vision that processes features across a hierarchy of interconnected brain areas. Finally, we argue that temporal synchrony plays only a limited role in binding – it does not simultaneously bind multiple objects, but does aid in creating additional contrast between relevant and irrelevant features. Thus, our overall theory constitutes a solution to the binding problem that relies only on simple neural principles without any binding-specific processes. PMID:22719733
Biologically inspired computation and learning in Sensorimotor Systems
NASA Astrophysics Data System (ADS)
Lee, Daniel D.; Seung, H. S.
2001-11-01
Networking systems presently lack the ability to intelligently process the rich multimedia content of the data traffic they carry. Endowing artificial systems with the ability to adapt to changing conditions requires algorithms that can rapidly learn from examples. We demonstrate the application of such learning algorithms on an inexpensive quadruped robot constructed to perform simple sensorimotor tasks. The robot learns to track a particular object by discovering the salient visual and auditory cues unique to that object. The system uses a convolutional neural network that automatically combines color, luminance, motion, and auditory information. The weights of the networks are adjusted using feedback from a teacher to reflect the reliability of the various input channels in the surrounding environment. Additionally, the robot is able to compensate for its own motion by adapting the parameters of a vestibular ocular reflex system.
Optogenetic feedback control of neural activity
Newman, Jonathan P; Fong, Ming-fai; Millard, Daniel C; Whitmire, Clarissa J; Stanley, Garrett B; Potter, Steve M
2015-01-01
Optogenetic techniques enable precise excitation and inhibition of firing in specified neuronal populations and artifact-free recording of firing activity. Several studies have suggested that optical stimulation provides the precision and dynamic range requisite for closed-loop neuronal control, but no approach yet permits feedback control of neuronal firing. Here we present the ‘optoclamp’, a feedback control technology that provides continuous, real-time adjustments of bidirectional optical stimulation in order to lock spiking activity at specified targets over timescales ranging from seconds to days. We demonstrate how this system can be used to decouple neuronal firing levels from ongoing changes in network excitability due to multi-hour periods of glutamatergic or GABAergic neurotransmission blockade in vitro as well as impinging vibrissal sensory drive in vivo. This technology enables continuous, precise optical control of firing in neuronal populations in order to disentangle causally related variables of circuit activation in a physiologically and ethologically relevant manner. DOI: http://dx.doi.org/10.7554/eLife.07192.001 PMID:26140329
Mohsenzadeh, Yalda; Qin, Sheng; Cichy, Radoslaw M; Pantazis, Dimitrios
2018-06-21
Human visual recognition activates a dense network of overlapping feedforward and recurrent neuronal processes, making it hard to disentangle processing in the feedforward from the feedback direction. Here, we used ultra-rapid serial visual presentation to suppress sustained activity that blurs the boundaries of processing steps, enabling us to resolve two distinct stages of processing with MEG multivariate pattern classification. The first processing stage was the rapid activation cascade of the bottom-up sweep, which terminated early as visual stimuli were presented at progressively faster rates. The second stage was the emergence of categorical information with peak latency that shifted later in time with progressively faster stimulus presentations, indexing time-consuming recurrent processing. Using MEG-fMRI fusion with representational similarity, we localized recurrent signals in early visual cortex. Together, our findings segregated an initial bottom-up sweep from subsequent feedback processing, and revealed the neural signature of increased recurrent processing demands for challenging viewing conditions. © 2018, Mohsenzadeh et al.
Model-Based Adaptive Event-Triggered Control of Strict-Feedback Nonlinear Systems.
Li, Yuan-Xin; Yang, Guang-Hong
2018-04-01
This paper is concerned with the adaptive event-triggered control problem of nonlinear continuous-time systems in strict-feedback form. By using an event-sampled neural network (NN) to approximate the unknown nonlinear function, an adaptive model and an associated event-triggered controller are designed by exploiting the backstepping method. In the proposed method, the feedback signals and the NN weights are updated aperiodically, only when the event-triggered condition is violated. A positive lower bound on the minimum intersample time is guaranteed to avoid an accumulation point. The closed-loop stability of the resulting nonlinear impulsive dynamical system is rigorously proved via Lyapunov analysis under an adaptive event sampling condition. Compared with the traditional adaptive backstepping design with a fixed sampling period, the event-triggered method samples the state and updates the NN weights only when necessary, so the number of transmissions can be significantly reduced. Finally, two simulation examples are presented to show the effectiveness of the proposed control method.
Fuzzy Adaptive Control for Intelligent Autonomous Space Exploration Problems
NASA Technical Reports Server (NTRS)
Esogbue, Augustine O.
1998-01-01
The principal objective of the research reported here is the re-design, analysis, and optimization of our newly developed neural network fuzzy adaptive controller model for complex processes, which is capable of learning fuzzy control rules from process data and improving its control through on-line adaptation. The learned improvement is driven by a performance objective function that provides evaluative feedback; this performance objective is broadly defined to meet long-range goals over time. Although fuzzy control had proven effective for complex, nonlinear, imprecisely defined processes for which standard models and controls are inefficient, impractical, or cannot be derived, the state of the art prior to our work showed that procedures for deriving fuzzy control were mostly ad hoc heuristics. The learning ability of neural networks was exploited to systematically derive fuzzy control, permit on-line adaptation, and in the process optimize control. The operation of neural networks integrates very naturally with fuzzy logic. The neural networks, which were designed and tested first with simulation software and simulated data and then with realistic industrial data, were reconfigured for application on several platforms as well as for the employment of improved algorithms. The statistical procedures of the learning process were investigated and evaluated with standard statistical procedures (such as ANOVA, graphical analysis of residuals, etc.). The computational advantage of dynamic programming-like methods of optimal control was used to permit on-line fuzzy adaptive control. Tests for the consistency, completeness, and interaction of the control rules were applied. Comparisons to other methods and controllers were made so as to identify the major advantages of the resulting controller model. Several specific modifications and extensions were made to the original controller. Additional modifications and explorations have been proposed for further study; some of these are in progress in our laboratory, while others await additional support. All of these enhancements will improve the attractiveness of the controller as an effective tool for the on-line control of an array of complex process environments.
On-line, adaptive state estimator for active noise control
NASA Technical Reports Server (NTRS)
Lim, Tae W.
1994-01-01
Dynamic characteristics of airframe structures are expected to vary as aircraft flight conditions change. Accurate knowledge of the changing dynamic characteristics is crucial to enhancing the performance of the active noise control system using feedback control. This research investigates the development of an adaptive, on-line state estimator using a neural network concept to conduct active noise control. In this research, an algorithm has been developed that can be used to estimate displacement and velocity responses at any locations on the structure from a limited number of acceleration measurements and input force information. The algorithm employs band-pass filters to extract from the measurement signal the frequency contents corresponding to a desired mode. The filtered signal is then used to train a neural network which consists of a linear neuron with three weights. The structure of the neural network is designed as simple as possible to increase the sampling frequency as much as possible. The weights obtained through neural network training are then used to construct the transfer function of a mode in z-domain and to identify modal properties of each mode. By using the identified transfer function and interpolating the mode shape obtained at sensor locations, the displacement and velocity responses are estimated with reasonable accuracy at any locations on the structure. The accuracy of the response estimates depends on the number of modes incorporated in the estimates and the number of sensors employed to conduct mode shape interpolation. Computer simulation demonstrates that the algorithm is capable of adapting to the varying dynamic characteristics of structural properties. Experimental implementation of the algorithm on a DSP (digital signal processing) board for a plate structure is underway. The algorithm is expected to reach the sampling frequency range of about 10 kHz to 20 kHz which needs to be maintained for a typical active noise control application.
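The modal identification scheme described above (band-pass filter a measured acceleration to isolate one mode, then fit a linear neuron with three weights that encodes a second-order z-domain transfer function) can be sketched as below. The filter order, LMS update, sampling rate, and signal names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np
from scipy.signal import butter, lfilter

def bandpass(x, lo_hz, hi_hz, fs):
    # Band-pass filter to isolate the frequency content of one structural mode.
    b, a = butter(4, [lo_hz / (fs / 2), hi_hz / (fs / 2)], btype="band")
    return lfilter(b, a, x)

def fit_modal_neuron(u, y, lr=1e-3, epochs=20):
    """LMS training of a linear neuron with three weights that models one mode as
    y[k] ~ w0*y[k-1] + w1*y[k-2] + w2*u[k]  (a second-order transfer function in z)."""
    w = np.zeros(3)
    for _ in range(epochs):
        for k in range(2, len(y)):
            phi = np.array([y[k - 1], y[k - 2], u[k]])  # regressor
            e = y[k] - w @ phi                           # prediction error
            w += lr * e * phi                            # LMS weight update
    return w  # w[0], w[1] encode the modal poles (z-domain denominator); w[2] scales the input

# Stand-in signals only to exercise the code: a mode near 12 Hz, sampled at 1 kHz.
fs = 1000.0
t = np.arange(0, 5, 1 / fs)
force = np.random.randn(t.size)                                   # stand-in input force
acc = np.sin(2 * np.pi * 12 * t) + 0.1 * np.random.randn(t.size)  # stand-in acceleration
w = fit_modal_neuron(bandpass(force, 8, 16, fs), bandpass(acc, 8, 16, fs))
```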
Orlowska-Kowalska, Teresa; Kaminski, Marcin
2014-01-01
The paper deals with the implementation of optimized neural networks (NNs) for state-variable estimation of a drive system with an elastic joint. The signals estimated by the NNs are used in a control structure with a state-space controller and additional feedbacks from the shaft torque and the load speed. High estimation quality is very important for the correct operation of the closed-loop system. The precision of state-variable estimation depends on the generalization properties of the NNs. A short review of NN optimization methods is presented. Two techniques, typical of regularization and pruning methods, are described and tested in detail: the Bayesian regularization and the Optimal Brain Damage methods. Simulation results show good precision of both optimized neural estimators for a wide range of changes of the load speed and the load torque, not only for nominal but also for changed parameters of the drive system. The simulation results are verified in a laboratory setup.
ERIC Educational Resources Information Center
Lau, Jennifer Y. F.; Guyer, Amanda E.; Tone, Erin B.; Jenness, Jessica; Parrish, Jessica M.; Pine, Daniel S.; Nelson, Eric E.
2012-01-01
Peer rejection powerfully predicts adolescent anxiety. While cognitive differences influence anxious responses to social feedback, little is known about neural contributions. Twelve anxious and twelve age-, gender- and IQ-matched, psychiatrically healthy adolescents received "not interested" and "interested" feedback from unknown peers during a…
NASA Astrophysics Data System (ADS)
Liu, Xing-fa; Cen, Ming
2007-12-01
The neural network system error correction method is more precise than the least-squares and spherical-harmonics system error correction methods. The accuracy of the neural network method depends mainly on the structure of the network. Analysis and simulation show that both the BP and RBF neural network system error correction methods achieve high correction accuracy; for small training sample sets, the RBF network method is preferable to the BP network method when training speed and network size are taken into account.
Benyamini, Miri; Zacksenhouse, Miriam
2015-01-01
Recent experiments with brain-machine-interfaces (BMIs) indicate that the extent of neural modulations increased abruptly upon starting to operate the interface, and especially after the monkey stopped moving its hand. In contrast, neural modulations that are correlated with the kinematics of the movement remained relatively unchanged. Here we demonstrate that similar changes are produced by simulated neurons that encode the relevant signals generated by an optimal feedback controller during simulated BMI experiments. The optimal feedback controller relies on state estimation that integrates both visual and proprioceptive feedback with prior estimations from an internal model. The processing required for optimal state estimation and control were conducted in the state-space, and neural recording was simulated by modeling two populations of neurons that encode either only the estimated state or also the control signal. Spike counts were generated as realizations of doubly stochastic Poisson processes with linear tuning curves. The model successfully reconstructs the main features of the kinematics and neural activity during regular reaching movements. Most importantly, the activity of the simulated neurons successfully reproduces the observed changes in neural modulations upon switching to brain control. Further theoretical analysis and simulations indicate that increasing the process noise during normal reaching movement results in similar changes in neural modulations. Thus, we conclude that the observed changes in neural modulations during BMI experiments can be attributed to increasing process noise associated with the imperfect BMI filter, and, more directly, to the resulting increase in the variance of the encoded signals associated with state estimation and the required control signal. PMID:26042002
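The spike-count generation step described above (realizations of doubly stochastic Poisson processes with linear tuning curves) can be sketched as follows; the population size, tuning parameters, and bin width are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_counts(signal, gains, baselines, dt=0.05):
    """Generate spike counts for a population of linearly tuned neurons.
    signal:    (T, d) encoded variables over time (e.g., estimated state, control signal)
    gains:     (n, d) linear tuning weights, one row per neuron
    baselines: (n,)   baseline firing rates in Hz
    Returns a (T, n) array of Poisson counts per time bin of width dt; the rates vary
    with the encoded signal, which makes the counts doubly stochastic."""
    rates = np.maximum(baselines + signal @ gains.T, 0.0)  # rectified linear tuning (Hz)
    return rng.poisson(rates * dt)

# Illustrative use: a smooth 2-D stand-in signal encoded by 30 neurons.
T, d, n = 200, 2, 30
signal = np.cumsum(rng.normal(size=(T, d)), axis=0) * 0.05
gains = rng.normal(scale=5.0, size=(n, d))
baselines = rng.uniform(5.0, 20.0, size=n)
counts = spike_counts(signal, gains, baselines)
```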
Toporikova, Natalia; Butera, Robert J
2013-02-01
Neuromodulators, such as amines and neuropeptides, alter the activity of neurons and neuronal networks. In this work, we investigate how neuromodulators, which activate G(q)-protein second messenger systems, can modulate the bursting frequency of neurons in a critical portion of the respiratory neural network, the pre-Bötzinger complex (preBötC). These neurons are a vital part of the ponto-medullary neuronal network, which generates a stable respiratory rhythm whose frequency is regulated by neuromodulator release from the nearby Raphe nucleus. Using a simulated 50-cell network of excitatory preBötC neurons with a heterogeneous distribution of persistent sodium conductance and Ca(2+), we determined conditions for frequency modulation in such a network by simulating interaction between Raphe and preBötC nuclei. We found that the positive feedback between the Raphe excitability and preBötC activity induces frequency modulation in the preBötC neurons. In addition, the frequency of the respiratory rhythm can be regulated via phasic release of excitatory neuromodulators from the Raphe nucleus. We predict that the application of a G(q) antagonist will eliminate this frequency modulation by the Raphe and keep the network frequency constant and low. In contrast, application of a G(q) agonist will result in a high frequency for all levels of Raphe stimulation. Our modeling results also suggest that high [K(+)] requirement in respiratory brain slice experiments may serve as a compensatory mechanism for low neuromodulatory tone. Copyright © 2012 Elsevier B.V. All rights reserved.
A novel recurrent neural network with finite-time convergence for linear programming.
Liu, Qingshan; Cao, Jinde; Chen, Guanrong
2010-11-01
In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
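The letter's specific dynamics and Lyapunov construction are not reproduced here, but the general idea behind finite-time convergent networks for linear programming (a gradient flow with discontinuous, sign-type activations on an exact penalty of the constraints) can be sketched as follows; the penalty weight, step size, and toy problem are illustrative assumptions.

```python
import numpy as np

def lp_neural_network(c, A, b, x0, k=10.0, dt=1e-3, steps=20000):
    """Euler simulation of a gradient-flow network for  min c@x  s.t.  A@x = b, x >= 0,
    using an exact (L1) penalty with sign-type activations.  A sufficiently large k
    drives constraint violations to zero at a constant rate, which is the mechanism
    exploited by finite-time convergence results for such networks."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        eq_grad = A.T @ np.sign(A @ x - b)      # subgradient of k*||A@x - b||_1
        pos_grad = -np.where(x < 0, 1.0, 0.0)   # subgradient of k*sum(max(0, -x))
        x -= dt * (c + k * (eq_grad + pos_grad))
    return x  # state x(t) plays the role of the network's neuron activations

# Illustrative LP: min x0 + 2*x1  s.t.  x0 + x1 = 1, x >= 0  (optimum near [1, 0]).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
print(lp_neural_network(c, A, b, x0=np.array([0.3, 0.3])))
```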
Modular, Hierarchical Learning By Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Baldi, Pierre F.; Toomarian, Nikzad
1996-01-01
A modular and hierarchical approach to supervised learning by artificial neural networks leads to networks that are more structured than networks in which all neurons are fully interconnected. These networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, the sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.
Nocchi, Federico; Gazzellini, Simone; Grisolia, Carmela; Petrarca, Maurizio; Cannatà, Vittorio; Cappa, Paolo; D'Alessio, Tommaso; Castelli, Enrico
2012-07-24
The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions, activations were elicited in cerebral areas involved in visual perception, sensory integration, recognition of movement, re-mapping on the somatosensory and motor cortex, storage in memory, and response control. Results from the congruent vs. incongruent trials revealed greater activity for the former condition than the latter in a network including cingulate cortex, right inferior and middle frontal gyrus that are involved in the go-signal and in decision control. Results on healthy subjects would suggest the appropriateness of an abstract visual feedback provided during motor training. The task contributes to highlight the potential of fMRI in improving the understanding of visual motor processes and may also be useful in detecting brain reorganisation during training.
Control of Complex Dynamic Systems by Neural Networks
NASA Technical Reports Server (NTRS)
Spall, James C.; Cristion, John A.
1993-01-01
This paper considers the use of neural networks (NN's) in controlling a nonlinear, stochastic system with unknown process equations. The NN is used to model the resulting unknown control law. The approach here is based on using the output error of the system to train the NN controller without the need to construct a separate model (NN or other type) for the unknown process dynamics. To implement such a direct adaptive control approach, it is required that connection weights in the NN be estimated while the system is being controlled. As a result of the feedback of the unknown process dynamics, however, it is not possible to determine the gradient of the loss function for use in standard (back-propagation-type) weight estimation algorithms. Therefore, this paper considers the use of a new stochastic approximation algorithm for this weight estimation, which is based on a 'simultaneous perturbation' gradient approximation that only requires the system output error. It is shown that this algorithm can greatly enhance the efficiency over more standard stochastic approximation algorithms based on finite-difference gradient approximations.
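The 'simultaneous perturbation' gradient approximation referred to above is, in its standard form, computable from only two loss evaluations per update regardless of the number of weights. A minimal sketch follows, with constant gain sequences and a toy quadratic loss standing in for the system output error; these choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def spsa_step(theta, loss, a=0.05, c=0.05):
    """One simultaneous-perturbation step: estimate the gradient of loss(theta) from
    just two evaluations by perturbing all weights at once with a random +/-1
    (Rademacher) vector, then take a gradient step."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    g_hat = (loss(theta + c * delta) - loss(theta - c * delta)) / (2.0 * c) * (1.0 / delta)
    return theta - a * g_hat

# Illustrative use: a quadratic stand-in for the (gradient-free) output-error loss.
loss = lambda th: np.sum((th - np.array([1.0, -2.0, 0.5])) ** 2)
theta = np.zeros(3)
for _ in range(500):
    theta = spsa_step(theta, loss)
print(theta)  # approaches [1, -2, 0.5]
```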
Kobza, Stefan; Ferrea, Stefano; Schnitzler, Alfons; Pollok, Bettina
2012-01-01
Feedback to both actively performed and observed behaviour allows adaptation of future actions. Positive feedback leads to increased activity of dopamine neurons in the substantia nigra, whereas dopamine neuron activity is decreased following negative feedback. Dopamine level reduction in unmedicated Parkinson’s Disease patients has been shown to lead to a negative learning bias, i.e. enhanced learning from negative feedback. Recent findings suggest that the neural mechanisms of active and observational learning from feedback might differ, with the striatum playing a less prominent role in observational learning. Therefore, it was hypothesized that unmedicated Parkinson’s Disease patients would show a negative learning bias only in active but not in observational learning. In a between-group design, 19 Parkinson’s Disease patients and 40 healthy controls engaged in either an active or an observational probabilistic feedback-learning task. For both tasks, transfer phases aimed to assess the bias to learn better from positive or negative feedback. As expected, actively learning patients showed a negative learning bias, whereas controls learned better from positive feedback. In contrast, no difference between patients and controls emerged for observational learning, with both groups showing better learning from positive feedback. These findings add to neural models of reinforcement-learning by suggesting that dopamine-modulated input to the striatum plays a minor role in observational learning from feedback. Future research will have to elucidate the specific neural underpinnings of observational learning. PMID:23185586
van Duijvenvoorde, Anna C K; Zanolie, Kiki; Rombouts, Serge A R B; Raijmakers, Maartje E J; Crone, Eveline A
2008-09-17
How children learn from positive and negative performance feedback lies at the foundation of successful learning and is therefore of great importance for educational practice. In this study, we used functional magnetic resonance imaging (fMRI) to examine the neural developmental changes related to feedback-based learning when performing a rule search and application task. Behavioral results from three age groups (8-9, 11-13, and 18-25 years of age) demonstrated that, compared with adults, 8- to 9-year-old children performed disproportionally more inaccurately after receiving negative feedback relative to positive feedback. Additionally, imaging data pointed toward a qualitative difference in how children and adults use performance feedback. That is, dorsolateral prefrontal cortex and superior parietal cortex were more active after negative feedback for adults, but after positive feedback for children (8-9 years of age). For 11- to 13-year-olds, these regions did not show differential feedback sensitivity, suggesting that the transition occurs around this age. Pre-supplementary motor area/anterior cingulate cortex, in contrast, was more active after negative feedback in both 11- to 13-year-olds and adults, but not 8- to 9-year-olds. Together, the current data show that cognitive control areas are differentially engaged during feedback-based learning across development. Adults engage these regions after signals of response adjustment (i.e., negative feedback). Young children engage these regions after signals of response continuation (i.e., positive feedback). The neural activation patterns found in 11- to 13-year-olds indicate a transition around this age toward an increased influence of negative feedback on performance adjustment. This is the first developmental fMRI study to compare qualitative changes in brain activation during feedback learning across distinct stages of development.
NASA Astrophysics Data System (ADS)
Wu, Wei; Cui, Bao-Tong
2007-07-01
In this paper, a synchronization scheme for a class of chaotic neural networks with time-varying delays is presented. This class of chaotic neural networks covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks, and bidirectional associative memory networks. The obtained criteria are expressed in terms of linear matrix inequalities, thus they can be efficiently verified. A comparison between our results and the previous results shows that our results are less restrictive.
NASA Technical Reports Server (NTRS)
Thakoor, Anil
1990-01-01
Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.
The neural network to determine the mechanical properties of the steels
NASA Astrophysics Data System (ADS)
Yemelyanov, Vitaliy; Yemelyanova, Nataliya; Safonova, Marina; Nedelkin, Aleksey
2018-04-01
The authors describe the neural network structure and software designed and developed to determine the mechanical properties of steels. The neural network is developed to refine the estimated values of the steel properties. The results of simulations of the developed neural network are shown, and the authors note its low standard error. To realize the proposed neural network, specialized software has been developed.
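Since the abstract does not specify the network structure, the sketch below only illustrates the general setup it describes (a feedforward regressor mapping steel descriptors to mechanical properties, evaluated by the standard error of its predictions); the features, targets, layer sizes, and synthetic data are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Stand-in data: inputs could be composition/processing descriptors, targets
# stand in for (normalized) mechanical properties such as yield and tensile strength.
X = rng.uniform(size=(500, 6))
Y = X @ rng.uniform(size=(6, 2)) + rng.normal(scale=0.05, size=(500, 2))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X_tr, Y_tr)

# "Standard error" of the predictions on held-out data, one value per property.
resid = Y_te - net.predict(X_te)
print(resid.std(axis=0, ddof=1))
```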
Finite-Time Adaptive Control for a Class of Nonlinear Systems With Nonstrict Feedback Structure.
Sun, Yumei; Chen, Bing; Lin, Chong; Wang, Honghong
2017-09-18
This paper focuses on finite-time adaptive neural tracking control for nonlinear systems in nonstrict-feedback form. A semiglobal finite-time practical stability criterion is first proposed, and the finite-time adaptive neural control strategy is then derived using this criterion. Unlike existing results on adaptive neural/fuzzy control, the proposed adaptive neural controller guarantees that the tracking error converges in finite time to a sufficiently small domain around the origin while all other closed-loop signals remain bounded. Finally, two examples are used to test the validity of our results.
Locking of correlated neural activity to ongoing oscillations
Helias, Moritz
2017-01-01
Population-wide oscillations are ubiquitously observed in mesoscopic signals of cortical activity. In these network states a global oscillatory cycle modulates the propensity of neurons to fire. Synchronous activation of neurons has been hypothesized to be a separate channel of information processing in the brain. A salient question is therefore whether and how oscillations interact with spike synchrony, and to what extent these channels can be considered separate. Experiments indeed showed that correlated spiking co-modulates with the static firing rate and is also tightly locked to the phase of beta-oscillations. While the dependence of correlations on the mean rate is well understood in feed-forward networks, it remains unclear why and by which mechanisms correlations tightly lock to an oscillatory cycle. We here demonstrate that such correlated activation of pairs of neurons is qualitatively explained by periodically-driven random networks. We identify the mechanisms by which covariances depend on a driving periodic stimulus. Mean-field theory combined with linear response theory yields closed-form expressions for the cyclostationary mean activities and pairwise zero-time-lag covariances of binary recurrent random networks. Two distinct mechanisms cause time-dependent covariances: the modulation of the susceptibility of single neurons (via the external input and network feedback) and the time-varying variances of single unit activities. For some parameters, the effectively inhibitory recurrent feedback leads to resonant covariances even if mean activities show non-resonant behavior. Our analytical results open the question of time-modulated synchronous activity to a quantitative analysis. PMID:28604771
Differences in Feedback- and Inhibition-Related Neural Activity in Adult ADHD
ERIC Educational Resources Information Center
Dibbets, Pauline; Evers, Lisbeth; Hurks, Petra; Marchetta, Natalie; Jolles, Jelle
2009-01-01
The objective of this study was to examine response inhibition- and feedback-related neural activity in adults with attention deficit hyperactivity disorder (ADHD) using event-related functional MRI. Sixteen male adults with ADHD and 13 healthy/normal controls participated in this study and performed a modified Go/NoGo task. Behaviourally,…
Continuous Representation Learning via User Feedback
DOE Office of Scientific and Technical Information (OSTI.GOV)
Representation learning is a deep-learning-based technique for extracting features from data for the purpose of machine learning. This normally requires a large amount of data, on the order of tens of thousands to millions of samples, to properly train the deep neural network. This invention is a system for continuous representation learning, in which the system may be improved with a small number of additional samples (on the order of 10-100). Its unique characteristics include a human-computer feedback component, where a person assesses the quality of the current representation and then provides a better representation to the system. The system then mixes the new data with old training examples to avoid overfitting and improve overall performance of the system. The model can be exported and shared with other users, and it may be applied to additional images the system hasn't seen before.
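The 'mix new data with old training examples' update described above can be sketched as follows, with a plain logistic-regression learner standing in for the deep representation network; the batch sizes, mixing ratio, and synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def sgd_step(w, X, y, lr=0.1):
    # One gradient step of logistic regression (stand-in for the deep network update).
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return w - lr * X.T @ (p - y) / len(y)

def continual_update(w, new_X, new_y, old_X, old_y, mix=4, steps=50):
    """Refine the current model with a handful of user-provided samples (order 10-100),
    mixing each update batch with a random draw of old training examples so the
    feedback does not cause overfitting to the few new points."""
    for _ in range(steps):
        idx = rng.choice(len(old_y), size=mix * len(new_y), replace=False)
        X = np.vstack([new_X, old_X[idx]])
        y = np.concatenate([new_y, old_y[idx]])
        w = sgd_step(w, X, y)
    return w

# Illustrative use with synthetic 5-D data.
old_X, old_y = rng.normal(size=(1000, 5)), rng.integers(0, 2, size=1000).astype(float)
new_X, new_y = rng.normal(size=(20, 5)), rng.integers(0, 2, size=20).astype(float)
w = continual_update(np.zeros(5), new_X, new_y, old_X, old_y)
```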
Adaptive Control for Microgravity Vibration Isolation System
NASA Technical Reports Server (NTRS)
Yang, Bong-Jun; Calise, Anthony J.; Craig, James I.; Whorton, Mark S.
2005-01-01
Most active vibration isolation systems that try to provide a quiescent acceleration environment for space science experiments have utilized linear design methods. In this paper, we address adaptive control augmentation of an existing classical controller that employs a high-gain acceleration feedback together with a low-gain position feedback to center the isolated platform. The control design accounts for parametric and dynamic uncertainties, because the hardware of the isolation system is built as a payload-level isolator and the acceleration sensor exhibits a significant bias. A neural network is incorporated to adaptively compensate for the system uncertainties, and a high-pass filter is introduced to mitigate the effect of the measurement bias. Simulations show that the adaptive control improves the performance of the existing acceleration controller and keeps the deviation of the isolated platform at the level achieved by the existing control system.
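The bias-mitigation step described above (high-pass filtering the acceleration measurement before it is fed back) can be sketched with a first-order discrete high-pass filter; the cutoff frequency, sampling rate, and signal values are illustrative assumptions, not the flight hardware's parameters.

```python
import numpy as np

def highpass(x, fc_hz, fs_hz):
    """First-order discrete high-pass filter: removes the (near-DC) sensor bias
    from an acceleration measurement while passing the vibration content."""
    alpha = 1.0 / (1.0 + 2.0 * np.pi * fc_hz / fs_hz)  # pole from the RC time constant
    y = np.zeros_like(x)
    for k in range(1, len(x)):
        y[k] = alpha * (y[k - 1] + x[k] - x[k - 1])
    return y

# Illustrative use: a 0.5 Hz cutoff strips a constant 0.02 g bias from a
# simulated 5 Hz platform vibration sampled at 200 Hz.
fs = 200.0
t = np.arange(0, 10, 1 / fs)
acc_measured = 0.02 + 0.001 * np.sin(2 * np.pi * 5 * t)  # bias + true vibration
acc_feedback = highpass(acc_measured, fc_hz=0.5, fs_hz=fs)
```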
Adaptive Inner-Loop Rover Control
NASA Technical Reports Server (NTRS)
Kulkarni, Nilesh; Ippolito, Corey; Krishnakumar, Kalmanje; Al-Ali, Khalid M.
2006-01-01
Adaptive control technology is developed for the inner-loop speed and steering control of the MAX Rover. MAX, a CMU-developed rover, is a compact, low-cost 4-wheel drive, 4-wheel steer (double Ackerman), high-clearance, agile, durable chassis, outfitted with sensors and electronics that make it ideally suited for supporting research relevant to intelligent teleoperation and for use as a low-cost autonomous robotic test bed and appliance. The design consists of a feedback-linearization-based controller with proportional-integral (PI) feedback that is augmented by an online adaptive neural network. The adaptation law has guaranteed stability properties for safe operation. The control design is retrofit in nature so that it fits inside the outer-loop path planning algorithms. Successful hardware implementation of the controller is illustrated for several scenarios consisting of actuator failures and modeling errors in the nominal design.
A closed-loop model of the respiratory system: focus on hypercapnia and active expiration.
Molkov, Yaroslav I; Shevtsova, Natalia A; Park, Choongseok; Ben-Tal, Alona; Smith, Jeffrey C; Rubin, Jonathan E; Rybak, Ilya A
2014-01-01
Breathing is a vital process providing the exchange of gases between the lungs and atmosphere. During quiet breathing, pumping air from the lungs is mostly performed by contraction of the diaphragm during inspiration, and muscle contraction during expiration does not play a significant role in ventilation. In contrast, during intense exercise or severe hypercapnia forced or active expiration occurs in which the abdominal "expiratory" muscles become actively involved in breathing. The mechanisms of this transition remain unknown. To study these mechanisms, we developed a computational model of the closed-loop respiratory system that describes the brainstem respiratory network controlling the pulmonary subsystem representing lung biomechanics and gas (O2 and CO2) exchange and transport. The lung subsystem provides two types of feedback to the neural subsystem: a mechanical one from pulmonary stretch receptors and a chemical one from central chemoreceptors. The neural component of the model simulates the respiratory network that includes several interacting respiratory neuron types within the Bötzinger and pre-Bötzinger complexes, as well as the retrotrapezoid nucleus/parafacial respiratory group (RTN/pFRG) representing the central chemoreception module targeted by chemical feedback. The RTN/pFRG compartment contains an independent neural generator that is activated at an increased CO2 level and controls the abdominal motor output. The lung volume is controlled by two pumps, a major one driven by the diaphragm and an additional one activated by abdominal muscles and involved in active expiration. The model represents the first attempt to model the transition from quiet breathing to breathing with active expiration. The model suggests that the closed-loop respiratory control system switches to active expiration via a quantal acceleration of expiratory activity, when increases in breathing rate and phrenic amplitude no longer provide sufficient ventilation. The model can be used for simulation of closed-loop control of breathing under different conditions including respiratory disorders.
NASA Astrophysics Data System (ADS)
Al-Rabadi, Anas N.
2009-10-01
This research introduces a new method of intelligent control for the Buck converter using a newly developed small-signal model of the pulse-width modulation (PWM) switch. The new method uses a supervised neural network to estimate certain parameters of the transformed system matrix [Ã]. Then a numerical technique from robust control, linear matrix inequality (LMI) optimization, is used to determine the permutation matrix [P] so that a complete system transformation {[B̃], [C̃], [Ẽ]} is possible. The transformed model is then reduced using the method of singular perturbation, and state feedback control is applied to enhance system performance. The experimental results show that the new control methodology simplifies the Buck converter model and thus allows a simpler controller that produces the desired system response for performance enhancement.
The neural substrates of driving at a safe distance: a functional MRI study.
Uchiyama, Yuji; Ebe, Kazutoshi; Kozato, Akio; Okada, Tomohisa; Sadato, Norihiro
2003-12-11
An important driving skill is the ability to maintain a safe distance from a preceding car. To determine the neural substrates of this skill we performed functional magnetic resonance imaging of simulated driving in 21 subjects. Subjects used a joystick to adjust their own driving speed in order to maintain a constant distance from a preceding car traveling at varying speeds. The task activated multiple brain regions. Activation of the cerebellum may reflect visual feedback during smooth tracking of the preceding car. Co-activation of the basal ganglia, thalamus and premotor cortex is related to movement selection. Activation of a premotor-parietal network is related to visuo-motor co-ordination. Task performance was negatively correlated with anterior cingulate activity, consistent with the role of this region in error detection and response selection.
Region stability analysis and tracking control of memristive recurrent neural network.
Bao, Gang; Zeng, Zhigang; Shen, Yanjun
2018-02-01
The memristor was first postulated by Leon Chua and later realized by the Hewlett-Packard (HP) laboratory. Research results show that memristors can be used to simulate the synapses of neurons. This paper presents a class of recurrent neural networks with HP memristors. Simulations first show that the memristive recurrent neural network has richer compound dynamics than the traditional recurrent neural network. It is then derived that an n-dimensional memristive recurrent neural network is composed of [Formula: see text] sub-neural networks that do not have a common equilibrium point. By designing a tracking controller, the memristive neural network can be made to converge to the desired sub-neural network. Finally, two numerical examples are given to verify the validity of our result. Copyright © 2017 Elsevier Ltd. All rights reserved.
Automatic learning rate adjustment for self-supervising autonomous robot control
NASA Technical Reports Server (NTRS)
Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.
1992-01-01
Described is an application in which an Artificial Neural Network (ANN) controls the positioning of a robot arm with five degrees of freedom by using visual feedback provided by two cameras. This application and the specific ANN model, local linear maps, are based on the work of Ritter, Martinetz, and Schulten. We extended their approach by generating a filtered, average positioning error from the continuous camera feedback and by coupling the learning rate to this error. When the network learns to position the arm, the positioning error decreases and so does the learning rate, until the system stabilizes at a minimum error and learning rate. This eliminates the need for a predetermined cooling schedule. The automatic cooling procedure results in closed-loop control with no distinction between a learning phase and a production phase. If the positioning error suddenly starts to increase due to an internal failure, such as a broken joint, or an environmental change, such as a camera moving, the learning rate increases accordingly. Thus, learning is automatically activated and the network adapts to the new condition, after which the error decreases again and learning is 'shut off'. The automatic cooling is therefore a prerequisite for the autonomy and the fault tolerance of the system.
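The error-coupled learning rate described above can be sketched as follows; the filter constant, gain, and error trace are illustrative assumptions rather than the values used with the local linear maps.

```python
import numpy as np

class ErrorCoupledRate:
    """Learning rate that tracks a low-pass filtered positioning error, so learning
    'cools' as the arm becomes accurate and re-activates when the error grows
    again (e.g., after a camera is moved or a joint breaks)."""
    def __init__(self, gain=0.5, smoothing=0.95):
        self.gain = gain
        self.smoothing = smoothing
        self.filtered_error = 0.0

    def update(self, positioning_error):
        # Exponentially weighted average of the magnitude of the camera-derived error.
        self.filtered_error = (self.smoothing * self.filtered_error
                               + (1.0 - self.smoothing) * abs(positioning_error))
        return self.gain * self.filtered_error  # current learning rate

# Illustrative use: the error shrinks during learning, then jumps after a disturbance.
rng = np.random.default_rng(4)
rate = ErrorCoupledRate()
errors = list(np.linspace(5.0, 0.1, 200)) + list(5.0 + rng.normal(0, 0.2, 50))
lrs = [rate.update(e) for e in errors]  # learning rate falls, then rises again
```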
Liang, X B; Wang, J
2000-01-01
This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special problem that can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and will converge to the set of equilibria of the neural network for any initial point in the feasible bounded region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory will eventually converge to the feasible region for any initial state, even outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
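A projection-type recurrent network of the kind described above can be sketched as a simple ODE integrated with Euler steps; the exact dynamics, gains, and convergence analysis in the paper may differ, so treat this as an illustrative assumption.

```python
import numpy as np

def projection_network(grad_f, lower, upper, x0, alpha=0.5, dt=0.01, steps=5000):
    """Euler simulation of the projection-type recurrent network
        dx/dt = -x + P(x - alpha * grad_f(x)),
    where P(.) clips its argument to the box [lower, upper].  Equilibria of these
    dynamics satisfy the optimality condition for minimizing f over the box, and
    trajectories starting outside the box are attracted back into it."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x += dt * (np.clip(x - alpha * grad_f(x), lower, upper) - x)
    return x

# Illustrative use: strictly convex quadratic f(x) = 0.5*||x - [2, -3]||^2 over the
# box [-1, 1]^2; the constrained optimum is [1, -1].
grad_f = lambda x: x - np.array([2.0, -3.0])
print(projection_network(grad_f, lower=-1.0, upper=1.0, x0=np.zeros(2)))
```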
Leménager, Tagrid; Dieter, Julia; Hill, Holger; Hoffmann, Sabine; Reinhard, Iris; Beutel, Martin; Vollstädt-Klein, Sabine; Kiefer, Falk; Mann, Karl
2016-01-01
Background and aims: Internet gaming addiction appears to be related to self-concept deficits and increased angular gyrus (AG)-related identification with one’s avatar. For increased social network use, a few existing studies suggest striatal-related positive social feedback as an underlying factor. However, whether an impaired self-concept and its reward-based compensation through the online presentation of an idealized version of the self are related to pathological social network use has not been investigated yet. We aimed to compare different stages of pathological Internet game and social network use to explore the neural basis of avatar and self-identification in addictive use. Methods: About 19 pathological Internet gamers, 19 pathological social network users, and 19 healthy controls underwent functional magnetic resonance imaging while completing a self-retrieval paradigm, asking participants to rate the degree to which various self-concept-related characteristics described their self, ideal, and avatar. Self-concept-related characteristics were also psychometrically assessed. Results: Psychometric testing indicated that pathological Internet gamers exhibited higher self-concept deficits generally, whereas pathological social network users exhibited deficits in emotion regulation only. We observed left AG hyperactivations in Internet gamers during avatar reflection and a correlation with symptom severity. Striatal hypoactivations during self-reflection (vs. ideal reflection) were observed in social network users and were correlated with symptom severity. Discussion and conclusion: Internet gaming addiction appears to be linked to increased identification with one’s avatar, evidenced by high left AG activations in pathological Internet gamers. Addiction to social networks seems to be characterized by emotion regulation deficits, reflected by reduced striatal activation during self-reflection compared to during ideal reflection. PMID:27415603
Goodyear, Kimberly; Parasuraman, Raja; Chernyak, Sergey; de Visser, Ewart; Madhavan, Poornima; Deshpande, Gopikrishna; Krueger, Frank
2017-10-01
As society becomes more reliant on machines and automation, understanding how people utilize advice is a necessary endeavor. Our objective was to reveal the underlying neural associations during advice utilization from expert human and machine agents with fMRI and multivariate Granger causality analysis. During an X-ray luggage-screening task, participants accepted or rejected good or bad advice from either the human or machine agent framed as experts with manipulated reliability (high miss rate). We showed that the machine-agent group decreased their advice utilization compared to the human-agent group and these differences in behaviors during advice utilization could be accounted for by high expectations of reliable advice and changes in attention allocation due to miss errors. Brain areas involved with the salience and mentalizing networks, as well as sensory processing involved with attention, were recruited during the task and the advice utilization network consisted of attentional modulation of sensory information with the lingual gyrus as the driver during the decision phase and the fusiform gyrus as the driver during the feedback phase. Our findings expand on the existing literature by showing that misses degrade advice utilization, which is represented in a neural network involving salience detection and self-processing with perceptual integration.
Models of vocal learning in the songbird: Historical frameworks and the stabilizing critic.
Nick, Teresa A
2015-10-01
Birdsong is a form of sensorimotor learning that involves a mirror-like system that activates with both song hearing and production. Early models of song learning, based on behavioral measures, identified key features of vocal plasticity, such as the requirements for memorization of a tutor song and auditory feedback during song practice. The concept of a comparator, which compares the memory of the tutor song to auditory feedback, featured prominently. Later models focused on linking anatomically-defined neural modules to behavioral concepts, such as the comparator. Exploiting the anatomical modularity of the songbird brain, localized lesions illuminated mechanisms of the neural song system. More recent models have integrated neuronal mechanisms identified in other systems with observations in songbirds. While these models explain multiple aspects of song learning, they must incorporate computational elements based on unknown biological mechanisms to bridge the motor-to-sensory delay and/or transform motor signals into the sensory domain. Here, I introduce the stabilizing critic hypothesis, which enables sensorimotor learning by (1) placing a purely sensory comparator afferent of the song system and (2) endowing song system disinhibitory interneuron networks with the capacity both to bridge the motor-sensory delay through prolonged bursting and to stabilize song segments selectively based on the comparator signal. These proposed networks stabilize an otherwise variable signal generated by both putative mirror neurons and a cortical-basal ganglia-thalamic loop. This stabilized signal then temporally converges with a matched premotor signal in the efferent song motor cortex, promoting spike-timing-dependent plasticity in the premotor circuitry and behavioral song learning. © 2014 Wiley Periodicals, Inc.
Kleber, Boris; Zeitouni, Anthony G; Friberg, Anders; Zatorre, Robert J
2013-04-03
Somatosensation plays an important role in the motor control of vocal functions, yet its neural correlate and relation to vocal learning is not well understood. We used fMRI in 17 trained singers and 12 nonsingers to study the effects of vocal-fold anesthesia on the vocal-motor singing network as a function of singing expertise. Tasks required participants to sing musical target intervals under normal conditions and after anesthesia. At the behavioral level, anesthesia altered pitch accuracy in both groups, but singers were less affected than nonsingers, indicating an experience-dependent effect of the intervention. At the neural level, this difference was accompanied by distinct patterns of decreased activation in singers (cortical and subcortical sensory and motor areas) and nonsingers (subcortical motor areas only) respectively, suggesting that anesthesia affected the higher-level voluntary (explicit) motor and sensorimotor integration network more in experienced singers, and the lower-level (implicit) subcortical motor loops in nonsingers. The right anterior insular cortex (AIC) was identified as the principal area dissociating the effect of expertise as a function of anesthesia by three separate sources of evidence. First, it responded differently to anesthesia in singers (decreased activation) and nonsingers (increased activation). Second, functional connectivity between AIC and bilateral A1, M1, and S1 was reduced in singers but augmented in nonsingers. Third, increased BOLD activity in right AIC in singers was correlated with larger pitch deviation under anesthesia. We conclude that the right AIC and sensory-motor areas play a role in experience-dependent modulation of feedback integration for vocal motor control during singing.
Computer simulations of neural mechanisms explaining upper and lower limb excitatory neural coupling
2010-01-01
Background: When humans perform rhythmic upper and lower limb locomotor-like movements, there is an excitatory effect of upper limb exertion on lower limb muscle recruitment. To investigate potential neural mechanisms for this behavioral observation, we developed computer simulations modeling interlimb neural pathways among central pattern generators. We hypothesized that enhancement of muscle recruitment from interlimb spinal mechanisms was not sufficient to explain muscle enhancement levels observed in experimental data. Methods: We used Matsuoka oscillators for the central pattern generators (CPG) and determined parameters that enhanced amplitudes of rhythmic steady state bursts. Potential mechanisms for output enhancement were excitatory and inhibitory sensory feedback gains, excitatory and inhibitory interlimb coupling gains, and coupling geometry. We first simulated the simplest case, a single CPG, and then expanded the model to have two CPGs and lastly four CPGs. In the two and four CPG models, the lower limb CPGs did not receive supraspinal input such that the only mechanisms available for enhancing output were interlimb coupling gains and sensory feedback gains. Results: In a two-CPG model with inhibitory sensory feedback gains, only excitatory gains of ipsilateral flexor-extensor/extensor-flexor coupling produced reciprocal upper-lower limb bursts and enhanced output up to 26%. In a two-CPG model with excitatory sensory feedback gains, excitatory gains of contralateral flexor-flexor/extensor-extensor coupling produced reciprocal upper-lower limb bursts and enhanced output up to 100%. However, within a given excitatory sensory feedback gain, enhancement due to excitatory interlimb gains could only reach levels up to 20%. Interconnecting four CPGs to have ipsilateral flexor-extensor/extensor-flexor coupling, contralateral flexor-flexor/extensor-extensor coupling, and bilateral flexor-extensor/extensor-flexor coupling could enhance motor output up to 32%. Enhancement observed in experimental data exceeded 32%. Enhancement within this symmetrical four-CPG neural architecture was more sensitive to relatively small interlimb coupling gains. Excitatory sensory feedback gains could produce greater output amplitudes, but larger gains were required for entrainment compared to inhibitory sensory feedback gains. Conclusions: Based on these simulations, symmetrical interlimb coupling can account for much, but not all of the excitatory neural coupling between upper and lower limbs during rhythmic locomotor-like movements. PMID:21143960
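The Matsuoka oscillator used above as the CPG building block has a standard two-neuron (flexor/extensor) form; a minimal sketch with illustrative parameter values follows. The interlimb coupling geometries and the specific gains explored in the study are not reproduced here.

```python
import numpy as np

def matsuoka_cpg(steps=20000, dt=0.001, tau_r=0.05, tau_a=0.6,
                 beta=2.5, w_mutual=2.0, drive=1.0):
    """Two mutually inhibiting Matsuoka neurons (a flexor/extensor half-center):
        tau_r * du_i/dt = -u_i - beta*v_i - w*y_j + drive
        tau_a * dv_i/dt = -v_i + y_i,      y_i = max(0, u_i)
    The rectified outputs y alternate between the two neurons, producing the
    rhythmic antiphase bursts of a single limb CPG."""
    u = np.array([0.1, 0.0])   # membrane-like states (slight asymmetry to start)
    v = np.zeros(2)            # self-adaptation (fatigue) states
    ys = np.zeros((steps, 2))
    for k in range(steps):
        y = np.maximum(u, 0.0)
        du = (-u - beta * v - w_mutual * y[::-1] + drive) / tau_r
        dv = (-v + y) / tau_a
        u += dt * du
        v += dt * dv
        ys[k] = y
    return ys

bursts = matsuoka_cpg()   # bursts[:, 0] and bursts[:, 1] oscillate in antiphase
```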
Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control
Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.
1997-01-01
One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.
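A minimal sketch of the general neural-network model predictive control idea (a learned forward model rolled out over a receding horizon to pick the next control move) is given below; the random-shooting optimizer, the toy plant, and all parameter values are illustrative assumptions, not the grinding or leaching circuit models used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

def nn_model(x, u):
    # Stand-in for a trained (recurrent) neural network plant model x[k+1] = f(x[k], u[k]).
    return 0.9 * x + 0.5 * np.tanh(u)

def mpc_step(x, setpoint, horizon=10, candidates=256):
    """Receding-horizon control with the learned model: sample candidate control
    sequences, roll each through the model, and apply the first move of the
    sequence with the lowest predicted set-point tracking cost."""
    u_seqs = rng.uniform(-2.0, 2.0, size=(candidates, horizon))
    costs = np.zeros(candidates)
    for i, u_seq in enumerate(u_seqs):
        xp = x
        for u in u_seq:
            xp = nn_model(xp, u)
            costs[i] += (xp - setpoint) ** 2
    return u_seqs[np.argmin(costs), 0]

# Illustrative closed loop: drive the toy plant to a set-point of 1.5.
x, setpoint = 0.0, 1.5
for _ in range(30):
    u = mpc_step(x, setpoint)
    x = nn_model(x, u)  # here the "true" plant happens to equal the model
```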
The Role of Corticostriatal Systems in Speech Category Learning.
Yi, Han-Gyol; Maddox, W Todd; Mumford, Jeanette A; Chandrasekaran, Bharath
2016-04-01
One of the most difficult category learning problems for humans is learning nonnative speech categories. While feedback-based category training can enhance speech learning, the mechanisms underlying these benefits are unclear. In this functional magnetic resonance imaging study, we investigated neural and computational mechanisms underlying feedback-dependent speech category learning in adults. Positive feedback activated a large corticostriatal network including the dorsolateral prefrontal cortex, inferior parietal lobule, middle temporal gyrus, caudate, putamen, and the ventral striatum. Successful learning was contingent upon the activity of domain-general category learning systems: the fast-learning reflective system, involving the dorsolateral prefrontal cortex that develops and tests explicit rules based on the feedback content, and the slow-learning reflexive system, involving the putamen in which the stimuli are implicitly associated with category responses based on the reward value in feedback. Computational modeling of response strategies revealed significant use of reflective strategies early in training and greater use of reflexive strategies later in training. Reflexive strategy use was associated with increased activation in the putamen. Our results demonstrate a critical role for the reflexive corticostriatal learning system as a function of response strategy and proficiency during speech category learning. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Stienen, Bernard M C; Schindler, Konrad; de Gelder, Beatrice
2012-07-01
Given the presence of massive feedback loops in brain networks, it is difficult to disentangle the contribution of feedforward and feedback processing to the recognition of visual stimuli, in this case, of emotional body expressions. The aim of the work presented in this letter is to shed light on how well feedforward processing explains rapid categorization of this important class of stimuli. By means of parametric masking, it may be possible to control the contribution of feedback activity in human participants. A close comparison is presented between human recognition performance and the performance of a computational neural model that exclusively modeled feedforward processing and was engineered to fulfill the computational requirements of recognition. Results show that the longer the stimulus onset asynchrony (SOA), the closer the performance of the human participants was to the values predicted by the model, with an optimum at an SOA of 100 ms. At short SOA latencies, human performance deteriorated, but the categorization of the emotional expressions was still above baseline. The data suggest that, although theoretically, feedback arising from inferotemporal cortex is likely to be blocked when the SOA is 100 ms, human participants still seem to rely on more local visual feedback processing to equal the model's performance.
An Introduction to Neural Networks for Hearing Aid Noise Recognition.
ERIC Educational Resources Information Center
Kim, Jun W.; Tyler, Richard S.
1995-01-01
This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…
Matching and selection of a specific subjective experience: conjugate matching and experience.
Vimal, Ram Lakhan Pandey
2010-06-01
We incorporate the dual-mode concept in our dual-aspect PE-SE (proto-experience-subjective experience) framework. The two modes are: (1) the non-tilde mode that is the physical (material) and mental aspect of cognition (memory and attention) related feedback signals in a neural network, which refers to the cognitive nearest past approaching towards present; and (2) the tilde mode that is the material and mental aspect of the feed-forward signals due to external environmental input and internal endogenous input, which pertains to the nearest future approaching towards present and is an entropy-reversed representation of the non-tilde mode. Furthermore, one could argue that there are at least five sub-pathways in the stimulus-dependent feed-forward pathway and cognitive feedback pathway for information transfer in the brain dynamics: (i) classical axonal-dendritic neural sub-pathway including electromagnetic information field sub-pathway; (ii) quantum dendritic-dendritic microtubule (MT) (dendritic webs) sub-pathway; (iii) Ca(++)-related astroglial-neural sub-pathway; (iv) (a) the sub-pathway related to extrasynaptic signal transmission between fine distal dendrites of cortical neurons for the local subtle modulation due to voltages created by intradendritic dual-aspect charged surface effects within the Debye layer around endogenous structures such as microtubules (MT) and endoplasmic reticulum (ER) in dendrites, and (b) the sub-pathway related to extracellular volume transmission as fields of neural activity for the global modulation in axonal-dendritic neural sub-pathway; and (v) the sub-pathway related to information transmission via soliton propagation. We propose that: (i) the quantum conjugate matching between experiences in the mental aspect of the tilde mode and that of the non-tilde mode is related more to the mental aspect of the quantum microtubule-dendritic-web and less to that of the non-quantum sub-pathways; and (ii) the classical matching between experiences in the mental aspect of the tilde mode and that of the non-tilde mode is related to the mental aspect of the non-quantum sub-pathways (such as the classical axonal-dendritic neural sub-pathway). In both cases, a specific SE is selected when the tilde mode interacts with the non-tilde mode to match for a specific SE, and when the necessary ingredients of SEs (such as the formation of neural networks, wakefulness, re-entry, attention, working memory, and so on) are satisfied. When the conjugate match is made between the two modes, the world-presence (Now) is disclosed. The material aspects in the tilde mode and those in the non-tilde mode are matched to link structure with function, whereas the mental aspects in the tilde mode and those in the non-tilde mode are matched to link experience with structure and function.
Modified neural networks for rapid recovery of tokamak plasma parameters for real time control
NASA Astrophysics Data System (ADS)
Sengupta, A.; Ranjan, P.
2002-07-01
Two modified neural network techniques are used for the identification of the equilibrium plasma parameters of the Superconducting Steady State Tokamak I from external magnetic measurements. This is expected to ultimately assist in real time plasma control. Unlike the conventional network structure, where a single network with the optimum number of processing elements calculates the outputs, in the first method a multinetwork system connected in parallel performs the calculations. This network is called the double neural network. The accuracy of the recovered parameters is clearly higher than that of the conventional network. The other type of neural network used here is based on statistical function parametrization combined with a neural network. The principal component transformation removes linear dependences from the measurements and a dimensional reduction process reduces the dimensionality of the input space. This reduced and transformed input set, rather than the entire set, is fed into the neural network input. This is known as the principal component transformation-based neural network. The accuracy of the recovered parameters in the latter type of modified network is found to be a further improvement over the accuracy of the double neural network. This result differs from that obtained in an earlier work, where the double neural network showed better performance. The conventional network and the function parametrization methods have also been used for comparison. The conventional network has been used for an optimization of the set of magnetic diagnostics. The effective set of sensors, as assessed by this network, is compared with the principal component based network. Fault tolerance of the neural networks has been tested. The double neural network showed the maximum resistance to faults in the diagnostics, while the principal component based network performed poorly. Finally, the processing times of the methods have been compared. The double network and the principal component network involve the minimum computation time, although the conventional network also performs well enough to be used in real time.
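The principal component transformation-based network described above feeds a decorrelated, reduced input set to the network. The sketch below illustrates only that preprocessing step on synthetic data; the actual SST-1 magnetic diagnostics, the retained dimensionality, and the network that follows are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for magnetic measurements (samples x sensors).
raw = rng.normal(size=(500, 40))
raw[:, 1] = 0.8 * raw[:, 0] + 0.05 * rng.normal(size=500)   # a linearly dependent sensor

# Principal component transformation: centre, diagonalise the covariance,
# keep the components carrying most of the variance (dimensional reduction).
mean = raw.mean(axis=0)
cov = np.cov(raw - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
keep = 10                                     # illustrative choice
projection = eigvecs[:, order[:keep]]
reduced = (raw - mean) @ projection           # this reduced set feeds the network

print("input dimension before/after:", raw.shape[1], reduced.shape[1])

# A network with one or more hidden layers would then map `reduced` to the
# equilibrium parameters; its training loop is omitted here for brevity.
```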
Jeng, J T; Lee, T T
2000-01-01
A Chebyshev polynomial-based unified model (CPBUM) neural network is introduced and applied to the control of a magnetic bearing system. First, we show that the CPBUM neural network not only has the same universal approximation capability but also has a faster learning speed than the conventional feedforward/recurrent neural network. It turns out that the CPBUM neural network is more suitable for controller design than the conventional feedforward/recurrent neural network. Second, we propose an inverse system method, based on the CPBUM neural network, to control a magnetic bearing system. The proposed controller has two structures, namely, off-line and on-line learning structures. We derive a new learning algorithm for each proposed structure. The experimental results show that the proposed neural network architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
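A minimal sketch of the functional-expansion idea behind a Chebyshev polynomial-based network: expand the input with Chebyshev polynomials and fit a simple read-out. The least-squares fit below merely stands in for the paper's off-line/on-line learning algorithms, and the target function is a toy example.

```python
import numpy as np

def chebyshev_features(x, order=5):
    """Expand a scalar input in [-1, 1] with Chebyshev polynomials T_0..T_order
    using the recurrence T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x)."""
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2.0 * x * T[-1] - T[-2])
    return np.stack(T, axis=-1)

# Fit a linear read-out on the expanded features (ordinary least squares here,
# standing in for the paper's learning algorithms).
x = np.linspace(-1.0, 1.0, 200)
target = np.sin(3.0 * x) + 0.3 * x ** 2            # toy nonlinear mapping
Phi = chebyshev_features(x, order=7)
weights, *_ = np.linalg.lstsq(Phi, target, rcond=None)
print("max approximation error:", np.abs(Phi @ weights - target).max())
```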
ChainMail based neural dynamics modeling of soft tissue deformation for surgical simulation.
Zhang, Jinao; Zhong, Yongmin; Smith, Julian; Gu, Chengfan
2017-07-20
Realistic and real-time modeling and simulation of soft tissue deformation is a fundamental research issue in the field of surgical simulation. In this paper, a novel cellular neural network approach is presented for modeling and simulation of soft tissue deformation by combining the neural dynamics of a cellular neural network with the ChainMail mechanism. The proposed method formulates the problem of elastic deformation into cellular neural network activities to avoid the complex computation of elasticity. The local position adjustments of ChainMail are incorporated into the cellular neural network as the local connectivity of cells, through which the dynamic behaviors of soft tissue deformation are transformed into the neural dynamics of the cellular neural network. Experiments demonstrate that the proposed neural network approach is capable of modeling the nonlinear deformation and typical mechanical behaviors of soft tissues. The proposed method not only extends ChainMail's linear deformation with the nonlinear characteristics of neural dynamics but also enables the cellular neural network to follow the principle of continuum mechanics to simulate soft tissue deformation.
Schiebener, Johannes; Brand, Matthias
2015-06-01
When decisions are made under objective risk conditions, the probabilities of the consequences of the available options are either provided or calculable. Brand et al. (Neural Networks 19:1266-1276, 2006) introduced a model describing the neuro-cognitive processes involved in such decisions. In this model, executive functions associated with activity in the fronto-striatal loop are important for developing and applying decision-making strategies, and for verifying, adapting, or revising strategies according to feedback. Emotional rewards and punishments learned from such feedback accompany these processes. In this literature review, we found support for the role of executive functions, but also found evidence for the importance of further cognitive abilities in decision making. Moreover, in addition to reflective processing (driven by cognition), decisions can be guided by impulsive processing (driven by anticipation of emotional reward and punishment). Reflective and impulsive processing may interact during decision making, affecting the evaluation of available options, as both processes are affected by feedback. Decision-making processes are furthermore modulated by individual attributes (e.g., age), and external influences (e.g., stressors). Accordingly, we suggest a revised model of decision making under objective risk conditions.
Neurophysiological correlates of anhedonia in feedback processing
Mies, Gabry W.; Van den Berg, Ivo; Franken, Ingmar H. A.; Smits, Marion; Van der Molen, Maurits W.; Van der Veen, Frederik M.
2013-01-01
Disturbances in feedback processing and a dysregulation of the neural circuit in which the cingulate cortex plays a key role have been frequently observed in depression. Since depression is a heterogeneous disease, instead of focusing on the depressive state in general, this study investigated the relations between the two core symptoms of depression, i.e., depressed mood and anhedonia, and the neural correlates of feedback processing using fMRI. The focus was on the different subdivisions of the anterior cingulate cortex (ACC). Undergraduates with varying levels of depressed mood and anhedonia performed a time-estimation task in which they received positive and negative feedback that was either valid or invalid (i.e., related vs. unrelated to actual performance). The rostral cingulate zone (RCZ), corresponding to the dorsal part of the ACC, was less active in response to feedback in more anhedonic individuals, after correcting for the influence of depressed mood, whereas the subgenual ACC was more active in these individuals. Task performance was not affected by anhedonia, however. No statistically significant effects were found for depressed mood above and beyond the effects of anhedonia. This study therefore implies that increasing levels of anhedonia involve changes in the neural circuitry underlying feedback processing. PMID:23532800
Xu, Bin; Yang, Chenguang; Pan, Yongping
2015-10-01
This paper studies both indirect and direct global neural control of strict-feedback systems in the presence of unknown dynamics, using the dynamic surface control (DSC) technique in a novel manner. A new switching mechanism is designed to combine an adaptive neural controller in the neural approximation domain with a robust controller that pulls the transient states back into the neural approximation domain from the outside. In comparison with the conventional control techniques, which could only achieve semiglobally uniformly ultimately bounded stability, the proposed control scheme guarantees that all the signals in the closed-loop system are globally uniformly ultimately bounded, such that the conventional constraints on initial conditions of the neural control system can be relaxed. Simulation studies of a hypersonic flight vehicle (HFV) are performed to demonstrate the effectiveness of the proposed global neural DSC design.
NASA Technical Reports Server (NTRS)
Baram, Yoram
1992-01-01
Report presents analysis of nested neural networks, consisting of interconnected subnetworks. Analysis based on simplified mathematical models more appropriate for artificial electronic neural networks, partly applicable to biological neural networks. Nested structure allows for retrieval of individual subpatterns. Requires fewer wires and connection devices than fully connected networks, and allows for local reconstruction of damaged subnetworks without rewiring entire network.
Mocanu, Decebal Constantin; Mocanu, Elena; Stone, Peter; Nguyen, Phuong H; Gibescu, Madeleine; Liotta, Antonio
2018-06-19
Owing to the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdős-Rényi random graph) of two consecutive layers of neurons into a scale-free topology, during learning. Our method replaces the fully-connected layers of artificial neural networks with sparse ones before training, quadratically reducing the number of parameters, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
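The prune-and-regrow step at the heart of sparse evolutionary training can be sketched compactly; the layer sizes, the removal fraction, and the weight initialisation below are illustrative choices, and the ordinary training epoch between evolution steps is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def erdos_renyi_mask(n_in, n_out, density=0.1):
    """Initial sparse connectivity between two consecutive layers."""
    return (rng.random((n_in, n_out)) < density).astype(float)

def set_evolve(weights, mask, zeta=0.3):
    """One topology-evolution step: prune the fraction `zeta` of the
    smallest-magnitude active weights, then regrow as many new random links."""
    active = np.flatnonzero(mask)
    n_prune = int(zeta * active.size)
    # prune: zero out the weakest existing connections
    weakest = active[np.argsort(np.abs(weights.flat[active]))[:n_prune]]
    mask.flat[weakest] = 0.0
    weights.flat[weakest] = 0.0
    # regrow: activate the same number of currently unused positions
    inactive = np.flatnonzero(mask == 0)
    new = rng.choice(inactive, size=n_prune, replace=False)
    mask.flat[new] = 1.0
    weights.flat[new] = rng.normal(scale=0.01, size=n_prune)
    return weights, mask

mask = erdos_renyi_mask(784, 300)
weights = rng.normal(scale=0.05, size=mask.shape) * mask
# ... a normal training epoch would update `weights * mask` here ...
weights, mask = set_evolve(weights, mask)
print("density after evolution:", mask.mean())
```

The overall parameter count stays fixed, while repeated prune-and-regrow steps gradually reshape which connections exist.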
Quantum neural networks: Current status and prospects for development
NASA Astrophysics Data System (ADS)
Altaisky, M. V.; Kaputkina, N. E.; Krylov, V. A.
2014-11-01
The idea of quantum artificial neural networks, first formulated in [34], unites the artificial neural network concept with the quantum computation paradigm. Quantum artificial neural networks were first systematically considered in the PhD thesis by T. Menneer (1998). Based on the works of Menneer and Narayanan [42, 43], Kouda, Matsui, and Nishimura [35, 36], Altaisky [2, 68], Zhou [67], and others, quantum-inspired learning algorithms for neural networks were developed, and are now used in various training programs and computer games [29, 30]. The first practically realizable scaled hardware-implemented model of the quantum artificial neural network is obtained by D-Wave Systems, Inc. [33]. It is a quantum Hopfield network implemented on the basis of superconducting quantum interference devices (SQUIDs). In this work we analyze possibilities and underlying principles of an alternative way to implement quantum neural networks on the basis of quantum dots. A possibility of using quantum neural network algorithms in automated control systems, associative memory devices, and in modeling biological and social networks is examined.
Network feedback regulates motor output across a range of modulatory neuron activity
Spencer, Robert M.
2016-01-01
Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5–35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. PMID:27030739
Long, Lijun; Zhao, Jun
2017-07-01
In this paper, the problem of adaptive neural output-feedback control is addressed for a class of multi-input multi-output (MIMO) switched uncertain nonlinear systems with unknown control gains. Neural networks (NNs) are used to approximate unknown nonlinear functions. In order to avoid the conservativeness caused by adoption of a common observer for all subsystems, an MIMO NN switched observer is designed to estimate unmeasurable states. A new switched observer-based adaptive neural control technique for the problem studied is then provided by exploiting the classical average dwell time (ADT) method, the backstepping method, and the Nussbaum gain technique. It effectively handles the obstacle posed by the coexistence of multiple Nussbaum-type function terms, and improves the classical ADT method, since the exponential decline property of Lyapunov functions for individual subsystems is no longer satisfied. It is shown that the proposed technique is able to guarantee semiglobal uniform ultimate boundedness of all the signals in the closed-loop system under a class of switching signals with ADT, and the tracking errors converge to a small neighborhood of the origin. The effectiveness of the proposed approach is illustrated by its application to a two-inverted-pendulum system.
Zheng, Zane Z; Munhall, Kevin G; Johnsrude, Ingrid S
2010-08-01
The fluency and the reliability of speech production suggest a mechanism that links motor commands and sensory feedback. Here, we examined the neural organization supporting such links by using fMRI to identify regions in which activity during speech production is modulated according to whether auditory feedback matches the predicted outcome or not and by examining the overlap with the network recruited during passive listening to speech sounds. We used real-time signal processing to compare brain activity when participants whispered a consonant-vowel-consonant word ("Ted") and either heard this clearly or heard voice-gated masking noise. We compared this to when they listened to yoked stimuli (identical recordings of "Ted" or noise) without speaking. Activity along the STS and superior temporal gyrus bilaterally was significantly greater if the auditory stimulus was (a) processed as the auditory concomitant of speaking and (b) did not match the predicted outcome (noise). The network exhibiting this Feedback Type x Production/Perception interaction includes a superior temporal gyrus/middle temporal gyrus region that is activated more when listening to speech than to noise. This is consistent with speech production and speech perception being linked in a control system that predicts the sensory outcome of speech acts and that processes an error signal in speech-sensitive regions when this and the sensory data do not match.
Zheng, Zane Z.; Munhall, Kevin G; Johnsrude, Ingrid S
2009-01-01
The fluency and reliability of speech production suggests a mechanism that links motor commands and sensory feedback. Here, we examine the neural organization supporting such links by using fMRI to identify regions in which activity during speech production is modulated according to whether auditory feedback matches the predicted outcome or not, and examining the overlap with the network recruited during passive listening to speech sounds. We use real-time signal processing to compare brain activity when participants whispered a consonant-vowel-consonant word (‘Ted’) and either heard this clearly, or heard voice-gated masking noise. We compare this to when they listened to yoked stimuli (identical recordings of ‘Ted’ or noise) without speaking. Activity along the superior temporal sulcus (STS) and superior temporal gyrus (STG) bilaterally was significantly greater if the auditory stimulus was a) processed as the auditory concomitant of speaking and b) did not match the predicted outcome (noise). The network exhibiting this Feedback type by Production/Perception interaction includes an STG/MTG region that is activated more when listening to speech than to noise. This is consistent with speech production and speech perception being linked in a control system that predicts the sensory outcome of speech acts, and that processes an error signal in speech-sensitive regions when this and the sensory data do not match. PMID:19642886
Neural network approaches to capture temporal information
NASA Astrophysics Data System (ADS)
van Veelen, Martijn; Nijhuis, Jos; Spaanenburg, Ben
2000-05-01
The automated design and construction of neural networks receives growing attention from the neural networks community. Both the growing availability of computing power and the development of mathematical and probabilistic theory have had a considerable impact on the design and modelling approaches for neural networks. This impact is most apparent in the application of neural networks to time series prediction. In this paper, we give our views on past, contemporary and future design and modelling approaches to neural forecasting.
The role of symmetry in neural networks and their Laplacian spectra.
de Lange, Siemon C; van den Heuvel, Martijn P; de Reus, Marcel A
2016-11-01
Human and animal nervous systems constitute complexly wired networks that form the infrastructure for neural processing and integration of information. The organization of these neural networks can be analyzed using the so-called Laplacian spectrum, providing a mathematical tool to produce systems-level network fingerprints. In this article, we examine a characteristic central peak in the spectrum of neural networks, including anatomical brain network maps of the mouse, cat and macaque, as well as anatomical and functional network maps of human brain connectivity. We link the occurrence of this central peak to the level of symmetry in neural networks, an intriguing aspect of network organization resulting from network elements that exhibit similar wiring patterns. Specifically, we propose a measure to capture the global level of symmetry of a network and show that, for both empirical networks and network models, the height of the main peak in the Laplacian spectrum is strongly related to node symmetry in the underlying network. Moreover, examination of spectra of duplication-based model networks shows that neural spectra are best approximated using a trade-off between duplication and diversification. Taken together, our results facilitate a better understanding of neural network spectra and the importance of symmetry in neural networks.
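For readers who want to reproduce the kind of spectral fingerprint discussed above, the sketch below computes the normalized Laplacian spectrum of a small adjacency matrix; the symmetry measure proposed in the paper itself is not reimplemented, but the toy graph shows how two identically wired nodes contribute an eigenvalue at the central peak.

```python
import numpy as np

def normalized_laplacian_spectrum(adjacency):
    """Eigenvalues of L = I - D^{-1/2} A D^{-1/2} for an undirected graph."""
    A = np.asarray(adjacency, dtype=float)
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(np.maximum(deg, 1e-12)), 0.0)
    L = np.eye(A.shape[0]) - (d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :])
    return np.sort(np.linalg.eigvalsh(L))

# Toy example: nodes 2 and 3 have identical wiring (both connect only to
# nodes 0 and 1), which produces an eigenvalue exactly at 1 -- the central
# peak associated with node symmetry in the article above.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [1, 1, 0, 0]], dtype=float)
print(normalized_laplacian_spectrum(A))
```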
A Phase-Locked Loop Epilepsy Network Emulator.
Watson, P D; Horecka, K M; Cohen, N J; Ratnam, R
2016-10-15
Most seizure forecasting employs statistical learning techniques that lack a representation of the network interactions that give rise to seizures. We present an epilepsy network emulator (ENE) that uses a network of interconnected phase-locked loops (PLLs) to model synchronous, circuit-level oscillations between electrocorticography (ECoG) electrodes. Using ECoG data from a canine-epilepsy model (Davis et al. 2011) and a physiological entropy measure (approximate entropy or ApEn, Pincus 1995), we demonstrate the entropy of the emulator phases increases dramatically during ictal periods across all ECoG recording sites and across all animals in the sample. Further, this increase precedes the observable voltage spikes that characterize seizure activity in the ECoG data. These results suggest that the ENE is sensitive to phase-domain information in the neural circuits measured by ECoG and that an increase in the entropy of this measure coincides with increasing likelihood of seizure activity. Understanding this unpredictable phase-domain electrical activity present in ECoG recordings may provide a target for seizure detection and feedback control.
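Approximate entropy (ApEn), the regularity statistic used above, can be computed directly from a scalar series. The implementation below follows the standard Pincus-style definition; the template length m and tolerance r are the usual user-chosen parameters, shown here with common illustrative values rather than those used in the study.

```python
import numpy as np

def approximate_entropy(series, m=2, r_factor=0.2):
    """Approximate entropy of a 1-D time series.

    m        : embedding (template) length
    r_factor : tolerance as a fraction of the series' standard deviation
    """
    x = np.asarray(series, dtype=float)
    N = x.size
    r = r_factor * x.std()

    def phi(m):
        # all overlapping templates of length m
        templates = np.array([x[i:i + m] for i in range(N - m + 1)])
        # Chebyshev (max-abs) distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        counts = np.mean(dist <= r, axis=1)           # self-matches included
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(3)
regular = np.sin(np.linspace(0, 40 * np.pi, 1000))
noisy = rng.normal(size=1000)
print("ApEn sine :", approximate_entropy(regular))   # low: predictable signal
print("ApEn noise:", approximate_entropy(noisy))     # high: irregular signal
```

Higher ApEn values indicate more irregular dynamics, which is the direction of change the emulator phases show during ictal periods.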
Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network.
Gilra, Aditya; Gerstner, Wulfram
2017-11-27
The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
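The published FOLLOW rule operates on a spiking network and learns the feedforward and recurrent weights; the rate-based caricature below is only meant to illustrate its two key ingredients as summarised above: the output error fed back through fixed random connections with negative gain, and a local update proportional to presynaptic activity times the projected error (applied here, for brevity, to a linear read-out rather than the full connectivity).

```python
import numpy as np

rng = np.random.default_rng(4)

n_neurons, n_out, dt, eta = 200, 1, 0.01, 5e-4
W_in = rng.normal(scale=1.0, size=(n_neurons, n_out))    # fixed encoders
W_fb = -10.0 * W_in                                       # fixed error feedback, negative gain
W_out = np.zeros((n_out, n_neurons))                      # learned read-out

def target(t):
    """Toy reference signal standing in for the desired dynamics."""
    return np.array([np.sin(2 * np.pi * 0.5 * t)])

r = np.zeros(n_neurons)
for step in range(20000):
    t = step * dt
    err = W_out @ r - target(t)                 # output error
    drive = W_in @ target(t) + W_fb @ err       # command plus fed-back error
    r += dt * (-r + np.tanh(drive))             # leaky rate dynamics
    # local update: presynaptic rate times the error projected onto the unit
    W_out -= eta * np.outer(err, r)

print("final absolute error:", np.abs(W_out @ r - target(t)).item())
```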
Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network
Gerstner, Wulfram
2017-01-01
The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically. PMID:29173280
Synchronization Control of Neural Networks With State-Dependent Coefficient Matrices.
Zhang, Junfeng; Zhao, Xudong; Huang, Jun
2016-11-01
This brief is concerned with synchronization control of a class of neural networks with state-dependent coefficient matrices. Being different from the existing drive-response neural networks in the literature, a novel model of drive-response neural networks is established. The concepts of uniformly ultimately bounded (UUB) synchronization and convex hull Lyapunov function are introduced. Then, by using the convex hull Lyapunov function approach, the UUB synchronization design of the drive-response neural networks is proposed, and a delay-independent control law guaranteeing the bounded synchronization of the neural networks is constructed. All the conditions presented are formulated in terms of bilinear matrix inequalities. By comparison, it is shown that the neural networks obtained in this brief are less conservative than those in the literature, and the bounded synchronization is suitable for the novel drive-response neural networks. Finally, an illustrative example is given to verify the validity of the obtained results.
Huang, Ri-Bo; Du, Qi-Shi; Wei, Yu-Tuo; Pang, Zong-Wen; Wei, Hang; Chou, Kuo-Chen
2009-02-07
Predicting the bioactivity of peptides and proteins is an important challenge in drug development and protein engineering. In this study we introduce a novel approach, the so-called "physics and chemistry-driven artificial neural network (Phys-Chem ANN)", to deal with such a problem. Unlike the existing ANN approaches, which were designed under the inspiration of biological neural systems, the Phys-Chem ANN approach is based on the physical and chemical principles, as well as the structural features of proteins. In the Phys-Chem ANN model the "hidden layers" are no longer virtual "neurons", but real structural units of proteins and peptides. It is a hybridization approach, which combines the linear free energy concept of quantitative structure-activity relationship (QSAR) with the advanced mathematical technique of ANN. The Phys-Chem ANN approach has adopted an iterative and feedback procedure, incorporating both machine-learning and artificial intelligence capabilities. In addition to making more accurate predictions for the bioactivities of proteins and peptides than is possible with the traditional QSAR approach, the Phys-Chem ANN approach can also provide more insight into the relationship between bioactivities and the structures involved than the ANN approach does. As an example of the application of the Phys-Chem ANN approach, a predictive model for the conformational stability of human lysozyme is presented.
SortNet: learning to rank by a neural preference function.
Rigutini, Leonardo; Papini, Tiziano; Maggini, Marco; Scarselli, Franco
2011-09-01
Relevance ranking consists in sorting a set of objects with respect to a given criterion. However, in personalized retrieval systems, the relevance criteria may usually vary among different users and may not be predefined. In this case, ranking algorithms that adapt their behavior from users' feedback must be devised. Two main approaches are proposed in the literature for learning to rank: the use of a scoring function, learned by examples, that evaluates a feature-based representation of each object yielding an absolute relevance score; and a pairwise approach, where a preference function is learned to determine the object that has to be ranked first in a given pair. In this paper, we present a preference learning method for learning to rank. A neural network, the comparative neural network (CmpNN), is trained from examples to approximate the comparison function for a pair of objects. The CmpNN adopts a particular architecture designed to implement the symmetries naturally present in a preference function. The learned preference function can be embedded as the comparator into a classical sorting algorithm to provide a global ranking of a set of objects. To improve the ranking performance, an active-learning procedure is devised, which aims at selecting the most informative patterns in the training set. The proposed algorithm is evaluated on the LETOR dataset showing promising performances in comparison with other state-of-the-art algorithms.
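The pairwise approach described above learns a preference function and then embeds it as the comparator of an ordinary sorting algorithm. In the toy sketch below, an untrained two-layer scorer stands in for the trained CmpNN, and its symmetry is imposed by antisymmetrising the score rather than by the CmpNN's weight-sharing architecture.

```python
import numpy as np
from functools import cmp_to_key

rng = np.random.default_rng(5)

# A tiny pair-scoring network; antisymmetrising g makes the preference
# function consistent, i.e. pref(a, b) = -pref(b, a).
W1 = rng.normal(scale=0.5, size=(16, 6))
w2 = rng.normal(scale=0.5, size=16)

def g(a, b):
    return float(w2 @ np.tanh(W1 @ np.concatenate([a, b])))

def preference(a, b):
    return g(a, b) - g(b, a)          # > 0 means "a should be ranked before b"

def rank(objects):
    """Global ranking via a standard sort that calls the learned comparator.
    With random weights this only illustrates the plumbing, not a useful order."""
    return sorted(objects, key=cmp_to_key(lambda a, b: -np.sign(preference(a, b))))

items = [rng.normal(size=3) for _ in range(5)]     # 3-feature object representations
for obj in rank(items):
    print(obj)
```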
Beta Hebbian Learning as a New Method for Exploratory Projection Pursuit.
Quintián, Héctor; Corchado, Emilio
2017-09-01
In this research, a novel family of learning rules called Beta Hebbian Learning (BHL) is thoroughly investigated to extract information from high-dimensional datasets by projecting the data onto low-dimensional (typically two-dimensional) subspaces, improving on existing exploratory methods by providing a clear representation of the data's internal structure. BHL applies a family of learning rules derived from the Probability Density Function (PDF) of the residual based on the beta distribution. This family of rules may be called Hebbian in that all of them use a simple multiplication of the output of the neural network with some function of the residuals after feedback. The derived learning rules can be linked to an adaptive form of Exploratory Projection Pursuit, and with artificial distributions the networks perform as the theory suggests they should: the use of different learning rules derived from different PDFs allows the identification of "interesting" dimensions (as far from the Gaussian distribution as possible) in high-dimensional datasets. This novel algorithm, BHL, has been tested on seven artificial datasets to study the behavior of the BHL parameters, and was later applied successfully to four real datasets, comparing its results, in terms of performance, with other well-known exploratory and projection models such as Maximum Likelihood Hebbian Learning (MLHL), Locally-Linear Embedding (LLE), Curvilinear Component Analysis (CCA), Isomap and Neural Principal Component Analysis (Neural PCA).
The Laplacian spectrum of neural networks
de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.
2014-01-01
The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
Trait self-esteem and neural activities related to self-evaluation and social feedback
Yang, Juan; Xu, Xiaofan; Chen, Yu; Shi, Zhenhao; Han, Shihui
2016-01-01
Self-esteem has been associated with neural responses to self-reflection and attitude toward social feedback but in different brain regions. The distinct associations might arise from different tasks or task-related attitudes in the previous studies. The current study aimed to clarify these by investigating the association between self-esteem and neural responses to evaluation of one’s own personality traits and of others’ opinion about one’s own personality traits. We scanned 25 college students using functional MRI during evaluation of oneself or evaluation of social feedback. Trait self-esteem was measured using the Rosenberg self-esteem scale after scanning. Whole-brain regression analyses revealed that trait self-esteem was associated with the bilateral orbitofrontal activity during evaluation of one’s own positive traits but with activities in the medial prefrontal cortex, posterior cingulate, and occipital cortices during evaluation of positive social feedback. Our findings suggest that trait self-esteem modulates the degree of both affective processes in the orbitofrontal cortex during self-reflection and cognitive processes in the medial prefrontal cortex during evaluation of social feedback. PMID:26842975
Trait self-esteem and neural activities related to self-evaluation and social feedback.
Yang, Juan; Xu, Xiaofan; Chen, Yu; Shi, Zhenhao; Han, Shihui
2016-02-04
Self-esteem has been associated with neural responses to self-reflection and attitude toward social feedback but in different brain regions. The distinct associations might arise from different tasks or task-related attitudes in the previous studies. The current study aimed to clarify these by investigating the association between self-esteem and neural responses to evaluation of one's own personality traits and of others' opinion about one's own personality traits. We scanned 25 college students using functional MRI during evaluation of oneself or evaluation of social feedback. Trait self-esteem was measured using the Rosenberg self-esteem scale after scanning. Whole-brain regression analyses revealed that trait self-esteem was associated with the bilateral orbitofrontal activity during evaluation of one's own positive traits but with activities in the medial prefrontal cortex, posterior cingulate, and occipital cortices during evaluation of positive social feedback. Our findings suggest that trait self-esteem modulates the degree of both affective processes in the orbitofrontal cortex during self-reflection and cognitive processes in the medial prefrontal cortex during evaluation of social feedback.
Introduction to Neural Networks.
1992-03-01
parallel processing of information that can greatly reduce the time required to perform operations which are needed in pattern recognition. Keywords: neural network, artificial neural network, neural net, ANN.
NASA Technical Reports Server (NTRS)
Hayashi, Isao; Nomura, Hiroyoshi; Wakami, Noboru
1991-01-01
Whereas conventional fuzzy reasoning is associated with tuning problems, namely the lack of systematic membership function and inference rule design, a neural network driven fuzzy reasoning (NDF) capable of determining membership functions by neural network is formulated. In the antecedent parts of the neural network driven fuzzy reasoning, the optimum membership function is determined by a neural network, while in the consequent parts, the amount of control for each rule is determined by other neural networks. By introducing an algorithm of neural network driven fuzzy reasoning, inference rules for making a pendulum stand up from its lowest suspended point are determined to verify the usefulness of the algorithm.
SegAN: Adversarial Network with Multi-scale L1 Loss for Medical Image Segmentation.
Xue, Yuan; Xu, Tao; Zhang, Han; Long, L Rodney; Huang, Xiaolei
2018-05-03
Inspired by classic Generative Adversarial Networks (GANs), we propose a novel end-to-end adversarial neural network, called SegAN, for the task of medical image segmentation. Since image segmentation requires dense, pixel-level labeling, the single scalar real/fake output of a classic GAN's discriminator may be ineffective in producing stable and sufficient gradient feedback to the networks. Instead, we use a fully convolutional neural network as the segmentor to generate segmentation label maps, and propose a novel adversarial critic network with a multi-scale L1 loss function to force the critic and segmentor to learn both global and local features that capture long- and short-range spatial relationships between pixels. In our SegAN framework, the segmentor and critic networks are trained in an alternating fashion in a min-max game: The critic is trained by maximizing a multi-scale loss function, while the segmentor is trained with only gradients passed along by the critic, with the aim to minimize the multi-scale loss function. We show that such a SegAN framework is more effective and stable for the segmentation task, and it leads to better performance than the state-of-the-art U-net segmentation method. We tested our SegAN method using datasets from the MICCAI BRATS brain tumor segmentation challenge. Extensive experimental results demonstrate the effectiveness of the proposed SegAN with multi-scale loss: on BRATS 2013 SegAN gives performance comparable to the state-of-the-art for whole tumor and tumor core segmentation while achieving better precision and sensitivity for Gd-enhanced tumor core segmentation; on BRATS 2015 SegAN achieves better performance than the state-of-the-art in both Dice score and precision.
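The multi-scale L1 term compares critic features of the image masked by the predicted map with features of the image masked by the ground truth. The framework-agnostic sketch below uses fixed average-pooling pyramids as a stand-in for the trained convolutional critic, so it only illustrates the structure of the loss, not the SegAN training procedure.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool a (H, W) image by an integer factor (a stand-in for one
    level of the critic's feature hierarchy)."""
    h, w = img.shape[0] // factor * factor, img.shape[1] // factor * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def multiscale_l1(image, pred_mask, true_mask, scales=(1, 2, 4)):
    """Mean absolute difference between masked-image features at several scales."""
    loss = 0.0
    for s in scales:
        f_pred = downsample(image * pred_mask, s)
        f_true = downsample(image * true_mask, s)
        loss += np.mean(np.abs(f_pred - f_true))
    return loss / len(scales)

rng = np.random.default_rng(6)
image = rng.random((64, 64))
true_mask = np.zeros((64, 64)); true_mask[16:48, 16:48] = 1.0
pred_mask = np.clip(true_mask + 0.1 * rng.normal(size=(64, 64)), 0, 1)
print("multi-scale L1:", multiscale_l1(image, pred_mask, true_mask))
```

In the full framework the critic would be trained to maximize this quantity while the segmentor is trained to minimize it.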
Ritchie, Marylyn D; White, Bill C; Parker, Joel S; Hahn, Lance W; Moore, Jason H
2003-01-01
Background Appropriate definition of neural network architecture prior to data analysis is crucial for successful data mining. This can be challenging when the underlying model of the data is unknown. The goal of this study was to determine whether optimizing neural network architecture using genetic programming as a machine learning strategy would improve the ability of neural networks to model and detect nonlinear interactions among genes in studies of common human diseases.
Results Using simulated data, we show that a genetic programming optimized neural network approach is able to model gene-gene interactions as well as a traditional back propagation neural network. Furthermore, the genetic programming optimized neural network is better than the traditional back propagation neural network approach in terms of predictive ability and power to detect gene-gene interactions when non-functional polymorphisms are present.
Conclusion This study suggests that a machine learning strategy for optimizing neural network architecture may be preferable to traditional trial-and-error approaches for the identification and characterization of gene-gene interactions in common, complex human diseases. PMID:12846935
Medical image analysis with artificial neural networks.
Jiang, J; Trundle, P; Ren, J
2010-12-01
Given that neural networks have been widely reported in the medical imaging research community, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.
NASA Technical Reports Server (NTRS)
Decker, Arthur J.; Krasowski, Michael J.; Weiland, Kenneth E.
1993-01-01
This report describes an effort at NASA Lewis Research Center to use artificial neural networks to automate the alignment and control of optical measurement systems. Specifically, it addresses the use of commercially available neural network software and hardware to direct alignments of the common laser-beam-smoothing spatial filter. The report presents a general approach for designing alignment records and combining these into training sets to teach optical alignment functions to neural networks and discusses the use of these training sets to train several types of neural networks. Neural network configurations used include the adaptive resonance network, the back-propagation-trained network, and the counter-propagation network. This work shows that neural networks can be used to produce robust sequencers. These sequencers can learn by example to execute the step-by-step procedures of optical alignment and also can learn adaptively to correct for environmentally induced misalignment. The long-range objective is to use neural networks to automate the alignment and operation of optical measurement systems in remote, harsh, or dangerous aerospace environments. This work also shows that when neural networks are trained by a human operator, training sets should be recorded, training should be executed, and testing should be done in a manner that does not depend on intellectual judgments of the human operator.
Combinatorial Optimization by Amoeba-Based Neurocomputer with Chaotic Dynamics
NASA Astrophysics Data System (ADS)
Aono, Masashi; Hirata, Yoshito; Hara, Masahiko; Aihara, Kazuyuki
We demonstrate a computing system based on an amoeba of the true slime mold Physarum that is capable of producing rich spatiotemporal oscillatory behavior. Our system operates as a neurocomputer because an optical feedback control in accordance with a recurrent neural network algorithm leads the amoeba's photosensitive branches to search for a stable configuration concurrently. We show our system's capability of solving the traveling salesman problem. Furthermore, we apply various types of nonlinear time series analysis to the amoeba's oscillatory behavior in the problem-solving process. The results suggest that an individual amoeba might be characterized as a set of coupled chaotic oscillators.
Network feedback regulates motor output across a range of modulatory neuron activity.
Spencer, Robert M; Blitz, Dawn M
2016-06-01
Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5-35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation.
Resolution of Singularities Introduced by Hierarchical Structure in Deep Neural Networks.
Nitta, Tohru
2017-10-01
We present a theoretical analysis of singular points of artificial deep neural networks, resulting in deep neural network models that have no critical points introduced by a hierarchical structure. Such deep neural network models are considered to be well suited to gradient-based optimization. First, we show that there exist a large number of critical points introduced by a hierarchical structure in deep neural networks as straight lines, depending on the number of hidden layers and the number of hidden neurons. Second, we derive a sufficient condition for deep neural networks to have no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. It is also shown that the existence of critical points introduced by a hierarchical structure is determined by the rank and the regularity of weight matrices for a specific class of deep neural networks. Finally, two kinds of implementation methods of the sufficient conditions to have no critical points are provided. One is a learning algorithm that can avoid critical points introduced by the hierarchical structure during learning (called the avoidant learning algorithm). The other is a neural network that does not have some critical points introduced by the hierarchical structure as an inherent property (called the avoidant neural network).
The effect of the neural activity on topological properties of growing neural networks.
Gafarov, F M; Gafarova, V R
2016-09-01
The connectivity structure in cortical networks defines how information is transmitted and processed; it is a source of the complex spatiotemporal patterns of the network's development, and the process of creating and deleting connections continues throughout the life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrate the neural network growth process from disconnected neurons to fully connected networks. To quantitatively investigate the influence of the network's activity on its topological properties, we compared it with a random growth network that does not depend on network activity. Using methods from random graph theory to analyze the network's connection structure, we show that growth in neural networks results in the formation of the well-known "small-world" network structure.
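The "small-world" claim at the end of this abstract is typically checked by comparing clustering and path length against a density-matched random graph. The sketch below does exactly that with networkx on a Watts-Strogatz surrogate; the activity-dependent growth model itself is not reimplemented here.

```python
import networkx as nx

def largest_cc(G):
    """Largest connected component (path lengths are undefined otherwise)."""
    return G.subgraph(max(nx.connected_components(G), key=len)).copy()

def small_world_index(G):
    """Clustering and path length of G, each normalised by the values of a
    density-matched Erdos-Renyi graph; values well above 1 suggest small-world."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    R = largest_cc(nx.gnp_random_graph(n, 2.0 * m / (n * (n - 1)), seed=0))
    C, C_rand = nx.average_clustering(G), nx.average_clustering(R)
    L = nx.average_shortest_path_length(largest_cc(G))
    L_rand = nx.average_shortest_path_length(R)
    return (C / C_rand) / (L / L_rand)

# A Watts-Strogatz graph stands in for the grown activity-dependent network.
G = nx.watts_strogatz_graph(200, k=8, p=0.1, seed=1)
print("small-world index:", round(small_world_index(G), 2))
```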
LavaNet—Neural network development environment in a general mine planning package
NASA Astrophysics Data System (ADS)
Kapageridis, Ioannis Konstantinou; Triantafyllou, A. G.
2011-04-01
LavaNet is a series of scripts written in Perl that gives access to a neural network simulation environment inside a general mine planning package. A well-known and very popular neural network development environment, the Stuttgart Neural Network Simulator, is used as the base for the development of neural networks. LavaNet runs inside VULCAN™—a complete mine planning package with advanced database, modelling and visualisation capabilities. LavaNet takes advantage of VULCAN's Perl-based scripting environment, Lava, to bring all the benefits of neural network development and application to geologists, mining engineers and other users of the specific mine planning package. LavaNet enables easy development of neural network training data sets using information from any of the data and model structures available, such as block models and drillhole databases. Neural networks can be trained inside VULCAN™ and the results can be used to generate new models that can be visualised in 3D. Direct comparison of developed neural network models with conventional and geostatistical techniques is now possible within the same mine planning software package. LavaNet supports Radial Basis Function networks, Multi-Layer Perceptrons and Self-Organised Maps.
Kerr, Robert R; Grayden, David B; Thomas, Doreen A; Gilson, Matthieu; Burkitt, Anthony N
2014-01-01
The brain is able to flexibly select behaviors that adapt to both its environment and its present goals. This cognitive control is understood to occur within the hierarchy of the cortex and relies strongly on the prefrontal and premotor cortices, which sit at the top of this hierarchy. Pyramidal neurons, the principal neurons in the cortex, have been observed to exhibit much stronger responses when they receive inputs at their soma/basal dendrites that are coincident with inputs at their apical dendrites. This corresponds to inputs from both lower-order regions (feedforward) and higher-order regions (feedback), respectively. In addition to this, coherence between oscillations, such as gamma oscillations, in different neuronal groups has been proposed to modulate and route communication in the brain. In this paper, we develop a simple, but novel, neural mass model in which cortical units (or ensembles) exhibit gamma oscillations when they receive coherent oscillatory inputs from both feedforward and feedback connections. By forming these units into circuits that can perform logic operations, we identify the different ways in which operations can be initiated and manipulated by top-down feedback. We demonstrate that more sophisticated and flexible top-down control is possible when the gain of units is modulated by not only top-down feedback but by coherence between the activities of the oscillating units. With these types of units, it is possible to not only add units to, or remove units from, a higher-level unit's logic operation using top-down feedback, but also to modify the type of role that a unit plays in the operation. Finally, we explore how different network properties affect top-down control and processing in large networks. Based on this, we make predictions about the likely connectivities between certain brain regions that have been experimentally observed to be involved in goal-directed behavior and top-down attention.
Kerr, Robert R.; Grayden, David B.; Thomas, Doreen A.; Gilson, Matthieu; Burkitt, Anthony N.
2014-01-01
The brain is able to flexibly select behaviors that adapt to both its environment and its present goals. This cognitive control is understood to occur within the hierarchy of the cortex and relies strongly on the prefrontal and premotor cortices, which sit at the top of this hierarchy. Pyramidal neurons, the principal neurons in the cortex, have been observed to exhibit much stronger responses when they receive inputs at their soma/basal dendrites that are coincident with inputs at their apical dendrites. This corresponds to inputs from both lower-order regions (feedforward) and higher-order regions (feedback), respectively. In addition to this, coherence between oscillations, such as gamma oscillations, in different neuronal groups has been proposed to modulate and route communication in the brain. In this paper, we develop a simple, but novel, neural mass model in which cortical units (or ensembles) exhibit gamma oscillations when they receive coherent oscillatory inputs from both feedforward and feedback connections. By forming these units into circuits that can perform logic operations, we identify the different ways in which operations can be initiated and manipulated by top-down feedback. We demonstrate that more sophisticated and flexible top-down control is possible when the gain of units is modulated by not only top-down feedback but by coherence between the activities of the oscillating units. With these types of units, it is possible to not only add units to, or remove units from, a higher-level unit's logic operation using top-down feedback, but also to modify the type of role that a unit plays in the operation. Finally, we explore how different network properties affect top-down control and processing in large networks. Based on this, we make predictions about the likely connectivities between certain brain regions that have been experimentally observed to be involved in goal-directed behavior and top-down attention. PMID:25152715
Creative-Dynamics Approach To Neural Intelligence
NASA Technical Reports Server (NTRS)
Zak, Michail A.
1992-01-01
Paper discusses approach to mathematical modeling of artificial neural networks exhibiting complicated behaviors reminiscent of creativity and intelligence of biological neural networks. Neural network treated as non-Lipschitzian dynamical system - as described in "Non-Lipschitzian Dynamics For Modeling Neural Networks" (NPO-17814). System serves as tool for modeling of temporal-pattern memories and recognition of complicated spatial patterns.
Brain-wide neuronal dynamics during motor adaptation in zebrafish.
Ahrens, Misha B; Li, Jennifer M; Orger, Michael B; Robson, Drew N; Schier, Alexander F; Engert, Florian; Portugues, Ruben
2012-05-09
A fundamental question in neuroscience is how entire neural circuits generate behaviour and adapt it to changes in sensory feedback. Here we use two-photon calcium imaging to record the activity of large populations of neurons at the cellular level, throughout the brain of larval zebrafish expressing a genetically encoded calcium sensor, while the paralysed animals interact fictively with a virtual environment and rapidly adapt their motor output to changes in visual feedback. We decompose the network dynamics involved in adaptive locomotion into four types of neuronal response properties, and provide anatomical maps of the corresponding sites. A subset of these signals occurred during behavioural adjustments; these signals are candidates for the functional elements that drive motor learning. Lesions to the inferior olive indicate a specific functional role for olivocerebellar circuitry in adaptive locomotion. This study enables the analysis of brain-wide dynamics at single-cell resolution during behaviour.
Sensorimotor adaptation is influenced by background music.
Bock, Otmar
2010-06-01
It is well established that listening to music can modify subjects' cognitive performance. The present study evaluates whether this so-called Mozart Effect extends beyond cognitive tasks and includes sensorimotor adaptation. Three subject groups listened to musical pieces that in the author's judgment were serene, neutral, or sad, respectively. This judgment was confirmed by the subjects' introspective reports. While listening to music, subjects engaged in a pointing task that required them to adapt to rotated visual feedback. All three groups adapted successfully, but the speed and magnitude of adaptive improvement were more pronounced with serene music than with the other two music types. In contrast, aftereffects upon restoration of normal feedback were independent of music type. These findings support the existence of a "Mozart effect" for strategic movement control, but not for adaptive recalibration. Possibly, listening to music modifies neural activity in an intertwined cognitive-emotional network.
Li, Yanan; Yang, Chenguang; Ge, Shuzhi Sam; Lee, Tong Heng
2011-04-01
In this paper, adaptive neural network (NN) control is investigated for a class of block-triangular multi-input multi-output nonlinear discrete-time systems, with each subsystem in pure-feedback form and with unknown control directions. These systems have couplings in every equation of each subsystem, and different subsystems may have different orders. To avoid the noncausal problem in the control design, the system is transformed into a predictor form by rigorous derivation. By exploiting the properties of the block-triangular form, implicit controls are developed for each subsystem such that the couplings of inputs and states among subsystems are completely decoupled. A radial basis function NN is employed to approximate the unknown control. Each subsystem achieves semiglobal uniformly ultimately bounded stability with the proposed control, and simulation results are presented to demonstrate its effectiveness.
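The radial basis function approximation mentioned above can be sketched in a few lines. The snippet below fits Gaussian RBF features to an arbitrary nonlinear function with a least-squares solve; it stands in for the role the RBF NN plays in the cited controller, where the output weights would instead be adapted online by Lyapunov-derived laws. The target function, centres and widths are illustrative choices, not values from the paper.

import numpy as np

def rbf_features(x, centers, width):
    # Gaussian radial basis features phi_i(x) = exp(-||x - c_i||^2 / (2 * width^2)).
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def unknown_control(x):
    # Stand-in for the unknown control term that the NN has to approximate.
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 0] ** 2

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(200, 1))
centers = np.linspace(-2, 2, 25)[:, None]
Phi = rbf_features(X, centers, width=0.3)

# Offline least-squares fit of the output weights; the cited controllers update
# these weights online with Lyapunov-derived adaptation laws instead.
w, *_ = np.linalg.lstsq(Phi, unknown_control(X), rcond=None)

X_test = np.linspace(-2, 2, 5)[:, None]
print("RBF approximation:", np.round(rbf_features(X_test, centers, 0.3) @ w, 3))
print("true values:      ", np.round(unknown_control(X_test), 3))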
Simple robot suggests physical interlimb communication is essential for quadruped walking
Owaki, Dai; Kano, Takeshi; Nagasawa, Ko; Tero, Atsushi; Ishiguro, Akio
2013-01-01
Quadrupeds have versatile gait patterns, depending on the locomotion speed, environmental conditions and animal species. These locomotor patterns are generated via the coordination between limbs and are partly controlled by an intraspinal neural network called the central pattern generator (CPG). Although this forms the basis for current control paradigms of interlimb coordination, the mechanism responsible for interlimb coordination remains elusive. By using a minimalistic approach, we have developed a simple-structured quadruped robot, with the help of which we propose an unconventional CPG model that consists of four decoupled oscillators with only local force feedback in each leg. Our robot exhibits good adaptability to changes in weight distribution and walking speed simply by responding to local feedback, and it can mimic the walking patterns of actual quadrupeds. Our proposed CPG-based control method suggests that physical interaction between legs during movements is essential for interlimb coordination in quadruped walking. PMID:23097501
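The local-feedback rule reported for this CPG model takes the form dphi_i/dt = omega - sigma * N_i * cos(phi_i), where N_i is the ground-reaction force sensed by leg i, so the legs are coupled only through the mechanics of the shared body. The Python sketch below integrates four such oscillators; because there is no physics engine here, the body is replaced by a crude stand-in that splits a unit weight evenly over the legs currently in stance, so the emerging pattern and all parameter values are only illustrative.

import numpy as np

# Local-feedback rule (as reported for this model): dphi_i/dt = omega - sigma * N_i * cos(phi_i).
# The legs interact only through the shared body load N_i; here that mechanical
# coupling is replaced by a crude even split of a unit weight over the stance legs.
omega, sigma, weight = 2 * np.pi * 1.0, 5.0, 1.0
dt, steps = 1e-3, 20000
rng = np.random.default_rng(2)
phi = rng.uniform(0, 2 * np.pi, size=4)            # four legs, random initial phases

for _ in range(steps):
    stance = np.sin(phi) > 0                       # which legs are "on the ground"
    load = np.where(stance, weight / max(stance.sum(), 1), 0.0)
    phi += dt * (omega - sigma * load * np.cos(phi))

rel = ((phi - phi[0]) % (2 * np.pi)) / (2 * np.pi)
print("relative phases (fractions of a cycle):", np.round(rel, 2))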
Neuro-adaptive backstepping control of SISO non-affine systems with unknown gain sign.
Ramezani, Zahra; Arefi, Mohammad Mehdi; Zargarzadeh, Hassan; Jahed-Motlagh, Mohammad Reza
2016-11-01
This paper presents two neuro-adaptive controllers for a class of uncertain single-input, single-output (SISO) nonlinear non-affine systems with unknown gain sign. The first approach is a state-feedback controller, in which a neuro-adaptive state-feedback law is constructed based on the backstepping technique. The second approach is an observer-based controller, in which K-filters are designed to estimate the system states. The proposed methods relax the requirement for a priori knowledge of the control gain sign; this problem is addressed by utilizing Nussbaum-type functions. In both methods, neural networks are employed to approximate the unknown nonlinear functions. The proposed adaptive control schemes guarantee that all the closed-loop signals are semi-globally uniformly ultimately bounded (SGUUB). Finally, the theoretical results are verified numerically through simulation examples, which show the effectiveness of the proposed methods. Copyright © 2016 ISA. All rights reserved.
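The device that removes the need to know the control-gain sign is a Nussbaum-type function such as N(k) = k^2 * cos(k), whose effective gain swings between large positive and large negative values as k grows. The toy Python sketch below applies that idea to a scalar unstable plant; it is not the paper's backstepping or K-filter design, and the plant, gains and horizon are illustrative assumptions.

import numpy as np

# Scalar illustration of a Nussbaum-type gain (not the paper's backstepping design).
# Plant: xdot = x + b*u with b unknown to the controller, sign included.
# Controller: u = N(k)*x with N(k) = k**2 * cos(k), and adaptation kdot = x**2.
b = 1.0                                  # hidden from the controller; its sign is never used
x, k, dt = 1.0, 0.0, 1e-3

for _ in range(int(20.0 / dt)):
    u = (k ** 2) * np.cos(k) * x         # Nussbaum gain times the measured state
    x += dt * (x + b * u)
    k += dt * x ** 2                     # monotone adaptation of the gain argument

print(f"final |x| = {abs(x):.2e}, final k = {k:.2f}")
# k sweeps until b*N(k) is sufficiently negative; x then decays and k stops growing,
# even though the controller was never told the sign of b.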
Hovakimyan, N; Nardi, F; Calise, A; Kim, Nakwan
2002-01-01
We consider adaptive output feedback control of uncertain nonlinear systems, in which both the dynamics and the dimension of the regulated system may be unknown. However, the relative degree of the regulated output is assumed to be known. Given a smooth reference trajectory, the problem is to design a controller that forces the system measurement to track it with bounded errors. The classical approach requires a state observer. Finding a good observer for an uncertain nonlinear system is not an obvious task. We argue that it is sufficient to build an observer for the output tracking error. Ultimate boundedness of the error signals is shown through Lyapunov's direct method. The theoretical results are illustrated in the design of a controller for a fourth-order nonlinear system of relative degree two and a high-bandwidth attitude command system for a model R-50 helicopter.
An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks
Cabessa, Jérémie; Villa, Alessandro E. P.
2014-01-01
We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights to the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866
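The attractor dynamics this classification is built on can be illustrated with a small Boolean recurrent network: iterate the synchronous threshold update from every initial state and collect the cycles the trajectories fall into. The Python sketch below does exactly that for a random signed weight matrix; it only exposes the attractors themselves and does not implement the ω-automata correspondence or the refined hierarchy of the paper.

import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 5                                    # number of Boolean units
W = rng.choice([-1, 0, 1], size=(n, n))  # signed recurrent weights

def step(state):
    # Synchronous threshold update of the Boolean recurrent network.
    return tuple(int(v) for v in (W @ np.array(state) > 0))

def attractor_from(state):
    # Iterate until a state repeats; return the cycle the trajectory falls into.
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = step(state)
    ordered = list(seen)
    return frozenset(ordered[seen[state]:])

attractors = {attractor_from(s) for s in itertools.product([0, 1], repeat=n)}
for a in attractors:
    print(f"attractor of length {len(a)}: {sorted(a)}")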
How Neural Networks Learn from Experience.
ERIC Educational Resources Information Center
Hinton, Geoffrey E.
1992-01-01
Discusses computational studies of learning in artificial neural networks and findings that may provide insights into the learning abilities of the human brain. Describes efforts to test theories about brain information processing, using artificial neural networks. Vignettes include information concerning how a neural network represents…
Wang, Feifei; Tidei, Joseph J; Polich, Eric D; Gao, Yu; Zhao, Huashan; Perrone-Bizzozero, Nora I; Guo, Weixiang; Zhao, Xinyu
2015-09-08
The mammalian embryonic lethal abnormal vision (ELAV)-like protein HuD is a neuronal RNA-binding protein implicated in neuronal development, plasticity, and diseases. Although HuD has long been associated with neuronal development, the functions of HuD in neural stem cell differentiation and the underlying mechanisms have gone largely unexplored. Here we show that HuD promotes neuronal differentiation of neural stem/progenitor cells (NSCs) in the adult subventricular zone by stabilizing the mRNA of special adenine-thymine (AT)-rich DNA-binding protein 1 (SATB1), a critical transcriptional regulator in neurodevelopment. We find that SATB1 deficiency impairs the neuronal differentiation of NSCs, whereas SATB1 overexpression rescues the neuronal differentiation phenotypes resulting from HuD deficiency. Interestingly, we also discover that SATB1 is a transcriptional activator of HuD during NSC neuronal differentiation. In addition, we demonstrate that NeuroD1, a neuronal master regulator, is a direct downstream target of SATB1. Therefore, HuD and SATB1 form a positive regulatory loop that enhances NeuroD1 transcription and subsequent neuronal differentiation. Our results here reveal a novel positive feedback network between an RNA-binding protein and a transcription factor that plays critical regulatory roles in neurogenesis.
Newman, Jonathan P.; Zeller-Townson, Riley; Fong, Ming-Fai; Arcot Desai, Sharanya; Gross, Robert E.; Potter, Steve M.
2013-01-01
Single neuron feedback control techniques, such as voltage clamp and dynamic clamp, have enabled numerous advances in our understanding of ion channels, electrochemical signaling, and neural dynamics. Although commercially available multichannel recording and stimulation systems are commonly used for studying neural processing at the network level, they provide little native support for real-time feedback. We developed the open-source NeuroRighter multichannel electrophysiology hardware and software platform for closed-loop multichannel control with a focus on accessibility and low cost. NeuroRighter allows 64 channels of stimulation and recording for around US $10,000, along with the ability to integrate with other software and hardware. Here, we present substantial enhancements to the NeuroRighter platform, including a redesigned desktop application, a new stimulation subsystem allowing arbitrary stimulation patterns, low-latency data servers for accessing data streams, and a new application programming interface (API) for creating closed-loop protocols that can be inserted into NeuroRighter as plugin programs. This greatly simplifies the design of sophisticated real-time experiments without sacrificing the power and speed of a compiled programming language. Here we present a detailed description of NeuroRighter as a stand-alone application, its plugin API, and an extensive set of case studies that highlight the system’s abilities for conducting closed-loop, multichannel interfacing experiments. PMID:23346047
A High Input Impedance Low Noise Integrated Front-End Amplifier for Neural Monitoring.
Zhou, Zhijun; Warr, Paul A
2016-12-01
Within neural monitoring systems, the front-end amplifier forms the critical element for signal detection and pre-processing; it not only determines the fidelity of the biosignal but also impacts power consumption and detector size. In this paper, a novel combined feedback loop-controlled approach is proposed to compensate for the input leakage currents generated by low noise amplifiers in integrated circuit form, alongside signal leakage into the input bias network. This loop topology ensures that the Front-End Amplifier (FEA) maintains a high input impedance across all manufacturing and operational variations. Measured results from a prototype manufactured in the AMS 0.35 [Formula: see text] CMOS technology are provided. This FEA consumes 3.1 [Formula: see text] in 0.042 [Formula: see text], achieves an input impedance of 42 [Formula: see text], and has an input-referred noise of 18.2 [Formula: see text].
A Low Noise Amplifier for Neural Spike Recording Interfaces
Ruiz-Amaya, Jesus; Rodriguez-Perez, Alberto; Delgado-Restituto, Manuel
2015-01-01
This paper presents a Low Noise Amplifier (LNA) for neural spike recording applications. The proposed topology, based on a capacitive feedback network using a two-stage OTA, efficiently solves the triple trade-off between power, area and noise. Additionally, this work introduces a novel transistor-level synthesis methodology for LNAs tailored for the minimization of their noise efficiency factor under area and noise constraints. The proposed LNA has been implemented in a 130 nm CMOS technology and occupies 0.053 mm². Experimental results show that the LNA offers a noise efficiency factor of 2.16 and an input referred noise of 3.8 μVrms for 1.2 V power supply. It provides a gain of 46 dB over a nominal bandwidth of 192 Hz–7.4 kHz and consumes 1.92 μW. The performance of the proposed LNA has been validated through in vivo experiments with animal models. PMID:26437411
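For a capacitive-feedback LNA of this kind, the midband gain is set by the input-to-feedback capacitor ratio and the low-frequency corner by the feedback capacitor and its pseudo-resistor, so the reported figures pin down those ratios even though the abstract gives no component values. The Python lines below back-calculate them under a hypothetical 100 fF feedback capacitor; that capacitor value is an assumption, not a figure from the paper.

import math

gain_db, f_low = 46.0, 192.0                # reported midband gain and lower band edge
ratio = 10 ** (gain_db / 20)                # implied C_in / C_f ratio (~200)

c_f = 100e-15                               # hypothetical 100 fF feedback capacitor
c_in = ratio * c_f
r_pseudo = 1 / (2 * math.pi * c_f * f_low)  # pseudo-resistance implied by the 192 Hz corner

print(f"C_in/C_f ~ {ratio:.0f}, so C_in ~ {c_in * 1e12:.1f} pF for C_f = 100 fF")
print(f"implied feedback pseudo-resistance ~ {r_pseudo / 1e9:.1f} Gohm")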
Neural network to diagnose lining condition
NASA Astrophysics Data System (ADS)
Yemelyanov, V. A.; Yemelyanova, N. Y.; Nedelkin, A. A.; Zarudnaya, M. V.
2018-03-01
The paper presents data on the problem of diagnosing lining condition at iron and steel works. The authors describe the neural network structure and the software designed and developed to determine lining burnout zones. Simulation results for the proposed neural networks are presented, and the authors note their low learning and classification errors. Specialized software has been developed to realize the proposed neural network.
[Measurement and performance analysis of functional neural network].
Li, Shan; Liu, Xinyu; Chen, Yan; Wan, Hong
2018-04-01
The measurement of networks is one of the important research topics in resolving the information-processing mechanisms of neuronal populations using complex network theory. For the quantitative measurement of functional neural networks, the relations between the measure indexes, i.e., the clustering coefficient, the global efficiency, the characteristic path length and the transitivity, and the network topology were analyzed. Then, a spike-based functional neural network was established, and the simulation results showed that the measured network could represent the original neural connections among neurons. On the basis of this work, the coding performed by the functional neural network in the nidopallium caudolaterale (NCL) during the pigeon's motion behaviors was studied. We found that the NCL functional neural network effectively encoded the motion behaviors of the pigeon, and that there were significant differences in the four indexes among left-turning, forward motion and right-turning. Overall, the proposed method for establishing spike-based functional neural networks is feasible, and it is an effective tool for parsing the brain's information-processing mechanisms.
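The four indexes named above are standard graph measures and can be computed directly once the spike-derived functional connectivity has been binarized into a graph. A minimal Python sketch using networkx is given below; the random symmetric matrix stands in for a thresholded pairwise connectivity estimate, and the threshold value is an arbitrary assumption.

import networkx as nx
import numpy as np

# Stand-in for a spike-train-derived functional connectivity matrix; in practice the
# edges would come from thresholded pairwise measures such as cross-correlation.
rng = np.random.default_rng(4)
corr = rng.random((30, 30))
corr = (corr + corr.T) / 2
adj = (corr > 0.7).astype(int)            # arbitrary threshold for illustration
np.fill_diagonal(adj, 0)

G = nx.from_numpy_array(adj)

measures = {
    "clustering coefficient": nx.average_clustering(G),
    "global efficiency": nx.global_efficiency(G),
    "transitivity": nx.transitivity(G),
}
# The characteristic path length is defined on the largest connected component.
giant = G.subgraph(max(nx.connected_components(G), key=len))
measures["characteristic path length"] = nx.average_shortest_path_length(giant)

for name, value in measures.items():
    print(f"{name}: {value:.3f}")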
Neural network error correction for solving coupled ordinary differential equations
NASA Technical Reports Server (NTRS)
Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.
1992-01-01
A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
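A minimal version of this error-learning scheme can be sketched without the NASA programs: take a coarse Runge-Kutta step on a model problem, train a small feedforward network to predict the difference between that step and a finely resolved reference step, and add the predicted correction during integration. The Python sketch below does this for a harmonic oscillator using scikit-learn's MLPRegressor; the model problem, network size and step sizes are illustrative choices rather than those of the study.

import numpy as np
from sklearn.neural_network import MLPRegressor

def f(y):
    # Model problem: harmonic oscillator y'' = -y written as a first-order system.
    return np.array([y[1], -y[0]])

def rk4_step(y, h):
    k1 = f(y)
    k2 = f(y + h / 2 * k1)
    k3 = f(y + h / 2 * k2)
    k4 = f(y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def fine_step(y, h, n_sub=64):
    # Finely resolved reference step used as "truth" for the error targets.
    for _ in range(n_sub):
        y = rk4_step(y, h / n_sub)
    return y

h, scale = 0.5, 1e4                      # coarse step; scale lifts tiny errors to O(1)
rng = np.random.default_rng(5)
states = rng.uniform(-1.5, 1.5, size=(400, 2))
errors = np.array([fine_step(s, h) - rk4_step(s, h) for s in states])

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(states, errors * scale)          # learn the local error of the coarse step

y_plain = np.array([1.0, 0.0])
y_corr = np.array([1.0, 0.0])
t_end = 20.0
for _ in range(int(t_end / h)):
    y_plain = rk4_step(y_plain, h)
    y_corr = rk4_step(y_corr, h) + net.predict(y_corr.reshape(1, -1))[0] / scale

y_true = np.array([np.cos(t_end), -np.sin(t_end)])
print("plain RK4 error:    ", np.linalg.norm(y_plain - y_true))
print("corrected RK4 error:", np.linalg.norm(y_corr - y_true))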