Delayed transiently chaotic neural networks and their application
NASA Astrophysics Data System (ADS)
Chen, Shyan-Shiou
2009-09-01
In this paper, we propose a novel model, a delayed transiently chaotic neural network (DTCNN), and numerically confirm that the model performs better in finding the global minimum for the traveling salesman problem (TSP) than the traditional transiently chaotic neural network. The asymptotic stability and chaotic behavior of the dynamical system with time delay are fully discussed. We not only theoretically prove the existence of Marotto's chaos for the delayed neural network without the cooling schedule by geometrically constructing a transversal homoclinic orbit, but we also discuss the stability of nonautonomous delayed systems using LaSalle's invariance principle. The results of applying the DTCNN to the TSP may further illuminate the importance of time delays in neural systems.
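As a rough illustration of the transiently chaotic dynamics underlying such models, the following sketch iterates a single Chen-Aihara-style chaotic neuron whose self-coupling decays geometrically (the "cooling schedule"). All parameter values are illustrative rather than taken from the paper, and the DTCNN's time delay is omitted.

```python
import math

def stable_sigmoid(u):
    # overflow-safe logistic function
    return 1.0 / (1.0 + math.exp(-u)) if u >= 0 else math.exp(u) / (1.0 + math.exp(u))

def transiently_chaotic_neuron(steps=300, k=0.9, alpha=0.015, beta=0.01,
                               z0=0.08, i0=0.65, eps=0.004, bias=0.5):
    """Single neuron: while the self-coupling z is large the map is chaotic;
    as z decays toward zero the dynamics anneal toward a fixed point."""
    y, z, outputs = 0.5, z0, []
    for _ in range(steps):
        x = stable_sigmoid(y / eps)          # neuron output in (0, 1)
        y = k * y + alpha * bias - z * (x - i0)
        z *= (1.0 - beta)                    # cooling schedule
        outputs.append(x)
    return outputs
```

In an optimization network such as the TCNN, many of these neurons are coupled through a problem-dependent energy function; the decaying self-coupling is what moves the search from chaotic exploration to convergent refinement.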
Generating Coherent Patterns of Activity from Chaotic Neural Networks
Sussillo, David; Abbott, L. F.
2009-01-01
Neural circuits display complex activity patterns both spontaneously and when responding to a stimulus or generating a motor output. How are these two forms of activity related? We develop a procedure called FORCE learning for modifying synaptic strengths either external to or within a model neural network to change chaotic spontaneous activity into a wide variety of desired activity patterns. FORCE learning works even though the networks we train are spontaneously chaotic and we leave feedback loops intact and unclamped during learning. Using this approach, we construct networks that produce a wide variety of complex output patterns, input-output transformations that require memory, multiple outputs that can be switched by control inputs, and motor patterns matching human motion capture data. Our results reproduce data on pre-movement activity in motor and premotor cortex, and suggest that synaptic plasticity may be a more rapid and powerful modulator of network activity than generally appreciated. PMID:19709635
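A minimal sketch of the FORCE idea, assuming the standard setup of a random recurrent rate network with an unclamped feedback loop and a readout trained online by recursive least squares. Network size, gain, integration step, and the sine-wave target are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, g = 200, 0.1, 1.5
J = rng.normal(0.0, g / np.sqrt(N), (N, N))  # g > 1: spontaneously chaotic
w_fb = rng.uniform(-1.0, 1.0, N)             # feedback weights, left intact
w = np.zeros(N)                              # readout weights, trained online
P = np.eye(N)                                # running inverse-correlation matrix

x = rng.normal(0.0, 0.5, N)
T = 4000
target = np.sin(2.0 * np.pi * np.arange(T) * dt / 8.0)  # toy target pattern
errors = []
for t in range(T):
    r = np.tanh(x)
    z = w @ r                                # network output
    x += dt * (-x + J @ r + w_fb * z)        # feedback stays on during learning
    # FORCE step: recursive least squares on the readout, every time step
    Pr = P @ r
    gain = Pr / (1.0 + r @ Pr)
    P -= np.outer(gain, Pr)
    err = z - target[t]
    w -= err * gain
    errors.append(abs(err))
```

The key design choice, as the abstract emphasizes, is that the feedback loop is never clamped: rapid weight updates keep the output error small at all times, so the network's own fed-back output suppresses the chaos while learning proceeds.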
NASA Astrophysics Data System (ADS)
Wang, Xing-Yuan; Bao, Xue-Mei
2013-05-01
In this paper, we propose a novel block cryptographic scheme based on a spatiotemporal chaotic system and a chaotic neural network (CNN). The employed CNN comprises a 4-neuron layer called a chaotic neuron layer (CNL), where the spatiotemporal chaotic system participates in generating its weight matrix and other parameters. The spatiotemporal chaotic system used in our scheme is the typical coupled map lattice (CML), which can be easily implemented in parallel by hardware. A 160-bit-long binary sequence is used to generate the initial conditions of the CML. The decryption process is symmetric relative to the encryption process. Theoretical analysis and experimental results prove that the block cryptosystem is secure and practical, and suitable for image encryption.
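The cipher itself is not specified in enough detail here to reproduce, but its core ingredient, a coupled map lattice of logistic maps seeded from a 160-bit key, can be sketched as a keystream generator. Lattice size, coupling strength, and the key-to-state mapping below are assumptions for illustration only.

```python
def cml_keystream(key160, n_bytes, sites=8, eps=0.3):
    """Keystream bytes from a coupled map lattice (CML) of logistic maps.
    key160: an integer of up to 160 bits, split into 20-bit chunks that
    seed the lattice's initial conditions in (0, 1)."""
    f = lambda x: 4.0 * x * (1.0 - x)  # fully chaotic logistic map
    xs = [(((key160 >> (20 * i)) & 0xFFFFF) + 1) / (2**20 + 2)
          for i in range(sites)]

    def step(state):
        # diffusive nearest-neighbour coupling with periodic boundaries
        return [(1 - eps) * f(state[i])
                + (eps / 2) * (f(state[(i - 1) % sites]) + f(state[(i + 1) % sites]))
                for i in range(sites)]

    for _ in range(100):               # discard the transient
        xs = step(xs)
    out = []
    while len(out) < n_bytes:
        xs = step(xs)
        out.extend(int(x * 256) % 256 for x in xs)
    return bytes(out[:n_bytes])

# stream-cipher style use: XOR the keystream with the plaintext
msg = b"attack at dawn"
ks = cml_keystream(123456789, len(msg))
cipher = bytes(m ^ k for m, k in zip(msg, ks))
plain = bytes(c ^ k for c, k in zip(cipher, ks))
```

As in the paper, decryption is symmetric with encryption, since both sides regenerate the same chaotic state from the shared key.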
Synthetic Modeling of Autonomous Learning with a Chaotic Neural Network
NASA Astrophysics Data System (ADS)
Funabashi, Masatoshi
We investigate the possible role of intermittent chaotic dynamics, called chaotic itinerancy, in interaction with unsupervised learning rules that reinforce or weaken neural connections depending on the dynamics itself. We first performed a hierarchical stability analysis of the chaotic neural network (CNN) model according to the structure of its invariant subspaces. Irregular transitions between two attractor ruins with a positive maximum Lyapunov exponent were triggered by the blowout bifurcation of the attractor subspaces and were associated with a riddled basin structure. Second, we modeled two autonomous learning rules, Hebbian learning and the spike-timing-dependent plasticity (STDP) rule, and simulated their effects on the chaotic itinerancy state of the CNN. Hebbian learning increased the residence time on attractor ruins and produced novel attractors in the minimal higher-dimensional subspace. It also augmented neuronal synchrony and established uniform modularity in the chaotic itinerancy. The STDP rule reduced the residence time on attractor ruins and brought a wide range of periodicities to the emergent attractors, possibly including strange attractors. Both learning rules selectively destroyed or preserved specific invariant subspaces, depending on the neuronal synchrony of the subspace in which the orbits were situated. The computational rationale of such autonomous learning is discussed from a connectionist perspective.
Learning-induced pattern classification in a chaotic neural network
NASA Astrophysics Data System (ADS)
Li, Yang; Zhu, Ping; Xie, Xiaoping; He, Guoguang; Aihara, Kazuyuki
2012-01-01
In this Letter, we propose a Hebbian learning rule with passive forgetting (HLRPF) for use in a chaotic neural network (CNN). We then define indices based on the Euclidean distance to investigate the evolution of the weights in a simplified way. Numerical simulations demonstrate that, under suitable external stimulation, the CNN with the proposed HLRPF acts as a fuzzy-like pattern classifier that performs much better than an ordinary CNN. The results imply a relationship between learning and recognition.
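A sketch of a Hebbian rule with passive forgetting of the kind described, with hypothetical learning and decay rates: the Hebbian term imprints co-active units, while the passive-forgetting term slowly erases weights that stop being reinforced.

```python
import numpy as np

def hlrpf_step(W, x, eta=0.1, forget=0.02):
    """One update of Hebbian learning with passive forgetting:
    W <- (1 - forget) * W + eta * x x^T, with no self-connections."""
    W = (1.0 - forget) * W + eta * np.outer(x, x)
    np.fill_diagonal(W, 0.0)
    return W

pattern = np.array([1.0, -1.0, 1.0, -1.0])
W = np.zeros((4, 4))
for _ in range(50):                  # repeated stimulation imprints the pattern
    W = hlrpf_step(W, pattern)
peak = float(np.abs(W).max())
for _ in range(200):                 # no stimulation: weights passively fade
    W = hlrpf_step(W, np.zeros(4))
faded = float(np.abs(W).max())
```

The decay term is what makes the classifier "fuzzy-like": weights track recent stimulation statistics rather than storing patterns permanently.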
Chaotic neural network for learnable associative memory recall
NASA Astrophysics Data System (ADS)
Hsu, Charles C.; Szu, Harold H.
2003-04-01
We show that the Fuzzy Membership Function (FMF) is learnable with underlying chaotic neural networks for the open-set probability. A sigmoid N-shaped function is used to generate chaotic signals. We postulate that such a chaotic set of innumerable realizations forms an FMF, exemplified by fuzzy feature maps of eyes, nose, etc., for invariant face classification. The CNN with the FMF plays an important role in fast pattern recognition, in examples of both habituation and novelty detection. In order to reduce the computational complexity, a nearest-neighborhood weight connection is proposed. In addition, a novel timing-sequence weight-learning algorithm is introduced to increase the capacity and recall of the associative memory. For simplicity, a piecewise-linear (PWL) N-shaped function was designed, implemented, and fabricated in a CMOS chip.
Chaotic hopping between attractors in neural networks.
Marro, Joaquín; Torres, Joaquín J; Cortés, Jesús M
2007-03-01
We present a neurobiologically inspired stochastic cellular automaton whose state jumps with time between the attractors corresponding to a series of stored patterns. The jumping varies from regular to chaotic as the model parameters are modified. The resulting irregular behavior, which mimics the state of attention in which a system shows great adaptability to changing stimuli, is a consequence in the model of short-time presynaptic noise that induces synaptic depression. We discuss results from both a mean-field analysis and Monte Carlo simulations. PMID:17196366
An Improved Transiently Chaotic Neural Network with Application to the Maximum Clique Problems
NASA Astrophysics Data System (ADS)
Xu, Xinshun; Tang, Zheng; Wang, Jiahai
By analyzing the dynamic behaviors of the transiently chaotic neural network, we present an improved transiently chaotic neural network (TCNN) model for combinatorial optimization problems and test it on the maximum clique problem. Extensive simulations show that the improved model yields satisfactory results on both graphs from the DIMACS clique instances of the second DIMACS challenge and p-random graphs, and that it is superior to other algorithms in terms of solution quality and CPU time. Moreover, the improved model converges to saturated states in fewer steps than the original transiently chaotic neural network.
Sinusoidal modulation control method in a chaotic neural network
NASA Astrophysics Data System (ADS)
Zhang, Qihanyue; Xie, Xiaoping; Zhu, Ping; Chen, Hongping; He, Guoguang
2014-08-01
Chaotic neural networks (CNNs) have chaotic dynamic associative memory properties: the memory states appear non-periodically and do not converge to a stored pattern. Thus, it is necessary to control chaos in a CNN in order to realize associative memory recall. In this paper, a novel control method, the sinusoidal modulation control method, is proposed to control chaos in a CNN. In this method, a sinusoidal wave, simplified from brain waves, is used as a control signal to modulate a parameter of the CNN. Simulation results demonstrate the effectiveness of this control method, and the controlled CNN can be applied to information processing. Moreover, the method provides a way to associate brain waves with the control of CNNs.
NASA Astrophysics Data System (ADS)
Bahi, Jacques M.; Couchot, Jean-François; Guyeux, Christophe; Salomon, Michel
2012-03-01
Many research works deal with chaotic neural networks for various fields of application. Unfortunately, up to now these networks have usually been claimed to be chaotic without any mathematical proof. The purpose of this paper is to establish, within a rigorous theoretical framework, an equivalence between chaotic iterations in the sense of Devaney and a particular class of neural networks. On the one hand, we show how to build such a network; on the other hand, we provide a method to check whether a given neural network is chaotic. Finally, the ability of classical feedforward multilayer perceptrons to learn sets of data obtained from a dynamical system is investigated. Various Boolean functions are iterated on finite states, and the iterations of some of them are proven to be chaotic in the sense of Devaney. In that context, important differences appear in the training process, establishing with various neural networks that chaotic behaviors are far more difficult to learn.
A simple chaotic neuron model: stochastic behavior of neural networks.
Aydiner, Ekrem; Vural, Adil M; Ozcelik, Bekir; Kiymac, Kerim; Tan, Uner
2003-05-01
We have briefly reviewed the occurrence of post-synaptic potentials between neurons, the relationship between the EEG and neuron dynamics, as well as methods of signal analysis. We propose a simple stochastic model representing the electrical activity of neuronal systems. The model is constructed using the Monte Carlo simulation technique. The results yielded EEG-like signals with their phase portraits in three-dimensional space. The Lyapunov exponent was positive, indicating chaotic behavior. The correlation of the EEG-like signals was 0.92, smaller than the values reported by others. It was concluded that this neuron model may provide valuable clues about the dynamic behavior of neural systems. PMID:12745622
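The chaos diagnostic used above, a positive Lyapunov exponent, can be illustrated on the simplest chaotic surrogate, the logistic map, where the exponent is the orbit average of log|f'(x)|; for r = 4 the exact value is ln 2. This is a generic illustration, not the paper's model.

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, n=20000, burn=100):
    """Largest Lyapunov exponent of x -> r x (1 - x), estimated as the
    orbit average of log|f'(x)|; a positive value indicates chaos."""
    x = x0
    for _ in range(burn):              # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        # guard against log(0) if the orbit lands exactly on x = 0.5
        total += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))
    return total / n
```

The same orbit-averaging idea, applied to a reconstructed flow rather than a known map, underlies Lyapunov-exponent estimates from EEG-like time series.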
NASA Astrophysics Data System (ADS)
Potapov, A.; Ali, M. K.
2001-04-01
We consider the problem of stabilizing unstable equilibria by discrete controls (the controls take discrete values at discrete moments of time). We prove that discrete control typically creates a chaotic attractor in the vicinity of an equilibrium. Artificial neural networks with reinforcement learning are known to be able to learn such a control scheme. We consider examples of such systems, discuss some details of applying reinforcement learning to the control of unstable equilibria, and show that the resulting dynamics are characterized by positive Lyapunov exponents and hence are chaotic. This chaos can be observed both in the controlled system and in the activity patterns of the controller.
Kwok, T; Smith, K A
2000-09-01
The aim of this paper is to study both the theoretical and experimental properties of chaotic neural network (CNN) models for solving combinatorial optimization problems. Previously we have proposed a unifying framework which encompasses the three main model types, namely, Chen and Aihara's chaotic simulated annealing (CSA) with decaying self-coupling, Wang and Smith's CSA with decaying timestep, and the Hopfield network with chaotic noise. Each of these models can be represented as a special case under the framework for certain conditions. This paper combines the framework with experimental results to provide new insights into the effect of the chaotic neurodynamics of each model. By solving the N-queen problem of various sizes with computer simulations, the CNN models are compared in different parameter spaces, with optimization performance measured in terms of feasibility, efficiency, robustness and scalability. Furthermore, characteristic chaotic neurodynamics crucial to effective optimization are identified, together with a guide to choosing the corresponding model parameters. PMID:11152205
Iterative prediction of chaotic time series using a recurrent neural network
Essawy, M.A.; Bodruzzaman, M.; Shamsi, A.; Noel, S.
1996-12-31
Chaotic systems are known for their unpredictability due to their sensitive dependence on initial conditions. When only time series measurements from such systems are available, neural network based models are preferred due to their simplicity, availability, and robustness. However, the type of neural network used should be capable of modeling the highly non-linear behavior and the multi-attractor nature of such systems. In this paper the authors use a special type of recurrent neural network called the Dynamic System Imitator (DSI), which has been proven capable of modeling very complex dynamic behaviors. The DSI is a fully recurrent neural network that is specially designed to model a wide variety of dynamic systems. The prediction method presented in this paper is based upon predicting one step ahead in the time series, and using that predicted value to iteratively predict the following steps. This method was applied to chaotic time series generated from the logistic, Henon, and cubic equations, in addition to experimental pressure drop time series measured from a Fluidized Bed Reactor (FBR), which is known to exhibit chaotic behavior. The time behavior and state space attractors of the actual and network-synthesized chaotic time series were analyzed and compared. The correlation dimension and the Kolmogorov entropy for both the original and network-synthesized data were computed and found to resemble each other, confirming the success of the DSI-based chaotic system modeling.
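The DSI network itself is not reproduced here, but the closed-loop forecasting procedure the authors describe is model-agnostic and can be sketched with any one-step predictor standing in for the trained network:

```python
import math

def iterate_forecast(model, history, horizon):
    """Predict one step ahead, append the prediction, and feed it back
    as input for the next step (closed-loop / iterated forecasting)."""
    window = list(history)
    preds = []
    for _ in range(horizon):
        y = model(window)
        preds.append(y)
        window = window[1:] + [y]
    return preds

# demo with an exact one-step model of the logistic map x -> 3.9 x (1 - x);
# a trained recurrent network would take the model's place in practice
one_step = lambda w: 3.9 * w[-1] * (1.0 - w[-1])
true_series = [0.5]
for _ in range(5):
    true_series.append(3.9 * true_series[-1] * (1.0 - true_series[-1]))
preds = iterate_forecast(one_step, true_series[:1], 5)
```

With an imperfect model, sensitive dependence on initial conditions makes the fed-back errors grow exponentially, which is why such iterated forecasts of chaotic series are evaluated by attractor invariants (correlation dimension, Kolmogorov entropy) rather than pointwise accuracy alone.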
Neuron-synapse IC chip-set for large-scale chaotic neural networks.
Horio, Y; Aihara, K; Yamamoto, O
2003-01-01
We propose a neuron-synapse integrated circuit (IC) chip-set for large-scale chaotic neural networks. We use switched-capacitor (SC) circuit techniques to implement a three-internal-state transiently chaotic neural network model. The SC chaotic neuron chip faithfully reproduces complex chaotic dynamics in real numbers through the continuous state variables of the analog circuitry. Most of the model parameters can be controlled digitally by means of programmable capacitive arrays embedded in the SC chaotic neuron chip. Since the output of the neuron is transferred into a digital pulse according to the all-or-nothing property of an axon, we design the synapse chip with digital circuits. We propose a memory-based synapse circuit architecture to achieve rapid calculation of a vast number of weighted summations. Both the SC neuron and the digital synapse circuits have been fabricated as ICs. We have tested these IC chips extensively and confirmed the functions and performance of the chip-set. The proposed neuron-synapse IC chip-set makes it possible to construct a scalable and reconfigurable large-scale chaotic neural network with 10000 neurons and 10000^2 synaptic connections. PMID:18244585
Zhang, Guodong; Shen, Yi
2015-07-01
This paper is concerned with the global exponential stabilization of memristor-based chaotic neural networks with both time-varying delays and general activation functions. We adopt nonsmooth analysis and control theory to handle memristor-based chaotic neural networks with a discontinuous right-hand side. In particular, several new sufficient conditions ensuring exponential stabilization of memristor-based chaotic neural networks are obtained via periodically intermittent control. In addition, the proposed results are easy to verify and extend earlier published results. Finally, numerical simulations illustrate the effectiveness of the obtained results. PMID:25148672
Lalitha, V; Eswaran, C
2007-12-01
Monitoring the depth of anesthesia (DOA) during surgery is very important in order to avoid patients' intraoperative awareness. Since the traditional methods of assessing DOA, which involve monitoring heart rate, pupil size, sweating, etc., may vary from patient to patient depending on the type of surgery and the drug administered, modern methods based on the electroencephalogram (EEG) are preferred. Since the EEG is a nonlinear signal, it is appropriate to use nonlinear chaotic parameters to identify anesthetic depth levels. This paper discusses an automated method for detecting anesthetic depth levels from EEG recordings using nonlinear chaotic features and neural network classifiers. Three nonlinear parameters, namely correlation dimension (CD), Lyapunov exponent (LE), and Hurst exponent (HE), are used as features, and two neural network models, a multi-layer perceptron network (feedforward model) and an Elman network (feedback model), are used for classification. The neural network models are trained and tested with single and multiple features derived from the chaotic parameters, and their performance is evaluated in terms of sensitivity, specificity, and overall accuracy. The experimental results show that the Lyapunov exponent feature with the Elman network yields an overall accuracy of 99% in detecting anesthetic depth levels. PMID:18041276
Hybrid information privacy system: integration of chaotic neural network and RSA coding
NASA Astrophysics Data System (ADS)
Hsu, Ming-Kai; Willey, Jeff; Lee, Ting N.; Szu, Harold H.
2005-03-01
Electronic mail is adopted worldwide, and most messages are easily hacked. In this paper, we propose a free, fast, and convenient hybrid privacy system to protect email communication. The privacy system is implemented by combining the RSA algorithm with a specific chaotic neural network encryption process. The receiver can decrypt a received email as long as it can reproduce the specified chaotic neural network series, the so-called spatial-temporal keys. The chaotic typing and initial seed value of the chaotic neural network series, encrypted by the RSA algorithm, can reproduce the spatial-temporal keys. The encrypted chaotic typing and initial seed value are hidden in a watermark mixed nonlinearly with the message media and wrapped with convolutional error-correction codes for wireless third-generation cellular phones. The message media can be an arbitrary image. Pattern noise has to be considered during transmission, since it could affect or change the spatial-temporal keys. Since any change or modification of the chaotic typing or initial seed value of the chaotic neural network series is unacceptable, the RSA codec system must be robust and fault-tolerant over a wireless channel. The robustness and fault tolerance of chaotic neural networks (CNN) were proved by a field theory of associative memory by Szu in 1997. The 1-D chaos-generating nodes derived from the logistic map with arbitrarily negative slope a = p/q, generating the N-shaped sigmoid, were first given by Szu in 1992. In this paper, we simulate the robustness and fault tolerance of the CNN under additive noise and pattern noise. We also implement a private version of RSA coding and the chaos encryption process on messages.
Mackey-Glass noisy chaotic time series prediction by a swarm-optimized neural network
NASA Astrophysics Data System (ADS)
López-Caraballo, C. H.; Salfate, I.; Lazzús, J. A.; Rojas, P.; Rivera, M.; Palma-Chilla, L.
2016-05-01
In this study, an artificial neural network (ANN) based on particle swarm optimization (PSO) was developed for time series prediction. The hybrid ANN+PSO algorithm was applied to the noiseless Mackey-Glass chaotic time series for short-term and long-term prediction. The prediction performance is evaluated and compared with similar work in the literature, particularly for the long-term forecast. We also present properties of the dynamical system via a study of the chaotic behaviour recovered from the time series prediction. This standard hybrid ANN+PSO algorithm was then complemented with a Gaussian stochastic procedure (called stochastic hybrid ANN+PSO) in order to obtain a new estimator of the predictions, which also allowed us to compute prediction uncertainties for noisy Mackey-Glass chaotic time series. We study the impact of noise for three cases, with white noise levels (σ_N) of 0.01, 0.05, and 0.1.
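For reference, the Mackey-Glass benchmark series used above can be generated by forward-Euler integration of the delay equation dx/dt = beta*x(t-tau)/(1 + x(t-tau)^10) - gamma*x(t), where tau = 17 is the standard mildly chaotic setting; the step size and constant initial history below are common conventions, not taken from the paper.

```python
def mackey_glass(n, dt=0.1, tau=17.0, beta=0.2, gamma=0.1, x0=1.2):
    """Forward-Euler integration of the Mackey-Glass delay equation,
    keeping past values in a list to supply the delayed term."""
    lag = int(round(tau / dt))
    xs = [x0] * (lag + 1)              # constant history on [-tau, 0]
    for _ in range(n):
        x_del = xs[-lag - 1]           # x(t - tau)
        x_now = xs[-1]
        xs.append(x_now + dt * (beta * x_del / (1.0 + x_del**10) - gamma * x_now))
    return xs[lag + 1:]
```

Predictors such as the hybrid ANN+PSO model are typically trained on sliding windows of this series and judged on how far ahead the closed-loop forecast tracks the true trajectory.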
Takada, Ryu; Munetaka, Daigo; Kobayashi, Shoji; Suemitsu, Yoshikazu; Nara, Shigetoshi
2007-09-01
Chaotic dynamics in a recurrent neural network model and in two-dimensional cellular automata, both with finite but large degrees of freedom, are investigated from the viewpoint of harnessing chaos, and are applied to motion control to show that both have potential capabilities for complex function control by simple rule(s). An important point is that the chaotic dynamics generated in these two systems give us autonomous complex pattern dynamics itinerating through intermediate state points between embedded patterns (attractors) in high-dimensional state space. An application of these chaotic dynamics to complex control is proposed, based on the idea that complex problems can be solved by simple adaptive switching between a weakly chaotic regime and a strongly chaotic regime. As an actual example, a two-dimensional maze, whose spatial structure is a typical ill-posed problem, is solved with the use of chaos in both systems. Our computer simulations show that the success rate over 300 trials is much better than that of a random number generator. Our functional simulations indicate that, from the viewpoint of harnessing chaos, the two systems are almost equivalent in their functional aspects. PMID:19003512
Effects of correlation among stored patterns on associative dynamics of chaotic neural network
NASA Astrophysics Data System (ADS)
Iwai, Toshiya; Matsuzaki, Fuminari; Kuroiwa, Jousuke; Miyake, Shogo
2005-12-01
We numerically investigate the effects of correlation among stored patterns on the associative dynamics of a chaotic neural network model. The model has two kinds of parameters: one is a measure of the Hopfield-like behavior of the retrieval process, and the other controls the chaotic behavior. The parameter dependence of the associative dynamics is also examined. The following results are found. (i) The two-dimensional parameter space is divided into two kinds of associative states by a distinct boundary: one is the Hopfield-like retrieval state, and the other is the wandering state of the associative dynamics, in which the network retrieves stored patterns and their reverse patterns. (ii) The area of the wandering state becomes larger as the degree of correlation increases. (iii) As the degree of correlation increases, both the recall ratio of correlated patterns and the transition frequency between correlated patterns increase in the wandering state. (iv) The whole region of the wandering state in the parameter space is not necessarily chaotic from the viewpoint of the Lyapunov dimension, but most of the wandering-state region is chaotic.
NASA Astrophysics Data System (ADS)
Shiino, Masatoshi; Fukai, Tomoki
1990-05-01
We propose a simple asymmetric neural network which exhibits chaotic motions in retrieval dynamics with a finite number of memory patterns. The characteristic feature of the model is that the synaptic couplings are designed in such a way that each neuron is given an exclusively excitatory or inhibitory function, i.e., a physiological constraint of the Dale hypothesis is taken into account. The updating rule of the neurons is assumed to be simple Markovian stochastic dynamics of the Little type (without time delay) in which the threshold for neuron firing is incorporated. Our analysis is based on the exact time evolution equations derived in the thermodynamic limit for the macroscopic pattern overlaps. It is shown that chaotic image retrieval can take place only when a finite amount of stochastic noise exists.
Bodruzzaman, M.; Essawy, M.A.
1996-03-31
Chaotic systems are known for their unpredictability due to their sensitive dependence on initial conditions. When only time series measurements from such systems are available, neural network based models are preferred due to their simplicity, availability, and robustness. However, the type of neural network used should be capable of modeling the highly non-linear behavior and the multi-attractor nature of such systems. In this paper we use a special type of recurrent neural network called the Dynamic System Imitator (DSI), which has been proven capable of modeling very complex dynamic behaviors. The DSI is a fully recurrent neural network that is specially designed to model a wide variety of dynamic systems. The prediction method presented in this paper is based upon predicting one step ahead in the time series, and using that predicted value to iteratively predict the following steps. This method was applied to chaotic time series generated from the logistic, Henon, and cubic equations, in addition to experimental pressure drop time series measured from a Fluidized Bed Reactor (FBR), which is known to exhibit chaotic behavior. The time behavior and state space attractors of the actual and network-synthesized chaotic time series were analyzed and compared. The correlation dimension and the Kolmogorov entropy for both the original and network-synthesized data were computed and found to resemble each other, confirming the success of the DSI-based chaotic system modeling.
Crop Classification by Forward Neural Network with Adaptive Chaotic Particle Swarm Optimization
Zhang, Yudong; Wu, Lenan
2011-01-01
This paper proposes a hybrid crop classifier for polarimetric synthetic aperture radar (SAR) images. The feature sets consist of the span image, the H/A/α decomposition, and gray-level co-occurrence matrix (GLCM) based texture features. The features were then reduced by principal component analysis (PCA). Finally, a two-hidden-layer forward neural network (NN) was constructed and trained by adaptive chaotic particle swarm optimization (ACPSO). K-fold cross validation was employed to enhance generalization. The experimental results on the Flevoland sites demonstrate the superiority of ACPSO to back-propagation (BP), adaptive BP (ABP), momentum BP (MBP), particle swarm optimization (PSO), and resilient back-propagation (RPROP) methods. Moreover, the computation time for each pixel is only 1.08 × 10^-7 s. PMID:22163872
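ACPSO itself (with its adaptive, chaotic parameter schedules) is not specified here, but the plain particle swarm optimizer it builds on is easy to sketch. In the paper's setting the objective would be the network's training error over its weights; the toy quadratic below is a stand-in.

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, the swarm shares a global best, and velocities blend both."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest

best, val = pso(lambda x: sum(t * t for t in x), dim=3)
```

Variants like ACPSO replace the fixed inertia weight w with an adaptive or chaotically varied schedule to balance exploration and convergence.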
Bodruzzaman, M.; Essawy, M.A.
1996-02-27
Pressurized fluidized-bed combustors (FBC) are becoming popular, efficient, and environmentally acceptable replacements for conventional boilers in coal-fired and chemical plants. In this paper, we present neural network-based methods for chaotic behavior monitoring and control in FBC systems, in addition to chaos analysis of FBC data in order to localize chaotic modes in them. Both the normal and abnormal mixing processes in FBC systems are known to undergo chaotic behavior. Even though this type of behavior is not always undesirable, it is a challenge to most types of conventional control methods due to its unpredictable nature. The performance, reliability, availability, and operating cost of an FBC system would be significantly improved if an appropriate control method were available to control its abnormal operation and switch it back to normal when it exists. Since this abnormal operation develops only at certain times, due to a sequence of transient behavior, an appropriate abnormal behavior monitoring method is also necessary. These methods have to be fast enough for on-line operation, so that control can be applied before the system reaches a point of no return in its transients. It was found that both normal and abnormal behavior of FBC systems are chaotic; however, the abnormal behavior has a higher-order chaos. Hence, the appropriate control system should be capable of switching the system behavior from its high-order chaos condition to low-order chaos. Note that most conventional chaos control methods are designed to switch a chaotic behavior to a periodic orbit; since this is not the goal for the FBC case, further developments are needed. We propose neural network-based control methods, which are known for their flexibility and capability to control both non-linear and chaotic systems. A special type of recurrent neural network, known as the Dynamic System Imitator (DSI), will be used for the monitoring and control purposes.
Learning feature constraints in a chaotic neural memory
NASA Astrophysics Data System (ADS)
Nara, Shigetoshi; Davis, Peter
1997-01-01
We consider a neural network memory model that has both nonchaotic and chaotic regimes. The chaotic regime occurs for reduced neural connectivity. We show that it is possible to adapt the dynamics in the chaotic regime, by reinforcement learning, to learn multiple constraints on feature subsets. This results in chaotic pattern generation that is biased to generate the feature patterns that have received responses. Depending on the connectivity, there can be additional memory pulling effects, due to the correlations between the constrained neurons in the feature subsets and the other neurons.
Bodruzzaman, M.
1996-10-30
We have developed techniques to control the chaotic behavior in Fluidized Bed Combustion (FBC) systems using recurrent neural networks. To compare the techniques we have developed with traditional chaotic system control methods, in the past three months we have been investigating the most popular and first known chaotic system control technique, the OGY method, developed by Edward Ott, Celso Grebogi and James Yorke in 1990. In the past few years this method has been further developed and applied by many researchers in the field, and it has been shown to have potential applications to a large cross-section of problems in many fields. The remaining question is whether it will prove possible to move from laboratory demonstrations on model systems to real-world situations of engineering importance. We have developed computer programs to compute the OGY parameters from a chaotic time series and to control a chaotic system to a desired periodic orbit using small perturbations to an accessible system parameter. We have tested those programs on the logistic map and the Henon map, and were able to control the chaotic behavior in such typical chaotic systems to periods 1, 2, 3, 5, ..., as shown in some sample results below. In the following sections, a brief discussion of the OGY method is given, followed by control results for the logistic and Henon maps.
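A minimal sketch of the OGY idea on the logistic map: wait until the chaotic orbit wanders close to the unstable fixed point, then apply a small, bounded perturbation to the parameter r chosen so the linearized dynamics land back on the fixed point. The activation window and perturbation bound below are illustrative choices, not the report's values.

```python
def ogy_logistic(r0=3.9, n=2000, max_dr=0.3, window=0.05, x0=0.4):
    """Stabilize the unstable fixed point x* = 1 - 1/r0 of x -> r x (1 - x)
    by small parameter perturbations (OGY-style control of period 1)."""
    xstar = 1.0 - 1.0 / r0
    dfdx = 2.0 - r0                    # f'(x*); |f'(x*)| > 1, so x* is unstable
    dfdr = xstar * (1.0 - xstar)       # df/dr evaluated at x*
    x, xs = x0, []
    for _ in range(n):
        dx = x - xstar
        if abs(dx) < window:           # control only near the fixed point
            dr = -dfdx * dx / dfdr     # cancel the linearized deviation
            dr = max(-max_dr, min(max_dr, dr))
            r = r0 + dr
        else:
            r = r0                     # otherwise let the chaotic orbit run free
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs, xstar
```

Ergodicity of the chaotic orbit guarantees it eventually enters the control window, after which the tiny parameter nudges hold it on the otherwise unstable orbit; targeting period 2, 3, 5, ... works the same way on fixed points of the iterated map.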
Bodruzzaman, M.; Essawy, M.A.
1996-07-30
We have developed techniques to control the chaotic behavior in Fluidized Bed Combustion (FBC) systems using Artificial Neural Networks (ANNs). For those techniques to cross from theory to implementation, the computer programs we are developing have to be interfaced with the outside world, as a necessary step towards the actual interface with an FBC system or its experimental mock-up. For this reason we are working on a data acquisition board setup that will enable communication between our programs and external systems. Communication is planned in both directions: feedback signals are delivered from a system to the control programs, and control signals are delivered from the control programs to the controlled system. On the other hand, since most of our programs are PC based, they have to follow the rapid progress in PC technology. Our programs were developed in the DOS environment using an early version of the Microsoft C compiler. For those programs to meet the current needs of most PC users, we are converting them to the Windows environment using a more advanced and up-to-date C++ compiler, Microsoft Visual C++ Version 4.0. This compiler enables the implementation of professional and sophisticated 32-bit Windows 95 applications. It also allows straightforward use of Object Oriented Programming (OOP) techniques, along with the powerful graphical and communication tools known as the Microsoft Foundation Classes (MFC), and it can create Dynamic Link Libraries (DLLs) that can be linked together or with other Windows programs. These two main aspects, the computer-system interface and the DOS-Windows migration, will move our programs a major step towards their real implementation.
Smith, Patrick I.
2003-09-23
Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of a particle, or where energy is deposited by the particle. The data produced by these signals are fed into pattern recognition programs to try to identify which particles were produced and to measure their energy and direction. Typically, many techniques are used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to integrate a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will be able to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results. Before implementing a neural network, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. The word artificial in that definition refers to computer programs that perform calculations during the learning process. In short, a neural network learns from representative examples. Perhaps the easiest way to describe how neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing
Hsieh, Chin-Tsung; Yau, Her-Terng; Wu, Shang-Yi; Lin, Huo-Cheng
2014-01-01
The collision fault detection of an XXY stage is proposed for the first time in this paper. The characteristic signals of the stage are extracted from its vibratory magnitude by signal filtering and imported into the master and slave chaos error systems. The trajectory diagram is made from the chaos synchronization dynamic error signals E1 and E2. The distance between the characteristic positive and negative centers of gravity, as well as the maximum and minimum distances of the trajectory diagram, are captured as the characteristics for fault recognition by observing the variation across the various signal trajectory diagrams. The matter-element model of normal status and collision status is built by an extension neural network. The correlation grade of the various fault statuses of the XXY stage was calculated for diagnosis. dSPACE is used for real-time analysis of stage fault status with an accelerometer sensor. Three stage fault statuses are detected in this study: normal status, Y collision fault, and X collision fault. It is shown that the scheme achieves at least a 75% diagnosis rate for collision faults of the XXY stage. As a result, the fault diagnosis system can be implemented using just one sensor, and consequently the hardware cost is significantly reduced. PMID:25405512
NASA Astrophysics Data System (ADS)
Gu, Huaguang
2013-06-01
The transition from chaotic bursting to chaotic spiking has been simulated and analyzed in theoretical neuronal models. In the present study, we report experimental observations in a neural pacemaker of a transition from chaotic bursting to chaotic spiking within a bifurcation scenario from period-1 bursting to period-1 spiking. This was induced by adjusting extracellular calcium or potassium concentrations. The bifurcation scenario began from period-doubling bifurcations or period-adding sequences of bursting pattern. This chaotic bursting is characterized by alternations between multiple continuous spikes and a long duration of quiescence, whereas chaotic spiking is comprised of fast, continuous spikes without periods of quiescence. Chaotic bursting changed to chaotic spiking as long interspike intervals (ISIs) of quiescence disappeared within bursting patterns, drastically decreasing both ISIs and the magnitude of the chaotic attractors. Deterministic structures of the chaotic bursting and spiking patterns are also identified by a short-term prediction. The experimental observations, which agree with published findings in theoretical neuronal models, demonstrate the existence and reveal the dynamics of a neuronal transition from chaotic bursting to chaotic spiking in the nervous system.
Kepler, T.B.
1989-01-01
After a brief introduction to the techniques and philosophy of neural network modeling by spin-glass-inspired systems, the author investigates several properties of these discrete models for autoassociative memory. Memories are represented as patterns of neural activity; their traces are stored in a distributed manner in the matrix of synaptic coupling strengths. Recall is dynamic: an initial state containing partial information about one of the memories evolves toward that memory. Activity in each neuron creates fields at every other neuron, the sum total of which determines its activity. By averaging over the space of interaction matrices, with memory constraints enforced by the choice of measure, he shows that there exist universality classes defined by families of field distributions and the associated network capacities. He demonstrates the dominant role played by the field distribution in determining the size of the domains of attraction and presents, in two independent ways, an expression for this size. He presents a class of convergent learning algorithms that improve upon known algorithms for producing such interaction matrices. He demonstrates that spurious states, or unexperienced memories, may be practically suppressed by the inducement of n-cycles and chaos. He investigates aspects of chaos in these systems, and then leaves discrete modeling to analyze chaotic behavior on a continuous-valued network realized in electronic hardware. In each section he combines analytical calculation and computer simulation.
NASA Technical Reports Server (NTRS)
Thakoor, Anil
1990-01-01
Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.
NASA Technical Reports Server (NTRS)
Baram, Yoram
1992-01-01
Report presents analysis of nested neural networks, consisting of interconnected subnetworks. Analysis based on simplified mathematical models more appropriate for artificial electronic neural networks, partly applicable to biological neural networks. Nested structure allows for retrieval of individual subpatterns. Requires fewer wires and connection devices than fully connected networks, and allows for local reconstruction of damaged subnetworks without rewiring entire network.
Ritter, G.X.; Sussner, P.
1996-12-31
The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron, or in performing the next layer of a neural network computation, involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
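The max-of-sums computation described above can be sketched in a few lines of Python. This is an illustrative sketch under our own toy values, not the authors' formulation:

```python
# Morphological layer: replace the usual sum of products with a max of sums.
# output_i = max_j (x_j + W[i][j]) -- nonlinear even before any thresholding.
def morph_layer(x, W):
    return [max(xj + wij for xj, wij in zip(x, row)) for row in W]

x = [2, 1]          # toy input vector (our example)
W = [[1, -2],       # weights of neuron 0
     [0,  3]]       # weights of neuron 1
out = morph_layer(x, W)   # [max(2+1, 1-2), max(2+0, 1+3)] = [3, 4]
```

Note that, unlike a linear layer, this computation is not affected by rescaling the weights and inputs jointly; it operates in the (max, +) algebra rather than the usual (+, *) one.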
Maximum hyperchaos in chaotic nonmonotonic neuronal networks
NASA Astrophysics Data System (ADS)
Shuai, J. W.; Chen, Z. X.; Liu, R. T.; Wu, B. X.
1997-07-01
Hyperchaos in chaotic nonmonotonic neuronal networks is discussed with computer simulations. Maximum chaos with all Lyapunov exponents positive is found not only in the present dissipative model with weak coupling connections between neurons, but also with some strong-coupling connections. Although the model presented is a noninvertible map, the information dimension of simple chaos still yields a good approximation to the Lyapunov dimension.
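As a reminder of how the quantities discussed here are obtained numerically, the largest Lyapunov exponent of a one-dimensional map can be estimated by averaging the log of the map's derivative along an orbit. A minimal sketch for the logistic map at r = 4, where the exact value is ln 2 (this example is ours, not from the paper):

```python
import math

# Estimate the Lyapunov exponent of x' = r*x*(1-x) at r = 4 by
# averaging log|f'(x_n)| = log|r*(1 - 2*x_n)| along a chaotic orbit.
r, x = 4.0, 0.1
for _ in range(100):              # discard the transient
    x = r * x * (1.0 - x)

N, acc = 20000, 0.0
for _ in range(N):
    acc += math.log(abs(r * (1.0 - 2.0 * x)))   # derivative at x_n
    x = r * x * (1.0 - x)                       # then advance the orbit
lam = acc / N                     # should approach ln 2 ~ 0.693
```

A positive value confirms exponential divergence of nearby orbits; in a coupled network, the same averaging is done along each direction of a Jacobian product to obtain the full Lyapunov spectrum.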
Evolution to a small-world network with chaotic units
NASA Astrophysics Data System (ADS)
Gong, P.; van Leeuwen, C.
2004-07-01
We investigated the mutually supporting role of chaotic activity and evolving structure in a complex network. An initially randomly coupled network with chaotic activation is adaptively rewired according to dynamic coherence between its units. The evolving network reaches a small-world structure. Meanwhile, collective network activity tends to an intermittent dynamic clustering regime. Spontaneous chaotic activity and adaptively evolving structure jointly enhance signal propagation capacity.
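The adaptive-rewiring scheme described above can be sketched as follows. This is a simplified illustration under our own assumptions (logistic-map units, coherence measured by instantaneous state distance, one rewiring every few steps), not the authors' exact model:

```python
import random

random.seed(1)
N, EPS = 20, 0.3
f = lambda x: 3.9 * x * (1.0 - x)          # chaotic unit dynamics

state = [random.random() for _ in range(N)]

# Start from a random undirected graph with a fixed number of edges.
edges = set()
while len(edges) < 40:
    i, j = random.sample(range(N), 2)
    edges.add(frozenset((i, j)))

def neighbors(i):
    return [next(iter(e - {i})) for e in edges if i in e]

def step():
    global state
    new = []
    for i in range(N):
        nb = neighbors(i)
        coupled = sum(f(state[j]) for j in nb) / len(nb) if nb else f(state[i])
        new.append((1 - EPS) * f(state[i]) + EPS * coupled)
    state = new

e0 = len(edges)
for t in range(200):
    step()
    if t % 10 == 0:                        # adaptive rewiring phase
        i = random.randrange(N)
        nb = neighbors(i)
        others = [j for j in range(N)
                  if j != i and frozenset((i, j)) not in edges]
        if not nb or not others:
            continue
        j = min(others, key=lambda j: abs(state[i] - state[j]))  # most coherent non-neighbor
        k = max(nb, key=lambda k: abs(state[i] - state[k]))      # least coherent neighbor
        edges.remove(frozenset((i, k)))
        edges.add(frozenset((i, j)))
```

Rewiring swaps one edge at a time, so the total number of connections is conserved while the topology drifts toward coherent clusters.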
NASA Technical Reports Server (NTRS)
Benediktsson, J. A.; Ersoy, O. K.; Swain, P. H.
1991-01-01
A neural network architecture called a consensual neural network (CNN) is proposed for the classification of data from multiple sources. Its relation to hierarchical and ensemble neural networks is discussed. CNN is based on the statistical consensus theory and uses nonlinearly transformed input data. The input data are transformed several times, and the different transformed data are applied as if they were independent inputs. The independent inputs are classified using stage neural networks and outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote-sensing data and geographic data are given.
A novel compound chaotic block cipher for wireless sensor networks
NASA Astrophysics Data System (ADS)
Tong, Xiao-Jun; Wang, Zhu; Liu, Yang; Zhang, Miao; Xu, Lianjie
2015-05-01
The nodes of a wireless sensor network (WSN) have limited computation and communication ability. Traditional encryption algorithms need large amounts of resources, so they cannot be applied to wireless sensor networks. To solve this problem, this paper proposes a block cipher algorithm for wireless sensor networks based on a compound chaotic map. The algorithm adopts a Feistel network and constructs a cubic round function incorporating a discretized chaotic map, with keys generated from the compound chaotic sequence. Security and performance tests show that the algorithm has high security, high efficiency, and low resource consumption, making this novel chaotic algorithm suitable for wireless sensor networks.
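A toy version of such a chaotic Feistel cipher can be sketched as follows. This is an illustrative sketch only: the 16-bit block size, logistic-map key schedule, and round function are our simplifications, not the compound chaotic cipher proposed in the paper:

```python
# Toy Feistel cipher on 16-bit blocks (two 8-bit halves). Round keys come
# from a logistic-map sequence; the round function discretizes a logistic map.
def round_keys(x0=0.7, r=3.99, rounds=8):
    keys, x = [], x0
    for _ in range(rounds):
        x = r * x * (1.0 - x)             # iterate the chaotic map
        keys.append(int(x * 256) & 0xFF)  # discretize to an 8-bit key
    return keys

def F(half, k):
    # Discretized chaotic round function (our choice, purely illustrative).
    y = ((half ^ k) + 1) / 257.0
    return int(4.0 * y * (1.0 - y) * 255.0) & 0xFF

def encrypt(L, R, keys):
    for k in keys:
        L, R = R, L ^ F(R, k)             # standard Feistel round
    return L, R

def decrypt(L, R, keys):
    for k in reversed(keys):
        L, R = R ^ F(L, k), L             # rounds undo in reverse order
    return L, R
```

The Feistel structure guarantees invertibility regardless of the round function, which is what lets an arbitrary (even non-invertible) chaotic map serve as F.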
Exploring neural network technology
Naser, J.; Maulbetsch, J.
1992-12-01
EPRI is funding several projects to explore neural network technology, a form of artificial intelligence that some believe may mimic the way the human brain processes information. This research seeks to provide a better understanding of fundamental neural network characteristics and to identify promising utility industry applications. Results to date indicate that the unique attributes of neural networks could lead to improved monitoring, diagnostic, and control capabilities for a variety of complex utility operations. 2 figs.
Intrinsic adaptation in autonomous recurrent neural networks.
Marković, Dimitrije; Gros, Claudius
2012-02-01
A massively recurrent neural network responds on one side to input stimuli and is autonomously active, on the other side, in the absence of sensory inputs. Stimulus and information processing depend crucially on the quality of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flows, which are quite insensitive to external stimuli, interceded by chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics. PMID:22091667
Traffic chaotic dynamics modeling and analysis of deterministic network
NASA Astrophysics Data System (ADS)
Wu, Weiqiang; Huang, Ning; Wu, Zhitao
2016-07-01
Network traffic is an important and direct factor in network reliability and performance. To understand the behavior of network traffic, chaotic dynamics models have been proposed and have helped greatly in analyzing nondeterministic networks. Previous research held that chaotic behavior was caused by random factors, and that deterministic networks, lacking such factors, would not exhibit it. In this paper, we first adopted chaos theory to analyze traffic data collected from a typical deterministic network testbed, avionics full-duplex switched Ethernet (AFDX), and found that chaotic behavior also exists in deterministic networks. Then, to explore the mechanism that generates the chaos, we applied mean field theory to construct a traffic dynamics equation (TDE) for deterministic network traffic modeling, without any random network factors. By studying the derived TDE, we propose that chaotic dynamics is an intrinsic property of network traffic, and that it can be viewed as the effect of the TDE control parameters. A network simulation was performed, and the results verified that network congestion produces the chaotic dynamics in a deterministic network, in agreement with the TDE's predictions. Our research should help in analyzing the complicated dynamics of deterministic network traffic and contribute to network reliability design and analysis.
Neural networks for aircraft control
NASA Technical Reports Server (NTRS)
Linse, Dennis
1990-01-01
Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.
Critical Branching Neural Networks
ERIC Educational Resources Information Center
Kello, Christopher T.
2013-01-01
It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…
NASA Technical Reports Server (NTRS)
Padgett, Mary L.; Desai, Utpal; Roppel, T.A.; White, Charles R.
1993-01-01
A design procedure is suggested for neural networks which accommodates the inclusion of such knowledge-based systems techniques as fuzzy logic and pairwise comparisons. The use of these procedures in the design of applications combines qualitative and quantitative factors with empirical data to yield a model with justifiable design and parameter selection procedures. The procedure is especially relevant to areas of back-propagation neural network design which are highly responsive to the use of precisely recorded expert knowledge.
Gallery of Chaotic Attractors Generated by Fractal Network
NASA Astrophysics Data System (ADS)
Bouallegue, Kais
During the last decade, fractal processes and chaotic systems have been widely studied in many areas of research. Chaotic systems are highly dependent on initial conditions: small changes in initial conditions can generate widely diverging or converging outcomes, for both bifurcation and attraction, in chaotic systems. In this work, we present a new method for generating a new family of chaotic attractors by combining chaotic systems with a network of fractal processes. The approach proposed in this article is based upon the construction of a new system of fractal processes.
Neural network representation and learning of mappings and their derivatives
NASA Technical Reports Server (NTRS)
White, Halbert; Hornik, Kurt; Stinchcombe, Maxwell; Gallant, A. Ronald
1991-01-01
Discussed here are recent theorems proving that artificial neural networks are capable of approximating an arbitrary mapping and its derivatives as accurately as desired. This fact forms the basis for further results establishing the learnability of the desired approximations, using results from non-parametric statistics. These results have potential applications in robotics, chaotic dynamics, control, and sensitivity analysis. An example involving learning the transfer function and its derivatives for a chaotic map is discussed.
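The kind of approximation result discussed here can be illustrated by fitting a one-hidden-layer network to a chaotic map with plain gradient descent. This is our own minimal example, not the authors' construction; architecture, learning rate, and iteration count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: the chaotic logistic map f(x) = 4x(1-x) on [0, 1].
X = np.linspace(0.0, 1.0, 64)[:, None]
Y = 4.0 * X * (1.0 - X)

# One-hidden-layer tanh network trained by full-batch gradient descent.
H = 16
W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, out0 = forward(X)
loss0 = float(np.mean((out0 - Y) ** 2))

lr = 0.05
for _ in range(3000):
    h, out = forward(X)
    err = out - Y                        # gradient of the squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)     # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, out1 = forward(X)
loss1 = float(np.mean((out1 - Y) ** 2))
```

Because tanh is smooth, the trained network's derivative (obtainable by the chain rule through the same weights) also approximates f'(x) = 4 - 8x, which is the property the cited theorems make precise.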
Chaotic Modes in Scale Free Opinion Networks
NASA Astrophysics Data System (ADS)
Kusmartsev, Feo V.; Kürten, Karl E.
2010-12-01
In this paper, we investigate processes associated with the formation of public opinion in various directed random, scale-free, and small-world social networks. An important factor in opinion formation is the existence of contrarians, which were discovered by Granovetter in various social psychology experiments1,2,3 long ago and later introduced into sociophysics by Galam.4 When the density of contrarians increases, the system behavior changes drastically at some critical value. At high contrarian density the system can never reach a consensus state and oscillates periodically, with periods that depend on the specific structure of the network. At low contrarian density the behavior is manifold and depends primarily on the initial state of the system. If the majority of the population initially agrees, a state of stable majority may easily be reached; however, when the population is initially divided into nearly equal parts, consensus can never be reached. We model the emergence of collective decision making by considering N interacting agents whose opinions are described by a two-state Ising spin variable associated with YES and NO. We show that the dynamical behavior is very sensitive not only to the density of contrarians but also to the network topology. We find that a phase of social chaos may arise in various dynamical processes of opinion formation in many realistic models. We compare the predictions of the theory with data describing the dynamics of the average opinion of the USA population, collected on a day-by-day basis by various media sources during the last six months before the final Obama-McCain election. The qualitative outcome is in reasonable agreement with the prediction of our theory. In fact, analyses of these data within the paradigm of our theory indicate that even in this campaign there were chaotic elements in which public opinion migrated in an unpredictable, chaotic way. The existence of such a phase
Hyperbolic Hopfield neural networks.
Kobayashi, M
2013-02-01
In recent years, several neural networks using Clifford algebra have been studied. Clifford algebra is also called geometric algebra. Complex-valued Hopfield neural networks (CHNNs) are the most popular neural networks using Clifford algebra. The aim of this brief is to construct hyperbolic HNNs (HHNNs) as an analog of CHNNs. Hyperbolic algebra is a Clifford algebra based on Lorentzian geometry. In this brief, a hyperbolic neuron is defined in a manner analogous to a phasor neuron, which is a typical complex-valued neuron model. HHNNs share common concepts with CHNNs, such as the angle and energy. However, HHNNs and CHNNs are different in several aspects. The states of hyperbolic neurons do not form a circle, and, therefore, the start and end states are not identical. In the quantized version, unlike complex-valued neurons, hyperbolic neurons have an infinite number of states. PMID:24808287
NASA Technical Reports Server (NTRS)
Baram, Yoram
1988-01-01
Nested neural networks, consisting of small interconnected subnetworks, allow for the storage and retrieval of neural state patterns of different sizes. The subnetworks are naturally categorized by layers corresponding to spatial frequencies in the pattern field. The storage capacity and the error correction capability of the subnetworks generally increase with the degree of connectivity between layers (the nesting degree). Storage of only a few subpatterns in each subnetwork results in a vast storage capacity for patterns and subpatterns in the nested network, while maintaining high stability and error correction capability.
Spatial-temporal dynamics of chaotic behavior in cultured hippocampal networks.
Chen, Wenjuan; Li, Xiangning; Pu, Jiangbo; Luo, Qingming
2010-06-01
Using multiple nonlinear techniques, we revealed the existence of chaos in the spontaneous activity of neuronal networks in vitro. The spatial-temporal dynamics of these networks indicated that emergent transition between chaotic behavior and superburst occurred periodically in low-frequency oscillations. An analysis of network-wide activity indicated that chaos was synchronized among different sites. Moreover, we found that the degree of chaos increased as the number of active sites in the network increased during long-term development (over three months in vitro). The chaotic behavior of the dissociated networks had similar spatial-temporal characteristics (rapid transition, periodicity, and synchronization) as the intact brain; however, the degree of chaos depended on the number of active sites at the mesoscopic level. This work could provide insight into neural coding and neurocybernetics. PMID:20866436
Neural Networks and Micromechanics
NASA Astrophysics Data System (ADS)
Kussul, Ernst; Baidyk, Tatiana; Wunsch, Donald C.
The title of the book, "Neural Networks and Micromechanics," seems artificial. However, the scientific and technological developments in recent decades demonstrate a very close connection between the two different areas of neural networks and micromechanics. The purpose of this book is to demonstrate this connection. Some artificial intelligence (AI) methods, including neural networks, could be used to improve automation system performance in manufacturing processes. However, the implementation of these AI methods within industry is rather slow because of the high cost of conducting experiments using conventional manufacturing and AI systems. To lower the cost, we have developed special micromechanical equipment that is similar to conventional mechanical equipment but of much smaller size and therefore of lower cost. This equipment could be used to evaluate different AI methods in an easy and inexpensive way. The proved methods could be transferred to industry through appropriate scaling. In this book, we describe the prototypes of low cost microequipment for manufacturing processes and the implementation of some AI methods to increase precision, such as computer vision systems based on neural networks for microdevice assembly and genetic algorithms for microequipment characterization and the increase of microequipment precision.
Generalized Adaptive Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Tawel, Raoul
1993-01-01
Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.
Improved Autoassociative Neural Networks
NASA Technical Reports Server (NTRS)
Hand, Charles
2003-01-01
Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application-specific integrated circuit would be needed. A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
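The synchronous inner-product update described above can be sketched for a conventional autoassociative (Hopfield-type) network. Note that this illustrates the conventional network, not a nexus, and the stored pattern is our own example:

```python
# Store one 8-neuron bipolar pattern with a Hebbian outer product,
# then recall it from a corrupted probe in one synchronous update.
p = [1, -1, 1, 1, -1, -1, 1, -1]
n = len(p)
W = [[0 if i == j else p[i] * p[j] for j in range(n)] for i in range(n)]

def update(state):
    # Each neuron takes the sign of the inner product of the current
    # state with its row of weights -- this alone determines its new state.
    return [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

probe = list(p)
probe[2] = -probe[2]        # corrupt one bit
recalled = update(probe)    # one step restores the stored pattern
```

With a single stored pattern, every neuron's field points back toward the memory, so one step suffices; with many patterns the network iterates until the state vector settles into an attractor.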
NASA Technical Reports Server (NTRS)
Villarreal, James A.
1991-01-01
A whole new arena of computer technologies is now beginning to form. Still in its infancy, neural network technology is a biologically inspired methodology which draws on nature's own cognitive processes. The Software Technology Branch has provided a software tool, Neural Execution and Training System (NETS), to industry, government, and academia to facilitate and expedite the use of this technology. NETS is written in the C programming language and can be executed on a variety of machines. Once a network has been debugged, NETS can produce a C source code which implements the network. This code can then be incorporated into other software systems. Described here are various software projects currently under development with NETS and the anticipated future enhancements to NETS and the technology.
Parallel processing neural networks
Zargham, M.
1988-09-01
A model for neural networks based on a particular kind of Petri net has been introduced. The model has been implemented in C and runs on the Sequent Balance 8000 multiprocessor, but it can be ported directly to other multiprocessor environments. The potential advantages of using Petri nets include: (1) the overall system is often easier to understand, due to the graphical and precise nature of the representation scheme; (2) the behavior of the system can be analyzed using Petri net theory. Although the Petri net is an obvious choice as a basis for the model, the basic Petri net definition is not adequate to represent the neuronal system; to eliminate certain inadequacies, more information has been added to the Petri net model. In the model, a token represents either a processor or a postsynaptic potential. Progress through a particular neural network is thus graphically depicted by the movement of the processor tokens through the Petri net.
Neural networks for triggering
Denby, B.; Campbell, M.; Bedeschi, F.; Chriss, N.; Bowers, C.; Nesti, F.
1990-01-01
Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.
Synchrony of small nonlinear networks in chaotic semiconductor lasers
NASA Astrophysics Data System (ADS)
Ohtsubo, Junji; Ozawa, Ryo; Nanbu, Masashi
2015-07-01
The dynamics and synchronization properties of coupled chaotic semiconductor laser networks are numerically studied. As network nodes, we consider a small number of nonlinear elements of semiconductor lasers. In relation to the networks in coupled synaptic neurons, the synchronization properties of systems and conditions for zero-lag synchronization between semiconductor lasers are investigated. It is proved that a common driving laser in the adjacent coupled nodes plays a crucial role in zero-lag synchronization in semiconductor laser networks.
Complex network synchronization of chaotic systems with delay coupling
Theesar, S. Jeeva Sathya; Ratnavelu, K.
2014-03-05
The study of complex networks enables us to understand the collective behavior of interconnected elements and has a wide range of real-time applications, from biology to laser dynamics. In this paper, synchronization of a complex network of chaotic systems has been studied. Every identical node in the complex network is assumed to be in Lur’e system form. In particular, delayed coupling has been assumed, along with identical sector-bounded nonlinear systems interconnected over a network topology.
Uniformly sparse neural networks
NASA Astrophysics Data System (ADS)
Haghighi, Siamack
1992-07-01
Application of neural networks to problems with a large number of sensory inputs is severely limited when the processing elements (PEs) need to be fully connected. This paper presents a new network model in which a trade-off between the number of connections to a node and the number of processing layers can be made. This trade-off is an important issue in the VLSI implementation of neural networks. The performance and capability of a hierarchical pyramidal network architecture of limited fan-in PE layers is analyzed. Analysis of this architecture requires the development of a new learning rule, since each PE has access to limited information about the entire network input. A spatially local unsupervised training rule is developed in which each PE optimizes the fraction of its output variance contributed by input correlations, resulting in PEs behaving as adaptive local correlation detectors. It is also shown that the output of a PE optimally represents the mutual information among the inputs to that PE. Applications of the developed model in image compression and motion detection are presented.
High-performance neural networks. [Neural computers
Dress, W.B.
1987-06-01
The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node "brain" with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.
Program Helps Simulate Neural Networks
NASA Technical Reports Server (NTRS)
Villarreal, James; Mcintire, Gary
1993-01-01
Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.
NASA Technical Reports Server (NTRS)
Villarreal, James A.; Shelton, Robert O.
1992-01-01
Concept of space-time neural network affords distributed temporal memory enabling such network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace synaptic-connection weights of conventional back-error-propagation neural network.
Accelerating Learning By Neural Networks
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad; Barhen, Jacob
1992-01-01
Electronic neural networks made to learn faster by use of terminal teacher forcing. Method of supervised learning involves addition of teacher forcing functions to excitations fed as inputs to output neurons. Initially, teacher forcing functions are strong enough to force outputs to desired values; subsequently, these functions decay with time. When learning successfully completed, terminal teacher forcing vanishes, and dynamics of neural network become equivalent to those of conventional neural network. Simulated neural network with terminal teacher forcing learned to produce close approximation of circular trajectory in 400 iterations.
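The decaying-forcing idea can be sketched on a single output neuron. This is a minimal illustration, not the authors' simulation: the exponential decay schedule, the one-neuron dynamics, and the error-driven weight update are assumptions chosen to show the forcing term dominating early and vanishing late.

```python
import math

# Terminal teacher forcing on one output neuron (illustrative sketch):
# a forcing term lam(t) * (target - y) is added to the neuron's input
# and decays over time, so it vanishes once learning has succeeded.
target = 0.8
y, w, x = 0.0, 0.0, 1.0   # output, weight, constant input
lr, dt = 0.5, 0.1
for step in range(400):
    lam = math.exp(-0.01 * step)            # teacher forcing decays with time
    dy = -y + w * x + lam * (target - y)    # forced neuron dynamics
    y += dt * dy
    w += lr * (target - y) * x * dt         # simple error-driven weight update
```

Early on, lam is near 1 and the forcing pins the output close to the target; by the end lam is below 0.02 and the learned weight alone sustains the desired output.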
Metzler, R; Kinzel, W; Kanter, I
2000-08-01
Several scenarios of interacting neural networks which are trained either in an identical or in a competitive way are solved analytically. In the case of identical training, each perceptron receives the output of its neighbor. The symmetry of the stationary state as well as the sensitivity to the training algorithm used are investigated. Two competitive perceptrons trained on mutually exclusive learning aims, and a perceptron which is trained on the opposite of its own output, are examined analytically. An ensemble of competitive perceptrons is used as a decision-making algorithm in a model of a closed market (the El Farol Bar problem, or the Minority Game, in which a set of agents must each make a binary decision); each network is trained on the history of minority decisions. This ensemble of perceptrons relaxes to a stationary state whose performance can be better than random. PMID:11088736
Dynamic interactions in neural networks
Arbib, M.A.; Amari, S.
1989-01-01
The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.
Neural Network Based Monitoring and Control of Fluidized Bed.
Bodruzzaman, M.; Essawy, M.A.
1996-04-01
The goal of this project was to develop chaos analysis and neural-network-based modeling techniques and apply them to the pressure-drop data obtained from the Fluid Bed Combustion (FBC) system (a small-scale prototype model) located at the Federal Energy Technology Center (FETC)-Morgantown. The second goal was to develop neural-network-based chaos control techniques and provide a suggestive prototype for possible real-time application to the FBC system. The experimental pressure data were collected from a cold FBC experimental set-up at the Morgantown Center. We have performed several analyses on these data in order to unveil their dynamical and chaotic characteristics. The phase-space attractors were constructed from the one-dimensional time series data, using the time-delay embedding method, for both normal and abnormal conditions. Several identifying parameters were also computed from these attractors, such as the correlation dimension, the Kolmogorov entropy, and the Lyapunov exponents. These chaotic attractor parameters can be used to discriminate between the normal and abnormal operating conditions of the FBC system. It was found that the abnormal data have a higher correlation dimension, a larger Kolmogorov entropy, and larger positive Lyapunov exponents than the normal data. Chaotic system control using neural-network-based techniques was also investigated and compared to conventional chaotic system control techniques. Both types of chaotic system control techniques were applied to some typical chaotic systems such as the logistic, the Henon, and the Lorenz systems. A prototype model for real-time implementation of these techniques has been suggested to control the FBC system. These models can be implemented for real-time control in a next phase of the project after obtaining further measurements from the experimental model. After testing the control algorithms developed for the FBC model, the next step is to implement them on hardware and link them to
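The analysis pipeline described above, time-delay embedding followed by correlation-sum estimation, can be sketched compactly. This is an illustrative reconstruction, not the FBC code: a logistic-map series stands in for the pressure-drop data, and the embedding parameters are arbitrary choices.

```python
import numpy as np

# Time-delay embedding and correlation sum C(r) (illustrative sketch):
# the logistic map stands in for the measured pressure-drop series.
x = np.empty(1000)
x[0] = 0.4
for i in range(999):
    x[i + 1] = 3.9 * x[i] * (1.0 - x[i])

def embed(series, dim, tau):
    """Reconstruct a dim-dimensional attractor with delay tau."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])

def correlation_sum(pts, r):
    """Fraction of point pairs closer than r (self-pairs excluded)."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    n = len(pts)
    return (np.sum(d < r) - n) / (n * (n - 1))

pts = embed(x, dim=2, tau=1)
c_small, c_large = correlation_sum(pts, 0.05), correlation_sum(pts, 0.2)
```

The correlation dimension is then estimated from the slope of log C(r) versus log r over the scaling region; a similar procedure underlies the normal/abnormal discrimination described in the abstract.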
Neural network applications in telecommunications
NASA Technical Reports Server (NTRS)
Alspector, Joshua
1994-01-01
Neural network capabilities include automatic and organized handling of complex information, quick adaptation to continuously changing environments, nonlinear modeling, and parallel implementation. This viewgraph presentation presents Bellcore work on applications, learning chip computational function, learning system block diagram, neural network equalization, broadband access control, calling-card fraud detection, software reliability prediction, and conclusions.
Neural Networks for the Beginner.
ERIC Educational Resources Information Center
Snyder, Robin M.
Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…
Synchronization in complex dynamical networks coupled with complex chaotic system
NASA Astrophysics Data System (ADS)
Wei, Qiang; Xie, Cheng-Jun; Wang, Bo
2015-11-01
This paper investigates synchronization in complex dynamical networks with time delay and perturbation, in which each node is a complex chaotic system. A complex feedback controller is designed so that, when the network achieves synchronization, each component of the complex state variable synchronizes up to a different scaling complex function. The synchronization scaling function is thus extended from the real field to the complex field. Synchronization in complex dynamical networks with constant delay and with time-varying coupling delay is investigated, respectively. Numerical simulations show the effectiveness of the proposed method.
Neural Network Development Tool (NETS)
NASA Technical Reports Server (NTRS)
Baffes, Paul T.
1990-01-01
Artificial neural networks formed from hundreds or thousands of simulated neurons, connected in manner similar to that in human brain. Such network models learning behavior. Using NETS involves translating problem to be solved into input/output pairs, designing network configuration, and training network. Written in C.
The geometry of chaotic dynamics — a complex network perspective
NASA Astrophysics Data System (ADS)
Donner, R. V.; Heitzig, J.; Donges, J. F.; Zou, Y.; Marwan, N.; Kurths, J.
2011-12-01
Recently, several complex network approaches to time series analysis have been developed and applied to study a wide range of model systems as well as real-world data, e.g., geophysical or financial time series. Among these techniques, recurrence-based concepts, and prominently ɛ-recurrence networks, most faithfully represent the geometrical fine structure of the attractors underlying chaotic (and, less interestingly, non-chaotic) time series. In this paper we demonstrate that the well-known graph-theoretical properties of local clustering coefficient and global (network) transitivity can meaningfully be exploited to define two new local and two new global measures of dimension in phase space: local upper and lower clustering dimension, as well as global upper and lower transitivity dimension. Rigorous analytical as well as numerical results for self-similar sets and simple chaotic model systems suggest that these measures are well-behaved in most non-pathological situations and that they can be estimated reasonably well using ɛ-recurrence networks constructed from relatively short time series. Moreover, we study the relationship between clustering and transitivity dimensions on the one hand, and traditional measures like pointwise dimension or local Lyapunov dimension on the other hand. We also provide further evidence that the local clustering coefficients, or equivalently the local clustering dimensions, are useful for identifying unstable periodic orbits and other dynamically invariant objects from time series. Our results demonstrate that ɛ-recurrence networks exhibit an important link between dynamical systems and graph theory.
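The basic objects above, an ɛ-recurrence network and its clustering/transitivity statistics, can be built directly from an adjacency matrix. This is a minimal sketch under assumed parameters (logistic-map data, eps = 0.1), not the authors' estimators of the clustering and transitivity dimensions.

```python
import numpy as np

# eps-recurrence network (illustrative sketch): nodes are embedded state
# vectors; an edge links states closer than eps. Local clustering and
# global transitivity are read off the adjacency matrix.
x = np.empty(300)
x[0] = 0.4
for i in range(299):
    x[i + 1] = 3.9 * x[i] * (1.0 - x[i])   # chaotic logistic map as toy data
pts = np.column_stack([x[:-1], x[1:]])     # 2-d delay embedding

d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
A = ((d < 0.1) & (d > 0)).astype(float)    # recurrence adjacency, no self-loops

deg = A.sum(axis=1)
A3 = A @ A @ A
triangles = np.diag(A3) / 2.0              # triangles through each node
with np.errstate(divide="ignore", invalid="ignore"):
    local_C = np.where(deg > 1, triangles / (deg * (deg - 1) / 2.0), 0.0)
transitivity = float(np.trace(A3) / np.sum(deg * (deg - 1)))
```

The clustering/transitivity dimensions of the paper are then defined from how these quantities scale with eps; here only a single eps is evaluated.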
Neural networks for calibration tomography
NASA Technical Reports Server (NTRS)
Decker, Arthur
1993-01-01
Artificial neural networks are suitable for performing pattern-to-pattern calibrations. These calibrations are potentially useful for facilities operations in aeronautics, the control of optical alignment, and the like. Computed tomography is compared with neural net calibration tomography for estimating density from its x-ray transform. X-ray transforms are measured, for example, in diffuse-illumination, holographic interferometry of fluids. Computed tomography and neural net calibration tomography are shown to have comparable performance for a 10 degree viewing cone and 29 interferograms within that cone. The system of tomography discussed is proposed as a relevant test of neural networks and other parallel processors intended for use with flow visualization data.
Neural adaptive chaotic control with constrained input using state and output feedback
NASA Astrophysics Data System (ADS)
Gao, Shi-Gen; Dong, Hai-Rong; Sun, Xu-Bin; Ning, Bin
2015-01-01
This paper presents neural adaptive control methods for a class of chaotic nonlinear systems in the presence of constrained input and unknown dynamics. To attenuate the influence of constrained input caused by actuator saturation, an effective auxiliary system is constructed to prevent the stability of closed loop system from being destroyed. Radial basis function neural networks (RBF-NNs) are used in the online learning of the unknown dynamics, which do not require an off-line training phase. Both state and output feedback control laws are developed. In the output feedback case, high-order sliding mode (HOSM) observer is utilized to estimate the unmeasurable system states. Simulation results are presented to verify the effectiveness of proposed schemes. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA041701), the Fundamental Research Funds for Central Universities of China (Grant No. 2013JBZ007), the National Natural Science Foundation of China (Grant Nos. 61233001, 61322307, 61304196, and 61304157), and the Research Program of Beijing Jiaotong University, China (Grant No. RCS2012ZZ003).
Deinterlacing using modular neural network
NASA Astrophysics Data System (ADS)
Woo, Dong H.; Eom, Il K.; Kim, Yoo S.
2004-05-01
Deinterlacing is the process of converting an interlaced scan into a progressive one. While many previous algorithms based on weighted sums cause blurring in edge regions, deinterlacing using a neural network can reduce the blurring by recovering high-frequency components through a learning process, and is found to be robust to noise. In the proposed algorithm, the input image is divided into edge and smooth regions, and one neural network is assigned to each region. Through this process, each neural network learns only similar patterns, which makes learning more effective and estimation more accurate. But even within each region there are various patterns, such as long edges and texture in the edge region. To solve this problem, a modular neural network is proposed, in which two modules are combined at the output node. One module handles low-frequency features of the local area of the input image, and the other handles high-frequency features. With this structure, each module can learn different patterns while compensating for the drawbacks of its counterpart, so the network adapts effectively to the various patterns within each region. In simulations, the proposed algorithm shows better performance than conventional deinterlacing methods and a single-neural-network method.
Electric circuit networks equivalent to chaotic quantum billiards
Bulgakov, Evgeny N.; Maksimov, Dmitrii N.; Sadreev, Almas F.
2005-04-01
We consider two electric RLC resonance networks that are equivalent to quantum billiards. In a network of inductors grounded by capacitors, the eigenvalues of the quantum billiard correspond to the squared resonant frequencies. In a network of capacitors grounded by inductors, the eigenvalues of the billiard are given by the inverse of the squared resonant frequencies. In both cases, the local voltages play the role of the wave function of the quantum billiard. However, unlike in quantum billiards, heat is dissipated because of the resistance of the inductors. For the equivalent chaotic billiards, we derive a distribution of the heat power that describes the numerical statistics well.
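The inductor/capacitor correspondence can be checked on the simplest "billiard", a one-dimensional chain. In this hedged sketch (a chain rather than a chaotic 2-d billiard, with arbitrary L and C values), Kirchhoff's laws for an inductor network grounded by capacitors give C d²V/dt² = -(1/L) ΛV with Λ the graph Laplacian, so the squared resonant frequencies are the Laplacian eigenvalues divided by LC.

```python
import numpy as np

# LC-network resonances vs. "billiard" eigenvalues (illustrative sketch):
# a chain of inductors grounded by capacitors, fixed (Dirichlet) ends.
L_ind, C = 1e-3, 1e-6    # arbitrary inductance and capacitance values
n = 20
lap = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # chain Laplacian
eigvals = np.linalg.eigvalsh(lap)       # "billiard" eigenvalues
omega_sq = eigvals / (L_ind * C)        # squared resonant frequencies
```

For the chain, the Laplacian eigenvalues are known in closed form, 4 sin²(kπ / (2(n+1))), which provides a direct check of the numerics; a chaotic billiard would simply replace the chain by a 2-d discretized domain.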
Synchronization in the network of chaotic microwave oscillators
NASA Astrophysics Data System (ADS)
Moskalenko, O.; Phrolov, N.; Koronovskii, A.; Hramov, A.
2013-10-01
Time scale synchronization in networks of chaotic microwave oscillators with different topologies of the links between nodes has been studied. As the node element of the network, the one-dimensional distributed model of the low-voltage vircator has been used. To characterize the degree of synchronization in the whole network, a synchronization index has been introduced. The transition to the synchronous regime is shown to take place via cluster time scale synchronization. Meanwhile, the spectral structure of the output signals is sufficiently complicated, which allows such devices to be used in a number of practical applications.
Prediction and control of chaotic processes using nonlinear adaptive networks
Jones, R.D.; Barnes, C.W.; Flake, G.W.; Lee, K.; Lewis, P.S.; O'Rouke, M.K.; Qian, S.
1990-01-01
We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.
Criteria for stochastic pinning control of networks of chaotic maps
Mwaffo, Violet; Porfiri, Maurizio; DeLellis, Pietro
2014-03-15
This paper investigates the controllability of discrete-time networks of coupled chaotic maps through stochastic pinning. In this control scheme, the network dynamics are steered towards a desired trajectory through a feedback control input that is applied stochastically to the network nodes. The network controllability is studied by analyzing the local mean square stability of the error dynamics with respect to the desired trajectory. Through the analysis of the spectral properties of salient matrices, a toolbox of conditions for controllability is obtained, in terms of the dynamics of the individual maps, algebraic properties of the network, and the probability distribution of the pinning control. We demonstrate the use of these conditions in the design of a stochastic pinning control strategy for networks of Chirikov standard maps. To elucidate the applicability of the approach, we consider different network topologies and compare five different stochastic pinning strategies through extensive numerical simulations.
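The control scheme can be illustrated on a much simpler system than the Chirikov maps of the paper. In this hedged sketch, logistic maps on a ring are steered toward a reference orbit by a feedback input applied to each node with Bernoulli probability p; all parameters (p, gain K, coupling eps) are arbitrary illustrative choices.

```python
import numpy as np

# Stochastic pinning of coupled logistic maps (illustrative sketch):
# each node is pulled toward the target orbit s with probability p.
f = lambda x: 3.9 * x * (1.0 - x)

def run(p, steps=500, N=10, eps=0.1, K=1.0, seed=1):
    """Mean tracking error over the last 200 steps."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, N)
    s = 0.3                                          # desired trajectory seed
    errs = []
    for _ in range(steps):
        fx = f(x)
        nbr = 0.5 * (np.roll(fx, 1) + np.roll(fx, -1))
        y = (1 - eps) * fx + eps * nbr               # diffusive ring coupling
        pinned = (rng.random(N) < p).astype(float)   # Bernoulli pinning draw
        x = (1 - K * pinned) * y + K * pinned * f(s) # feedback toward target
        s = f(s)
        errs.append(np.mean(np.abs(x - s)))
    return float(np.mean(errs[-200:]))

err_pinned, err_free = run(p=0.8), run(p=0.0)
```

With frequent pinning the network locks onto the reference orbit; without control the chaotic nodes wander independently of it, which is the mean-square-stability contrast the paper's criteria formalize.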
Modular, Hierarchical Learning By Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Baldi, Pierre F.; Toomarian, Nikzad
1996-01-01
Modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks more structured than neural networks in which all neurons fully interconnected. These networks utilize general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, sparsity of modular units and connections, and fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning streamlined by imitating some aspects of biological neural networks.
Neural Networks for Readability Analysis.
ERIC Educational Resources Information Center
McEneaney, John E.
This paper describes and reports on the performance of six related artificial neural networks that have been developed for the purpose of readability analysis. Two networks employ counts of linguistic variables that simulate a traditional regression-based approach to readability. The remaining networks determine readability from "visual snapshots"…
Learning, Exploration and Chaotic Policies
NASA Astrophysics Data System (ADS)
Potapov, Alexei B.; Ali, M. K.
We consider different versions of exploration in reinforcement learning. As a test problem, we use navigation in a shortcut maze. It is shown that a chaotic ɛ-greedy policy may be as efficient as a random one. The best results were obtained with a model chaotic neuron. Therefore, the exploration strategy can be implemented in a deterministic learning system such as a neural network.
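The idea of a chaotic ɛ-greedy policy can be sketched by replacing the uniform random draws with iterates of a chaotic map, so exploration becomes fully deterministic. This is an illustrative construction, not the authors' model chaotic neuron; note also that logistic-map iterates are not uniformly distributed (their density peaks near 0 and 1), which is one reason the choice of chaotic element matters.

```python
import numpy as np

# Deterministic "chaotic epsilon-greedy" policy (illustrative sketch):
# logistic-map iterates replace the random number generator.
class ChaoticEpsGreedy:
    def __init__(self, n_actions, eps=0.1, seed_state=0.37):
        self.q = np.zeros(n_actions)   # action-value estimates
        self.eps = eps
        self.z = seed_state            # chaotic internal state

    def _chaos(self):
        self.z = 4.0 * self.z * (1.0 - self.z)   # logistic map iterate
        return self.z

    def act(self):
        if self._chaos() < self.eps:                       # explore branch
            return int(self._chaos() * len(self.q)) % len(self.q)
        return int(np.argmax(self.q))                      # greedy branch

policy = ChaoticEpsGreedy(n_actions=4)
actions = [policy.act() for _ in range(1000)]
explored = sum(a != 0 for a in actions)   # q is all-zero, so greedy picks 0
```

In a full agent, the q-values would be updated from maze rewards; here the all-zero q-table just makes explored (non-greedy) choices easy to count.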
Neural Networks Of VLSI Components
NASA Technical Reports Server (NTRS)
Eberhardt, Silvio P.
1991-01-01
Concept for design of electronic neural network calls for assembly of very-large-scale integrated (VLSI) circuits of few standard types. Each VLSI chip, which contains both analog and digital circuitry, used in modular or "building-block" fashion by interconnecting it in any of variety of ways with other chips. Feedforward neural network in typical situation operates under control of host computer and receives inputs from, and sends outputs to, other equipment.
Correlational Neural Networks.
Chandar, Sarath; Khapra, Mitesh M; Larochelle, Hugo; Ravindran, Balaraman
2016-02-01
Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based approaches and autoencoder (AE)-based approaches. CCA-based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE-based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA-based approaches outperform AE-based approaches for the task of transfer learning, they are not as scalable as the latter. In this work, we propose an AE-based approach, correlational neural network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than AE and CCA with respect to its ability to learn correlated common representations. We employ CorrNet for several cross-language tasks and show that the representations learned using it perform better than the ones learned using other state-of-the-art approaches. PMID:26654210
Patil, R.B.
1995-05-01
Traditional neural networks like multi-layered perceptrons (MLP) use example patterns, i.e., pairs of real-valued observation vectors (x, y), to approximate a function f(x) = y. To determine the parameters of the approximation, a special version of the gradient descent method called back-propagation is widely used. In many situations, observations of the input and output variables are not precise; instead, we usually have intervals of possible values. The imprecision could be due to the limited accuracy of the measuring instrument or could reflect genuine uncertainty in the observed variables. In such situations, the input and output data consist of mixed data types: intervals and precise numbers. Function approximation in interval domains is considered in this paper. We discuss a modification of the classical back-propagation learning algorithm for interval domains. Results are presented with simple examples demonstrating a few properties of nonlinear interval mapping, such as noise resistance and finding a set of solutions to the function approximation problem.
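The interval forward pass underlying such a network can be sketched for a single neuron. This is an illustrative sketch, not the paper's algorithm: a precise number is just a degenerate interval [a, a], interval-by-scalar multiplication picks the endpoint according to the weight's sign, and monotonicity of the sigmoid gives exact output bounds.

```python
import math

# Interval-valued forward pass for one sigmoid neuron (illustrative):
# inputs are [lo, hi] intervals; precise values are degenerate intervals.
def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def interval_neuron(inputs, weights, bias):
    lo = hi = bias
    for (a, b), w in zip(inputs, weights):
        lo += w * a if w >= 0 else w * b    # scalar-times-interval bounds
        hi += w * b if w >= 0 else w * a
    return sigmoid(lo), sigmoid(hi)         # sigmoid is increasing

# one interval input [0, 1] and one precise input 0.5
out_lo, out_hi = interval_neuron([(0.0, 1.0), (0.5, 0.5)], [2.0, -1.0], 0.1)
```

An interval back-propagation scheme would then differentiate a loss defined on these output bounds; only the forward bound computation is shown here.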
Neural-Network-Development Program
NASA Technical Reports Server (NTRS)
Phillips, Todd A.
1993-01-01
NETS, software tool for development and evaluation of neural networks, provides simulation of neural-network algorithms plus computing environment for development of such algorithms. Uses back-propagation learning method for all of networks it creates. Enables user to customize patterns of connections between layers of network. Also provides features for saving, during learning process, values of weights, providing more-precise control over learning process. Written in ANSI standard C language. Machine-independent version (MSC-21588) includes only code for command-line-interface version of NETS 3.0.
NASA Astrophysics Data System (ADS)
Semenova, N.; Zakharova, A.; Schöll, E.; Anishchenko, V.
2015-11-01
We analyze nonlocally coupled networks of identical chaotic oscillators with either time-discrete or time-continuous dynamics (Henon map, Lozi map, Lorenz system). We hypothesize that chimera states, in which spatial domains of coherent (synchronous) and incoherent (desynchronized) dynamics coexist, can be obtained only in networks of oscillators with nonhyperbolic chaotic attractors and cannot be found in networks of systems with hyperbolic chaotic attractors. This hypothesis is supported by analytical results and numerical simulations for hyperbolic and nonhyperbolic cases.
Chaotic burst synchronization in a two-small-world-layer neuronal network
NASA Astrophysics Data System (ADS)
Zheng, Yanhong; Wang, Haixia
2015-09-01
Chaotic burst synchronization in a two-small-world-layer neuronal network is studied in this paper. For a neuronal network formed by coupling two single-small-world-layer networks with different link probabilities, the two-layer network can achieve synchrony as the interlayer coupling strength increases. When a chaotic layer is coupled with a chaotic-burst-synchronization layer, the latter is dominant at small interlayer coupling strength, so it can make the layer with the irregular pattern become more regular and eventually exhibit the same pattern as the other layer. However, when a chaotic layer is coupled with a firing-synchronization layer, the ordered layer is dominated by the disordered one as the interlayer coupling strength increases. When the interlayer coupling strength is large enough, both networks reach chaotic burst synchronization. Therefore, the synchronous states depend strongly on the interlayer coupling strength and the link probability. Moreover, the spatiotemporal pattern synchronization between the networks is robust to small noise.
Origin of chaotic transients in excitatory pulse-coupled networks.
Zou, Hai-Lin; Li, Menghui; Lai, Choy-Heng; Lai, Ying-Cheng
2012-12-01
We develop an approach to understanding long chaotic transients in networks of excitatory pulse-coupled oscillators. Our idea is to identify a class of attractors, sequentially active firing (SAF) attractors, in terms of the temporal event structure of firing and receipt of pulses. Then all attractors can be classified into two groups: SAF attractors and non-SAF attractors. We establish that long transients typically arise in the transitional region of the parameter space where the SAF attractors are collectively destabilized. Bifurcation behavior of the SAF attractors is analyzed to provide a detailed understanding of the long irregular transients. Although demonstrated using pulse-coupled oscillator networks, our general methodology may be useful in understanding the origin of transient chaos in other types of networked systems, an extremely challenging problem in nonlinear dynamics and complex systems. PMID:23368031
Synchronisation and scaling properties of chaotic networks with multiple delays
NASA Astrophysics Data System (ADS)
D'Huys, Otti; Zeeb, Steffen; Jüngling, Thomas; Heiligenthal, Sven; Yanchuk, Serhiy; Kinzel, Wolfgang
2013-07-01
We study chaotic systems with multiple time delays that range over several orders of magnitude. We show that the spectrum of Lyapunov exponents (LEs) in such systems possesses a hierarchical structure, with different parts scaling with the different delays. This leads to different types of chaos, depending on the scaling of the maximal LE. Our results are relevant, in particular, for the synchronisation properties of hierarchical networks (networks of networks) where the nodes of subnetworks are coupled with shorter delays and couplings between different subnetworks are realised with longer delay times. Units within a subnetwork can synchronise if the maximal exponent scales with the shorter delay, long-range synchronisation between different subnetworks is only possible if the maximal exponent scales with the longer delay. The results are illustrated analytically for Bernoulli maps and numerically for tent maps and semiconductor lasers.
Multiprocessor Neural Network in Healthcare.
Godó, Zoltán Attila; Kiss, Gábor; Kocsis, Dénes
2015-01-01
A possible way of creating a multiprocessor artificial neural network is by the use of microcontrollers. The RISC processors' high performance and the large number of I/O ports mean they are greatly suitable for creating such a system. During our research, we wanted to see if it is possible to efficiently create interaction between the artificial neural network and the natural nervous system. To achieve as much analogy to the living nervous system as possible, we created a frequency-modulated analog connection between the units. Our system is connected to the living nervous system through 128 microelectrodes. Two-way communication is provided through A/D transformation, which is even capable of testing psychopharmacons. The microcontroller-based analog artificial neural network can play a great role in medical signal processing, such as ECG, EEG, etc. PMID:26152990
NASA Astrophysics Data System (ADS)
Zhang, Xiuping
In this paper, the weights of a neural network are updated using the Chaotic Imperialist Competitive Algorithm (CICA). A three-layered perceptron neural network is applied to predict the maximum worth of the stocks traded in Tehran's bourse market. We trained this neural network with the CICA, ICA, PSO, and GA algorithms and compared the results. Consideration of the results showed that the training and test errors of the network trained by the CICA algorithm are reduced in comparison with the other three methods.
Neural network ultrasound image analysis
NASA Astrophysics Data System (ADS)
Schneider, Alexander C.; Brown, David G.; Pastel, Mary S.
1993-09-01
Neural network based analysis of ultrasound image data was carried out on liver scans of normal subjects and those diagnosed with diffuse liver disease. In a previous study, ultrasound images from a group of normal volunteers, Gaucher's disease patients, and hepatitis patients were obtained by Garra et al., who used classical statistical methods to distinguish among these three classes. In the present work, neural network classifiers were employed with the same image features found useful in the previous study for this task. Both standard backpropagation neural networks and a recently developed biologically inspired network called Dystal were used. Classification performance as measured by the area under a receiver operating characteristic curve was generally excellent for the backpropagation networks and was roughly comparable to that of the classical statistical discriminators tested on the same data set and documented in the earlier study. Performance of the Dystal network was significantly inferior; however, this may be due to the choice of network parameters. Potential methods for enhancing network performance were identified.
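The figure of merit used above, the area under the ROC curve, can be computed directly as a Mann-Whitney rank statistic without tracing the curve. The scores below are toy values, not the study's classifier outputs.

```python
# AUC as the probability that a random positive outscores a random
# negative (ties count half) -- the Mann-Whitney U statistic, normalized.
def auc(scores_pos, scores_neg):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

a = auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])   # 8 of 9 pairs correctly ordered
```

A perfect classifier scores 1.0, chance scores 0.5, and swapping the two score lists gives 1 minus the original AUC.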
Stimulus-dependent suppression of chaos in recurrent neural networks
Rajan, Kanaka; Abbott, L. F.; Sompolinsky, Haim
2010-07-15
Neuronal activity arises from an interaction between ongoing firing generated spontaneously by neural circuits and responses driven by external stimuli. Using mean-field analysis, we ask how a neural network that intrinsically generates chaotic patterns of activity can remain sensitive to extrinsic input. We find that inputs not only drive network responses, but they also actively suppress ongoing activity, ultimately leading to a phase transition in which chaos is completely eliminated. The critical input intensity at the phase transition is a nonmonotonic function of stimulus frequency, revealing a 'resonant' frequency at which the input is most effective at suppressing chaos even though the power spectrum of the spontaneous activity peaks at zero and falls exponentially. A prediction of our analysis is that the variance of neural responses should be most strongly suppressed at frequencies matching the range over which many sensory systems operate.
Nonlinear signal processing using neural networks: Prediction and system modelling
Lapedes, A.; Farber, R.
1987-06-01
The backpropagation learning algorithm for neural networks is developed into a formalism for nonlinear signal processing. We illustrate the method by selecting two common topics in signal processing, prediction and system modelling, and show that nonlinear applications can be handled extremely well by using neural networks. The formalism is a natural, nonlinear extension of the linear Least Mean Squares algorithm commonly used in adaptive signal processing. Simulations are presented that document the additional performance achieved by using nonlinear neural networks. First, we demonstrate that the formalism may be used to predict points in a highly chaotic time series with orders of magnitude increase in accuracy over conventional methods including the Linear Predictive Method and the Gabor-Volterra-Wiener Polynomial Method. Deterministic chaos is thought to be involved in many physical situations including the onset of turbulence in fluids, chemical reactions and plasma physics. Second, we demonstrate the use of the formalism in nonlinear system modelling by providing a graphic example in which it is clear that the neural network has accurately modelled the nonlinear transfer function. It is interesting to note that the formalism provides explicit, analytic, global, approximations to the nonlinear maps underlying the various time series. Furthermore, the neural net seems to be extremely parsimonious in its requirements for data points from the time series. We show that the neural net is able to perform well because it globally approximates the relevant maps by performing a kind of generalized mode decomposition of the maps. 24 refs., 13 figs.
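A minimal sketch of the idea, not the authors' 1987 formalism: a one-hidden-layer network trained by plain backpropagation learns the one-step map of a chaotic series. The logistic map stands in for the chaotic data, and all sizes and learning rates are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Logistic map as a stand-in chaotic series: x_{t+1} = 4 x_t (1 - x_t)
x = np.empty(500)
x[0] = 0.3
for t in range(499):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
X, y = x[:-1].reshape(-1, 1), x[1:].reshape(-1, 1)

# One-hidden-layer network trained by full-batch gradient descent (backprop)
H, lr = 8, 0.05
W1 = rng.normal(0.0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

err0 = np.mean((forward(X)[1] - y) ** 2)      # error before training

for _ in range(5000):
    h, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)             # dMSE/dpred
    gh = (g @ W2.T) * (1.0 - h ** 2)          # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

err = np.mean((forward(X)[1] - y) ** 2)
print(err0, err)  # one-step prediction error falls sharply after training
```

The trained network is an explicit, global approximation to the underlying map, which is the property the abstract emphasizes.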
Plant Growth Models Using Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Bubenheim, David
1997-01-01
In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.
Centroid calculation using neural networks
NASA Astrophysics Data System (ADS)
Himes, Glenn S.; Inigo, Rafael M.
1992-01-01
Centroid calculation provides a means of eliminating translation problems, which is useful for automatic target recognition. A neural network implementation of centroid calculation is described that uses a spatial filter and a Hopfield network to determine the centroid location of an object. Spatial filtering of a segmented window creates a result whose peak value occurs at the centroid of the input data set. A Hopfield network then finds the location of this peak and hence gives the location of the centroid. Hardware implementations of the networks are described and simulation results are provided.
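For reference, the quantity the filter-plus-Hopfield pipeline estimates is the intensity-weighted centroid of the segmented window. A direct NumPy computation of that quantity (not the network implementation described above):

```python
import numpy as np

def centroid(window):
    """Intensity-weighted centroid (row, col) of a segmented image window."""
    w = np.asarray(window, dtype=float)
    rows, cols = np.indices(w.shape)
    total = w.sum()
    return (rows * w).sum() / total, (cols * w).sum() / total

# A 3x3 block of ones centred at (row=4, col=5) in an 8x10 window
img = np.zeros((8, 10))
img[3:6, 4:7] = 1.0
print(centroid(img))  # -> (4.0, 5.0)
```

Shifting the blob shifts the centroid by the same amount, which is exactly the translation information the recognition stage needs removed.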
Neural Networks for Flight Control
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1996-01-01
Neural networks are being developed at NASA Ames Research Center to permit real-time adaptive control of time varying nonlinear systems, enhance the fault-tolerance of mission hardware, and permit online system reconfiguration. In general, the problem of controlling time varying nonlinear systems with unknown structures has not been solved. Adaptive neural control techniques show considerable promise and are being applied to technical challenges including automated docking of spacecraft, dynamic balancing of the space station centrifuge, online reconfiguration of damaged aircraft, and reducing cost of new air and spacecraft designs. Our experiences have shown that neural network algorithms solved certain problems that conventional control methods have been unable to effectively address. These include damage mitigation in nonlinear reconfiguration flight control, early performance estimation of new aircraft designs, compensation for damaged planetary mission hardware by using redundant manipulator capability, and space sensor platform stabilization. This presentation explored these developments in the context of neural network control theory. The discussion began with an overview of why neural control has proven attractive for NASA application domains. The more important issues in control system development were then discussed with references to significant technical advances in the literature. Examples of how these methods have been applied were given, followed by projections of emerging application needs and directions.
Neural networks and applications tutorial
NASA Astrophysics Data System (ADS)
Guyon, I.
1991-09-01
The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems, and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances of computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation has spurred renewed interest in artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), and parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real-world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures, which were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.
Artificial neural networks in medicine
Keller, P.E.
1994-07-01
This Technology Brief provides an overview of artificial neural networks (ANN). A definition and explanation of an ANN is given and situations in which an ANN is used are described. ANN applications to medicine specifically are then explored and the areas in which it is currently being used are discussed. Included are medical diagnostic aides, biochemical analysis, medical image analysis and drug development.
Neural networks for handwriting recognition
NASA Astrophysics Data System (ADS)
Kelly, David A.
1992-09-01
The market for a product that can read handwritten forms, such as insurance applications, re-order forms, or checks, is enormous. Companies could save millions of dollars each year if they had an effective and efficient way to read handwritten forms into a computer without human intervention. Urged on by the potential gold mine that an adequate solution would yield, a number of companies and researchers have developed, and are developing, neural network-based solutions to this long-standing problem. This paper briefly outlines the current state-of-the-art in neural network-based handwriting recognition research and products. The first section of the paper examines the potential market for this technology. The next section outlines the steps in the recognition process, followed by a number of the basic issues that need to be dealt with to solve the recognition problem in a real-world setting. Next, an overview of current commercial solutions and research projects shows the different ways that neural networks are applied to the problem. This is followed by a breakdown of the current commercial market and the future outlook for neural network-based handwriting recognition technology.
How Neural Networks Learn from Experience.
ERIC Educational Resources Information Center
Hinton, Geoffrey E.
1992-01-01
Discusses computational studies of learning in artificial neural networks and findings that may provide insights into the learning abilities of the human brain. Describes efforts to test theories about brain information processing, using artificial neural networks. Vignettes include information concerning how a neural network represents…
Model Of Neural Network With Creative Dynamics
NASA Technical Reports Server (NTRS)
Zak, Michail; Barhen, Jacob
1993-01-01
Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitably designed neural network; believed to be related to spontaneity and creativity of biological neural networks.
Not Available
1991-01-01
The present conference discusses the application of neural networks to associative memories, neurorecognition, hybrid systems, supervised and unsupervised learning, image processing, neurophysiology, sensation and perception, electrical neurocomputers, optimization, robotics, machine vision, sensorimotor control systems, and neurodynamics. Attention is given to such topics as optimal associative mappings in recurrent networks, self-improving associative neural network models, fuzzy activation functions, adaptive pattern recognition with sparse associative networks, efficient question-answering in a hybrid system, the use of abstractions by neural networks, remote-sensing pattern classification, speech recognition with guided propagation, inverse-step competitive learning, and rotational quadratic function neural networks. Also discussed are electrical load forecasting, evolutionarily stable and unstable strategies, the capacity of recurrent networks, neural net vs control theory, perceptrons for image recognition, storage capacity of bidirectional associative memories, associative random optimization for control, automatic synthesis of digital neural architectures, self-learning robot vision, and the associative dynamics of chaotic neural networks.
Evolutionary artificial neural networks for hydrological systems forecasting
NASA Astrophysics Data System (ADS)
Chen, Yung-hsiang; Chang, Fi-John
2009-03-01
The conventional ways of constructing an artificial neural network (ANN) for a problem generally presume a specific architecture and do not automatically discover network modules appropriate for specific training data. Evolutionary algorithms are used to automatically adapt the network architecture and connection weights according to the problem environment without substantial human intervention. To improve on the drawbacks of the conventional optimal process, this study presents a novel evolutionary artificial neural network (EANN) for time series forecasting. The EANN has a hybrid procedure, including the genetic algorithm and the scaled conjugate gradient algorithm, where the feedforward ANN architecture and its connection weights of neurons are simultaneously identified and optimized. We first explored the performance of the proposed EANN for the Mackey-Glass chaotic time series. The performance of the different networks was evaluated. The excellent performance in forecasting of the chaotic series shows that the proposed algorithm concurrently possesses efficiency, effectiveness, and robustness. We further explored the applicability and reliability of the EANN in a real hydrological time series. Again, the results indicate the EANN can effectively and efficiently construct a viable forecast module for the 10-day reservoir inflow, and its accuracy is superior to that of the AR and ARMAX models.
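The Mackey-Glass benchmark used above can be generated by simple Euler integration of the delay differential equation; a sketch with the commonly used parameter values (tau = 17, beta = 0.2, gamma = 0.1, n = 10), not the paper's own code:

```python
import numpy as np

def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    """Euler integration of dx/dt = beta*x(t-tau)/(1 + x(t-tau)**n) - gamma*x(t)."""
    delay = int(tau / dt)
    x = np.empty(n_steps + delay)
    x[:delay + 1] = x0                     # constant initial history
    for t in range(delay, n_steps + delay - 1):
        xd = x[t - delay]                  # delayed state x(t - tau)
        x[t + 1] = x[t] + dt * (beta * xd / (1.0 + xd ** n) - gamma * x[t])
    return x[delay:]

series = mackey_glass(1000)
print(len(series), round(series.min(), 3), round(series.max(), 3))
```

For tau = 17 the series is chaotic but bounded, which is what makes it a standard forecasting testbed.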
Overview of artificial neural networks.
Zou, Jinming; Han, Yi; So, Sung-Sau
2008-01-01
The artificial neural network (ANN), or simply neural network, is a machine learning method evolved from the idea of simulating the human brain. The data explosion in modern drug discovery research requires sophisticated analysis methods to uncover the hidden causal relationships between single or multiple responses and a large set of properties. The ANN is one of many versatile tools to meet the demand in drug discovery modeling. Compared to a traditional regression approach, the ANN is capable of modeling complex nonlinear relationships. The ANN also has excellent fault tolerance and is fast and highly scalable with parallel processing. This chapter introduces the background of ANN development and outlines the basic concepts crucially important for understanding more sophisticated ANNs. Several commonly used learning methods and network setups are discussed briefly at the end of the chapter. PMID:19065803
Neural Networks For Visual Telephony
NASA Astrophysics Data System (ADS)
Gottlieb, A. M.; Alspector, J.; Huang, P.; Hsing, T. R.
1988-10-01
By considering how an image is processed by the eye and brain, we may find ways to simplify the task of transmitting complex video images over a telecommunication channel. Just as the retina and visual cortex reduce the amount of information sent to other areas of the brain, electronic systems can be designed to compress visual data, encode features, and adapt to new scenes for video transmission. In this talk, we describe a system inspired by models of neural computation that may, in the future, augment standard digital processing techniques for image compression. In the next few years it is expected that a compact low-cost full motion video telephone operating over an ISDN basic access line (144 KBits/sec) will be shown to be feasible. These systems will likely be based on a standard digital signal processing approach. In this talk, we discuss an alternative method that does not use standard digital signal processing but instead uses electronic neural networks to realize the large compression necessary for a low bit-rate video telephone. This neural network approach is not being advocated as a near term solution for visual telephony. However, low bit rate visual telephony is an area where neural network technology may, in the future, find a significant application.
Validation and regulation of medical neural networks.
Rodvold, D M
2001-01-01
Using artificial neural networks (ANNs) in medical applications can be challenging because of the often-experimental nature of ANN construction and the "black box" label that is frequently attached to them. In the US, medical neural networks are regulated by the Food and Drug Administration. This article briefly discusses the documented FDA policy on neural networks and the various levels of formal acceptance that neural network development groups might pursue. To assist medical neural network developers in creating robust and verifiable software, this paper provides a development process model targeted specifically to ANNs for critical applications. PMID:11790274
Terminal attractors in neural networks
NASA Technical Reports Server (NTRS)
Zak, Michail
1989-01-01
A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.
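The Lipschitz-violating mechanism can be seen in a one-line example: the flow dx/dt = -x^(1/3) reaches x = 0 at the finite time t* = (3/2) x0^(2/3), whereas ordinary exponential decay dx/dt = -x only approaches 0 asymptotically. A numerical sketch (this specific scalar flow is an illustration, not the paper's network equations):

```python
dt = 1e-3
x_term = 1.0   # terminal-attractor dynamics
x_exp = 1.0    # ordinary linear decay, for contrast
t = 0.0
# Analytic hitting time for dx/dt = -x**(1/3): t* = 1.5 * x0**(2/3) = 1.5

while t < 2.0:
    # Terminal attractor: the right-hand side violates the Lipschitz
    # condition at x = 0, so the origin is reached in finite time.
    x_term = max(x_term - dt * x_term ** (1.0 / 3.0), 0.0)
    x_exp = x_exp - dt * x_exp       # exponential decay never hits 0
    t += dt

print(x_term, x_exp)  # terminal attractor is exactly 0; exponential is not
```

Integrating past t* = 1.5, the terminal-attractor trajectory sits exactly at the fixed point while the exponential trajectory is still at about e^-2.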
The hysteretic Hopfield neural network.
Bharitkar, S; Mendel, J M
2000-01-01
A new neuron activation function based on a property found in physical systems--hysteresis--is proposed. We incorporate this neuron activation in a fully connected dynamical system to form the hysteretic Hopfield neural network (HHNN). We then present an analog implementation of this architecture and its associated dynamical equation and energy function.We proceed to prove Lyapunov stability for this new model, and then solve a combinatorial optimization problem (i.e., the N-queen problem) using this network. We demonstrate the advantages of hysteresis by showing increased frequency of convergence to a solution, when the parameters associated with the activation function are varied. PMID:18249816
The LILARTI neural network system
Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.
1992-10-01
The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system, in sufficient detail to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.
Response of the parameters of a neural network to pseudoperiodic time series
NASA Astrophysics Data System (ADS)
Zhao, Yi; Weng, Tongfeng; Small, Michael
2014-02-01
We propose a representation plane constructed from parameters of a multilayer neural network, with the aim of characterizing the dynamical character of a learned time series. We find that fluctuation of this plane reveals distinct features of the time series. Specifically, a periodic representation plane corresponds to a periodic time series, even when contaminated with strong observational noise or dynamical noise. We present a theoretical explanation for how the neural network training algorithm adjusts parameters of this representation plane and thereby encodes the specific characteristics of the underlying system. This ability, which is intrinsic to the architecture of the neural network, can be employed to distinguish the chaotic time series from periodic counterparts. It provides a new path toward identifying the dynamics of pseudoperiodic time series. Furthermore, we extract statistics from the representation plane to quantify its character. We then validate this idea with various numerical data generated by the known periodic and chaotic dynamics and experimentally recorded human electrocardiogram data.
Load forecasting using artificial neural networks
Pham, K.D.
1995-12-31
Artificial neural networks, modeled after their biological counterpart, have been successfully applied in many diverse areas including speech and pattern recognition, remote sensing, electrical power engineering, robotics and stock market forecasting. The most commonly used neural networks are those that gain knowledge from experience. Experience is presented to the network in the form of training data. Once trained, the neural network can recognize data that it has not seen before. This paper will present a fundamental introduction to the manner in which neural networks work and how to use them in load forecasting.
Nonlinear PLS modeling using neural networks
Qin, S.J.; McAvoy, T.J.
1994-12-31
This paper discusses the embedding of neural networks into the framework of the PLS (partial least squares) modeling method, resulting in a neural net PLS modeling approach. By using the universal approximation property of neural networks, the PLS modeling method is generalized to a nonlinear framework. The resulting model uses neural networks to capture the nonlinearity and keeps the PLS projection to attain robust generalization property. In this paper, the standard PLS modeling method is briefly reviewed. Then a neural net PLS (NNPLS) modeling approach is proposed which incorporates feedforward networks into the PLS modeling. A multi-input-multi-output nonlinear modeling task is decomposed into linear outer relations and simple nonlinear inner relations which are performed by a number of single-input-single-output networks. Since only a small size network is trained at one time, the over-parametrized problem of the direct neural network approach is circumvented even when the training data are very sparse. A conjugate gradient learning method is employed to train the network. It is shown that, by analyzing the NNPLS algorithm, the global NNPLS model is equivalent to a multilayer feedforward network. Finally, applications of the proposed NNPLS method are presented with comparison to the standard linear PLS method and the direct neural network approach. The proposed neural net PLS method gives better prediction results than the PLS modeling method and the direct neural network approach.
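The outer/inner decomposition the abstract describes can be sketched with a single linear PLS component: an outer projection of X onto the direction most covariant with y, then an inner regression of y on the resulting score. In NNPLS the inner relation would be a small network; here it is linear, and the toy data are constructed so one component is exact. A hedged sketch, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def pls1_one_component(X, y):
    """Single-component PLS1 (NIPALS-style): project X onto the direction
    most covariant with y, then regress y on the resulting score."""
    w = X.T @ y
    w /= np.linalg.norm(w)          # outer weight vector
    t = X @ w                       # latent scores
    b = (t @ y) / (t @ t)           # inner relation (linear here; NNPLS
                                    # would fit a small network instead)
    return w, b

# Rank-one X whose single latent factor generates y exactly
scores = rng.normal(size=50)
loading = np.array([0.6, 0.8])      # unit-norm loading vector
X = np.outer(scores, loading)
y = scores.copy()

w, b = pls1_one_component(X, y)
y_hat = (X @ w) * b
print(float(np.max(np.abs(y_hat - y))))  # essentially zero on this toy data
```

Because each inner model sees only a one-dimensional score, the networks trained at each step stay small, which is the over-parametrization argument made above.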
Neural network modeling of emotion
NASA Astrophysics Data System (ADS)
Levine, Daniel S.
2007-03-01
This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (and not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.
Chaotic, informational and synchronous behaviour of multiplex networks.
Baptista, M S; Szmoski, R M; Pereira, R F; Pinto, S E de Souza
2016-01-01
The understanding of the relationship between topology and behaviour in interconnected networks would allow one to characterise and predict behaviour in many real complex networks, since both are usually not simultaneously known. Most previous studies have focused on the relationship between topology and synchronisation. In this work, we provide analytical formulas that show how topology drives complex behaviour: chaos, information, and weak or strong synchronisation, in multiplex networks with constant Jacobian. We also study this relationship numerically in multiplex networks of Hindmarsh-Rose neurons. Whereas behaviour in the analytically tractable network is a direct but not trivial consequence of the spectra of eigenvalues of the Laplacian matrix, where behaviour may strongly depend on the break of symmetry in the topology of interconnections, in Hindmarsh-Rose neural networks the nonlinear nature of the chemical synapses breaks the elegant mathematical connection between the spectra of eigenvalues of the Laplacian matrix and the behaviour of the network, creating networks whose behaviour strongly depends on the nature (chemical or electrical) of the inter synapses. PMID:26939580
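The Laplacian spectrum on which the analytical results above hinge is straightforward to compute for a small network; a sketch for a 6-node ring (an illustrative topology, not the paper's multiplex networks):

```python
import numpy as np

# Adjacency matrix of a 6-node ring network
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

L = np.diag(A.sum(axis=1)) - A        # graph Laplacian L = D - A
eig = np.sort(np.linalg.eigvalsh(L))

# The smallest eigenvalue is always 0 (constant eigenvector); the second
# smallest gauges how easily the coupled units can synchronise.
print(np.round(eig, 6))
```

For the ring the spectrum is 2 - 2 cos(2*pi*k/6), i.e. {0, 1, 1, 3, 3, 4}; rewiring the interconnections changes this spectrum and, per the abstract, the network's chaotic and synchronous behaviour with it.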
Neural networks for aircraft system identification
NASA Technical Reports Server (NTRS)
Linse, Dennis J.
1991-01-01
Artificial neural networks offer some interesting possibilities for use in control. Our current research focuses on applying neural networks to an aircraft model. The model can then be used in a nonlinear control scheme. The effectiveness of network training is demonstrated.
Neural-Network Computer Transforms Coordinates
NASA Technical Reports Server (NTRS)
Josin, Gary M.
1990-01-01
Numerical simulation demonstrated ability of conceptual neural-network computer to generalize what it has "learned" from few examples. Ability to generalize achieved with even simple neural network (relatively few neurons) and after exposure of network to only few "training" examples. Ability to obtain fairly accurate mappings after only few training examples used to provide solutions to otherwise intractable mapping problems.
Neural networks and MIMD-multiprocessors
NASA Technical Reports Server (NTRS)
Vanhala, Jukka; Kaski, Kimmo
1990-01-01
Two artificial neural network models are compared. They are the Hopfield Neural Network Model and the Sparse Distributed Memory model. Distributed algorithms for both of them are designed and implemented. The run time characteristics of the algorithms are analyzed theoretically and tested in practice. The storage capacities of the networks are compared. Implementations are done using a distributed multiprocessor system.
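The Hopfield model compared above is compact enough to sketch in full: Hebbian outer-product storage plus threshold updates recover a stored pattern from a corrupted cue. A minimal single-pattern illustration (the 8-bit pattern is made up):

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian outer-product weights for a Hopfield network (zero diagonal)."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, state, steps=10):
    for _ in range(steps):                    # synchronous threshold updates
        state = np.where(W @ state >= 0, 1, -1)
    return state

pattern = np.array([1, 1, -1, -1, 1, -1, 1, -1])
W = hopfield_train(pattern[None, :])

noisy = pattern.copy()
noisy[0] = -noisy[0]                          # corrupt one bit
recalled = hopfield_recall(W, noisy)
print(recalled)                               # the stored pattern is recovered
```

A Sparse Distributed Memory implementation would replace the dense weight matrix with address-decoder and content matrices, which changes the storage capacity and run-time trade-offs the abstract compares.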
Neural Networks in Nonlinear Aircraft Control
NASA Technical Reports Server (NTRS)
Linse, Dennis J.
1990-01-01
Recent research indicates that artificial neural networks offer interesting learning or adaptive capabilities. The current research focuses on the potential for application of neural networks in a nonlinear aircraft control law. The work to date has been to determine which networks are suitable for such an application and how they will fit into a nonlinear control law.
Satellite image analysis using neural networks
NASA Technical Reports Server (NTRS)
Sheldon, Roger A.
1990-01-01
The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.
Constructive neural network learning algorithms
Parekh, R.; Yang, Jihoon; Honavar, V.
1996-12-31
Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad hoc, a priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., pocket algorithm, thermal perceptron, and barycentric correction procedure). Several constructive algorithms including tower, pyramid, tiling, upstart, and perceptron cascade have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
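The pocket algorithm named above is a one-change variant of the perceptron rule: run ordinary perceptron updates, but keep ("pocket") the best-performing weights seen so far, which stabilizes training on noisy or non-separable data. A minimal sketch on made-up separable data:

```python
import numpy as np

rng = np.random.default_rng(2)

def pocket_perceptron(X, y, epochs=50):
    """Perceptron updates, but 'pocket' the best weights seen so far."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # absorb the bias term
    w = np.zeros(Xb.shape[1])
    best_w, best_acc = w.copy(), 0.0
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (xi @ w) <= 0:              # misclassified (or on boundary)
                w = w + yi * xi                 # standard perceptron update
        acc = np.mean(np.where(Xb @ w > 0, 1, -1) == y)
        if acc > best_acc:                      # keep the better weights
            best_w, best_acc = w.copy(), acc
    return best_w, best_acc

# Linearly separable toy data labelled by the sign of x0 + x1
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

best_w, acc = pocket_perceptron(X, y)
print(acc)
```

A constructive algorithm such as tower or upstart would call a trainer like this for each newly added threshold unit, freezing the pocketed weights before growing the network further.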
Adaptive optimization and control using neural networks
Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.
1993-10-22
Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.
Application of neural network to humanoid robots-development of co-associative memory model.
Itoh, Kazuko; Miwa, Hiroyasu; Takanobu, Hideaki; Takanishi, Atsuo
2005-01-01
We have been studying a system of many harmonic oscillators (neurons) interacting via a chaotic force since 2002. Each harmonic oscillator is driven by a chaotic force whose bifurcation parameter is modulated by the position of the harmonic oscillator. Moreover, a system of mutually coupled chaotic neural networks was investigated. Different patterns were stored in each network and the associative memory problem was discussed in these networks. Each network can retrieve the pattern stored in the other network. On the other hand, we have been developing new mechanisms and functions for a humanoid robot with the ability to express emotions and communicate with humans in a human-like manner. We introduced a mental model which consisted of the mental space, the mood, the equations of emotion, the robot personality, the need model, the consciousness model and the behavior model. This type of mental model was implemented in Emotion Expression Humanoid Robot WE-4RII (Waseda Eye No.4 Refined II). In this paper, an associative memory model using mutually coupled chaotic neural networks is proposed for retrieving the optimum memory (recognition) in response to a stimulus. We implemented this model in Emotion Expression Humanoid Robot WE-4RII (Waseda Eye No.4 Refined II). PMID:16109473
Complexity matching in neural networks
NASA Astrophysics Data System (ADS)
Usefie Mafahim, Javad; Lambert, David; Zare, Marzieh; Grigolini, Paolo
2015-01-01
In the wide literature on the brain and neural network dynamics the notion of criticality is being adopted by an increasing number of researchers, with no general agreement on its theoretical definition, but with consensus that criticality makes the brain very sensitive to external stimuli. We adopt the complexity matching principle that the maximal efficiency of communication between two complex networks is realized when both of them are at criticality. We use this principle to establish the value of the neuronal interaction strength at which criticality occurs, yielding perfect agreement with the adoption of temporal complexity as criticality indicator. The emergence of a scale-free distribution of avalanche size is proved to occur in a supercritical regime. We use an integrate-and-fire model where the randomness of each neuron is only due to the random choice of a new initial condition after firing. The new model shares with that proposed by Izhikevich the property of generating excessive periodicity, and with it the annihilation of temporal complexity at supercritical values of the interaction strength. We find that the concentration of inhibitory links can be used as a control parameter and that for a sufficiently large concentration of inhibitory links criticality is recovered again. Finally, we show that the response of a neural network at criticality to a harmonic stimulus is very weak, in accordance with the complexity matching principle.
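A minimal sketch of the kind of model described, an integrate-and-fire network whose only randomness is the fresh initial condition drawn after each firing, might look as follows. The network size, interaction strength, drive, reset range, and safety cap are illustrative guesses rather than the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, thresh = 100, 0.005, 1.0         # K plays the role of interaction strength
v = rng.uniform(0.0, thresh, N)        # membrane potentials
avalanche_sizes = []
for step in range(2000):
    v += 0.01                          # deterministic common drive
    fired = np.flatnonzero(v >= thresh)
    size = 0
    while fired.size and size < 10 * N:              # cascade, with a safety cap
        size += fired.size
        v[fired] = rng.uniform(0.0, 0.5, fired.size) # random new initial condition
        v += K * fired.size            # each spike kicks the whole network
        fired = np.flatnonzero(v >= thresh)
    if size:
        avalanche_sizes.append(size)
```

With these illustrative values the branching ratio is subcritical and cascades stay small; raising K moves the sketch toward the supercritical regime where the abstract locates the scale-free avalanche distribution.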
Advances in neural networks research: an introduction.
Kozma, Robert; Bressler, Steven; Perlovsky, Leonid; Venayagamoorthy, Ganesh Kumar
2009-01-01
The present Special Issue "Advances in Neural Networks Research: IJCNN2009" provides a state-of-the-art overview of the field of neural networks. It includes 39 papers from selected areas of the 2009 International Joint Conference on Neural Networks (IJCNN2009). IJCNN2009 took place on June 14-19, 2009 in Atlanta, Georgia, USA, and it represents an exemplary collaboration between the International Neural Networks Society and the IEEE Computational Intelligence Society. Topics in this issue include neuroscience and cognitive science, computational intelligence and machine learning, hybrid techniques, nonlinear dynamics and chaos, various soft computing technologies, intelligent signal processing and pattern recognition, bioinformatics and biomedicine, and engineering applications. PMID:19632811
Neural network based system for equipment surveillance
Vilim, R.B.; Gross, K.C.; Wegerich, S.W.
1998-04-28
A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into a memory and determining neural network weighting values until the network outputs are close to the target outputs. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs. Signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the process. 33 figs.
Neural network based system for equipment surveillance
Vilim, Richard B.; Gross, Kenneth C.; Wegerich, Stephan W.
1998-01-01
A method and system for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into a memory and determining neural network weighting values until the network outputs are close to the target outputs. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs. Signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the process.
Neural network modeling of distillation columns
Baratti, R.; Vacca, G.; Servida, A.
1995-06-01
Neural network modeling (NNM) was implemented for monitoring and control applications on two actual distillation columns: the butane splitter tower and the gasoline stabilizer. The two distillation columns are in operation at the SARAS refinery. Results show that with proper implementation techniques NNM can significantly improve column operation. The common belief that neural networks can be used as black-box process models is not completely true. Effective implementation always requires a minimum degree of process knowledge to identify the relevant inputs to the net. After background and generalities on neural network modeling, the paper describes efforts on the development of neural networks for the two distillation units.
Electronic neural networks for global optimization
NASA Technical Reports Server (NTRS)
Thakoor, A. P.; Moopenn, A. W.; Eberhardt, S.
1990-01-01
An electronic neural network with feedback architecture, implemented in custom analog VLSI, is described. Its application to problems of global optimization for dynamic assignment is discussed. The convergence properties of the neural network hardware are compared with computer simulation results. The neural network's ability to provide optimal or near optimal solutions within only a few neuron time constants, a speed enhancement of several orders of magnitude over conventional search methods, is demonstrated. The effect of noise on the circuit dynamics and the convergence behavior of the neural network hardware is also examined.
Aerodynamic Design Using Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan; Madavan, Nateri K.
2003-01-01
The design of aerodynamic components of aircraft, such as wings or engines, involves a process of obtaining the optimal component shape that can deliver the desired level of component performance, subject to various constraints, e.g., total weight or cost, that the component must satisfy. Aerodynamic design can thus be formulated as an optimization problem that involves the minimization of an objective function subject to constraints. A new aerodynamic design optimization procedure based on neural networks and response surface methodology (RSM) incorporates the advantages of both traditional RSM and neural networks. The procedure uses a strategy, denoted parameter-based partitioning of the design space, to construct a sequence of response surfaces based on both neural networks and polynomial fits to traverse the design space in search of the optimal solution. Some desirable characteristics of the new design optimization procedure include the ability to handle a variety of design objectives, easily impose constraints, and incorporate design guidelines and rules of thumb. It provides an infrastructure for variable fidelity analysis and reduces the cost of computation by using less-expensive, lower fidelity simulations in the early stages of the design evolution. The initial or starting design can be far from optimal. The procedure is easy and economical to use in large-dimensional design space and can be used to perform design tradeoff studies rapidly. Designs involving multiple disciplines can also be optimized. Some practical applications of the design procedure that have demonstrated some of its capabilities include the inverse design of an optimal turbine airfoil starting from a generic shape and the redesign of transonic turbines to improve their unsteady aerodynamic characteristics.
Neural networks for nuclear spectroscopy
Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T.
1995-12-31
In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real-time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.
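The OLAM idea, treating an unknown spectrum as a linear superposition of known library spectra and recovering the mixing coefficients through a pseudo-inverse, can be sketched as follows. The tiny four-bin "spectra" are invented purely for illustration and have nothing to do with the actual isotope libraries used in the paper.

```python
import numpy as np

# Hypothetical library of reference spectra (rows: isotopes, columns: energy bins).
library = np.array([[1.0, 0.0, 2.0, 0.0],    # "isotope A" reference spectrum
                    [0.0, 3.0, 0.0, 1.0]])   # "isotope B" reference spectrum

# OLAM weight matrix: the pseudo-inverse maps a whole spectrum to composition.
W = np.linalg.pinv(library.T)                # shape (2, 4)

# Unknown sample: 2 parts A + 0.5 parts B, as a single measured spectrum.
unknown = 2.0 * library[0] + 0.5 * library[1]
composition = W @ unknown                    # least-squares mixing estimate
```

Because identification is a single matrix-vector product over the full spectrum, all the heavy computation sits in forming W once, matching the abstract's point that the intense work happens during training.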
Chimera-like States in Modular Neural Networks
Hizanidis, Johanne; Kouvaris, Nikos E.; Zamora-López, Gorka; Díaz-Guilera, Albert; Antonopoulos, Chris G.
2016-01-01
Chimera states, namely the coexistence of coherent and incoherent behavior, were previously analyzed in complex networks. However, they have not been extensively studied in modular networks. Here, we consider a neural network inspired by the connectome of the C. elegans soil worm, organized into six interconnected communities, where neurons obey chaotic bursting dynamics. Neurons are assumed to be connected with electrical synapses within their communities and with chemical synapses across them. As our numerical simulations reveal, the coaction of these two types of coupling can shape the dynamics in such a way that chimera-like states can emerge. They consist of a fraction of synchronized neurons which belong to the larger communities, and a fraction of desynchronized neurons which are part of smaller communities. In addition to the Kuramoto order parameter ρ, we also employ other measures of coherence, such as the chimera-like χ and metastability λ indices, which quantify the degree of synchronization among communities and along time, respectively. We perform the same analysis for networks that share common features with the C. elegans neural network. Similar results suggest that under certain assumptions, chimera-like states are prominent phenomena in modular networks, and might provide insight for the behavior of more complex modular networks. PMID:26796971
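The Kuramoto order parameter ρ used above is straightforward to compute: it is 1 for a fully coherent community and near 0 for an incoherent one. The phase samples below are synthetic stand-ins, not output of the bursting-neuron model.

```python
import numpy as np

def order_parameter(phases):
    """Kuramoto order parameter: magnitude of the mean phase vector."""
    return float(np.abs(np.mean(np.exp(1j * phases))))

rng = np.random.default_rng(1)
coherent = np.full(200, 0.7)                      # all oscillators share one phase
incoherent = rng.uniform(0.0, 2.0 * np.pi, 200)   # phases spread over the circle
rho_c = order_parameter(coherent)                 # ~ 1: synchronized community
rho_i = order_parameter(incoherent)               # ~ 0: desynchronized community
```

In a chimera-like state the two values would be measured simultaneously on different communities of the same network.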
Nakano, Hidehiro; Utani, Akihide; Miyauchi, Arata; Yamamoto, Hisao
2011-04-19
This paper studies a chaos-based data-gathering scheme in multiple-sink wireless sensor networks. In the proposed scheme, each wireless sensor node has a simple chaotic oscillator. The oscillators generate spike signals with chaotic interspike intervals, and are impulsively coupled by the signals via wireless communication. Each wireless sensor node transmits and receives sensor information only in the timing of the couplings. The proposed scheme can exhibit various chaos synchronous phenomena and their breakdown phenomena, and can effectively gather sensor information with a significantly smaller number of transmissions and receptions than the conventional scheme. Also, the proposed scheme can flexibly adapt to various wireless sensor networks not only with a single sink node but also with multiple sink nodes. This paper introduces our previous works. Through simulation experiments, we show the effectiveness of the proposed scheme and discuss its development potential.
Neural Network Classifies Teleoperation Data
NASA Technical Reports Server (NTRS)
Fiorini, Paolo; Giancaspro, Antonio; Losito, Sergio; Pasquariello, Guido
1994-01-01
Prototype artificial neural network, implemented in software, identifies phases of telemanipulator tasks in real time by analyzing feedback signals from force sensors on manipulator hand. Prototype is early, subsystem-level product of continuing effort to develop automated system that assists in training and supervising human control operator: provides symbolic feedback (e.g., warnings of impending collisions or evaluations of performance) to operator in real time during successive executions of same task. Also simplifies transition between teleoperation and autonomous modes of telerobotic system.
Transient Spatiotemporal Chaos in a Synaptically Coupled Neural Network
NASA Astrophysics Data System (ADS)
Lafranceschina, Jacopo; Wackerbauer, Renate
2014-03-01
Spatiotemporal chaos is transient in a diffusively coupled Morris-Lecar neural network. This study shows that the addition of synaptic coupling in the ring network reduces the average lifetime of spatiotemporal chaos for small to intermediate coupling strength and almost all numbers of synapses. For large coupling strength, close to the threshold of excitation, the average lifetime increases beyond the value for only diffusive coupling, and the collapse to the rest state dominates over the collapse to a traveling pulse state. The regime of spatiotemporal chaos is characterized by a slightly increasing Lyapunov exponent and degree of phase coherence as the number of synaptic links increases. The presence of transient spatiotemporal chaos in a network of coupled neurons and the associated chaotic saddle provides a possibility for switching between metastable states observed in information processing and brain function. This research is supported by the University of Alaska Fairbanks.
The Laplacian spectrum of neural networks
de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.
2014-01-01
The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
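The normalized Laplacian spectrum at the heart of this analysis is easy to reproduce on a toy graph. The sketch below uses a four-node ring, whose spectrum is known exactly; the eigenvalues always lie in [0, 2], and the multiplicity of 0 equals the number of connected components.

```python
import numpy as np

def normalized_laplacian_spectrum(A):
    """Eigenvalues of L = I - D^{-1/2} A D^{-1/2} for adjacency matrix A."""
    deg = A.sum(axis=1)
    inv_sqrt = np.where(deg > 0, deg, 1.0) ** -0.5 * (deg > 0)  # guard isolated nodes
    L = np.eye(len(A)) - inv_sqrt[:, None] * A * inv_sqrt[None, :]
    return np.sort(np.linalg.eigvalsh(L))

# 4-node ring: eigenvalues are 1 - cos(2 pi k / 4), i.e. {0, 1, 1, 2}.
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], float)
spectrum = normalized_laplacian_spectrum(ring)
```

The same function applied to an empirical connectome's adjacency matrix yields the systems-level spectral signature the paper compares across species.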
Ozone Modeling Using Neural Networks.
NASA Astrophysics Data System (ADS)
Narasimhan, Ramesh; Keller, Joleen; Subramaniam, Ganesh; Raasch, Eric; Croley, Brandon; Duncan, Kathleen; Potter, William T.
2000-03-01
Ozone models for the city of Tulsa were developed using neural network modeling techniques. The neural models were developed using meteorological data from the Oklahoma Mesonet and ozone, nitric oxide, and nitrogen dioxide (NO2) data from Environmental Protection Agency monitoring sites in the Tulsa area. An initial model trained with only eight surface meteorological input variables and NO2 was able to simulate ozone concentrations with a correlation coefficient of 0.77. The trained model was then used to evaluate the sensitivity to the primary variables that affect ozone concentrations. The most important variables (NO2, temperature, solar radiation, and relative humidity) showed response curves with strong nonlinear codependencies. Incorporation of ozone concentrations from the previous 3 days into the model increased the correlation coefficient to 0.82. As expected, the ozone concentrations correlated best with the most recent (1-day previous) values. The model's correlation coefficient was increased to 0.88 by the incorporation of upper-air data from the National Weather Service's Nested Grid Model. Sensitivity analysis for the upper-air variables indicated unusual positive correlations between ozone and the relative humidity from 500 hPa to the tropopause in addition to the other expected correlations with upper-air temperatures, vertical wind velocity, and 1000-500-hPa layer thickness. The neural model results are encouraging for the further use of these systems to evaluate complex parameter cosensitivities, and for the use of these systems in automated ozone forecast systems.
Three dimensional living neural networks
NASA Astrophysics Data System (ADS)
Linnenberger, Anna; McLeod, Robert R.; Basta, Tamara; Stowell, Michael H. B.
2015-08-01
We investigate holographic optical tweezing combined with step-and-repeat maskless projection micro-stereolithography for fine control of 3D positioning of living cells within a 3D microstructured hydrogel grid. Samples were fabricated using three different cell lines: PC12, NT2/D1 and iPSC. PC12 cells are a rat cell line capable of differentiation into neuron-like cells. NT2/D1 cells are a human cell line that exhibits biochemical and developmental properties similar to those of an early embryo; when exposed to retinoic acid the cells differentiate into human neurons useful for studies of human neurological disease. Finally, induced pluripotent stem cells (iPSC) were utilized with the goal of future studies of neural networks fabricated from human iPSC-derived neurons. Cells are positioned in the monomer solution with holographic optical tweezers at 1064 nm and then are encapsulated by photopolymerization of polyethylene glycol (PEG) hydrogels formed by thiol-ene photo-click chemistry via projection of a 512x512 spatial light modulator (SLM) illuminated at 405 nm. Fabricated samples are incubated in differentiation media such that cells cease to divide and begin to form axons or axon-like structures. By controlling the position of the cells within the encapsulating hydrogel structure the formation of the neural circuits is controlled. The samples fabricated with this system are a useful model for future studies of neural circuit formation, neurological disease, cellular communication, plasticity, and repair mechanisms.
Model for a neural network structure and signal transmission
NASA Astrophysics Data System (ADS)
Kotsavasiloglou, C.; Kalampokis, A.; Argyrakis, P.; Baloyannis, S.
1997-10-01
We present a model of a neural network that is based on the diffusion-limited-aggregation (DLA) structure from fractal physics. A single neuron is one DLA cluster, while a large number of clusters, in an interconnected fashion, make up the neural network. Using simulation techniques, a signal is randomly generated and traced through its transmission inside the neuron and from neuron to neuron through the synapses. The activity of the entire neural network is monitored as a function of time. The characteristics included in the model contain, among others, the threshold for firing, the excitatory or inhibitory character of the synapse, the synaptic delay, and the refractory period. The system activity results in ``noisy'' time series that exhibit an oscillatory character. Standard power spectra are evaluated and fractal analyses performed, showing that the system is not chaotic, but the varying parameters can be associated with specific values of fractal dimensions. It is found that the network activity is not linear with the system parameters, e.g., with the numbers of active synapses. The details of this behavior may have interesting repercussions from the neurological point of view.
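A toy version of the DLA growth underlying the model can be sketched in a few lines: random walkers are released on a lattice and stick when they first touch the growing cluster. Lattice size, particle count, and the walk budget are arbitrary choices, and this shows only the cluster structure, not the paper's signal-transmission simulation.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 31
grid = np.zeros((L, L), bool)
grid[L // 2, L // 2] = True               # seed particle at the center
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def touches_cluster(x, y):
    """True if site (x, y) has a cluster site among its four neighbors."""
    return any(grid[(x + dx) % L, (y + dy) % L] for dx, dy in steps)

stuck = 0
while stuck < 40:
    x, y = rng.integers(0, L, 2)          # release a walker at a random site
    if grid[x, y]:
        continue                          # landed on the cluster; try again
    for _ in range(5000):                 # diffuse until sticking (or give up)
        if touches_cluster(x, y):
            grid[x, y] = True
            stuck += 1
            break
        dx, dy = steps[rng.integers(4)]
        x, y = (x + dx) % L, (y + dy) % L
```

The branching, dendritic cluster this produces is the fractal geometry the model uses for a single neuron.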
Artificial neural networks in neurosurgery.
Azimi, Parisa; Mohammadi, Hasan Reza; Benzel, Edward C; Shahzadi, Sohrab; Azhari, Shirzad; Montazeri, Ali
2015-03-01
Artificial neural networks (ANNs) effectively analyze non-linear data sets. The aim was to review relevant published articles focused on the application of ANNs as a tool for assisting clinical decision-making in neurosurgery. A literature review of all full publications in English biomedical journals (1993-2013) was undertaken. The strategy included a combination of key words 'artificial neural networks', 'prognostic', 'brain', 'tumor tracking', 'head', 'tumor', 'spine', 'classification' and 'back pain' in the title and abstract of the manuscripts using the PubMed search engine. The major findings are summarized, with a focus on the application of ANNs for diagnostic and prognostic purposes. Finally, the future of ANNs in neurosurgery is explored. A total of 1093 citations were identified and screened. In all, 57 citations were found to be relevant. Of these, 50 articles were eligible for inclusion in this review. The synthesis of the data showed several applications of ANN in neurosurgery, including: (1) diagnosis and assessment of disease progression in low back pain, brain tumours and primary epilepsy; (2) enhancing clinically relevant information extraction from radiographic images, intracranial pressure processing, low back pain and real-time tumour tracking; (3) outcome prediction in epilepsy, brain metastases, lumbar spinal stenosis, lumbar disc herniation, childhood hydrocephalus, trauma mortality, and the occurrence of symptomatic cerebral vasospasm in patients with aneurysmal subarachnoid haemorrhage; (4) use in biomechanical assessment of spinal disease. ANNs can be effectively employed for diagnosis, prognosis and outcome prediction in neurosurgery. PMID:24987050
Computational acceleration using neural networks
NASA Astrophysics Data System (ADS)
Cadaret, Paul
2008-04-01
The author's recent participation in the Small Business Innovative Research (SBIR) program has resulted in the development of a patent-pending technology that enables the construction of very large and fast artificial neural networks. Through the use of UNICON's CogniMax pattern recognition technology we believe that systems can be constructed that exploit the power of "exhaustive learning" for the benefit of certain types of complex and slow computational problems. This paper presents a theoretical study that describes one potentially beneficial application of exhaustive learning. It describes how a very large and fast Radial Basis Function (RBF) artificial Neural Network (NN) can be used to implement a useful computational system. Viewed another way, it presents an unusual method of transforming a complex, always-precise, and slow computational problem into a fuzzy pattern recognition problem where other methods are available to effectively improve computational performance. The method described recognizes that the need for computational precision in a problem domain sometimes varies throughout the domain's Feature Space (FS) and high precision may only be needed in limited areas. These observations can then be exploited to the benefit of overall computational performance. Addressing computational reliability, we describe how existing always-precise computational methods can be used to reliably train the NN to perform the computational interpolation function. The author recognizes that the method described is not applicable to every situation, but over the last 8 months we have been surprised at how often this method can be applied to enable interesting and effective solutions.
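The general idea, replacing an expensive, always-precise computation by a trained RBF interpolator, can be sketched as follows. The "expensive" function, center placement, and basis width are illustrative choices and are unrelated to UNICON's proprietary technology.

```python
import numpy as np

def expensive(x):
    """Stand-in for a slow, always-precise computation."""
    return np.sin(3.0 * x) * np.exp(-x ** 2)

centers = np.linspace(-2.0, 2.0, 25)     # RBF centers tiling the domain
width = 0.25                             # Gaussian basis width

def phi(x):
    """Gaussian radial basis features for inputs x."""
    return np.exp(-(((x[:, None] - centers[None, :]) / width) ** 2))

# "Training": evaluate the expensive function once per center and solve
# for the output weights by least squares.
weights, *_ = np.linalg.lstsq(phi(centers), expensive(centers), rcond=None)

# Fast approximate evaluation anywhere inside the trained domain.
x_test = np.linspace(-2.0, 2.0, 101)
max_err = float(np.max(np.abs(phi(x_test) @ weights - expensive(x_test))))
```

After training, each evaluation is a single matrix-vector product, which is the source of the speedup the paper argues for when modest precision suffices over most of the feature space.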
A new formulation for feedforward neural networks.
Razavi, Saman; Tolson, Bryan A
2011-10-01
The feedforward neural network is one of the most commonly used function approximation techniques and has been applied to a wide variety of problems arising from various disciplines. However, neural networks are black-box models having multiple challenges/difficulties associated with training and generalization. This paper initially looks into the internal behavior of neural networks and develops a detailed interpretation of the neural network functional geometry. Based on this geometrical interpretation, a new set of variables describing neural networks is proposed as a more effective and geometrically interpretable alternative to the traditional set of network weights and biases. Then, this paper develops a new formulation for neural networks with respect to the newly defined variables; this reformulated neural network (ReNN) is equivalent to the common feedforward neural network but has a less complex error response surface. To demonstrate the learning ability of ReNN, in this paper, two training methods are employed: a derivative-based optimization algorithm (a variation of backpropagation) and a derivative-free one. Moreover, a new measure of regularization on the basis of the developed geometrical interpretation is proposed to evaluate and improve the generalization ability of neural networks. The value of the proposed geometrical interpretation, the ReNN approach, and the new regularization measure are demonstrated across multiple test problems. Results show that ReNN can be trained more effectively and efficiently compared to the common neural networks and the proposed regularization measure is an effective indicator of how a network would perform in terms of generalization. PMID:21859600
Drift chamber tracking with neural networks
Lindsey, C.S.; Denby, B.; Haggerty, H.
1992-10-01
We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.
Extrapolation limitations of multilayer feedforward neural networks
NASA Technical Reports Server (NTRS)
Haley, Pamela J.; Soloway, Donald
1992-01-01
The limitations of backpropagation used as a function extrapolator were investigated. Four common functions were used to investigate the network's extrapolation capability. The purpose of the experiment was to determine whether neural networks are capable of extrapolation and, if so, to determine the range for which networks can extrapolate. The authors show that neural networks cannot extrapolate and offer an explanation to support this result.
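The paper's conclusion is easy to reproduce in miniature: a small tanh network fit to sin(x) on [-π, π] tracks the function inside the training range but fails a full period outside it, where the saturated hidden units flatten out. The network size, learning rate, and iteration count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(-np.pi, np.pi, 64)[:, None]
y_train = np.sin(x_train)

# One hidden tanh layer trained by plain batch gradient descent.
H, lr = 16, 0.05
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)
for _ in range(3000):
    h = np.tanh(x_train @ W1 + b1)
    err = h @ W2 + b2 - y_train
    dh = (err @ W2.T) * (1.0 - h ** 2)              # backpropagate through tanh
    W2 -= lr * h.T @ err / len(x_train);  b2 -= lr * err.mean(0)
    W1 -= lr * x_train.T @ dh / len(x_train); b1 -= lr * dh.mean(0)

def net(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

x_in = np.linspace(-np.pi, np.pi, 100)[:, None]
x_out = np.linspace(2 * np.pi, 4 * np.pi, 100)[:, None]  # outside the data
mse_in = float(np.mean((net(x_in) - np.sin(x_in)) ** 2))
mse_out = float(np.mean((net(x_out) - np.sin(x_out)) ** 2))
```

The gap between mse_in and mse_out illustrates the authors' point: the trained mapping is an interpolator, and nothing constrains it to continue the sine wave beyond the training interval.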
Coherence resonance in bursting neural networks
NASA Astrophysics Data System (ADS)
Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J.
2015-10-01
Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal—a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.
Synchronization in nodes of complex networks consisting of complex chaotic systems
Wei, Qiang; Xie, Cheng-jun; Liu, Hong-jun; Li, Yan-hui
2014-07-15
A new synchronization method is investigated for nodes of complex networks consisting of complex chaotic systems. When the complex networks achieve synchronization, different components of the complex state variable synchronize up to different scaling complex functions via a designed complex feedback controller. This paper extends the synchronization scaling function from the real field to the complex field for synchronization in nodes of complex networks with complex chaotic systems. Synchronization in complex networks with constant coupling delay and with time-varying coupling delay is investigated, respectively. Numerical simulations are provided to show the effectiveness of the proposed method.
Lyapunov exponents from CHUA's circuit time series using artificial neural networks
NASA Technical Reports Server (NTRS)
Gonzalez, J. Jesus; Espinosa, Ismael E.; Fuentes, Alberto M.
1995-01-01
In this paper we present the general problem of identifying if a nonlinear dynamic system has a chaotic behavior. If the answer is positive the system will be sensitive to small perturbations in the initial conditions, which will imply that there is a chaotic attractor in its state space. A particular problem would be that of identifying a chaotic oscillator. We present an example of three well known different chaotic oscillators where we have knowledge of the equations that govern the dynamical systems and from there we can obtain the corresponding time series. In a similar example we assume that we only know the time series and, finally, in another example we have to take measurements in the Chua's circuit to obtain sample points of the time series. With knowledge of the time series the phase plane portraits are plotted and from them, by visual inspection, it is concluded whether or not the system is chaotic. This method has the problem of uncertainty and subjectivity and for that reason a different approach is needed. A quantitative approach is the computation of the Lyapunov exponents. We describe several methods for obtaining them and apply a little-known method of artificial neural networks to the different examples mentioned above. We end the paper discussing the importance of the Lyapunov exponents in the interpretation of the dynamic behavior of biological neurons and biological neural networks.
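As a cross-check for the quantitative approach, the largest Lyapunov exponent of a system with a known answer can be computed directly: for the logistic map at r = 4 the exponent is ln 2, and averaging log|f'(x)| along the orbit recovers it. This is the textbook derivative-based estimate, not the neural-network method of the paper, which works from the time series alone.

```python
import numpy as np

# Logistic map f(x) = r x (1 - x) at r = 4, where the largest Lyapunov
# exponent is analytically ln 2. A positive value signals sensitivity
# to initial conditions, i.e. chaos.
r, x = 4.0, 0.3
n, total = 10000, 0.0
for _ in range(n):
    total += np.log(abs(r * (1.0 - 2.0 * x)))   # log |f'(x)|
    x = r * x * (1.0 - x)
lyapunov = total / n                            # should approach ln 2 ~ 0.693
```

When only measured data are available, as for the Chua's circuit time series, the derivative is unknown and must be estimated from the data, which is where the neural-network approach enters.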
From Classical Neural Networks to Quantum Neural Networks
NASA Astrophysics Data System (ADS)
Tirozzi, B.
2013-09-01
First I give a brief description of the classical Hopfield model, introducing the fundamental concepts of patterns, retrieval, pattern recognition, neural dynamics, and capacity, and describe the fundamental results obtained in this field by Amit, Gutfreund and Sompolinsky,1 using the non-rigorous replica method, and the rigorous version given by Pastur, Shcherbina and Tirozzi2 using the cavity method. Then I give a formulation of the theory of Quantum Neural Networks (QNN) in terms of the XY model with Hebbian interaction. The problem of retrieval and storage is discussed. The retrieval states are the states of minimum energy. I apply the estimates found by Lieb,3 which give lower and upper bounds on the free energy and on the expectation of the observables of the quantum model. I also discuss some experiments and the search for the ground state using Monte Carlo dynamics applied to the equivalent classical two-dimensional Ising model constructed by Suzuki et al.6 At the end there is a list of open problems.
Medical image analysis with artificial neural networks.
Jiang, J; Trundle, P; Ren, J
2010-12-01
Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. PMID:20713305
Creativity in design and artificial neural networks
Neocleous, C.C.; Esat, I.I.; Schizas, C.N.
1996-12-31
The creativity phase is identified as an integral part of the design phase. The characteristics of creative persons that are relevant to designing artificial neural networks manifesting aspects of creativity are identified. Based on these identifications, a general framework of artificial neural network characteristics to implement such a goal is proposed.
Self-organization of neural networks
NASA Astrophysics Data System (ADS)
Clark, John W.; Winston, Jeffrey V.; Rafelski, Johann
1984-05-01
The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm (“brainwashing”) is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena.
Advanced telerobotic control using neural networks
NASA Technical Reports Server (NTRS)
Pap, Robert M.; Atkins, Mark; Cox, Chadwick; Glover, Charles; Kissel, Ralph; Saeks, Richard
1993-01-01
Accurate Automation is designing and developing adaptive decentralized joint controllers using neural networks. We are implementing these in hardware for the Marshall Space Flight Center PFMA, and they are also usable for the Remote Manipulator System (RMS) robot arm. Our design is being realized in hardware after completion of the software simulation. It is implemented using a functional-link neural network.
Neural Network Algorithm for Particle Loading
J. L. V. Lewandowski
2003-04-25
An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.
Adaptive Neurons For Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Tawel, Raoul
1990-01-01
Training time decreases dramatically. In improved mathematical model of neural-network processor, temperature of neurons (in addition to connection strengths, also called weights, of synapses) varied during supervised-learning phase of operation according to mathematical formalism and not heuristic rule. Evidence that biological neural networks also process information at neuronal level.
Radiation Behavior of Analog Neural Network Chip
NASA Technical Reports Server (NTRS)
Langenbacher, H.; Zee, F.; Daud, T.; Thakoor, A.
1996-01-01
A neural network experiment was conducted for the Space Technology Research Vehicle (STRV-1) 1-b, launched in June 1994. Identical sets of analog feed-forward neural network chips were used to study and compare the effects of space and ground radiation on the chips. Three failure mechanisms are noted.
Applications of Neural Networks in Finance.
ERIC Educational Resources Information Center
Crockett, Henry; Morrison, Ronald
1994-01-01
Discusses research with neural networks in the area of finance. Highlights include bond pricing, theoretical exposition of primary bond pricing, bond pricing regression model, and an example that created networks with corporate bonds and NeuralWare Neuralworks Professional H software using the back-propagation technique. (LRW)
Neural network based architectures for aerospace applications
NASA Technical Reports Server (NTRS)
Ricart, Richard
1987-01-01
A brief history of the field of neural networks research is given and some simple concepts are described. In addition, some neural network based avionics research and development programs are reviewed. The need for the United States Air Force and NASA to assume a leadership role in supporting this technology is stressed.
A Survey of Neural Network Publications.
ERIC Educational Resources Information Center
Vijayaraman, Bindiganavale S.; Osyk, Barbara
This paper is a survey of publications on artificial neural networks published in business journals for the period ending July 1996. Its purpose is to identify and analyze trends in neural network research during that period. This paper shows which topics have been heavily researched, when these topics were researched, and how that research has…
Introduction to Concepts in Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Niebur, Dagmar
1995-01-01
This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.
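The supervised-learning concept summarized above can be made concrete with the smallest possible example: a single threshold neuron trained by an error-driven weight update. This sketch is illustrative only (it uses the logical AND function, not the power-system examples the abstract mentions):

```python
import numpy as np

# Minimal supervised learning: a single artificial neuron (perceptron)
# trained on the logical AND function. The error-driven update nudges the
# weights and bias toward the teacher-supplied target on each example.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])                # AND targets

w = np.zeros(2)
b = 0.0
eta = 0.1                                 # learning rate
for _ in range(20):                       # a few passes over the data suffice
    for xi, ti in zip(X, y):
        out = 1 if xi @ w + b > 0 else 0  # threshold activation
        w += eta * (ti - out) * xi        # supervised weight update
        b += eta * (ti - out)

preds = [1 if xi @ w + b > 0 else 0 for xi in X]
```

Unsupervised schemes, by contrast, adjust weights from input statistics alone, with no target signal in the update rule.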
Enhancing neural-network performance via assortativity.
de Franciscis, Sebastiano; Johnson, Samuel; Torres, Joaquín J
2011-03-01
The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations--assortativity--on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information. PMID:21517565
Neural network and letter recognition
Lee, Hue Yeon.
1989-01-01
Neural net architectures and learning algorithms that recognize 36 hand-written alphanumeric characters are studied. Thin-line input patterns written on a 32 x 32 binary array are used. The system comprises two major components, a preprocessing unit and a recognition unit. The preprocessing unit in turn consists of three layers of neurons: the U-layer, the V-layer, and the C-layer. The function of the U-layer is to extract local features by template matching. The correlation between the detected local features is considered. Through correlating neurons in a plane with their neighboring neurons, the V-layer thickens the on-cells, or lines that are groups of on-cells, of the previous layer. These two correlations yield some deformation tolerance and some rotational tolerance in the system. The C-layer then compresses data through the Gabor transform. Pattern-dependent choice of the centers and wavelengths of the Gabor filters gives the system its shift and scale tolerance. Three different learning schemes have been investigated in the recognition unit: error back-propagation learning with hidden units, simple perceptron learning, and competitive learning. Their performances were analyzed and compared. Since the network sometimes fails to distinguish between two letters that are inherently similar, additional ambiguity-resolving neural nets are introduced on top of the main neural net. The two-dimensional Fourier transform is used as the preprocessing and the perceptron as the recognition unit of the ambiguity resolver. One hundred different persons' handwriting sets were collected. Some of these are used as training sets and the remainder as test sets.
Bodruzzaman, M.
1995-12-31
This report summarizes work on chaotic behavior control in FBC systems. An update is given on the chaos control method designed to control the chaotic behavior in an FBC system; this method includes a fully recurrent neural network called the Dynamic System Imitator (DSI). The DSI mimics the behavior of a wide variety of dynamic systems in the real world; it has been used for modeling linear, nonlinear and chaotic systems, and is also used for iterative prediction of chaotic system behavior. A general methodology for using the DSI to control a nonlinear system is applied to control the chaotic behavior of the Lorenz system. A plan is also outlined for applying this method to the FBC system for predicting and controlling its chaotic behavior. Chaotic pressure data from an experimental FBC system were obtained (from METC) for normal and abnormal mixing. Results of chaos analysis applied to these data are presented. These techniques are used to identify the system behavior at different conditions, estimate system order, construct the system attractor, and locate the chaotic behavior in the pressure-drop time-series data. Preliminary analysis shows that both normal and abnormal conditions of the FBC have chaotic characteristics. The objective is to develop a neuro-chaos controller to preserve the normal operational performance of the system.
Moon, S W; Kong, S G
2001-01-01
This paper presents a novel block-based neural network (BBNN) model and the optimization of its structure and weights based on a genetic algorithm. The architecture of the BBNN consists of a 2D array of fundamental blocks with four variable input/output nodes and connection weights. Each block can have one of four different internal configurations depending on the structure settings. The BBNN model includes some restrictions, such as the 2D array and integer weights, in order to allow easier implementation with reconfigurable hardware such as field-programmable gate arrays (FPGAs). The structure and weights of the BBNN are encoded with bit strings which correspond to the configuration bits of the FPGA. The configuration bits are optimized globally using a genetic algorithm with 2D encoding and modified genetic operators. Simulations show that the optimized BBNN can solve engineering problems such as pattern classification and mobile robot control. PMID:18244385
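The search described above, evolving a bit string that doubles as a hardware configuration, follows the standard genetic-algorithm loop of selection, crossover, and mutation. A minimal sketch of that loop follows; the fitness function here (counting 1-bits, "OneMax") is a deliberately trivial stand-in for the BBNN's actual network-performance score, and the operators are generic one-point/bit-flip versions rather than the paper's modified 2D operators:

```python
import random

# Toy genetic algorithm over fixed-length bit strings. Fitness ("OneMax")
# is a stand-in for a real network score evaluated from the configuration.
random.seed(1)
L, POP, GENS = 32, 40, 60

def fitness(bits):
    return sum(bits)                      # number of 1-bits

def crossover(a, b):
    cut = random.randrange(1, L)          # one-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=1.0 / L):
    return [bit ^ (random.random() < rate) for bit in bits]  # bit flips

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]               # truncation selection, elites kept
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
```

In the BBNN setting the decoded bit string would configure the block array (and hence an FPGA), so evaluating `fitness` means running the resulting network on the task.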
Neural networks: a biased overview
Domany, E.
1988-06-01
An overview of recent activity in the field of neural networks is presented. The long-range aim of this research is to understand how the brain works. First some of the problems are stated and terminology defined; then an attempt is made to explain why physicists are drawn to the field, and their main potential contribution. In particular, in recent years some interesting models have been introduced by physicists. A small subset of these models is described, with particular emphasis on those that are analytically soluble. Finally a brief review of the history and recent developments of single- and multilayer perceptrons is given, bringing the situation up to date regarding the central immediate problem of the field: search for a learning algorithm that has an associated convergence theorem.
Sunspot prediction using neural networks
NASA Technical Reports Server (NTRS)
Villarreal, James; Baffes, Paul
1990-01-01
The earliest systematic observation of sunspot activity is known to have been made by the Chinese in 1382, during the Ming Dynasty (1368 to 1644), when spots on the sun were noticed by looking at the sun through thick forest-fire smoke. Not until after the 18th century did sunspot levels become more than a source of wonderment and curiosity. Since 1834, reliable sunspot data have been collected by the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Naval Observatory. Recently, considerable effort has been placed upon the study of the effects of sunspots on the ecosystem and the space environment. The efforts of the Artificial Intelligence Section of the Mission Planning and Analysis Division of the Johnson Space Center involving the prediction of sunspot activity using neural network technologies are described.
Introduction to artificial neural networks.
Grossi, Enzo; Buscema, Massimo
2007-12-01
The coupling of computer science and theoretical bases such as nonlinear dynamics and chaos theory allows the creation of 'intelligent' agents, such as artificial neural networks (ANNs), able to adapt themselves dynamically to problems of high complexity. ANNs are able to reproduce the dynamic interaction of multiple factors simultaneously, allowing the study of complexity; they can also draw conclusions on an individual basis, not as average trends. These tools can offer specific advantages with respect to classical statistical techniques. This article is designed to acquaint gastroenterologists with concepts and paradigms related to ANNs. The family of ANNs, when appropriately selected and used, permits the maximization of what can be derived from available data and from complex, dynamic, and multidimensional phenomena, which are often poorly predictable in the traditional 'cause and effect' philosophy. PMID:17998827
Wavelet differential neural network observer.
Chairez, Isaac
2009-09-01
State estimation for uncertain systems affected by external noises is an important problem in control theory. This paper deals with a state observation problem when the dynamic model of a plant contains uncertainties or is completely unknown. A differential neural network (NN) approach is applied in this uninformative situation, but with activation functions described by wavelets. A new learning law, containing an adaptive adjustment rate, is suggested to imply the stability condition for the free parameters of the observer. Nominal weights are adjusted during the preliminary training process using the least mean square (LMS) method. Lyapunov theory is used to obtain the upper bounds for the weight dynamics as well as for the mean squared estimation error. Two numerical examples illustrate this approach: first, a nonlinear electric system governed by Chua's equation, and second, the Lorenz oscillator. Both systems are assumed to be affected by external perturbations and their parameters are unknown. PMID:19674951
Neural networks for damage identification
Paez, T.L.; Klenke, S.E.
1997-11-01
Efforts to optimize the design of mechanical systems for preestablished use environments and to extend the durations of use cycles establish a need for in-service health monitoring. Numerous studies have proposed measures of structural response for the identification of structural damage, but few have suggested systematic techniques to guide the decision as to whether or not damage has occurred based on real data. Such techniques are necessary because in field applications the environments in which systems operate and the measurements that characterize system behavior are random. This paper investigates the use of artificial neural networks (ANNs) to identify damage in mechanical systems. Two probabilistic neural networks (PNNs) are developed and used to judge whether or not damage has occurred in a specific mechanical system, based on experimental measurements. The first PNN is a classical type that casts Bayesian decision analysis into an ANN framework; it uses exemplars measured from the undamaged and damaged system to establish whether system response measurements of unknown origin come from the former class (undamaged) or the latter class (damaged). The second PNN establishes the character of the undamaged system in terms of a kernel density estimator of measures of system response; when presented with system response measures of unknown origin, it makes a probabilistic judgment whether or not the data come from the undamaged population. The physical system used to carry out the experiments is an aerospace system component, and the environment used to excite the system is a stationary random vibration. The results of damage identification experiments are presented along with conclusions rating the effectiveness of the approaches.
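The second PNN described above, a kernel density estimator used for a probabilistic damage judgment, can be sketched in a few lines. This is a generic Parzen-window classifier on synthetic one-dimensional data, not the paper's aerospace vibration measurements; the class means, kernel width, and feature are all illustrative assumptions:

```python
import numpy as np

# Sketch of a probabilistic neural network: each class is represented by a
# Parzen (Gaussian-kernel) density estimate over its exemplars, and a new
# measurement is assigned to the class with the higher estimated density,
# mirroring the Bayesian decision rule (equal priors assumed).
def parzen_density(x, exemplars, sigma=0.5):
    # average of Gaussian kernels centred on the training exemplars
    return np.mean(np.exp(-((x - exemplars) ** 2) / (2 * sigma ** 2)))

rng = np.random.default_rng(0)
undamaged = rng.normal(0.0, 1.0, 200)   # synthetic "undamaged" responses
damaged = rng.normal(4.0, 1.0, 200)     # synthetic "damaged" responses

def classify(x):
    if parzen_density(x, damaged) > parzen_density(x, undamaged):
        return "damaged"
    return "undamaged"
```

The one-class variant in the paper trains only on undamaged exemplars and thresholds the estimated density itself, flagging any low-density response as possible damage.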
Tampa Electric Neural Network Sootblowing
Mark A. Rhode
2003-12-31
Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NO{sub x} formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate
Tampa Electric Neural Network Sootblowing
Mark A. Rhode
2004-09-30
Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate around
Tampa Electric Neural Network Sootblowing
Mark A. Rhode
2004-03-31
Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing co-funding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate around
Tampa Electric Neural Network Sootblowing
Mark A. Rhode
2002-09-30
Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NO{sub x} formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, online, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate
NASA Astrophysics Data System (ADS)
Yang, Dong-Sheng; Liu, Zhen-Wei; Zhao, Yan; Liu, Zhao-Bing
2012-04-01
The networked synchronization problem of a class of master-slave chaotic systems with time-varying communication topologies is investigated in this paper. Based on algebraic graph theory and matrix theory, a simple linear state feedback controller is designed to synchronize the master chaotic system and the slave chaotic systems over a time-varying communication topology. The exponential stability of the closed-loop networked synchronization error system is guaranteed by applying Lyapunov stability theory. The derived novel criteria are in the form of linear matrix inequalities (LMIs), which are easy to check and greatly reduce the computational burden of determining the feedback matrices. This paper provides an alternative networked secure communication scheme which can be extended conveniently. An illustrative example is given to demonstrate the effectiveness of the proposed networked synchronization method.
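The core mechanism in the abstract, a linear state feedback driving a slave chaotic system onto a master's trajectory, can be illustrated with a single master-slave pair. The sketch below uses the Lorenz system and a hand-picked gain as stand-ins: the paper instead derives the gain from LMI conditions and handles many slaves over a time-varying topology, none of which is modeled here.

```python
import numpy as np

# Master-slave synchronization of two Lorenz systems via linear state
# feedback u = K (x_master - x_slave), integrated with forward Euler.
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, K = 1e-3, 50.0               # K chosen large enough by hand, not via LMIs
m = np.array([1.0, 1.0, 1.0])    # master state
s = np.array([-5.0, 7.0, 20.0])  # slave state, different initial condition
for _ in range(30_000):          # 30 time units of integration
    u = K * (m - s)              # linear feedback drives the slave to the master
    m = m + dt * lorenz(m)
    s = s + dt * (lorenz(s) + u)

err = np.linalg.norm(m - s)      # synchronization error, should be tiny
```

With the feedback, the error dynamics are approximately (J - K I) e, so a gain dominating the Jacobian of the chaotic flow makes the error decay exponentially despite both trajectories being chaotic.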
Neural networks and orbit control in accelerators
Bozoki, E.; Friedman, A.
1994-07-01
An overview of the architecture, workings and training of neural networks is given. We stress the aspects which are important for the use of neural networks for orbit control in accelerators and storage rings, especially the ability to cope with the nonlinear behavior of the orbit response to 'kicks' and with the slow drift in the orbit response during long-term operation. Results obtained for the two NSLS storage rings with several network architectures and various training methods for each architecture are given.
VLSI Cells Placement Using the Neural Networks
Azizi, Hacene; Zouaoui, Lamri; Mokhnache, Salah
2008-06-12
Artificial neural networks have been studied for several years. Their effectiveness makes it possible to expect high performance. The privileged fields of these techniques remain recognition and classification. Various applications in optimization are also studied from the angle of artificial neural networks, since they make it possible to apply distributed heuristic algorithms. In this article, a solution to the placement problem of the various cells during the realization of an integrated circuit is proposed using the KOHONEN network.
Stochastic cellular automata model of neural networks.
Goltsev, A V; de Abreu, F V; Dorogovtsev, S N; Mendes, J F F
2010-06-01
We propose a stochastic dynamical model of noisy neural networks with complex architectures and discuss activation of neural networks by a stimulus, pacemakers, and spontaneous activity. This model has a complex phase diagram with self-organized active neural states, hybrid phase transitions, and a rich array of behaviors. We show that if spontaneous activity (noise) reaches a threshold level then global neural oscillations emerge. Stochastic resonance is a precursor of this dynamical phase transition. These oscillations are an intrinsic property of even small groups of 50 neurons. PMID:20866454
Neural network regulation driven by autonomous neural firings
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2016-07-01
Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified by neural firings. Here, we study the effect of autonomous firings on network formation. Under temporally asymmetric Hebbian learning, bidirectional connections easily lose their balance and become unidirectional. Defining the difference between reciprocal connections as new variables, we can express the learning dynamics as if Ising-model spins were interacting with each other, as in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.
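The instability of reciprocal connections under temporally asymmetric Hebbian learning can be shown with a two-neuron toy model. This is an illustrative caricature, not the paper's model: the rule that the firing order is more likely along the stronger direction is an assumption made here to expose the positive feedback.

```python
import random

# Toy model of how temporally asymmetric Hebbian learning breaks the
# balance of a reciprocal pair: if neuron 1 fires just before neuron 2,
# w12 is potentiated and w21 depressed (and vice versa). Biasing the
# firing order toward the stronger direction (an illustrative assumption)
# creates positive feedback, so the pair ends up unidirectional.
random.seed(2)
w12, w21, eta = 0.5, 0.5, 0.05
for _ in range(2000):
    p_forward = w12 / (w12 + w21)        # chance that 1 fires before 2
    if random.random() < p_forward:
        w12 = min(1.0, w12 + eta)        # potentiate the causal direction
        w21 = max(0.0, w21 - eta)        # depress the anti-causal one
    else:
        w21 = min(1.0, w21 + eta)
        w12 = max(0.0, w12 - eta)
```

The difference w12 - w21 behaves like a spin settling into one of two states, which is the intuition behind mapping the learning dynamics onto interacting Ising-model variables.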
Coronary Artery Diagnosis Aided by Neural Network
NASA Astrophysics Data System (ADS)
Stefko, Kamil
2007-01-01
Coronary artery disease is due to atheromatous narrowing and subsequent occlusion of the coronary vessel. Application of optimised feed forward multi-layer back propagation neural network (MLBP) for detection of narrowing in coronary artery vessels is presented in this paper. The research was performed using 580 data records from traditional ECG exercise test confirmed by coronary arteriography results. Each record of training database included description of the state of a patient providing input data for the neural network. Level and slope of ST segment of a 12 lead ECG signal recorded at rest and after effort (48 floating point values) was the main component of input data for neural network was. Coronary arteriography results (verified the existence or absence of more than 50% stenosis of the particular coronary vessels) were used as a correct neural network training output pattern. More than 96% of cases were correctly recognised by especially optimised and a thoroughly verified neural network. Leave one out method was used for neural network verification so 580 data records could be used for training as well as for verification of neural network.
Neural networks with dynamical synapses: From mixed-mode oscillations and spindles to chaos
NASA Astrophysics Data System (ADS)
Lee, K.; Goltsev, A. V.; Lopes, M. A.; Mendes, J. F. F.
2013-01-01
Understanding of short-term synaptic depression (STSD) and other forms of synaptic plasticity is a topical problem in neuroscience. Here we study the role of STSD in the formation of complex patterns of brain rhythms. We use a cortical circuit model of neural networks composed of irregular spiking excitatory and inhibitory neurons having type 1 and 2 excitability and stochastic dynamics. In the model, neurons form a sparsely connected network and their spontaneous activity is driven by random spikes representing synaptic noise. Using simulations and analytical calculations, we found that if the STSD is absent, the neural network shows either asynchronous behavior or regular network oscillations depending on the noise level. In networks with STSD, changing parameters of synaptic plasticity and the noise level, we observed transitions to complex patterns of collective activity: mixed-mode and spindle oscillations, bursts of collective activity, and chaotic behavior. Interestingly, these patterns are stable in a certain range of the parameters and separated by critical boundaries. Thus, the parameters of synaptic plasticity can play the role of control parameters or switchers between different network states. However, changes of the parameters caused by a disease may lead to dramatic impairment of ongoing neural activity. We analyze the chaotic neural activity by use of the 0-1 test for chaos (Gottwald, G. & Melbourne, I., 2004) and show that it has a collective nature.
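The 0-1 test for chaos cited above (Gottwald & Melbourne, 2004) can be sketched as follows. This is a minimal illustration on standard toy signals (a chaotic logistic map versus a regular cosine), not the authors' implementation; the translation variables p, q are driven by the observable, and K is the linear correlation between the mean-square displacement and time:

```python
import math

def zero_one_test(phi, c):
    """Gottwald-Melbourne 0-1 test sketch: K near 1 = chaotic, K near 0 = regular."""
    N = len(phi)
    m = sum(phi) / N
    phi = [x - m for x in phi]                 # centre the observable
    p = [0.0]; q = [0.0]
    for j, x in enumerate(phi, start=1):       # translation variables
        p.append(p[-1] + x * math.cos(j * c))
        q.append(q[-1] + x * math.sin(j * c))
    ncut = N // 10
    M = []
    for n in range(1, ncut + 1):               # mean-square displacement M(n)
        M.append(sum((p[j + n] - p[j]) ** 2 + (q[j + n] - q[j]) ** 2
                     for j in range(1, N - ncut + 1)) / (N - ncut))
    ns = list(range(1, ncut + 1))
    mn = sum(ns) / ncut; mM = sum(M) / ncut    # K = corr(n, M(n))
    cov = sum((a - mn) * (b - mM) for a, b in zip(ns, M))
    return cov / math.sqrt(sum((a - mn) ** 2 for a in ns)
                           * sum((b - mM) ** 2 for b in M))

x, chaotic = 0.3, []
for _ in range(2000):                          # logistic map at r = 4 (chaotic)
    x = 4.0 * x * (1.0 - x)
    chaotic.append(x)
regular = [math.cos(0.37 * j) for j in range(2000)]
K_chaos = zero_one_test(chaotic, 1.7)
K_reg = zero_one_test(regular, 1.7)
```

For chaotic data M(n) grows linearly, so K approaches 1; for regular data M(n) stays bounded and K stays near 0.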
Multispectral-image fusion using neural networks
NASA Astrophysics Data System (ADS)
Kagel, Joseph H.; Platt, C. A.; Donaven, T. W.; Samstad, Eric A.
1990-08-01
A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.
Data compression using artificial neural networks
Watkins, B.E.
1991-09-01
This thesis investigates the application of artificial neural networks to the compression of image data. An algorithm is developed using the competitive learning paradigm which takes advantage of the parallel processing and classification capability of neural networks to produce an efficient implementation of vector quantization. Multi-stage, tree-searched, and classification vector quantization codebook designs are adapted to the neural network design to reduce the computational cost and hardware requirements. The results show that the new algorithm provides a substantial reduction in computational costs and an improvement in performance.
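Competitive-learning vector quantization of the kind described can be sketched in a few lines: each input activates the nearest codeword (the "winner"), and only that codeword moves toward the input. This is a minimal winner-take-all sketch on toy two-dimensional data; the thesis's multi-stage and tree-searched variants are not shown:

```python
import random

def train_codebook(vectors, k, epochs=20, lr=0.1, seed=0):
    """Competitive learning: only the winning codeword moves toward each input."""
    rng = random.Random(seed)
    codebook = [list(v) for v in rng.sample(vectors, k)]
    for _ in range(epochs):
        for v in vectors:
            # winner = nearest codeword (squared Euclidean distance)
            w = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], v)))
            codebook[w] = [c + lr * (a - c) for c, a in zip(codebook[w], v)]
    return codebook

def quantize(codebook, v):
    """Encode a vector as the index of its nearest codeword."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], v)))

# made-up data: two well-separated clusters
data = [(0.0, 0.1), (0.1, 0.0), (0.9, 1.0), (1.0, 0.9)]
cb = train_codebook(data, k=2)
```

Compression comes from transmitting only the winning index per block rather than the block itself.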
Description of interatomic interactions with neural networks
NASA Astrophysics Data System (ADS)
Hajinazar, Samad; Shao, Junping; Kolmogorov, Aleksey N.
Neural networks are a promising alternative to traditional classical potentials for describing interatomic interactions. Recent research in the field has demonstrated how arbitrary atomic environments can be represented with sets of general functions which serve as an input for the machine learning tool. We have implemented a neural network formalism in the MAISE package and developed a protocol for automated generation of accurate models for multi-component systems. Our tests illustrate the performance of neural networks and known classical potentials for a range of chemical compositions and atomic configurations. Supported by NSF Grant DMR-1410514.
Neural network with formed dynamics of activity
Dunin-Barkovskii, V.L.; Osovets, N.B.
1995-03-01
The problem of developing a neural network with a given pattern of the state sequence is considered. A neural network structure and an algorithm for forming its bond matrix, which lead to an approximate but robust solution of the problem, are proposed and discussed. Limiting characteristics of the serviceability of the proposed structure are studied. Various methods of visualizing dynamic processes in a neural network are compared. Possible applications of the results for the interpretation of neurophysiological data and in neuroinformatics systems are discussed.
Stock market index prediction using neural networks
NASA Astrophysics Data System (ADS)
Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok
1994-03-01
A neural network approach to stock market index prediction is presented. Actual data of the Wall Street Journal's Dow Jones Industrial Index was used as a benchmark in our experiments, where Radial Basis Function based neural networks were designed to model these indices over the period from January 1988 to December 1992. Notable success was achieved, with the proposed model producing over 90% prediction accuracy on monthly Dow Jones Industrial Index predictions. The model also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrate that the Radial Basis Function neural network is an excellent candidate for predicting stock market indices.
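For fixed Gaussian centres, a Radial Basis Function network of the kind described reduces to solving a linear system for the output weights. A minimal sketch with made-up index values (not the paper's data, architecture, or training method), placing one centre at each sample so the fit interpolates exactly:

```python
import math

def rbf_matrix(xs, centers, sigma):
    """Gaussian RBF design matrix."""
    return [[math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centers]
            for x in xs]

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense system)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_predict(x, centers, weights, sigma):
    return sum(w * math.exp(-((x - c) ** 2) / (2 * sigma ** 2))
               for w, c in zip(weights, centers))

# toy "index" series over five time points (hypothetical values)
t = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [100.0, 102.0, 101.0, 105.0, 107.0]
sigma = 1.0
W = solve(rbf_matrix(t, t, sigma), y)
```

In practice the centres would be fewer than the samples and chosen by clustering, with the weights fitted by least squares rather than exact interpolation.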
A neural network prototyping package within IRAF
NASA Technical Reports Server (NTRS)
Bazell, D.; Bankman, I.
1992-01-01
We outline our plans for incorporating a Neural Network Prototyping Package into the IRAF environment. The package we are developing will allow the user to choose between different types of networks and to specify the details of the particular architecture chosen. Neural networks consist of a highly interconnected set of simple processing units. The strengths of the connections between units are determined by weights which are adaptively set as the network 'learns'. In some cases, learning can be a separate phase in the use of the network, while in other cases the network learns continuously. Neural networks have been found to be very useful in pattern recognition and image processing applications. They can form very general 'decision boundaries' to differentiate between objects in pattern space, and they can be used for associative recall of patterns based on partial cues and for adaptive filtering. We discuss the different architectures we plan to use and give examples of what they can do.
Nonequilibrium landscape theory of neural networks
Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin
2013-01-01
The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye-movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments. PMID:24145451
An Introduction to Neural Networks for Hearing Aid Noise Recognition.
ERIC Educational Resources Information Center
Kim, Jun W.; Tyler, Richard S.
1995-01-01
This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the characteristics…
Results of the neural network investigation
NASA Astrophysics Data System (ADS)
Uvanni, Lee A.
1992-04-01
Rome Laboratory has designed and implemented a neural network based automatic target recognition (ATR) system under contract F30602-89-C-0079 with Booz, Allen & Hamilton (BAH), Inc., of Arlington, Virginia. The system utilizes a combination of neural network paradigms and conventional image processing techniques in a parallel environment on the IE-2000 SUN 4 workstation at Rome Laboratory. The IE-2000 workstation was designed to assist the Air Force and Department of Defense to derive the needs for image exploitation and image exploitation support for the late 1990s - year 2000 time frame. The IE-2000 consists of a developmental testbed and an applications testbed, both with the goal of solving real-world problems on real-world facilities for image exploitation. To fully exploit the parallel nature of neural networks, 18 Inmos T800 transputers were utilized, in an attempt to provide a near-linear speed-up for each subsystem component implemented on them. The initial design contained three well-known neural network paradigms, each modified by BAH to some extent: the Selective Attention Neocognitron (SAN), the Binary Contour System/Feature Contour System (BCS/FCS), and Adaptive Resonance Theory 2 (ART-2), and one neural network designed by BAH called the Image Variance Exploitation Network (IVEN). Through rapid prototyping, the initial system evolved into a completely different final design, called the Neural Network Image Exploitation System (NNIES), where the final system consists of two basic components: the Double Variance (DV) layer and the Multiple Object Detection And Location System (MODALS). A rapid prototyping neural network CAD Tool, designed by Booz, Allen & Hamilton, was used to rapidly build and emulate the neural network paradigms. Evaluation of the completed ATR system included probability of detection and probability of false alarm, among other measures.
Parameter extraction with neural networks
NASA Astrophysics Data System (ADS)
Cazzanti, Luca; Khan, Mumit; Cerrina, Franco
1998-06-01
In semiconductor processing, the modeling of the process is becoming more and more important. While the ultimate goal is that of developing a set of tools for designing a complete process (Technology CAD), it is also necessary to have modules to simulate the various technologies and, in particular, to optimize specific steps. This need is particularly acute in lithography, where the continuous decrease in CD forces the technologies to operate near their limits. In the development of a 'model' for a physical process, we face several levels of challenges. First, it is necessary to develop a 'physical model,' i.e. a rational description of the process itself on the basis of known physical laws. Second, we need an 'algorithmic model' to represent in a virtual environment the behavior of the 'physical model.' After a 'complete' model has been developed and verified, it becomes possible to do performance analysis. In many cases the input parameters are poorly known or not accessible directly to experiment. It would be extremely useful to obtain the values of these 'hidden' parameters from experimental results by comparing model to data. This is particularly severe, because the complexity and costs associated with semiconductor processing make a simple 'trial-and-error' approach infeasible and cost-inefficient. Even when computer models of the process already exist, obtaining data through simulations may be time consuming. Neural networks (NN) are powerful computational tools to predict the behavior of a system from an existing data set. They are able to adaptively 'learn' input/output mappings and to act as universal function approximators. In this paper we use artificial neural networks to build a mapping from the input parameters of the process to output parameters which are indicative of the performance of the process. Once the NN has been 'trained,' it is also possible to observe the process 'in reverse,' and to extract the values of the inputs which yield the observed outputs.
Imbibition well stimulation via neural network design
Weiss, William
2007-08-14
A method for stimulation of hydrocarbon production via imbibition by utilization of surfactants. The method includes use of fuzzy logic and neural network architecture constructs to determine surfactant use.
Constructive Autoassociative Neural Network for Facial Recognition
Fernandes, Bruno J. T.; Cavalcanti, George D. C.; Ren, Tsang I.
2014-01-01
Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature. PMID:25542018
Using Neural Networks for Sensor Validation
NASA Technical Reports Server (NTRS)
Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William
1998-01-01
This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.
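The second approach, estimating a failed sensor from its redundant neighbours, can be illustrated with a deliberately simplified linear stand-in for the auto-associative network (the paper's network performs nonlinear principal component analysis; the sensor data here are made up):

```python
def fit_linear(xs, ys):
    """Least-squares fit y = a*x + b, learned from healthy-sensor history."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# healthy history of two redundant sensors (toy data: s2 tracks 2*s1 + 1)
s1 = [1.0, 2.0, 3.0, 4.0, 5.0]
s2 = [3.0, 5.1, 6.9, 9.0, 11.0]
a, b = fit_linear(s1, s2)

def estimate_s2(x1):
    # analytical-redundancy estimate substituted when sensor 2 is flagged failed
    return a * x1 + b
```

When the fault detector flags sensor 2, the controller can substitute `estimate_s2(s1_reading)` for the failed measurement; the auto-associative network generalises this idea to many sensors and nonlinear relations.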
Radar signal categorization using a neural network
NASA Technical Reports Server (NTRS)
Anderson, James A.; Gately, Michael T.; Penz, P. Andrew; Collins, Dean R.
1991-01-01
Neural networks were used to analyze a complex simulated radar environment which contains noisy radar pulses generated by many different emitters. The neural network used is an energy minimizing network (the BSB model) which forms energy minima - attractors in the network dynamical system - based on learned input data. The system first determines how many emitters are present (the deinterleaving problem). Pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, it is possible to make tentative identifications of them based on their observed parameters. As a test of this idea, a neural network was used to form a small data base that potentially could make emitter identifications.
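The BSB (Brain-State-in-a-Box) dynamics mentioned above can be sketched in a few lines: a linear feedback step followed by clipping to the unit hypercube, whose corners act as the stable attractors. A toy single-pattern example with hypothetical data (not the radar pulse parameters of the study):

```python
def bsb_step(W, x, alpha=0.2):
    """One BSB iteration: linear feedback, then clip to the box [-1, 1]^n."""
    n = len(x)
    y = [x[i] + alpha * sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
    return [max(-1.0, min(1.0, v)) for v in y]

# Hebbian outer-product weights storing a single toy "emitter signature" p
p = [1.0, -1.0, 1.0, -1.0]
W = [[pi * pj for pj in p] for pi in p]

state = [0.5, -0.2, 0.3, -0.4]      # noisy observation of p
for _ in range(10):
    state = bsb_step(W, state)       # state is driven into a corner of the box
```

Repeated updates amplify the component of the state along stored patterns until the state saturates at a hypercube corner, which is how pulses from one emitter collapse onto a single attractor.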
Limitations of opto-electronic neural networks
NASA Technical Reports Server (NTRS)
Yu, Jeffrey; Johnston, Alan; Psaltis, Demetri; Brady, David
1989-01-01
Consideration is given to the limitations of implementing neurons, weights, and connections in neural networks for electronics and optics. It is shown that the advantages of each technology are utilized when electronically fabricated neurons are included and a combination of optics and electronics are employed for the weights and connections. The relationship between the types of neural networks being constructed and the choice of technologies to implement the weights and connections is examined.
Neural network simulations of the nervous system.
van Leeuwen, J L
1990-01-01
Present knowledge of brain mechanisms is mainly based on anatomical and physiological studies. Such studies are however insufficient to understand the information processing of the brain. The present new focus on neural network studies is the most likely candidate to fill this gap. The present paper reviews some of the history and current status of neural network studies. It signals some of the essential problems for which answers have to be found before substantial progress in the field can be made. PMID:2245130
Using neural networks in software repositories
NASA Technical Reports Server (NTRS)
Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.
1992-01-01
The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.
Neural-Network Controller For Vibration Suppression
NASA Technical Reports Server (NTRS)
Boussalis, Dhemetrios; Wang, Shyh Jong
1995-01-01
Neural-network-based adaptive-control system proposed for vibration suppression of flexible space structures. Controller features three-layer neural network and utilizes output feedback. Measurements generated by various sensors on structure. Feed forward path also included to speed up response in case plant exhibits predominantly linear dynamic behavior. System applicable to single-input single-output systems. Work extended to multiple-input multiple-output systems as well.
Optimization neural network for solving flow problems.
Perfetti, R
1995-01-01
This paper describes a neural network for solving flow problems, which are of interest in many areas of application, such as fuel, hydro, and electric power scheduling. The neural network consists of two layers: a hidden layer and an output layer. The hidden units correspond to the nodes of the flow graph. The output units represent the branch variables. The network has a linear order of complexity, is easily programmable, and is suited for analog very large scale integration (VLSI) realization. The functionality of the proposed network is illustrated by a simulation example concerning the maximal flow problem. PMID:18263420
A neural network simulation package in CLIPS
NASA Technical Reports Server (NTRS)
Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John
1990-01-01
The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). A very important by-product of this research has been the emergence of an integrated technique for using rule-based systems in conjunction with neural networks to solve complex problems. The system provides a tool kit for integrated use of the two techniques and is also extendible to accommodate other AI techniques such as semantic networks, connectionist networks, and even Petri nets. This integrated technique can be very useful in solving complex AI problems.
Speech synthesis with artificial neural networks
NASA Astrophysics Data System (ADS)
Weijters, Ton; Thole, Johan
1992-10-01
The application of neural nets to speech synthesis is considered. In speech synthesis, the main efforts so far have been to master the grapheme to phoneme conversion. During this conversion symbols (graphemes) are converted into other symbols (phonemes). Neural networks, however, are especially competitive for tasks in which complex nonlinear transformations are needed and sufficient domain specific knowledge is not available. The conversion of text into speech parameters appropriate as input for a speech generator seems such a task. Results of a pilot study in which an attempt is made to train a neural network for this conversion are presented.
A neural network for visual pattern recognition
Fukushima, K.
1988-03-01
A modeling approach, which is a synthetic approach using neural network models, continues to gain importance. In the modeling approach, the authors study how to interconnect neurons to synthesize a brain model, which is a network with the same functions and abilities as the brain. The relationship between modeling neural networks and neurophysiology resembles that between theoretical physics and experimental physics. Modeling takes a synthetic approach, while neurophysiology or psychology takes an analytical approach. Modeling neural networks is useful in explaining the brain and also in engineering applications. It brings the results of neurophysiological and psychological research to engineering applications in the most direct way possible. This article discusses a neural network model thus obtained, a model with selective attention in visual pattern recognition.
The H1 neural network trigger project
NASA Astrophysics Data System (ADS)
Kiesling, C.; Denby, B.; Fent, J.; Fröchtenicht, W.; Garda, P.; Granado, B.; Grindhammer, G.; Haberer, W.; Janauschek, L.; Kobler, T.; Koblitz, B.; Nellen, G.; Prevotet, J.-C.; Schmidt, S.; Tzamariudaki, E.; Udluft, S.
2001-08-01
We present a short overview of neuromorphic hardware and some of the physics projects making use of such devices. As a concrete example we describe an innovative project within the H1-Experiment at the electron-proton collider HERA, instrumenting hardwired neural networks as pattern recognition machines to discriminate between wanted physics and uninteresting background at the trigger level. The decision time of the system is less than 20 microseconds, typical for a modern second level trigger. The neural trigger has been successfully running for the past four years and has produced new physics results from H1 that were unobtainable so far with other triggering schemes. We describe the concepts and the technical realization of the neural network trigger system, present the most important physics results, and motivate an upgrade of the system for the future high luminosity running at HERA. The upgrade concentrates on "intelligent preprocessing" of the neural inputs, which helps to strongly improve the networks' discrimination power.
Optical neural stimulation modeling on degenerative neocortical neural networks
NASA Astrophysics Data System (ADS)
Zverev, M.; Fanjul-Vélez, F.; Salas-García, I.; Arce-Diego, J. L.
2015-07-01
Neurodegenerative diseases usually appear at advanced age. Medical advances make people live longer and as a consequence, the number of neurodegenerative diseases continuously grows. There is still no cure for these diseases, but several brain stimulation techniques have been proposed to improve patients' condition. One of them is Optical Neural Stimulation (ONS), which is based on the application of optical radiation over specific brain regions. The outer cerebral zones can be noninvasively stimulated, without the common drawbacks associated to surgical procedures. This work focuses on the analysis of ONS effects in stimulated neurons to determine their influence in neuronal activity. For this purpose a neural network model has been employed. The results show the neural network behavior when the stimulation is provided by means of different optical radiation sources and constitute a first approach to adjust the optical light source parameters to stimulate specific neocortical areas.
Artificial astrocytes improve neural network performance.
Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso
2011-01-01
Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of the performance of NNs with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157
Fuzzy logic and neural networks
Loos, J.R.
1994-11-01
Combine fuzzy logic's fuzzy sets, fuzzy operators, fuzzy inference, and fuzzy rules - like defuzzification - with neural networks and you can arrive at very unfuzzy real-time control. Fuzzy logic, cursed with a very whimsical title, simply means multivalued logic, which includes not only the conventional two-valued (true/false) crisp logic, but also the logic of three or more values. This means one can assign logic values of true, false, and somewhere in between. This is where fuzziness comes in. Multi-valued logic avoids the black-and-white, all-or-nothing assignment of true or false to an assertion. Instead, it permits the assignment of shades of gray. When assigning a value of true or false to an assertion, the numbers typically used are "1" or "0". This is the case for programmed systems. If "0" means "false" and "1" means "true," then "shades of gray" are any numbers between 0 and 1. Therefore, "nearly true" may be represented by 0.8 or 0.9, "nearly false" may be represented by 0.1 or 0.2, and "your guess is as good as mine" may be represented by 0.5. The flexibility available to one is limitless. One can associate any meaning, such as "nearly true", to any value of any granularity, such as 0.9999.
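The "shades of gray" idea can be made concrete with triangular membership functions and the usual min/max fuzzy operators. A small illustrative sketch; the membership shapes and the temperature example are assumptions, not taken from the article:

```python
def triangular(a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# hypothetical linguistic terms for a temperature input
cold = triangular(-10.0, 0.0, 15.0)
warm = triangular(10.0, 20.0, 30.0)

x = 13.0
mu_cold = cold(x)              # degree of truth in [0, 1], not crisp true/false
mu_warm = warm(x)
both = min(mu_cold, mu_warm)   # fuzzy AND
either = max(mu_cold, mu_warm) # fuzzy OR
```

At 13 degrees the input is simultaneously a little "cold" and somewhat "warm"; a fuzzy controller combines such partial truths through its rules before defuzzifying to a single crisp output.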
On sparsely connected optimal neural networks
Beiu, V.; Draghici, S.
1997-10-01
This paper uses two different approaches to show that VLSI- and size-optimal discrete neural networks are obtained for small fan-in values. These have applications to hardware implementations of neural networks, but also reveal an intrinsic limitation of digital VLSI technology: its inability to cope with highly connected structures. The first approach is based on implementing F_{n,m} functions. The authors show that this class of functions can be implemented in VLSI-optimal (i.e., minimizing AT^2) neural networks of small constant fan-ins. In order to estimate the area (A) and the delay (T) of such networks, the following cost functions will be used: (i) the connectivity and the number of bits for representing the weights and thresholds - for good estimates of the area; and (ii) the fan-ins and the length of the wires - for good estimates of the delay. The second approach is based on implementing Boolean functions for which the classical Shannon's decomposition can be used. Such a solution has already been used to prove bounds on the size of fan-in-2 neural networks. They will generalize the result presented there to arbitrary fan-in, and prove that the size is minimized by small fan-in values. Finally, a size-optimal neural network of small constant fan-ins will be suggested for F_{n,m} functions.
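Shannon's decomposition referred to above expands a Boolean function about one variable, f = xi AND f(xi=1) OR (NOT xi) AND f(xi=0), which is what lets a gate of large fan-in be split into two cofactors of one fewer input. A sketch on a 3-input majority function (an illustrative choice, not a function from the paper):

```python
def shannon_decompose(f, i):
    """Shannon expansion about variable i: returns the cofactors f|xi=0, f|xi=1."""
    f0 = lambda bits: f(bits[:i] + (0,) + bits[i:])   # cofactor with xi fixed to 0
    f1 = lambda bits: f(bits[:i] + (1,) + bits[i:])   # cofactor with xi fixed to 1
    return f0, f1

def recompose(f0, f1, i):
    """Rebuild f by multiplexing the cofactors on bit i."""
    return lambda bits: (f1(bits[:i] + bits[i + 1:]) if bits[i]
                         else f0(bits[:i] + bits[i + 1:]))

maj = lambda bits: int(sum(bits) >= 2)                # 3-input majority
f0, f1 = shannon_decompose(maj, 0)
g = recompose(f0, f1, 0)

# exhaustive check that the decomposition is exact
ok = all(g(b) == maj(b)
         for b in [(i >> 2 & 1, i >> 1 & 1, i & 1) for i in range(8)])
```

Applying the expansion recursively trades depth for fan-in, which is the size/fan-in trade-off the paper quantifies.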
Artificial Neural Networks and Instructional Technology.
ERIC Educational Resources Information Center
Carlson, Patricia A.
1991-01-01
Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…
Neural-Network Modeling Of Arc Welding
NASA Technical Reports Server (NTRS)
Anderson, Kristinn; Barnett, Robert J.; Springfield, James F.; Cook, George E.; Strauss, Alvin M.; Bjorgvinsson, Jon B.
1994-01-01
Artificial neural networks considered for use in monitoring and controlling gas/tungsten arc-welding processes. Relatively simple network, using 4 welding equipment parameters as inputs, estimates 2 critical weld-bead parameters within 5 percent. Advantage is computational efficiency.
Higher-Order Neural Networks Recognize Patterns
NASA Technical Reports Server (NTRS)
Reid, Max B.; Spirkovska, Lilly; Ochoa, Ellen
1996-01-01
Networks of higher order have enhanced capabilities to distinguish between different two-dimensional patterns and to recognize those patterns. Also enhanced capabilities to "learn" patterns to be recognized: "trained" with far fewer examples and, therefore, in less time than necessary to train comparable first-order neural networks.
Orthogonal Patterns In A Binary Neural Network
NASA Technical Reports Server (NTRS)
Baram, Yoram
1991-01-01
Report presents some recent developments in theory of binary neural networks. Subject matter relevant to associative (content-addressable) memories and to recognition of patterns - both of considerable importance in advancement of robotics and artificial intelligence. When probed by any pattern, network converges to one of stored patterns.
Target detection using multilayer feedforward neural networks
NASA Astrophysics Data System (ADS)
Scherf, Alan V.; Scott, Peter A.
1991-08-01
Multilayer feedforward neural networks have been integrated with conventional image processing techniques to form a hybrid target detection algorithm for use in the F/A-18 FLIR pod advanced air-to-air track-while-scan mode. The network has been trained to detect and localize small targets in infrared imagery. The comparative performance of this target detection technique is evaluated.
Comparing artificial and biological dynamical neural networks
NASA Astrophysics Data System (ADS)
McAulay, Alastair D.
2006-05-01
Modern computers can be made friendlier and otherwise improved by making them behave more like humans. Perhaps we can learn how to do this from biology, in which human brains evolved over a long period of time. Therefore, we first explain a commonly used biological neural network (BNN) model, the Wilson-Cowan neural oscillator, which has cross-coupled excitatory (positive) and inhibitory (negative) neurons. The two types of neurons are used for frequency-modulation communication between neurons, which provides immunity to electromagnetic interference. We then evolve, for the first time, an artificial neural network (ANN) to perform the same task. Two dynamical feed-forward artificial neural networks use cross-coupling feedback (like that in a flip-flop) to form an ANN nonlinear dynamic neural oscillator with the same equations as the Wilson-Cowan neural oscillator. Finally we show, through simulation, that the equations perform the basic neural threshold function, switching between a stable zero output and a stable oscillation, that is, a stable limit cycle. Optical implementation with an injected laser diode and future research are discussed.
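A minimal Euler-integration sketch of a cross-coupled excitatory/inhibitory pair in the Wilson-Cowan style can illustrate the threshold behavior described above. The coupling weights, drive levels, and integration settings here are illustrative assumptions, not values from the paper:

```python
import math

def sigmoid(x):
    # Logistic firing-rate function used for both populations.
    return 1.0 / (1.0 + math.exp(-x))

def wilson_cowan(drive, steps=5000, dt=0.01):
    # E excites both populations; I inhibits both (cross-coupled pair).
    # The weights below are illustrative textbook-style values.
    e, i = 0.1, 0.05
    trace = []
    for _ in range(steps):
        de = -e + sigmoid(16.0 * e - 12.0 * i + drive - 4.0)
        di = -i + sigmoid(15.0 * e - 3.0 * i - 8.0)
        e, i = e + dt * de, i + dt * di
        trace.append(e)
    return trace

quiet = wilson_cowan(drive=0.0)   # weak input: activity decays toward rest
active = wilson_cowan(drive=3.0)  # strong input: large sustained activity
```

With weak drive the excitatory trace relaxes to a low fixed point; with strong drive it switches to large sustained activity, the threshold behavior the abstract describes.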
Electronic device aspects of neural network memories
NASA Technical Reports Server (NTRS)
Lambe, J.; Moopenn, A.; Thakoor, A. P.
1985-01-01
The basic issues related to the electronic implementation of the neural network model (NNM) for content addressable memories are examined. A brief introduction to the principles of the NNM is followed by an analysis of the information storage of the neural network in the form of a binary connection matrix and the recall capability of such matrix memories based on a hardware simulation study. In addition, materials and device architecture issues involved in the future realization of such networks in VLSI-compatible ultrahigh-density memories are considered. A possible space application of such devices would be in the area of large-scale information storage without mechanical devices.
Improving neural network performance on SIMD architectures
NASA Astrophysics Data System (ADS)
Limonova, Elena; Ilin, Dmitry; Nikolaev, Dmitry
2015-12-01
Neural network calculations for image recognition problems can be very time consuming. In this paper we propose three methods of increasing neural network performance on SIMD architectures. The usage of SIMD extensions is a way to speed up neural network processing available for a number of modern CPUs. In our experiments, we use ARM NEON as an example SIMD architecture. The first method employs the half-precision float data type for matrix computations. The second method uses a fixed-point data type for the same purpose. The third method considers a vectorized implementation of the activation functions. For each method we set up a series of experiments for convolutional and fully connected networks designed for image recognition tasks.
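The fixed-point idea can be sketched as follows. The Q1.15 format and saturation bounds are illustrative choices, not necessarily the representation used in the paper; on a real SIMD unit the widening multiply-accumulate below would be a single vector instruction:

```python
SCALE = 1 << 15  # Q1.15: 1 sign bit, 15 fractional bits

def to_fixed(x):
    # Quantize a float in roughly [-1, 1) to a saturating 16-bit integer.
    return max(-SCALE, min(SCALE - 1, int(round(x * SCALE))))

def fixed_dot(weights, inputs):
    # Accumulate products in a wide integer and rescale once at the end,
    # mirroring a widening multiply-accumulate on an integer SIMD unit.
    acc = 0
    for w, x in zip(weights, inputs):
        acc += to_fixed(w) * to_fixed(x)
    return acc / (SCALE * SCALE)

w = [0.5, -0.25, 0.125]
x = [0.8, 0.4, -0.2]
y = fixed_dot(w, x)  # close to the float value 0.275
```

The quantization error per product is bounded by the 2^-15 step size, which is why small fixed-point formats are usually accurate enough for inference.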
Molnár, Botond; Ercsey-Ravasz, Mária
2013-01-01
There has been a long history of using neural networks for combinatorial optimization and constraint satisfaction problems. Symmetric Hopfield networks and similar approaches use steepest descent dynamics, and they always converge to the closest local minimum of the energy landscape. For finding global minima, additional parameter-sensitive techniques are used, such as classical simulated annealing or the so-called chaotic simulated annealing, which induces chaotic dynamics by adding extra terms to the energy landscape. Here we show that asymmetric continuous-time neural networks can solve constraint satisfaction problems without getting trapped in non-solution attractors. We concentrate on a model solving Boolean satisfiability (k-SAT), which is a quintessential NP-complete problem. There is a one-to-one correspondence between the stable fixed points of the neural network and the k-SAT solutions, and we present numerical evidence that limit cycles may also be avoided by appropriately choosing the parameters of the model. This optimal parameter region is fairly independent of the size and hardness of instances, so parameters can be chosen independently of the properties of problems and no tuning is required during the dynamical process. The model is similar to cellular neural networks already used in CNN computers. On an analog device, solving a SAT problem would take a single operation: the connection weights are determined by the k-SAT instance, and starting from any initial condition the system searches until it finds a solution. In this new approach, transient chaotic behavior appears as a natural consequence of optimization hardness and not as an externally induced effect. PMID:24066045
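A schematic of this style of analog search - continuous variables plus auxiliary clause weights that grow while a clause stays unsatisfied - might look like the sketch below. It is loosely inspired by the description above, not the paper's exact model; the update rule, step size, and starting point are all illustrative assumptions:

```python
import math

def solve_3sat(clauses, n, dt=0.05, max_steps=20000):
    # clauses: tuples of nonzero ints; literal v refers to variable |v|-1,
    # a negative v means the variable appears negated.
    s = [0.1 * ((i * 7919) % 13 - 6) for i in range(n)]  # deterministic start
    a = [1.0] * len(clauses)                             # auxiliary weights
    for _ in range(max_steps):
        grad = [0.0] * n
        for m, clause in enumerate(clauses):
            k = 1.0  # k -> 0 once some literal satisfies the clause
            for lit in clause:
                c = 1.0 if lit > 0 else -1.0
                k *= (1.0 - c * s[abs(lit) - 1]) / 2.0
            # Unsatisfied clauses gain weight (capped to avoid overflow).
            a[m] = min(a[m] * math.exp(dt * k), 1e100)
            for lit in clause:
                c = 1.0 if lit > 0 else -1.0
                grad[abs(lit) - 1] += a[m] * c * k
        s = [max(-1.0, min(1.0, si + dt * g)) for si, g in zip(s, grad)]
        assignment = [si > 0.0 for si in s]
        if all(any(assignment[abs(l) - 1] == (l > 0) for l in cl)
               for cl in clauses):
            return assignment
    return None

# (x1 or x2) and (x2 or x3) and (not x1 or x3) -- a satisfiable toy instance
solution = solve_3sat([(1, 2), (2, 3), (-1, 3)], n=3)
```

The growing weights play the role the abstract assigns to optimization hardness: frustrated clauses eventually dominate the gradient and push the trajectory out of non-solution regions.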
Using Neural Networks to Describe Tracer Correlations
NASA Technical Reports Server (NTRS)
Lary, D. J.; Mueller, M. D.; Mussa, H. Y.
2003-01-01
Neural networks are ideally suited to describing the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and a family of correlation curves would normally be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 until the present. The neural network Fortran code used is available for download.
Neural network technologies for image classification
NASA Astrophysics Data System (ADS)
Korikov, A. M.; Tungusova, A. V.
2015-11-01
We analyze the classes of problems with an objective necessity to use neural network technologies, i.e. problems of representation and resolution in the neural network logical basis. Among these problems, image recognition takes an important place, in particular the classification of multi-dimensional data based on information about textural characteristics. These problems occur in aerospace and seismic monitoring, materials science, medicine, and other fields. We reviewed different approaches to texture description: statistical, structural, and spectral. We developed a neural network technology for resolving a practical problem of cloud image classification for satellite snapshots from the spectroradiometer MODIS. The cloud texture is described by the statistical characteristics of the GLCM (Gray-Level Co-Occurrence Matrix) method. From the range of neural network models that might be applied for image classification, we chose the probabilistic neural network model (PNN) and developed an implementation which performs the classification of the main types and subtypes of clouds. Also, we experimentally chose the optimal architecture and parameters for the PNN model used for image classification.
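A gray-level co-occurrence matrix and two classic texture statistics can be sketched in a few lines. The neighbor offset (horizontal), the number of gray levels, and the feature choice (contrast, energy) are illustrative assumptions, not necessarily the paper's setup:

```python
def glcm_features(img, levels=8):
    # img: 2-D list of grayscale values in [0, 255].
    g = [[0.0] * levels for _ in range(levels)]
    total = 0
    for row in img:
        q = [v * levels // 256 for v in row]  # quantize to `levels` gray bins
        for a, b in zip(q, q[1:]):            # horizontal neighbor offset (0, 1)
            g[a][b] += 1.0
            total += 1
    # Normalize counts to co-occurrence probabilities, then reduce to features.
    contrast = sum((i - j) ** 2 * g[i][j] / total
                   for i in range(levels) for j in range(levels))
    energy = sum((g[i][j] / total) ** 2
                 for i in range(levels) for j in range(levels))
    return contrast, energy

img = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [0, 0, 255, 255]]
contrast, energy = glcm_features(img)
```

Feature vectors like (contrast, energy) computed per image patch are what a classifier such as a PNN would then consume.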
Learning and diagnosing faults using neural networks
NASA Technical Reports Server (NTRS)
Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis
1990-01-01
Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor-data-to-fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor-data-to-fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Second, the neural network which performs sensor fusion is structured to detect new unknown faults for which examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.
A neural network approach to cloud classification
NASA Technical Reports Server (NTRS)
Lee, Jonathan; Weger, Ronald C.; Sengupta, Sailes K.; Welch, Ronald M.
1990-01-01
It is shown that, using high-spatial-resolution data, very high cloud classification accuracies can be obtained with a neural network approach. A texture-based neural network classifier using only single-channel visible Landsat MSS imagery achieves an overall cloud identification accuracy of 93 percent. Cirrus can be distinguished from boundary layer cloudiness with an accuracy of 96 percent, without the use of an infrared channel. Stratocumulus is retrieved with an accuracy of 92 percent, cumulus at 90 percent. The use of the neural network does not improve cirrus classification accuracy; rather, its main effect is the improved separation between stratocumulus and cumulus cloudiness. While most cloud classification algorithms rely on linear parametric schemes, the present study is based on a nonlinear, nonparametric four-layer neural network approach. A three-layer neural network architecture, the nonparametric K-nearest neighbor approach, and the linear stepwise discriminant analysis procedure are compared. A significant finding is that higher accuracies are attained with the nonparametric approaches using only 20 percent of the database as training data, compared with 67 percent of the database in the linear approach.
Using neural networks for process planning
NASA Astrophysics Data System (ADS)
Huang, Samuel H.; Zhang, HongChao
1995-08-01
Process planning has been recognized as an interface between computer-aided design and computer-aided manufacturing. Since the late 1960s, computer techniques have been used to automate process planning activities. AI-based techniques are designed for capturing, representing, organizing, and utilizing knowledge by computers, and are extremely useful for automated process planning. To date, most of the AI-based approaches used in automated process planning are variations of knowledge-based expert systems. Due to their knowledge acquisition bottleneck, expert systems are not sufficient for solving process planning problems. Fortunately, AI has developed other techniques that are useful for knowledge acquisition, e.g., neural networks. Neural networks have several advantages over expert systems that are desired in today's manufacturing practice. However, very few neural network applications in process planning have been reported. We present this paper in order to stimulate research on using neural networks for process planning. This paper also identifies the problems with neural networks and suggests some possible solutions, which will provide some guidelines for research and implementation.
Neural network training with global optimization techniques.
Yamazaki, Akio; Ludermir, Teresa B
2003-04-01
This paper presents an approach of using Simulated Annealing and Tabu Search for the simultaneous optimization of neural network architectures and weights. The problem considered is odor recognition in an artificial nose. Both methods have produced networks with high classification performance and low complexity. Generalization has been improved by using the backpropagation algorithm for fine tuning. The combination of simple and traditional search methods has been shown to be very suitable for generating compact and efficient networks. PMID:12923920
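Simulated annealing over a network's weight vector can be sketched as follows. The move distribution, cooling schedule, and toy loss function are illustrative assumptions, not the settings used in the paper:

```python
import math, random

def anneal_weights(loss, dim, t0=1.0, cooling=0.995, steps=2000, seed=0):
    # Simulated annealing over a flat weight vector: propose a Gaussian
    # perturbation, always accept improvements, and accept worsening moves
    # with Boltzmann probability under a geometrically cooled temperature.
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(dim)]
    best, best_loss = list(w), loss(w)
    cur_loss, t = best_loss, t0
    for _ in range(steps):
        cand = [wi + rng.gauss(0.0, 0.1) for wi in w]
        cl = loss(cand)
        if cl < cur_loss or rng.random() < math.exp((cur_loss - cl) / t):
            w, cur_loss = cand, cl
            if cl < best_loss:
                best, best_loss = list(cand), cl
        t *= cooling
    return best, best_loss

# Toy quadratic "training loss" standing in for a network's error surface:
target = [0.5, -0.3, 0.8]
best, err = anneal_weights(
    lambda w: sum((a - b) ** 2 for a, b in zip(w, target)), dim=3)
```

In the paper's setting the loss would be the classification error of a candidate architecture/weight pair, with backpropagation applied afterwards for fine tuning.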
Fuzzy neural network with fast backpropagation learning
NASA Astrophysics Data System (ADS)
Wang, Zhiling; De Sario, Marco; Guerriero, Andrea; Mugnuolo, Raffaele
1995-03-01
Neural filters based on multilayer backpropagation networks have been shown to be able to approximate almost any linear or non-linear filter. Because of the slowness of the networks' convergence, however, their fields of application have been limited. In this paper, fuzzy logic is introduced to adjust the learning rate and momentum parameter depending on the output errors and training times. This greatly improves the convergence of the network. Test curves are shown to demonstrate the filter's fast performance.
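The adaptive-rate idea can be caricatured with crisp rules (a real fuzzy controller would use graded memberships and defuzzification); all thresholds and rule magnitudes below are illustrative assumptions, not the paper's rule base:

```python
def fuzzy_lr_update(lr, momentum, err, prev_err, lr_bounds=(1e-4, 1.0)):
    # Rule 1: error decreasing -> accelerate (grow lr, raise momentum).
    # Rule 2: error flat or increasing -> back off (shrink lr, damp momentum).
    if err < prev_err:
        lr *= 1.05
        momentum = min(0.95, momentum + 0.01)
    else:
        lr *= 0.7
        momentum = max(0.1, momentum - 0.05)
    lr = max(lr_bounds[0], min(lr_bounds[1], lr))  # keep lr in a sane range
    return lr, momentum

lr, m = 0.1, 0.9
lr, m = fuzzy_lr_update(lr, m, err=0.4, prev_err=0.5)  # error fell: lr grows
```

Such a schedule would be invoked once per epoch inside the backpropagation loop, between the error measurement and the next weight update.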
Stability of Stochastic Neutral Cellular Neural Networks
NASA Astrophysics Data System (ADS)
Chen, Ling; Zhao, Hongyong
In this paper, we study a class of stochastic neutral cellular neural networks. By constructing a suitable Lyapunov functional and employing the nonnegative semi-martingale convergence theorem, we give some sufficient conditions ensuring the almost sure exponential stability of the networks. The results obtained are helpful for designing stable networks when stochastic noise is taken into consideration. Finally, two examples are provided to show the correctness of our analysis.
Flexible body control using neural networks
NASA Technical Reports Server (NTRS)
Mccullough, Claire L.
1992-01-01
Progress is reported on the control of the Control Structures Interaction suitcase demonstrator (a flexible structure) using neural networks and fuzzy logic. It is concluded that while control by neural nets alone (i.e., allowing the net to design a controller with no human intervention) has yielded less than optimal results, the neural net trained to emulate the existing fuzzy logic controller does produce acceptable system responses for the initial conditions examined. Also, a neural net was found to be very successful in performing the emulation step necessary for the anticipatory fuzzy controller for the CSI suitcase demonstrator. The fuzzy neural hybrid, which exhibits good robustness and noise rejection properties, shows promise as a controller for practical flexible systems, and should be further evaluated.
Intermittent and sustained periodic windows in networked chaotic Rössler oscillators
He, Zhiwei; Sun, Yong; Zhan, Meng
2013-12-15
The route to chaos (or periodicity) in dynamical systems is one of the fundamental problems. Here, the dynamical behaviors of coupled chaotic Rössler oscillators on complex networks are investigated, and two different types of periodic windows under variation of the coupling strength are found. Under a moderate coupling, the periodic window is intermittent, and the attractors within the window depend extremely sensitively on the initial conditions, coupling parameter, and topology of the network. Therefore, after adding or removing one edge of the network, the periodic attractor can be destroyed and replaced by a chaotic one, or vice versa. In contrast, under an extremely weak coupling, another type of periodic window appears, which depends insensitively on the initial conditions, coupling parameter, and network. It is sustained and unchanged for different types of network structure. It is also found that the phase differences of the oscillators are almost discrete and randomly distributed, except that directly linked oscillators are more likely to have different phases. These dynamical behaviors have also been generally observed in other networked chaotic oscillators.
Solving large scale traveling salesman problems by chaotic neurodynamics.
Hasegawa, Mikio; Ikeguchi, Tohru; Aihara, Kazuyuki
2002-03-01
We propose a novel approach for solving large scale traveling salesman problems (TSPs) by chaotic dynamics. First, we realize the tabu search on a neural network, by utilizing the refractory effects as the tabu effects. Then, we extend it to a chaotic neural network version. We propose two types of chaotic searching methods, which are based on two different tabu searches. While the first one requires on the order of n^2 neurons for an n-city TSP, the second one requires only n neurons. Moreover, an automatic parameter tuning method for our chaotic neural network is presented for easy application to various problems. Last, we show that our method with n neurons is applicable to large TSPs such as an 85,900-city problem and exhibits better performance than the conventional stochastic searches and the tabu searches. PMID:12022514
Can neural networks compete with process calculations
Blaesi, J.; Jensen, B.
1992-12-01
Neural networks have been called a real alternative to rigorous theoretical models. A theoretical model for the calculation of refinery coker naphtha end point and coker furnace oil 90% point already was in place on the combination tower of a coking unit. Considerable data had been collected on the theoretical model during the commissioning phase and benefit analysis of the project. A neural net developed for the coker fractionator has equalled the accuracy of theoretical models and shown the capability to handle normal operating conditions. One disadvantage of a neural network is the amount of data needed to create a good model: anywhere from 100 to thousands of cases are needed. Overall, the correlation between the theoretical and neural net models for both the coker naphtha end point and the coker furnace oil 90% point was about 0.80; the average deviation was about 4 degrees. This indicates that the neural net model was at least as capable as the theoretical model in calculating inferred properties.
Kannada character recognition system using neural network
NASA Astrophysics Data System (ADS)
Kumar, Suresh D. S.; Kamalapuram, Srinivasa K.; Kumar, Ajay B. R.
2013-03-01
Handwriting recognition has been one of the active and challenging research areas in the field of pattern recognition. It has numerous applications, including reading aids for the blind, bank cheque processing, and conversion of handwritten documents into structured text form. There is not yet a sufficient body of work on Indian-language character recognition, especially for the Kannada script, one of the 15 major scripts in India. In this paper an attempt is made to recognize handwritten Kannada characters using feed-forward neural networks. A handwritten Kannada character is resized to 20x30 pixels. The resized character is used for training the neural network. Once the training process is completed, the same character is given as input to the neural network with different numbers of neurons in the hidden layer, and the recognition accuracy rates for different Kannada characters have been calculated and compared. The results show that the proposed system yields good recognition accuracy rates, comparable to those of other handwritten character recognition systems.
Classification of radar clutter using neural networks.
Haykin, S; Deng, C
1991-01-01
A classifier that incorporates both preprocessing and postprocessing procedures as well as a multilayer feedforward network (based on the back-propagation algorithm) in its design to distinguish between several major classes of radar returns including weather, birds, and aircraft is described. The classifier achieves an average classification accuracy of 89% on generalization for data collected during a single scan of the radar antenna. The procedures of feature selection for neural network training, the classifier design considerations, the learning algorithm development, the implementation, and the experimental results of the neural clutter classifier, which is simulated on a Warp systolic computer, are discussed. A comparative evaluation of the multilayer neural network with a traditional Bayes classifier is presented. PMID:18282874
Critical and resonance phenomena in neural networks
NASA Astrophysics Data System (ADS)
Goltsev, A. V.; Lopes, M. A.; Lee, K.-E.; Mendes, J. F. F.
2013-01-01
Brain rhythms contribute to every aspect of brain function. Here, we study critical and resonance phenomena that precede the emergence of brain rhythms. Using an analytical approach and simulations of a cortical circuit model of neural networks with stochastic neurons in the presence of noise, we show that spontaneous appearance of network oscillations occurs as a dynamical (non-equilibrium) phase transition at a critical point determined by the noise level, network structure, the balance between excitatory and inhibitory neurons, and other parameters. We find that the relaxation time of neural activity to a steady state, response to periodic stimuli at the frequency of the oscillations, amplitude of damped oscillations, and stochastic fluctuations of neural activity are dramatically increased when approaching the critical point of the transition.
Artificial neural networks for small dataset analysis.
Pasini, Antonello
2015-05-01
Artificial neural networks (ANNs) are usually considered as tools which can help to analyze cause-effect relationships in complex systems within a big-data framework. On the other hand, health sciences undergo complexity more than any other scientific discipline, and in this field large datasets are seldom available. In this situation, I show how a particular neural network tool, which is able to handle small datasets of experimental or observational data, can help in identifying the main causal factors leading to changes in some variable which summarizes the behaviour of a complex system, for instance the onset of a disease. A detailed description of the neural network tool is given and its application to a specific case study is shown. Recommendations for a correct use of this tool are also supplied. PMID:26101654
Web traffic prediction with artificial neural networks
NASA Astrophysics Data System (ADS)
Gluszek, Adam; Kekez, Michal; Rudzinski, Filip
2005-02-01
The main aim of the paper is to present an application of artificial neural networks to web traffic prediction. First, the general problem of time series modelling and forecasting is briefly described. Next, the details of building models of dynamic processes with neural networks are discussed. At this point, determination of the model structure in terms of its inputs and outputs is the most important question, because this structure is a rough approximation of the dynamics of the modelled process. The following section of the paper presents the results obtained by applying an artificial neural network (a classical multilayer perceptron trained with the backpropagation algorithm) to real-world web traffic prediction. Finally, we discuss the results, describe weak points of the presented method, and propose some alternative approaches.
The Dynamical Recollection of Interconnected Neural Networks Using Meta-heuristics
NASA Astrophysics Data System (ADS)
Kuremoto, Takashi; Watanabe, Shun; Kobayashi, Kunikazu; Feng, Liang-Bing; Obayashi, Masanao
Interconnected recurrent neural networks are well known for their associative-memory abilities with characteristic patterns. For example, the traditional Hopfield network (HN) can stably recall stored patterns, while Aihara's chaotic neural network (CNN) is able to realize dynamical recollection of a sequence of patterns. In this paper, we propose to use meta-heuristic (MH) methods such as particle swarm optimization (PSO) and the genetic algorithm (GA) to improve traditional associative memory systems. Using PSO or GA, for the CNN, optimal parameters are found to accelerate the recollection process and raise the rate of successful recollection; for the HN, an optimized bias current is calculated to provide the network with dynamical association of a series of patterns. Simulation results of binary pattern association showed the effectiveness of the proposed methods.
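The Hopfield-style associative memory that such meta-heuristics tune can be sketched as Hebbian storage plus iterated thresholding; the patterns and the synchronous update scheme below are illustrative, not from the paper:

```python
def train_hopfield(patterns):
    # Hebbian storage: W[i][j] = sum over patterns of p[i]*p[j], zero diagonal.
    # Patterns are lists of +1/-1 values.
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, sweeps=5):
    # Repeated synchronous update s <- sign(W s) until (practically) stable.
    n = len(state)
    s = list(state)
    for _ in range(sweeps):
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

stored = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]
w = train_hopfield(stored)
noisy = [1, -1, 1, -1, 1, 1]  # stored[0] with the last bit flipped
print(recall(w, noisy))       # → [1, -1, 1, -1, 1, -1]
```

A meta-heuristic such as PSO or GA would then search over additions to this basic scheme (e.g. a bias current per neuron) rather than over the Hebbian weights themselves.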
Signal dispersion within a hippocampal neural network
NASA Technical Reports Server (NTRS)
Horowitz, J. M.; Mates, J. W. B.
1975-01-01
A model network is described, representing two neural populations coupled so that one population is inhibited by activity it excites in the other. Parameters and operations within the model represent EPSPs, IPSPs, neural thresholds, conduction delays, background activity and spatial and temporal dispersion of signals passing from one population to the other. Simulations of single-shock and pulse-train driving of the network are presented for various parameter values. Neuronal events from 100 to 300 msec following stimulation are given special consideration in model calculations.
Reinforced recurrent neural networks for multi-step-ahead flood forecasts
NASA Astrophysics Data System (ADS)
Chen, Pin-An; Chang, Li-Chiu; Chang, Fi-John
2013-08-01
Considering that true values are not available at every time step in an online learning algorithm for multi-step-ahead (MSA) forecasts, an MSA reinforced real-time recurrent learning algorithm for recurrent neural networks (R-RTRL NN) is proposed. The main merit of the proposed method is that it repeatedly adjusts model parameters with the current information, including the latest observed values and the model's outputs, to enhance the reliability and forecast accuracy of the proposed method. The sequential formulation of the R-RTRL NN is derived. To demonstrate its reliability and effectiveness, the proposed R-RTRL NN is implemented to make 2-, 4- and 6-step-ahead forecasts for a famous benchmark chaotic time series and a reservoir flood inflow series in North Taiwan. For comparison purposes, three comparative neural networks (two dynamic and one static) were implemented. Numerical and experimental results indicate that the R-RTRL NN not only achieves superior performance to the comparative networks but also significantly improves the precision of MSA forecasts for both the chaotic time series and the reservoir inflow case during typhoon events, with effective mitigation of the time-lag problem.
Autonomous robot behavior based on neural networks
NASA Astrophysics Data System (ADS)
Grolinger, Katarina; Jerbic, Bojan; Vranjes, Bozo
1997-04-01
The purpose of an autonomous robot is to solve various tasks while adapting its behavior to a variable environment; it is expected to navigate much like a human would, including handling uncertain and unexpected obstacles. To achieve this, the robot has to be able to find solutions to unknown situations, to learn from experience - that is, to store action procedures together with corresponding knowledge of the workspace structure - and to recognize its working environment. The planning of intelligent robot behavior presented in this paper implements reinforcement learning based on strategic and random attempts for finding solutions, and a neural network approach for memorizing and recognizing the workspace structure (the structural assignment problem). Some well-known neural networks based on unsupervised learning are considered with regard to the structural assignment problem. An adaptive fuzzy shadowed neural network is developed. It has an additional shadowed hidden layer, a specific learning rule, and an initialization phase. The developed neural network combines advantages of networks based on Adaptive Resonance Theory and, using the shadowed hidden layer, provides the ability to recognize slightly translated or rotated obstacles in any direction.
Chaotic Gene Regulatory Networks Can Be Robust Against Mutations and Noise
NASA Astrophysics Data System (ADS)
Sevim, Volkan; Rikvold, Per Arne
2008-03-01
Robustness to mutations and noise has been shown to evolve through stabilizing selection for optimal phenotypes in model gene regulatory networks. The ability to evolve robust mutants is known to depend on the network architecture. How do the state-space structures of networks with high and low robustness differ? Here we present large-scale computer simulations of a Random Threshold Network model of gene regulatory networks undergoing biological evolution. Using damage propagation analysis and an extensive statistical analysis of the state spaces of these model gene networks, we show that the change in their dynamical properties due to stabilizing selection is very small. Therefore, conventional measures of stability do not provide much information about robustness in model gene regulatory networks. Interestingly, the networks that are most robust to both mutations and noise are highly chaotic. Chaotic networks are able to produce large attractor basins, which can be useful for maintaining a stable gene-expression pattern.
[1] V. Sevim and P. A. Rikvold (2007), e-print arXiv:0708.2244.
[2] V. Sevim and P. A. Rikvold (2007), e-print arXiv:0711.1522.
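The damage propagation analysis mentioned above can be sketched minimally: update a Random Threshold Network synchronously, flip one gene in a copy of the state, and track the Hamming distance between the two trajectories. The network size, in-degree, sign convention for a zero field, and random seeds below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def rtn_step(state, W):
    """One synchronous Random Threshold Network update:
    s_i(t+1) = sign(sum_j W_ij * s_j(t)), with sign(0) mapped to -1 here."""
    field = W @ state
    return np.where(field > 0, 1, -1)

def damage_spread(W, state, steps):
    """Flip one randomly chosen gene and track the Hamming distance
    between the perturbed and unperturbed trajectories."""
    rng = np.random.default_rng(1)
    perturbed = state.copy()
    i = rng.integers(len(state))
    perturbed[i] *= -1
    distances = []
    for _ in range(steps):
        state = rtn_step(state, W)
        perturbed = rtn_step(perturbed, W)
        distances.append(int(np.sum(state != perturbed)))
    return distances

# Illustrative network: 50 genes, 3 random +/-1 inputs per gene
rng = np.random.default_rng(0)
N, K = 50, 3
W = np.zeros((N, N))
for i in range(N):
    inputs = rng.choice(N, size=K, replace=False)
    W[i, inputs] = rng.choice([-1, 1], size=K)
state = rng.choice([-1, 1], size=N)
print(damage_spread(W, state, 10))
```

Whether the damage dies out or spreads through the network is the conventional stability measure that, per the abstract, turns out to say little about mutational robustness.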
Experimental fault characterization of a neural network
NASA Technical Reports Server (NTRS)
Tan, Chang-Huong
1990-01-01
The effects of a variety of faults on a neural network are quantified via simulation. The neural network consists of a single-layered clustering network and a three-layered classification network. The percentage of vectors mistagged by the clustering network, the percentage of vectors misclassified by the classification network, the time taken for the network to stabilize, and the output values are all measured. The results show that both transient and permanent faults have a significant impact on the performance of the measured network. The corresponding mistag and misclassification percentages are typically within 5 to 10 percent of each other. The average mistag percentage and the average misclassification percentage are both about 25 percent. After relearning, the percentage of misclassifications is reduced to 9 percent. In addition, transient faults are found to cause the network to become increasingly unstable as the duration of a transient is increased. The impact of link faults is relatively insignificant in comparison with node faults (1 versus 19 percent misclassified after relearning). There is a linear increase in the mistag and misclassification percentages with decreasing hardware redundancy. In addition, the mistag and misclassification percentages decrease linearly with increasing network size.
A neural network with modular hierarchical learning
NASA Technical Reports Server (NTRS)
Baldi, Pierre F. (Inventor); Toomarian, Nikzad (Inventor)
1994-01-01
This invention provides a new hierarchical approach for supervised neural learning of time-dependent trajectories. The modular hierarchical methodology leads to architectures which are more structured than fully interconnected networks. The networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamic effects. The advantages include the sparsity of units and connections and the modular organization. A further advantage is that the learning is much more circumscribed than in fully interconnected systems. The present invention is embodied by a neural network including a plurality of neural modules each having a pre-established performance capability, wherein each neural module has an output outputting present results of the performance capability and an input for changing the present results of the performance capability. For pattern recognition applications, the performance capability may be an oscillation capability producing a repeating wave pattern as the present results. In the preferred embodiment, each of the plurality of neural modules includes a pre-established capability portion and a performance adjustment portion connected to control the pre-established capability portion.
Neural network tomography: network replication from output surface geometry.
Minnett, Rupert C J; Smith, Andrew T; Lennon, William C; Hecht-Nielsen, Robert
2011-06-01
Multilayer perceptron networks whose outputs consist of affine combinations of hidden units using the tanh activation function are universal function approximators and are used for regression, typically by minimizing the mean squared error (MSE) with backpropagation. We present a neural network weight learning algorithm that directly positions the hidden units within input space by numerically analyzing the curvature of the output surface. Our results show that, under some sampling requirements, this method can reliably recover the parameters of a neural network used to generate a data set. PMID:21377326
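The network form the abstract describes can be written compactly: the output is an affine combination of tanh hidden units, y(x) = b + a . tanh(Wx + c). The sketch below is just this forward pass with arbitrary random parameters; the tomography algorithm itself (recovering W, c, a, b from the output surface's curvature) is not reproduced here.

```python
import numpy as np

def mlp_tanh(x, W, c, a, b):
    """Forward pass of the network class analyzed in the paper:
    y(x) = b + a . tanh(W x + c), an affine combination of tanh
    hidden units. Each row of W (with bias c_k) positions one
    hidden unit's ridge in input space."""
    return b + a @ np.tanh(W @ x + c)

# Illustrative parameters: 3 inputs, 5 hidden units
rng = np.random.default_rng(0)
d_in, n_hidden = 3, 5
W = rng.normal(size=(n_hidden, d_in))
c = rng.normal(size=n_hidden)
a = rng.normal(size=n_hidden)
b = 0.1
x = rng.normal(size=d_in)
print(mlp_tanh(x, W, c, a, b))
```

Because tanh saturates away from each unit's ridge, the output surface's curvature is concentrated where W x + c is near zero, which is what makes the hidden-unit positions recoverable in principle.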
Uniform framework for the recurrence-network analysis of chaotic time series
NASA Astrophysics Data System (ADS)
Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.
2016-01-01
We propose a general method for the construction and analysis of unweighted ɛ-recurrence networks from chaotic time series. The selection of the critical threshold ɛ_c in our scheme is done empirically, and we show that its value is closely linked to the embedding dimension M. In fact, we are able to identify numerically a small critical range Δɛ that is approximately the same for the random and several standard chaotic time series for a fixed M. This provides a uniform framework for the nonsubjective comparison of the statistical measures of recurrence networks constructed from various chaotic attractors. We explicitly show that the degree distribution of the recurrence network constructed by our scheme is characteristic of the structure of the attractor and displays statistical scale invariance with respect to an increase in the number of nodes N. We also present two practical applications of the scheme: detection of the transition between two dynamical regimes in a time-delayed system, and identification of the dimensionality of the underlying system from real-world data with a limited number of points through recurrence network measures. The merits, limitations, and potential applications of the proposed method are also highlighted.
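The basic construction behind an ɛ-recurrence network can be sketched as follows: delay-embed the scalar series in dimension M, then connect every pair of embedded state vectors closer than ɛ. The delay, threshold, norm, and test series (the logistic map in its chaotic regime) below are illustrative assumptions; the paper's scheme additionally rescales the attractor and selects ɛ from a critical range tied to M.

```python
import numpy as np

def recurrence_network(series, M, tau, eps):
    """Build an unweighted epsilon-recurrence network: delay-embed the
    series in dimension M with delay tau, then link every pair of
    state vectors whose max-norm distance is below eps."""
    n = len(series) - (M - 1) * tau
    vectors = np.array([series[i:i + (M - 1) * tau + 1:tau] for i in range(n)])
    diff = np.abs(vectors[:, None, :] - vectors[None, :, :]).max(axis=2)
    A = (diff < eps).astype(int)
    np.fill_diagonal(A, 0)          # no self-loops
    return A

def degree_distribution(A):
    """Histogram of node degrees, the network measure the abstract
    ties to the structure of the attractor."""
    degrees = A.sum(axis=1)
    return np.bincount(degrees)

# Logistic map in its chaotic regime as an illustrative series
x = 0.4
series = []
for _ in range(500):
    x = 4.0 * x * (1.0 - x)
    series.append(x)
A = recurrence_network(np.array(series), M=2, tau=1, eps=0.1)
print(degree_distribution(A)[:10])
```

The claim in the abstract is that, once ɛ is chosen from the critical range for a given M, degree distributions computed this way become comparable across different attractors and approximately invariant as N grows.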