Science.gov

Sample records for neural network simulations

  1. Program Helps Simulate Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.
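
    As a rough illustration of the generalized delta rule that NNETS implements, the minimal NumPy sketch below trains a one-hidden-layer back-propagation network on a toy pattern set; the data, network size, and learning rate are invented for illustration, and this is not the Transputer/OCCAM implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy pattern set (XOR) standing in for a user-defined training file.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        T = np.array([[0], [1], [1], [0]], dtype=float)

        # One hidden layer; the last row of each weight matrix acts as a bias.
        W1 = rng.normal(scale=0.5, size=(3, 4))
        W2 = rng.normal(scale=0.5, size=(5, 1))
        sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
        add_bias = lambda a: np.hstack([a, np.ones((a.shape[0], 1))])

        eta = 0.5
        for epoch in range(20000):
            H = sigmoid(add_bias(X) @ W1)                        # forward pass
            Y = sigmoid(add_bias(H) @ W2)
            delta_out = (T - Y) * Y * (1.0 - Y)                  # output error term
            delta_hid = (delta_out @ W2[:-1].T) * H * (1.0 - H)  # back-propagated term
            W2 += eta * add_bias(H).T @ delta_out                # generalized delta rule
            W1 += eta * add_bias(X).T @ delta_hid

        print(np.round(Y, 2))                                    # approaches the targets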

  2. Simulation of photosynthetic production using neural network

    NASA Astrophysics Data System (ADS)

    Kmet, Tibor; Kmetova, Maria

    2013-10-01

    This paper deals with neural network based optimal control synthesis for solving optimal control problems with control and state constraints and discrete time delay. The optimal control problem is transcribed into a nonlinear programming problem, which is implemented with an adaptive critic neural network. This approach is applicable to a wide class of nonlinear systems. The proposed simulation method is illustrated by the optimal control problem of photosynthetic production described by discrete time delay differential equations. Results show that the adaptive critic based systematic approach holds promise for obtaining the optimal control with control and state constraints.
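
    The transcription step described above can be illustrated in spirit with a toy discrete-time problem; the sketch below hands the resulting nonlinear program to scipy's L-BFGS-B solver rather than to an adaptive critic network, and the dynamics, horizon, cost, and bounds are invented for illustration.

        import numpy as np
        from scipy.optimize import minimize

        N, x0, dt = 20, 1.0, 0.1        # horizon, initial state, step size (arbitrary)

        def rollout(u):
            # Simple nonlinear state recursion standing in for the delayed dynamics.
            x = np.empty(N + 1)
            x[0] = x0
            for k in range(N):
                x[k + 1] = x[k] + dt * (-x[k] ** 2 + u[k])
            return x

        def cost(u):
            x = rollout(u)
            return np.sum(x[1:] ** 2 + 0.1 * u ** 2)   # quadratic running cost

        # Control constraints become simple bounds on the decision variables.
        res = minimize(cost, np.zeros(N), bounds=[(-1.0, 1.0)] * N, method="L-BFGS-B")
        print("optimal cost:", round(res.fun, 4))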

  3. Neural network computer simulation of medical aerosols.

    PubMed

    Richardson, C J; Barlow, D J

    1996-06-01

    Preliminary investigations have been conducted to assess the potential for using artificial neural networks to simulate aerosol behaviour, with a view to employing this type of methodology in the evaluation and design of pulmonary drug-delivery systems. Details are presented of the general purpose software developed for these tasks; it implements a feed-forward back-propagation algorithm with weight decay and connection pruning, the user having complete run-time control of the network architecture and mode of training. A series of exploratory investigations is then reported in which different network structures and training strategies are assessed in terms of their ability to simulate known patterns of fluid flow in simple model systems. The first of these involves simulations of cellular automata-generated data for fluid flow through a partially obstructed two-dimensional pipe. The artificial neural networks are shown to be highly successful in simulating the behaviour of this simple linear system, but with important provisos relating to the information content of the training data and the criteria used to judge when the network is properly trained. A second set of investigations is then reported in which similar networks are used to simulate patterns of fluid flow through aerosol generation devices, using training data furnished through rigorous computational fluid dynamics modelling. These more complex three-dimensional systems are modelled with equal success. It is concluded that carefully tailored, well trained networks could provide valuable tools not just for predicting but also for analysing the spatial dynamics of pharmaceutical aerosols.
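
    A minimal sketch of the two training refinements mentioned above, weight decay and magnitude-based connection pruning, applied to a single linear layer; the data, decay factor, and pruning threshold are placeholders.

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 8))                              # placeholder inputs
        t = X @ rng.normal(size=8) + 0.1 * rng.normal(size=200)    # placeholder targets

        w = rng.normal(scale=0.1, size=8)
        mask = np.ones_like(w)                  # 1 = connection present, 0 = pruned
        eta, decay = 0.01, 1e-3

        for epoch in range(500):
            y = X @ (w * mask)
            grad = -X.T @ (t - y) / len(t)
            w -= eta * (grad + decay * w)       # gradient step plus weight decay
            if epoch % 100 == 99:               # periodically prune small connections
                mask[np.abs(w) < 0.05] = 0.0

        print("surviving connections:", int(mask.sum()))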

  4. Electronic Neural-Network Simulator

    NASA Technical Reports Server (NTRS)

    Moopenn, Alex W.; Thakoor, Anilkumar P.; Lambe, John J.

    1988-01-01

    Experimental circuits faster than simulation programs run on digital computers. Serial shift register routes clock pulses C1 to neurons in sequence. Clock pulses C2 interrogate neurons. Neuron interconnection information stored in simulated synapses. Can be expanded to greater complexity.

  5. Higher-order neural network group models for financial simulation.

    PubMed

    Zhang, M; Zhang, J C; Fulcher, J

    2000-04-01

    Real world financial data is often discontinuous and non-smooth. If we attempt to use neural networks to simulate such functions, then accuracy will be a problem. Neural network group models perform this function much better. Both Polynomial Higher Order Neural network Group (PHONG) and Trigonometric polynomial Higher Order Neural network Group (THONG) models are developed. These HONG models are open box, convergent models capable of approximating any kind of piecewise continuous function, to any degree of accuracy. Moreover they are capable of handling higher frequency, higher order non-linear and discontinuous data. Results obtained using a Higher Order Neural network Group financial simulator are presented, which confirm that HONG group models converge without difficulty, and are considerably more accurate than neural network models (more specifically, around twice as good for prediction, and a factor of four improvement in the case of simulation).
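
    The polynomial higher-order idea can be sketched as a network whose input units are augmented with higher-order (power/product) terms feeding a linear output layer; this is only a schematic stand-in for the PHONG group model, with an invented discontinuous target function and polynomial order.

        import numpy as np

        rng = np.random.default_rng(2)
        x = rng.uniform(-1, 1, size=(300, 1))
        y = np.where(x < 0, np.sin(3 * x), 0.5 + x ** 2).ravel()   # discontinuous data

        def higher_order_features(x, order=3):
            # Higher-order units: powers of the input (cross-products for multivariate data).
            return np.hstack([x ** k for k in range(order + 1)])

        Phi = higher_order_features(x)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)                # linear output layer
        print("training RMSE:", round(float(np.sqrt(np.mean((Phi @ w - y) ** 2))), 4))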

  6. A neural network simulation package in CLIPS

    NASA Technical Reports Server (NTRS)

    Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John

    1990-01-01

    The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). A very important by-product of this research has been the emergence of an integrated technique of using rule based systems in conjunction with neural networks to solve complex problems. The system provides a tool kit for integrated use of the two techniques and is also extendible to accommodate other AI techniques such as semantic networks, connectionist networks, and even Petri nets. This integrated technique can be very useful in solving complex AI problems.

  7. Neural Network Simulation Package from Ohio State University

    SciTech Connect

    Wickham, K.L.

    1990-08-01

    This report describes the Neural Network Simulation Package acquired from Ohio State University. The package known as Neural Shell V2.1 was evaluated and benchmarked at the INEL Supercomputing Center (ISC). The emphasis was on the Back Propagation Net which is currently considered one of the more promising types of neural networks. This report also provides additional documentation that may be helpful to anyone using the package.

  8. Simulating and Synthesizing Substructures Using Neural Network and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Liu, Youhua; Kapania, Rakesh K.; VanLandingham, Hugh F.

    1997-01-01

    The feasibility of simulating and synthesizing substructures by computational neural network models is illustrated by investigating a statically indeterminate beam, using both 1-D and 2-D plane stress modelling. The beam can be decomposed into two cantilevers with free-end loads. By training neural networks to simulate the cantilever responses to different loads, the original beam problem can be solved as a match-up between two subsystems under compatible interface conditions. Genetic algorithms are successfully used to solve the match-up problem. Simulated results are found to be in good agreement with the analytical or FEM solutions.
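
    A schematic of the match-up idea: two substructure surrogates, represented here by simple analytic stand-ins rather than trained networks, are coupled by searching for the interface force that satisfies displacement compatibility, using a very small real-coded genetic algorithm with invented parameters.

        import numpy as np

        rng = np.random.default_rng(3)

        # Stand-ins for the two trained cantilever surrogates: tip deflection vs. load,
        # with a total applied load of 2.0 split between the two interfaces.
        left_tip = lambda f: 0.8 * f + 0.05 * f ** 3
        right_tip = lambda f: 1.2 * (2.0 - f) - 0.02 * (2.0 - f) ** 3

        mismatch = lambda f: (left_tip(f) - right_tip(f)) ** 2     # compatibility error

        pop = rng.uniform(0.0, 2.0, size=40)                       # initial population
        for gen in range(100):
            fit = np.array([mismatch(f) for f in pop])
            parents = pop[np.argsort(fit)[:10]]                    # selection
            children = rng.choice(parents, size=30) + rng.normal(scale=0.05, size=30)
            pop = np.concatenate([parents, children])              # next generation

        best = pop[np.argmin([mismatch(f) for f in pop])]
        print("interface force found by the GA:", round(float(best), 3))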

  9. Artificial neural network simulation of battery performance

    SciTech Connect

    O'Gorman, C.C.; Ingersoll, D.; Jungst, R.G.; Paez, T.L.

    1998-12-31

    Although they appear deceptively simple, batteries embody a complex set of interacting physical and chemical processes. While the discrete engineering characteristics of a battery, such as the physical dimensions of the individual components, are relatively straightforward to define explicitly, their myriad chemical and physical processes, including interactions, are much more difficult to accurately represent. Within this category are the diffusive and solubility characteristics of individual species, reaction kinetics and mechanisms of primary chemical species as well as intermediates, and growth and morphology characteristics of reaction products as influenced by environmental and operational use profiles. For this reason, development of analytical models that can consistently predict the performance of a battery has only been partially successful, even though significant resources have been applied to this problem. As an alternative approach, the authors have begun development of a non-phenomenological model for battery systems based on artificial neural networks. Both recurrent and non-recurrent forms of these networks have been successfully used to develop accurate representations of battery behavior. The connectionist normalized linear spline (CMLS) network has been implemented with a self-organizing layer to model a battery system with the generalized radial basis function net. Concurrently, efforts are under way to use the feedforward back propagation network to map the "state" of a battery system. Because of the complexity of battery systems, accurate representation of the input and output parameters has proven to be very important. This paper describes these initial feasibility studies as well as the current models and makes comparisons between predicted and actual performance.

  10. An artificial neural network based groundwater flow and transport simulator

    SciTech Connect

    Krom, T.D.; Rosbjerg, D.

    1998-07-01

    Artificial neural networks are investigated as a tool for the simulation of contaminant loss and recovery in three-dimensional heterogeneous groundwater flow and contaminant transport modeling. These methods have useful applications in expert system development, knowledge base development, and optimization of groundwater pollution remediation. The numerical model runs used to develop the artificial neural networks can be re-used to develop artificial neural networks that address alternative optimization problems or changed formulations of the constraints and/or objective function under optimization. Artificial neural networks have been analyzed with the goal of estimating objectives which normally require the use of traditional flow and transport codes, such as contaminant recovery, contaminant loss (unrecovered), and remediation failure. The inputs to the artificial neural networks are variable pumping withdrawal rates at fairly unconstrained 3-D locations. A feed-forward backward-error-propagation artificial neural network architecture is used. The significance of the size of the training set, network architecture, and network weight optimization algorithm with respect to the estimation accuracy and objective are shown to be important. Finally, the quality of the weight optimization is studied via cross-validation techniques. This is demonstrated to be a useful method for judging training performance for strongly under-described systems.
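
    The cross-validation step used to judge the quality of the weight optimization can be sketched generically; here an ordinary least-squares surrogate stands in for the trained network, and the data (pumping rates and a recovery objective) are synthetic.

        import numpy as np

        rng = np.random.default_rng(4)
        X = rng.uniform(size=(120, 5))                    # e.g. pumping rates at 5 wells
        y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.5]) + 0.1 * rng.normal(size=120)

        def kfold_rmse(X, y, k=5):
            idx = rng.permutation(len(y))
            folds = np.array_split(idx, k)
            errs = []
            for i in range(k):
                test = folds[i]
                train = np.concatenate([folds[j] for j in range(k) if j != i])
                w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)   # "training"
                errs.append(np.sqrt(np.mean((X[test] @ w - y[test]) ** 2)))
            return float(np.mean(errs))

        print("5-fold cross-validated RMSE:", round(kfold_rmse(X, y), 4))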

  11. Simulation of nonlinear random vibrations using artificial neural networks

    SciTech Connect

    Paez, T.L.; Tucker, S.; O'Gorman, C.

    1997-02-01

    The simulation of mechanical system random vibrations is important in structural dynamics, but it is particularly difficult when the system under consideration is nonlinear. Artificial neural networks provide a useful tool for the modeling of nonlinear systems; however, such modeling may be inefficient or insufficiently accurate when the system under consideration is complex. This paper shows that there are several transformations that can be used to uncouple and simplify the components of motion of a complex nonlinear system, thereby making its modeling and random vibration simulation, via component modeling with artificial neural networks, a much simpler problem. A numerical example is presented.
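
    One familiar example of such an uncoupling transformation is modal decomposition of the linear part of the system; the sketch below solves the generalized eigenproblem for a 2-DOF mass-spring model with invented parameters, after which each modal coordinate could be modeled by its own, much simpler, network.

        import numpy as np
        from scipy.linalg import eigh

        # 2-DOF linear subsystem: M x'' + K x = f(t); values are arbitrary.
        M = np.diag([1.0, 2.0])
        K = np.array([[30.0, -10.0], [-10.0, 20.0]])

        # Generalized eigenproblem K v = w^2 M v; mode shapes uncouple the motion.
        evals, V = eigh(K, M)            # eigenvalues ascending, shapes in columns
        freqs = np.sqrt(evals)

        print("natural frequencies (rad/s):", np.round(freqs, 3))
        # A measured response x(t) maps to modal coordinates q(t) = V.T @ M @ x(t),
        # since eigh returns M-orthonormal mode shapes (V.T @ M @ V = I).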

  12. F77NNS - A FORTRAN-77 NEURAL NETWORK SIMULATOR

    NASA Technical Reports Server (NTRS)

    Mitchell, P. H.

    1994-01-01

    F77NNS (A FORTRAN-77 Neural Network Simulator) simulates the popular back error propagation neural network. F77NNS is an ANSI-77 FORTRAN program designed to take advantage of vectorization when run on machines having this capability, but it will run on any computer with an ANSI-77 FORTRAN Compiler. Artificial neural networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to biological nerve cells. Problems which involve pattern matching or system modeling readily fit the class of problems which F77NNS is designed to solve. The program's formulation trains a neural network using Rumelhart's back-propagation algorithm. Typically the nodes of a network are grouped together into clumps called layers. A network will generally have an input layer through which the various environmental stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to features of the problem being solved. Other layers, which form intermediate stops between the input and output layers, are called hidden layers. The back-propagation training algorithm can require massive computational resources to implement a large network such as a network capable of learning text-to-phoneme pronunciation rules as in the famous Sejnowski experiment. The Sejnowski neural network learns to pronounce 1000 common English words. The standard input data defines the specific inputs that control the type of run to be made, and input files define the NN in terms of the layers and nodes, as well as the input/output (I/O) pairs. The program has a restart capability so that a neural network can be solved in stages suitable to the user's resources and desires. F77NNS allows the user to customize the patterns of connections between layers of a network. The size of the neural network to be solved is limited only by the amount of random access memory (RAM) available to the

  13. Multiscale Quantum Mechanics/Molecular Mechanics Simulations with Neural Networks.

    PubMed

    Shen, Lin; Wu, Jingheng; Yang, Weitao

    2016-10-11

    Molecular dynamics simulation with multiscale quantum mechanics/molecular mechanics (QM/MM) methods is a very powerful tool for understanding the mechanism of chemical and biological processes in solution or enzymes. However, its computational cost can be too high for many biochemical systems because of the large number of ab initio QM calculations. Semiempirical QM/MM simulations have much higher efficiency. Their accuracy can be improved with a correction to reach the ab initio QM/MM level. The computational cost on the ab initio calculation for the correction determines the efficiency. In this paper we developed a neural network method for QM/MM calculation as an extension of the neural-network representation reported by Behler and Parrinello. With this approach, the potential energy of any configuration along the reaction path for a given QM/MM system can be predicted at the ab initio QM/MM level based on the semiempirical QM/MM simulations. We further applied this method to three reactions in water to calculate the free energy changes. The free-energy profile obtained from the semiempirical QM/MM simulation is corrected to the ab initio QM/MM level with the potential energies predicted with the constructed neural network. The results are in excellent accordance with the reference data that are obtained from the ab initio QM/MM molecular dynamics simulation or corrected with direct ab initio QM/MM potential energies. Compared with the correction using direct ab initio QM/MM potential energies, our method shows a speed-up of 1 or 2 orders of magnitude. It demonstrates that the neural network method combined with the semiempirical QM/MM calculation can be an efficient and reliable strategy for chemical reaction simulations.
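
    The correction scheme, learning the difference between ab initio and semiempirical energies and then shifting the semiempirical profile, can be sketched on synthetic one-dimensional data; the kernel ridge regressor below is only a stand-in for the Behler-Parrinello-style network, and all energies are invented.

        import numpy as np

        rng = np.random.default_rng(5)
        s = np.linspace(0.0, 1.0, 40)                 # sampled reaction coordinate
        E_semi = 10 * (s - 0.5) ** 2                  # cheap "semiempirical" energies
        E_ai = E_semi + 1.5 * np.sin(3 * s) + 0.05 * rng.normal(size=s.size)  # "ab initio"

        def kernel(a, b, length=0.1):
            return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * length ** 2))

        # Fit the correction Delta(s) = E_ai - E_semi.
        K = kernel(s, s)
        alpha = np.linalg.solve(K + 1e-6 * np.eye(len(s)), E_ai - E_semi)

        # Correct the semiempirical profile on a finer grid.
        s_new = np.linspace(0.0, 1.0, 200)
        correction = kernel(s_new, s) @ alpha
        E_corrected = 10 * (s_new - 0.5) ** 2 + correction
        print("max correction applied:", round(float(np.max(np.abs(correction))), 3))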

  14. Integrated Circuit For Simulation Of Neural Network

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.; Khanna, Satish K.

    1988-01-01

    Ballast resistors deposited on top of circuit structure. Cascadable, programmable binary connection matrix fabricated in VLSI form as basic building block for assembly of like units into content-addressable electronic memory matrices operating somewhat like networks of neurons. Connections formed during storage of data, and data recalled from memory by prompting matrix with approximate or partly erroneous signals. Redundancy in pattern of connections causes matrix to respond with correct stored data.

  15. Simulation of nonlinear structures with artificial neural networks

    SciTech Connect

    Paez, T.L.

    1996-03-01

    Structural system simulation is important in analysis, design, testing, control, and other areas, but it is particularly difficult when the system under consideration is nonlinear. Artificial neural networks offer a useful tool for the modeling of nonlinear systems; however, such modeling may be inefficient or insufficiently accurate when the system under consideration is complex. This paper shows that there are several transformations that can be used to uncouple and simplify the components of motion of a complex nonlinear system, thereby making its modeling and simulation a much simpler problem. A numerical example is also presented.

  16. Neural Networks

    SciTech Connect

    Smith, Patrick I.

    2003-09-23

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. Ideally, there are many techniques used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results or output. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. It is also important to note that the word artificial in that definition refers to computer programs that use calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing

  17. Leader neurons in leaky integrate and fire neural network simulations.

    PubMed

    Zbinden, Cyrille

    2011-10-01

    In this paper, we highlight the topological properties of leader neurons whose existence is an experimental fact. Several experimental studies show the existence of leader neurons in population bursts of activity in 2D living neural networks (Eytan and Marom, J Neurosci 26(33):8465-8476, 2006; Eckmann et al., New J Phys 10(015011), 2008). A leader neuron is defined as a neuron which fires at the beginning of a burst (respectively network spike) more often than we expect by chance considering its mean firing rate. This means that leader neurons have some burst triggering power beyond a chance-level statistical effect. In this study, we characterize these leader neuron properties. This naturally leads us to simulate neural 2D networks. To build our simulations, we choose the leaky integrate and fire (lIF) neuron model (Gerstner and Kistler 2002; Cessac, J Math Biol 56(3):311-345, 2008), which allows fast simulations (Izhikevich, IEEE Trans Neural Netw 15(5):1063-1070, 2004; Gerstner and Naud, Science 326:379-380, 2009). The dynamics of our lIF model exhibits stable leader neurons in the burst population that we simulate. These leader neurons are excitatory neurons and have a low membrane potential firing threshold. Except for these first two properties, the conditions required for a neuron to be a leader neuron are difficult to identify and seem to depend on several parameters involved in the simulations themselves. However, a detailed linear analysis shows a trend of the properties required for a neuron to be a leader neuron. Our main finding is: a leader neuron sends signals to many excitatory neurons and to few inhibitory neurons, and receives signals from only a few other excitatory neurons. Our linear analysis exhibits five essential properties of leader neurons each with different relative importance. This means that considering a given neural network with a fixed mean number of connections per neuron, our analysis gives us a way of
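
    A bare-bones leaky integrate-and-fire network update of the kind used in such simulations (network size, time constants, drive, and coupling are placeholders; no leader-neuron analysis is attempted here):

        import numpy as np

        rng = np.random.default_rng(6)
        N, steps, dt = 100, 2000, 0.001          # neurons, time steps, 1 ms step
        tau, v_th, v_reset = 0.02, 1.0, 0.0      # membrane time constant, threshold, reset

        W = 0.08 * (rng.random((N, N)) < 0.1)    # sparse excitatory coupling
        np.fill_diagonal(W, 0.0)

        v = rng.uniform(0.0, v_th, size=N)
        spike_counts = np.zeros(N, dtype=int)

        for _ in range(steps):
            I = 1.2 + 0.5 * rng.normal(size=N)    # noisy external drive
            fired = v >= v_th
            spike_counts += fired
            v[fired] = v_reset                    # reset neurons that reached threshold
            v += dt / tau * (-v + I) + W @ fired  # leak, drive, and recurrent input

        print("mean firing rate (Hz):", round(float(spike_counts.mean() / (steps * dt)), 1))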

  18. Correlated EEG Signals Simulation Based on Artificial Neural Networks.

    PubMed

    Tomasevic, Nikola M; Neskovic, Aleksandar M; Neskovic, Natasa J

    2016-09-30

    In recent years, simulation of the human electroencephalogram (EEG) data found its important role in medical domain and neuropsychology. In this paper, a novel approach to simulation of two cross-correlated EEG signals is proposed. The proposed method is based on the principles of artificial neural networks (ANN). Contrary to the existing EEG data simulators, the ANN-based approach was leveraged solely on the experimentally acquired EEG data. More precisely, measured EEG data were utilized to optimize the simulator which consisted of two ANN models (each model responsible for generation of one EEG sequence). In order to acquire the EEG recordings, the measurement campaign was carried out on a healthy awake adult having no cognitive, physical or mental load. For the evaluation of the proposed approach, comprehensive quantitative and qualitative statistical analysis was performed considering probability distribution, correlation properties and spectral characteristics of generated EEG processes. The obtained results clearly indicated the satisfactory agreement with the measurement data.

  19. Efficiently passing messages in distributed spiking neural network simulation.

    PubMed

    Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan

    2013-01-01

    Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased so has the size of the computing systems required to simulate them. In addition, the information exchange of these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with Infiniband hardware is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.
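
    A minimal illustration of all-to-all spike exchange with MPI, using mpi4py's pickle-based allgather; this is only one of the mechanisms such simulators compare, the "network" here is random spiking rather than a real model, and the script assumes it is launched under mpirun with several ranks.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        rng = np.random.default_rng(rank)
        local_neurons = np.arange(rank * 100, (rank + 1) * 100)   # neurons owned here

        for step in range(10):
            # Pretend a random subset of local neurons spiked this time step.
            spikes = local_neurons[rng.random(local_neurons.size) < 0.05]
            # Exchange spike lists: every rank receives every other rank's spikes.
            all_spikes = np.concatenate(comm.allgather(spikes))
            if rank == 0:
                print(f"step {step}: {all_spikes.size} spikes network-wide")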

  20. Neural networks for perceptual processing: from simulation tools to theories.

    PubMed

    Gurney, Kevin

    2007-03-29

    Neural networks are modelling tools that are, in principle, able to capture the input-output behaviour of arbitrary systems that may include the dynamics of animal populations or brain circuits. While a neural network model is useful if it captures phenomenologically the behaviour of the target system in this way, its utility is amplified if key mechanisms of the model can be discovered, and identified with those of the underlying system. In this review, we first describe, at a fairly high level with minimal mathematics, some of the tools used in constructing neural network models. We then go on to discuss the implications of network models for our understanding of the system they are supposed to describe, paying special attention to those models that deal with neural circuits and brain systems. We propose that neural nets are useful for brain modelling if they are viewed in a wider computational framework originally devised by Marr. Here, neural networks are viewed as an intermediate mechanistic abstraction between 'algorithm' and 'implementation', which can provide insights into biological neural representations and their putative supporting architectures.

  1. Neural Networks

    DTIC Science & Technology

    1990-01-01

    Subject terms: Neural Networks; Optical Architectures; Nonlinear Optics; Adaptation. Abstract excerpt: Neural networks are a type of distributed processing system [1

  2. Optimal Control Problem of Feeding Adaptations of Daphnia and Neural Network Simulation

    NASA Astrophysics Data System (ADS)

    Kmet', Tibor; Kmet'ova, Maria

    2010-09-01

    A neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints and open final time. The optimal control problem is transcribed into a nonlinear programming problem, which is implemented with an adaptive critic neural network [9] and a recurrent neural network for solving nonlinear projection equations [10]. The proposed simulation method is illustrated by the optimal control problem of feeding adaptation of filter feeders of Daphnia. Results show that the adaptive critic based systematic approach and neural network solving of nonlinear equations hold promise for obtaining the optimal control with control and state constraints and open final time.

  3. Use of simulated neural networks for aerial image classification

    NASA Technical Reports Server (NTRS)

    Medina, Frances I.; Vasquez, Ramon

    1991-01-01

    The utility of a one-layer neural network in aerial image classification is examined. The network was trained with the delta rule. This method was shown to be useful as a classifier for aerial images with good resolution. It is fast and easy to implement; because it is distribution-free, nothing about the statistical distribution of the data is needed; and it is very efficient as a boundary detector.

  4. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam, associated with the external tank of the U.S. space shuttle, has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.

  5. The role of simulation in the design of a neural network chip

    NASA Technical Reports Server (NTRS)

    Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.

    1993-01-01

    An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.
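
    The Monte Carlo step, assessing how random fabrication variations in the weight-setting transistors perturb the network output, can be sketched as follows; the "trained" weights, test input, and 5% variation level are invented.

        import numpy as np

        rng = np.random.default_rng(7)
        sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

        # Nominal (already trained) weights and a fixed test input.
        W = np.array([[0.8, -1.2, 0.4], [0.3, 0.9, -0.7]])
        x = np.array([1.0, 0.5])

        nominal = sigmoid(x @ W)
        samples = []
        for _ in range(5000):
            # 5% relative spread standing in for width-to-length ratio process variation.
            W_pert = W * (1.0 + 0.05 * rng.normal(size=W.shape))
            samples.append(sigmoid(x @ W_pert))

        samples = np.array(samples)
        print("nominal output:   ", np.round(nominal, 3))
        print("output std. dev.: ", np.round(samples.std(axis=0), 4))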

  6. Battery Performance Modelling and Simulation: a Neural Network Based Approach

    NASA Astrophysics Data System (ADS)

    Ottavianelli, Giuseppe; Donati, Alessandro

    2002-01-01

    This project has developed against the background of ongoing research within the Control Technology Unit (TOS-OSC) of the Special Projects Division at the European Space Operations Centre (ESOC) of the European Space Agency. The purpose of this research is to develop and validate an Artificial Neural Network tool (ANN) able to model, simulate and predict the Cluster II battery system's performance degradation. (The Cluster II mission consists of four spacecraft flying in tetrahedral formation, aimed at observing and studying the interaction between the Sun and the Earth by passing in and out of our planet's magnetic field.) This prototype tool, named BAPER and developed with a commercial neural network toolbox, could be used to support short and medium term mission planning in order to improve and maximise the batteries' lifetime, determining the future best charge/discharge cycles for the batteries given their present states, in view of a Cluster II mission extension. This study focuses on the five Silver-Cadmium batteries onboard Tango, the fourth Cluster II satellite, but time constraints have so far allowed an assessment of only the first battery. In their most basic form, ANNs are hyper-dimensional curve fits for non-linear data. With their remarkable ability to derive meaning from complicated or imprecise history data, ANNs can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. ANNs learn by example, and this is why they can be described as inductive, or data-based, models for the simulation of input/target mappings. A trained ANN can be thought of as an "expert" in the category of information it has been given to analyse, and this expert can then be used, as in this project, to provide projections given new situations of interest and answer "what if" questions. The most appropriate algorithm, in terms of training speed and memory storage requirements, is clearly the Levenberg
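
    The abstract breaks off while naming the training algorithm, presumably Levenberg-Marquardt; as a generic aside, an LM fit of a small nonlinear model via scipy's curve_fit (which uses LM for unconstrained problems) looks like this on synthetic discharge-like data, with all parameters invented.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(8)

        # Synthetic discharge-voltage curve: exponential decay plus offset.
        t = np.linspace(0, 10, 60)
        v_meas = 1.2 * np.exp(-0.3 * t) + 3.4 + 0.02 * rng.normal(size=t.size)

        model = lambda t, a, k, c: a * np.exp(-k * t) + c

        # curve_fit applies Levenberg-Marquardt when no bounds are given.
        params, cov = curve_fit(model, t, v_meas, p0=(1.0, 0.1, 3.0))
        print("fitted (a, k, c):", np.round(params, 3))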

  7. Designing laboratory wind simulations using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Križan, Josip; Gašparac, Goran; Kozmar, Hrvoje; Antonić, Oleg; Grisogono, Branko

    2015-05-01

    While experiments in boundary layer wind tunnels remain a major research tool in wind engineering and environmental aerodynamics, designing the modeling hardware required for a proper atmospheric boundary layer (ABL) simulation can be costly and time consuming. Hence, possibilities are sought to speed up this process and make it more time-efficient. In this study, two artificial neural networks (ANNs) are developed to determine an optimal design of the Counihan hardware, i.e., castellated barrier wall, vortex generators, and surface roughness, in order to simulate the ABL flow developing above urban, suburban, and rural terrains, as previous ANN models were created for one terrain type only. A standard procedure is used in developing those two ANNs in order to further enhance best-practice possibilities rather than to improve existing ANN designing methodology. In total, experimental results obtained using 23 different hardware setups are used when creating the ANNs. In those tests, basic barrier height, barrier castellation height, spacing density, and height of surface roughness elements are the parameters that were varied to create satisfactory ABL simulations. The first ANN was used for the estimation of mean wind velocity, turbulent Reynolds stress, turbulence intensity, and length scales, while the second one was used for the estimation of the power spectral density of velocity fluctuations. This extensive set of studied flow and turbulence parameters is unmatched in comparison to the previous relevant studies, as it includes turbulence intensity and power spectral density of velocity fluctuations in all three directions, as well as the Reynolds stress profiles and turbulence length scales. Modeling results agree well with experiments for all terrain types, particularly in the lower ABL within the height range of most engineering structures, while exhibiting sensitivity to abrupt changes and data scattering in profiles of wind-tunnel results. The

  8. Simulating dynamic plastic continuous neural networks by finite elements.

    PubMed

    Joghataie, Abdolreza; Torghabehi, Omid Oliyan

    2014-08-01

    We introduce dynamic plastic continuous neural network (DPCNN), which is comprised of neurons distributed in a nonlinear plastic medium where wire-like connections of neural networks are replaced with the continuous medium. We use finite element method to model the dynamic phenomenon of information processing within the DPCNNs. During the training, instead of weights, the properties of the continuous material at its different locations and some properties of neurons are modified. Input and output can be vectors and/or continuous functions over lines and/or areas. Delay and feedback from neurons to themselves and from outputs occur in the DPCNNs. We model a simple form of the DPCNN where the medium is a rectangular plate of bilinear material, and the neurons continuously fire a signal, which is a function of the horizontal displacement.

  9. A case for spiking neural network simulation based on configurable multiple-FPGA systems.

    PubMed

    Yang, Shufan; Wu, Qiang; Li, Renfa

    2011-09-01

    Recent neuropsychological research has begun to reveal that neurons encode information in the timing of spikes. Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems. Simulation of spiking neural networks in software is unable to rapidly generate output spikes for large-scale neural networks. An alternative approach, hardware implementation of such a system, provides the possibility to generate independent spikes precisely and to output spike waves simultaneously in real time, provided the spiking neural network can take full advantage of the hardware's inherent parallelism. We introduce a configurable FPGA-oriented hardware platform for spiking neural network simulation in this work. We aim to use this platform to combine the speed of dedicated hardware with the programmability of software, so that it might allow neuroscientists to put together sophisticated computational experiments using their own models. A feed-forward hierarchy network is developed as a case study to describe the operation of biological neural systems (such as orientation selectivity of visual cortex) and computational models of such systems. This model demonstrates how a feed-forward neural network constructs the circuitry required for orientation selectivity and provides a platform for reaching a deeper understanding of the primate visual system. In the future, larger scale models based on this framework can be used to replicate the actual architecture in visual cortex, leading to more detailed predictions and insights into visual perception phenomena.

  10. Unified-theory-of-reinforcement neural networks do not simulate the blocking effect.

    PubMed

    Calvin, Nicholas T; J McDowell, J

    2015-11-01

    For the last 20 years the unified theory of reinforcement (Donahoe et al., 1993) has been used to develop computer simulations to evaluate its plausibility as an account for behavior. The unified theory of reinforcement states that operant and respondent learning occurs via the same neural mechanisms. As part of a larger project to evaluate the operant behavior predicted by the theory, this project was the first replication of neural network models based on the unified theory of reinforcement. In the process of replicating these neural network models it became apparent that a previously published finding, namely, that the networks simulate the blocking phenomenon (Donahoe et al., 1993), was a misinterpretation of the data. We show that the apparent blocking produced by these networks is an artifact of the inability of these networks to generate the same conditioned response to multiple stimuli. The piecemeal approach to evaluate the unified theory of reinforcement via simulation is critiqued and alternatives are discussed.

  11. Dynamical system modeling via signal reduction and neural network simulation

    SciTech Connect

    Paez, T.L.; Hunter, N.F.

    1997-11-01

    Many dynamical systems tested in the field and the laboratory display significant nonlinear behavior. Accurate characterization of such systems requires modeling in a nonlinear framework. One construct forming a basis for nonlinear modeling is that of the artificial neural network (ANN). However, when system behavior is complex, the amount of data required to perform training can become unreasonable. The authors reduce the complexity of information present in system response measurements using decomposition via canonical variate analysis. They describe a method for decomposing system responses, then modeling the components with ANNs. A numerical example is presented, along with conclusions and recommendations.
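
    The reduce-then-model idea can be sketched with a plain principal-component (SVD) reduction standing in for canonical variate analysis; the multi-channel response data and dimensions below are synthetic.

        import numpy as np

        rng = np.random.default_rng(9)

        # Synthetic response: 3 latent components observed on 20 measurement channels.
        t = np.linspace(0, 10, 500)
        latent = np.stack([np.sin(t), np.sin(2.3 * t), np.sign(np.sin(0.7 * t))], axis=1)
        Y = latent @ rng.normal(size=(3, 20)) + 0.05 * rng.normal(size=(500, 20))

        # Reduce: keep the leading components of the measured response.
        U, S, Vt = np.linalg.svd(Y - Y.mean(axis=0), full_matrices=False)
        k = 3
        scores = U[:, :k] * S[:k]        # reduced signals to hand to per-component ANNs

        explained = (S[:k] ** 2).sum() / (S ** 2).sum()
        print(f"{k} components explain {explained:.1%} of the response variance")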

  12. Artificial Neural Network Metamodels of Stochastic Computer Simulations

    DTIC Science & Technology

    1994-08-10

    Only reference fragments of this record are available: Neural Networks, Vol. 5 (1992), pp. 207-220; Haykin, op. cit., pp. 190-191; Gori, M. and Tesi, A., "On the Problem of Local Minima in Backpropagation," IEEE; "Simulation Experiments: Taguchi Methods and Response Surface Metamodels," by J. Ramberg, S. Sanchez, P. Sanchez, and L. Hollick (Piscataway, New Jersey).

  13. Development of a Neural Network Simulator for Studying the Constitutive Behavior of Structural Composite Materials

    DOE PAGES

    Na, Hyuntae; Lee, Seung-Yub; Üstündag, Ersan; ...

    2013-01-01

    This paper introduces a recent development and application of a noncommercial artificial neural network (ANN) simulator with graphical user interface (GUI) to assist in rapid data modeling and analysis in the engineering diffraction field. The real-time network training/simulation monitoring tool has been customized for the study of constitutive behavior of engineering materials, and it has improved data mining and forecasting capabilities of neural networks. This software has been used to train and simulate the finite element modeling (FEM) data for a fiber composite system, both forward and inverse. The forward neural network simulation precisely reduplicates FEM results several orders of magnitude faster than the slow original FEM. The inverse simulation is more challenging; yet, material parameters can be meaningfully determined with the aid of parameter sensitivity information. The simulator GUI also reveals that output node size for the materials parameters and the input normalization method for strain data are critical training conditions in the inverse network. The successful use of ANN modeling and the simulator GUI has been validated through engineering neutron diffraction experimental data by determining constitutive laws of the real fiber composite materials via a mathematically rigorous and physically meaningful parameter search process, once the networks are successfully trained from the FEM database.

  14. Neural Networks

    NASA Astrophysics Data System (ADS)

    Schwindling, Jerome

    2010-04-01

    This course presents an overview of the concepts of neural networks and their application in the framework of high energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neuro-biology, introducing the multi-layer perceptron, learning, and their use as data classifiers. The concept is then presented in a second part using in more detail the mathematical approach, focussing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  15. Neural Network Function Classifier

    DTIC Science & Technology

    2003-02-07

    neural network sets. Each of the neural networks in a particular set is trained to recognize a particular data set type. The best function representation of the data set is determined from the neural network output. The system comprises sets of trained neural networks having neural networks trained to identify different types of data. The number of neural networks within each neural network set will depend on the number of function types that are represented. The system further comprises

  16. Neural Network Communications Signal Processing

    DTIC Science & Technology

    1994-08-01

    This final technical report describes the research and development results of the Neural Network Communications Signal Processing (NNCSP) Program...The objectives of the NNCSP program are to: (1) develop and implement a neural network and communications signal processing simulation system for the...purpose of exploring the applicability of neural network technology to communications signal processing; (2) demonstrate several configurations of the

  17. Neural network simulation of the atmospheric point spread function for the adjacency effect research

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoshan; Wang, Haidong; Li, Ligang; Yang, Zhen; Meng, Xin

    2016-10-01

    Adjacency effect could be regarded as the convolution of the atmospheric point spread function (PSF) and the surface leaving radiance. Monte Carlo is a common method to simulate the atmospheric PSF, but it cannot provide an analytic expression, and meaningful results can only be acquired by statistical analysis of millions of data points. A backward Monte Carlo algorithm was employed to simulate photon emission and propagation in the atmosphere under different conditions. The PSF was determined by recording the photon-receiving numbers in fixed bins at different positions. A multilayer feed-forward neural network with a single hidden layer was designed to learn the relationship between the PSFs and the input condition parameters. The neural network used the back-propagation learning rule for training. Its input parameters involved atmosphere condition, spectrum range, and observing geometry. The outputs of the network were the photon-receiving numbers in the corresponding bins. Because the number of output units was too large for a single neural network, the large network was divided into a collection of smaller ones. These small networks could be run simultaneously on many workstations and/or PCs to speed up the training. It is important to note that the PSFs simulated by the Monte Carlo technique at non-nadir viewing angles are more complicated than those in nadir conditions, which brings difficulties to the design of the neural network. The results obtained show that the neural network approach could be very useful for computing the atmospheric PSF based on the simulated data generated by the Monte Carlo method.

  18. Visual NNet: An Educational ANN's Simulation Environment Reusing Matlab Neural Networks Toolbox

    ERIC Educational Resources Information Center

    Garcia-Roselló, Emilio; González-Dacosta, Jacinto; Lado, Maria J.; Méndez, Arturo J.; Garcia Pérez-Schofield, Baltasar; Ferrer, Fátima

    2011-01-01

    Artificial Neural Networks (ANN's) are nowadays a common subject in different curricula of graduate and postgraduate studies. Due to the complex algorithms involved and the dynamic nature of ANN's, simulation software has been commonly used to teach this subject. This software has usually been developed specifically for learning purposes, because…

  19. Application of a neural network to simulate analysis in an optimization process

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Lamarsh, William J., II

    1992-01-01

    A new experimental software package called NETS/PROSSS aimed at reducing the computing time required to solve a complex design problem is described. The software combines a neural network for simulating the analysis program with an optimization program. The neural network is applied to approximate results of a finite element analysis program to quickly obtain a near-optimal solution. Results of the NETS/PROSSS optimization process can also be used as an initial design in a normal optimization process and make it possible to converge to an optimum solution with significantly fewer iterations.

  20. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware

    PubMed Central

    Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.

    2016-01-01

    SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061

  1. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware.

    PubMed

    Knight, James C; Tully, Philip J; Kaplan, Bernhard A; Lansner, Anders; Furber, Steve B

    2016-01-01

    SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.

  2. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    PubMed

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.

  3. Limits to high-speed simulations of spiking neural networks using general-purpose computers

    PubMed Central

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite. PMID:25309418

  4. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors.

    PubMed

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.

  5. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    PubMed Central

    Cheung, Kit; Schultz, Simon R.; Luk, Wayne

    2016-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542

  6. Computer Implementation and Simulation of Some Neural Networks Used in Pattern Recognition and Classification

    DTIC Science & Technology

    1989-03-01

    Excerpt: I. INTRODUCTION. A. WHAT IS A NEURAL NETWORK: A neural network is a highly parallel network with many interconnections between analog computational ... optimization problems, using nonlinear analog computation. Nonlinear Mapping: Many neural networks can map a vector of analog inputs into an output vector ... Naval Postgraduate School, Monterey, California. Thesis: Computer Implementation and Simulation of Some Neural Networks Used in Pattern

  7. A Drone Remote Sensing for Virtual Reality Simulation System for Forest Fires: Semantic Neural Network Approach

    NASA Astrophysics Data System (ADS)

    Narasimha Rao, Gudikandhula; Jagadeeswara Rao, Peddada; Duvvuru, Rajesh

    2016-09-01

    Wild fires have a significant impact on the atmosphere and on lives. Predicting the exact burned area in a forest can help fire management teams, using drones as robotic platforms. Drones are flexible, inexpensive, elevated-motion remote sensing platforms that are important for filling substantial data gaps and for supplementing the capabilities of manned aircraft and satellite remote sensing systems. In addition, powerful computational tools are essential for predicting the burned area during the course of a forest fire. The purpose of this study is to build a smart system based on semantic neural networking for the forecast of burned areas. A virtual reality simulator is used to support the training of fire fighters and other users in protecting surrounding wildlife, using a naive method, the Semantic Neural Network System (SNNS). Semantics are valuable, first, for obtaining an enhanced representation of the burned-area prediction and, second, for better adapting the simulation scenario to the users. In particular, results obtained with geometric semantic neural networking are considerably superior to those of other methods. This study suggests that deeper investigation of neural networking in the field of forest fire prediction could be productive.

  8. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method

    PubMed Central

    Bernal, Javier; Torres-Jimenez, Jose

    2015-01-01

    SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller’s scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller’s algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller’s algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller’s algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data. PMID:26958442
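
    As a rough illustration of the SAGRAD training strategy (not its Fortran 77 implementation), the sketch below alternates a simulated-annealing perturbation of the weights with a conjugate-gradient refinement, here delegated to scipy's plain conjugate-gradient routine rather than Møller's scaled variant. The tiny 2-4-1 network, placeholder data, loss and annealing schedule are all assumptions made for the example.

        # Hedged sketch: simulated annealing for (re)initialization + conjugate-gradient refinement.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 2))                      # placeholder inputs (not SAGRAD's data)
        y = (X[:, 0] * X[:, 1] > 0).astype(float)         # placeholder binary labels

        def unpack(w):                                    # 2-4-1 network with a tanh hidden layer
            W1, b1 = w[:8].reshape(2, 4), w[8:12]
            W2, b2 = w[12:16], w[16]
            return W1, b1, W2, b2

        def loss(w):                                      # sum of squared residuals
            W1, b1, W2, b2 = unpack(w)
            h = np.tanh(X @ W1 + b1)
            out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
            return np.sum((out - y) ** 2)

        w, T = rng.normal(scale=0.5, size=17), 1.0
        best_w, best_loss = w.copy(), loss(w)
        for outer in range(10):
            # Simulated-annealing move: accept worse weights with Boltzmann probability.
            cand = best_w + rng.normal(scale=T, size=w.size)
            if loss(cand) < best_loss or rng.random() < np.exp((best_loss - loss(cand)) / T):
                w = cand
            # Conjugate-gradient refinement starting from the (re)initialized weights.
            res = minimize(loss, w, method='CG')
            if res.fun < best_loss:
                best_w, best_loss = res.x, res.fun
            T *= 0.7                                      # cool the annealing temperature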

  9. Inverse simulation system for manual-controlled rendezvous and docking based on artificial neural network

    NASA Astrophysics Data System (ADS)

    Zhou, Wanmeng; Wang, Hua; Tang, Guojin; Guo, Shuai

    2016-09-01

    The time-consuming experimental method for handling qualities assessment cannot meet the increasingly fast design requirements of manned space flight. As a tool for aircraft handling qualities research, the model-predictive-control structured inverse simulation (MPC-IS) has potential applications in the aerospace field to guide astronauts' operations and evaluate handling qualities more effectively. Therefore, this paper establishes MPC-IS for manual-controlled rendezvous and docking (RVD) and proposes a novel artificial neural network inverse simulation system (ANN-IS) to further decrease the computational cost. The novel system was obtained by replacing the inverse model of MPC-IS with an artificial neural network. The optimal neural network was trained by the genetic Levenberg-Marquardt algorithm, and finally determined by the Levenberg-Marquardt algorithm. In order to validate MPC-IS and ANN-IS, manual-controlled RVD experiments were carried out on the simulator. The comparisons between simulation results and experimental data demonstrated the validity of the two systems and the high computational efficiency of ANN-IS.

  10. Finite-state neural networks. A step toward the simulation of very large systems

    SciTech Connect

    Kohring, G.A.

    1991-02-01

    Neural networks composed of neurons with Q_N states and synapses with Q_J states are studied analytically and numerically. Analytically it is shown that these finite-state networks are much more efficient at information storage than networks with continuous synapses. In order to take the utmost advantage of networks with finite-state elements, a multineuron and multisynapse coding scheme is introduced which allows the simulation of networks having 1.0 × 10^9 couplings at a speed of 7.1 × 10^9 coupling evaluations per second on a single processor of the Cray-YMP. A local learning algorithm is also introduced which allows for the efficient training of large networks with finite-state elements.

  11. A configurable simulation environment for the efficient simulation of large-scale spiking neural networks on graphics processors.

    PubMed

    Nageswaran, Jayram Moorkanikara; Dutt, Nikil; Krichmar, Jeffrey L; Nicolau, Alex; Veidenbaum, Alexander V

    2009-01-01

    Neural network simulators that take into account the spiking behavior of neurons are useful for studying brain mechanisms and for various neural engineering applications. Spiking Neural Networks (SNNs) have traditionally been simulated on large-scale clusters, super-computers, or dedicated hardware architectures. Alternatively, Compute Unified Device Architecture (CUDA) Graphics Processing Units (GPUs) can provide a low-cost, programmable, and high-performance computing platform for simulation of SNNs. In this paper we demonstrate an efficient, biologically realistic, large-scale SNN simulator that runs on a single GPU. The SNN model includes Izhikevich spiking neurons, detailed models of synaptic plasticity and variable axonal delay. We allow user-defined configuration of the GPU-SNN model by means of a high-level programming interface written in C++ but similar to the PyNN programming interface specification. PyNN is a common programming interface developed by the neuronal simulation community to allow a single script to run on various simulators. The GPU implementation (on an NVIDIA GTX-280 with 1 GB of memory) is up to 26 times faster than a CPU version for the simulation of 100K neurons with 50 million synaptic connections, firing at an average rate of 7 Hz. For simulation of 10 million synaptic connections and 100K neurons, the GPU SNN model is only 1.5 times slower than real-time. Further, we present a collection of new techniques related to parallelism extraction, mapping of irregular communication, and network representation for effective simulation of SNNs on GPUs. The fidelity of the simulation results was validated against CPU simulations using firing rate, synaptic weight distribution, and inter-spike interval analysis. Our simulator is publicly available to the modeling community so that researchers will have easy access to large-scale SNN simulations.
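
    The Izhikevich neuron model at the core of this simulator is compact enough to write down. The vectorized update below follows the standard published equations (0.04v^2 + 5v + 140 - u + I, with reset at 30 mV) and is only a CPU illustration of the per-step arithmetic, not the paper's CUDA kernels; the parameter values are the usual regular-spiking defaults and the input current is a placeholder.

        # Izhikevich neuron update, vectorized over a population (CPU illustration only).
        import numpy as np

        N, dt = 1000, 0.5                      # neurons, time step in ms
        a, b, c, d = 0.02, 0.2, -65.0, 8.0     # regular-spiking parameters (Izhikevich, 2003)
        v = np.full(N, -65.0)                  # membrane potential (mV)
        u = b * v                              # recovery variable

        def izhikevich_step(v, u, I):
            """One Euler step of the Izhikevich model; returns updated state and spike mask."""
            v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
            u = u + dt * a * (b * v - u)
            fired = v >= 30.0                  # spike threshold (mV)
            v = np.where(fired, c, v)          # reset membrane potential of spiking neurons
            u = np.where(fired, u + d, u)      # bump recovery variable of spiking neurons
            return v, u, fired

        for step in range(2000):               # 1 s of simulated time
            I = 5.0 * np.random.randn(N)       # placeholder noisy input current
            v, u, fired = izhikevich_step(v, u, I)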

  12. Neural network simulation on a reduced-mesh-of-trees organization

    NASA Astrophysics Data System (ADS)

    Misra, Manavendra; Prasanna Kumar, V. K.

    1990-07-01

    The theory of Artificial Neural Networks (ANNs) shows that ANNs can perform useful image recognition functions. Simulations on uniprocessor sequential machines, however, destroy the parallelism inherent in ANN models and this results in a significant loss of speed. Simulations on parallel machines are therefore essential to fully exploit the advantages of ANNs. We show how to simulate ANNs on an SIMD architecture, the Reduced Mesh of Trees (RMOT). The architecture has p PEs and n^2 memory arranged in a p x p array of modules (p is a constant less than or equal to n). This massive memory is used to store connection weights. A fully connected, single-layer neural network with n neurons can be mapped easily onto the architecture. An update in this case requires O(n^2/p) time steps. A sparse network can also be simulated efficiently on the architecture. The proposed architecture can also be used for the efficient simulation of multilayer networks with a Back Propagation learning scheme. The architecture can easily be implemented within the framework of existing hardware technology.

  13. Neural Network Studies

    DTIC Science & Technology

    1993-07-01

    basic useful theorems and general rules which apply to neural networks (in 'Overview of Neural Network Theory'), studies of training time as the...The Neural Network, Bayes-Gaussian, and k-Nearest Neighbor Classifiers'), an analysis of fuzzy logic and its relationship to neural networks (in 'Fuzzy

  14. Electronic Neural Networks

    NASA Technical Reports Server (NTRS)

    Thakoor, Anil

    1990-01-01

    Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.

  15. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node "brain" with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  16. Simulation of Neurocomputing Based on Photophobic Reactions of Euglena: Toward Microbe-Based Neural Network Computing

    NASA Astrophysics Data System (ADS)

    Ozasa, Kazunari; Aono, Masashi; Maeda, Mizuo; Hara, Masahiko

    In order to develop an adaptive computing system, we investigate microscopic optical feedback to a group of microbes (Euglena gracilis in this study) with a neural network algorithm, expecting that the unique characteristics of microbes, especially their strategies to survive and adapt under unfavorable environmental stimuli, will explicitly determine the temporal evolution of the microbe-based feedback system. The photophobic reactions of Euglena are extracted from experiments and built into a Monte-Carlo simulation of microbe-based neurocomputing. The simulation revealed a good performance of Euglena-based neurocomputing. Dynamic transition among the solutions is discussed from the viewpoint of feedback instability.

  17. A neural-network-based method of model reduction for the dynamic simulation of MEMS

    NASA Astrophysics Data System (ADS)

    Liang, Y. C.; Lin, W. Z.; Lee, H. P.; Lim, S. P.; Lee, K. H.; Feng, D. P.

    2001-05-01

    This paper proposes a neural-network-based method for model reduction that combines the generalized Hebbian algorithm (GHA) with the Galerkin procedure to perform the dynamic simulation and analysis of nonlinear microelectromechanical systems (MEMS). An unsupervised neural network is adopted to find the principal eigenvectors of a correlation matrix of snapshots. Extensive computational results show that principal component analysis using the GHA neural network can extract an empirical basis from numerical or experimental data, which can be used to convert the original system into a lumped low-order macromodel. The macromodel can be employed to carry out the dynamic simulation of the original system, resulting in a dramatic reduction of computation time while not losing flexibility and accuracy. Compared with other existing model reduction methods for the dynamic simulation of MEMS, the present method does not need to compute the input correlation matrix in advance. It needs to find only the few required basis functions, which can be learned directly from the input data, which means that the method possesses potential advantages when the measured data sets are large. The method is evaluated by simulating the pull-in dynamics of a doubly-clamped microbeam subjected to different input voltage spectra of electrostatic actuation. The efficiency and the flexibility of the proposed method are examined by comparing the results with those of the fully meshed finite-difference method.
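
    For orientation, the generalized Hebbian algorithm (Sanger's rule) used here to extract principal eigenvectors from snapshots can be written in a few lines. The sketch below is a generic numpy implementation on placeholder snapshot data, not the authors' MEMS code, and the learning rate and number of components are assumptions.

        # Generalized Hebbian algorithm (Sanger's rule) for extracting leading principal components.
        import numpy as np

        rng = np.random.default_rng(0)
        snapshots = rng.normal(size=(500, 20))            # placeholder snapshot matrix (time x dof)
        snapshots -= snapshots.mean(axis=0)               # work with zero-mean data

        k, eta = 3, 1e-3                                  # number of components, learning rate (assumed)
        W = rng.normal(scale=0.1, size=(k, snapshots.shape[1]))   # rows converge to eigenvectors

        for epoch in range(200):
            for x in snapshots:
                y = W @ x                                 # component outputs
                # Sanger's rule: dW = eta * (y x^T - lower_triangular(y y^T) W)
                W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

        # The rows of W approximate the leading eigenvectors of the snapshot correlation matrix,
        # which can then serve as a reduced (Galerkin) basis.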

  18. [Artificial neural networks in Neurosciences].

    PubMed

    Porras Chavarino, Carmen; Salinas Martínez de Lecea, José María

    2011-11-01

    This article shows that artificial neural networks can be used to confirm the relationships between physiological and cognitive changes. Specifically, we explore the influence of a decrease of neurotransmitters on the behaviour of old people in recognition tasks. The artificial neural network recognizes learned patterns. When we change the threshold of activation in some units, the artificial neural network simulates the experimental results of old people in recognition tasks. However, the main contributions of this paper are the design of an artificial neural network whose operation is inspired by the nervous system, the way the inputs are coded, and the process of orthogonalization of patterns.

  19. Accelerating Learning By Neural Networks

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1992-01-01

    Electronic neural networks made to learn faster by use of terminal teacher forcing. Method of supervised learning involves addition of teacher forcing functions to excitations fed as inputs to output neurons. Initially, teacher forcing functions are strong enough to force outputs to desired values; subsequently, these functions decay with time. When learning successfully completed, terminal teacher forcing vanishes, and dynamics of neural network become equivalent to those of conventional neural network. Simulated neural network with terminal teacher forcing learned to produce close approximation of circular trajectory in 400 iterations.

  20. Simulation and optimization of a pulsating heat pipe using artificial neural network and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Jokar, Ali; Godarzi, Ali Abbasi; Saber, Mohammad; Shafii, Mohammad Behshad

    2016-11-01

    In this paper, a novel approach is presented to simulate and optimize pulsating heat pipes (PHPs). The pulsating heat pipe setup used here was designed and constructed for this study. Because no general mathematical model exists for the exact analysis of PHPs, a method based on nature-inspired algorithms has been applied for simulation and optimization. The simulator consists of a multilayer perceptron neural network, which is trained on experimental results obtained from our PHP setup. The results show that the complex behavior of PHPs can be successfully described by the non-linear structure of this simulator. The input variables of the neural network are the input heat flux to the evaporator (q″), the filling ratio (FR) and the inclination angle (IA), and its output is the thermal resistance of the PHP. Finally, based upon the simulation results and considering the heat pipe's operating constraints, the optimum operating point of the system is obtained by using a genetic algorithm (GA). The experimental results show that the optimum FR (38.25 %), input heat flux to the evaporator (39.93 W) and IA (55°) obtained from the GA are acceptable.
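
    To make the simulate-then-optimize workflow concrete, the sketch below trains a small multilayer-perceptron surrogate on placeholder (q″, FR, IA) → thermal-resistance data and then minimizes the surrogate with a bare-bones genetic algorithm written in numpy. The data, network size, GA settings and search bounds are all illustrative assumptions, not the paper's experimental values.

        # Hedged sketch: MLP surrogate of a pulsating heat pipe + genetic-algorithm optimization.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        # Placeholder training data: columns are heat input q'', filling ratio FR (%), angle IA (deg).
        X = rng.uniform([10, 20, 0], [80, 80, 90], size=(200, 3))
        y = 0.5 + 0.01 * (X[:, 1] - 40) ** 2 / 100 + 5.0 / X[:, 0] + 0.001 * (90 - X[:, 2])  # fake R_th

        surrogate = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0).fit(X, y)

        def fitness(pop):
            return surrogate.predict(pop)                 # thermal resistance, to be minimized

        lo, hi = np.array([10, 20, 0]), np.array([80, 80, 90])
        pop = rng.uniform(lo, hi, size=(40, 3))           # initial population
        for gen in range(100):
            f = fitness(pop)
            parents = pop[np.argsort(f)[:20]]             # selection: keep the best half
            children = 0.5 * (parents[rng.integers(0, 20, 20)] + parents[rng.integers(0, 20, 20)])
            children += rng.normal(scale=0.05 * (hi - lo), size=children.shape)   # mutation
            pop = np.clip(np.vstack([parents, children]), lo, hi)

        best = pop[np.argmin(fitness(pop))]
        print("approximate optimum (q'', FR, IA):", best)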

  1. Closed Loop Interactions between Spiking Neural Network and Robotic Simulators Based on MUSIC and ROS

    PubMed Central

    Weidel, Philipp; Djurfeldt, Mikael; Duarte, Renato C.; Morrison, Abigail

    2016-01-01

    In order to properly assess the function and computational properties of simulated neural systems, it is necessary to account for the nature of the stimuli that drive the system. However, providing stimuli that are rich and yet both reproducible and amenable to experimental manipulations is technically challenging, and even more so if a closed-loop scenario is required. In this work, we present a novel approach to solve this problem, connecting robotics and neural network simulators. We implement a middleware solution that bridges the Robotic Operating System (ROS) to the Multi-Simulator Coordinator (MUSIC). This enables any robotic and neural simulators that implement the corresponding interfaces to be efficiently coupled, allowing real-time performance for a wide range of configurations. This work extends the toolset available for researchers in both neurorobotics and computational neuroscience, and creates the opportunity to perform closed-loop experiments of arbitrary complexity to address questions in multiple areas, including embodiment, agency, and reinforcement learning. PMID:27536234

  2. Causal measures of structure and plasticity in simulated and living neural networks.

    PubMed

    Cadotte, Alex J; DeMarse, Thomas B; He, Ping; Ding, Mingzhou

    2008-10-07

    A major goal of neuroscience is to understand the relationship between neural structures and their function. Recording of neural activity with arrays of electrodes is a primary tool employed toward this goal. However, the relationships among the neural activity recorded by these arrays are often highly complex, making it problematic to accurately quantify a network's structural information and then relate that structure to its function. Current statistical methods including cross correlation and coherence have achieved only modest success in characterizing structural connectivity. Over the last decade an alternative technique known as Granger causality has been emerging within neuroscience. This technique, borrowed from the field of economics, provides a strong mathematical foundation based on linear auto-regression to detect and quantify "causal" relationships among different time series. This paper presents a combination of three Granger-based analytical methods that can quickly provide a relatively complete representation of the causal structure within a neural network. These are a simple pairwise Granger causality metric, a conditional metric, and a little-known, computationally inexpensive subtractive conditional method. Each causal metric is first described and evaluated in a series of biologically plausible neural simulations. We then demonstrate how Granger causality can detect and quantify changes in the strength of those relationships during plasticity, using 60-channel spike train data from an in vitro cortical network measured on a microelectrode array. We show that these metrics can not only detect the presence of causal relationships, they also provide crucial information about the strength and direction of that relationship, particularly when that relationship may be changing during plasticity. Although we focus on the analysis of multichannel spike train data the metrics we describe are applicable to any stationary time series in which causal
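
    As a pointer to how the pairwise metric can be computed in practice, the snippet below runs a standard Granger causality test on two placeholder time series with statsmodels. It is a generic illustration of the pairwise test only, not the authors' conditional or subtractive-conditional implementations, and the lag order is an assumption.

        # Pairwise Granger causality on placeholder time series (illustration only).
        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(0)
        n = 500
        x = rng.normal(size=n)
        y = np.zeros(n)
        for t in range(2, n):                      # y is driven by lagged x, so x "Granger-causes" y
            y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 2] + 0.1 * rng.normal()

        # Column order matters: the test asks whether the SECOND column helps predict the FIRST.
        data = np.column_stack([y, x])
        results = grangercausalitytests(data, maxlag=4)   # F-tests for lags 1..4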

  3. Neural Network Development Tool (NETS)

    NASA Technical Reports Server (NTRS)

    Baffes, Paul T.

    1990-01-01

    Artificial neural networks formed from hundreds or thousands of simulated neurons, connected in manner similar to that in human brain. Such network models learning behavior. Using NETS involves translating problem to be solved into input/output pairs, designing network configuration, and training network. Written in C.

  4. Neural networks counting chimes.

    PubMed Central

    Amit, D J

    1988-01-01

    It is shown that the ideas that led to neural networks capable of recalling associatively and asynchronously temporal sequences of patterns can be extended to produce a neural network that automatically counts the cardinal number in a sequence of identical external stimuli. The network is explicitly constructed, analyzed, and simulated. Such a network may account for the cognitive effect of the automatic counting of chimes to tell the hour. A more general implication is that different electrophysiological responses to identical stimuli, at certain stages of cortical processing, do not necessarily imply synaptic modification, a la Hebb. Such differences may arise from the fact that consecutive identical inputs find the network in different stages of an active temporal sequence of cognitive states. These types of networks are then situated within a program for the study of cognition, which assigns the detection of meaning as the primary role of attractor neural networks rather than computation, in contrast to the parallel distributed processing attitude to the connectionist project. This interpretation is free of homunculus, as well as from the criticism raised against the cognitive model of symbol manipulation. Computation is then identified as the syntax of temporal sequences of quasi-attractors. PMID:3353371

  5. Simulation tests of the optimization method of Hopfield and Tank using neural networks

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.

    1988-01-01

    The method proposed by Hopfield and Tank for using the Hopfield neural network with continuous valued neurons to solve the traveling salesman problem is tested by simulation. Several researchers have apparently been unable to successfully repeat the numerical simulation documented by Hopfield and Tank. However, as suggested to the author by Adams, it appears that the reason for those difficulties is that a key parameter value is reported erroneously (by four orders of magnitude) in the original paper. When a reasonable value is used for that parameter, the network performs generally as claimed. Additionally, a new method of using feedback to control the input bias currents to the amplifiers is proposed and successfully tested. This eliminates the need to set the input currents by trial and error.
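
    For readers who want to experiment with the sensitivity to parameter values discussed here, the sketch below integrates the standard continuous Hopfield-Tank dynamics for a small travelling salesman instance. The penalty coefficients, gain and step size are illustrative assumptions (and, as the abstract notes, the behaviour is quite sensitive to them), so this is a starting point rather than a reproduction of either paper's simulation.

        # Continuous Hopfield-Tank dynamics for a small TSP instance (illustrative parameters).
        import numpy as np

        rng = np.random.default_rng(0)
        n = 6
        cities = rng.random((n, 2))
        d = np.linalg.norm(cities[:, None, :] - cities[None, :, :], axis=-1)   # distance matrix

        A = B = 500.0        # row/column constraint penalties (assumed values)
        C = 200.0            # global "exactly n active neurons" penalty (assumed)
        D = 500.0            # tour-length term (assumed)
        u0, tau, dt = 0.02, 1.0, 1e-5

        u = 0.001 * rng.standard_normal((n, n))          # u[x, i]: city x at tour position i
        for step in range(20000):
            V = 0.5 * (1.0 + np.tanh(u / u0))            # neuron outputs in [0, 1]
            row = V.sum(axis=1, keepdims=True) - V       # sum over positions j != i, per city
            col = V.sum(axis=0, keepdims=True) - V       # sum over cities y != x, per position
            glob = V.sum() - n                           # deviation from "exactly n ones"
            neigh = np.roll(V, -1, axis=1) + np.roll(V, 1, axis=1)   # V[y, i+1] + V[y, i-1]
            du = -u / tau - A * row - B * col - C * glob - D * (d @ neigh)
            u += dt * du

        tour = np.argmax(V, axis=0)                      # crude readout: most active city per position
        print("candidate tour:", tour)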

  6. Introducing ab initio based neural networks for transition-rate prediction in kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Messina, Luca; Castin, Nicolas; Domain, Christophe; Olsson, Pär

    2017-02-01

    The quality of kinetic Monte Carlo (KMC) simulations of microstructure evolution in alloys relies on the parametrization of point-defect migration rates, which are complex functions of the local chemical composition and can be calculated accurately with ab initio methods. However, constructing reliable models that ensure the best possible transfer of physical information from ab initio to KMC is a challenging task. This work presents an innovative approach, where the transition rates are predicted by artificial neural networks trained on a database of 2000 migration barriers, obtained with density functional theory (DFT) in place of interatomic potentials. The method is tested on copper precipitation in thermally aged iron alloys, by means of a hybrid atomistic-object KMC model. For the object part of the model, the stability and mobility properties of copper-vacancy clusters are analyzed by means of independent atomistic KMC simulations, driven by the same neural networks. The cluster diffusion coefficients and mean free paths are found to increase with size, confirming the dominant role of coarsening of medium- and large-sized clusters in the precipitation kinetics. The evolution under thermal aging is in better agreement with experiments with respect to a previous interatomic-potential model, especially concerning the experiment time scales. However, the model underestimates the solubility of copper in iron due to the excessively high solution energy predicted by the chosen DFT method. Nevertheless, this work proves the capability of neural networks to transfer complex ab initio physical properties to higher-scale models, and facilitates the extension to systems with increasing chemical complexity, setting the ground for reliable microstructure evolution simulations in a wide range of alloys and applications.
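
    The coupling described here, a regressor that predicts migration barriers which are then turned into Arrhenius rates for a residence-time KMC step, can be summarized in a short sketch. The barrier predictor below is a stand-in trained on placeholder data (the actual work trains networks on about 2000 DFT barriers), and the attempt frequency, temperature, descriptors and event list are illustrative assumptions.

        # Sketch: neural-network barrier prediction feeding a residence-time KMC step.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        kB, T, nu0 = 8.617e-5, 600.0, 1e13        # eV/K, temperature (K), attempt frequency (1/s)

        rng = np.random.default_rng(0)
        # Placeholder training set: local-environment descriptors -> migration barrier (eV).
        descriptors = rng.random((2000, 10))
        barriers = 0.6 + 0.3 * descriptors[:, 0] - 0.2 * descriptors[:, 1]   # fake barriers
        barrier_net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000,
                                   random_state=0).fit(descriptors, barriers)

        def kmc_step(event_descriptors, rng):
            """Pick one transition and a time increment from NN-predicted barriers."""
            Em = barrier_net.predict(event_descriptors)          # predicted barriers (eV)
            rates = nu0 * np.exp(-Em / (kB * T))                 # Arrhenius transition rates
            total = rates.sum()
            chosen = np.searchsorted(np.cumsum(rates) / total, rng.random())
            dt = -np.log(rng.random()) / total                   # residence-time increment
            return chosen, dt

        events = rng.random((12, 10))          # descriptors of, say, 12 possible vacancy jumps
        chosen, dt = kmc_step(events, rng)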

  7. Generalized classifier neural network.

    PubMed

    Ozyildirim, Buse Melis; Avci, Mutlu

    2013-03-01

    In this work a new radial basis function based classification neural network, named the generalized classifier neural network, is proposed. The proposed generalized classifier neural network has five layers, unlike other radial basis function based neural networks such as the generalized regression neural network and the probabilistic neural network: input, pattern, summation, normalization and output layers. In addition to the topological difference, the proposed neural network includes a gradient descent based optimization of the smoothing parameter and a diverge effect term added to its calculations. The diverge effect term is an improvement to the summation layer calculation that supplies additional separation ability and flexibility. Performance of the generalized classifier neural network is compared with that of the probabilistic neural network, the multilayer perceptron algorithm and the radial basis function neural network on 9 different data sets, and with that of the generalized regression neural network on 3 different data sets that include only two classes, in the MATLAB environment. Better classification performance of up to 89% is observed. The improved classification performance proves the effectiveness of the proposed neural network.

  8. Neural networks for triggering

    SciTech Connect

    Denby, B.; Campbell, M.; Bedeschi, F.; Chriss, N.; Bowers, C.; Nesti, F.

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.

  9. Nonlinear Neural Network Oscillator.

    DTIC Science & Technology

    A nonlinear oscillator (10) includes a neural network (12) having at least one output (12a) for outputting a one dimensional vector. The neural ... neural network and the input of the input layer for modifying a magnitude and/or a polarity of the one dimensional output vector prior to the sample of...first or a second direction. Connection weights of the neural network are trained on a deterministic sequence of data from a chaotic source or may be a

  10. Training Knowledge Bots for Physics-Based Simulations Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.; Wong, Jay Ming

    2014-01-01

    Millions of complex physics-based simulations are required for the design of an aerospace vehicle. These simulations are usually performed by highly trained and skilled analysts, who execute, monitor, and steer each simulation. Analysts rely heavily on their broad experience that may have taken 20-30 years to accumulate. In addition, the simulation software is complex in nature, requiring significant computational resources. Simulations of systems of systems become even more complex and are beyond human capacity to effectively learn their behavior. IBM has developed machines that can learn and compete successfully with chess grandmasters and the most successful Jeopardy contestants. These machines are capable of learning some complex problems much faster than humans can. In this paper, we propose using artificial neural networks to train knowledge bots to identify the idiosyncrasies of simulation software and recognize patterns that can lead to successful simulations. We examine the use of knowledge bots for applications of computational fluid dynamics (CFD), trajectory analysis, commercial finite-element analysis software, and slosh propellant dynamics. We show that machine learning algorithms can be used to learn the idiosyncrasies of computational simulations and identify regions of instability without including any additional information about their mathematical form or applied discretization approaches.

  11. Neural Networks for Readability Analysis.

    ERIC Educational Resources Information Center

    McEneaney, John E.

    This paper describes and reports on the performance of six related artificial neural networks that have been developed for the purpose of readability analysis. Two networks employ counts of linguistic variables that simulate a traditional regression-based approach to readability. The remaining networks determine readability from "visual…

  12. Chaotic simulated annealing by a neural network with a variable delay: design and application.

    PubMed

    Chen, Shyan-Shiou

    2011-10-01

    In this paper, we have three goals: the first is to delineate the advantages of a variably delayed system, the second is to find a more intuitive Lyapunov function for a delayed neural network, and the third is to design a delayed neural network for a quadratic cost function. For delayed neural networks, most researchers construct a Lyapunov function based on the linear matrix inequality (LMI) approach. However, that approach is not intuitive. We provide an alternative candidate Lyapunov function for a delayed neural network. On the other hand, if we are first given a quadratic cost function, we can construct a delayed neural network by suitably dividing the second-order term into two parts: a self-feedback connection weight and a delayed connection weight. To demonstrate the advantage of a variably delayed neural network, we propose a transiently chaotic neural network with variable delay and show numerically that the model should possess a better searching ability than Chen-Aihara's model, Wang's model, and Zhao's model. We discuss both the chaotic and the convergent phases. During the chaotic phase, we simply present bifurcation diagrams for a single neuron with a constant delay and with a variable delay. We show that the variably delayed model possesses the stochastic property and chaotic wandering. During the convergent phase, we not only provide a novel Lyapunov function for neural networks with a delay (the Lyapunov function is independent of the LMI approach) but also establish a correlation between the Lyapunov function for a delayed neural network and an objective function for the traveling salesman problem.
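
    To visualize the chaotic-then-convergent behaviour referred to in this comparison, the sketch below iterates a single transiently chaotic neuron in the standard Chen-Aihara form, in which a self-feedback term z(t) decays geometrically and drives the neuron from chaotic wandering into convergence. The parameter values are illustrative and no delay term is included, so this is only a baseline picture, not the variably delayed model proposed in the paper.

        # Single transiently chaotic neuron (Chen-Aihara-style), illustrative parameters.
        import numpy as np

        eps = 1.0 / 250.0      # steepness of the output function (assumed)
        k = 0.9                # damping of the internal state (assumed)
        alpha = 0.015          # scaling of the (here constant) input (assumed)
        I0 = 0.65              # positive bias in the self-feedback term (assumed)
        beta = 0.001           # decay rate of the self-feedback weight z (assumed)
        a = 0.5                # constant external input (assumed)

        y, z = 0.283, 0.08     # initial internal state and initial self-feedback weight
        xs = []
        for t in range(3000):
            x = 1.0 / (1.0 + np.exp(-y / eps))          # neuron output
            y = k * y + alpha * a - z * (x - I0)        # internal-state update
            z = (1.0 - beta) * z                        # transient chaos: z decays toward zero
            xs.append(x)

        # Early iterates of xs wander chaotically; once z has decayed, the output converges,
        # which is the bifurcation-diagram behaviour the abstract describes.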

  13. Efficient Simulation of Wing Modal Response: Application of 2nd Order Shape Sensitivities and Neural Networks

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Liu, Youhua

    2000-01-01

    At the preliminary design stage of a wing structure, an efficient simulation, one needing little computation but yielding adequately accurate results for various response quantities, is essential in the search for an optimal design in a vast design space. In the present paper, methods using sensitivities up to 2nd order, and the direct application of neural networks, are explored. The example problem is how to determine the natural frequencies of a wing given the shape variables of the structure. It is shown that when sensitivities cannot be obtained analytically, the finite difference approach is usually more reliable than a semi-analytical approach, provided an appropriate step size is used. Using second order sensitivities is shown to yield much better results than using only first order sensitivities. When neural networks are trained to relate the wing natural frequencies to the shape variables, a negligible computational effort is needed to accurately determine the natural frequencies of a new design.

  14. Brain without mind: Computer simulation of neural networks with modifiable neuronal interactions

    NASA Astrophysics Data System (ADS)

    Clark, John W.; Rafelski, Johann; Winston, Jeffrey V.

    1985-07-01

    Aspects of brain function are examined in terms of a nonlinear dynamical system of highly interconnected neuron-like binary decision elements. The model neurons operate synchronously in discrete time, according to deterministic or probabilistic equations of motion. Plasticity of the nervous system, which underlies such cognitive collective phenomena as adaptive development, learning, and memory, is represented by temporal modification of interneuronal connection strengths depending on momentary or recent neural activity. A formal basis is presented for the construction of local plasticity algorithms, or connection-modification routines, spanning a large class. To build an intuitive understanding of the behavior of discrete-time network models, extensive computer simulations have been carried out (a) for nets with fixed, quasirandom connectivity and (b) for nets with connections that evolve under one or another choice of plasticity algorithm. From the former experiments, insights are gained concerning the spontaneous emergence of order in the form of cyclic modes of neuronal activity. In the course of the latter experiments, a simple plasticity routine (“brainwashing,” or “anti-learning”) was identified which, applied to nets with initially quasirandom connectivity, creates model networks which provide more felicitous starting points for computer experiments on the engramming of content-addressable memories and on learning more generally. The potential relevance of this algorithm to developmental neurobiology and to sleep states is discussed. The model considered is at the same time a synthesis of earlier synchronous neural-network models and an elaboration upon them; accordingly, the present article offers both a focused review of the dynamical properties of such systems and a selection of new findings derived from computer simulation.

  15. Neural Network Hurricane Tracker

    DTIC Science & Technology

    1998-05-27

    data about the hurricane and supplying the data to a trained neural network for yielding a predicted path for the hurricane. The system further includes...a device for displaying the predicted path of the hurricane. A method for using and training the neural network in the system is described. In the...method, the neural network is trained using information about hurricanes in a specific geographical area maintained in a database. The training involves

  16. Simulated village locations in Thailand: A multi-scale model including a neural network approach.

    PubMed

    Tang, Wenwu; Malanson, George P; Entwisle, Barbara

    2009-04-01

    The simulation of rural land use systems in general, and rural settlement dynamics in particular, has developed with synergies of theory and methods for decades. Three current issues are: linking spatial patterns and processes, representing hierarchical relations across scales, and considering nonlinearity to address complex non-stationary settlement dynamics. We present a hierarchical simulation model to investigate complex rural settlement dynamics in Nang Rong, Thailand. This simulation uses sub-models to allocate new villages at three spatial scales. Regional and sub-regional models, which involve a nonlinear space-time autoregressive model implemented in a neural network approach, determine the number of new villages to be established. A dynamic village niche model, establishing the pattern-process link, was designed to enable the allocation of villages to specific locations. Spatiotemporal variability in model performance indicates the pattern of village location changes as a settlement frontier advances from rice-growing lowlands to higher elevations. Experimental results demonstrate that this simulation model can enhance our understanding of settlement development in Nang Rong and thus provide insight into complex land use systems in this area.

  17. Simulated village locations in Thailand: A multi-scale model including a neural network approach

    PubMed Central

    Malanson, George P.; Entwisle, Barbara

    2010-01-01

    The simulation of rural land use systems in general, and rural settlement dynamics in particular, has developed with synergies of theory and methods for decades. Three current issues are: linking spatial patterns and processes, representing hierarchical relations across scales, and considering nonlinearity to address complex non-stationary settlement dynamics. We present a hierarchical simulation model to investigate complex rural settlement dynamics in Nang Rong, Thailand. This simulation uses sub-models to allocate new villages at three spatial scales. Regional and sub-regional models, which involve a nonlinear space-time autoregressive model implemented in a neural network approach, determine the number of new villages to be established. A dynamic village niche model, establishing the pattern-process link, was designed to enable the allocation of villages to specific locations. Spatiotemporal variability in model performance indicates the pattern of village location changes as a settlement frontier advances from rice-growing lowlands to higher elevations. Experimental results demonstrate that this simulation model can enhance our understanding of settlement development in Nang Rong and thus provide insight into complex land use systems in this area. PMID:21399748

  18. Studies in Neural Networks

    DTIC Science & Technology

    1991-01-01

    Contract No.: N00014-87-K-0377. TITLE: "Studies in Neural Networks". Final... have been very useful, both in understanding the dynamics of neural networks and in engineering networks to perform particular tasks. We have noted...understanding more complex network computation. Interest in applying ideas from biological neural networks to real problems of engineering raises the issues of

  19. Latent Heat and Sensible Heat Fluxes Simulation in Maize Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Safa, B.

    2015-12-01

    Latent Heat (LE) and Sensible Heat (H) fluxes are two major components of the energy balance at the earth's surface, which play important roles in the water cycle and global warming. There are various methods for their estimation or measurement. Eddy covariance is a direct and accurate technique for their measurement, but some limitations prevent its extensive use. Therefore, simulation approaches can be utilized for their estimation. ANNs are information processing systems which can inspect empirical data, investigate the relations (hidden rules) among them, and then build the network structure. In this study, a multi-layer perceptron neural network trained by the steepest-descent Back-Propagation (BP) algorithm was tested to simulate LE and H fluxes above two maize sites (rain-fed and irrigated) near Mead, Nebraska. Network training and testing were carried out using hourly data including year, local time of day (DTime), leaf area index (LAI), soil water content (SWC) at 10 and 25 cm depths, soil temperature (Ts) at 10 cm depth, air temperature (Ta), vapor pressure deficit (VPD), wind speed (WS), irrigation and precipitation (P), net radiation (Rn), and the fraction of incoming Photosynthetically Active Radiation (PAR) absorbed by the canopy (fPAR), selected from days of year (DOY) 169 to 222 for 2001, 2003, 2005, 2007, and 2009. The results showed high correlation between actual and estimated data; the R² values for LE flux at the irrigated and rain-fed sites were 0.9576 and 0.9642, and for H flux 0.8001 and 0.8478, respectively. Furthermore, the RMSE values ranged from 0.0580 to 0.0721 W/m² for LE flux and from 0.0824 to 0.0863 W/m² for H flux. In addition, the sensitivity of the fluxes with respect to each input was analyzed over the growth stages. The most influential inputs for LE flux were identified as net radiation, leaf area index, vapor pressure deficit, wind speed, and for H

  20. Neural networks to simulate regional ground water levels affected by human activities.

    PubMed

    Feng, Shaoyuan; Kang, Shaozhong; Huo, Zailin; Chen, Shaojun; Mao, Xiaomin

    2008-01-01

    In arid regions, human activities like agriculture and industry often require large ground water extractions. Under these circumstances, appropriate ground water management policies are essential for preventing aquifer overdraft, and thereby protecting critical ecologic and economic objectives. Identification of such policies requires accurate simulation capability of the ground water system in response to hydrological, meteorological, and human factors. In this research, artificial neural networks (ANNs) were developed and applied to investigate the effects of these factors on ground water levels in the Minqin oasis, located in the lower reach of Shiyang River Basin, in Northwest China. Using data spanning 1980 through 1997, two ANNs were developed to model and simulate dynamic ground water levels for the two subregions of Xinhe and Xiqu. The ANN models achieved high predictive accuracy, validating to 0.37 m or less mean absolute error. Sensitivity analyses were conducted with the models demonstrating that agricultural ground water extraction for irrigation is the predominant factor responsible for declining ground water levels exacerbated by a reduction in regional surface water inflows. ANN simulations indicate that it is necessary to reduce the size of the irrigation area to mitigate ground water level declines in the oasis. Unlike previous research, this study demonstrates that ANN modeling can capture important temporally and spatially distributed human factors like agricultural practices and water extraction patterns on a regional basin (or subbasin) scale, providing both high-accuracy prediction capability and enhanced understanding of the critical factors influencing regional ground water conditions.

  1. Spatial Estimation, Data Assimilation and Stochastic Conditional Simulation using the Counterpropagation Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Besaw, L. E.; Rizzo, D. M.; Boitnoitt, G. N.

    2006-12-01

    Accurate, yet cost-effective, site characterization and analysis of uncertainty are the first steps in remediation efforts at sites with subsurface contamination. From the time of source identification to the monitoring and assessment of a remediation design, the management objectives change, resulting in increased costs and the need for additional data acquisition. Parameter estimation is a key component in reliable site characterization, contaminant flow and transport predictions, plume delineation and many other data management goals. We implement a data-driven parameter estimation technique using a counterpropagation Artificial Neural Network (ANN) that is able to incorporate multiple types of data. This method is applied to estimates of geophysical properties measured on a slab of Berea sandstone and to delineation of the leachate plume migrating from a landfill in upstate N.Y. The estimates generated by the ANN have been found to be statistically similar to estimates generated using conventional geostatistical kriging methods. The associated parameter uncertainty in site characterization, due to sparsely distributed samples (spatial or temporal) and incomplete site knowledge, is of major concern in resource mining and environmental engineering. We also illustrate the ability of the ANN method to perform conditional simulation using the spatial structure of parameters identified with semi-variogram analysis. This method allows for the generation of simulations that respect the observed measurement data, as well as the data's underlying spatial structure. The method of conditional simulation is used in a 3-dimensional application to estimate the uncertainty of soil lithology.

  2. Antenna analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern

  3. Prototype neural network pattern recognition testbed

    NASA Astrophysics Data System (ADS)

    Worrell, Steven W.; Robertson, James A.; Varner, Thomas L.; Garvin, Charles G.

    1991-02-01

    Recent successes of neural networks have led to an optimistic outlook for neural network applications to image processing (IP). This paper presents a general architecture for performing comparative studies of neural processing and more conventional IP techniques as well as hybrid pattern recognition (PR) systems. Two hybrid PR systems have been simulated, each of which incorporates both conventional and neural processing techniques.

  4. A multiscale modelling of bone ultrastructure elastic proprieties using finite elements simulation and neural network method.

    PubMed

    Barkaoui, Abdelwahed; Tlili, Brahim; Vercher-Martínez, Ana; Hambli, Ridha

    2016-10-01

    Bone is a living material with a complex hierarchical structure which entails exceptional mechanical properties, including high fracture toughness, specific stiffness and strength. Bone tissue is essentially composed of two phases distributed in approximately 30-70% proportions: an organic phase (mainly type I collagen and cells) and an inorganic phase (hydroxyapatite (HA) and water). The nanostructure of bone can be represented through three scale levels where different repetitive structural units or building blocks are found: at the first level, collagen molecules are arranged in a pentameric structure where mineral crystals grow in specific sites. This primary bone structure constitutes the mineralized collagen microfibril. A structural organization of inter-digitating microfibrils forms the mineralized collagen fibril, which represents the second scale level. The third scale level corresponds to the mineralized collagen fibre, which is composed by the binding of fibrils. The hierarchical nature of bone tissue is largely responsible for its significant mechanical properties; consequently, this is a current outstanding research topic. Few works in the literature correlate the elastic properties across the three scale levels of the bone nanoscale. The main goal of this work is to estimate the elastic properties of bone tissue in a multiscale approach, including a sensitivity analysis of the elastic behaviour at each length scale. This is achieved by means of a novel hybrid multiscale modelling that involves neural network (NN) computations and finite element method (FEM) analysis. The elastic properties are estimated using a neural network simulation that has previously been trained with the database of results from the finite element models. Parametric analyses and averaged elastic constants for each length scale are provided. Likewise, the influence of the elastic constants of the tissue constituents is also depicted. Results highlight

  5. Neural network architecture for crossbar switch control

    NASA Technical Reports Server (NTRS)

    Troudet, Terry P.; Walters, Stephen M.

    1991-01-01

    A Hopfield neural network architecture for the real-time control of a crossbar switch for switching packets at maximum throughput is proposed. The network performance and processing time are derived from a numerical simulation of the transitions of the neural network. A method is proposed to optimize electronic component parameters and synaptic connections, and it is fully illustrated by the computer simulation of a VLSI implementation of a 4 x 4 neural net controller. The extension to larger-size crossbars is demonstrated through the simulation of an 8 x 8 crossbar switch controller, where the performance of the neural computation is discussed in relation to electronic noise and inhomogeneities of network components.

  6. Using feed-forward neural networks for estimation of microbial concentration in a simulated biochemical process.

    PubMed

    Bulsari, A; Saxén, H

    1994-01-01

    This work investigated the feasibility of using feed-forward neural networks for estimation of a state variable in a process with highly non-linear characteristics. A biochemical process was considered where the microorganism Saccharomyces cerevisiae, a yeast, grows in a chemostat on a glucose substrate and produces ethanol as a product of primary energy metabolism. Three state variables for the process are the microbial concentration, substrate concentration and product concentration. The Levenberg-Marquardt Method was used to train the neural networks by minimising the sum of squares of the residuals. The inputs to the networks were the measured variable (product concentration) and the control variable (dilution rate). The output of the network was an estimate for the microbial concentration. Earlier work had shown that system identification of this biochemical process could be performed quite well using feed-forward neural networks. This work demonstrated that state estimation can also be performed successfully using feed-forward neural networks. Knowledge of the process model is not required. The method is simple, reliable and accurate enough for engineering purposes. It can save a lot of expense on sensors, their installation and maintenance.
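
    A minimal modern re-creation of this estimation setup (measured product concentration and dilution rate in, estimated microbial concentration out) might look like the sketch below. It uses scikit-learn's MLPRegressor with an L-BFGS solver in place of the Levenberg-Marquardt training described in the abstract, and the bioreactor "measurements" are placeholders, not data from the paper.

        # Feed-forward state estimation sketch: (product conc., dilution rate) -> microbial conc.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        # Placeholder "measurements": product concentration P (g/L) and dilution rate D (1/h).
        P = rng.uniform(0.0, 20.0, size=300)
        D = rng.uniform(0.05, 0.3, size=300)
        X_biomass = 0.5 * P / (0.1 + D) + 0.2 * rng.normal(size=300)   # fake microbial concentration

        features = np.column_stack([P, D])
        estimator = MLPRegressor(hidden_layer_sizes=(6,), solver='lbfgs',
                                 max_iter=5000, random_state=0)
        estimator.fit(features, X_biomass)

        # Estimate the unmeasured state from new measurements of P and D.
        print(estimator.predict([[12.0, 0.15]]))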

  7. A spatio-temporal hybrid neural network-Kriging model for groundwater level simulation

    NASA Astrophysics Data System (ADS)

    Tapoglou, Evdokia; Karatzas, George P.; Trichakis, Ioannis C.; Varouchakis, Emmanouil A.

    2014-11-01

    Artificial Neural Networks (ANNs) and Kriging have both been used for hydraulic head simulation. In this study, the two methodologies were combined in order to simulate the spatial and temporal distribution of hydraulic head in a study area. In order to achieve that, a fuzzy logic inference system can also be used. Different ANN architectures and variogram models were tested, together with the use or not of a fuzzy logic system. The developed algorithm was implemented and applied for predicting, spatially and temporally, the hydraulic head in an area located in Bavaria, Germany. The performance of the algorithm was evaluated using leave-one-out cross validation and various performance indicators were derived. The best results were achieved by using ANNs with two hidden layers, with the use of the fuzzy logic system and by utilizing the power-law variogram. The results obtained from this procedure can be characterized as favorable, since the RMSE of the method is on the order of magnitude of 10^-2 m. Therefore this method can be used successfully in aquifers where geological characteristics are obscure, but a variety of other, easily accessible data, such as meteorological data, can be found.

  8. Neural network simulation of soil NO3 dynamic under potato crop system

    NASA Astrophysics Data System (ADS)

    Goulet-Fortin, Jérôme; Morais, Anne; Anctil, François; Parent, Léon-Étienne; Bolinder, Martin

    2013-04-01

    Nitrate leaching is a major issue in sandy soils intensively cropped to potato. Modelling could test and improve management practices, particularly as regards optimal N application rates. Lack of input data is an important barrier to the application of classical process-based models to predict soil NO3 content (SNOC) and NO3 leaching (NOL). Alternatively, data-driven models such as neural networks (NN) could better take into account indicators of spatial soil heterogeneity and plant growth pattern, such as the leaf area index (LAI), hence reducing the amount of soil information required. The first objective of this study was to evaluate NN and hybrid models to simulate SNOC in the 0-40 cm soil layer considering inter-annual variations, spatial soil heterogeneity and differential N application rates. The second objective was to evaluate the same methodology to simulate seasonal NOL dynamics at 1 m depth. To this aim, multilayer perceptrons with different combinations of driving meteorological variables, functions of the LAI and state variables of external deterministic models have been trained and evaluated. The state variables from external models were: drainage estimated by the CLASS model and the soil temperature estimated by an ICBM subroutine. Results of SNOC simulations were compared to field data collected between 2004 and 2011 at several experimental plots under potato cropping systems in Québec, Eastern Canada. Results of NOL simulation were compared to data obtained in 2012 from 11 suction lysimeters installed in 2 experimental plots under potato cropping systems in the same region. The best-performing model for SNOC simulation was obtained using a 4-input hybrid model composed of 1) cumulative LAI, 2) cumulative drainage, 3) soil temperature and 4) day of year. The best-performing model for NOL simulation was obtained using a 5-input NN model composed of 1) N fertilization rate at spring, 2) LAI, 3) cumulative rainfall, 4) the day of year and 5) the

  9. Probabilistic Analysis of Neural Networks

    DTIC Science & Technology

    1990-11-26

    provide an understanding of the basic mechanisms of learning and recognition in neural networks. The main areas of progress were analysis of neural network models, study of network connectivity, and investigation of computer network theory.

  10. Iterative Radial Basis Functions Neural Networks as Metamodels of Stochastic Simulations of the Quality of Search Engines in the World Wide Web.

    ERIC Educational Resources Information Center

    Meghabghab, George

    2001-01-01

    Discusses the evaluation of search engines and uses neural networks in stochastic simulation of the number of rejected Web pages per search query. Topics include the iterative radial basis functions (RBF) neural network; precision; response time; coverage; Boolean logic; regression models; crawling algorithms; and implications for search engine…

  11. Neural networks for aircraft control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  12. Neural-Network Computer Transforms Coordinates

    NASA Technical Reports Server (NTRS)

    Josin, Gary M.

    1990-01-01

    Numerical simulation demonstrated ability of conceptual neural-network computer to generalize what it has "learned" from few examples. Ability to generalize achieved with even simple neural network (relatively few neurons) and after exposure of network to only few "training" examples. Ability to obtain fairly accurate mappings after only few training examples used to provide solutions to otherwise intractable mapping problems.

  13. Demonstration of Self-Training Autonomous Neural Networks in Space Vehicle Docking Simulations

    NASA Technical Reports Server (NTRS)

    Patrick, M. Clinton; Thaler, Stephen L.; Stevenson-Chavis, Katherine

    2006-01-01

    Neural Networks have been under examination for decades in many areas of research, with varying degrees of success and acceptance. Key goals of computer learning, rapid problem solution, and automatic adaptation have been elusive at best. This paper summarizes efforts at NASA's Marshall Space Flight Center harnessing such technology to autonomous space vehicle docking for the purpose of evaluating applicability to future missions.

  14. Using Multivariate Adaptive Regression Spline and Artificial Neural Network to Simulate Urbanization in Mumbai, India

    NASA Astrophysics Data System (ADS)

    Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.

    2015-12-01

    Land use change (LUC) models used for modelling urban growth are different in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data-driven and usually do not have a fixed model structure, or the model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and they are model-driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression spline (MARS), and a global parametric model called artificial neural network (ANN), to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used the receiver operating characteristic (ROC) to compare the power of both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS to simulate urban areas in Mumbai, India.
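
    Since the comparison above rests on the area under the ROC curve, a minimal sketch of that comparison step is given below; the labels and model scores are randomly generated placeholders, not the MARS or ANN outputs of the study.

        import numpy as np

        def roc_auc(labels, scores):
            """Area under the ROC curve via ranks (assumes no tied scores)."""
            order = np.argsort(scores)
            ranks = np.empty(len(scores))
            ranks[order] = np.arange(1, len(scores) + 1)
            pos = labels == 1
            n_pos, n_neg = pos.sum(), (~pos).sum()
            return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

        rng = np.random.default_rng(1)
        labels = rng.integers(0, 2, 1000)              # 1 = cell became urban by 2010
        score_a = labels + rng.normal(0.0, 0.9, 1000)  # stand-in scores for model A
        score_b = labels + rng.normal(0.0, 0.8, 1000)  # stand-in scores for model B
        print("AUC, model A:", roc_auc(labels, score_a))
        print("AUC, model B:", roc_auc(labels, score_b))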

  15. Linear eddy mixing based tabulation and artificial neural networks for large eddy simulations of turbulent flames

    SciTech Connect

    Sen, Baris Ali; Menon, Suresh

    2010-01-15

    A large eddy simulation (LES) sub-grid model is developed based on the artificial neural network (ANN) approach to calculate the species instantaneous reaction rates for multi-step, multi-species chemical kinetics mechanisms. The proposed methodology depends on training the ANNs off-line on a thermo-chemical database representative of the actual composition and turbulence (but not the actual geometrical problem) of interest, and later using them to replace the stiff ODE solver (direct integration (DI)) to calculate the reaction rates in the sub-grid. The thermo-chemical database is tabulated with respect to the thermodynamic state vector without any reduction in the number of state variables. The thermo-chemistry is evolved by stand-alone linear eddy mixing (LEM) model simulations under both premixed and non-premixed conditions, where the unsteady interaction of turbulence with chemical kinetics is included as a part of the training database. The proposed methodology is tested in LES and in stand-alone LEM studies of three distinct test cases with different reduced mechanisms and conditions. LES of premixed flame-turbulence-vortex interaction provides direct comparison of the proposed ANN method against DI and ANNs trained on a thermo-chemical database created using another type of tabulation method. It is shown that the ANN trained on the LEM database can capture the correct flame physics with accuracy comparable to DI, which cannot be achieved by an ANN trained on a laminar premixed flame database. A priori evaluation of the ANN generality within and outside its training domain is carried out using stand-alone LEM simulations as well. Results in general are satisfactory, and it is shown that the ANN provides a considerable amount of memory saving and speed-up with reasonable and reliable accuracy. The speed-up is strongly affected by the stiffness of the reduced mechanism used for the computations, whereas the memory saving is considerable regardless. (author)
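
    The general recipe in the abstract (tabulate thermo-chemical states off-line, train a network on the table, then query the network instead of integrating the stiff kinetics) can be sketched, under heavy simplification, with a one-step Arrhenius rate; the rate expression, parameter values and the use of scikit-learn below are all assumptions for illustration, not the LEM-generated database or the authors' ANN.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        A, Ea, R = 1.0e6, 1.0e5, 8.314               # invented Arrhenius parameters

        def rate(T, Y):
            # toy one-step reaction rate standing in for the tabulated chemistry
            return A * Y * np.exp(-Ea / (R * T))

        rng = np.random.default_rng(2)
        T = rng.uniform(900.0, 2200.0, 5000)          # temperature samples (K)
        Y = rng.uniform(0.0, 0.12, 5000)              # fuel mass fraction samples
        X = np.column_stack([T / 1000.0, Y * 10.0])   # crudely scaled network inputs
        w = np.log10(rate(T, Y) + 1e-30)              # log target tames the dynamic range

        ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0)
        ann.fit(X, w)                                 # off-line training on the table

        T0, Y0 = 1500.0, 0.05
        print("direct rate evaluation:", rate(T0, Y0))
        print("ANN surrogate estimate:", 10.0 ** ann.predict([[T0 / 1000.0, Y0 * 10.0]])[0])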

  16. Studies of stimulus parameters for seizure disruption using neural network simulations

    PubMed Central

    Kudela, Pawel; Cho, Ryan J.; Bergey, Gregory K.; Franaszczuk, Piotr

    2009-01-01

    A large scale neural network simulation with realistic cortical architecture has been undertaken to investigate the effects of external electrical stimulation on the propagation and evolution of ongoing seizure activity. This is an effort to explore the parameter space of stimulation variables to uncover promising avenues of research for this therapeutic modality. The model consists of an approximately 800 μm × 800 μm region of simulated cortex, and includes seven neuron classes organized by cortical layer, inhibitory or excitatory properties, and electrophysiological characteristics. The cell dynamics are governed by a modified version of the Hodgkin-Huxley equations in single compartment format. Axonal connections are patterned after histological data and published models of local cortical wiring. Stimulation-induced action potentials take place at the axon initial segments, according to threshold requirements on the applied electric field distribution. Stimulation-induced action potentials in horizontal axonal branches are also separately simulated. The calculations are performed on a 16 node distributed 32-bit processor system. Clear differences in seizure evolution are presented for stimulated versus the undisturbed rhythmic activity. Data are provided for frequency-dependent stimulation effects demonstrating a plateau effect of stimulation efficacy as the applied frequency is increased from 60 Hz to 200 Hz. Timing of the stimulation with respect to the underlying rhythmic activity demonstrates a phase-dependent sensitivity. Electrode height and position effects are also presented. Using a dipole stimulation electrode arrangement, clear orientation effects of the dipole with respect to the model connectivity are also demonstrated. A sensitivity analysis of these results as a function of the stimulation threshold is also provided. PMID:17619199

  17. Neural Networks for Rapid Design and Analysis

    NASA Technical Reports Server (NTRS)

    Sparks, Dean W., Jr.; Maghami, Peiman G.

    1998-01-01

    Artificial neural networks have been employed for rapid and efficient dynamics and control analysis of flexible systems. Specifically, feedforward neural networks are designed to approximate nonlinear dynamic components over prescribed input ranges, and are used in simulations as a means to speed up the overall time response analysis process. To capture the recursive nature of dynamic components with artificial neural networks, recurrent networks, which use state feedback with the appropriate number of time delays as inputs, are employed. Once properly trained, neural networks can give very good approximations to nonlinear dynamic components, and by their judicious use in simulations, allow the analyst the potential to speed up the analysis process considerably. To illustrate this potential speed up, an existing simulation model of a spacecraft reaction wheel system is executed, first conventionally, and then with an artificial neural network in place.

  18. Critical Branching Neural Networks

    ERIC Educational Resources Information Center

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  19. Neural-network accelerated fusion simulation with self-consistent core-pedestal coupling

    NASA Astrophysics Data System (ADS)

    Meneghini, O.; Candy, J.; Snyder, P. B.; Staebler, G.; Belli, E.

    2016-10-01

    Practical fusion Whole Device Modeling (WDM) simulations require the ability to perform predictions that are fast yet account for the sensitivity of the fusion performance to the boundary constraint imposed by the pedestal structure of H-mode plasmas, a sensitivity that arises from the stiffness of the core transport models. This poster presents the development of a set of neural-network (NN) models for the pedestal structure (as predicted by the EPED model), and the neoclassical and turbulent transport fluxes (as predicted by the NEO and TGLF codes, respectively), and their self-consistent coupling within the TGYRO transport code. The results are benchmarked against those obtained via the coupling scheme described in [Meneghini PoP 2016]. By substituting the most demanding codes with their NN-accelerated versions, the solution can be found at a fraction of the computation cost of the original coupling scheme, thereby combining the accuracy of a high-fidelity model with the fast turnaround time of a reduced model. Work supported by U.S. DOE DE-FC02-04ER54698 and DE-FG02-95ER54309.

  20. Neural Networks: A Primer

    DTIC Science & Technology

    1991-05-01

    capture underlying relationships directly from observed behavior is one of the primary capabilities of neural networks. ... model complex behavior patterns. Particularly in areas traditionally addressed by regression and other functional based techniques, neural networks ... to be determined directly from the observed behavior of a system or sample of individuals. This ability should prove important in personnel analysis and

  1. Application of artificial neural networks in hydrological modeling: A case study of runoff simulation of a Himalayan glacier basin

    NASA Technical Reports Server (NTRS)

    Buch, A. M.; Narain, A.; Pandey, P. C.

    1994-01-01

    The simulation of runoff from a Himalayan Glacier basin using an Artificial Neural Network (ANN) is presented. The performance of the ANN model is found to be superior to the Energy Balance Model and the Multiple Regression model. The RMS Error is used as the figure of merit for judging the performance of the three models, and the RMS Error for the ANN model is the lowest of the three models. The ANN is faster in learning and exhibits excellent system generalization characteristics.

  2. Large eddy simulation of extinction and reignition with artificial neural networks based chemical kinetics

    SciTech Connect

    Sen, Baris Ali; Menon, Suresh; Hawkes, Evatt R.

    2010-03-15

    Large eddy simulation (LES) of a non-premixed, temporally evolving, syngas/air flame is performed with special emphasis on speeding-up the sub-grid chemistry computations using an artificial neural network (ANN) approach. The numerical setup for the LES is identical to a previous direct numerical simulation (DNS) study, which reported considerable local extinction and reignition physics, and hence, offers a challenging test case. The chemical kinetics modeling with ANN is based on a recent approach, and replaces the stiff ODE solver (DI) to predict the species reaction rates in the subgrid linear eddy mixing (LEM) model based LES (LEMLES). In order to provide a comprehensive evaluation of the current approach, additional information on conditional statistics of some of the key species and temperature are extracted from the previous DNS study and are compared with the LEMLES using ANN (ANN-LEMLES, hereafter). The results show that the current approach can detect the correct extinction and reignition physics with reasonable accuracy compared to the DNS. The syngas flame structure and the scalar dissipation rate statistics obtained by the current ANN-LEMLES are provided to further probe the flame physics. It is observed that, in contrast to H2, CO exhibits a smooth variation within the region enclosed by the stoichiometric mixture fraction. The probability density functions (PDFs) of the scalar dissipation rates calculated based on the mixture fraction and CO demonstrate that the mean value of the PDF is insensitive to extinction and reignition. However, this is not the case for the scalar dissipation rate calculated by the OH mass fraction. Overall, ANN provides considerable computational speed-up and memory saving compared to DI, and can be used to investigate turbulent flames in a computationally affordable manner. (author)

  3. Neural Network Retinal Model Real Time Implementation

    DTIC Science & Technology

    1992-09-02

    addresses the specific needs of vision processing. The goal of this SBIR Phase I project has been to take a significant neural network vision...application and to map it onto dedicated hardware for real time implementation. The neural network was already demonstrated using software simulation on a...general purpose computer. During Phase 1, HNC took a neural network model of the retina and, using HNC’s Vision Processor (ViP) prototype hardware

  4. Programming neural networks

    SciTech Connect

    Anderson, J.A.; Markman, A.B.; Viscuso, S.R.; Wisniewski, E.J.

    1988-09-01

    Neural networks "compute," though not in the way that traditional computers do. One must accept their weaknesses to use their strengths. The authors present several applications of a particular non-linear network (the BSB model) to illustrate some of the peculiarities inherent in this architecture.

  5. Modeling and simulation of permanent magnet synchronous motor based on neural network control strategy

    NASA Astrophysics Data System (ADS)

    Luo, Bingyang; Chi, Shangjie; Fang, Man; Li, Mengchao

    2017-03-01

    Permanent magnet synchronous motors are used widely in industry, but traditional PID control cannot meet the performance requirements of some demanding applications. In this paper, a hybrid control strategy is adopted in which a nonlinear neural network PID controller and a traditional PID controller operate in parallel. The high stability and reliability of traditional PID control are combined with the strong adaptive ability and robustness of the neural network. The permanent magnet synchronous motor achieves better control performance by switching between working modes according to the conditions of the controlled object. The results show that the speed response under the composite control strategy designed in this paper is faster than under a single control strategy, and in the case of a sudden disturbance the recovery time is shorter and the recovery ability and robustness are stronger.
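
    A toy sketch of the parallel arrangement described above is given below: a fixed conventional PID branch is summed with a single adaptive neuron acting on the tracking error and its integral, the neuron's weights being adjusted online (plant gain assumed positive). The first-order discrete plant and all gains are invented stand-ins, not the paper's PMSM model or controller.

        import numpy as np

        a, b, dt = 0.90, 0.10, 0.01        # toy discrete plant: y[k+1] = a*y[k] + b*u[k]
        Kp, Ki, Kd = 2.0, 0.5, 0.05        # fixed conventional PID gains
        w = np.array([0.5, 0.1])           # adaptive (neural) gains on [error, error integral]
        eta = 0.02                         # learning rate of the neural branch

        y, e_prev, e_sum = 0.0, 0.0, 0.0
        for k in range(600):
            r = 1.0                        # speed setpoint
            e = r - y
            e_sum += e * dt
            de = (e - e_prev) / dt
            u_pid = Kp * e + Ki * e_sum + Kd * de        # conventional PID branch
            x = np.array([e, e_sum])
            u_nn = float(w @ x)                          # single-neuron branch
            u = u_pid + u_nn                             # parallel combination
            w += eta * e * x * dt                        # delta-rule weight update
            y = a * y + b * u                            # plant response
            e_prev = e

        print("output after 6 s:", y, "  adapted gains:", w)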

  6. Dynamic Analysis of Feedforward Neural Networks Using Simulated and Measured Data

    DTIC Science & Technology

    1988-12-01

    principle behind the single and multilayer feedforward (MFB) model neural network. ... While the Kohonen model uses ... the work of Sun (Sun, 1986) and others (Lippman, 1987) has shown that the Single Layer Perceptron can only make correct classifications in very simple ... decision regions. A number of complications can affect the performance of a single layer perceptron. Different classes of data being meshed in a small

  7. A simulation study for the application of two different neural network control algorithms on an electrohydraulic system

    NASA Astrophysics Data System (ADS)

    İstif, İlyas

    2005-11-01

    This paper studies a servo-valve controlled hydraulic cylinder system which is mostly used in industrial applications such as robotics, computer numerical control (CNC) machines and transportation. The system model consists of a combination of two models: the first model involves nonlinear flow equations of the servo-valve, which are widely available in the literature. The second model employed in the system is a tailored asymmetric cylinder model. A fourth order nonlinear system model is then obtained by combining these two models. Two different neural network control algorithms are applied to the system. The first algorithm is "Neural Network Predictive Control (NNPC)," which employs an identified neural network model to predict the future output of the system. The second algorithm is "Nonlinear Autoregressive Moving Average (NARMA-L2)" control, which transforms nonlinear system dynamics into linear system dynamics by eliminating the nonlinearities. In the simulations, NNPC and NARMA-L2 control are applied to the system model using MATLAB's Simulink package, and position control of the system is realized. A discussion regarding the advantages and disadvantages of the two control algorithms is also provided in the paper.

  8. Electronic neural networks for global optimization

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Moopenn, A. W.; Eberhardt, S.

    1990-01-01

    An electronic neural network with feedback architecture, implemented in analog custom VLSI, is described. Its application to problems of global optimization for dynamic assignment is discussed. The convergence properties of the neural network hardware are compared with computer simulation results. The neural network's ability to provide optimal or near optimal solutions within only a few neuron time constants, a speed enhancement of several orders of magnitude over conventional search methods, is demonstrated. The effect of noise on the circuit dynamics and the convergence behavior of the neural network hardware is also examined.

  9. Tomography using neural networks

    NASA Astrophysics Data System (ADS)

    Demeter, G.

    1997-03-01

    We have utilized neural networks for fast evaluation of tomographic data on the MT-1M tokamak. The networks have proven useful in providing the parameters of a nonlinear fit to experimental data, producing results in a fraction of the time required for performing the nonlinear fit. Time required for training the networks makes the method worth applying only if a substantial amount of data are to be evaluated.

  10. Online guidance updates using neural networks

    NASA Astrophysics Data System (ADS)

    Filici, Cristian; Sánchez Peña, Ricardo S.

    2010-02-01

    The aim of this article is to present a method for the online guidance update for a launcher ascent trajectory that is based on the utilization of a neural network approximator. Generation of training patterns and selection of the input and output spaces of the neural network are presented, and implementation issues are discussed. The method is illustrated by a 2-dimensional launcher simulation.

  11. Self-organization of neural networks

    NASA Astrophysics Data System (ADS)

    Clark, John W.; Winston, Jeffrey V.; Rafelski, Johann

    1984-05-01

    The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm (“brainwashing”) is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena.

  12. The Adaptive Kernel Neural Network

    DTIC Science & Technology

    1989-10-01

    A neural network architecture for clustering and classification is described. The Adaptive Kernel Neural Network (AKNN) is a density estimation...classification layer. The AKNN retains the inherent parallelism common in neural network models. Its relationship to the kernel estimator allows the network to

  13. Hyperbolic Hopfield neural networks.

    PubMed

    Kobayashi, M

    2013-02-01

    In recent years, several neural networks using Clifford algebra have been studied. Clifford algebra is also called geometric algebra. Complex-valued Hopfield neural networks (CHNNs) are the most popular neural networks using Clifford algebra. The aim of this brief is to construct hyperbolic HNNs (HHNNs) as an analog of CHNNs. Hyperbolic algebra is a Clifford algebra based on Lorentzian geometry. In this brief, a hyperbolic neuron is defined in a manner analogous to a phasor neuron, which is a typical complex-valued neuron model. HHNNs share common concepts with CHNNs, such as the angle and energy. However, HHNNs and CHNNs are different in several aspects. The states of hyperbolic neurons do not form a circle, and, therefore, the start and end states are not identical. In the quantized version, unlike complex-valued neurons, hyperbolic neurons have an infinite number of states.

  14. Nested neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1988-01-01

    Nested neural networks, consisting of small interconnected subnetworks, allow for the storage and retrieval of neural state patterns of different sizes. The subnetworks are naturally categorized by layers corresponding to spatial frequencies in the pattern field. The storage capacity and the error correction capability of the subnetworks generally increase with the degree of connectivity between layers (the nesting degree). Storage of only a few subpatterns in each subnetwork results in a vast storage capacity of patterns and subpatterns in the nested network, while maintaining high stability and error correction capability.

  15. Neural networks in psychiatry.

    PubMed

    Hulshoff Pol, Hilleke; Bullmore, Edward

    2013-01-01

    Over the past three decades numerous imaging studies have revealed structural and functional brain abnormalities in patients with neuropsychiatric diseases. These structural and functional brain changes are frequently found in multiple, discrete brain areas and may include frontal, temporal, parietal and occipital cortices as well as subcortical brain areas. However, while the structural and functional brain changes in patients are found in anatomically separated areas, these are connected through (long distance) fibers, together forming networks. Thus, instead of representing separate (patho)physiological entities, these local changes in the brains of patients with psychiatric disorders may in fact represent different parts of the same 'elephant', i.e., the (altered) brain network. Recent developments in quantitative analysis of complex networks, based largely on graph theory, have revealed that the brain's structure and functions have features of complex networks. Here we briefly introduce several recent developments in neural network studies relevant to psychiatry, including those from the 2013 special issue on Neural Networks in Psychiatry in European Neuropsychopharmacology. We conclude that new insights will be revealed by neural network approaches to brain imaging in psychiatry, approaches that hold the potential to find causes of psychiatric disorders and (preventive) treatments in the future.

  16. Evolving Neural Network Pattern Classifiers

    DTIC Science & Technology

    1994-05-01

    This work investigates the application of evolutionary programming for automatically configuring neural network architectures for pattern...evaluating a multitude of neural network model hypotheses. The evolutionary programming search is augmented with the Solis & Wets random optimization

  17. Mathematical Theory of Neural Networks

    DTIC Science & Technology

    1994-08-31

    This report provides a summary of the grant work by the principal investigators in the area of neural networks . The topics covered deal with...properties) for nets; and the use of neural networks for the control of nonlinear systems.

  18. Brainlab: A Python Toolkit to Aid in the Design, Simulation, and Analysis of Spiking Neural Networks with the NeoCortical Simulator.

    PubMed

    Drewes, Rich; Zou, Quan; Goodman, Philip H

    2009-01-01

    Neuroscience modeling experiments often involve multiple complex neural network and cell model variants, complex input stimuli and input protocols, followed by complex data analysis. Coordinating all this complexity becomes a central difficulty for the experimenter. The Python programming language, along with its extensive library packages, has emerged as a leading "glue" tool for managing all sorts of complex programmatic tasks. This paper describes a toolkit called Brainlab, written in Python, that leverages Python's strengths for the task of managing the general complexity of neuroscience modeling experiments. Brainlab was also designed to overcome the major difficulties of working with the NCS (NeoCortical Simulator) environment in particular. Brainlab is an integrated model-building, experimentation, and data analysis environment for the powerful parallel spiking neural network simulator system NCS.

  19. Neural Networks for Speech Application.

    DTIC Science & Technology

    1987-11-01

    This is a general introduction to the reemerging technology called neural networks, and how these networks may provide an important alternative to traditional forms of computing in speech applications. Neural networks, sometimes called Artificial Neural Systems (ANS), have shown promise for solving

  20. Generalized Adaptive Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  1. Improved Autoassociative Neural Networks

    NASA Technical Reports Server (NTRS)

    Hand, Charles

    2003-01-01

    Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application-specific integrated circuit would be needed. A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
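
    The inner-product update described above can be illustrated with a conventional Hopfield-style autoassociative network; the sketch below stores three random patterns with Hebbian outer-product weights and recovers one of them from a corrupted cue. It shows only the generic autoassociative behaviour, not the bit-weight arithmetic of the proposed nexus hardware.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 64
        patterns = rng.choice([-1, 1], size=(3, n))        # three stored +/-1 patterns
        W = sum(np.outer(p, p) for p in patterns) / n      # Hebbian outer-product weights
        np.fill_diagonal(W, 0)                             # no self-connections

        state = patterns[0].copy()
        state[:12] *= -1                                   # corrupt 12 of the 64 bits

        for _ in range(10):                                # synchronous update steps
            state = np.where(W @ state >= 0, 1, -1)        # each unit thresholds its inner product

        print("bits matching the stored pattern:", int((state == patterns[0]).sum()), "of", n)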

  2. Neural networks and applications tutorial

    NASA Astrophysics Data System (ADS)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances of computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers) and parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures, which were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.

  3. Comparison between artificial neural network and multilinear regression models in an evaluation of cognitive workload in a flight simulator.

    PubMed

    Hannula, Manne; Huttunen, Kerttu; Koskelo, Jukka; Laitinen, Tomi; Leino, Tuomo

    2008-01-01

    In this study, the performances of artificial neural network (ANN) analysis and multilinear regression (MLR) model-based estimation of heart rate were compared in an evaluation of individual cognitive workload. The data comprised electrocardiography (ECG) measurements and an evaluation of cognitive load that induces psychophysiological stress (PPS), collected from 14 interceptor fighter pilots during complex simulated F/A-18 Hornet air battles. In our data, the mean absolute error of the ANN estimate was 11.4 as a visual analog scale score, being 13-23% better than the mean absolute error of the MLR model in the estimation of cognitive workload.

  4. Simulation study of stimulation parameters in desynchronisation based on the Hodgkin-Huxley small-world neural networks and its possible implications for vagus nerve stimulation.

    PubMed

    Li, Yan-Long; Chen, Zhao-Yang; Ma, Jun; Chen, Yu-Hong

    2008-02-01

    Adopting small-world neural networks of the Hodgkin-Huxley (HH) model, the stimulation parameters in desynchronisation and their possible implications for vagus nerve stimulation (VNS) are numerically investigated. Taking the synchronised status of the networks to represent epilepsy, and adding a pulse stimulation to 10% of the neurons to simulate VNS, we obtain the desynchronised status of the networks (representing antiepileptic effects). The simulations show that synchronisation evolves into desynchronisation in the HH neural networks when a part (10%) of the neurons is stimulated with a pulse current signal. The network desynchronisation appears to be sensitive to the stimulation parameters. For the case of the same stimulation intensity, weakly coupled networks reach desynchronisation more easily than strongly coupled networks. The network desynchronisation induced by a short stimulation interval is more distinct than that induced by a long stimulation interval. We find that there exist an optimal stimulation interval and an optimal stimulation intensity when the other stimulation parameters are held fixed.

  5. Neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.

    1991-01-01

    A whole new arena of computer technologies is now beginning to form. Still in its infancy, neural network technology is a biologically inspired methodology which draws on nature's own cognitive processes. The Software Technology Branch has provided a software tool, Neural Execution and Training System (NETS), to industry, government, and academia to facilitate and expedite the use of this technology. NETS is written in the C programming language and can be executed on a variety of machines. Once a network has been debugged, NETS can produce a C source code which implements the network. This code can then be incorporated into other software systems. Described here are various software projects currently under development with NETS and the anticipated future enhancements to NETS and the technology.

  6. Multispectral image fusion using neural networks

    NASA Technical Reports Server (NTRS)

    Kagel, J. H.; Platt, C. A.; Donaven, T. W.; Samstad, E. A.

    1990-01-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  7. Multispectral-image fusion using neural networks

    NASA Astrophysics Data System (ADS)

    Kagel, Joseph H.; Platt, C. A.; Donaven, T. W.; Samstad, Eric A.

    1990-08-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  8. Coherence resonance in bursting neural networks.

    PubMed

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal, a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  9. Rule generation from neural networks

    SciTech Connect

    Fu, L.

    1994-08-01

    The neural network approach has proven useful for the development of artificial intelligence systems. However, a disadvantage with this approach is that the knowledge embedded in the neural network is opaque. In this paper, we show how to interpret neural network knowledge in symbolic form. We lay down required definitions for this treatment, formulate the interpretation algorithm, and formally verify its soundness. The main result is a formalized relationship between a neural network and a rule-based system. In addition, it has been demonstrated that the neural network generates rules of better performance than the decision tree approach in noisy conditions. 7 refs.

  10. Nonlinear programming with feedforward neural networks.

    SciTech Connect

    Reifman, J.

    1999-06-02

    We provide a practical and effective method for solving constrained optimization problems by successively training a multilayer feedforward neural network in a coupled neural-network/objective-function representation. Nonlinear programming problems are easily mapped into this representation which has a simpler and more transparent method of solution than optimization performed with Hopfield-like networks and poses very mild requirements on the functions appearing in the problem. Simulation results are illustrated and compared with an off-the-shelf optimization tool.

  11. Neural networks for atmospheric retrievals

    NASA Technical Reports Server (NTRS)

    Motteler, Howard E.; Gualtieri, J. A.; Strow, L. Larrabee; Mcmillin, Larry

    1993-01-01

    We use neural networks to perform retrievals of temperature and water fractions from simulated clear air radiances for the Atmospheric Infrared Sounder (AIRS). Neural networks allow us to make effective use of the large AIRS channel set, and give good performance with noisy input. We retrieve surface temperature, air temperature at 64 distinct pressure levels, and water fractions at 50 distinct pressure levels. Using 728 temperature and surface sensitive channels, the RMS error for temperature retrievals with 0.2K input noise is 1.2K. Using 586 water and temperature sensitive channels, the mean error with 0.2K input noise is 16 percent. Our implementation of backpropagation training for neural networks on the 16,000-processor MasPar MP-1 runs at a rate of 90 million weight updates per second, and allows us to train large networks in a reasonable amount of time. Once trained, the network can be used to perform retrievals quickly on a workstation of moderate power.

  12. A realistic neural-network simulation of both slow and quick phase components of the guinea pig VOR.

    PubMed

    Cartwright, Andrew D; Gilchrist, Darrin P D; Burgess, Ann M; Curthoys, Ian S

    2003-04-01

    A realistic neural-network model was constructed to simulate production of both the slow-phase and quick-phase components of vestibular nystagmus by incorporating a quick-phase pathway into a previous model of the slow phase. The neurons in the network were modelled by multicompartmental Hodgkin-Huxley-style spiking neurons based on known responses and projections of physiologically identified vestibular neurons. The modelling used the GENESIS software package. The slow-phase network consisted of ganglion and medial vestibular nucleus (MVN) neurons; the latter were constructed using biophysical models of MVN type A and B neurons. The quick-phase network contained several types of bursting cells which have been shown to have major roles in the generation of the quick phase: burster-driver neurons, long-lead burst neurons, pause neurons, excitatory burst neurons and inhibitory burst neurons. Comparison of the output neural responses from the model with guinea pig behavioural responses from the companion paper showed consistency between model and animal data for neuron firing patterns, maximal firing rates, and timing, duration and number of quick phases. Comparisons were made for stable head input and for sinusoidal angular stimuli at a range of frequencies from 0.1 to 2 Hz. Except for data at 0.1 Hz, where the simulation produced one more quick phase per half cycle than the animal data, the number of quick phases was consistent between the model and the animal data. The model was also used to simulate the effects both of unilateral vestibular deafferentation (UVD) and of vestibular compensation after UVD, and the responses in the modelled MVN neurons were affected in a way similar to those measured in guinea pig MVN neurons: the number of quick phases and their timing changed in a similar fashion to that observed in behavioural data.

  13. Corticostriatal response selection in sentence production: Insights from neural network simulation with reservoir computing.

    PubMed

    Hinaut, Xavier; Lance, Florian; Droin, Colas; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford

    2015-11-01

    Language production requires selection of the appropriate sentence structure to accommodate the communication goal of the speaker - the transmission of a particular meaning. Here we consider event meanings, in terms of predicates and thematic roles, and we address the problem that a given event can be described from multiple perspectives, which poses a problem of response selection. We present a model of response selection in sentence production that is inspired by the primate corticostriatal system. The model is implemented in the context of reservoir computing, where the reservoir - a recurrent neural network with fixed connections - corresponds to cortex, and the readout corresponds to the striatum. We demonstrate robust learning and generalization properties of the model, and demonstrate its cross-linguistic capabilities in English and Japanese. The results contribute to the argument that the corticostriatal system plays a role in response selection in language production, and to the stance that reservoir computing is a valid potential model of corticostriatal processing.
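
    A schematic echo state network along the lines sketched above (a fixed random recurrent reservoir standing in for cortex, a linear readout trained by ridge regression standing in for the striatal mapping) is given below; the delayed-recall task and all sizes are arbitrary placeholders, not the sentence-production corpus or architecture of the study.

        import numpy as np

        rng = np.random.default_rng(4)
        n_res = 200
        W_in = rng.uniform(-0.5, 0.5, (n_res, 1))          # input weights (fixed)
        W = rng.normal(0.0, 1.0, (n_res, n_res))           # recurrent reservoir weights (fixed)
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # keep spectral radius below 1

        T = 1000
        u = rng.uniform(-1.0, 1.0, (T, 1))                 # input sequence
        y_target = np.roll(u[:, 0], 3)                     # toy task: recall the input 3 steps back

        X = np.zeros((T, n_res))
        x = np.zeros(n_res)
        for t in range(T):
            x = np.tanh(W_in @ u[t] + W @ x)               # reservoir state update
            X[t] = x

        washout = 50                                       # discard the initial transient
        A, b = X[washout:], y_target[washout:]
        W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ b)   # ridge readout

        pred = A @ W_out
        print("readout RMSE:", float(np.sqrt(np.mean((pred - b) ** 2))))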

  14. Accuracy evaluation of numerical methods used in state-of-the-art simulators for spiking neural networks.

    PubMed

    Henker, Stephan; Partzsch, Johannes; Schüffny, René

    2012-04-01

    With the various simulators for spiking neural networks developed in recent years, a variety of numerical solution methods for the underlying differential equations are available. In this article, we introduce an approach to systematically assess the accuracy of these methods. In contrast to previous investigations, our approach focuses on a completely deterministic comparison and uses an analytically solved model as a reference. This enables the identification of typical sources of numerical inaccuracies in state-of-the-art simulation methods. In particular, with our approach we can separate the error of the numerical integration from the timing error of spike detection and propagation, the latter being prominent in simulations with fixed timestep. To verify the correctness of the testing procedure, we relate the numerical deviations to theoretical predictions for the employed numerical methods. Finally, we give an example of the influence of simulation artefacts on network behaviour and spike-timing-dependent plasticity (STDP), underlining the importance of spike-time accuracy for the simulation of STDP.

  15. Chaotic time series prediction using artificial neural networks

    SciTech Connect

    Bartlett, E.B.

    1991-12-31

    This paper describes the use of artificial neural networks to model the complex oscillations defined by a chaotic Verhulst animal population dynamic. A predictive artificial neural network model is developed and tested, and results of computer simulations are given. These results show that the artificial neural network model predicts the chaotic time series with various initial conditions, growth parameters, or noise.

  17. Co-combustion of peanut hull and coal blends: Artificial neural networks modeling, particle swarm optimization and Monte Carlo simulation.

    PubMed

    Buyukada, Musa

    2016-09-01

    Co-combustion of coal and peanut hull (PH) was investigated using artificial neural networks (ANN), particle swarm optimization, and Monte Carlo simulation as a function of blend ratio, heating rate, and temperature. The best prediction was reached by the ANN61 multi-layer perceptron model with an R² of 0.99994. A blend ratio of 90 to 10 (PH to coal, wt%), a temperature of 305°C, and a heating rate of 49°C min⁻¹ were determined as the optimum input values, and a yield of 87.4% was obtained under PSO-optimized conditions. The validation experiments resulted in yields of 87.5 ± 0.2% after three replications. Monte Carlo simulations were used for the probabilistic assessment of the stochastic variability and uncertainty associated with the explanatory variables of the co-combustion process.
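
    The optimisation step can be pictured with a bare-bones particle swarm searching a smooth, made-up yield surface over blend ratio, temperature and heating rate; the objective function below is an invented placeholder rather than the trained ANN of the study, and its optimum is deliberately placed near the reported conditions only to make the output readable.

        import numpy as np

        rng = np.random.default_rng(9)

        def fake_yield(x):
            # invented smooth objective with its maximum near (90 wt%, 305 C, 49 C/min)
            blend, temp, rate = x[..., 0], x[..., 1], x[..., 2]
            return 90.0 - 0.01 * (blend - 90.0) ** 2 - 0.001 * (temp - 305.0) ** 2 - 0.02 * (rate - 49.0) ** 2

        lo = np.array([50.0, 200.0, 10.0])                 # lower bounds: blend %, temp C, rate C/min
        hi = np.array([100.0, 400.0, 60.0])                # upper bounds
        n = 30                                             # particles
        pos = rng.uniform(lo, hi, (n, 3))
        vel = np.zeros((n, 3))
        pbest, pbest_val = pos.copy(), fake_yield(pos)
        gbest = pbest[np.argmax(pbest_val)].copy()

        for _ in range(200):
            r1, r2 = rng.random((n, 3)), rng.random((n, 3))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)               # keep particles inside the bounds
            val = fake_yield(pos)
            improved = val > pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[np.argmax(pbest_val)].copy()

        print("best settings (blend %, temp, rate):", gbest, " yield:", float(fake_yield(gbest)))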

  18. An adaptive drug delivery design using neural networks for effective treatment of infectious diseases: a simulation study.

    PubMed

    Padhi, Radhakant; Bhardhwaj, Jayender R

    2009-06-01

    An adaptive drug delivery design is presented in this paper using neural networks for effective treatment of infectious diseases. The generic mathematical model used describes the coupled evolution of concentration of pathogens, plasma cells, antibodies and a numerical value that indicates the relative characteristic of a damaged organ due to the disease under the influence of external drugs. From a system theoretic point of view, the external drugs can be interpreted as control inputs, which can be designed based on control theoretic concepts. In this study, assuming a set of nominal parameters in the mathematical model, first a nonlinear controller (drug administration) is designed based on the principle of dynamic inversion. This nominal drug administration plan was found to be effective in curing "nominal model patients" (patients whose immunological dynamics conform exactly to the mathematical model used for the control design). However, it was found to be ineffective in curing "realistic model patients" (patients whose immunological dynamics may have off-nominal parameter values and possibly unwanted inputs) in general. Hence, to make the drug delivery dosage design more effective for realistic model patients, a model-following adaptive control design is carried out next with the help of neural networks that are trained online. Simulation studies indicate that the adaptive controller proposed in this paper holds promise in killing the invading pathogens and healing the damaged organ even in the presence of parameter uncertainties and continued pathogen attack. Note that the computational requirements for computing the control are very minimal and all associated computations (including the training of neural networks) can be carried out online. However, it assumes that the required diagnosis process can be carried out at a sufficiently fast rate so that all the states are available for control computation.

  19. Pricing financial derivatives with neural networks

    NASA Astrophysics Data System (ADS)

    Morelli, Marco J.; Montagna, Guido; Nicrosini, Oreste; Treccani, Michele; Farina, Marco; Amato, Paolo

    2004-07-01

    Neural network algorithms are applied to the problem of option pricing and adopted to simulate the nonlinear behavior of such financial derivatives. Two different kinds of neural networks, i.e. multi-layer perceptrons and radial basis functions, are used and their performances compared in detail. The analysis is carried out both for standard European options and American ones, including evaluation of the Greek letters, necessary for hedging purposes. Detailed numerical investigations show that, after a careful phase of training, neural networks are able to predict the value of options and Greek letters with high accuracy and competitive computational time.
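
    For flavour only, the sketch below fits a small multi-layer perceptron to Black-Scholes European call prices generated on a random grid and compares the network with the closed-form value at one point; the rate, volatility and network settings are assumptions, and the American-option and Greeks analysis of the paper is not reproduced.

        import numpy as np
        from scipy.stats import norm
        from sklearn.neural_network import MLPRegressor

        r, sigma, K = 0.05, 0.20, 100.0                    # assumed rate, volatility, strike

        def bs_call(S, tau):
            # Black-Scholes European call price
            d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
            d2 = d1 - sigma * np.sqrt(tau)
            return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

        rng = np.random.default_rng(5)
        S = rng.uniform(50.0, 150.0, 4000)                 # spot prices
        tau = rng.uniform(0.05, 1.0, 4000)                 # times to maturity (years)
        X = np.column_stack([S / K, tau])                  # moneyness and maturity as inputs
        y = bs_call(S, tau) / K                            # price scaled by the strike

        net = MLPRegressor(hidden_layer_sizes=(30, 30), max_iter=5000, random_state=0)
        net.fit(X, y)

        S0, tau0 = 105.0, 0.5
        print("closed form:", float(bs_call(S0, tau0)))
        print("network    :", float(K * net.predict([[S0 / K, tau0]])[0]))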

  20. Radar signal categorization using a neural network

    NASA Technical Reports Server (NTRS)

    Anderson, James A.; Gately, Michael T.; Penz, P. Andrew; Collins, Dean R.

    1991-01-01

    Neural networks were used to analyze a complex simulated radar environment which contains noisy radar pulses generated by many different emitters. The neural network used is an energy minimizing network (the BSB model) which forms energy minima - attractors in the network dynamical system - based on learned input data. The system first determines how many emitters are present (the deinterleaving problem). Pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, it is possible to make tentative identifications of them based on their observed parameters. As a test of this idea, a neural network was used to form a small data base that potentially could make emitter identifications.

  1. Neural Network Solutions to Optical Absorption Spectra

    NASA Astrophysics Data System (ADS)

    Rosenbrock, Conrad

    2012-10-01

    Artificial neural networks have been effective in reducing computation time while achieving remarkable accuracy for a variety of difficult physics problems. Neural networks are trained iteratively by varying the parameters of sums of non-linear functions, adjusting their size and shape to fit results for complex non-linear systems. For smaller structures, ab initio simulation methods can be used to determine absorption spectra under field perturbations. However, these methods are impractical for larger structures. Designing and training an artificial neural network with simulated data from time-dependent density functional theory may allow time-dependent perturbation effects to be calculated more efficiently. I investigate the design considerations and results of neural network implementations for calculating perturbation-coupled electron oscillations in small molecules.

  2. Stimulated Photorefractive Optical Neural Networks

    DTIC Science & Technology

    1992-12-15

    This final report describes research in optical neural networks performed under DARPA sponsorship at Hughes Aircraft Company during the period 1989 ... in photorefractive crystals. This approach reduces crosstalk and improves the utilization of the optical input device. Successfully implemented neural networks include the Perceptron, Bidirectional Associative Memory, and multi-layer backpropagation networks. Up to 10⁴ neurons, 2×10⁷ weights, and

  3. Modeling and simulation of xylitol production in bioreactor by Debaryomyces nepalensis NCYC 3413 using unstructured and artificial neural network models.

    PubMed

    Pappu, J Sharon Mano; Gummadi, Sathyanarayana N

    2016-11-01

    This study examines the use of an unstructured kinetic model and artificial neural networks as predictive tools for xylitol production by Debaryomyces nepalensis NCYC 3413 in a bioreactor. An unstructured kinetic model was proposed in order to assess the influence of pH (4, 5 and 6), temperature (25°C, 30°C and 35°C) and volumetric oxygen transfer coefficient kLa (0.14 h⁻¹, 0.28 h⁻¹ and 0.56 h⁻¹) on growth and xylitol production. A feed-forward back-propagation artificial neural network (ANN) has been developed to investigate the effect of process conditions on xylitol production. An ANN configuration of 6-10-3 layers was selected and trained with 339 experimental data points from bioreactor studies. Results showed that the simulation and prediction accuracy of the ANN was apparently higher when compared to the unstructured mechanistic model under varying operational conditions. The ANN was found to be an efficient data-driven tool to predict the optimal harvest time in xylitol production.
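
    The 6-10-3 feed-forward architecture mentioned above can be sketched as follows; the six inputs, the fabricated response surfaces and the use of scikit-learn are illustrative assumptions, since the 339 bioreactor data points themselves are not reproduced in this record.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(6)
        # six synthetic inputs: pH, temperature (C), kLa (1/h), time (h) and two extra (assumed) process variables
        X = rng.uniform([4.0, 25.0, 0.14, 0.0, 0.0, 0.0],
                        [6.0, 35.0, 0.56, 120.0, 1.0, 1.0], size=(339, 6))
        # three fabricated responses standing in for biomass, xylitol and residual substrate
        Y = np.column_stack([
            0.10 * X[:, 3] / (1.0 + np.exp(-(X[:, 0] - 5.0))),
            0.05 * X[:, 3] * X[:, 2],
            100.0 - 0.50 * X[:, 3],
        ]) + rng.normal(0.0, 0.5, (339, 3))

        net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)   # 6-10-3 layout
        net.fit(X, Y)
        print("R^2 on the training data:", net.score(X, Y))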

  4. Structural properties and interaction energies affecting drug design. An approach combining molecular simulations, statistics, interaction energies and neural networks.

    PubMed

    Ioannidis, Dimitris; Papadopoulos, Georgios E; Anastassopoulos, Georgios; Kortsaris, Alexandros; Anagnostopoulos, Konstantinos

    2015-06-01

    In order to elucidate some basic principles for protein-ligand interactions, a subset of 87 structures of human proteins with their ligands was obtained from the PDB databank. After a short molecular dynamics simulation (to ensure structure stability), a variety of interaction energies and structural parameters were extracted. Linear regression was performed to determine which of these parameters have a potentially significant contribution to the protein-ligand interaction. The parameters exhibiting relatively high correlation coefficients were selected. Important factors seem to be the number of ligand atoms, the ratio of N, O and S atoms to total ligand atoms, the hydrophobic/polar amino acid ratio and the ratio of cavity size to the sum of ligand plus water atoms in the cavity. The immobile water molecules in the cavity also seem to be an important factor. Nine of these parameters were used as known inputs to train a neural network in the prediction of seven others. Eight structures were left out of the training to test the quality of the predictions. After optimization of the neural network, the predictions were fairly accurate given the relatively small number of structures, especially in the prediction of the number of nitrogen and sulfur atoms of the ligand.

  5. Warranty optimisation based on the prediction of costs to the manufacturer using neural network model and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Stamenkovic, Dragan D.; Popovic, Vladimir M.

    2015-02-01

    Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of warranty's marketing potential, the manufacturer needs to master the techniques for warranty cost prediction according to the reliability characteristics of the product. In this paper a combination free replacement and pro rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such an analysis. A neural network model is used to predict light bulb reliability characteristics based on the data from tests of light bulbs in various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. Reliability parameters obtained in this way are later used in a Monte Carlo simulation for the prediction of the times to failure needed for warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In this way, the manufacturer can lower the costs and increase the profit.
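
    The Monte Carlo step can be sketched as below: draw times to failure from a fitted lifetime distribution and price a combination free-replacement / pro-rata policy from them. The Weibull parameters, warranty limits and unit cost are invented for illustration and are not the values estimated by the paper's neural network.

        import numpy as np

        rng = np.random.default_rng(7)
        shape, scale = 1.8, 1500.0                 # assumed Weibull lifetime parameters (hours)
        W1, W = 500.0, 1200.0                      # free-replacement period and total warranty length
        c = 4.0                                    # assumed manufacturer cost of one replacement bulb

        t = scale * rng.weibull(shape, 100000)     # simulated times to failure

        cost = np.zeros_like(t)
        cost[t < W1] = c                                       # failure in the free-replacement period
        prorata = (t >= W1) & (t < W)
        cost[prorata] = c * (W - t[prorata]) / (W - W1)        # pro-rata refund afterwards

        print("expected warranty cost per unit sold:", float(cost.mean()))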

  6. Optical Neural Network Classifier Architectures

    DTIC Science & Technology

    1998-04-01

    We present an adaptive opto-electronic neural network hardware architecture capable of exploiting parallel optics to realize real-time processing and ... function neural network based on a previously demonstrated binary-input version. The greyscale-input capability broadens the range of applications for ... a reduced feature set of multiwavelet images to improve training times and discrimination capability of the neural network. The design uses a joint

  7. Analysis of Simple Neural Networks

    DTIC Science & Technology

    1988-12-20

    Analysis of Simple Neural Networks. Chedsada Chinrungrueng. Master's report under the supervision of Prof. Carlo H. Sequin, Department of ...

  8. Neural Networks For Robot Control

    DTIC Science & Technology

    2001-04-17

    following: (a) Application of artificial neural networks (multi-layer perceptrons, MLPs) for 2D planar robot arm by using the dynamic backpropagation...methods for the adjustment of parameters; and optimization of the architecture; (b) Application of artificial neural networks in controlling closed...studies in controlling dynamic robot arms by using neural networks in real-time process; (2) Research of optimal architectures used in closed-loop systems in order to compare with adaptive and robust control.

  9. Trimaran Resistance Artificial Neural Network

    DTIC Science & Technology

    2011-01-01

    11th International Conference on Fast Sea Transportation FAST 2011, Honolulu, Hawaii, USA, September 2011. Trimaran Resistance Artificial Neural Network, Richard... ...Artificial Neural Network and is restricted to the center and side-hull configurations tested. The value in the parametric model is that it is able to...

  10. Application of artificial neural networks to gaming

    NASA Astrophysics Data System (ADS)

    Baba, Norio; Kita, Tomio; Oda, Kazuhiro

    1995-04-01

    Recently, neural network technology has been applied to various actual problems. It has succeeded in producing a large number of intelligent systems. In this article, we suggest that it could also be applied to the field of gaming. In particular, we suggest that the neural network model could be used to mimic players' characters. Several computer simulation results using a computer gaming system, a modified version of the COMMONS GAME, confirm our idea.

  11. Neural-network simulation of tonal categorization based on F0 velocity profiles

    NASA Astrophysics Data System (ADS)

    Gauthier, Bruno; Shi, Rushen; Xu, Yi; Proulx, Robert

    2005-04-01

    Perception studies have shown that by the age of six months, infants show particular response patterns to tones in their native language. The present study focuses on how infants might develop lexical tones in Mandarin. F0 is generally considered the main cue in tone perception. However, F0 patterns in connected speech display extensive contextual variability. Since speech input to infants consists mainly of multi-word utterances, tone learning must involve processes that can effectively resolve variability. In this study we explore the Target Approximation model (Xu and Wang, 2001), which characterizes surface F0 as asymptotic movements toward underlying pitch targets defined as simple linear functions. The model predicts that it is possible to infer underlying pitch targets from the manner of F0 movements. Using production data from three of the speakers in Xu (1997), we trained a self-organizing neural network with both F0 profiles and F0 velocity profiles as input. In the testing phase, velocity profiles yielded far better categorization than F0 profiles. The results confirm that velocity profiles can effectively abstract away from surface variability and directly reflect underlying articulatory goals. The finding thus points to one way in which infants can successfully arrive at phonetic categories from adult speech.

  12. Neural Networks, Reliability and Data Analysis

    DTIC Science & Technology

    1993-01-01

    Neural network technology has been surveyed with the intent of determining the feasibility and impact neural networks may have in the area of...automated reliability tools. Data analysis capabilities of neural networks appear to be very applicable to reliability science due to similar mathematical...tendencies in data. Keywords: Neural networks, Reliability, Data analysis, Automated reliability tools, Automated intelligent information processing, Statistical neural network.

  13. Pruning artificial neural networks using neural complexity measures.

    PubMed

    Jorgensen, Thomas D; Haynes, Barry P; Norlund, Charlotte C F

    2008-10-01

    This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network from which they originate. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduction in the dimensionality of the network.
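
    For context, the baseline that the complexity-based method is compared against, Magnitude Based Pruning, can be sketched in a few lines; the paper's information-theoretic criterion itself is not reproduced here.

    ```python
    # Magnitude Based Pruning baseline: zero out a fixed fraction of the
    # smallest-magnitude weights in each layer (toy weights only).
    import numpy as np

    def magnitude_prune(weights, fraction):
        pruned = []
        for w in weights:
            threshold = np.quantile(np.abs(w), fraction)
            pruned.append(w * (np.abs(w) >= threshold))
        return pruned

    rng = np.random.default_rng(0)
    layers = [rng.normal(size=(2, 5)), rng.normal(size=(5, 1))]   # toy 2-5-1 network
    for before, after in zip(layers, magnitude_prune(layers, fraction=0.4)):
        print("kept %d of %d weights" % (np.count_nonzero(after), before.size))
    ```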

  14. Interacting neural networks

    NASA Astrophysics Data System (ADS)

    Metzler, R.; Kinzel, W.; Kanter, I.

    2000-08-01

    Several scenarios of interacting neural networks that are trained either identically or competitively are solved analytically. In the case of identical training, each perceptron receives the output of its neighbor. The symmetry of the stationary state as well as the sensitivity to the training algorithm used are investigated. Two competitive perceptrons trained on mutually exclusive learning aims and a perceptron that is trained on the opposite of its own output are examined analytically. An ensemble of competitive perceptrons is used as a decision-making algorithm in a model of a closed market (the El Farol Bar problem, or Minority Game, in which a set of agents must each make a binary decision); each network is trained on the history of minority decisions. This ensemble of perceptrons relaxes to a stationary state whose performance can be better than random.
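
    A toy version of the perceptron-ensemble Minority Game can be sketched as follows. The learning rule (nudging only the agents that ended up in the majority toward the minority outcome) and all sizes are illustrative assumptions rather than the analytic setup solved in the paper.

    ```python
    # Toy perceptron-ensemble Minority Game (illustrative learning rule only):
    # each agent decides +/-1 from a window of past minority outcomes, and the
    # agents that landed in the majority are nudged toward the minority side.
    import numpy as np

    rng = np.random.default_rng(1)
    n_agents, memory, steps, lr = 51, 8, 2000, 0.05

    w = rng.normal(size=(n_agents, memory))
    history = rng.choice([-1.0, 1.0], size=memory)
    on_minority = 0

    for _ in range(steps):
        decisions = np.sign(w @ history)
        decisions[decisions == 0] = 1.0
        minority = -np.sign(decisions.sum())            # side chosen by fewer agents
        on_minority += np.count_nonzero(decisions == minority)
        losers = (decisions != minority).astype(float)
        w += lr * np.outer(losers * minority, history)  # perceptron-style update
        history = np.roll(history, -1)
        history[-1] = minority

    print("fraction of decisions on the minority side: %.3f"
          % (on_minority / (steps * n_agents)))
    ```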

  15. Neural network modeling of emotion

    NASA Astrophysics Data System (ADS)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  16. Chlorophyll a Simulation in a Lake Ecosystem Using a Model with Wavelet Analysis and Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Wang, Xuan; Chen, Bin; Zhao, Ying; Yang, Zhifeng

    2013-05-01

    Accurate and reliable forecasting is important for the sustainable management of ecosystems. Chlorophyll a (Chl a) simulation and forecasting can provide early warning information and enable managers to make appropriate decisions for protecting lake ecosystems. In this study, we proposed a method for Chl a simulation in a lake that coupled the wavelet analysis and the artificial neural networks (WA-ANN). The proposed method had the advantage of data preprocessing, which reduced noise and managed nonstationary data. Fourteen variables were included in the developed and validated model, relating to hydrologic, ecological and meteorologic time series data from January 2000 to December 2009 at the Lake Baiyangdian study area, North China. The performance of the proposed WA-ANN model for monthly Chl a simulation in the lake ecosystem was compared with a multiple stepwise linear regression (MSLR) model, an autoregressive integrated moving average (ARIMA) model and a regular ANN model. The results showed that the WA-ANN model was suitable for Chl a simulation providing a more accurate performance than the MSLR, ARIMA, and ANN models. We recommend that the proposed method be widely applied to further facilitate the development and implementation of lake ecosystem management.
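
    The wavelet-plus-network idea can be sketched with PyWavelets and a small MLP: decompose a driver series into sub-series and feed the components, rather than the raw values, to the network. The data, wavelet choice and decomposition level below are placeholders, not the Lake Baiyangdian setup.

    ```python
    # Wavelet-ANN sketch: decompose a driver series with a discrete wavelet
    # transform and feed the reconstructed sub-series to a small network
    # (data, wavelet and level are placeholders).
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    months = 120
    driver = np.sin(np.arange(months) * 2 * np.pi / 12) + 0.3 * rng.normal(size=months)
    chl_a = 0.8 * np.roll(driver, 1) + 0.1 * rng.normal(size=months)   # toy target

    coeffs = pywt.wavedec(driver, "db4", level=2)
    components = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        components.append(pywt.waverec(keep, "db4")[:months])
    X = np.column_stack(components)            # one column per wavelet sub-series

    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    model.fit(X[:100], chl_a[:100])
    print("test RMSE:", np.sqrt(np.mean((model.predict(X[100:]) - chl_a[100:]) ** 2)))
    ```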

  18. Integration of Volterra model with artificial neural networks for rainfall-runoff simulation in forested catchment of northern Iran

    NASA Astrophysics Data System (ADS)

    Kashani, Mahsa H.; Ghorbani, Mohammad Ali; Dinpashoh, Yagob; Shahmorad, Sedaghat

    2016-09-01

    Rainfall-runoff simulation is an important task in water resources management. In this study, an integrated Volterra model with artificial neural networks (IVANN) was presented to simulate the rainfall-runoff process. The proposed integrated model includes the semi-distributed forms of the Volterra and ANN models, which can explore spatial variation in the rainfall-runoff process without requiring physical characteristic parameters of the catchments, while taking advantage of the potential of the Volterra and ANN models in nonlinear mapping. The IVANN model was developed using hourly rainfall and runoff data pertaining to thirteen storms to study short-term responses of a forest catchment in northern Iran, and its performance was compared with that of a semi-distributed integrated ANN (IANN) model and a lumped Volterra model. The Volterra model was applied as a nonlinear model (second-order Volterra (SOV) model) and solved using the ordinary least squares (OLS) method. The models' performance was evaluated and compared using five criteria, namely the coefficient of efficiency, root mean square error, error of total volume, relative error of peak discharge, and error in time to peak. Results showed that the IVANN model performs better than the other semi-distributed and lumped models in simulating the rainfall-runoff process. Compared to the integrated models, the lumped SOV model has lower precision in simulating the rainfall-runoff process.

  19. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    PubMed

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks.

  20. Using Neural Networks for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William

    1998-01-01

    This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.
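
    The second approach (auto-associative estimation of a failed sensor) can be sketched with a bottleneck network trained to reproduce a set of redundant sensors; the synthetic signals, network size and iterative replacement of the failed channel are illustrative assumptions.

    ```python
    # Auto-associative sketch: a bottleneck network reproduces redundant
    # sensors, and a failed channel is replaced by its reconstruction.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n = 2000
    state = rng.uniform(0.0, 1.0, size=n)                    # hidden operating condition
    sensors = np.column_stack([state + 0.01 * rng.normal(size=n) for _ in range(5)])

    # Bottleneck (auto-associative) network trained to reproduce its own inputs.
    auto = MLPRegressor(hidden_layer_sizes=(8, 2, 8), activation="tanh",
                        max_iter=5000, random_state=0)
    auto.fit(sensors, sensors)

    # Simulate a failure of sensor 0 and iteratively replace it by its estimate.
    faulty = sensors.copy()
    faulty[:, 0] = 0.0
    for _ in range(10):
        faulty[:, 0] = auto.predict(faulty)[:, 0]

    print("mean absolute error of the reconstructed sensor:",
          np.abs(faulty[:, 0] - sensors[:, 0]).mean())
    ```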

  1. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were then compared. The results revealed that the proposed approach is practicable for optimizing a soil sampling scheme. The optimized configuration captured soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining a sampling configuration and mapping the spatial distribution of soil organic matter at low cost and high efficiency.
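
    The spatial-optimization step can be illustrated with a generic simulated-annealing sketch over candidate sites; the coverage objective, cooling schedule and data below are placeholders rather than the study's actual criterion.

    ```python
    # Generic simulated-annealing sketch for picking sampling sites from
    # road-accessible candidates (objective and data are illustrative).
    import numpy as np

    rng = np.random.default_rng(0)
    candidates = rng.uniform(0, 10, size=(200, 2))   # candidate sites along roads
    n_samples = 13

    def coverage_cost(idx):
        # mean distance from every candidate location to its nearest selected site
        d = np.linalg.norm(candidates[:, None, :] - candidates[idx][None, :, :], axis=2)
        return d.min(axis=1).mean()

    current = rng.choice(len(candidates), size=n_samples, replace=False)
    cost, temperature = coverage_cost(current), 1.0
    for _ in range(5000):
        replacement = rng.integers(len(candidates))
        if replacement in current:
            continue
        proposal = current.copy()
        proposal[rng.integers(n_samples)] = replacement
        new_cost = coverage_cost(proposal)
        if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temperature):
            current, cost = proposal, new_cost
        temperature *= 0.999

    print("final mean nearest-sample distance: %.3f" % cost)
    ```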

  2. Dynamic interactions in neural networks

    SciTech Connect

    Arbib, M.A.; Amari, S.

    1989-01-01

    The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.

  3. Technology Assessment of Neural Networks

    DTIC Science & Technology

    1989-02-13

    Unlike a Von Neumann type of computer which needs to be programmed to carry out an information-processing function, neural networks are promised as...trainable through a series of trials to learn how to process information. An assessment of the current, near-term (1995), and long-term (2010) trends in Neural Networks is given.

  4. Phase Detection Using Neural Networks.

    DTIC Science & Technology

    1997-03-10

    A likelihood of detecting a reflected signal characterized by phase discontinuities and background noise is enhanced by utilizing neural networks to...identify coherency intervals. The received signal is processed into a predetermined format such as a digital time series. Neural networks perform

  5. Neural network applications in telecommunications

    NASA Technical Reports Server (NTRS)

    Alspector, Joshua

    1994-01-01

    Neural network capabilities include automatic and organized handling of complex information, quick adaptation to continuously changing environments, nonlinear modeling, and parallel implementation. This viewgraph presentation presents Bellcore work on applications, learning chip computational function, learning system block diagram, neural network equalization, broadband access control, calling-card fraud detection, software reliability prediction, and conclusions.

  6. Neural Networks for the Beginner.

    ERIC Educational Resources Information Center

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  7. Hybrid Neural Network for Pattern Recognition.

    DTIC Science & Technology

    1997-02-03

    two one-layer neural networks and the second stage comprises a feedforward two-layer neural network. A method for recognizing patterns is also...topological representations of the input patterns using the first and second neural networks. The method further comprises providing a third neural network for...classifying and recognizing the inputted patterns and training the third neural network with a back-propagation algorithm so that the third neural network recognizes at least one interested pattern.

  8. Simulation of groundwater level variations using wavelet combined with neural network, linear regression and support vector machine

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Hadi; Rajaee, Taher

    2017-01-01

    Simulation of groundwater level (GWL) fluctuations is an important task in the management of groundwater resources. In this study, the effect of wavelet analysis on the training of the artificial neural network (ANN), multiple linear regression (MLR) and support vector regression (SVR) approaches was investigated, and the ANN, MLR and SVR models, along with the wavelet-ANN (WNN), wavelet-MLR (WLR) and wavelet-SVR (WSVR) models, were compared in simulating GWL one month ahead. The only variable used to develop the models was the monthly GWL data recorded over a period of 11 years from two wells in the Qom plain, Iran. The results showed that decomposing the GWL time series into several sub-time series greatly improved the training of the models. For both wells 1 and 2, the Meyer and Db5 wavelets produced better results than the other wavelets, which indicates that wavelet types behave similarly in similar case studies. The optimal number of delays was 6 months, which seems to be due to natural phenomena. The best WNN model, using the Meyer mother wavelet with two decomposition levels, simulated one-month-ahead GWL with RMSE values of 0.069 m and 0.154 m for wells 1 and 2, respectively. The RMSE values for the WLR model were 0.058 m and 0.111 m, and for the WSVR model 0.136 m and 0.060 m for wells 1 and 2, respectively.

  9. Further validation of artificial neural network-based emissions simulation models for conventional and hybrid electric vehicles.

    PubMed

    Tóth-Nagy, Csaba; Conley, John J; Jarrett, Ronald P; Clark, Nigel N

    2006-07-01

    With the advent of hybrid electric vehicles, computer-based vehicle simulation becomes more useful to the engineer and designer trying to optimize the complex combination of control strategy, power plant, drive train, vehicle, and driving conditions. With the desire to incorporate emissions as a design criterion, researchers at West Virginia University have developed artificial neural network (ANN) models for predicting emissions from heavy-duty vehicles. The ANN models were trained on engine and exhaust emissions data collected from transient dynamometer tests of heavy-duty diesel engines then used to predict emissions based on engine speed and torque data from simulated operation of a tractor truck and hybrid electric bus. Simulated vehicle operation was performed with the ADVISOR software package. Predicted emissions (carbon dioxide [CO2] and oxides of nitrogen [NO(x)]) were then compared with actual emissions data collected from chassis dynamometer tests of similar vehicles. This paper expands on previous research to include different driving cycles for the hybrid electric bus and varying weights of the conventional truck. Results showed that different hybrid control strategies had a significant effect on engine behavior (and, thus, emissions) and may affect emissions during different driving cycles. The ANN models underpredicted emissions of CO2 and NO(x) in the case of a class-8 truck but were more accurate as the truck weight increased.

  10. Neural networks in astronomy.

    PubMed

    Tagliaferri, Roberto; Longo, Giuseppe; Milano, Leopoldo; Acernese, Fausto; Barone, Fabrizio; Ciaramella, Angelo; De Rosa, Rosario; Donalek, Ciro; Eleuteri, Antonio; Raiconi, Giancarlo; Sessa, Salvatore; Staiano, Antonino; Volpicelli, Alfredo

    2003-01-01

    In the last decade, the use of neural networks (NN) and of other soft computing methods has begun to spread also in the astronomical community which, due to the required accuracy of the measurements, is usually reluctant to use automatic tools to perform even the most common tasks of data reduction and data mining. The federation of heterogeneous large astronomical databases which is foreseen in the framework of the astrophysical virtual observatory and national virtual observatory projects, is, however, posing unprecedented data mining and visualization problems which will find a rather natural and user-friendly answer in artificial intelligence tools based on NNs, fuzzy sets or genetic algorithms. This review is aimed at both astronomers (who often have little knowledge of the methodological background) and computer scientists (who often know little about potentially interesting applications), and therefore will be structured as follows: after giving a short introduction to the subject, we shall summarize the methodological background and focus our attention on some of the most interesting fields of application, namely: object extraction and classification, time series analysis, noise identification, and data mining. Most of the original work described in the paper has been performed in the framework of the AstroNeural collaboration (Napoli-Salerno).

  11. Teaching Students to Model Neural Circuits and Neural Networks Using an Electronic Spreadsheet Simulator. Microcomputing Working Paper Series.

    ERIC Educational Resources Information Center

    Hewett, Thomas T.

    There are a number of areas in psychology where an electronic spreadsheet simulator can be used to study and explore functional relationships among a number of parameters. For example, when dealing with sensation, perception, and pattern recognition, it is sometimes desirable for students to understand both the basic neurophysiology and the…

  12. Neural networks for calibration tomography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur

    1993-01-01

    Artificial neural networks are suitable for performing pattern-to-pattern calibrations. These calibrations are potentially useful for facilities operations in aeronautics, the control of optical alignment, and the like. Computed tomography is compared with neural net calibration tomography for estimating density from its x-ray transform. X-ray transforms are measured, for example, in diffuse-illumination, holographic interferometry of fluids. Computed tomography and neural net calibration tomography are shown to have comparable performance for a 10 degree viewing cone and 29 interferograms within that cone. The system of tomography discussed is proposed as a relevant test of neural networks and other parallel processors intended for using flow visualization data.

  13. Aerodynamic Design Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.

    2003-01-01

    The design of aerodynamic components of aircraft, such as wings or engines, involves a process of obtaining the most optimal component shape that can deliver the desired level of component performance, subject to various constraints, e.g., total weight or cost, that the component must satisfy. Aerodynamic design can thus be formulated as an optimization problem that involves the minimization of an objective function subject to constraints. A new aerodynamic design optimization procedure based on neural networks and response surface methodology (RSM) incorporates the advantages of both traditional RSM and neural networks. The procedure uses a strategy, denoted parameter-based partitioning of the design space, to construct a sequence of response surfaces based on both neural networks and polynomial fits to traverse the design space in search of the optimal solution. Some desirable characteristics of the new design optimization procedure include the ability to handle a variety of design objectives, easily impose constraints, and incorporate design guidelines and rules of thumb. It provides an infrastructure for variable fidelity analysis and reduces the cost of computation by using less-expensive, lower fidelity simulations in the early stages of the design evolution. The initial or starting design can be far from optimal. The procedure is easy and economical to use in large-dimensional design space and can be used to perform design tradeoff studies rapidly. Designs involving multiple disciplines can also be optimized. Some practical applications of the design procedure that have demonstrated some of its capabilities include the inverse design of an optimal turbine airfoil starting from a generic shape and the redesign of transonic turbines to improve their unsteady aerodynamic characteristics.

  14. Electronic device aspects of neural network memories

    NASA Technical Reports Server (NTRS)

    Lambe, J.; Moopenn, A.; Thakoor, A. P.

    1985-01-01

    The basic issues related to the electronic implementation of the neural network model (NNM) for content addressable memories are examined. A brief introduction to the principles of the NNM is followed by an analysis of the information storage of the neural network in the form of a binary connection matrix and the recall capability of such matrix memories based on a hardware simulation study. In addition, materials and device architecture issues involved in the future realization of such networks in VLSI-compatible ultrahigh-density memories are considered. A possible space application of such devices would be in the area of large-scale information storage without mechanical devices.
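
    The storage and recall scheme mentioned above (a binary/bipolar connection matrix acting as a content-addressable memory) can be sketched as follows; the pattern count, size and synchronous update rule are illustrative choices.

    ```python
    # Content-addressable recall from a bipolar connection matrix built with
    # the outer-product (Hebbian) rule, Hopfield-style.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 64
    patterns = rng.choice([-1, 1], size=(3, n))

    W = sum(np.outer(p, p) for p in patterns)
    np.fill_diagonal(W, 0)                     # no self-connections

    probe = patterns[0].copy()
    probe[:12] *= -1                           # corrupt part of the stored pattern
    state = probe
    for _ in range(10):                        # synchronous recall iterations
        state = np.where(W @ state >= 0, 1, -1)

    print("bits matching the stored pattern:", int((state == patterns[0]).sum()), "of", n)
    ```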

  15. Neural networks for nuclear spectroscopy

    SciTech Connect

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T.

    1995-12-31

    In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.
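
    The OLAM idea, identifying the composition of a mixed spectrum as a linear superposition of known library spectra, reduces to a least-squares (pseudo-inverse) solution, as in this sketch with random stand-in spectra.

    ```python
    # OLAM-style identification: recover mixing weights of known library
    # spectra by least squares (random stand-ins for real calibration spectra).
    import numpy as np

    rng = np.random.default_rng(0)
    channels, n_isotopes = 256, 4
    library = np.abs(rng.normal(size=(channels, n_isotopes)))   # known spectra, one per column

    true_mix = np.array([0.6, 0.0, 0.3, 0.1])
    measured = library @ true_mix + 0.01 * rng.normal(size=channels)

    weights, *_ = np.linalg.lstsq(library, measured, rcond=None)
    print("estimated composition:", np.round(weights, 2))
    ```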

  16. Modular, Hierarchical Learning By Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    Modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks more structured than networks in which all neurons are fully interconnected. These networks utilize general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.

  17. A Projection Neural Network for Constrained Quadratic Minimax Optimization.

    PubMed

    Liu, Qingshan; Wang, Jun

    2015-11-01

    This paper presents a projection neural network described by a dynamic system for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for global convergence of the proposed neural network. Compared with some of the existing neural networks for quadratic minimax optimization, the proposed neural network is capable of solving more general constrained quadratic minimax optimization problems, and it does not include any design parameter. Moreover, the neural network has lower model complexity: its number of state variables is equal to the dimension of the optimization problem. Simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.

  18. The use of artificial neural network (ANN) for the prediction and simulation of oil degradation in wastewater by AOP.

    PubMed

    Mustafa, Yasmen A; Jaid, Ghydaa M; Alwared, Abeer I; Ebrahim, Mothana

    2014-06-01

    The application of an advanced oxidation process (AOP) in the treatment of wastewater contaminated with oil was investigated in this study. The AOP investigated is the homogeneous photo-Fenton (UV/H2O2/Fe(+2)) process. The reaction is influenced by the input concentration of hydrogen peroxide H2O2, the amount of the iron catalyst Fe(+2), pH, temperature, irradiation time, and the concentration of oil in the wastewater. The removal efficiency of the system at the optimal operational parameters (H2O2 = 400 mg/L, Fe(+2) = 40 mg/L, pH = 3, irradiation time = 150 min, and temperature = 30 °C) for a 1,000 mg/L oil load was found to be 72%. The study examined the implementation of an artificial neural network (ANN) for the prediction and simulation of oil degradation in aqueous solution by the photo-Fenton process. The multilayered feed-forward networks were trained using a backpropagation algorithm; a three-layer network with 22 neurons in the hidden layer gave optimal results. The results show that the ANN model can predict the experimental results with a high correlation coefficient (R² = 0.9949). The sensitivity analysis showed that all studied variables (H2O2, Fe(+2), pH, irradiation time, temperature, and oil concentration) have a strong effect on the oil degradation. The pH was found to be the most influential parameter, with a relative importance of 20.6%.
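
    The reported model shape (six operating variables, one hidden layer of 22 neurons) can be sketched as follows; the synthetic data and the response function used to generate it are placeholders, not the experimental measurements.

    ```python
    # Sketch of the reported network shape: six operating variables in, removal
    # efficiency out, one hidden layer of 22 neurons (synthetic data only).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 300
    X = np.column_stack([
        rng.uniform(100, 600, n),   # H2O2 (mg/L)
        rng.uniform(10, 80, n),     # Fe(+2) (mg/L)
        rng.uniform(2, 6, n),       # pH
        rng.uniform(30, 180, n),    # irradiation time (min)
        rng.uniform(20, 40, n),     # temperature (deg C)
        rng.uniform(200, 1500, n),  # oil load (mg/L)
    ])
    # Invented response surface standing in for the measured removal efficiency (%).
    removal = 100.0 / (1.0 + np.exp(-(0.004 * X[:, 0] + 0.02 * X[:, 3]
                                      - 0.003 * X[:, 5] - 1.0)))

    scaler = StandardScaler().fit(X)
    net = MLPRegressor(hidden_layer_sizes=(22,), max_iter=5000, random_state=0)
    net.fit(scaler.transform(X), removal)
    pred = net.predict(scaler.transform(X))
    print("R^2 on the synthetic data: %.4f" % (1 - np.var(removal - pred) / np.var(removal)))
    ```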

  19. Uncertainty analysis of a combined Artificial Neural Network - Fuzzy logic - Kriging system for spatial and temporal simulation of Hydraulic Head.

    NASA Astrophysics Data System (ADS)

    Tapoglou, Evdokia; Karatzas, George P.; Trichakis, Ioannis C.; Varouchakis, Emmanouil A.

    2015-04-01

    The purpose of this study is to evaluate the uncertainty, using various methodologies, in a combined Artificial Neural Network (ANN) - Fuzzy logic - Kriging system, which can simulate spatially and temporally the hydraulic head in an aquifer. This system uses ANNs for the temporal prediction of hydraulic head in various locations, one ANN for every location with available data, and Kriging for the spatial interpolation of the ANNs' results. Fuzzy logic is used to interconnect the two methodologies. The full description of the initial system and its functionality can be found in Tapoglou et al. (2014). Two methodologies were used for the calculation of uncertainty for the implementation of the algorithm in a study area. First, the uncertainty of the Kriging parameters was examined using a Bayesian bootstrap methodology. In this case the variogram is calculated first using the traditional methodology of Ordinary Kriging. Using the parameters derived and the covariance function of the model, the covariance matrix is constructed. A common method for testing a statistical model is the use of artificial data. Normal random number generation is the first step in this procedure, and by multiplying these numbers by the decomposed covariance matrix, correlated random numbers (a sample set) can be calculated. These random values are then fitted to a variogram and the value at an unknown location is estimated using Kriging. The distribution of the values simulated by Kriging of different correlated random values can be used in order to derive the prediction intervals of the process. In this study 500 variograms were constructed for every time step and prediction point, using the method described above, and their results are presented as the 95th and 5th percentiles of the predictions. The second methodology involved the uncertainty of ANN training. In this case, for all the data points, 300 different trainings were implemented, having different training datasets each time
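
    The correlated-sampling step described above, turning independent normal numbers into spatially correlated samples via a decomposed covariance matrix, can be sketched as follows; the exponential variogram model and its parameters are assumptions.

    ```python
    # Correlated-sampling sketch: covariance from an assumed exponential
    # variogram, Cholesky decomposition, then correlated normal samples.
    import numpy as np

    rng = np.random.default_rng(0)
    points = rng.uniform(0, 10, size=(50, 2))      # observation locations
    sill, corr_range, nugget = 1.0, 3.0, 0.05      # assumed variogram parameters

    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    cov = sill * np.exp(-dist / corr_range) + nugget * np.eye(len(points))

    L = np.linalg.cholesky(cov)                    # decomposed covariance matrix
    samples = L @ rng.normal(size=(len(points), 500))   # 500 correlated sample sets
    print("empirical variance at one location: %.2f" % samples[0].var())
    ```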

  20. A Complexity Theory of Neural Networks

    DTIC Science & Technology

    1990-04-14

    Significant results have been obtained on the computational complexity of analog neural networks and distributed voting. The computing power and...learning algorithms for limited precision analog neural networks have been investigated. Lower bounds for constant depth, polynomial size analog neural networks, and a limited version of discrete neural networks have been obtained. The work on distributed voting has important applications for distributed

  1. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor data to fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor data to fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Secondly, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.

  2. Using neural networks for dynamic light scattering time series processing

    NASA Astrophysics Data System (ADS)

    Chicea, Dan

    2017-04-01

    A basic experiment to record dynamic light scattering (DLS) time series was assembled from simple components. DLS time series processing using a Lorentzian function fit was taken as the reference. A neural network was designed and trained using simulated frequency spectra for spherical particles in the range 0–350 nm, assumed to be scattering centers, and the neural network design and training procedure are described in detail. The neural network output accuracy was tested both on simulated and on experimental time series. The match with the DLS results, taken as the reference, was good, serving as a proof of concept for using neural networks in fast DLS time series processing.
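
    The reference processing step, fitting a Lorentzian to the power spectrum of the scattered-intensity time series, can be sketched as follows; the synthetic signal, sampling rate and linewidth are placeholders, and the conversion from linewidth to particle size is not shown.

    ```python
    # Lorentzian fit to the power spectrum of a synthetic DLS-like signal.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    fs, n = 5000.0, 2 ** 14                    # sampling rate (Hz) and number of samples
    gamma_true = 120.0                         # half-width of the toy spectrum (Hz)

    # Ornstein-Uhlenbeck-like process standing in for the detector signal; its
    # power spectrum is approximately Lorentzian with half-width gamma_true.
    x = np.zeros(n)
    decay = np.exp(-2 * np.pi * gamma_true / fs)
    for i in range(1, n):
        x[i] = x[i - 1] * decay + rng.normal()

    freq = np.fft.rfftfreq(n, d=1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2

    def lorentzian(f, s0, gamma):
        return s0 * gamma ** 2 / (f ** 2 + gamma ** 2)

    popt, _ = curve_fit(lorentzian, freq[1:], power[1:], p0=(power[1:20].mean(), 100.0))
    print("fitted half-width: %.1f Hz (true value %.1f Hz)" % (popt[1], gamma_true))
    # Converting the half-width to a particle size would use the scattering
    # vector and the Stokes-Einstein relation (not shown here).
    ```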

  3. Collective Computation of Neural Network

    DTIC Science & Technology

    1990-03-15

    Sciences, Beijing ABSTRACT Computational neuroscience is a new branch of neuroscience originating from current research on the theory of computer...scientists working in artificial intelligence engineering and neuroscience . The paper introduces the collective computational properties of model neural...vision research. On this basis, the authors analyzed the significance of the Hopfield model. Key phrases: Computational Neuroscience , Neural Network, Model

  4. Artificial Neural Network Analysis System

    DTIC Science & Technology

    2007-11-02

    Target detection, multi-target tracking and threat identification of ICBM and its warheads by sensor fusion and data fusion of sensors in a fuzzy neural network system based on the compound eye of a fly.

  5. The holographic neural network: Performance comparison with other neural networks

    NASA Astrophysics Data System (ADS)

    Klepko, Robert

    1991-10-01

    The artificial neural network shows promise for use in recognition of high resolution radar images of ships. The holographic neural network (HNN) promises a very large data storage capacity and excellent generalization capability, both of which can be achieved with only a few learning trials, unlike most neural networks which require on the order of thousands of learning trials. The HNN is specially designed for pattern association storage, and mathematically realizes the storage and retrieval mechanisms of holograms. The pattern recognition capability of the HNN was studied, and its performance was compared with five other commonly used neural networks: the Adaline, Hamming, bidirectional associative memory, recirculation, and back propagation networks. The patterns used for testing represented artificial high resolution radar images of ships, and appear as a two dimensional topology of peaks with various amplitudes. The performance comparisons showed that the HNN does not perform as well as the other neural networks when using the same test data. However, modification of the data to make it appear more Gaussian distributed, improved the performance of the network. The HNN performs best if the data is completely Gaussian distributed.

  6. A Physics-driven Neural Networks-based Simulation System (PhyNNeSS) for multimodal interactive virtual environments involving nonlinear deformable objects.

    PubMed

    De, Suvranu; Deo, Dhannanjay; Sankaranarayanan, Ganesh; Arikatla, Venkata S

    2011-08-01

    BACKGROUND: While an update rate of 30 Hz is considered adequate for real time graphics, a much higher update rate of about 1 kHz is necessary for haptics. Physics-based modeling of deformable objects, especially when large nonlinear deformations and complex nonlinear material properties are involved, at these very high rates is one of the most challenging tasks in the development of real time simulation systems. While some specialized solutions exist, there is no general solution for arbitrary nonlinearities. METHODS: In this work we present PhyNNeSS - a Physics-driven Neural Networks-based Simulation System - to address this long-standing technical challenge. The first step is an off-line pre-computation step in which a database is generated by applying carefully prescribed displacements to each node of the finite element models of the deformable objects. In the next step, the data is condensed into a set of coefficients describing neurons of a Radial Basis Function network (RBFN). During real-time computation, these neural networks are used to reconstruct the deformation fields as well as the interaction forces. RESULTS: We present realistic simulation examples from interactive surgical simulation with real time force feedback. As an example, we have developed a deformable human stomach model and a Penrose-drain model used in the Fundamentals of Laparoscopic Surgery (FLS) training tool box. CONCLUSIONS: A unique computational modeling system has been developed that is capable of simulating the response of nonlinear deformable objects in real time. The method distinguishes itself from previous efforts in that a systematic physics-based pre-computational step allows training of neural networks which may be used in real time simulations. We show, through careful error analysis, that the scheme is scalable, with the accuracy being controlled by the number of neurons used in the simulation. PhyNNeSS has been integrated into SoFMIS (Software Framework for Multimodal

  7. Simulating the 'other-race' effect with autoassociative neural networks: further evidence in favor of the face-space model.

    PubMed

    Caldara, Roberto; Abdi, Hervé

    2006-01-01

    Other-race (OR) faces are less accurately recognized than same-race (SR) faces, but faster classified by race. This phenomenon has often been reported as the 'other-race' effect (ORE). Valentine (1991 Quarterly Journal of Experimental Psychology A: Human Experimental Psychology 43 161-204) proposed a theoretical multidimensional face-space model that explained both of these results, in terms of variations in exemplar density between races. According to this model, SR faces are more widely distributed across the dimensions of the space than OR faces. However, this model does not quantify nor state the dimensions coded within this face space. The aim of the present study was to test the face-space explanation of the ORE with neural network simulations by quantifying its dimensions. We found the predicted density properties of Valentine's framework in the face-projection spaces of the autoassociative memories. This was supported by an interaction for exemplar density between the race of the learned face set and the race of the faces. In addition, the elaborated face representations showed optimal responses for SR but not for OR faces within SR face spaces when explored at the individual level, as gender errors occurred significantly more often in OR than in SR face-space representations. Altogether, our results add further evidence in favor of a statistical exemplar density explanation of the ORE as suggested by Valentine, and question the plausibility of such coding for faces in the framework of recent neuroimaging studies.

  8. An Artificial Neural Network Control System for Spacecraft Attitude Stabilization

    DTIC Science & Technology

    1990-06-01

    training is based on the concept of enforced performance. A neural network will learn to meet a specific performance goal if the performance standard...is the only solution to a problem. Performance index training is devised to teach the neural network the time-optimal control law for the system. Real...time adaptation of a neural network in closed loop control of the Crew/Equipment Retriever was demonstrated in computer simulations.

  9. Position Sensorless Driving of BLDCM Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Guo, Hai-Jiao; Sagawa, Seiji; Ichinokura, Osamu

    A sensorless driving method for brushless DC motors (BLDCM) using neural networks is studied in this paper. Considering the nonlinear characteristics and the parameter errors of the model, neural networks are introduced to estimate the electromotive force (EMF). Simulation and experimental results using offline-trained neural networks show that the proposed method is useful. In addition, robustness with respect to the parameters is discussed.

  10. VLSI implementation of neural networks.

    PubMed

    Wilamowski, B M; Binfet, J; Kaynak, M O

    2000-06-01

    Currently, fuzzy controllers are the most popular choice for hardware implementation of complex control surfaces because they are easy to design. Neural controllers are more complex and hard to train, but provide an outstanding control surface with much less error than that of a fuzzy controller. There are also some problems that have to be solved before the networks can be implemented on VLSI chips. First, an approximation function needs to be developed because CMOS neural networks have an activation function different from any function used in neural network software. Next, this function has to be used to train the network. Finally, the last problem for VLSI designers is the quantization effect caused by discrete values of the channel length (L) and width (W) of MOS transistor geometries. Two neural networks were designed in 1.5 μm technology. Using adequate approximation functions solved the problem of the activation function. With this approach, trained networks were characterized by very small errors. Unfortunately, when the weights were quantized, errors increased by an order of magnitude. However, even though the errors were enlarged, the results obtained from neural network hardware implementations were superior to those obtained with the fuzzy system approach.

  11. Signal dispersion within a hippocampal neural network

    NASA Technical Reports Server (NTRS)

    Horowitz, J. M.; Mates, J. W. B.

    1975-01-01

    A model network is described, representing two neural populations coupled so that one population is inhibited by activity it excites in the other. Parameters and operations within the model represent EPSPs, IPSPs, neural thresholds, conduction delays, background activity and spatial and temporal dispersion of signals passing from one population to the other. Simulations of single-shock and pulse-train driving of the network are presented for various parameter values. Neuronal events from 100 to 300 msec following stimulation are given special consideration in model calculations.

  12. Models of neural networks with fuzzy activation functions

    NASA Astrophysics Data System (ADS)

    Nguyen, A. T.; Korikov, A. M.

    2017-02-01

    This paper investigates the application of a new form of neuron activation functions that are based on the fuzzy membership functions derived from the theory of fuzzy systems. On the basis of the results regarding neuron models with fuzzy activation functions, we created models of fuzzy-neural networks. These fuzzy-neural network models differ from conventional networks that employ fuzzy inference systems using the methods of neural networks. While conventional fuzzy-neural networks belong to the first type, the fuzzy-neural networks proposed here are defined as second-type models. The simulation results show that the proposed second-type model can successfully solve the problem of property prediction for time-dependent signals. Neural networks with fuzzy impulse activation functions can be widely applied in many fields of science, technology and mechanical engineering to solve problems of classification, prediction, approximation, etc.

  13. Reducing neural network training time with parallel processing

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1995-01-01

    Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer: to determine the number of nodes on the hidden layer of the smaller neural networks; to choose the initial training weights; and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.

  14. Using neural networks to model chaos

    SciTech Connect

    Upadhyay, M.D.

    1996-12-31

    Two types of neural networks -- backpropagation and radial basis function -- are presented for modeling dynamical systems. They were trained to model the Henon, Ikeda and Tinkerbell dynamical systems by providing a set of points randomly chosen from orbits under the functions. After training, the networks were used to simulate the functions to determine the extent to which they could generate the chaotic attractors associated with these systems.
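
    The experiment described above can be reproduced in miniature for the Henon map: sample an orbit, train a network on (current point, next point) pairs, and then let the trained network run freely. The network size and training settings below are arbitrary choices.

    ```python
    # Train a small network to imitate the Henon map from orbit samples.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def henon(x, y, a=1.4, b=0.3):
        return 1.0 - a * x ** 2 + y, b * x

    # Build (current point -> next point) pairs from one orbit of the map.
    pts = [(0.1, 0.1)]
    for _ in range(3000):
        pts.append(henon(*pts[-1]))
    pts = np.array(pts[100:])              # discard the initial transient
    X, Y = pts[:-1], pts[1:]

    net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                       max_iter=3000, random_state=0)
    net.fit(X, Y)
    print("one-step RMSE:", np.sqrt(np.mean((net.predict(X) - Y) ** 2)))

    # Free-run the trained network from an on-attractor point; a well-trained
    # model should stay close to the Henon attractor.
    state = X[-1:].copy()
    for _ in range(50):
        state = net.predict(state)
    print("point after 50 free-running steps:", np.round(state[0], 3))
    ```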

  15. Auto-associative nanoelectronic neural network

    SciTech Connect

    Nogueira, C. P. S. M.; Guimarães, J. G.

    2014-05-15

    In this paper, an auto-associative neural network using single-electron tunneling (SET) devices is proposed and simulated at low temperature. The nanoelectronic auto-associative network is able to converge to a stable state, previously stored during training. The recognition of the pattern involves decreasing the energy of the input state until it achieves a point of local minimum energy, which corresponds to one of the stored patterns.

  16. Nonlinear system identification and control based on modular neural networks.

    PubMed

    Puscasu, Gheorghe; Codres, Bogdan

    2011-08-01

    A new approach for nonlinear system identification and control based on modular neural networks (MNN) is proposed in this paper. The computational complexity of neural identification can be greatly reduced if the whole system is decomposed into several subsystems. This is obtained using a partitioning algorithm. Each local nonlinear model is associated with a nonlinear controller. These are also implemented by neural networks. The switching between the neural controllers is done by a dynamical switcher, also implemented by neural networks, that tracks the different operating points. The proposed multiple modelling and control strategy has been successfully tested on simulated laboratory scale liquid-level system.

  17. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    PubMed

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2016-07-14

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. It therefore naturally makes one ponder how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement the fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses fractional-order-stability and fractional-order-sensitivity characteristics.

  18. Use of Artificial Neural Network for the Simulation of Radon Emission Concentration of Granulated Blast Furnace Slag Mortar.

    PubMed

    Jang, Hong-Seok; Xing, Shuli; Lee, Malrey; Lee, Young-Keun; So, Seung-Young

    2016-05-01

    In this study, artificial neural networks were used to predict the radon emission of Granulated Blast Furnace Slag (GBFS) cement mortar. A data set from laboratory work, in which a total of 3 mortars were produced, was utilized in the Artificial Neural Networks (ANNs) study. The mortar mixture parameters were three different GBFS ratios (0%, 20%, 40%). Radon emission from moist-cured specimens was measured at 3, 10, 30, 100 and 365 days by sensing technology for continuous monitoring of indoor air quality (IAQ). An ANN model was constructed, trained and tested using these data. The data used in the ANN model are arranged in a format of input parameters that cover the cement, GBFS and age of the samples, and an output parameter, which is the radon emission concentration of the mortar. The results showed that an ANN can be an alternative approach for predicting the radon concentration of GBFS mortar using mortar ingredients as input parameters.

  19. Hybrid neural networks--combining abstract and realistic neural units.

    PubMed

    Lytton, William W; Hines, Michael

    2004-01-01

    There is a trade-off in neural network simulation between simulations that embody the details of neuronal biology and those that omit these details in favor of abstractions. The former approach appeals to physiologists and pharmacologists who can directly relate their experimental manipulations to parameter changes in the model. The latter approach appeals to physicists and mathematicians who seek analytic understanding of the behavior of large numbers of coupled simple units. This simplified approach is also valuable for practical reasons: a highly simplified unit will run several orders of magnitude faster than a complex, biologically realistic unit. In order to have our cake and eat it, we have developed hybrid networks in the Neuron simulator package. These make use of Neuron's local variable timestep method to permit simplified integrate-and-fire units to move ahead quickly while realistic neurons in the same network are integrated slowly.

  20. Neuronify: An Educational Simulator for Neural Circuits.

    PubMed

    Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Våvang Solbrå, Andreas; Tennøe, Simen; Hafreager, Anders; Malthe-Sørenssen, Anders; Fyhn, Marianne; Hafting, Torkel; Einevoll, Gaute T

    2017-01-01

    Educational software (apps) can improve science education by providing an interactive way of learning about complicated topics that are hard to explain with text and static illustrations. However, few educational apps are available for simulation of neural networks. Here, we describe an educational app, Neuronify, allowing the user to easily create and explore neural networks in a plug-and-play simulation environment. The user can pick network elements with adjustable parameters from a menu, i.e., synaptically connected neurons modelled as integrate-and-fire neurons and various stimulators (current sources, spike generators, visual, and touch) and recording devices (voltmeter, spike detector, and loudspeaker). We aim to provide a low entry point to simulation-based neuroscience by allowing students with no programming experience to create and simulate neural networks. To facilitate the use of Neuronify in teaching, a set of premade common network motifs is provided, performing functions such as input summation, gain control by inhibition, and detection of direction of stimulus movement. Neuronify is developed in C++ and QML using the cross-platform application framework Qt and runs on smart phones (Android, iOS) and tablet computers as well as personal computers (Windows, Mac, Linux).
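
    A minimal sketch of the kind of building block the app exposes, written here in plain Python rather than the app's C++/QML: a leaky integrate-and-fire neuron driven by a constant current source, with spike times collected as a simple detector. All parameter values are arbitrary assumptions.

    ```python
    import numpy as np

    dt, T = 0.1e-3, 0.3          # 0.1 ms step, 300 ms of simulated time
    tau_m, R = 20e-3, 100e6      # membrane time constant 20 ms, resistance 100 MOhm
    v_rest, v_reset, v_thresh = -70e-3, -70e-3, -50e-3
    I_ext = 0.25e-9              # constant current source, 0.25 nA

    v = v_rest
    spike_times = []             # the "spike detector"
    for step in range(int(T / dt)):
        # Leaky integrate-and-fire membrane equation, forward Euler.
        dv = (-(v - v_rest) + R * I_ext) / tau_m
        v += dv * dt
        if v >= v_thresh:        # threshold crossing -> spike, then reset
            spike_times.append(step * dt)
            v = v_reset

    print(f"{len(spike_times)} spikes, firing rate ~{len(spike_times) / T:.1f} Hz")
    ```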

  1. Neuronify: An Educational Simulator for Neural Circuits

    PubMed Central

    Hafreager, Anders; Malthe-Sørenssen, Anders; Fyhn, Marianne

    2017-01-01

    Educational software (apps) can improve science education by providing an interactive way of learning about complicated topics that are hard to explain with text and static illustrations. However, few educational apps are available for simulation of neural networks. Here, we describe an educational app, Neuronify, allowing the user to easily create and explore neural networks in a plug-and-play simulation environment. The user can pick network elements with adjustable parameters from a menu, i.e., synaptically connected neurons modelled as integrate-and-fire neurons and various stimulators (current sources, spike generators, visual, and touch) and recording devices (voltmeter, spike detector, and loudspeaker). We aim to provide a low entry point to simulation-based neuroscience by allowing students with no programming experience to create and simulate neural networks. To facilitate the use of Neuronify in teaching, a set of premade common network motifs is provided, performing functions such as input summation, gain control by inhibition, and detection of direction of stimulus movement. Neuronify is developed in C++ and QML using the cross-platform application framework Qt and runs on smart phones (Android, iOS) and tablet computers as well as personal computers (Windows, Mac, Linux). PMID:28321440

  2. Neural network for solving convex quadratic bilevel programming problems.

    PubMed

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie

    2014-03-01

    In this paper, using the idea of successive approximation, we propose a neural network to solve convex quadratic bilevel programming problems (CQBPPs), which is modeled by a nonautonomous differential inclusion. Different from existing neural networks for CQBPPs, the model has the smallest number of state variables and a simple structure. Based on the theory of nonsmooth analysis, differential inclusions, and the Lyapunov-like method, the sequence of limit equilibrium points of the proposed neural network approximately converges to an optimal solution of the CQBPP under certain conditions. Finally, simulation results on two numerical examples and the portfolio selection problem show the effectiveness and performance of the proposed neural network.

  3. Developing a Neural Network to Act as a Noise Filter

    DTIC Science & Technology

    1992-10-02

    This study uses the neural network simulator called NETS to determine if neural networks could perform a non-linear filtering operation to remove...noise from two-dimensional (2-D) data and produce a noise-free image. Application is geared toward the development and performance of neural network filters...including the development of an optional neural network architecture and the use of criteria in determining how accurately the net filtered noise to produce a noise-free image.

  4. A Neural-Network Model-Based Simulation Tool for Blast Wall Protection of Structures (PREPRINT)

    DTIC Science & Technology

    2010-05-01

    modeling with software such as LS-DYNA [11], AUTODYN [12], FEFLO [13], Air3D [14], SHAMRC [15, 16], CTH [17], and DYSMAS [18]. The downside to this...approach, but used simulation models (based on AUTODYN), rather than live experiments, to generate a database of pressure-time histories over the height...GCI for LS-DYNA, CTH, and AUTODYN [30]. Results from the two studies showed that LS-DYNA, CTH, and DYSMAS appear to be good options in regard to the

  5. ARMA Neural Networks for Predicting DGPS Pseudorange Correction

    NASA Astrophysics Data System (ADS)

    Jwo, Dah-Jing; Lee, Tai-Shen; Tseng, Ying-Wei

    2004-05-01

    In this paper, Auto-Regressive Moving-Average (ARMA) neural networks (NNs) are incorporated for predicting the differential Global Positioning System (DGPS) pseudorange correction (PRC) information. The neural network is employed to realize the time-varying ARMA implementation. Online training for real-time prediction of the PRC enhances the continuity of service on the differential correction signals and therefore improves the positioning accuracy. When the PRC signal is lost, the ARMA neural network's predicted PRC can temporarily provide correction data with very good accuracy. Simulation is conducted to evaluate the ARMA-NN-based DGPS PRC prediction accuracy. A comparative performance study based on two types of ARMA neural networks, i.e., the Back-propagation Neural Network (BPNN) and the General Regression Neural Network (GRNN), is provided.
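
    The paper's BPNN/GRNN details are not reproduced here; the sketch below only illustrates the ARMA-style input layout, feeding a small feedforward network with recent samples and recent one-step residuals to predict the next value of a synthetic, PRC-like signal. Orders, sizes, and the signal itself are assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    t = np.arange(600.0)
    prc = 2.0 * np.sin(2 * np.pi * t / 120) + 0.3 * rng.standard_normal(t.size)  # toy signal

    p, q = 4, 2                                   # AR and MA orders
    resid = np.zeros_like(prc)                    # one-step residuals of a naive predictor
    resid[1:] = prc[1:] - prc[:-1]

    X, y = [], []
    for k in range(max(p, q), len(prc) - 1):
        X.append(np.concatenate([prc[k - p + 1:k + 1], resid[k - q + 1:k + 1]]))
        y.append(prc[k + 1])
    X, y = np.array(X), np.array(y)

    split = 500
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    net.fit(X[:split], y[:split])
    pred = net.predict(X[split:])
    print("RMS one-step prediction error:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
    ```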

  6. Tracking and vertex finding with drift chambers and neural networks

    SciTech Connect

    Lindsey, C.

    1991-09-01

    Finding tracks, track vertices and event vertices with neural networks from drift chamber signals is discussed. Simulated feed-forward neural networks have been trained with back-propagation to give track parameters using Monte Carlo simulated tracks in one case and actual experimental data in another. Effects on network performance of limited weight resolution, noise and drift chamber resolution are given. Possible implementations in hardware are discussed. 7 refs., 10 figs.

  7. Spiking neural network simulation: numerical integration with the Parker-Sochacki method.

    PubMed

    Stewart, Robert D; Bair, Wyeth

    2009-08-01

    Mathematical neuronal models are normally expressed using differential equations. The Parker-Sochacki method is a new technique for the numerical integration of differential equations applicable to many neuronal models. Using this method, the solution order can be adapted according to the local conditions at each time step, enabling adaptive error control without changing the integration timestep. The method has been limited to polynomial equations, but we present division and power operations that expand its scope. We apply the Parker-Sochacki method to the Izhikevich 'simple' model and a Hodgkin-Huxley type neuron, comparing the results with those obtained using the Runge-Kutta and Bulirsch-Stoer methods. Benchmark simulations demonstrate an improved speed/accuracy trade-off for the method relative to these established techniques.
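
    The following is a toy illustration of the Parker-Sochacki idea on the logistic equation rather than on the Izhikevich or Hodgkin-Huxley models of the paper: Maclaurin coefficients are generated order by order with Cauchy products, so the series order per step is easy to adapt for error control. Step size and order here are arbitrary.

    ```python
    import math

    def ps_step(y0, dt, order):
        """Advance y' = y - y*y by dt using a Maclaurin series of the given order."""
        a = [y0]                                   # a[n] = n-th series coefficient
        for n in range(order):
            cauchy = sum(a[j] * a[n - j] for j in range(n + 1))   # (y*y)_n
            a.append((a[n] - cauchy) / (n + 1))    # from (n+1) a[n+1] = a[n] - (y*y)_n
        return sum(c * dt**k for k, c in enumerate(a))            # evaluate series at dt

    y, dt = 0.1, 0.5
    for _ in range(10):                            # integrate to t = 5
        y = ps_step(y, dt, order=12)

    exact = 1.0 / (1.0 + (1.0 / 0.1 - 1.0) * math.exp(-5.0))
    print(y, exact)                                # should agree to many digits
    ```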

  8. Computational chaos in massively parallel neural networks

    NASA Technical Reports Server (NTRS)

    Barhen, Jacob; Gulati, Sandeep

    1989-01-01

    A fundamental issue which directly impacts the scalability of current theoretical neural network models to massively parallel embodiments, in both software as well as hardware, is the inherent and unavoidable concurrent asynchronicity of emerging fine-grained computational ensembles and the possible emergence of chaotic manifestations. Previous analyses attributed dynamical instability to the topology of the interconnection matrix, to parasitic components or to propagation delays. However, researchers have observed the existence of emergent computational chaos in a concurrently asynchronous framework, independent of the network topology. The researchers present a methodology enabling the effective asynchronous operation of large-scale neural networks. Necessary and sufficient conditions guaranteeing concurrent asynchronous convergence are established in terms of contracting operators. Lyapunov exponents are computed formally to characterize the underlying nonlinear dynamics. Simulation results are presented to illustrate network convergence to the correct results, even in the presence of large delays.

  9. Implementing Signature Neural Networks with Spiking Neurons

    PubMed Central

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm—i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data—to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the

  10. Implementing Signature Neural Networks with Spiking Neurons.

    PubMed

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm-i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data-to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  11. Experimental fault characterization of a neural network

    NASA Technical Reports Server (NTRS)

    Tan, Chang-Huong

    1990-01-01

    The effects of a variety of faults on a neural network are quantified via simulation. The neural network consists of a single-layered clustering network and a three-layered classification network. The percentage of vectors mistagged by the clustering network, the percentage of vectors misclassified by the classification network, the time taken for the network to stabilize, and the output values are all measured. The results show that both transient and permanent faults have a significant impact on the performance of the measured network. The corresponding mistag and misclassification percentages are typically within 5 to 10 percent of each other. The average mistag percentage and the average misclassification percentage are both about 25 percent. After relearning, the percentage of misclassifications is reduced to 9 percent. In addition, transient faults are found to cause the network to be increasingly unstable as the duration of a transient is increased. The impact of link faults is relatively insignificant in comparison with node faults (1 versus 19 percent misclassified after relearning). There is a linear increase in the mistag and misclassification percentages with decreasing hardware redundancy. In addition, the mistag and misclassification percentages linearly decrease with increasing network size.
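
    A hedged sketch of this kind of fault-characterization experiment, using an off-the-shelf classifier and a stuck-at-zero weight fault model; the network, dataset, and fault rate are assumptions and differ from the clustering/classification networks measured in the study.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=600, n_features=10, n_classes=3,
                               n_informative=6, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(Xtr, ytr)
    print("misclassified (fault-free): %.1f%%" % (100 * (1 - net.score(Xte, yte))))

    rng = np.random.default_rng(0)
    for layer in net.coefs_:                      # inject stuck-at-zero weight faults
        faulty = rng.random(layer.shape) < 0.25   # 25% of weights fail
        layer[faulty] = 0.0
    print("misclassified (25%% weight faults): %.1f%%" % (100 * (1 - net.score(Xte, yte))))
    ```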

  12. Signal Approximation with a Wavelet Neural Network

    DTIC Science & Technology

    1992-12-01

    specialized electronic devices like the Intel Electronically Trainable Analog Neural Network (ETANN) chip. The WNN representation allows the...accurately approximated with a WNN trained with irregularly sampled data. Signal approximation, Wavelet neural network.

  13. A Neural Network Based Speech Recognition System

    DTIC Science & Technology

    1990-02-01

    encoder and identifies individual words. This use of neural networks offers two advantages over conventional algorithmic detectors: the detection...environment. Keywords: Artificial intelligence; Neural networks; Back propagation; Speech recognition.

  14. Plant Growth Models Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Bubenheim, David

    1997-01-01

    In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single-plant process of transpiration is presented.

  15. Neural Networks for Flight Control

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1996-01-01

    Neural networks are being developed at NASA Ames Research Center to permit real-time adaptive control of time varying nonlinear systems, enhance the fault-tolerance of mission hardware, and permit online system reconfiguration. In general, the problem of controlling time varying nonlinear systems with unknown structures has not been solved. Adaptive neural control techniques show considerable promise and are being applied to technical challenges including automated docking of spacecraft, dynamic balancing of the space station centrifuge, online reconfiguration of damaged aircraft, and reducing cost of new air and spacecraft designs. Our experiences have shown that neural network algorithms solved certain problems that conventional control methods have been unable to effectively address. These include damage mitigation in nonlinear reconfiguration flight control, early performance estimation of new aircraft designs, compensation for damaged planetary mission hardware by using redundant manipulator capability, and space sensor platform stabilization. This presentation explored these developments in the context of neural network control theory. The discussion began with an overview of why neural control has proven attractive for NASA application domains. The more important issues in control system development were then discussed with references to significant technical advances in the literature. Examples of how these methods have been applied were given, followed by projections of emerging application needs and directions.

  16. Analysis and Design of Neural Networks

    DTIC Science & Technology

    1992-01-01

    The training problem for feedforward neural networks is nonlinear parameter estimation that can be solved by a variety of optimization techniques...Much of the literature of neural networks has focused on variants of gradient descent. The training of neural networks using such techniques is known to...be a slow process with more sophisticated techniques not always performing significantly better. It is shown that feedforward neural networks can

  17. Radar System Classification Using Neural Networks

    DTIC Science & Technology

    1991-12-01

    This study investigated methods of improving the accuracy of neural networks in the classification of large numbers of classes. A literature search...revealed that neural networks have been successful in the radar classification problem, and that many complex problems have been solved using systems...of multiple neural networks . The experiments conducted were based on 32 classes of radar system data. The neural networks were modelled using a program

  18. Fault Tolerance of Neural Networks

    DTIC Science & Technology

    1994-07-01

    Systematic Approach, Proc. Government Microcircuit Application Conf. (GOMAC), San Diego, Nov. 1986. [10] D.E. Goldberg, Genetic Algorithms in Search...an attempt to develop fault tolerant neural networks. Given a well-trained network, we first eliminate...both approaches, and this resulted in very slight improvements over the addition/deletion procedure...Fisher's Iris data in the average case...

  19. Scalable photonic neural networks for real-time pattern classification

    NASA Astrophysics Data System (ADS)

    Goldstein, Adam Arthur

    1997-11-01

    With the rapid advancement of photonic technology in recent years, the potential exists for the incorporation of photonic neural-network research into the development of opto-electronic real-time pattern classification systems. In this dissertation we present three classes of photonic neural-network models that were designed to be compatible with this emerging technology: (1) photonic neural networks based upon probability density estimation, (2) photorefractive neural-network models, and (3) vertically stacked photonic neural networks that utilize hybridized CMOS/GaAs chips and diffractive optical elements. In each case, we show how previously developed neural-network learning algorithms and/or architectures must be adapted in order to allow an efficient photonic implementation. For class (1), we show that conventional 'k-Nearest Neighbors' (k-NN) probability density estimation is not suitable for an analog photonic neural-network hardware implementation, and we introduce a new probability density estimation algorithm called 'Continuous k-Nearest Neighbors' (C-kNN) that is suitable. For class (2), we show that the diffraction-efficiency decay inherent to photorefractive grating formation adversely affects outer-product neural-network learning algorithms, and we introduce a gain and exposure scheduling technique that resolves the incompatibility. For class (3), the use of compact diffractive optical interconnections constrains the corresponding neural-network weights to be fixed and locally connected. We introduce a 3-D Photonic Multichip- Module (3-D PMCM) neural-network architecture that utilizes a fixed diffractive optical layer in conjunction with a programmable electronic layer, to obtain a multi- layer neural network capable of real-time pattern recognition tasks. The design and fabrication of key components of the 3-D PMCM neural-network architecture are presented, together with simulation results for the application of detecting and locating the eyes in an

  20. Artificial neural networks in medicine

    SciTech Connect

    Keller, P.E.

    1994-07-01

    This Technology Brief provides an overview of artificial neural networks (ANN). A definition and explanation of an ANN is given and situations in which an ANN is used are described. ANN applications to medicine specifically are then explored and the areas in which it is currently being used are discussed. Included are medical diagnostic aides, biochemical analysis, medical image analysis and drug development.

  1. Semantic Interpretation of An Artificial Neural Network

    DTIC Science & Technology

    1995-12-01

    success for stock market analysis/prediction is artificial neural networks. However, knowledge embedded in the neural network is not easily translated...interpret neural network knowledge. The first, called Knowledge Math, extends the use of connection weights, generating rules for general (i.e. non-binary

  2. Model Of Neural Network With Creative Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Barhen, Jacob

    1993-01-01

    Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitably designed neural network; believed to be related to spontaneity and creativity of biological neural networks.

  3. How Neural Networks Learn from Experience.

    ERIC Educational Resources Information Center

    Hinton, Geoffrey E.

    1992-01-01

    Discusses computational studies of learning in artificial neural networks and findings that may provide insights into the learning abilities of the human brain. Describes efforts to test theories about brain information processing, using artificial neural networks. Vignettes include information concerning how a neural network represents…

  4. Fuzzy logic and neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Lea, Robert N.; Savely, Robert T.

    1992-01-01

    Applications of fuzzy logic technologies in NASA projects are reviewed to examine their advantages in the development of neural networks for aerospace and commercial expert systems and control. Examples of fuzzy-logic applications include a 6-DOF spacecraft controller, collision-avoidance systems, and reinforcement-learning techniques. The commercial applications examined include a fuzzy autofocusing system, an air conditioning system, and an automobile transmission application. The practical use of fuzzy logic is set in the theoretical context of artificial neural systems (ANSs) to give the background for an overview of ANS research programs at NASA. The research and application programs include the Network Execution and Training Simulator and faster training algorithms such as the Difference Optimized Training Scheme. The networks are well suited for pattern-recognition applications such as predicting sunspots, controlling posture maintenance, and conducting adaptive diagnoses.

  5. Artificial neural network and medicine.

    PubMed

    Khan, Z H; Mohapatra, S K; Khodiar, P K; Ragu Kumar, S N

    1998-07-01

    The introduction of human brain functions such as perception and cognition into the computer has been made possible by the use of Artificial Neural Networks (ANN). ANN are computer models inspired by the structure and behavior of neurons. Like the brain, ANN can recognize patterns, manage data and, most significantly, learn. This learning ability, not seen in other computer models simulating human intelligence, constantly improves its functional accuracy as it keeps on performing. Experience is as important for an ANN as it is for man. It is increasingly being used to supplement and even, perhaps, replace experts in medicine. However, there is still scope for improvement in some areas. Its ability to classify and interpret various forms of medical data comes as a helping hand to clinical decision making in both diagnosis and treatment. Treatment planning in medicine, radiotherapy, rehabilitation, etc. is being done using ANN. Morbidity and mortality prediction by ANN in different medical situations can be very helpful for hospital management. ANN has a promising future in fundamental research, medical education and surgical robotics.

  6. Constrained Least Absolute Deviation Neural Networks

    PubMed Central

    Wang, Zhishun; Peterson, Bradley S.

    2008-01-01

    It is well known that least absolute deviation (LAD) criterion or L1-norm used for estimation of parameters is characterized by robustness, i.e., the estimated parameters are totally resistant (insensitive) to large changes in the sampled data. This is an extremely useful feature, especially, when the sampled data are known to be contaminated by occasionally occurring outliers or by spiky noise. In our previous works, we have proposed the least absolute deviation neural network (LADNN) to solve unconstrained LAD problems. The theoretical proofs and numerical simulations have shown that the LADNN is Lyapunov-stable and it can globally converge to the exact solution to a given unconstrained LAD problem. We have also demonstrated its excellent application value in time-delay estimation. More generally, a practical LAD application problem may contain some linear constraints, such as a set of equalities and/or inequalities, which is called constrained LAD problem, whereas the unconstrained LAD can be considered as a special form of the constrained LAD. In this paper, we present a new neural network called constrained least absolute deviation neural network (CLADNN) to solve general constrained LAD problems. Theoretical proofs and numerical simulations demonstrate that the proposed CLADNN is Lyapunov stable and globally converges to the exact solution to a given constrained LAD problem, independent of initial values. The numerical simulations have also illustrated that the proposed CLADNN can be used to robustly estimate parameters for nonlinear curve fitting, which is extensively used in signal and image processing. PMID:18269958

  7. Application of Artificial Neural Networks to Machine Vision Flame Detection

    DTIC Science & Technology

    1991-04-01

    AD-A242...; ESL-TR-90-49. Application of Artificial Neural Networks to Machine Vision Flame Detection; J. A. Neal, C. E. Land, R. R...Artificial neural networks are designed to simulate the physical architecture of the neurons in the human brain and have demonstrated

  8. A novel neural network for nonlinear convex programming.

    PubMed

    Gao, Xing-Bao

    2004-05-01

    In this paper, we present a neural network for solving the nonlinear convex programming problem in real time by means of the projection method. The main idea is to convert the convex programming problem into a variational inequality problem. Then a dynamical system and a convex energy function are constructed for the resulting variational inequality problem. It is shown that the proposed neural network is stable in the sense of Lyapunov and can converge to an exact optimal solution of the original problem. Compared with the existing neural networks for solving the nonlinear convex programming problem, the proposed neural network requires no Lipschitz condition, has no adjustable parameter, and has a simple structure. The validity and transient behavior of the proposed neural network are demonstrated by some simulation results.
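
    The sketch below illustrates the general projection-network class this work builds on rather than its exact model: the state follows dx/dt = P_Omega(x - a*grad f(x)) - x, whose equilibrium is the constrained minimiser. The quadratic objective, box constraint, and step sizes are toy assumptions.

    ```python
    import numpy as np

    def grad_f(x):                       # f(x) = (x1 - 2)^2 + (x2 + 1)^2
        return 2.0 * (x - np.array([2.0, -1.0]))

    def project(x, lo=-1.0, hi=1.0):     # projection onto the box Omega = [-1, 1]^2
        return np.clip(x, lo, hi)

    x, alpha, dt = np.zeros(2), 0.5, 0.05
    for _ in range(2000):                # forward-Euler integration of the network dynamics
        x = x + dt * (project(x - alpha * grad_f(x)) - x)

    print("equilibrium:", x)             # approaches (1, -1), the minimiser over the box
    ```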

  9. A Novel Neural Network for Generally Constrained Variational Inequalities.

    PubMed

    Gao, Xingbao; Liao, Li-Zhi

    2016-06-13

    This paper presents a novel neural network for solving generally constrained variational inequality problems by constructing a system of double projection equations. By defining proper convex energy functions, the proposed neural network is proved to be stable in the sense of Lyapunov and converges to an exact solution of the original problem for any starting point under the weaker cocoercivity condition or the monotonicity condition of the gradient mapping on the linear equation set. Furthermore, two sufficient conditions are provided to ensure the stability of the proposed neural network for a special case. The proposed model overcomes some shortcomings of existing continuous-time neural networks for constrained variational inequality, and its stability only requires some monotonicity conditions of the underlying mapping and the concavity of nonlinear inequality constraints on the equation set. The validity and transient behavior of the proposed neural network are demonstrated by some simulation results.

  10. Artificial neural networks: theoretical background and pharmaceutical applications: a review.

    PubMed

    Wesolowski, Marek; Suchacz, Bogdan

    2012-01-01

    In recent times, there has been a growing interest in artificial neural networks, which are a rough simulation of the information processing ability of the human brain, as modern and vastly sophisticated computational techniques. This interest has also been reflected in the pharmaceutical sciences. This paper presents a review of articles on the subject of the application of neural networks as effective tools assisting the solution of various problems in science and the pharmaceutical industry, especially those characterized by multivariate and nonlinear dependencies. After a short description of theoretical background and practical basics concerning the computations performed by means of neural networks, the most important pharmaceutical applications of neural networks, with suitable references, are demonstrated. The huge role played by neural networks in pharmaceutical analysis, pharmaceutical technology, and searching for the relationships between the chemical structure and the properties of newly synthesized compounds as candidates for drugs is discussed.

  11. On lateral competition in dynamic neural networks

    SciTech Connect

    Bellyustin, N.S.

    1995-02-01

    Artificial neural networks connected homogeneously, which use retinal image processing methods, are considered. We point out that there are probably two different types of lateral inhibition of each neural element by the neighboring ones: one due to the negative connection coefficients between elements, and one due to the decreasing response of a neuron to a too-high input signal. The first case is characterized by stable dynamics, governed by a Lyapunov function, while in the second case stability is absent and two-dimensional dynamic chaos occurs if the time step in the integration of the model equations is large enough. The continuous neural medium approximation is used for analytical estimation in both cases. The result is the partition of the parameter space into domains with qualitatively different dynamic modes. Computer simulations confirm the estimates and show that joining two-dimensional chaos with symmetries provided by the initial and boundary conditions may produce patterns which are genuine pieces of art.

  12. Neural networks as a control methodology

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1990-01-01

    While conventional computers must be programmed in a logical fashion by a person who thoroughly understands the task to be performed, the motivation behind neural networks is to develop machines which can train themselves to perform tasks, using available information about desired system behavior and learning from experience. There are three goals of this fellowship program: (1) to evaluate various neural net methods and generate computer software to implement those deemed most promising on a personal computer equipped with Matlab; (2) to evaluate methods currently in the professional literature for system control using neural nets to choose those most applicable to control of flexible structures; and (3) to apply the control strategies chosen in (2) to a computer simulation of a test article, the Control Structures Interaction Suitcase Demonstrator, which is a portable system consisting of a small flexible beam driven by a torque motor and mounted on springs tuned to the first flexible mode of the beam. Results of each are discussed.

  13. Fast implementation of neural network classification

    NASA Astrophysics Data System (ADS)

    Seo, Guiwon; Ok, Jiheon; Lee, Chulhee

    2013-09-01

    Most artificial neural networks employ nonlinear activation functions such as the sigmoid and hyperbolic tangent, which incur high complexity costs, particularly during hardware implementation. In this paper, we propose new polynomial approximation methods for nonlinear activation functions that can substantially reduce complexity without sacrificing performance. The proposed approximation methods were applied to pattern classification problems. Experimental results show that the processing time was reduced by up to 50% without any performance degradation in the computer simulations.
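
    The paper's specific approximation scheme is not given in the abstract, so the sketch below only demonstrates the general idea: fit a low-order polynomial to tanh on a bounded operating range and reuse it as the activation in a forward pass. Degree, range, and layer sizes are assumptions.

    ```python
    import numpy as np

    x = np.linspace(-4.0, 4.0, 2001)
    coeffs = np.polyfit(x, np.tanh(x), deg=7)        # low-order fit on the operating range
    poly_tanh = lambda v: np.polyval(coeffs, v)

    print("max |tanh - poly| on [-4, 4]:", np.max(np.abs(np.tanh(x) - poly_tanh(x))))

    # Using the polynomial activation in a one-hidden-layer forward pass:
    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((5, 3)), rng.standard_normal(5)
    W2 = rng.standard_normal((2, 5))
    inp = rng.standard_normal(3)
    hidden_exact = np.tanh(W1 @ inp + b1)
    hidden_poly = poly_tanh(W1 @ inp + b1)
    print("output difference:", np.abs(W2 @ hidden_exact - W2 @ hidden_poly).max())
    ```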

  14. Training Neural Networks with Weight Constraints

    DTIC Science & Technology

    1993-03-01

    Hardware implementation of artificial neural networks imposes a variety of constraints. Finite weight magnitudes exist in both digital and analog...optimizing a network with weight constraints. Comparisons are made to the backpropagation training algorithm for networks with both unconstrained and hard-limited weight magnitudes. Neural networks, Analog, Digital, Stochastic

  15. Tests of track segment and vertex finding with neural networks

    SciTech Connect

    Denby, B.; Lessner, E. ); Lindsey, C.S. )

    1990-04-01

    Feed forward neural networks have been trained, using back-propagation, to find the slopes of simulated track segments in a straw chamber and to find the vertex of tracks from both simulated and real events in a more conventional drift chamber geometry. Network architectures, training, and performance are presented. 12 refs., 7 figs.

  16. Artificial neural networks reveal efficiency in genetic value prediction.

    PubMed

    Peixoto, L A; Bhering, L L; Cruz, C D

    2015-06-18

    The objective of this study was to evaluate the efficiency of artificial neural networks (ANNs) for predicting genetic value in experiments carried out in randomized blocks. Sixteen scenarios were simulated with different values of heritability (10, 20, 30, and 40%), coefficient of variation (5 and 10%), and the number of genotypes per block (150 and 200 for validation, and 5000 for neural network training). One hundred validation populations were used in each scenario. Accuracy of ANNs was evaluated by comparing the correlation of network value with genetic value, and of phenotypic value with genetic value. Neural networks were efficient in predicting genetic value, with a 0.64 to 10.3% gain compared to the phenotypic value, regardless of the simulated population size, heritability, or coefficient of variation. Thus, the artificial neural network is a promising technique for predicting genetic value in balanced experiments.

  17. Terminal attractors in neural networks

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.
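
    A short worked illustration of the defining property, under the standard textbook example dx/dt = -x^(1/3) (not a construction from the paper): because the right-hand side violates the Lipschitz condition at the origin, the state reaches the attractor in finite time, unlike the exponential decay of dx/dt = -x.

    ```python
    import numpy as np

    dt, x0 = 1e-4, 1.0
    x_reg, x_term = x0, x0
    t_reg = t_term = None

    for step in range(int(5.0 / dt)):
        t = step * dt
        x_reg += dt * (-x_reg)                 # Lipschitz at 0: only asymptotic decay
        x_term += dt * (-np.cbrt(x_term))      # non-Lipschitz at 0: finite-time decay
        if t_term is None and abs(x_term) < 1e-6:
            t_term = t
        if t_reg is None and abs(x_reg) < 1e-6:
            t_reg = t

    print("terminal attractor reached |x| < 1e-6 at t ~", t_term)  # analytic finite time is 1.5 for x0 = 1
    print("regular attractor below 1e-6 within 5 s:", t_reg)       # None: exponential decay never reaches zero
    ```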

  18. Fiber optic Adaline neural networks

    NASA Astrophysics Data System (ADS)

    Ghosh, Anjan K.; Trepka, Jim; Paparao, Palacharla

    1993-02-01

    Optoelectronic realization of adaptive filters and equalizers using fiber optic tapped delay lines and spatial light modulators has been discussed recently. We describe the design of a single layer fiber optic Adaline neural network which can be used as a bit pattern classifier. In our realization we employ as few electronic devices as possible and use optical computation to utilize the advantages of optics in processing speed, parallelism, and interconnection. The new optical neural network described in this paper is designed for optical processing of guided lightwave signals, not electronic signals. We analyzed the convergence or learning characteristics of the optically implemented Adaline in the presence of errors in the hardware, and we studied methods for improving the convergence rate of the Adaline.

  19. Inversion of surface parameters using fast learning neural networks

    NASA Technical Reports Server (NTRS)

    Dawson, M. S.; Olvera, J.; Fung, A. K.; Manry, M. T.

    1992-01-01

    A neural network approach to the inversion of surface scattering parameters is presented. Simulated data sets based on a surface scattering model are used so that the data may be viewed as taken from a completely known randomly rough surface. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) are tested on the simulated backscattering data. The RMS error of training the FL network is found to be less than one half the error of the BP network while requiring one to two orders of magnitude less CPU time. When applied to inversion of parameters from a statistically rough surface, the FL method is successful at recovering the surface permittivity, the surface correlation length, and the RMS surface height in less time and with less error than the BP network. Further applications of the FL neural network to the inversion of parameters from backscatter measurements of an inhomogeneous layer above a half space are shown.

  20. Application of neural networks in space construction

    NASA Technical Reports Server (NTRS)

    Thilenius, Stephen C.; Barnes, Frank

    1990-01-01

    When trying to decide which tasks should be done by robots and which should be done by humans with respect to space construction, there has been one decisive barrier which ultimately divides the tasks: can a computer do the job? Von Neumann type computers have great difficulty with problems that the human brain seems to solve instantaneously and with little effort. Some of these problems are pattern recognition, speech recognition, content addressable memories, and command interpretation. In an attempt to simulate these talents of the human brain, much research is currently being done into the operation and construction of artificial neural networks. The efficiency of the interface between man and machine, robots in particular, can therefore be greatly improved with the use of neural networks. For example, wouldn't it be easier to command a robot to 'fetch an object' rather than having to remotely control the entire operation?

  1. Neural networks for aerosol particles characterization

    NASA Astrophysics Data System (ADS)

    Berdnik, V. V.; Loiko, V. A.

    2016-11-01

    Multilayer perceptron neural networks with one, two, and three inputs are built to retrieve the parameters of a spherical homogeneous nonabsorbing particle. The refractive index ranges from 1.3 to 1.7; the particle radius ranges from 0.251 μm to 56.234 μm. The logarithms of the scattered radiation intensity are used as input signals. The problem of selecting the most informative scattering angles is elucidated. It is shown that polychromatic illumination helps one to significantly increase the retrieval accuracy. In the absence of measurement errors, the relative error of radius retrieval by the neural network with three inputs is 0.54%, and the relative error of the refractive index retrieval is 0.84%. The effect of measurement errors on the result of retrieval is simulated.

  2. Neural Network for Visual Search Classification

    DTIC Science & Technology

    2007-11-02

    neural network used to perform visual search classification. The neural network consists of a Learning vector quantization network (LVQ) and a single layer perceptron. The objective of this neural network is to classify the various human visual search patterns into predetermined classes. The classes signify the different search strategies used by individuals to scan the same target pattern. The input search patterns are quantified with respect to an ideal search pattern, determined by the user. A supervised learning rule,

  3. The LILARTI neural network system

    SciTech Connect

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  4. Simulation of SiGe:C HBTs using neural network and adaptive neuro-fuzzy inference system for RF applications

    NASA Astrophysics Data System (ADS)

    Karimi, Gholamreza; Banitalebi, Roza; Babaei Sedaghat, Sedigheh

    2013-07-01

    In this article, the small-signal equivalent circuit model of SiGe:C heterojunction bipolar transistors (HBTs) has been extracted directly from S-parameter data. Moreover, we present a new modelling approach using ANFIS (adaptive neuro-fuzzy inference system), which in general has a high degree of accuracy, simplicity and novelty (an independent approach). The measured and model-calculated data show excellent agreement, with less than 1.68 × 10^-5% discrepancy, in the frequency range higher than 300 GHz over a wide range of bias points for the ANFIS model. The results show that the ANFIS model is better than the ANN (artificial neural network) for redeveloping the model and increasing the input parameters.

  5. Fault detection and diagnosis using neural network approaches

    NASA Technical Reports Server (NTRS)

    Kramer, Mark A.

    1992-01-01

    Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used, the first based on training networks using data representing both normal and abnormal modes of process behavior, and the second based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, to produce real-time estimation of missing or failed sensors based on the correlations codified in the neural network.

  6. A Neural Network Approach to Modeling the Effects of Barrier Walls on Blast Wave Propagation PREPRINT

    DTIC Science & Technology

    2008-03-01

    artificial neural networks, to overcome problems of computationally expensive simulations. Neural networks have the potential to make predictions of...optimal solution. Artificial neural networks appear to be well suited to this application, performing well for problems that are strongly non-linear and

  7. Wear Scar Similarities between Retrieved and Simulator-Tested Polyethylene TKR Components: An Artificial Neural Network Approach

    PubMed Central

    2016-01-01

    The aim of this study was to determine how representative wear scars of simulator-tested polyethylene (PE) inserts compare with retrieved PE inserts from total knee replacement (TKR). By means of a nonparametric self-organizing feature map (SOFM), wear scar images of 21 postmortem- and 54 revision-retrieved components were compared with six simulator-tested components that were tested either in displacement or in load control according to ISO protocols. The SOFM network was then trained with the wear scar images of postmortem-retrieved components since those are considered well-functioning at the time of retrieval. Based on this training process, eleven clusters were established, suggesting considerable variability among wear scars despite an uncomplicated loading history inside their hosts. The remaining components (revision-retrieved and simulator-tested) were then assigned to these established clusters. Five out of six simulator components were clustered together, suggesting that the network was able to identify similarities in loading history. However, the simulator-tested components ended up in a cluster at the fringe of the map containing only 10.8% of retrieved components. This may suggest that current ISO testing protocols were not fully representative of this TKR population, and protocols that better resemble patients' gait after TKR containing activities other than walking may be warranted. PMID:27597955

  8. Wear Scar Similarities between Retrieved and Simulator-Tested Polyethylene TKR Components: An Artificial Neural Network Approach.

    PubMed

    Orozco Villaseñor, Diego A; Wimmer, Markus A

    2016-01-01

    The aim of this study was to determine how representative wear scars of simulator-tested polyethylene (PE) inserts compare with retrieved PE inserts from total knee replacement (TKR). By means of a nonparametric self-organizing feature map (SOFM), wear scar images of 21 postmortem- and 54 revision-retrieved components were compared with six simulator-tested components that were tested either in displacement or in load control according to ISO protocols. The SOFM network was then trained with the wear scar images of postmortem-retrieved components since those are considered well-functioning at the time of retrieval. Based on this training process, eleven clusters were established, suggesting considerable variability among wear scars despite an uncomplicated loading history inside their hosts. The remaining components (revision-retrieved and simulator-tested) were then assigned to these established clusters. Five out of six simulator components were clustered together, suggesting that the network was able to identify similarities in loading history. However, the simulator-tested components ended up in a cluster at the fringe of the map containing only 10.8% of retrieved components. This may suggest that current ISO testing protocols were not fully representative of this TKR population, and protocols that better resemble patients' gait after TKR containing activities other than walking may be warranted.
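
    A minimal self-organizing feature map sketch, included only to illustrate the clustering machinery described above; it trains on random two-dimensional points rather than wear-scar images, and the map size, learning rate, and neighbourhood schedule are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.random((500, 2))                 # stand-in for image feature vectors

    grid_w, grid_h, dim = 4, 3, 2               # small 4x3 cluster map
    weights = rng.random((grid_w * grid_h, dim))
    coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)

    def train(weights, data, epochs=30, lr0=0.5, sigma0=2.0):
        for epoch in range(epochs):
            lr = lr0 * (1 - epoch / epochs)
            sigma = sigma0 * (1 - epoch / epochs) + 0.5
            for x in data:
                bmu = np.argmin(((weights - x) ** 2).sum(axis=1))     # best matching unit
                d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)        # grid distance to BMU
                h = np.exp(-d2 / (2 * sigma ** 2))[:, None]           # neighbourhood function
                weights += lr * h * (x - weights)                     # pull neighbourhood toward x
        return weights

    weights = train(weights, data)
    # Assign a new sample (e.g., a simulator-tested component's features) to a cluster:
    print("cluster index:", np.argmin(((weights - data[0]) ** 2).sum(axis=1)))
    ```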

  9. Prediction of Aerodynamic Coefficients using Neural Networks for Sparse Data

    NASA Technical Reports Server (NTRS)

    Rajkumar, T.; Bardina, Jorge; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Basic aerodynamic coefficients are modeled as functions of angles of attack and sideslip with vehicle lateral symmetry and compressibility effects. Most of the aerodynamic parameters can be well-fitted using polynomial functions. In this paper a fast, reliable way of predicting aerodynamic coefficients is produced using a neural network. The training data for the neural network is derived from wind tunnel test and numerical simulations. The coefficients of lift, drag, pitching moment are expressed as a function of alpha (angle of attack) and Mach number. The results produced from preliminary neural network analysis are very good.

  10. Sea ice classification using fast learning neural networks

    NASA Technical Reports Server (NTRS)

    Dawson, M. S.; Fung, A. K.; Manry, M. T.

    1992-01-01

    A fast learning neural network approach to the classification of sea ice is presented. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) were tested on simulated data sets based on the known dominant scattering characteristics of the target class. Four classes were used in the data simulation: open water, thick lossy saline ice, thin saline ice, and multiyear ice. The BP network was unable to consistently converge to less than 25 percent error while the FL method yielded an average error of approximately 1 percent on the first iteration of training. The fast learning method presented can significantly reduce the CPU time necessary to train a neural network as well as consistently yield higher classification accuracy than BP networks.

  11. Feature Extraction Using an Unsupervised Neural Network

    DTIC Science & Technology

    1991-05-03

    A novel unsupervised neural network for dimensionality reduction which seeks directions emphasizing distinguishing features in the data is presented. A statistical framework for the parameter estimation problem associated with this neural network is given and its connection to exploratory projection pursuit methods is established. The network is shown to minimize a loss function (projection index) over a

  12. Neural networks and MIMD-multiprocessors

    NASA Technical Reports Server (NTRS)

    Vanhala, Jukka; Kaski, Kimmo

    1990-01-01

    Two artificial neural network models are compared. They are the Hopfield Neural Network Model and the Sparse Distributed Memory model. Distributed algorithms for both of them are designed and implemented. The run time characteristics of the algorithms are analyzed theoretically and tested in practice. The storage capacities of the networks are compared. Implementations are done using a distributed multiprocessor system.
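
    As a small illustration of the first of the two models compared above, the sketch below stores a few random ±1 patterns in a Hopfield network with the Hebbian rule and recalls one of them from a corrupted probe using asynchronous updates; pattern count, size, and update schedule are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(3, 64))          # three stored +/-1 patterns

    # Hebbian storage rule with zeroed self-connections.
    W = sum(np.outer(p, p) for p in patterns) / patterns.shape[1]
    np.fill_diagonal(W, 0.0)

    def recall(probe, steps=5 * 64):
        state = probe.copy()
        for _ in range(steps):                            # asynchronous updates
            i = rng.integers(state.size)
            state[i] = 1 if W[i] @ state >= 0 else -1
        return state

    noisy = patterns[0].copy()
    flip = rng.choice(64, size=10, replace=False)         # corrupt 10 of 64 bits
    noisy[flip] *= -1

    recovered = recall(noisy)
    print("bits matching stored pattern:", int((recovered == patterns[0]).sum()), "of 64")
    ```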

  13. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  14. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor interprocessor communications. This paper considers the simulation of only feed-forward neural networks, although this method is extendable to recurrent networks.

  15. Oil reservoir properties estimation using neural networks

    SciTech Connect

    Toomarian, N.B.; Barhen, J.; Glover, C.W.; Aminzadeh, F.

    1997-02-01

    This paper investigates the applicability as well as the accuracy of artificial neural networks for estimating specific parameters that describe reservoir properties based on seismic data. This approach relies on JPL's adjoint-operators general-purpose neural network code to determine the best suited architecture. The authors believe that results presented in this work demonstrate that artificial neural networks produce surprisingly accurate estimates of the reservoir parameters.

  16. Adaptive optimization and control using neural networks

    SciTech Connect

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  17. Neural Network False Alarm Filter. Volume 1.

    DTIC Science & Technology

    1994-12-01

    This effort identified, developed and demonstrated a set of approaches for applying neural network learning techniques to the development of a real ... neural network models, 9 fault report causes and 12 common groups of BIT techniques were identified. From this space, 4 unique, high-potential ... of their strengths and weaknesses were performed along with cost/benefit analyses. This study concluded that the best candidates for neural network insert ...

  18. A Neural Network Object Recognition System

    DTIC Science & Technology

    1990-07-01

    ... useful for exploring different neural network configurations. There are three main computation phases of a model-based object recognition system: segmentation, feature extraction, and object classification. This report focuses on the object classification stage. For segmentation, a neural network based ... are available with the current system. Neural network based feature extraction may be added at a later date. The classification stage consists of a ...

  19. Neural Networks Applied to Signal Processing

    DTIC Science & Technology

    1989-09-01

    Naval Postgraduate School thesis: Neural Networks Applied to Signal Processing, by Mark D. Baehre, Captain, United States Army. Approved for public release; distribution is unlimited.

  20. A Complexity Theory of Neural Networks

    DTIC Science & Technology

    1991-08-09

    Significant progress has been made in laying the foundations of a complexity theory of neural networks. The fundamental complexity classes have been identified and studied. The class of problems solvable by small, shallow neural networks has been found to be the same class even if (1) probabilistic behaviour, (2) multi-valued logic, and (3) analog behaviour are allowed (subject to certain reasonable technical assumptions). Neural networks can be ...

  1. The Neural Network Method of Corrosion Diagnosis for Grounding Grid

    SciTech Connect

    Hou Zaien; Duan Fujian; Zhang Kecun

    2008-11-06

    Safety of persons, protection of equipment and continuity of power supply are the main objectives of the grounding system of a large electrical installation. To assess its working status accurately, it is essential to determine every branch resistance in the system. In this paper, we present a neural network method of corrosion diagnosis for the grounding grid. The feasibility of this method is demonstrated by applying it to a simulated grounding grid.

  2. An efficient annealing in Boltzmann machine in Hopfield neural network

    NASA Astrophysics Data System (ADS)

    Kin, Teoh Yeong; Hasan, Suzanawati Abu; Bulot, Norhisam; Ismail, Mohammad Hafiz

    2012-09-01

    This paper proposes and implements a Boltzmann machine in a Hopfield neural network for logic programming based on energy minimization. Temperature scheduling in the Boltzmann machine enhances the performance of logic programming in the Hopfield network. The best temperature is determined by observing the ratio of global solutions and the final Hamming distance in computer simulations. The study shows that the Boltzmann machine model is more stable and more competent at representing and solving difficult combinatorial problems.
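
    The core mechanism, stochastic updates in an energy-based network under a falling temperature, can be sketched in a few lines. The energy function, weights and cooling schedule below are arbitrary toy choices rather than the logic-programming energy used in the paper.

```python
# Minimal sketch, assuming a toy energy function: stochastic (Boltzmann) updates
# in a Hopfield-style network with a geometric temperature schedule.
import numpy as np

rng = np.random.default_rng(1)

n = 12
W = rng.normal(size=(n, n)); W = (W + W.T) / 2.0; np.fill_diagonal(W, 0.0)  # symmetric, zero diagonal
b = rng.normal(size=n)
s = rng.choice([-1.0, 1.0], size=n)   # bipolar neuron states

def energy(s):
    return -0.5 * s @ W @ s - b @ s

T, T_min, cooling = 5.0, 0.05, 0.95   # annealing schedule (hypothetical values)
while T > T_min:
    for i in rng.permutation(n):
        # Local field and Boltzmann acceptance: P(s_i = +1) = sigmoid(2*h_i / T)
        h = W[i] @ s + b[i]
        s[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * h / T)) else -1.0
    T *= cooling

print("final energy:", energy(s))
```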

  3. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

    A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into a memory and determining neural network weighting values until the neural network outputs are close to the target outputs. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs; signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the industrial process. 33 figs.

  4. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, Richard B.; Gross, Kenneth C.; Wegerich, Stephan W.

    1998-01-01

    A method and system for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into a memory and determining neural network weighting values until the neural network outputs are close to the target outputs. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs; signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the industrial process.

  5. Advances in neural networks research: an introduction.

    PubMed

    Kozma, Robert; Bressler, Steven; Perlovsky, Leonid; Venayagamoorthy, Ganesh Kumar

    2009-01-01

    The present Special Issue "Advances in Neural Networks Research: IJCNN2009" provides a state-of-art overview of the field of neural networks. It includes 39 papers from selected areas of the 2009 International Joint Conference on Neural Networks (IJCNN2009). IJCNN2009 took place on June 14-19, 2009 in Atlanta, Georgia, USA, and it represents an exemplary collaboration between the International Neural Networks Society and the IEEE Computational Intelligence Society. Topics in this issue include neuroscience and cognitive science, computational intelligence and machine learning, hybrid techniques, nonlinear dynamics and chaos, various soft computing technologies, intelligent signal processing and pattern recognition, bioinformatics and biomedicine, and engineering applications.

  6. A flexible annealing chaotic neural network to maximum clique problem.

    PubMed

    Yang, Gang; Tang, Zheng; Zhang, Zhiqiang; Zhu, Yunyi

    2007-06-01

    Based on the analysis and comparison of several annealing strategies, we present a flexible annealing chaotic neural network with flexible control of the annealing process and a quick convergence rate on optimization problems. The proposed network has rich and adjustable chaotic dynamics at the beginning and then converges quickly to stable states. We test the network on the maximum clique problem using graphs from the DIMACS clique instances as well as p-random and k-random graphs. The simulations show that the flexible annealing chaotic neural network obtains satisfactory solutions in very little time and in few steps. The comparison between our proposed network and other chaotic neural networks shows that the proposed network has superior execution efficiency and a better ability to find optimal or near-optimal solutions.

  7. Flow Control Using Neural Networks

    DTIC Science & Technology

    2007-11-02

    Report on AFOSR grant F49620-93-1-0135 (February 1993 - 31 December 1996). ... Figure 5 shows a time series for an actuator that performs a ramp motion in the streamwise direction over about 1% of the TS period and remains ...

  8. Neural Network Classifies Teleoperation Data

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Giancaspro, Antonio; Losito, Sergio; Pasquariello, Guido

    1994-01-01

    Prototype artificial neural network, implemented in software, identifies phases of telemanipulator tasks in real time by analyzing feedback signals from force sensors on manipulator hand. Prototype is early, subsystem-level product of continuing effort to develop automated system that assists in training and supervising human control operator: provides symbolic feedback (e.g., warnings of impending collisions or evaluations of performance) to operator in real time during successive executions of same task. Also simplifies transition between teleoperation and autonomous modes of telerobotic system.

  9. On the estimation of stellar parameters with uncertainty prediction from Generative Artificial Neural Networks: application to Gaia RVS simulated spectra

    NASA Astrophysics Data System (ADS)

    Dafonte, C.; Fustes, D.; Manteiga, M.; Garabato, D.; Álvarez, M. A.; Ulla, A.; Allende Prieto, C.

    2016-10-01

    Aims: We present an innovative artificial neural network (ANN) architecture, called Generative ANN (GANN), that computes the forward model, that is, it learns the function that relates the unknown outputs (stellar atmospheric parameters, in this case) to the given inputs (spectra). Such a model can be integrated in a Bayesian framework to estimate the posterior distribution of the outputs. Methods: The architecture of the GANN follows the same scheme as a normal ANN, but with the inputs and outputs inverted. We train the network with the set of atmospheric parameters (Teff, log g, [Fe/H] and [α/ Fe]), obtaining the stellar spectra for such inputs. The residuals between the spectra in the grid and the estimated spectra are minimized using a validation dataset to keep solutions as general as possible. Results: The performance of both conventional ANNs and GANNs to estimate the stellar parameters as a function of the star brightness is presented and compared for different Galactic populations. GANNs provide significantly improved parameterizations for early and intermediate spectral types with rich and intermediate metallicities. The behaviour of both algorithms is very similar for our sample of late-type stars, obtaining residuals in the derivation of [Fe/H] and [α/ Fe] below 0.1 dex for stars with Gaia magnitude Grvs < 12, which accounts for a number on the order of four million stars to be observed by the Radial Velocity Spectrograph of the Gaia satellite. Conclusions: Uncertainty estimation of computed astrophysical parameters is crucial for the validation of the parameterization itself and for the subsequent exploitation by the astronomical community. GANNs produce not only the parameters for a given spectrum, but a goodness-of-fit between the observed spectrum and the predicted one for a given set of parameters. Moreover, they allow us to obtain the full posterior distribution over the astrophysical parameters space once a noise model is assumed. This can be ...
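
    The generative direction can be illustrated with a toy forward problem: a network is trained to map a single parameter to a synthetic "spectrum", and the parameter is then recovered by evaluating a Gaussian likelihood over a grid, which also yields a posterior distribution. The forward function, noise level, and network settings below are invented for the sketch; this is not the Gaia RVS setup.

```python
# Minimal sketch of the "generative" idea, under toy assumptions: a network is
# trained as a forward model (parameters -> spectrum), then inverted in a
# Bayesian way by evaluating a Gaussian likelihood over a parameter grid.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

def toy_spectrum(theta):
    """Stand-in for a stellar spectrum: a few flux points depending on one parameter."""
    wl = np.linspace(0.0, 1.0, 20)
    return np.exp(-((wl - 0.3 * theta) ** 2) / 0.02) + 0.1 * theta

# Train the forward model on a grid of parameters (the GANN direction: params -> spectrum).
theta_train = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
spec_train = np.array([toy_spectrum(t[0]) for t in theta_train])
forward = MLPRegressor(hidden_layer_sizes=(64,), max_iter=5000, random_state=0)
forward.fit(theta_train, spec_train)

# "Observe" a noisy spectrum for an unknown parameter and compute the posterior on a grid.
theta_true, sigma = 0.62, 0.02
obs = toy_spectrum(theta_true) + rng.normal(scale=sigma, size=20)

grid = np.linspace(0.0, 1.0, 500).reshape(-1, 1)
pred = forward.predict(grid)
log_like = -0.5 * np.sum((pred - obs) ** 2, axis=1) / sigma**2
post = np.exp(log_like - log_like.max()); post /= post.sum()   # flat prior assumed

print("posterior mean:", float(grid.ravel() @ post), "true:", theta_true)
```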

  10. The Laplacian spectrum of neural networks.

    PubMed

    de Lange, Siemon C; de Reus, Marcel A; van den Heuvel, Martijn P

    2014-01-13

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these "conventional" graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks.
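
    For readers unfamiliar with the quantity being analysed, the sketch below computes the normalized Laplacian spectrum of a small undirected graph; the six-node ring is an arbitrary example, not one of the connectomes studied in the paper.

```python
# A short sketch of the analysis idea: the eigenvalue spectrum of the normalized
# Laplacian L = I - D^{-1/2} A D^{-1/2} for a small undirected network.
import numpy as np

def normalized_laplacian_spectrum(A):
    """A: symmetric adjacency matrix (no isolated nodes assumed)."""
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    L = np.eye(len(A)) - (d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :])
    return np.linalg.eigvalsh(L)          # eigenvalues lie in [0, 2]

# Example: a tiny ring network of 6 nodes.
A = np.zeros((6, 6))
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1.0

print(normalized_laplacian_spectrum(A))
```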

  11. Damselfly Network Simulator

    SciTech Connect

    2014-04-01

    Damselfly is a model-based parallel network simulator. It can simulate communication patterns of High Performance Computing applications on different network topologies. It outputs steady-state network traffic for a communication pattern, which can help in studying network congestion and its impact on performance.

  12. Finite time stabilization of delayed neural networks.

    PubMed

    Wang, Leimin; Shen, Yi; Ding, Zhixia

    2015-10-01

    In this paper, the problem of finite time stabilization for a class of delayed neural networks (DNNs) is investigated. The general conditions on the feedback control law are provided to ensure the finite time stabilization of DNNs. Then some specific conditions are derived by designing two different controllers which include the delay-dependent and delay-independent ones. In addition, the upper bound of the settling time for stabilization is estimated. Under fixed control strength, discussions of the extremum of settling time functional are made and a switched controller is designed to optimize the settling time. Finally, numerical simulations are carried out to demonstrate the effectiveness of the obtained results.
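
    The kind of simulation reported can be reproduced with an Euler scheme for a delayed neural network under a feedback law that includes a fractional-power term, the ingredient usually associated with finite-time convergence. The weights, gains, and delay below are illustrative values chosen for the sketch, not the conditions derived in the paper.

```python
# Illustrative simulation only (parameters and gains are hypothetical): a delayed
# neural network x' = -Cx + A f(x) + B f(x(t - tau)) + u, stabilized by a feedback
# law with a fractional-power term.
import numpy as np

dt, tau, T_end = 0.001, 0.5, 6.0
steps, delay_steps = int(T_end / dt), int(tau / dt)

A = np.array([[2.0, -0.1], [-5.0, 3.0]])   # connection weights (example values)
B = np.array([[-1.5, -0.1], [-0.2, -2.5]]) # delayed connection weights
C = np.diag([1.0, 1.0])
f = np.tanh
k1, k2, mu = 8.0, 2.0, 0.5                 # control gains and fractional power (assumed)

x_hist = np.ones((delay_steps + 1, 2)) * np.array([0.8, -0.6])  # constant initial history
norms = []
for n in range(steps):
    x, x_del = x_hist[-1], x_hist[0]
    u = -k1 * x - k2 * np.sign(x) * np.abs(x) ** mu    # finite-time-style feedback
    dx = -C @ x + A @ f(x) + B @ f(x_del) + u
    x_new = x + dt * dx
    x_hist = np.vstack([x_hist[1:], x_new])            # shift the delay buffer
    norms.append(np.linalg.norm(x_new))

print("state norm after %.1f s: %.2e" % (T_end, norms[-1]))
```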

  13. Three dimensional living neural networks

    NASA Astrophysics Data System (ADS)

    Linnenberger, Anna; McLeod, Robert R.; Basta, Tamara; Stowell, Michael H. B.

    2015-08-01

    We investigate holographic optical tweezing combined with step-and-repeat maskless projection micro-stereolithography for fine control of 3D positioning of living cells within a 3D microstructured hydrogel grid. Samples were fabricated using three different cell lines: PC12, NT2/D1 and iPSC. PC12 cells are a rat cell line capable of differentiation into neuron-like cells. NT2/D1 cells are a human cell line that exhibits biochemical and developmental properties similar to those of an early embryo; when exposed to retinoic acid, the cells differentiate into human neurons useful for studies of human neurological disease. Finally, induced pluripotent stem cells (iPSC) were utilized with the goal of future studies of neural networks fabricated from human iPSC-derived neurons. Cells are positioned in the monomer solution with holographic optical tweezers at 1064 nm and then are encapsulated by photopolymerization of polyethylene glycol (PEG) hydrogels formed by thiol-ene photo-click chemistry via projection of a 512x512 spatial light modulator (SLM) illuminated at 405 nm. Fabricated samples are incubated in differentiation media such that cells cease to divide and begin to form axons or axon-like structures. By controlling the position of the cells within the encapsulating hydrogel structure, the formation of the neural circuits is controlled. The samples fabricated with this system are a useful model for future studies of neural circuit formation, neurological disease, cellular communication, plasticity, and repair mechanisms.

  14. Neural Network Controlled Visual Saccades

    NASA Astrophysics Data System (ADS)

    Johnson, Jeffrey D.; Grogan, Timothy A.

    1989-03-01

    The paper to be presented will discuss research on a computer vision system controlled by a neural network capable of learning through classical (Pavlovian) conditioning. Through the use of unconditional stimuli (reward and punishment), the system will develop scan patterns of eye saccades necessary to differentiate and recognize members of an input set. By foveating only those portions of the input image that the system has found to be necessary for recognition, the drawback of computational explosion as the size of the input image grows is avoided. The model incorporates many features found in animal vision systems, and is governed by understandable and modifiable behavior patterns similar to those reported by Pavlov in his classic study. These behavioral patterns are a result of a neuronal model, used in the network, explicitly designed to reproduce this behavior.

  15. Computer Assisted Improvement of the Estimation Mean Squared Error with Application to Back Propagation Neural Networks.

    DTIC Science & Technology

    Key words and phrases: parametric estimation, exponential families, nonlinear models, nonlinear least squares, neural networks, Monte Carlo simulation, computer-intensive statistical methods.

  16. Marginalization in Random Nonlinear Neural Networks

    NASA Astrophysics Data System (ADS)

    Vasudeva Raju, Rajkumar; Pitkow, Xaq

    2015-03-01

    Computations involved in tasks like causal reasoning in the brain require a type of probabilistic inference known as marginalization. Marginalization corresponds to averaging over irrelevant variables to obtain the probability of the variables of interest. This is a fundamental operation that arises whenever input stimuli depend on several variables, but only some are task-relevant. Animals often exhibit behavior consistent with marginalizing over some variables, but the neural substrate of this computation is unknown. It has been previously shown (Beck et al. 2011) that marginalization can be performed optimally by a deterministic nonlinear network that implements a quadratic interaction of neural activity with divisive normalization. We show that a simpler network can perform essentially the same computation. These Random Nonlinear Networks (RNN) are feedforward networks with one hidden layer, sigmoidal activation functions, and normally-distributed weights connecting the input and hidden layers. We train the output weights connecting the hidden units to an output population, such that the output model accurately represents a desired marginal probability distribution without significant information loss compared to optimal marginalization. Simulations for the case of linear coordinate transformations show that the RNN model has good marginalization performance, except for highly uncertain inputs that have low amplitude population responses. Behavioral experiments, based on these results, could then be used to identify if this model does indeed explain how the brain performs marginalization.
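
    The construction is simple enough to sketch directly, as below: fixed random Gaussian weights into a sigmoidal hidden layer, with only the readout trained. The toy input statistics and target functions stand in for the population codes used in the study.

```python
# A minimal sketch of the RNN construction described above: fixed random
# Gaussian input-to-hidden weights, sigmoidal hidden units, and only the
# hidden-to-output weights trained (here by ridge-regularized least squares).
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden, n_out, n_samples = 5, 200, 3, 1000
X = rng.normal(size=(n_samples, n_in))                   # stand-in "input population" activity
Y = np.stack([X[:, 0] * X[:, 1], np.sin(X[:, 2]), X[:, 3] + X[:, 4]], axis=1)  # toy targets

W_in = rng.normal(size=(n_in, n_hidden))                 # fixed, untrained random weights
b_in = rng.normal(size=n_hidden)
H = sigmoid(X @ W_in + b_in)

lam = 1e-3                                               # ridge regularization
W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)

Y_hat = H @ W_out
print("training RMSE per output:", np.sqrt(((Y - Y_hat) ** 2).mean(axis=0)))
```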

  17. Electronic neural network for dynamic resource allocation

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Eberhardt, S. P.; Daud, T.

    1991-01-01

    A VLSI implementable neural network architecture for dynamic assignment is presented. The resource allocation problems involve assigning members of one set (e.g. resources) to those of another (e.g. consumers) such that the global 'cost' of the associations is minimized. The network consists of a matrix of sigmoidal processing elements (neurons), where the rows of the matrix represent resources and columns represent consumers. Unlike previous neural implementations, however, association costs are applied directly to the neurons, reducing connectivity of the network to VLSI-compatible O(number of neurons). Each row (and column) has an additional neuron associated with it to independently oversee activations of all the neurons in each row (and each column), providing a programmable 'k-winner-take-all' function. This function simultaneously enforces blocking (excitatory/inhibitory) constraints during convergence to control the number of active elements in each row and column within desired boundary conditions. Simulations show that the network, when implemented in fully parallel VLSI hardware, offers optimal (or near-optimal) solutions within only a fraction of a millisecond, for problems up to 128 resources and 128 consumers, orders of magnitude faster than conventional computing or heuristic search methods.

  18. Hand Gesture Recognition Using Neural Networks.

    DTIC Science & Technology

    1996-05-01

    ... inherent in the model. The high gesture recognition rates and quick network retraining times found in the present study suggest that a neural network approach to gesture recognition should be further evaluated.

  19. A new formulation for feedforward neural networks.

    PubMed

    Razavi, Saman; Tolson, Bryan A

    2011-10-01

    Feedforward neural network is one of the most commonly used function approximation techniques and has been applied to a wide variety of problems arising from various disciplines. However, neural networks are black-box models having multiple challenges/difficulties associated with training and generalization. This paper initially looks into the internal behavior of neural networks and develops a detailed interpretation of the neural network functional geometry. Based on this geometrical interpretation, a new set of variables describing neural networks is proposed as a more effective and geometrically interpretable alternative to the traditional set of network weights and biases. Then, this paper develops a new formulation for neural networks with respect to the newly defined variables; this reformulated neural network (ReNN) is equivalent to the common feedforward neural network but has a less complex error response surface. To demonstrate the learning ability of ReNN, in this paper, two training methods involving a derivative-based (a variation of backpropagation) and a derivative-free optimization algorithms are employed. Moreover, a new measure of regularization on the basis of the developed geometrical interpretation is proposed to evaluate and improve the generalization ability of neural networks. The value of the proposed geometrical interpretation, the ReNN approach, and the new regularization measure are demonstrated across multiple test problems. Results show that ReNN can be trained more effectively and efficiently compared to the common neural networks and the proposed regularization measure is an effective indicator of how a network would perform in terms of generalization.

  20. Extrapolation limitations of multilayer feedforward neural networks

    NASA Technical Reports Server (NTRS)

    Haley, Pamela J.; Soloway, Donald

    1992-01-01

    The limitations of backpropagation used as a function extrapolator were investigated. Four common functions were used to investigate the network's extrapolation capability. The purpose of the experiment was to determine whether neural networks are capable of extrapolation and, if so, to determine the range for which networks can extrapolate. The authors show that neural networks cannot extrapolate and offer an explanation to support this result.
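
    The finding is easy to reproduce on a toy function. In the sketch below (which uses scikit-learn rather than the original setup), a small feedforward network fit to sin(x) on [-π, π] interpolates accurately but produces large errors when queried well outside the training interval.

```python
# A small demonstration of the point above (a sketch, not the paper's experiment):
# a feedforward network trained on sin(x) inside [-pi, pi] fails to extrapolate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

x_train = rng.uniform(-np.pi, np.pi, size=(500, 1))
y_train = np.sin(x_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(x_train, y_train)

x_inside = np.linspace(-np.pi, np.pi, 100).reshape(-1, 1)
x_outside = np.linspace(2 * np.pi, 3 * np.pi, 100).reshape(-1, 1)

err_in = np.abs(net.predict(x_inside) - np.sin(x_inside).ravel()).mean()
err_out = np.abs(net.predict(x_outside) - np.sin(x_outside).ravel()).mean()
print("mean error inside training range: %.3f, outside: %.3f" % (err_in, err_out))
```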

  1. Problem Specific applications for Neural Networks

    DTIC Science & Technology

    1988-12-01

    ... the network is in use. Three of the most well-known neural networks are the single-layer perceptron, the multi-layer perceptron, and the Kohonen self-organizing map ... three of these networks can accept discrete (binary) or continuous inputs (5:6). The single-layer perceptron (shown in Figure 2) ...

  2. Drift chamber tracking with neural networks

    SciTech Connect

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.

  3. Simulative and experimental investigation on stator winding turn and unbalanced supply voltage fault diagnosis in induction motors using Artificial Neural Networks.

    PubMed

    Lashkari, Negin; Poshtan, Javad; Azgomi, Hamid Fekri

    2015-11-01

    The three-phase shift between line current and phase voltage of induction motors can be used as an efficient fault indicator to detect and locate inter-turn stator short-circuit (ITSC) faults. However, unbalanced supply voltage is one of the contributing factors that inevitably affect stator currents and therefore the three-phase shift. Thus, it is necessary to propose a method that is able to identify whether the unbalance of the three currents is caused by an ITSC or a supply voltage fault. This paper presents a feedforward multilayer-perceptron Neural Network (NN), trained by back propagation, based on monitoring the negative sequence voltage and the three-phase shift. The data required for training and testing the NN are generated using a simulated model of the stator. The experimental results are presented to verify the superior accuracy of the proposed method.

  4. Phase diagram of spiking neural networks

    PubMed Central

    Seyed-allaei, Hamed

    2015-01-01

    In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments, observations, and trial and error, but here I take a different perspective. Inspired by evolution, I systematically simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable. I stimulate networks with pulses and then measure their dynamic range, the dominant frequency of population activities, the total duration of activities, the maximum rate of population activity and the occurrence time of that maximum rate. The results are organized in a phase diagram. This phase diagram gives an insight into the space of parameters: excitatory to inhibitory ratio, sparseness of connections and synaptic weights. This phase diagram can be used to decide the parameters of a model. The phase diagrams show that networks which are configured according to the common values have a good dynamic range in response to an impulse, that their dynamic range is robust with respect to synaptic weights, and that for some synaptic weights they oscillate at α or β frequencies, independent of external stimuli. PMID:25788885
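
    A stripped-down version of such a simulation is sketched below: a pulse-stimulated network of leaky integrate-and-fire neurons with the common 80/20 excitatory-inhibitory split and 2% connection probability, from which a population firing rate is measured. All membrane and synaptic constants are illustrative choices, not the parameter sweep of the study.

```python
# A compact sketch (illustrative constants): pulse-stimulated LIF network with
# 80% excitatory / 20% inhibitory neurons and 2% connection probability.
import numpy as np

rng = np.random.default_rng(5)

N, p_conn, frac_exc = 1000, 0.02, 0.8
dt, T = 0.1, 500.0                      # time step and duration in ms
tau_m, v_rest, v_thresh, v_reset = 20.0, 0.0, 20.0, 0.0
w_exc, w_inh = 0.5, -2.0                # synaptic weights (mV jumps), made up for the sketch

n_exc = int(frac_exc * N)
weights = np.where(np.arange(N) < n_exc, w_exc, w_inh)
conn = rng.random((N, N)) < p_conn      # conn[i, j]: presynaptic j -> postsynaptic i
np.fill_diagonal(conn, False)

v = rng.uniform(v_rest, v_thresh, size=N)
rate_trace = []
for step in range(int(T / dt)):
    t = step * dt
    stim = 30.0 if 100.0 <= t < 102.0 else 0.0         # brief external pulse (mV/ms drive)
    v += dt * (-(v - v_rest) / tau_m + stim)
    spiked = v >= v_thresh
    v[spiked] = v_reset
    # Deliver spikes: each postsynaptic neuron receives the summed weights of its spiking inputs.
    v += conn[:, spiked] @ weights[spiked]
    rate_trace.append(spiked.sum() / (N * dt * 1e-3))  # population rate in Hz

print("peak population rate: %.1f Hz" % max(rate_trace))
```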

  5. Neural Network Classification of Cerebral Embolic Signals

    DTIC Science & Technology

    2007-11-02

    ... application of new signal processing techniques to the analysis and classification of embolic signals. We applied a Wavelet Neural Network algorithm to approximate the embolic signals, with the parameters of the wavelet nodes being used to train a Neural Network to classify these signals as resulting from normal flow, or from gaseous or solid emboli.

  6. Multidisciplinary Studies of Integrated Neural Network Systems

    DTIC Science & Technology

    1994-03-01

    They accomplish this by partitioning the system into functional sub-units in a quasi-hierarchical structure of neural network modules. We studied three specific examples of this system integration strategy and modeled their operation for the purpose of creating new neural network architectures and ...

  7. Neural Network Research: A Personal Perspective,

    DTIC Science & Technology

    1988-03-01

    These vision preprocessor and ART autonomous classifier examples are just two of the many neural network architectures now being developed by...computational theories with natural realizations as real-time adaptive neural network architectures with promising properties for tackling some of the

  8. Neural Network Based Helicopter Low Airspeed Indicator

    DTIC Science & Technology

    1996-10-24

    This invention relates generally to virtual sensors and, more particularly, to a means and method utilizing a neural network for estimating helicopter airspeed at speeds below about 50 knots using only fixed system parameters (i.e., parameters measured or determined in a reference frame fixed relative to the helicopter fuselage) as inputs to the neural network.

  9. Evolving Neural Networks for Nonlinear Control.

    DTIC Science & Technology

    1996-09-30

    An approach to creating Amorphous Recurrent Neural Networks (ARNN) using Genetic Algorithms (GA), called 2pGA, has been developed and shown to be effective in evolving neural networks for the control and stabilization of both linear and nonlinear plants, the optimal control for a nonlinear regulator ...

  10. Neural network based architectures for aerospace applications

    NASA Technical Reports Server (NTRS)

    Ricart, Richard

    1987-01-01

    A brief history of the field of neural networks research is given and some simple concepts are described. In addition, some neural network based avionics research and development programs are reviewed. The need for the United States Air Force and NASA to assume a leadership role in supporting this technology is stressed.

  11. Isolated Speech Recognition Using Artificial Neural Networks

    DTIC Science & Technology

    2007-11-02

    In this project Artificial Neural Networks are used as a research tool to accomplish Automated Speech Recognition of normal speech. A small-size ... the first stage of this work are satisfactory, and thus the application of artificial neural networks in conjunction with cepstral analysis to isolated word recognition holds promise.

  12. Neural network classification - A Bayesian interpretation

    NASA Technical Reports Server (NTRS)

    Wan, Eric A.

    1990-01-01

    The relationship between minimizing a mean squared error and finding the optimal Bayesian classifier is reviewed. This provides a theoretical interpretation for the process by which neural networks are used in classification. A number of confidence measures are proposed to evaluate the performance of the neural network classifier within a statistical framework.
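
    The core of that interpretation fits in two displayed equations; the derivation below is the standard textbook argument, with f denoting the network output and y a 0/1 class indicator.

```latex
% Minimizing the expected squared error over all functions f, using
% E[(f(x) - y)^2] = E_x[(f(x) - E[y|x])^2] + E_x[Var(y|x)],
% shows that the minimizer is the conditional expectation:
\[
  f^{*}(\mathbf{x}) \;=\; \arg\min_{f}\; \mathbb{E}\!\left[(f(\mathbf{x})-y)^{2}\right]
  \;=\; \mathbb{E}\!\left[\,y \mid \mathbf{x}\,\right].
\]
% With a 0/1 target (y = 1 iff x belongs to class \omega_k), that expectation is the posterior:
\[
  f^{*}(\mathbf{x}) \;=\; 1\cdot P(\omega_k \mid \mathbf{x})
  \;+\; 0\cdot\bigl(1-P(\omega_k \mid \mathbf{x})\bigr)
  \;=\; P(\omega_k \mid \mathbf{x}).
\]
```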

  13. Radiation Behavior of Analog Neural Network Chip

    NASA Technical Reports Server (NTRS)

    Langenbacher, H.; Zee, F.; Daud, T.; Thakoor, A.

    1996-01-01

    A neural network experiment was conducted for the Space Technology Research Vehicle STRV-1b, launched in June 1994. Identical sets of analog feed-forward neural network chips were used to study and compare the effects of space and ground radiation on the chips. Three failure mechanisms are noted.

  14. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.

  15. Neural Networks for Handwritten English Alphabet Recognition

    NASA Astrophysics Data System (ADS)

    Perwej, Yusuf; Chaturvedi, Ashish

    2011-04-01

    This paper demonstrates the use of neural networks for developing a system that can recognize hand-written English alphabets. In this system, each English alphabet is represented by binary values that are used as input to a simple feature extraction system, whose output is fed to our neural network system.

  16. A Survey of Neural Network Publications.

    ERIC Educational Resources Information Center

    Vijayaraman, Bindiganavale S.; Osyk, Barbara

    This paper is a survey of publications on artificial neural networks published in business journals for the period ending July 1996. Its purpose is to identify and analyze trends in neural network research during that period. This paper shows which topics have been heavily researched, when these topics were researched, and how that research has…

  17. Applications of Neural Networks in Finance.

    ERIC Educational Resources Information Center

    Crockett, Henry; Morrison, Ronald

    1994-01-01

    Discusses research with neural networks in the area of finance. Highlights include bond pricing, theoretical exposition of primary bond pricing, bond pricing regression model, and an example that created networks with corporate bonds and NeuralWare Neuralworks Professional H software using the back-propagation technique. (LRW)

  18. Neural Network Algorithm for Particle Loading

    SciTech Connect

    J. L. V. Lewandowski

    2003-04-25

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.

  19. Adaptive Neurons For Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In improved mathematical model of neural-network processor, temperature of neurons (in addition to connection strengths, also called weights, of synapses) varied during supervised-learning phase of operation according to mathematical formalism and not heuristic rule. Evidence that biological neural networks also process information at neuronal level.

  20. Neural networks applications to control and computations

    NASA Technical Reports Server (NTRS)

    Luxemburg, Leon A.

    1994-01-01

    Several interrelated problems in the area of neural network computations are described. First an interpolation problem is considered; then a control problem is reduced to a problem of interpolation by a neural network via a Lyapunov function approach; and finally a new method of learning, faster than the gradient descent method, is introduced.

  1. Forecasting Jet Fuel Prices Using Artificial Neural Networks.

    DTIC Science & Technology

    1995-03-01

    Artificial neural networks provide a new approach to commodity forecasting that does not require algorithm or rule development. Neural networks have ... NeuralWare, more people can take advantage of the power of artificial neural networks. This thesis provides an introduction to neural networks, and reviews ...

  2. Introduction to Concepts in Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  3. Identification of hybrid ARX-neural network models for three-dimensional simulation of a vibroacoustic system

    NASA Astrophysics Data System (ADS)

    Magalhães, R. S.; Fontes, C. H. O.; Almeida, L. A. L.; Embiruçu, M.

    2011-10-01

    Acoustic noise in industrial areas, typically generated by compressors and vacuum pumps, may be mitigated by the combined use of passive and active noise control strategies. Despite its widespread use, the traditional Active Noise Control (ANC) technique requires error feedback and has been proven to be effective only within a small spatial region. When the movement of human ears is required within a large region and error feedback is difficult to accomplish, new cancelling strategies have to be devised to achieve acceptable levels of spatial coverage. In the pursuit of this goal, this paper proposes a vibroacoustic model to predict noise radiated from machinery. The model output is the sound signal of the noise at a given point inside a closed room. The two model inputs are the vibration signal at the noise source and the spatial coordinates of the intended point. Experimental output data were measured at several points inside a region defined by a solid rectangle. A fixed-order ARX model was chosen (AutoRegressive with eXogenous input), and for each spatial point and its corresponding pair of input-output signals, a set of parameter values was estimated. To integrate all these models into a single one, a neural network was employed to associate or approximate each set of parameters to its spatial coordinates. With this approach, the total number of parameters is expected to be greatly reduced, when considering the original separated models. Experimental results are presented and comparisons with other models are established on the basis of least-square error metrics and parsimony of parameters. A qualitative perspective for employing the proposed model in the design of large-region ANC strategies is also offered.
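
    The identification strategy can be sketched compactly: a fixed-order ARX model is fitted by least squares at each measurement point, and a small neural network then maps spatial coordinates to the resulting parameter vectors. The signals, geometry, and ARX orders below are synthetic stand-ins, not the vibroacoustic measurements of the paper.

```python
# A condensed sketch of the hybrid ARX-neural network idea with synthetic data:
# per-point least-squares ARX fits, then a network mapping coordinates -> parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
na, nb = 2, 2                                  # assumed ARX orders: 2 past outputs, 2 past inputs

def fit_arx(u, y):
    """Least-squares estimate of ARX parameters [a1, a2, b1, b2]."""
    rows = [np.r_[-y[k-1], -y[k-2], u[k-1], u[k-2]] for k in range(max(na, nb), len(y))]
    Phi, target = np.array(rows), y[max(na, nb):]
    return np.linalg.lstsq(Phi, target, rcond=None)[0]

# Synthetic "measurement points": the true ARX parameters vary smoothly with position.
coords = rng.uniform(0.0, 1.0, size=(40, 3))
params = []
for c in coords:
    a1, a2, b1, b2 = -1.2 + 0.3 * c[0], 0.5 - 0.2 * c[1], 0.8 + 0.4 * c[2], 0.3
    u = rng.normal(size=400)
    y = np.zeros(400)
    for k in range(2, 400):
        y[k] = -a1 * y[k-1] - a2 * y[k-2] + b1 * u[k-1] + b2 * u[k-2] + 0.01 * rng.normal()
    params.append(fit_arx(u, y))
params = np.array(params)

# Integrate the per-point models: a neural network approximates coordinates -> ARX parameters.
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=10000, random_state=0)
net.fit(coords, params)
print("parameter RMSE of the spatial map:", np.sqrt(((net.predict(coords) - params) ** 2).mean()))
```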

  4. The role of cue information in the outcome-density effect: evidence from neural network simulations and a causal learning experiment

    NASA Astrophysics Data System (ADS)

    Musca, Serban C.; Vadillo, Miguel A.; Blanco, Fernando; Matute, Helena

    2010-06-01

    Although normatively irrelevant to the relationship between a cue and an outcome, outcome density (i.e. its base-rate probability) affects people's estimation of causality. The process by which causality is incorrectly estimated is of importance to an integrative theory of causal learning. A potential explanation may be that this happens because outcome density induces a judgement bias. An alternative explanation is explored here, following which the incorrect estimation of causality is grounded in the processing of cue-outcome information during learning. A first neural network simulation shows that, in the absence of a deep processing of cue information, cue-outcome relationships are acquired but causality is correctly estimated. The second simulation shows how an incorrect estimation of causality may emerge from the active processing of both cue and outcome information. In an experiment inspired by the simulations, the role of a deep processing of cue information was put to the test. In addition to an outcome density manipulation, a shallow cue manipulation was introduced: cue information was either still displayed (concurrent) or no longer displayed (delayed) when outcome information was given. Behavioural and simulation results agree: the outcome-density effect was maximal in the concurrent condition. The results are discussed with respect to the extant explanations of the outcome-density effect within the causal learning framework.

  5. A new one-layer neural network for linear and quadratic programming.

    PubMed

    Gao, Xingbao; Liao, Li-Zhi

    2010-06-01

    In this paper, we present a new neural network for solving linear and quadratic programming problems in real time by introducing some new vectors. The proposed neural network is stable in the sense of Lyapunov and can converge to an exact optimal solution of the original problem when the objective function is convex on the set defined by equality constraints. Compared with existing one-layer neural networks for quadratic programming problems, the proposed neural network has the least neurons and requires weak stability conditions. The validity and transient behavior of the proposed neural network are demonstrated by some simulation results.
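
    The flavour of such networks can be conveyed with a classical projection dynamics for a box-constrained convex quadratic program; the sketch below integrates dx/dt = -x + P(x - α(Qx + c)) with Euler steps. This is a generic projection network on toy data, not necessarily the exact one-layer model proposed in the paper.

```python
# A minimal sketch of a projection-type network (a classical variant) solving a
# box-constrained convex QP:  minimize 0.5 x'Qx + c'x  subject to  l <= x <= u.
import numpy as np

Q = np.array([[4.0, 1.0], [1.0, 3.0]])    # symmetric positive definite (toy data)
c = np.array([-8.0, -6.0])
l, u = np.array([0.0, 0.0]), np.array([2.0, 2.0])

def project(x):
    return np.clip(x, l, u)               # projection onto the feasible box

# Network dynamics: dx/dt = -x + P(x - alpha*(Qx + c)), integrated by Euler steps.
x, alpha, dt = np.zeros(2), 0.1, 0.05
for _ in range(2000):
    x = x + dt * (-x + project(x - alpha * (Q @ x + c)))

print("equilibrium (approximate QP solution):", x)
```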

  6. The use of artificial neural networks in experimental data acquisition and aerodynamic design

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J., Jr.

    1991-01-01

    It is proposed that an artificial neural network be used to construct an intelligent data acquisition system. The artificial neural networks (ANN) model has a potential for replacing traditional procedures as well as for use in computational fluid dynamics validation. Potential advantages of the ANN model are listed. As a proof of concept, the author modeled a NACA 0012 airfoil at specific conditions, using the neural network simulator NETS, developed by James Baffes of the NASA Johnson Space Center. The neural network predictions were compared to the actual data. It is concluded that artificial neural networks can provide an elegant and valuable class of mathematical tools for data analysis.

  7. Research on artificial neural network intrusion detection photochemistry based on the improved wavelet analysis and transformation

    NASA Astrophysics Data System (ADS)

    Li, Hong; Ding, Xue

    2017-03-01

    This paper combines wavelet analysis and wavelet transform theory with artificial neural networks by preprocessing point feature attributes before intrusion detection, making them suitable for an improved wavelet neural network. The resulting intrusion classification model gains better adaptability and self-learning ability, greatly enhances the wavelet neural network's ability to detect intrusions in the field, reduces storage space, improves the performance of the constructed neural network, and reduces training time. Finally, simulation experiments on the KDDCup99 data set show that this method not only reduces the complexity of constructing the wavelet neural network but also ensures the accuracy of intrusion classification.

  8. A Complex-Valued Projection Neural Network for Constrained Optimization of Real Functions in Complex Variables.

    PubMed

    Zhang, Songchuan; Xia, Youshen; Wang, Jun

    2015-12-01

    In this paper, we present a complex-valued projection neural network for solving constrained convex optimization problems of real functions with complex variables, as an extension of real-valued projection neural networks. Theoretically, by developing results on complex-valued optimization techniques, we prove that the complex-valued projection neural network is globally stable and convergent to the optimal solution. Obtained results are completely established in the complex domain and thus significantly generalize existing results of the real-valued projection neural networks. Numerical simulations are presented to confirm the obtained results and effectiveness of the proposed complex-valued projection neural network.

  9. Global exponential stability of multitime scale competitive neural networks with nonsmooth functions.

    PubMed

    Lu, Hongtao; Amari, Shun-ichi

    2006-09-01

    In this paper, we study the global exponential stability of a multitime scale competitive neural network model with nonsmooth functions, which models a laterally inhibited neural network with unsupervised Hebbian learning. The network has two types of state variables, one corresponding to the fast neural activity and another to the slow unsupervised modification of connection weights. Based on nonsmooth analysis techniques, we prove the existence and uniqueness of the equilibrium of the system and establish some new theoretical conditions ensuring global exponential stability of the unique equilibrium of the neural network. Numerical simulations are conducted to illustrate the effectiveness of the derived conditions in characterizing stability regions of the neural network.

  10. General regression neural network and Monte Carlo simulation model for survival and growth of Salmonella on raw chicken skin as a function of serotype, temperature and time for use in risk assessment

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A general regression neural network and Monte Carlo simulation model for predicting survival and growth of Salmonella on raw chicken skin as a function of serotype (Typhimurium, Kentucky, Hadar), temperature (5 to 50 °C) and time (0 to 8 h) was developed. Poultry isolates of Salmonella with natural r ...
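
    The GRNN component of such a model is essentially kernel-weighted averaging of the training responses. The sketch below uses made-up inputs (serotype code, temperature, time) and a made-up response; the actual Salmonella data and the Monte Carlo layer are not reproduced.

```python
# A brief GRNN sketch with synthetic data: the prediction is a Gaussian-kernel
# weighted average of the training responses (Nadaraya-Watson form).
import numpy as np

rng = np.random.default_rng(7)

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """GRNN prediction with a Gaussian kernel of width sigma."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return (K @ y_train) / K.sum(axis=1)

# Toy inputs: [serotype code, temperature (C), time (h)]; toy response: log growth.
X = np.column_stack([rng.integers(0, 3, 200),
                     rng.uniform(5, 50, 200),
                     rng.uniform(0, 8, 200)]).astype(float)
y = 0.02 * (X[:, 1] - 5) * X[:, 2] + 0.1 * X[:, 0] + rng.normal(scale=0.05, size=200)

X_new = np.array([[1.0, 37.0, 4.0]])
print("predicted response:", grnn_predict(X, y, X_new, sigma=2.0))
```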

  11. Three-dimensional thinning by neural networks

    NASA Astrophysics Data System (ADS)

    Shen, Jun; Shen, Wei

    1995-10-01

    3D thinning is widely used in 3D object representation in computer vision and in trajectory planning in robotics to find the topological structure of the free space. In the present paper, we propose a 3D image thinning method based on neural networks. Each voxel in the 3D image corresponds to a set of neurons, called a 3D Thinron, in the network. Taking the 3D Thinron as the elementary unit, the global structure of the network is a 3D array in which each Thinron is connected with the 26 neighbors in the 3 X 3 X 3 neighborhood. Within the Thinron itself, the neurons are organized in multiple layers. In the first layer, we have neurons for boundary analysis, connectivity analysis and connectivity verification, taking as input the voxels in the 3 X 3 X 3 neighborhood and the intermediate outputs of neighboring Thinrons. In the second layer, we have the neurons for synthetical analysis to give the intermediate output of the Thinron. In the third layer, we have the decision neurons whose state determines the final output. All neurons in the Thinron are adaline neurons of Widrow, except the connectivity analysis and verification neurons, which are nonlinear neurons. With the 3D Thinron neural network, the state transition of the network takes place automatically, and the network converges to the final steady state, which gives the resulting medial surface of the 3D objects, preserving the connectivity in the initial image. The method presented is simulated and tested on 3D images, and experimental results are reported.

  12. Enhancing neural-network performance via assortativity

    SciTech Connect

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-03-15

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.

  13. Enhancing neural-network performance via assortativity.

    PubMed

    de Franciscis, Sebastiano; Johnson, Samuel; Torres, Joaquín J

    2011-03-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations--assortativity--on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.

  14. Sunspot prediction using neural networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Baffes, Paul

    1990-01-01

    The earliest systematic observation of sunspot activity is known to have been made by the Chinese in 1382, during the Ming Dynasty (1368 to 1644), when spots on the sun were noticed by looking at the sun through thick forest-fire smoke. Not until after the 18th century did sunspot levels become more than a source of wonderment and curiosity. Since 1834 reliable sunspot data have been collected by the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Naval Observatory. Recently, considerable effort has been placed upon the study of the effects of sunspots on the ecosystem and the space environment. The efforts of the Artificial Intelligence Section of the Mission Planning and Analysis Division of the Johnson Space Center involving the prediction of sunspot activity using neural network technologies are described.

  15. Wavelet differential neural network observer.

    PubMed

    Chairez, Isaac

    2009-09-01

    State estimation for uncertain systems affected by external noises is an important problem in control theory. This paper deals with a state observation problem when the dynamic model of a plant contains uncertainties or is completely unknown. A differential neural network (NN) approach is applied in this uninformative situation, but with activation functions described by wavelets. A new learning law, containing an adaptive adjustment rate, is suggested to imply the stability condition for the free parameters of the observer. Nominal weights are adjusted during the preliminary training process using the least mean square (LMS) method. Lyapunov theory is used to obtain the upper bounds for the weights dynamics as well as for the mean squared estimation error. Two numeric examples illustrate this approach: first, a nonlinear electric system governed by Chua's equation, and second, the Lorenz oscillator. Both systems are assumed to be affected by external perturbations and their parameters are unknown.

  16. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2002-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NO{sub x} formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, online, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate

  17. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2004-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate around

  18. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2004-03-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing co-funding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate around

  19. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2003-12-31

    Boiler combustion dynamics change continuously due to several factors, including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits, and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing co-funding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program co-funds projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units identified intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes whose frequency is either dictated by the control room operator or is time-based. The intent of this project is to implement a neural-network-based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Using unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology that can be readily adapted to virtually any pulverized-coal-fired boiler. Through this on-line adaptive technology, neural-network-based systems optimize boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NOx emissions and improve heat rate

  20. Neural networks for damage identification

    SciTech Connect

    Paez, T.L.; Klenke, S.E.

    1997-11-01

    Efforts to optimize the design of mechanical systems for preestablished use environments and to extend the durations of use cycles establish a need for in-service health monitoring. Numerous studies have proposed measures of structural response for the identification of structural damage, but few have suggested systematic techniques to guide the decision as to whether or not damage has occurred based on real data. Such techniques are necessary because in field applications the environments in which systems operate and the measurements that characterize system behavior are random. This paper investigates the use of artificial neural networks (ANNs) to identify damage in mechanical systems. Two probabilistic neural networks (PNNs) are developed and used to judge whether or not damage has occurred in a specific mechanical system, based on experimental measurements. The first PNN is a classical type that casts Bayesian decision analysis into an ANN framework; it uses exemplars measured from the undamaged and damaged system to establish whether system response measurements of unknown origin come from the former class (undamaged) or the latter class (damaged). The second PNN establishes the character of the undamaged system in terms of a kernel density estimator of measures of system response; when presented with system response measures of unknown origin, it makes a probabilistic judgment whether or not the data come from the undamaged population. The physical system used to carry out the experiments is an aerospace system component, and the environment used to excite the system is a stationary random vibration. The results of damage identification experiments are presented along with conclusions rating the effectiveness of the approaches.
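
    A minimal NumPy sketch of the first of the two approaches described above, a classical probabilistic neural network in which class-conditional densities are estimated with Gaussian kernels on exemplars and a Bayes rule picks the class; the features, bandwidth, and priors below are illustrative placeholders, not the paper's aerospace measurements.

    # Minimal sketch of a classical probabilistic neural network (PNN):
    # class-conditional densities are estimated with Gaussian kernels centred
    # on training exemplars, and the Bayes rule picks the larger density.
    # Data, bandwidth and class priors are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "undamaged" (class 0) and "damaged" (class 1) response features.
    undamaged = rng.normal(loc=0.0, scale=1.0, size=(200, 4))
    damaged = rng.normal(loc=1.5, scale=1.2, size=(200, 4))

    def parzen_density(x, exemplars, sigma):
        """Average of isotropic Gaussian kernels centred on the exemplars."""
        d2 = np.sum((exemplars - x) ** 2, axis=1)
        dim = exemplars.shape[1]
        norm = (2.0 * np.pi * sigma**2) ** (dim / 2.0)
        return np.mean(np.exp(-d2 / (2.0 * sigma**2))) / norm

    def pnn_classify(x, classes, priors, sigma=0.5):
        """Return the class index with the largest prior-weighted density."""
        scores = [p * parzen_density(x, c, sigma) for c, p in zip(classes, priors)]
        return int(np.argmax(scores))

    # Classify a new measurement of unknown origin.
    x_new = rng.normal(loc=1.4, scale=1.0, size=4)
    label = pnn_classify(x_new, [undamaged, damaged], priors=[0.5, 0.5])
    print("damaged" if label == 1 else "undamaged")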

  1. Efficient Training of Recurrent Neural Network with Time Delays.

    PubMed

    Marom, Emanuel; Saad, David; Cohen, Barak

    1997-01-01

    Training recurrent neural networks to perform certain tasks is known to be difficult, and the possibility of adding synaptic delays to the network makes the training task more difficult still. However, the disadvantage of a tougher training procedure is offset by the improved network performance. In our research on training neural networks with time delays, we found a robust method for accomplishing the training task. The method is based on the adaptive simulated annealing (ASA) algorithm, which was found to be superior to other training algorithms. It requires no tuning and is fast enough to allow training to be carried out on low-end platforms such as personal computers. The implementation of the algorithm is demonstrated on a set of typical benchmark tests for training recurrent neural networks with time delays. Copyright 1996 Elsevier Science Ltd.
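
    The abstract does not spell out the ASA implementation, so the sketch below uses plain simulated annealing (not the full adaptive variant) to train a tiny recurrent network on a delayed-copy task; the network size, task, proposal width, and cooling schedule are all assumptions.

    # Minimal sketch: training a tiny recurrent network on a delayed-copy task
    # with plain simulated annealing (the paper uses the adaptive variant, ASA;
    # the network size, task and geometric cooling schedule are illustrative).
    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_hid, delay, T_steps = 1, 8, 2, 50

    def unpack(theta):
        """Split the flat parameter vector into input, recurrent and output weights."""
        i = 0
        Wx = theta[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
        Wh = theta[i:i + n_hid * n_hid].reshape(n_hid, n_hid); i += n_hid * n_hid
        Wo = theta[i:].reshape(1, n_hid)
        return Wx, Wh, Wo

    def loss(theta, x, target):
        """Mean squared error of the recurrent net over one sequence."""
        Wx, Wh, Wo = unpack(theta)
        h = np.zeros(n_hid)
        err = 0.0
        for t in range(T_steps):
            h = np.tanh(Wx @ x[t] + Wh @ h)
            err += ((Wo @ h)[0] - target[t]) ** 2
        return err / T_steps

    # Delayed-copy task: reproduce the input seen `delay` steps earlier.
    x = rng.uniform(-1, 1, size=(T_steps, n_in))
    target = np.concatenate([np.zeros(delay), x[:-delay, 0]])

    n_params = n_hid * n_in + n_hid * n_hid + n_hid
    current = rng.normal(0, 0.3, size=n_params)
    current_loss = loss(current, x, target)
    best, best_loss = current.copy(), current_loss
    temp = 1.0
    for step in range(5000):
        cand = current + rng.normal(0, 0.1, size=n_params)
        cand_loss = loss(cand, x, target)
        # Metropolis acceptance: always take improvements, sometimes take worse moves.
        if cand_loss < current_loss or rng.random() < np.exp((current_loss - cand_loss) / temp):
            current, current_loss = cand, cand_loss
            if cand_loss < best_loss:
                best, best_loss = cand.copy(), cand_loss
        temp *= 0.999
    print(f"best MSE found: {best_loss:.4f}")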

  2. VLSI Cells Placement Using the Neural Networks

    SciTech Connect

    Azizi, Hacene; Zouaoui, Lamri; Mokhnache, Salah

    2008-06-12

    Artificial neural networks have been studied for several years, and their effectiveness allows high performance to be expected. The privileged fields of these techniques remain recognition and classification. Various optimization applications have also been studied from the artificial neural network perspective, since such networks make it possible to apply distributed heuristic algorithms. In this article, a solution to the problem of placing the various cells during the realization of an integrated circuit is proposed using the Kohonen network.

  3. CAISSON: Interconnect Network Simulator

    NASA Technical Reports Server (NTRS)

    Springer, Paul L.

    2006-01-01

    CAISSON is Cray's response to the HPCS initiative: a model of a future petaflop computer's interconnect built on parallel discrete-event simulation techniques for large-scale network simulation. It is implemented on the WarpIV engine and runs on platforms ranging from a laptop to an Altix 3000, scaling up to 1000 simulated nodes per host node with good parallel scaling characteristics. The simulator is flexible, supporting multiple injectors, arbitration strategies, queue iterators, and network topologies.

  4. Deterministic chaos control in neural networks on various topologies

    NASA Astrophysics Data System (ADS)

    Neto, A. J. F.; Lima, F. W. S.

    2017-01-01

    Using numerical simulations, we study the control of deterministic chaos in neural networks on various topologies like Voronoi-Delaunay, Barabási-Albert, Small-World networks and Erdös-Rényi random graphs by "pinning" the state of a "special" neuron. We show that the chaotic activity of the networks or graphs, when control is on, can become constant or periodic.

  5. Artificial neural network for multifunctional areas.

    PubMed

    Riccioli, Francesco; El Asmar, Toufic; El Asmar, Jean-Pierre; Fagarazzi, Claudio; Casini, Leonardo

    2016-01-01

    The issues related to appropriate territorial planning are particularly pronounced in highly inhabited (urban) areas, where, in addition to protecting the environment, it is important to place anthropogenic (urban) development in the context of sustainable growth. This work aims to mathematically simulate changes in land use by implementing an artificial neural network (ANN) model. More specifically, it analyzes how urban areas will expand and whether this development would impact areas with particular socioeconomic and environmental value, defined as multifunctional areas. The simulation is applied to the Chianti Area, located in the province of Florence, in Italy. Chianti is an area with a unique landscape, and its territorial planning requires a careful examination of the territory in which it is inserted.

  6. Artificial neural network modelling of biological oxygen demand in rivers at the national level with input selection based on Monte Carlo simulations.

    PubMed

    Šiljić, Aleksandra; Antanasijević, Davor; Perić-Grujić, Aleksandra; Ristić, Mirjana; Pocajt, Viktor

    2015-03-01

    Biological oxygen demand (BOD) is the most significant water quality parameter and indicates water pollution with respect to the present biodegradable organic matter content. European countries are therefore obliged to report annual BOD values to Eurostat; however, BOD data at the national level is only available for 28 of 35 listed European countries for the period prior to 2008, among which 46% of data is missing. This paper describes the development of an artificial neural network model for the forecasting of annual BOD values at the national level, using widely available sustainability and economic/industrial parameters as inputs. The initial general regression neural network (GRNN) model was trained, validated and tested utilizing 20 inputs. The number of inputs was reduced to 15 using the Monte Carlo simulation technique as the input selection method. The best results were achieved with the GRNN model utilizing 25% fewer inputs than the initial model, and a comparison with a multiple linear regression model, trained and tested on the same input variables and evaluated with multiple statistical performance indicators, confirmed the advantage of the GRNN model. Sensitivity analysis has shown that the inputs with the greatest effect on the GRNN model were (in descending order) precipitation, rural population with access to improved water sources, treatment capacity of wastewater treatment plants (urban) and treatment of municipal waste, with the last two having an equal effect. Finally, it was concluded that the developed GRNN model can be useful as a tool to support the decision-making process on sustainable development at a regional, national and international level.
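
    A minimal NumPy sketch of the general regression neural network (GRNN) at the core of such a model: the prediction is a Gaussian-kernel-weighted average of the training targets. The synthetic inputs below stand in for the 15 selected sustainability/economic indicators, and the bandwidth is an assumption.

    # Minimal sketch of a general regression neural network (GRNN): the
    # prediction is a kernel-weighted average of training targets.  Synthetic
    # data stand in for the sustainability/economic indicators in the paper.
    import numpy as np

    rng = np.random.default_rng(2)
    X_train = rng.normal(size=(100, 15))            # 15 selected inputs
    y_train = X_train[:, 0] * 2.0 + np.sin(X_train[:, 1]) + rng.normal(0, 0.1, 100)

    def grnn_predict(x, X, y, sigma=0.8):
        """Nadaraya-Watson estimate with an isotropic Gaussian kernel."""
        d2 = np.sum((X - x) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma**2))
        return np.sum(w * y) / np.sum(w)

    x_new = rng.normal(size=15)
    print(f"predicted BOD-like target: {grnn_predict(x_new, X_train, y_train):.3f}")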

  7. Adaptive Neural Network Based Control of Noncanonical Nonlinear Systems.

    PubMed

    Zhang, Yanjun; Tao, Gang; Chen, Mou

    2016-09-01

    This paper presents a new study on the adaptive neural network-based control of a class of noncanonical nonlinear systems with large parametric uncertainties. Unlike commonly studied canonical form nonlinear systems whose neural network approximation system models have explicit relative degree structures, which can directly be used to derive parameterized controllers for adaptation, noncanonical form nonlinear systems usually do not have explicit relative degrees, and thus their approximation system models are also in noncanonical forms. It is well-known that the adaptive control of noncanonical form nonlinear systems involves the parameterization of system dynamics. As demonstrated in this paper, it is also the case for noncanonical neural network approximation system models. Effective control of such systems is an open research problem, especially in the presence of uncertain parameters. This paper shows that it is necessary to reparameterize such neural network system models for adaptive control design, and that such reparameterization can be realized using a relative degree formulation, a concept yet to be studied for general neural network system models. This paper then derives the parameterized controllers that guarantee closed-loop stability and asymptotic output tracking for noncanonical form neural network system models. An illustrative example is presented with the simulation results to demonstrate the control design procedure, and to verify the effectiveness of such a new design method.

  8. Neural networks within multi-core optic fibers

    PubMed Central

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-01-01

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks. PMID:27383911

  9. Neural networks within multi-core optic fibers

    NASA Astrophysics Data System (ADS)

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-07-01

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks.

  10. Neural networks within multi-core optic fibers.

    PubMed

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-07-07

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks.

  11. Neural network regulation driven by autonomous neural firings

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified due to neural firings. Here, we study the effect of autonomous firings on network formation. For the temporally asymmetric Hebbian learning, bidirectional connections lose their balance easily and become unidirectional ones. Defining the difference between reciprocal connections as new variables, we could express the learning dynamics as if Ising model spins interact with each other in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.

  12. ANNarchy: a code generation approach to neural simulations on parallel hardware

    PubMed Central

    Vitay, Julien; Dinkelbach, Helge Ü.; Hamker, Fred H.

    2015-01-01

    Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which makes it easy to define and simulate rate-coded and spiking networks, as well as combinations of both. The interface in Python has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions. PMID:26283957
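
    The following is not the ANNarchy API; it is a plain-NumPy sketch of the kind of rate-coded dynamics (a leaky firing-rate equation integrated with forward Euler) that an equation-oriented simulator of this kind turns into generated, parallelised C++ update loops. The network size, weights, and time constant are arbitrary.

    # Not the ANNarchy API: a plain-NumPy sketch of the rate-coded dynamics
    # (tau dr/dt = -r + f(W r + I)) that an equation-oriented simulator
    # compiles into efficient update loops.
    import numpy as np

    rng = np.random.default_rng(3)
    n, tau, dt, steps = 50, 10.0, 0.1, 2000

    W = rng.normal(0, 1.0 / np.sqrt(n), size=(n, n))   # random recurrent weights
    I = rng.uniform(0, 1, size=n)                      # constant external drive
    r = np.zeros(n)

    for _ in range(steps):
        # Forward-Euler step of the leaky firing-rate equation.
        r += dt / tau * (-r + np.tanh(W @ r + I))

    print(f"mean steady-state rate: {r.mean():.3f}")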

  13. Object detection using pulse coupled neural networks.

    PubMed

    Ranganath, H S; Kuntimad, G

    1999-01-01

    This paper describes an object detection system based on pulse coupled neural networks. The system is designed and implemented to illustrate the power, flexibility and potential that pulse coupled neural networks have in real-time image processing. In the preprocessing stage, a pulse coupled neural network suppresses noise by smoothing the input image. In the segmentation stage, a second pulse coupled neural network iteratively segments the input image. During each iteration, with the help of a control module, the segmentation network deletes regions that do not satisfy the retention criteria from further processing and produces an improved segmentation of the retained image. In the final stage each group of connected regions that satisfies the detection criteria is identified as an instance of the object of interest.
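
    A minimal sketch of the standard pulse-coupled neuron iteration (feeding input F, linking input L, internal activity U, dynamic threshold) run on a synthetic image; the kernel and all decay/linking parameters are illustrative, not those of the described system.

    # Minimal sketch of the standard pulse-coupled neural network (PCNN)
    # iteration applied to a synthetic image; parameters are illustrative.
    import numpy as np
    from scipy.ndimage import convolve

    rng = np.random.default_rng(4)
    img = np.zeros((64, 64))
    img[20:40, 20:40] = 1.0                      # a bright square "object"
    img += rng.normal(0, 0.1, img.shape)         # additive noise

    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    aF, aL, aT = 0.1, 0.3, 0.2                   # decay constants
    vF, vL, vT = 0.1, 0.2, 20.0                  # magnitude scalings
    beta = 0.2                                   # linking strength

    F = np.zeros_like(img); L = np.zeros_like(img)
    Y = np.zeros_like(img); theta = np.ones_like(img)

    for n in range(10):
        link = convolve(Y, kernel, mode="constant")
        F = np.exp(-aF) * F + vF * link + img    # feeding input
        L = np.exp(-aL) * L + vL * link          # linking input
        U = F * (1.0 + beta * L)                 # internal activity
        Y = (U > theta).astype(float)            # pulse output
        theta = np.exp(-aT) * theta + vT * Y     # threshold rises after a pulse
        print(f"iteration {n}: {int(Y.sum())} neurons fired")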

  14. A neural network prototyping package within IRAF

    NASA Technical Reports Server (NTRS)

    Bazell, D.; Bankman, I.

    1992-01-01

    We outline our plans for incorporating a Neural Network Prototyping Package into the IRAF environment. The package we are developing will allow the user to choose between different types of networks and to specify the details of the particular architecture chosen. Neural networks consist of a highly interconnected set of simple processing units. The strengths of the connections between units are determined by weights which are adaptively set as the network 'learns'. In some cases, learning can be a separate phase of the user cycle of the network while in other cases the network learns continuously. Neural networks have been found to be very useful in pattern recognition and image processing applications. They can form very general 'decision boundaries' to differentiate between objects in pattern space and they can be used for associative recall of patterns based on partial cues and for adaptive filtering. We discuss the different architectures we plan to use and give examples of what they can do.

  15. Description of interatomic interactions with neural networks

    NASA Astrophysics Data System (ADS)

    Hajinazar, Samad; Shao, Junping; Kolmogorov, Aleksey N.

    Neural networks are a promising alternative to traditional classical potentials for describing interatomic interactions. Recent research in the field has demonstrated how arbitrary atomic environments can be represented with sets of general functions which serve as an input for the machine learning tool. We have implemented a neural network formalism in the MAISE package and developed a protocol for automated generation of accurate models for multi-component systems. Our tests illustrate the performance of neural networks and known classical potentials for a range of chemical compositions and atomic configurations. Supported by NSF Grant DMR-1410514.

  16. Genetic algorithm for neural networks optimization

    NASA Astrophysics Data System (ADS)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed-forward neural networks in modeling a particular foreign exchange rate, the Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and a Genetic Algorithm, with the neural network topology and other parameters held fixed, were investigated. The early results indicate that this hybrid approach seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB.
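
    A minimal NumPy sketch of the hybrid idea, a genetic algorithm evolving the weights of a fixed-topology feed-forward network on a synthetic exchange-rate-like series; the population size, uniform crossover, mutation rate, and windowed-returns setup are assumptions, not details from the paper.

    # Minimal sketch of a genetic algorithm evolving the weights of a fixed
    # feed-forward network on a synthetic exchange-rate-like series.
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic daily "rate" series; work with day-to-day changes so the
    # inputs and targets are small and zero-mean.
    series = np.cumsum(rng.normal(0, 0.01, 500)) + 100.0
    changes = np.diff(series)
    window = 5
    X = np.array([changes[i:i + window] for i in range(len(changes) - window)])
    y = changes[window:]

    n_hid = 6
    n_params = window * n_hid + n_hid + n_hid + 1    # W1, b1, W2, b2

    def predict(theta, X):
        W1 = theta[:window * n_hid].reshape(window, n_hid)
        b1 = theta[window * n_hid:window * n_hid + n_hid]
        W2 = theta[-n_hid - 1:-1]
        b2 = theta[-1]
        return np.tanh(X @ W1 + b1) @ W2 + b2

    def fitness(theta):
        return -np.mean((predict(theta, X) - y) ** 2)  # higher is better

    pop = rng.normal(0, 0.5, size=(40, n_params))
    for gen in range(200):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-20:]]        # truncation selection
        children = []
        for _ in range(20):
            a, b = parents[rng.integers(20)], parents[rng.integers(20)]
            mask = rng.random(n_params) < 0.5          # uniform crossover
            child = np.where(mask, a, b)
            child += rng.normal(0, 0.05, n_params) * (rng.random(n_params) < 0.1)
            children.append(child)
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print(f"best RMSE on the synthetic series: {np.sqrt(-fitness(best)):.4f}")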

  17. Noise cancellation of memristive neural networks.

    PubMed

    Wen, Shiping; Zeng, Zhigang; Huang, Tingwen; Yu, Xinghuo

    2014-12-01

    This paper investigates the noise cancellation problem for memristive neural networks. Based on the reproducible gradual resistance tuning in bipolar mode, a first-order voltage-controlled memristive model with asymmetric voltage thresholds is employed. Since memristive devices are small enough to be densely packed in crossbar-like structures and possess the long-term memory needed by neuromorphic synapses, this paper shows how to approximate the behavior of synapses in neural networks using this memristive device. Certain templates of memristive neural networks are also established to implement the noise cancellation.

  18. Neural network with formed dynamics of activity

    SciTech Connect

    Dunin-Barkovskii, V.L.; Osovets, N.B.

    1995-03-01

    The problem of developing a neural network with a given pattern of the state sequence is considered. A neural network structure and an algorithm for forming its bond matrix, which lead to an approximate but robust solution of the problem, are proposed and discussed. Limiting characteristics of the serviceability of the proposed structure are studied. Various methods of visualizing dynamic processes in a neural network are compared. Possible applications of the results obtained for the interpretation of neurophysiological data and in neuroinformatics systems are discussed.

  19. Neural networks techniques applied to reservoir engineering

    SciTech Connect

    Flores, M.; Barragan, C.

    1995-12-31

    Neural networks are considered the greatest technological advance since the transistor and are expected to be a common household item by the year 2000. An attempt has been made to apply neural networks to an important geothermal problem: predicting well production and well completion during drilling in a geothermal field. This was done in the Los Humeros geothermal field, using two common types of neural network models available in commercial software. Results show the learning capacity of the developed model and its precision in the predictions that were made.

  20. Stock market index prediction using neural networks

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok

    1994-03-01

    A neural network approach to stock market index prediction is presented. Actual data from the Wall Street Journal's Dow Jones Industrial Index have been used as a benchmark in our experiments, where Radial Basis Function based neural networks have been designed to model these indices over the period from January 1988 to December 1992. A notable success has been achieved, with the proposed model producing over 90% prediction accuracy on monthly Dow Jones Industrial Index predictions. The model has also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrate that the Radial Basis Function neural network is an excellent candidate for predicting a stock market index.
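
    A hedged sketch of a radial-basis-function network of the kind described, fitted by linear least squares to a synthetic monthly index series; the centres, width heuristic, and lag structure are assumptions, and the real Dow Jones data are not reproduced here.

    # Minimal sketch of a radial-basis-function (RBF) network fitted by linear
    # least squares to a synthetic monthly index series.
    import numpy as np

    rng = np.random.default_rng(6)
    t = np.arange(60, dtype=float)                       # 60 "months"
    index = 2500 + 20 * t + 150 * np.sin(t / 6.0) + rng.normal(0, 30, t.size)

    # Inputs: previous 3 monthly values; target: next month's value.
    lag = 3
    X = np.array([index[i:i + lag] for i in range(len(index) - lag)])
    y = index[lag:]

    # Pick RBF centres as a random subset of the training inputs.
    centres = X[rng.choice(len(X), size=10, replace=False)]
    width = np.mean(np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2))

    def design(X):
        """Gaussian RBF features plus a bias column."""
        d2 = np.sum((X[:, None, :] - centres[None, :, :]) ** 2, axis=2)
        Phi = np.exp(-d2 / (2.0 * width**2))
        return np.hstack([Phi, np.ones((len(X), 1))])

    w, *_ = np.linalg.lstsq(design(X), y, rcond=None)
    pred = design(X) @ w
    print(f"in-sample RMSE: {np.sqrt(np.mean((pred - y) ** 2)):.1f} index points")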

  1. Threshold control of chaotic neural network.

    PubMed

    He, Guoguang; Shrimali, Manish Dev; Aihara, Kazuyuki

    2008-01-01

    The chaotic neural network constructed with chaotic neurons exhibits rich dynamic behaviour with a nonperiodic associative memory. In the chaotic neural network, however, it is difficult to distinguish the stored patterns in the output patterns because of the chaotic state of the network. In order to apply the nonperiodic associative memory to information search, pattern recognition, etc., it is necessary to control chaos in the chaotic neural network. We have studied the chaotic neural network with threshold-activated coupling, which provides a controlled network with associative memory dynamics. The network converges to the stored pattern and/or reverse pattern that has the smallest Hamming distance from the initial state of the network. The range of the threshold applied to control the neurons in the network depends on the noise level in the initial pattern and decreases with increasing noise. Chaos control in the chaotic neural network by threshold-activated coupling at varying time intervals provides controlled output patterns with different temporal periods, which depend upon the control parameters.

  2. Neural Networks for Signal Processing and Control

    NASA Astrophysics Data System (ADS)

    Hesselroth, Ted Daniel

    Neural networks are developed for controlling a robot-arm and camera system and for processing images. The networks are based upon computational schemes that may be found in the brain. In the first network, a neural map algorithm is employed to control a five-joint pneumatic robot arm and gripper through feedback from two video cameras. The pneumatically driven robot arm employed shares essential mechanical characteristics with skeletal muscle systems. To control the position of the arm, 200 neurons formed a network representing the three-dimensional workspace embedded in a four-dimensional system of coordinates from the two cameras, and learned a set of pressures corresponding to the end effector positions, as well as a set of Jacobian matrices for interpolating between these positions. Because of the properties of the rubber-tube actuators of the arm, the position as a function of supplied pressure is nonlinear, nonseparable, and exhibits hysteresis. Nevertheless, through the neural network learning algorithm the position could be controlled to an accuracy of about one pixel (~3 mm) after two hundred learning steps. Application of repeated corrections in each step via the Jacobian matrices leads to a very robust control algorithm, since the Jacobians learned by the network have to satisfy the weak requirement that they yield a reduction of the distance between gripper and target. The second network is proposed as a model for the mammalian vision system in which backward connections from the primary visual cortex (V1) to the lateral geniculate nucleus play a key role. The application of Hebbian learning to the forward and backward connections causes the formation of receptive fields which are sensitive to edges, bars, and spatial frequencies of preferred orientations. The receptive fields are learned in such a way as to maximize the rate of transfer of information from the LGN to V1. Orientational preferences are organized into a feature map in the primary visual

  3. An artificial neural network controller for intelligent transportation systems applications

    SciTech Connect

    Vitela, J.E.; Hanebutte, U.R.; Reifman, J.

    1996-04-01

    An Autonomous Intelligent Cruise Control (AICC) has been designed using a feedforward artificial neural network, as an example for utilizing artificial neural networks for nonlinear control problems arising in intelligent transportation systems applications. The AICC is based on a simple nonlinear model of the vehicle dynamics. A Neural Network Controller (NNC) code developed at Argonne National Laboratory to control discrete dynamical systems was used for this purpose. In order to test the NNC, an AICC-simulator containing graphical displays was developed for a system of two vehicles driving in a single lane. Two simulation cases are shown, one involving a lead vehicle with constant velocity and the other a lead vehicle with varying acceleration. More realistic vehicle dynamic models will be considered in future work.

  4. Gradient calculations for dynamic recurrent neural networks: a survey.

    PubMed

    Pearlmutter, B A

    1995-01-01

    Surveys learning algorithms for recurrent neural networks with hidden units and puts the various techniques into a common framework. The author discusses fixed-point learning algorithms, namely recurrent backpropagation and deterministic Boltzmann machines, and non-fixed-point algorithms, namely backpropagation through time, Elman's history cutoff, and Jordan's output feedback architecture. Forward propagation, an on-line technique that uses adjoint equations, and variations thereof, are also discussed. In many cases, the unified presentation leads to generalizations of various sorts. The author discusses advantages and disadvantages of temporally continuous neural networks in contrast to clocked ones, and continues with some "tricks of the trade" for training, using, and simulating continuous-time and recurrent neural networks. The author presents some simulations and, at the end, addresses issues of computational complexity and learning speed.

  5. Nonequilibrium landscape theory of neural networks.

    PubMed

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-11-05

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments.
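
    For readers who want the formal backbone, the landscape-flux decomposition on which this line of work rests is usually written as below; this is the general form from the literature, reproduced from memory rather than transcribed from the paper.

    % For stochastic dynamics dx/dt = F(x) + xi(t) with diffusion matrix D,
    % the steady state of the Fokker-Planck equation satisfies div(J_ss) = 0, with
    \begin{align}
      J_{ss}(x) &= F(x)\,P_{ss}(x) - \nabla\cdot\bigl(D\,P_{ss}(x)\bigr), \\
      U(x)      &= -\ln P_{ss}(x), \\
      F(x)      &= -D\,\nabla U(x) + \frac{J_{ss}(x)}{P_{ss}(x)} + \nabla\cdot D,
    \end{align}
    % so the driving force splits into the gradient of the potential landscape U
    % and a curl-like steady-state flux term, which vanishes only at equilibrium.

    The flux term J_ss/P_ss is the piece the abstract associates with asymmetric connections and the emergence of neural oscillations.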

  6. A neural network model of harmonic detection

    NASA Astrophysics Data System (ADS)

    Lewis, Clifford F.

    2003-04-01

    Harmonic detection theories postulate that a virtual pitch is perceived when a sufficient number of harmonics is present. The harmonics need not be consecutive, but higher harmonics contribute less than lower harmonics [J. Raatgever and F. A. Bilsen, in Auditory Physiology and Perception, edited by Y. Cazals, K. Horner, and L. Demany (Pergamon, Oxford, 1992), pp. 215-222; M. K. McBeath and J. F. Wayand, Abstracts of the Psychonom. Soc. 3, 55 (1998)]. A neural network model is presented that has the potential to simulate this operation. Harmonics are first passed through a bank of rounded exponential filters with lateral inhibition. The results are used as inputs for an autoassociator neural network. The model is trained using harmonic data for symphonic musical instruments, in order to test whether it can self-organize by learning associations between co-occurring harmonics. It is shown that the trained model can complete the pattern for missing-fundamental sounds. The performance of the model in harmonic detection will be compared with experimental results for humans.

  7. A gentle introduction to artificial neural networks.

    PubMed

    Zhang, Zhongheng

    2016-10-01

    Artificial neural network (ANN) is a flexible and powerful machine learning technique. However, it is underutilized in clinical medicine because of its technical challenges. This article introduces some basic ideas behind ANNs and shows how to build an ANN using R in a step-by-step framework. In topology and function, an ANN is analogous to the human brain. There are input and output signals transmitting from input to output nodes. Input signals are weighted before reaching output nodes according to their respective importance, and the combined signal is then processed by an activation function. I simulated a simple example to illustrate how to build a simple ANN model using the nnet() function. This function allows for one hidden layer with a varying number of units in that layer. The basic structure of the ANN can be visualized with the plug-in plot.nnet() function. The plot function is powerful in that it allows for a variety of adjustments to the appearance of the neural network. Prediction with an ANN can be performed with the predict() function, similar to that of conventional generalized linear models. Finally, the prediction power of the ANN is examined using a confusion matrix and average accuracy. It appears that the ANN is slightly better than the conventional linear model.
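
    The article's worked example uses R's nnet() and predict(); the sketch below is a rough Python analogue of the same workflow (one hidden layer, prediction, confusion matrix, accuracy) using scikit-learn, with the iris data and a five-unit hidden layer chosen arbitrarily rather than taken from the article.

    # Python analogue of the single-hidden-layer workflow described in R:
    # fit, predict, confusion matrix, accuracy (scikit-learn assumed available).
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import confusion_matrix, accuracy_score

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # One hidden layer with a handful of units, analogous to nnet's single hidden layer.
    clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)
    clf.fit(X_tr, y_tr)

    y_hat = clf.predict(X_te)
    print(confusion_matrix(y_te, y_hat))
    print(f"accuracy: {accuracy_score(y_te, y_hat):.3f}")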

  8. Neural networks for convex hull computation.

    PubMed

    Leung, Y; Zhang, J S; Xu, Z B

    1997-01-01

    Computing convex hull is one of the central problems in various applications of computational geometry. In this paper, a convex hull computing neural network (CHCNN) is developed to solve the related problems in the N-dimensional spaces. The algorithm is based on a two-layered neural network, topologically similar to ART, with a newly developed adaptive training strategy called excited learning. The CHCNN provides a parallel online and real-time processing of data which, after training, yields two closely related approximations, one from within and one from outside, of the desired convex hull. It is shown that accuracy of the approximate convex hulls obtained is around O[K(-1)(N-1/)], where K is the number of neurons in the output layer of the CHCNN. When K is taken to be sufficiently large, the CHCNN can generate any accurate approximate convex hull. We also show that an upper bound exists such that the CHCNN will yield the precise convex hull when K is larger than or equal to this bound. A series of simulations and applications is provided to demonstrate the feasibility, effectiveness, and high efficiency of the proposed algorithm.

  9. Adaptive Optimization of Aircraft Engine Performance Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Long, Theresa W.

    1995-01-01

    Preliminary results are presented on the development of an adaptive neural network based control algorithm to enhance aircraft engine performance. This work builds upon a previous National Aeronautics and Space Administration (NASA) effort known as Performance Seeking Control (PSC). PSC is an adaptive control algorithm which contains a model of the aircraft's propulsion system which is updated on-line to match the operation of the aircraft's actual propulsion system. Information from the on-line model is used to adapt the control system during flight to allow optimal operation of the aircraft's propulsion system (inlet, engine, and nozzle) to improve aircraft engine performance without compromising reliability or operability. Performance Seeking Control has been shown to yield reductions in fuel flow, increases in thrust, and reductions in engine fan turbine inlet temperature. The neural network based adaptive control, like PSC, will contain a model of the propulsion system which will be used to calculate optimal control commands on-line. Hopes are that it will be able to provide some additional benefits above and beyond those of PSC. The PSC algorithm is computationally intensive, it is valid only at near steady-state flight conditions, and it has no way to adapt or learn on-line. These issues are being addressed in the development of the optimal neural controller. Specialized neural network processing hardware is being developed to run the software, the algorithm will be valid at steady-state and transient conditions, and will take advantage of the on-line learning capability of neural networks. Future plans include testing the neural network software and hardware prototype against an aircraft engine simulation. In this paper, the proposed neural network software and hardware is described and preliminary neural network training results are presented.

  10. Optimization of a polymer composite employing molecular mechanic simulations and artificial neural networks for a novel intravaginal bioadhesive drug delivery device.

    PubMed

    Ndesendo, Valence M K; Pillay, Viness; Choonara, Yahya E; du Toit, Lisa C; Kumar, Pradeep; Buchmann, Eckhart; Meyer, Leith C R; Khan, Riaz A

    2012-01-01

    This study aimed at elucidating an optimal synergistic polymer composite for achieving a desirable molecular bioadhesivity and Matrix Erosion of a bioactive-loaded Intravaginal Bioadhesive Polymeric Device (IBPD) employing Molecular Mechanic Simulations and Artificial Neural Networks (ANN). Fifteen lead caplet-shaped devices were formulated by direct compression with the model bioactives zidovudine and polystyrene sulfonate. The Matrix Erosion was analyzed in simulated vaginal fluid to assess the critical integrity. Blueprinting the molecular mechanics of bioadhesion between vaginal epithelial glycoprotein (EGP), mucin (MUC) and the IBPD were performed on HyperChem 8.0.8 software (MM+ and AMBER force fields) for the quantification and characterization of correlative molecular interactions during molecular bioadhesion. Results proved that the IBPD bioadhesivity was pivoted on the conformation, orientation, and poly(acrylic acid) (PAA) composition that interacted with EGP and MUC present on the vaginal epithelium due to heterogeneous surface residue distributions (free energy = -46.33 kcal mol^-1). ANN sensitivity testing as a connectionist model enabled strategic polymer selection for developing an IBPD with an optimally prolonged Matrix Erosion and superior molecular bioadhesivity (ME = 1.21-7.68%; BHN = 2.687-4.981 N/mm^2). Molecular modeling aptly supported the EGP-MUC-PAA molecular interaction at the vaginal epithelium confirming the role of PAA in bioadhesion of the IBPD once inserted into the posterior fornix of the vagina.

  11. Modelling personal exposure to particulate air pollution: an assessment of time-integrated activity modelling, Monte Carlo simulation & artificial neural network approaches.

    PubMed

    McCreddin, A; Alam, M S; McNabola, A

    2015-01-01

    An experimental assessment of personal exposure to PM10 in 59 office workers was carried out in Dublin, Ireland. 255 samples of 24-h personal exposure were collected in real time over a 28 month period. A series of modelling techniques were subsequently assessed for their ability to predict 24-h personal exposure to PM10. Artificial neural network modelling, Monte Carlo simulation and time-activity based models were developed and compared. The results of the investigation showed that using the Monte Carlo technique to randomly select concentrations from statistical distributions of exposure concentrations in typical microenvironments encountered by office workers produced the most accurate results, based on 3 statistical measures of model performance. The Monte Carlo simulation technique was also shown to have the greatest potential utility over the other techniques, in terms of predicting personal exposure without the need for further monitoring data. Over the 28 month period only a very weak correlation was found between background air quality and personal exposure measurements, highlighting the need for accurate models of personal exposure in epidemiological studies.
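
    A minimal sketch of the Monte Carlo approach described above: concentrations for each microenvironment are drawn from assumed lognormal distributions and combined with a fixed time-activity profile into a 24-h time-weighted exposure. The microenvironments, hours, and distribution parameters are illustrative, not the study's.

    # Minimal sketch of Monte Carlo simulation of 24-h personal PM10 exposure:
    # sample microenvironment concentrations and time-weight them.
    # All distribution parameters and hours are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(7)

    # (hours spent per day, mean and sigma of log-concentration, ug/m3)
    microenvironments = {
        "home":    (14.0, np.log(20.0), 0.5),
        "office":  (8.0,  np.log(15.0), 0.4),
        "commute": (1.5,  np.log(45.0), 0.6),
        "outdoor": (0.5,  np.log(30.0), 0.5),
    }

    n_runs = 10000
    exposures = np.zeros(n_runs)
    for hours, mu, sigma in microenvironments.values():
        conc = rng.lognormal(mean=mu, sigma=sigma, size=n_runs)
        exposures += hours * conc
    exposures /= 24.0                               # time-weighted 24-h average

    print(f"median 24-h PM10 exposure: {np.median(exposures):.1f} ug/m3")
    print(f"95th percentile:           {np.percentile(exposures, 95):.1f} ug/m3")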

  12. Neural Action Fields for Optic Flow Based Navigation: A Simulation Study of the Fly Lobula Plate Network

    PubMed Central

    Borst, Alexander; Weber, Franz

    2011-01-01

    Optic flow based navigation is a fundamental way of visual course control described in many different species including man. In the fly, an essential part of optic flow analysis is performed in the lobula plate, a retinotopic map of motion in the environment. There, the so-called lobula plate tangential cells possess large receptive fields with different preferred directions in different parts of the visual field. Previous studies demonstrated an extensive connectivity between different tangential cells, providing, in principle, the structural basis for their large and complex receptive fields. We present a network simulation of the tangential cells, comprising most of the neurons studied so far (22 on each hemisphere) with all the known connectivity between them. On their dendrite, model neurons receive input from a retinotopic array of Reichardt-type motion detectors. Model neurons exhibit receptive fields much like their natural counterparts, demonstrating that the connectivity between the lobula plate tangential cells indeed can account for their complex receptive field structure. We describe the tuning of a model neuron to particular types of ego-motion (rotation as well as translation around/along a given body axis) by its ‘action field’. As we show for model neurons of the vertical system (VS-cells), each of them displays a different type of action field, i.e., responds maximally when the fly is rotating around a particular body axis. However, the tuning width of the rotational action fields is relatively broad, comparable to the one with dendritic input only. The additional intra-lobula-plate connectivity mainly reduces their translational action field amplitude, i.e., their sensitivity to translational movements along any body axis of the fly. PMID:21305019

  13. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    ERIC Educational Resources Information Center

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…

  14. Artificial neural network modeling of dissolved oxygen in reservoir.

    PubMed

    Chen, Wei-Bo; Liu, Wen-Cheng

    2014-02-01

    The water quality of reservoirs is one of the key factors in the operation and water quality management of reservoirs. Dissolved oxygen (DO) in the water column is essential for microorganisms and a significant indicator of the state of aquatic ecosystems. In this study, two artificial neural network (ANN) models, a back propagation neural network (BPNN) and an adaptive neural-based fuzzy inference system (ANFIS), and a multilinear regression (MLR) model were developed to estimate the DO concentration in the Feitsui Reservoir of northern Taiwan. The input variables of the neural network are water temperature, pH, conductivity, turbidity, suspended solids, total hardness, total alkalinity, and ammonium nitrogen. The performance of the ANN models and the MLR model was assessed through the mean absolute error, root mean square error, and correlation coefficient computed from the measured and model-simulated DO values. The results reveal that the ANN estimation performances were superior to those of the MLR. Comparing the BPNN and ANFIS models through the performance criteria, the ANFIS model is better than the BPNN model for predicting the DO values. Study results show that the neural network, particularly the ANFIS model, is able to predict the DO concentrations with reasonable accuracy, suggesting that the neural network is a valuable tool for reservoir management in Taiwan.

  15. Decision-making differences: novices, experts, and a neural network

    NASA Astrophysics Data System (ADS)

    Manning, David; Bunting, Sam; Leach, John

    2000-04-01

    We investigated the decision making performance of trained radiographers, novice radiographers and a neural network in the detection of fractures. Ground truth was established by the independent agreement of experienced radiologists for 740 single view digitized radiographs of the wrist. The images were categorized into negatives and positives; 520 of these were used to train the back propagation, three layer neural network in a supervised mode, and the remainder were used to create a test bank. The test was presented to 20 novice observers, 12 experienced radiographers trained in the detection of skeletal trauma and then to the trained neural network. ROC Az values for all the decision makers were not significantly different (p > 0.1) but there were significant differences in the values of True Positive and True Negative Fractions. The neural network showed a greater aptitude for distinguishing the normals. By filtering the neural net decisions through the human data we simulated the effect of assisted reporting. Results suggest that if fracture prevalence is very low in a population, a neural network demonstrating high specificity may have utility in reducing the number of images which must be reviewed by human experts.

  16. Results of the neural network investigation

    NASA Astrophysics Data System (ADS)

    Uvanni, Lee A.

    1992-04-01

    Rome Laboratory has designed and implemented a neural network based automatic target recognition (ATR) system under contract F30602-89-C-0079 with Booz, Allen & Hamilton (BAH), Inc., of Arlington, Virginia. The system utilizes a combination of neural network paradigms and conventional image processing techniques in a parallel environment on the IE- 2000 SUN 4 workstation at Rome Laboratory. The IE-2000 workstation was designed to assist the Air Force and Department of Defense to derive the needs for image exploitation and image exploitation support for the late 1990s - year 2000 time frame. The IE-2000 consists of a developmental testbed and an applications testbed, both with the goal of solving real world problems on real-world facilities for image exploitation. To fully exploit the parallel nature of neural networks, 18 Inmos T800 transputers were utilized, in an attempt to provide a near- linear speed-up for each subsystem component implemented on them. The initial design contained three well-known neural network paradigms, each modified by BAH to some extent: the Selective Attention Neocognitron (SAN), the Binary Contour System/Feature Contour System (BCS/FCS), and Adaptive Resonance Theory 2 (ART-2), and one neural network designed by BAH called the Image Variance Exploitation Network (IVEN). Through rapid prototyping, the initial system evolved into a completely different final design, called the Neural Network Image Exploitation System (NNIES), where the final system consists of two basic components: the Double Variance (DV) layer and the Multiple Object Detection And Location System (MODALS). A rapid prototyping neural network CAD Tool, designed by Booz, Allen & Hamilton, was used to rapidly build and emulate the neural network paradigms. Evaluation of the completed ATR system included probability of detections and probability of false alarms among other measures.

  17. Speech transmission index from running speech: A neural network approach

    NASA Astrophysics Data System (ADS)

    Li, F. F.; Cox, T. J.

    2003-04-01

    Speech transmission index (STI) is an important objective parameter concerning speech intelligibility for sound transmission channels. It is normally measured with specific test signals to ensure high accuracy and good repeatability. Measurement with running speech was previously proposed, but accuracy is compromised and hence applications limited. A new approach that uses artificial neural networks to accurately extract the STI from received running speech is developed in this paper. Neural networks are trained on a large set of transmitted speech examples with prior knowledge of the transmission channels' STIs. The networks perform complicated nonlinear function mappings and spectral feature memorization to enable accurate objective parameter extraction from transmitted speech. Validations via simulations demonstrate the feasibility of this new method on a one-net-one-speech extract basis. In this case, accuracy is comparable with normal measurement methods. This provides an alternative to standard measurement techniques, and it is intended that the neural network method can facilitate occupied room acoustic measurements.

  18. Sensor failure detection and recovery by neural networks

    NASA Technical Reports Server (NTRS)

    Guo, Ten-Huei; Nurre, J.

    1991-01-01

    A new method of sensor failure detection, isolation, and accommodation using a neural network approach is described. In a propulsion system such as the Space Shuttle Main Engine, the number of sensor measurements is usually much larger than the order of the system, and this built-in redundancy of the sensors can be utilized to detect and correct sensor failure problems. The goal of the proposed scheme is to train a neural network to identify the sensor whose measurement is not consistent with the other sensor outputs. Another neural network is trained to recover the value of critical variables when their measurements fail. Techniques for training the network with a limited amount of data are developed. The proposed scheme is tested using simulated data from the Space Shuttle Main Engine (SSME) in-flight sensor group.

  19. Vein matching using artificial neural network in vein authentication systems

    NASA Astrophysics Data System (ADS)

    Noori Hoshyar, Azadeh; Sulaiman, Riza

    2011-10-01

    Personal identification technology for security systems is developing rapidly. Traditional authentication modes such as keys, passwords, and cards are not safe enough because they can be stolen or easily forgotten. Biometrics, as a developed technology, has been applied to a wide range of systems. According to different researchers, the vein is a good biometric candidate for authentication systems, among other traits such as fingerprint, hand geometry, voice, and DNA. Vein authentication systems can be designed with different methodologies, all of which include a matching stage that is crucial for the final verification of the system. A neural network is an effective methodology for matching and recognizing individuals in authentication systems. Therefore, this paper explains and implements the neural network methodology for a finger vein authentication system. The neural network is trained in Matlab to match the vein features of the authentication system. The network simulation shows a matching accuracy of 95%, which is a good performance for the matching stage of an authentication system.

  20. Recognition of Telugu characters using neural networks.

    PubMed

    Sukhaswami, M B; Seetharamulu, P; Pujari, A K

    1995-09-01

    The aim of the present work is to recognize printed and handwritten Telugu characters using artificial neural networks (ANNs). Earlier work on recognition of Telugu characters has been done using conventional pattern recognition techniques. We make an initial attempt here of using neural networks for recognition with the aim of improving upon earlier methods which do not perform effectively in the presence of noise and distortion in the characters. The Hopfield model of neural network working as an associative memory is chosen for recognition purposes initially. Due to limitation in the capacity of the Hopfield neural network, we propose a new scheme named here as the Multiple Neural Network Associative Memory (MNNAM). The limitation in storage capacity has been overcome by combining multiple neural networks which work in parallel. It is also demonstrated that the Hopfield network is suitable for recognizing noisy printed characters as well as handwritten characters written by different "hands" in a variety of styles. Detailed experiments have been carried out using several learning strategies and results are reported. It is shown here that satisfactory recognition is possible using the proposed strategy. A detailed preprocessing scheme of the Telugu characters from digitized documents is also described.
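
    A minimal NumPy sketch of the Hopfield associative memory that underlies the MNNAM scheme: Hebbian storage of bipolar patterns and iterative sign updates recall a stored pattern from a noisy probe. Random patterns stand in for the digitized Telugu characters.

    # Minimal sketch of a Hopfield associative memory: Hebbian storage of
    # bipolar patterns and synchronous sign updates to recall a stored
    # pattern from a noisy copy.
    import numpy as np

    rng = np.random.default_rng(8)
    n, n_patterns = 100, 5
    patterns = rng.choice([-1, 1], size=(n_patterns, n))

    # Hebbian weight matrix with zero diagonal.
    W = (patterns.T @ patterns) / n
    np.fill_diagonal(W, 0.0)

    # Corrupt a stored pattern by flipping 15% of its bits.
    probe = patterns[0].copy()
    flip = rng.choice(n, size=15, replace=False)
    probe[flip] *= -1

    state = probe.copy()
    for _ in range(10):                       # synchronous updates
        state = np.where(W @ state >= 0, 1, -1)

    overlap = np.mean(state == patterns[0])
    print(f"fraction of bits recovered: {overlap:.2f}")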

  1. Neural Networks for Dynamic Flight Control

    DTIC Science & Technology

    1993-12-01

    uses the Adaline (22) model for development of the neural networks. Neural Graphics and other AFIT applications use a slightly different model. The ... primary difference in the Nguyen application is that the Adaline uses the nonlinear function f(a) = tanh(a) where standard backprop uses the sigmoid

  2. Constructive Autoassociative Neural Network for Facial Recognition

    PubMed Central

    Fernandes, Bruno J. T.; Cavalcanti, George D. C.; Ren, Tsang I.

    2014-01-01

    Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature. PMID:25542018

  3. A neural network architecture for data classification.

    PubMed

    Lezoray, O

    2001-02-01

    This article presents a neural network architecture designed for the classification of data distributed among a high number of classes. A significant gain in the global classification rate can be obtained by using our architecture, which is based on a set of small neural networks, each one discriminating between only two classes. The specialization of each neural network simplifies their structure and improves the classification. Moreover, the learning step automatically determines the number of hidden neurons. The discussion is illustrated by tests on databases from the UCI machine learning database repository. The experimental results show that this architecture can achieve faster learning, simpler neural networks, and improved classification performance.
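
    A hedged sketch of the pairwise decomposition the article describes, approximated here with scikit-learn's one-vs-one wrapper around a small two-class MLP; the hidden-layer size and the digits dataset are illustrative choices, not the article's UCI setup.

    # Minimal sketch: one small two-class network per pair of classes, using
    # scikit-learn's one-vs-one wrapper around a small MLP.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.multiclass import OneVsOneClassifier
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # 45 tiny pairwise networks for the 10 digit classes.
    base = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
    ovo = OneVsOneClassifier(base)
    ovo.fit(X_tr, y_tr)

    print(f"number of pairwise networks: {len(ovo.estimators_)}")
    print(f"test accuracy: {ovo.score(X_te, y_te):.3f}")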

  4. Imbibition well stimulation via neural network design

    DOEpatents

    Weiss, William

    2007-08-14

    A method for stimulation of hydrocarbon production via imbibition by utilization of surfactants. The method includes use of fuzzy logic and neural network architecture constructs to determine surfactant use.

  5. Temporal Coding in Realistic Neural Networks

    NASA Astrophysics Data System (ADS)

    Gerasyuta, S. M.; Ivanov, D. V.

    1995-10-01

    A modification of a realistic neural network model is proposed. The model differs from the Hopfield model in that the synaptic efficacies contain two characteristic contributions: a short-time contribution determined by the chemical reactions in the synapses, and a long-time contribution corresponding to structural changes of the synaptic contacts. An approximate solution of the model equations is obtained, which allows the postsynaptic potential to be calculated as a function of the input. Using this approximate solution, the behaviour of the postsynaptic potential as a function of time is described for different temporal sequences of stimuli; different temporal sequences of the same stimuli produce different outputs. These properties of temporal coding can be exploited as a recognition element capable of being selectively tuned to different inputs.

  6. A neural network for bounded linear programming

    SciTech Connect

    Culioli, J.C.; Protopopescu, V.; Britton, C.; Ericson, N. )

    1989-01-01

    The purpose of this paper is to describe a neural network implementation of an algorithm recently designed at ORNL to solve the Transportation and the Assignment Problems, and, more generally, any explicitly bounded linear program. 9 refs.

  7. Blood glucose prediction using neural network

    NASA Astrophysics Data System (ADS)

    Soh, Chit Siang; Zhang, Xiqin; Chen, Jianhong; Raveendran, P.; Soh, Phey Hong; Yeo, Joon Hock

    2008-02-01

    We used a neural network for blood glucose level determination in this study. The data set was collected using a non-invasive blood glucose monitoring system with six laser diodes, each operating at a distinct near-infrared wavelength between 1500 nm and 1800 nm. The neural network is used to determine the blood glucose level of one individual who participated in an oral glucose tolerance test (OGTT) session. Partial least squares regression is also used for blood glucose level determination for the purpose of comparison with the neural network model. The neural network model performs better in the prediction of blood glucose level than the partial least squares model.

  8. Using neural networks in software repositories

    NASA Technical Reports Server (NTRS)

    Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.

    1992-01-01

    The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.

  9. Limitations of opto-electronic neural networks

    NASA Technical Reports Server (NTRS)

    Yu, Jeffrey; Johnston, Alan; Psaltis, Demetri; Brady, David

    1989-01-01

    Consideration is given to the limitations of implementing neurons, weights, and connections in neural networks in electronics and optics. It is shown that the advantages of each technology are best utilized when electronically fabricated neurons are used and a combination of optics and electronics is employed for the weights and connections. The relationship between the types of neural networks being constructed and the choice of technologies to implement the weights and connections is examined.

  10. Predicting Car Production using a Neural Network

    DTIC Science & Technology

    2003-04-24

    World Almanac Education Group, 2003 [8] E. Petroutsos, Mastering Visual Basic .NET, SYBEX Inc., 2002 [9] D. E. Rumelhart, J. L. McClelland, Parallel...In this example, 100,000 cycles (epochs) were used to train it. The initial weights were randomly selected from values between 1 and -1. Visual Basic .NET was used to program the neural network [8]. The neural network algorithm followed the steps outlined in [9]. As stated above, a 3-layer

  11. Neural network for image segmentation

    NASA Astrophysics Data System (ADS)

    Skourikhine, Alexei N.; Prasad, Lakshman; Schlei, Bernd R.

    2000-10-01

    Image analysis is an important requirement of many artificial intelligence systems. Though great effort has been devoted to inventing efficient algorithms for image analysis, there is still much work to be done. It is natural to turn to mammalian vision systems for guidance because they are the best known performers of visual tasks. The pulse-coupled neural network (PCNN) model of the cat visual cortex has proven to have interesting properties for image processing. This article describes the application of the PCNN to the processing of images of heterogeneous materials; specifically, the PCNN is applied to image denoising and image segmentation. Our results show that PCNNs do well at segmentation if we perform image smoothing prior to segmentation. We use the PCNN for both smoothing and segmentation. Combining smoothing and segmentation enables us to eliminate the PCNN's sensitivity to the setting of the various PCNN parameters, whose optimal selection can be difficult and can vary even for the same problem. This approach makes image processing based on PCNNs more automatic in our application and also results in better segmentation.
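
    A compact sketch of one common PCNN formulation (feeding and linking compartments coupled to a dynamic threshold); the decay constants, coupling kernel and the smoothing step used by the authors are not given in this record, so the parameter values below are placeholders.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def pcnn_segment(img, n_iter=20, beta=0.2,
                         aF=0.9, aL=0.8, aT=0.85, vF=0.1, vL=0.2, vT=20.0):
            """Standard pulse-coupled neural network iteration on a 2-D image (values roughly in [0, 1])."""
            F = np.zeros_like(img, dtype=float)   # feeding compartment
            L = np.zeros_like(F)                  # linking compartment
            T = np.ones_like(F)                   # dynamic threshold
            Y = np.zeros_like(F)                  # binary pulse output
            first_fire = np.zeros_like(F)         # iteration at which each pixel first fires
            for t in range(n_iter):
                neigh = uniform_filter(Y, size=3)             # local coupling from neighbouring pulses
                F = aF * F + vF * neigh + img
                L = aL * L + vL * neigh
                U = F * (1.0 + beta * L)                      # internal activity
                Y = (U > T).astype(float)                     # neurons pulse when activity exceeds threshold
                T = aT * T + vT * Y                           # threshold jumps after firing, then decays
                first_fire = np.where((first_fire == 0) & (Y > 0), t + 1, first_fire)
            return first_fire                                 # pixels that fire together form segments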

  12. Learning and coding in biological neural networks

    NASA Astrophysics Data System (ADS)

    Fiete, Ila Rani

    How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebrafinch song. Simulation and
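
    A toy illustration of the reward-correlation principle summarized above (noisy exploration correlated with a global reward signal performs stochastic gradient ascent on the reward), reduced here to perturbations of a single linear readout; the thesis itself works with conductance-based spiking neurons and activity perturbations, so this only shows the general shape of such an update rule.

        import numpy as np

        rng = np.random.default_rng(1)
        w = np.zeros(10)                        # plastic weights of a single linear readout
        x = rng.normal(size=10)                 # fixed presynaptic activity pattern
        w_target = rng.normal(size=10)          # weights that would produce the desired output
        eta, sigma = 0.05, 0.1

        def reward(weights):
            # reward is highest when the readout matches the target readout for input x
            return -float(((weights - w_target) @ x) ** 2)

        baseline = reward(w)
        for step in range(2000):
            noise = sigma * rng.normal(size=10)       # noisy exploration (weight perturbation)
            r = reward(w + noise)
            w += eta * (r - baseline) * noise         # correlate perturbation with reward: stochastic gradient ascent
            baseline = 0.9 * baseline + 0.1 * r       # slowly adapting reward baseline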

  13. Logarithmic learning for generalized classifier neural network.

    PubMed

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network has been introduced as an efficient classifier. Unless the initial smoothing parameter value is close to the optimal one, however, it suffers from a convergence problem and requires quite a long time to converge. In this work, a logarithmic learning approach is proposed to overcome this problem. The proposed method uses a logarithmic cost function instead of the squared error; minimizing this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets, and the performance of the logarithmic-learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function used in the generalized classifier neural network, the proposed logarithmic cost function and its derivative take continuous values, which makes it possible to exploit the fast convergence of logarithmic learning. Owing to this fast convergence, training time is reduced by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, the proposed method not only provides a solution to the time requirement problem of the generalized classifier neural network but may also improve the classification accuracy, and it can be considered an efficient way of reducing the training-time burden of this classifier.
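
    The exact cost function of the paper is not reproduced in this record, but the contrast it draws can be illustrated with a generic logarithmic (cross-entropy-style) loss versus the squared error; for the same residual, the logarithmic cost yields a much larger gradient when the prediction is far from the target, which is the usual mechanism behind faster convergence.

        import numpy as np

        def squared_error(y_true, y_pred):
            return 0.5 * (y_true - y_pred) ** 2

        def logarithmic_cost(y_true, y_pred, eps=1e-12):
            # generic cross-entropy-style cost for targets and outputs in (0, 1)
            y_pred = np.clip(y_pred, eps, 1.0 - eps)
            return -(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))

        print(squared_error(1.0, 0.6), logarithmic_cost(1.0, 0.6))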

  14. Neural networks for segmentation, tracking, and identification

    NASA Astrophysics Data System (ADS)

    Rogers, Steven K.; Ruck, Dennis W.; Priddy, Kevin L.; Tarr, Gregory L.

    1992-09-01

    The main thrust of this paper is to encourage the use of neural networks to process raw data for subsequent classification. This article addresses neural network techniques for processing raw pixel information. For this paper the definition of neural networks includes conventional artificial neural networks such as multilayer perceptrons and also biologically inspired processing techniques. Previously, we have successfully used the biologically inspired Gabor transform to process raw pixel information and segment images. In this paper we extend those ideas to both segment and track objects in multiframe sequences. It is also desirable for the neural network processing the data to learn features for subsequent recognition. A common first step for processing raw data is to transform the data and use the transform coefficients as features for recognition. For example, handwritten English characters become linearly separable in the feature space of the low-frequency Fourier coefficients. Much of human visual perception can be modelled by assuming that the human visual system uses the low-frequency Fourier coefficients as its feature space. The optimum linear transform, with respect to reconstruction, is the Karhunen-Loeve transform (KLT). It has been shown that some neural network architectures can compute approximations to the KLT. The KLT coefficients can be used for recognition as well as for compression. We tested the use of the KLT on the problem of interfacing a nonverbal patient to a computer. The KLT uses an optimal basis set for object reconstruction; for object recognition, however, the KLT may not be optimal.
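
    In practice the KLT step described above is a principal-component projection of the raw pixel data; a minimal sketch (not the authors' code) of computing KLT coefficients for use as recognition features is:

        import numpy as np

        def klt_features(images, k=16):
            """Project flattened images onto their top-k Karhunen-Loeve (PCA) basis vectors."""
            X = images.reshape(len(images), -1).astype(float)
            mean = X.mean(axis=0)
            Xc = X - mean
            # The right singular vectors of the centred data matrix form the KL basis.
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            basis = Vt[:k]
            coeffs = Xc @ basis.T            # KLT coefficients, usable as recognition features
            return coeffs, basis, mean

        # Example call (image_stack is an (N, H, W) array of raw pixel data):
        # coeffs, basis, mean = klt_features(image_stack, k=16)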

  15. Delayed switching applied to memristor neural networks

    NASA Astrophysics Data System (ADS)

    Wang, Frank Z.; Helian, Na; Wu, Sining; Yang, Xiao; Guo, Yike; Lim, Guan; Rashid, Md Mamunur

    2012-04-01

    Magnetic flux and electric charge are linked in a memristor. We reported recently that a memristor has a peculiar effect in which the switching takes place with a time delay because a memristor possesses a certain inertia. This effect was named the "delayed switching effect." In this work, we elaborate on the importance of delayed switching in a brain-like computer using memristor neural networks. The effect is used to control the switching of a memristor synapse between two neurons that fire together (the Hebbian rule). A theoretical formula is found, and the design is verified by a simulation. We have also built an experimental setup consisting of electronic memristive synapses and electronic neurons.

  16. Delayed switching applied to memristor neural networks

    SciTech Connect

    Wang, Frank Z.; Yang Xiao; Lim Guan; Helian Na; Wu Sining; Guo Yike; Rashid, Md Mamunur

    2012-04-01

    Magnetic flux and electric charge are linked in a memristor. We reported recently that a memristor has a peculiar effect in which the switching takes place with a time delay because a memristor possesses a certain inertia. This effect was named the "delayed switching effect." In this work, we elaborate on the importance of delayed switching in a brain-like computer using memristor neural networks. The effect is used to control the switching of a memristor synapse between two neurons that fire together (the Hebbian rule). A theoretical formula is found, and the design is verified by a simulation. We have also built an experimental setup consisting of electronic memristive synapses and electronic neurons.

  17. Precision of a radial basis function neural network tracking method

    NASA Technical Reports Server (NTRS)

    Hanan, J.; Zhou, H.; Chao, T. H.

    2003-01-01

    The precision of a radial basis function (RBF) neural network based tracking method has been assessed against real targets, using traditional frame-by-frame measurements from the recorded data set as the reference. The results show the potential limit of the technique and reveal intricacies associated with empirical data that are not necessarily observed in simulations.

  18. Evaluation of the efficiency of artificial neural networks for genetic value prediction.

    PubMed

    Silva, G N; Tomaz, R S; Sant'Anna, I C; Carneiro, V Q; Cruz, C D; Nascimento, M

    2016-03-28

    Artificial neural networks have shown great potential when applied to breeding programs. In this study, we propose the use of artificial neural networks as a viable alternative to conventional prediction methods and conduct a thorough evaluation of their efficiency with respect to the prediction of breeding values. We considered eight simulated scenarios and, for the purpose of genetic value prediction, used seven statistical parameters in addition to the phenotypic mean as inputs to a network designed as a multilayer perceptron. After an evaluation of different network configurations, the results demonstrated the superiority of neural networks compared to estimation procedures based on linear models, and indicated high predictive accuracy and network efficiency.

  19. Neural-Network Object-Recognition Program

    NASA Technical Reports Server (NTRS)

    Spirkovska, L.; Reid, M. B.

    1993-01-01

    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.

  20. Fast curve fitting using neural networks

    NASA Astrophysics Data System (ADS)

    Bishop, C. M.; Roach, C. M.

    1992-10-01

    Neural networks provide a new tool for the fast solution of repetitive nonlinear curve fitting problems. In this article we introduce the concept of a neural network, and we show how such networks can be used for fitting functional forms to experimental data. The neural network algorithm is typically much faster than conventional iterative approaches. In addition, further substantial improvements in speed can be obtained by using special purpose hardware implementations of the network, thus making the technique suitable for use in fast real-time applications. The basic concepts are illustrated using a simple example from fusion research, involving the determination of spectral line parameters from measurements of B iv impurity radiation in the COMPASS-C tokamak.
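
    A schematic version of the approach, assuming scikit-learn: a network is trained on synthetic line profiles with known parameters and then maps new measurements to fit parameters in a single forward pass. The Gaussian line shape, network size and noise level below are illustrative and are not those of the COMPASS-C analysis.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        x = np.linspace(-1.0, 1.0, 64)

        def gaussian_line(amp, centre, width):
            return amp * np.exp(-0.5 * ((x - centre) / width) ** 2)

        # Synthetic training set: noisy line profiles -> (amplitude, centre, width)
        params = np.column_stack([rng.uniform(0.5, 2.0, 5000),
                                  rng.uniform(-0.5, 0.5, 5000),
                                  rng.uniform(0.05, 0.3, 5000)])
        profiles = np.array([gaussian_line(*p) for p in params])
        profiles += 0.01 * rng.normal(size=profiles.shape)        # simulated measurement noise

        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500)
        net.fit(profiles, params)
        # After training, net.predict(new_profiles) returns fit parameters in one forward pass,
        # replacing an iterative nonlinear least-squares fit for each new measurement.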

  1. A neural network for visual pattern recognition

    SciTech Connect

    Fukushima, K.

    1988-03-01

    A modeling approach, which is a synthetic approach using neural network models, continues to gain importance. In the modeling approach, the authors study how to interconnect neurons to synthesize a brain model, i.e., a network with the same functions and abilities as the brain. The relationship between modeling neural networks and neurophysiology resembles that between theoretical physics and experimental physics. Modeling takes a synthetic approach, while neurophysiology or psychology takes an analytical approach. Modeling neural networks is useful in explaining the brain and also in engineering applications: it brings the results of neurophysiological and psychological research to engineering applications in the most direct way possible. This article discusses a neural network model thus obtained, a model with selective attention in visual pattern recognition.

  2. Performance and complementarity of two systemic models (reservoir and neural networks) used to simulate spring discharge and piezometry for a karst aquifer

    NASA Astrophysics Data System (ADS)

    Kong-A-Siou, Line; Fleury, Perrine; Johannet, Anne; Borrell Estupina, Valérie; Pistre, Séverin; Dörfliger, Nathalie

    2014-11-01

    Karst aquifers can provide previously untapped freshwater resources and have thus generated considerable interest among stakeholders involved in the water supply sector. Here we compare the capacity of two systemic models to simulate the discharge and piezometry of a karst aquifer. Systemic models have the advantage of allowing the study of heterogeneous, complex karst systems without relying on extensive geographical and meteorological datasets. The effectiveness and complementarity of the two models are evaluated for a range of hydrologic conditions and for three methods to estimate evapotranspiration (Monteith, a priori ET, and effective rainfall). The first model is a reservoir model (referred to as VENSIM, after the software used), which is designed with just one reservoir so as to be as parsimonious as possible. The second model is a neural network (NN) model. The models are designed to simulate the rainfall-runoff and rainfall-water level relations in a karst conduit. The Lez aquifer, a karst aquifer located near the city of Montpellier in southern France and a critical water resource, was chosen to compare the two models. Simulated discharge and water level were compared after completing model design and calibration. The results suggest that the NN model is more effective at incorporating the nonlinearity of the karst spring for extreme events (extreme low and high water levels), whereas VENSIM provides a better representation of intermediate-amplitude water level fluctuations. VENSIM is sensitive to the method used to estimate evapotranspiration, whereas the NN model is not. Given that the NN model performs better for extreme events, it is better for operational applications (predicting floods or determining water pumping height). VENSIM, on the other hand, seems more appropriate for representing the hydrologic state of the basin during intermediate periods, when several effects are at work: rain, evapotranspiration, development of vegetation, etc. A

  3. 3-D flame temperature field reconstruction with multiobjective neural network

    NASA Astrophysics Data System (ADS)

    Wan, Xiong; Gao, Yiqing; Wang, Yuanmei

    2003-02-01

    A novel 3-D temperature field reconstruction method is proposed in this paper, based on multiwavelength thermometry and Hopfield neural network computed tomography. A mathematical model of multiwavelength thermometry is formulated, and a neural network algorithm based on multiobjective optimization is developed. Through computer simulation, the reconstruction results of the new method are discussed in detail and compared with those of the algebraic reconstruction technique (ART) and the filtered back-projection algorithm (FBP); the study shows that the new method consistently gives the best reconstruction results. Finally, the temperature distribution of a cross-section of a four-peak candle flame is reconstructed with the novel method.

  4. Artificial Astrocytes Improve Neural Network Performance

    PubMed Central

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NNs with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to the astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance and establish the concept of Artificial Neuron-Glia Networks, a novel concept in Artificial Intelligence with implications for computational science as well as for the understanding of brain function. PMID:21526157

  5. Artificial astrocytes improve neural network performance.

    PubMed

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NNs with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to the astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance and establish the concept of Artificial Neuron-Glia Networks, a novel concept in Artificial Intelligence with implications for computational science as well as for the understanding of brain function.

  6. Learning-induced synchronization and plasticity of a developing neural network.

    PubMed

    Chao, T C; Chen, C M

    2005-12-01

    Learning-induced synchronization of a neural network at various developing stages is studied by computer simulations using a pulse-coupled neural network model in which the neuronal activity is simulated by a one-dimensional map. Two types of Hebbian plasticity rules are investigated and their differences are compared. For both models, our simulations show a logarithmic increase in the synchronous firing frequency of the network with the culturing time of the neural network. This result is consistent with recent experimental observations. To investigate how to control the synchronization behavior of a neural network after learning, we compare the occurrence of synchronization for four networks with different designed patterns under the influence of an external signal. The effect of such a signal on the network activity highly depends on the number of connections between neurons. We discuss the synaptic plasticity and enhancement effects for a random network after learning at various developing stages.

  7. Hardware implementation of stochastic spiking neural networks.

    PubMed

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking neural networks, the latest generation of artificial neural networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that takes this probabilistic nature into account. The advantage of the proposed implementation is that it is fully digital and can therefore be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
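
    A software analogue of the probabilistic firing mechanism described (the actual work is a digital FPGA design); here the membrane potential sets a firing probability that is compared against a random number each time step, which is the essence of the stochastic-computing style of spiking. All names and parameter values are illustrative.

        import numpy as np

        def stochastic_spiking_neuron(inputs, leak=0.9, gain=1.0, seed=0):
            """Leaky integrator whose firing decision is probabilistic in the membrane potential."""
            rng = np.random.default_rng(seed)
            v = 0.0
            spikes = []
            for i in inputs:
                v = leak * v + gain * i                   # leaky integration of the input
                p_fire = 1.0 / (1.0 + np.exp(-v))         # firing probability grows with the potential
                fired = rng.random() < p_fire             # compare with a random number, as in stochastic logic
                spikes.append(int(fired))
                if fired:
                    v = 0.0                               # reset after a spike
            return spikes

        # spike_train = stochastic_spiking_neuron(np.full(100, 0.2))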

  8. Geoacoustic model inversion using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Benson, Jeremy; Chapman, N. Ross; Antoniou, Andreas

    2000-12-01

    An inversion technique using artificial neural networks (ANNs) is described for estimating geoacoustic model parameters of the ocean bottom and information about the sound source from acoustic field data. The method is applied to transmission loss data from the TRIAL SABLE experiment that was carried out in shallow water off Nova Scotia. The inversion is designed to incorporate the a priori information available for the site in order to improve the estimation accuracy. The inversion scheme involves training feedforward ANNs to estimate the geoacoustic and geometric parameters using simulated input/output training pairs generated with a forward acoustic propagation model. The inputs to the ANNs are the spectral components of the transmission loss at each sensor of a vertical hydrophone array for the two lowest frequencies that were transmitted in the experiment, 35 and 55 Hz. The output is the set of environmental model parameters, both geometric and geoacoustic, corresponding to the received field. In order to decrease the training time, a separate network was trained for each parameter. The errors for the parallel estimation are 10% lower than for those obtained using a single network to estimate all the parameters simultaneously, and the training time is decreased by a factor of six. When the experimental data are presented to the ANNs the geometric parameters, such as source range and depth, are estimated with a high accuracy. Geoacoustic parameters, such as the compressional speed in the sediment and the sediment thickness, are found with a moderate accuracy.

  9. Optical neural stimulation modeling on degenerative neocortical neural networks

    NASA Astrophysics Data System (ADS)

    Zverev, M.; Fanjul-Vélez, F.; Salas-García, I.; Arce-Diego, J. L.

    2015-07-01

    Neurodegenerative diseases usually appear at an advanced age. Medical advances make people live longer and, as a consequence, the number of people affected by neurodegenerative diseases continuously grows. There is still no cure for these diseases, but several brain stimulation techniques have been proposed to improve patients' condition. One of them is Optical Neural Stimulation (ONS), which is based on the application of optical radiation over specific brain regions. The outer cerebral zones can be stimulated noninvasively, without the common drawbacks associated with surgical procedures. This work focuses on the analysis of ONS effects on the stimulated neurons in order to determine their influence on neuronal activity. For this purpose a neural network model has been employed. The results show the behavior of the neural network when the stimulation is provided by different optical radiation sources, and constitute a first approach to adjusting the optical source parameters for stimulating specific neocortical areas.

  10. A mean field neural network for hierarchical module placement

    NASA Technical Reports Server (NTRS)

    Unaltuna, M. Kemal; Pitchumani, Vijay

    1992-01-01

    This paper proposes a mean field neural network for the two-dimensional module placement problem. An efficient coding scheme with only O(N log N) neurons is employed where N is the number of modules. The neurons are evolved in groups of N in log N iteration steps such that the circuit is recursively partitioned in alternating vertical and horizontal directions. In our simulations, the network was able to find optimal solutions to all test problems with up to 128 modules.

  11. Sequential state generation by model neural networks.

    PubMed Central

    Kleinfeld, D

    1986-01-01

    Sequential patterns of neural output activity form the basis of many biological processes, such as the cyclic pattern of outputs that control locomotion. I show how such sequences can be generated by a class of model neural networks that make defined sets of transitions between selected memory states. Sequence-generating networks depend upon the interplay between two sets of synaptic connections. One set acts to stabilize the network in its current memory state, while the second set, whose action is delayed in time, causes the network to make specified transitions between the memories. The dynamic properties of these networks are described in terms of motion along an energy surface. The performance of the networks, both with intact connections and with noisy or missing connections, is illustrated by numerical examples. In addition, I present a scheme for the recognition of externally generated sequences by these networks. PMID:3467316

  12. Homeostatic Scaling of Excitability in Recurrent Neural Networks

    PubMed Central

    Remme, Michiel W. H.; Wadman, Wytse J.

    2012-01-01

    Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which neurons reside. However, most neurons are embedded in recurrent networks, which require a delicate balance between excitation and inhibition to maintain network stability. This balance could be disrupted when neurons independently adjust their intrinsic excitability. Here, we study the functioning of activity-dependent homeostatic scaling of intrinsic excitability (HSE) in a recurrent neural network. Using both simulations of a recurrent network consisting of excitatory and inhibitory neurons that implement HSE, and a mean-field description of adapting excitatory and inhibitory populations, we show that the stability of such adapting networks critically depends on the relationship between the adaptation time scales of both neuron populations. In a stable adapting network, HSE can keep all neurons functioning within their dynamic range, while the network is undergoing several (patho)physiologically relevant types of plasticity, such as persistent changes in external drive, changes in connection strengths, or the loss of inhibitory cells from the network. However, HSE cannot prevent the unstable network dynamics that result when, due to such plasticity, recurrent excitation in the network becomes too strong compared to feedback inhibition. This suggests that keeping a neural network in a stable and functional state requires the coordination of distinct homeostatic mechanisms that operate not only by adjusting neural excitability, but also by controlling network connectivity. PMID:22570604

  13. Using neural networks to predict incinerator emissions: A case study

    SciTech Connect

    Heitz, M.W.; George, B.; Welp, J.E.

    1997-12-31

    This paper presents a case study applying a neural network to predict incinerator emissions. A neural network is a program which is used to develop relationships between process operating variables (input data) and emissions (output data). Recent Federal 503 Regulations for Sewage Sludge Incinerators have required the installation of total hydrocarbon (THC) or carbon monoxide (CO) continuous emission monitoring systems (CEMS) to assure emission compliance. These systems are expensive to install, operate, and maintain. An investigation was performed to develop a simulation model using an artificial intelligence program with the goal of improved operations and reduced air emissions. This paper presents methods used for data collection, data preprocessing, and network training, as well as the architecture and weights of the final network. The network application has improved incinerator operations and limited emissions by determining acceptable ranges of operating variables. Neural networks have been found to accurately predict incinerator emissions. Their use would reduce the burden of high monitoring and compliance costs associated with CEMS. Neural networks may be applied to other environmental monitoring and control processes.

  14. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    PubMed Central

    Jie, Shao

    2014-01-01

    A model based on an improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with memory effects. In this model, the hidden-layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions. The error curves of the sum of squared error (SSE), varying with the number of hidden neurons and the iteration step, are studied to determine the number of hidden-layer neurons. Simulation results for a half-bridge class-D power amplifier (CDPA) with two-tone and broadband input signals show that the proposed behavioral model can reconstruct the CDPA system accurately and depicts its memory effect well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance. PMID:25054172

  15. Retrieval of atmospheric properties with radiometric measurements using neural network

    NASA Astrophysics Data System (ADS)

    Chakraborty, Rohit; Maitra, Animesh

    2016-11-01

    The microwave radiometer is an effective instrument for monitoring the atmosphere continuously in different weather conditions. It measures brightness temperatures at different frequency bands, which are subjected to standard retrieval methods to obtain real-time profiles of various atmospheric parameters such as temperature and humidity. However, the retrieval techniques used with the radiometer have to adapt to changing weather conditions and location. In the present study, three retrieval techniques have been used to obtain temperature and relative humidity profiles from brightness temperatures: piecewise linear regression, a feed-forward neural network, and a neural network with back propagation. The retrieved results are compared with radiosonde observations using correlation analysis and error distributions. The analysis reveals that the neural network with back propagation is the most accurate of the three retrieval methods utilized in this study.

  16. Neural network tracking and extension of positive tracking periods

    NASA Technical Reports Server (NTRS)

    Hanan, Jay C.; Chao, Tien-Hsin; Moreels, Pierre

    2004-01-01

    Feature detectors have been considered for the role of supplying additional information to a neural network tracker. A feature detector focuses on areas of the image with significant information; if a picture says a thousand words, the feature detectors are looking for the key phrases (keypoints). These keypoints are rotationally invariant and may be matched across frames. Application of these advanced feature detectors to the neural network tracking system at JPL has promising potential. As part of an ongoing program, an advanced feature detector was tested for augmentation of a neural network based tracker. The advanced feature detector extended tracking periods in test sequences including aircraft tracking, rover tracking, and a simulated Martian landing. Future directions of research are also discussed.

  17. A neural network approach to complete coverage path planning.

    PubMed

    Yang, Simon X; Luo, Chaomin

    2004-02-01

    Complete coverage path planning requires the robot path to cover every part of the workspace, which is an essential issue in cleaning robots and many other robotic applications such as vacuum robots, painter robots, land mine detectors, lawn mowers, automated harvesters, and window cleaners. In this paper, a novel neural network approach is proposed for complete coverage path planning with obstacle avoidance of cleaning robots in nonstationary environments. The dynamics of each neuron in the topologically organized neural network is characterized by a shunting equation derived from Hodgkin and Huxley's (1952) membrane equation. There are only local lateral connections among neurons. The robot path is autonomously generated from the dynamic activity landscape of the neural network and the previous robot location. The proposed model algorithm is computationally simple. Simulation results show that the proposed model is capable of planning collision-free complete coverage robot paths.
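
    A stripped-down sketch of shunting-equation dynamics on a grid of neurons for coverage planning: uncovered cells provide excitatory input, obstacles provide inhibitory input, and the robot repeatedly moves to the neighbouring cell with the highest activity. The constants, grid size and movement rule below are placeholders, not the authors' parameter choices.

        import numpy as np

        A, B, D, dt = 10.0, 1.0, 1.0, 0.05          # shunting-equation constants (placeholder values)

        def step_activity(x, excite, inhibit):
            """One Euler step of dx/dt = -A*x + (B - x)*S_e - (D + x)*S_i for every grid neuron."""
            lateral = np.zeros_like(x)
            lateral[1:, :] += np.maximum(x[:-1, :], 0)  # only positive activity propagates to neighbours
            lateral[:-1, :] += np.maximum(x[1:, :], 0)
            lateral[:, 1:] += np.maximum(x[:, :-1], 0)
            lateral[:, :-1] += np.maximum(x[:, 1:], 0)
            S_e = excite + lateral
            return x + dt * (-A * x + (B - x) * S_e - (D + x) * inhibit)

        activity = np.zeros((10, 10))
        covered = np.zeros((10, 10), dtype=bool)
        obstacles = np.zeros((10, 10))
        obstacles[4, 2:6] = 100.0                    # inhibitory input from obstacle cells
        pos = (0, 0)
        for _ in range(200):
            covered[pos] = True
            excite = np.where(covered, 0.0, 10.0)    # uncovered cells keep attracting the robot
            for _ in range(5):
                activity = step_activity(activity, excite, obstacles)
            r, c = pos
            neighbours = [(r + dr, c + dc) for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                          if 0 <= r + dr < 10 and 0 <= c + dc < 10]
            pos = max(neighbours, key=lambda p: activity[p])   # move to the most active neighbouring cell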

  18. Application of Improved SOM Neural Network in Anomaly Detection

    NASA Astrophysics Data System (ADS)

    Jiang, Xueying; Liu, Kean; Yan, Jiegou; Chen, Wenhui

    To address the false alarm rate, false negative rate, training time and other issues of the SOM neural network algorithm, the authors give an improved anomaly detection SOM algorithm, FPSOM, based on the introduction of a learning rate that lets the network adaptively learn the original sample space and better reflect the state of the original data. Combining this with an artificial neural network, the authors also give an intelligent detection model and its training module, describe the main implementation of the FPSOM neural network algorithm, and finally carry out simulation experiments on the KDD CUP data sets. The experiments show that the new algorithm is better than SOM: it greatly shortens the training time, effectively improves the detection rate and reduces the false positive rate.

  19. Applications of neural networks in chemical engineering: Hybrid systems

    SciTech Connect

    Ferrada, J.J.; Osborne-Lee, I.W. ); Grizzaffi, P.A. )

    1990-01-01

    Expert systems are known to be useful in capturing expertise and applying knowledge to chemical engineering problems such as diagnosis, process control, process simulation, and process advisory. However, expert system applications are traditionally limited to knowledge domains that are heuristic and involve only simple mathematics. Neural networks, on the other hand, represent an emerging technology capable of rapid recognition of patterned behavior without regard to mathematical complexity. Although useful in problem identification, neural networks are not very efficient in providing in-depth solutions and typically do not promote full understanding of the problem or the reasoning behind its solutions. Hence, applications of neural networks have certain limitations. This paper explores the potential for expanding the scope of chemical engineering areas where neural networks might be utilized by incorporating expert systems and neural networks into the same application, a process called hybridization. In addition, hybrid applications are compared with those using more traditional approaches, the results of the different applications are analyzed, and the feasibility of converting the preliminary prototypes described herein into useful final products is evaluated. 12 refs., 8 figs.

  20. Neural network approaches to dynamic collision-free trajectory generation.

    PubMed

    Yang, S X; Meng, M

    2001-01-01

    In this paper, dynamic collision-free trajectory generation in a nonstationary environment is studied using biologically inspired neural network approaches. The proposed neural network is topologically organized, where the dynamics of each neuron is characterized by a shunting equation or an additive equation. The state space of the neural network can be either the Cartesian workspace or the joint space of multi-joint robot manipulators. There are only local lateral connections among neurons. The real-time optimal trajectory is generated through the dynamic activity landscape of the neural network without explicitly searching over either the free space or the collision paths, without explicitly optimizing any global cost functions, without any prior knowledge of the dynamic environment, and without any learning procedures. Therefore the model algorithm is computationally efficient. The stability of the neural network system is guaranteed by the existence of a Lyapunov function candidate. In addition, the model is not very sensitive to the model parameters. Several model variations are presented and their differences are discussed. As examples, the proposed models are applied to generate collision-free trajectories for a mobile robot solving a maze-type problem, avoiding concave U-shaped obstacles, and tracking a moving target while at the same time avoiding varying obstacles, and to generate a trajectory for a two-link planar robot with two targets. The effectiveness and efficiency of the proposed approaches are demonstrated through simulation and comparison studies.

  1. On Improved Least Squares Regression and Artificial Neural Network Meta-Models for Simulation via Control Variates

    DTIC Science & Technology

    2016-09-15

    optimization techniques. In 1998 Winter Simul. Conf. Proc. (Cat. No.98CH36274), volume 1, pages 151–158. IEEE, 2013. [7] Adedeji B. Badiru and David B. Sieger...Conf. Proc. (Cat. No.98CH36274), volume 1, pages 167–174. IEEE, 1998. [10] Russell R. Barton. Metamodels for simulation input-output relations. Proc...Control variates techniques for Monte Carlo simulation. In Proc. 2003 Int. Conf. Mach. Learn. Cybern. (IEEE Cat. No.03EX693), volume 1, pages 144

  2. Some neural networks compute, others don't.

    PubMed

    Piccinini, Gualtiero

    2008-01-01

    I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend the following theses. (1) Many neural networks compute--they perform computations. (2) Some neural networks compute in a classical way. Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks compute in a non-classical way. (4) Yet other neural networks do not perform computations. Brains may well fall into this last class.

  3. Application of artificial neural network coupled with genetic algorithm and simulated annealing to solve groundwater inflow problem to an advancing open pit mine

    NASA Astrophysics Data System (ADS)

    Bahrami, Saeed; Doulati Ardejani, Faramarz; Baafi, Ernest

    2016-05-01

    In this study, hybrid models are designed to predict groundwater inflow to an advancing open pit mine and the hydraulic head (HH) in observation wells at different distances from the centre of the pit during its advance. Hybrid methods coupling an artificial neural network (ANN) with genetic algorithms (ANN-GA) and with simulated annealing (ANN-SA) were utilised. The ratios of the depth of pit penetration into the aquifer to the aquifer thickness, of the pit bottom radius to its top radius, the inverse of the pit advance time, and the ratio of the HH in the observation wells to the distance of the wells from the centre of the pit were used as inputs to the networks. To achieve this objective, two hybrid models (ANN-GA and ANN-SA) with a 4-5-3-1 arrangement were designed. In addition, by switching the last argument of the input layer with the argument of the output layer of the two earlier models, two new models were developed to predict the HH in the observation wells over the period of the mining process. The accuracy and reliability of the models are verified against field data, the results of a numerical finite element model built with SEEP/W, the outputs of simple ANNs, and some well-known analytical solutions. The predictions obtained by the hybrid methods are closer to the field data than the outputs of the analytical and simple ANN models. The results show that, despite using fewer and simpler parameters, the ANN-GA and to some extent the ANN-SA have the ability to compete with the numerical models.

  4. On sparsely connected optimal neural networks

    SciTech Connect

    Beiu, V.; Draghici, S.

    1997-10-01

    This paper uses two different approaches to show that VLSI- and size-optimal discrete neural networks are obtained for small fan-in values. These results have applications to hardware implementations of neural networks, but also reveal an intrinsic limitation of digital VLSI technology: its inability to cope with highly connected structures. The first approach is based on implementing F_{n,m} functions. The authors show that this class of functions can be implemented in VLSI-optimal (i.e., minimizing AT^2) neural networks of small constant fan-in. In order to estimate the area (A) and the delay (T) of such networks, the following cost functions are used: (i) the connectivity and the number of bits for representing the weights and thresholds, for good estimates of the area; and (ii) the fan-ins and the length of the wires, for good approximations of the delay. The second approach is based on implementing Boolean functions for which the classical Shannon decomposition can be used. Such a solution has already been used to prove bounds on the size of fan-in-2 neural networks. The authors generalize that result to arbitrary fan-in, and prove that the size is minimized by small fan-in values. Finally, a size-optimal neural network of small constant fan-in is suggested for F_{n,m} functions.

  5. Computational inference of neural information flow networks.

    PubMed

    Smith, V Anne; Yu, Jing; Smulders, Tom V; Hartemink, Alexander J; Jarvis, Erich D

    2006-11-24

    Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, we demonstrate that a dynamic Bayesian network (DBN) inference algorithm we originally developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The inferred networks we recover from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. To our knowledge, this study represents the first biologically validated demonstration of an algorithm to successfully infer neural information flow networks.

  6. Artificial Neural Networks and Instructional Technology.

    ERIC Educational Resources Information Center

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  7. Higher-Order Neural Networks Recognize Patterns

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Spirkovska, Lilly; Ochoa, Ellen

    1996-01-01

    Networks of higher order have enhanced capabilities to distinguish between different two-dimensional patterns and to recognize those patterns. Also enhanced capabilities to "learn" patterns to be recognized: "trained" with far fewer examples and, therefore, in less time than necessary to train comparable first-order neural networks.

  8. Orthogonal Patterns In A Binary Neural Network

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1991-01-01

    Report presents some recent developments in theory of binary neural networks. Subject matter relevant to associative (content-addressable) memories and to recognition of patterns - both of considerable importance in advancement of robotics and artificial intelligence. When probed by any pattern, network converges to one of stored patterns.

  9. Neural-Network Modeling Of Arc Welding

    NASA Technical Reports Server (NTRS)

    Anderson, Kristinn; Barnett, Robert J.; Springfield, James F.; Cook, George E.; Strauss, Alvin M.; Bjorgvinsson, Jon B.

    1994-01-01

    Artificial neural networks considered for use in monitoring and controlling gas/tungsten arc-welding processes. Relatively simple network, using 4 welding equipment parameters as inputs, estimates 2 critical weld-bead parameters within 5 percent. Advantage is computational efficiency.

  10. Neural networks as perpetual information generators

    NASA Astrophysics Data System (ADS)

    Englisch, Harald; Xiao, Yegao; Yao, Kailun

    1991-07-01

    The information gain in a neural network cannot be larger than the bit capacity of the synapses. It is shown that the equation derived by Engel et al. [Phys. Rev. A 42, 4998 (1990)] for the strongly diluted network with persistent stimuli contradicts this condition. Furthermore, for any time step the correct equation is derived by taking the correlation between random variables into account.

  11. Disruption forecasting at JET using neural networks

    NASA Astrophysics Data System (ADS)

    Cannas, B.; Fanni, A.; Marongiu, E.; Sonato, P.

    2004-01-01

    Neural networks are trained to evaluate the risk of plasma disruptions in a tokamak experiment using several diagnostic signals as inputs. A saliency analysis confirms the goodness of the chosen inputs, all of which contribute to the network performance. The tests that were carried out refer to data collected from successfully terminated and disruption-terminated pulses performed during two years of JET tokamak experiments. Results show the possibility of developing a neural network predictor that intervenes well in advance in order to avoid plasma disruption or mitigate its effects.

  12. Multiwavelet neural network and its approximation properties.

    PubMed

    Jiao, L; Pan, J; Fang, Y

    2001-01-01

    A model of multiwavelet-based neural networks is proposed. Its universal and L^2 approximation properties, together with its consistency, are proved, and the convergence rates associated with these properties are estimated. The structure of this network is similar to that of the wavelet network, except that the orthonormal scaling functions are replaced by orthonormal multiscaling functions. The theoretical analyses show that the multiwavelet network converges more rapidly than the wavelet network, especially for smooth functions. To compare the two networks, experiments are carried out with the Lemarie-Meyer wavelet network, the Daubechies2 wavelet network and the GHM multiwavelet network, and the results support the theoretical analysis well. In addition, the results also illustrate that at jump discontinuities the approximation performance of the two networks is about the same.

  13. Training a Neural Network Via Large-Eddy Simulation for Autonomous Location and Quantification of CH4 Leaks at Natural Gas Facilities

    NASA Astrophysics Data System (ADS)

    Sauer, J.; Travis, B. J.; Munoz-Esparza, D.; Dubey, M. K.

    2015-12-01

    Fugitive methane (CH4) leaks from oil and gas production fields are a potential significant source of atmospheric methane. US DOE's ARPA-E MONITOR program is supporting research to locate and quantify fugitive methane leaks at natural gas facilities in order to achieve a 90% reduction in CH4 emissions. LANL, Aeris and Rice University are developing an LDS (leak detection system) that employs a compact laser absorption methane sensor and sonic anemometer coupled to an artificial neural network (ANN)-based source attribution algorithm. LANL's large-eddy simulation model, HIGRAD, provides high-fidelity simulated wind fields and turbulent CH4 plume dispersion data for various scenarios used in training the ANN. Numerous inverse solution methodologies have been applied over the last decade to assessment of greenhouse gas emissions. ANN learning is well suited to problems in which the training and observed data are noisy, or correspond to complex sensor data as is typical of meteorological and sensor data over a site. ANNs have been shown to achieve higher accuracy with more efficiency than other inverse modeling approaches in studies at larger scales, in urban environments, over short time scales, and even at small spatial scales for efficient source localization of indoor airborne contaminants. Our ANN is intended to characterize fugitive leaks rapidly, given site-specific, real-time, wind and CH4 concentration time-series data at multiple sensor locations, leading to a minimum time-to-detection and providing a first order improvement with respect to overall minimization of methane loss. Initial studies with the ANN on a variety of source location, sensor location, and meteorological condition scenarios are presented and discussed.

  14. A linear model for characterization of synchronization frequencies of neural networks.

    PubMed

    Lv, Peili; Hu, Xintao; Lv, Jinglei; Han, Junwei; Guo, Lei; Liu, Tianming

    2014-02-01

    The synchronization frequency of neural networks and its dynamics have important roles in deciphering the working mechanisms of the brain. It has been widely recognized that the properties of functional network synchronization and its dynamics are jointly determined by network topology, network connection strength, i.e., the connection strength of different edges in the network, and external input signals, among other factors. However, mathematical and computational characterization of the relationships between network synchronization frequency and these three important factors are still lacking. This paper presents a novel computational simulation framework to quantitatively characterize the relationships between neural network synchronization frequency and network attributes and input signals. Specifically, we constructed a series of neural networks including simulated small-world networks, real functional working memory network derived from functional magnetic resonance imaging, and real large-scale structural brain networks derived from diffusion tensor imaging, and performed synchronization simulations on these networks via the Izhikevich neuron spiking model. Our experiments demonstrate that both of the network synchronization strength and synchronization frequency change according to the combination of input signal frequency and network self-synchronization frequency. In particular, our extensive experiments show that the network synchronization frequency can be represented via a linear combination of the network self-synchronization frequency and the input signal frequency. This finding could be attributed to an intrinsically-preserved principle in different types of neural systems, offering novel insights into the working mechanism of neural systems.
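
    The reported linear relationship can be written schematically as below, where the symbols are introduced here only for illustration and the coefficients are network-specific fitted quantities not given in this record:

        f_{\mathrm{sync}} \approx \alpha \, f_{\mathrm{self}} + \beta \, f_{\mathrm{input}}

    Here f_self denotes the network self-synchronization frequency, f_input the input signal frequency, and alpha and beta coefficients determined by the network topology and connection strengths.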

  15. Digital implementation of shunting-inhibitory cellular neural network

    NASA Astrophysics Data System (ADS)

    Hammadou, Tarik; Bouzerdoum, Abdesselam; Bermak, Amine

    2000-05-01

    Shunting inhibition is a model of early visual processing which can provide contrast and edge enhancement and dynamic range compression. An architecture of a digital Shunting Inhibitory Cellular Neural Network (SICNN) for real-time image processing is presented. The proposed architecture is intended to be used in a complete vision system for edge detection and image enhancement. The hardware architecture is modeled and simulated in VHDL, and the simulation results show the functional validity of the proposed architecture.

  16. Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control

    USGS Publications Warehouse

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection, and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic Gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time, and lower set-point overshoot, and that neural network controllers can be more reliable and easier to implement in complex, multivariable plants.
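
    The "feed output state variables back into the inputs" idea can be sketched as a one-step-ahead plant model; the toy plant, data, and scikit-learn regressor below are illustrative assumptions, not the grinding or leaching circuit models used in the study.

      # Recurrent (output-feedback) plant model: the previous plant output is an input feature,
      # so the trained network can be iterated inside a model predictive controller.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)
      u = rng.uniform(-1, 1, 2000)                 # control input sequence
      y = np.zeros_like(u)
      for k in range(1, len(u)):                   # toy nonlinear plant
          y[k] = 0.6 * np.tanh(y[k - 1]) + 0.4 * u[k - 1]

      X = np.column_stack([y[:-1], u[:-1]])        # previous output fed back as an input node
      model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0).fit(X, y[1:])
      print("one-step prediction R^2:", round(model.score(X, y[1:]), 4))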

  17. Permitted and forbidden sets in discrete-time linear threshold recurrent neural networks.

    PubMed

    Yi, Zhang; Zhang, Lei; Yu, Jiali; Tan, Kok Kiong

    2009-06-01

    The concepts of permitted and forbidden sets enable a new perspective on memory in neural networks, and they exhibit interesting dynamics in recurrent neural networks. This paper studies the basic theory of permitted and forbidden sets in linear threshold discrete-time recurrent neural networks. The linear threshold transfer function has been regarded as an adequate transfer function for recurrent neural networks: networks with this transfer function form a class of hybrid analog and digital networks which are especially useful for perceptual computations, and discrete-time networks directly provide algorithms for efficient implementation in digital hardware. The main contribution of this paper is to establish the foundations of permitted and forbidden sets. Necessary and sufficient conditions are obtained for complete convergence, the existence of permitted and forbidden sets, and conditional multiattractivity of linear threshold discrete-time recurrent neural networks. Simulation studies explore some possible interesting practical applications.
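
    A minimal sketch of the network class under study follows; the weights are random and scaled for convergence, which is an illustrative assumption rather than a condition from the paper.

      # Discrete-time linear threshold (LT) recurrent network: x(t+1) = max(0, W x(t) + h).
      import numpy as np

      rng = np.random.default_rng(2)
      n = 5
      W = rng.normal(0, 1, (n, n))
      W *= 0.8 / np.linalg.norm(W, 2)        # scale the spectral norm below 1 so the update is a contraction
      h = rng.normal(0, 0.5, n)              # external input
      x = rng.uniform(0, 1, n)

      for _ in range(200):
          x = np.maximum(0.0, W @ x + h)     # linear threshold (rectification) update

      print("state after 200 iterations:", np.round(x, 3))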

  18. Stability analysis of fractional-order Hopfield neural networks with time delays.

    PubMed

    Wang, Hu; Yu, Yongguang; Wen, Guoguang

    2014-07-01

    This paper investigates the stability of fractional-order Hopfield neural networks with time delays. Firstly, fractional-order Hopfield neural networks with a hub structure and time delays are studied, and some sufficient conditions for the stability of the systems are obtained. Next, two fractional-order Hopfield neural networks with different ring structures and time delays are developed. By studying the developed neural networks, the corresponding sufficient conditions for the stability of these systems are also derived. It is shown that the stability conditions are independent of the time delays. Finally, numerical simulations are given to illustrate the effectiveness of the theoretical results obtained in this paper.

  19. Stability analysis of switched stochastic neural networks with time-varying delays.

    PubMed

    Wu, Xiaotai; Tang, Yang; Zhang, Wenbing

    2014-03-01

    This paper is concerned with the global exponential stability of switched stochastic neural networks with time-varying delays. Firstly, the stability of switched stochastic delayed neural networks with stable subsystems is investigated by utilizing the mathematical induction method, a piecewise Lyapunov function, and the average dwell time approach. Secondly, by utilizing an extended comparison principle from impulsive systems, the stability of switched stochastic delayed neural networks with both stable and unstable subsystems is analyzed, and several easy-to-verify conditions are derived to ensure the exponential mean-square stability of switched delayed neural networks with stochastic disturbances. The effectiveness of the proposed results is illustrated by two simulation examples.

  20. Ultrasonographic Diagnosis of Cirrhosis Based on Preprocessing Using Pyramid Recurrent Neural Network

    NASA Astrophysics Data System (ADS)

    Lu, Jianming; Liu, Jiang; Zhao, Xueqin; Yahagi, Takashi

    In this paper, a pyramid recurrent neural network is applied to characterize hepatic parenchymal diseases in ultrasonic B-scan texture. The cirrhotic parenchymal diseases are classified into four types according to the size of the hypoechoic nodular lesions. The B-mode patterns are wavelet transformed, and the compressed data are then fed into a pyramid neural network to diagnose the type of cirrhotic disease. Compared with 3-layer neural networks, the performance of the proposed pyramid recurrent neural network is improved by utilizing the lower layer effectively. The simulation results show that the proposed system is suitable for the diagnosis of cirrhotic diseases.

  1. Digit and command interpretation for electronic book using neural network and genetic algorithm.

    PubMed

    Lam, H K; Leung, Frank H F

    2004-12-01

    This paper presents the interpretation of digits and commands using a modified neural network and the genetic algorithm. The modified neural network exhibits a node-to-node relationship which enhances its learning and generalization abilities. A digit-and-command interpreter constructed by the modified neural networks is proposed to recognize handwritten digits and commands. A genetic algorithm is employed to train the parameters of the modified neural networks of the digit-and-command interpreter. The proposed digit-and-command interpreter is successfully realized in an electronic book. Simulation and experimental results will be presented to show the applicability and merits of the proposed approach.
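
    A hedged sketch of GA-based weight training follows; a plain one-hidden-layer network on the XOR problem stands in for the modified network and the handwritten digit/command data, which are not reproduced here.

      # Evolve the weights of a tiny 2-4-1 network with selection, uniform crossover and mutation.
      import numpy as np

      rng = np.random.default_rng(3)
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
      y = np.array([0, 1, 1, 0], float)            # XOR targets
      n_w = 2 * 4 + 4 + 4 + 1                      # weights and biases of a 2-4-1 network

      def forward(w, X):
          W1, b1 = w[:8].reshape(2, 4), w[8:12]
          W2, b2 = w[12:16].reshape(4, 1), w[16]
          h = np.tanh(X @ W1 + b1)
          return (1.0 / (1.0 + np.exp(-(h @ W2 + b2)))).ravel()

      def fitness(w):
          return -np.mean((forward(w, X) - y) ** 2)   # negative MSE: higher is better

      pop = rng.normal(0, 1, (60, n_w))
      for gen in range(200):
          scores = np.array([fitness(w) for w in pop])
          parents = pop[np.argsort(scores)[-20:]]      # keep the 20 fittest
          p1 = parents[rng.integers(0, 20, 40)]
          p2 = parents[rng.integers(0, 20, 40)]
          mask = rng.random((40, n_w)) < 0.5           # uniform crossover
          kids = np.where(mask, p1, p2) + rng.normal(0, 0.1, (40, n_w))   # plus mutation
          pop = np.vstack([parents, kids])

      best = pop[np.argmax([fitness(w) for w in pop])]
      print("outputs for the four XOR inputs:", np.round(forward(best, X), 2))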

  2. Applications of Neural Networks to Adaptive Control

    DTIC Science & Technology

    1989-12-01

    Naval Postgraduate School, Monterey, California. Thesis: Applications of Neural Networks to Adaptive Control, Department of Aeronautics and Astronautics. Listed figures include network dynamic stability for q(t) and e(t).

  3. Using Neural Networks to Describe Tracer Correlations

    NASA Technical Reports Server (NTRS)

    Lary, D. J.; Mueller, M. D.; Mussa, H. Y.

    2003-01-01

    Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 to the present. The neural network Fortran code used is available for download.
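
    The regression setup can be sketched as below; synthetic inputs stand in for the HALOE data and an ordinary scikit-learn MLP stands in for the Quickprop-trained network, with an invented CH4-N2O relationship used purely for illustration.

      # Fit N2O vmr from latitude, pressure, day of year and CH4 vmr with a small MLP.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(4)
      n = 5000
      lat = rng.uniform(-90, 90, n)
      logp = rng.uniform(0, 3, n)                  # log10 pressure (hPa), rough stratospheric range
      doy = rng.uniform(0, 365, n)
      ch4 = rng.uniform(0.2, 1.8, n)               # ppmv
      n2o = 0.32 * ch4 / 1.8 + 0.01 * np.cos(np.radians(lat)) + rng.normal(0, 0.005, n)   # fake ppmv

      X = np.column_stack([lat / 90, logp / 3, np.sin(2 * np.pi * doy / 365),
                           np.cos(2 * np.pi * doy / 365), ch4])
      model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, n2o)
      r = np.corrcoef(model.predict(X), n2o)[0, 1]
      print(f"correlation coefficient on training data: {r:.4f}")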

  4. Neural network technologies for image classification

    NASA Astrophysics Data System (ADS)

    Korikov, A. M.; Tungusova, A. V.

    2015-11-01

    We analyze the classes of problems for which neural network technologies are objectively necessary, i.e., problems of representation and resolution in the neural network logical basis. Among these problems, image recognition takes an important place, in particular the classification of multi-dimensional data based on information about textural characteristics. These problems occur in aerospace and seismic monitoring, materials science, medicine, and other fields. We reviewed different approaches for texture description: statistical, structural, and spectral. We developed a neural network technology for resolving a practical problem of cloud image classification for satellite snapshots from the MODIS spectroradiometer. The cloud texture is described by the statistical characteristics of the Gray-Level Co-Occurrence Matrix (GLCM) method. From the range of neural network models that might be applied to image classification, we chose the probabilistic neural network (PNN) model and developed an implementation which performs the classification of the main types and subtypes of clouds. We also experimentally chose the optimal architecture and parameters for the PNN model used for image classification.
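
    The GLCM texture step can be sketched as follows; the co-occurrence matrix is hand-rolled, contrast and homogeneity are just two common GLCM statistics (not necessarily the paper's exact feature set), and a random patch stands in for a MODIS cloud image.

      # Grey-level co-occurrence matrix for one pixel offset, plus two texture statistics.
      import numpy as np

      def glcm_features(img, levels=8, dx=1, dy=0):
          q = (img.astype(float) / img.max() * (levels - 1)).astype(int)   # quantise grey levels
          glcm = np.zeros((levels, levels))
          h, w = q.shape
          for i in range(h - dy):
              for j in range(w - dx):
                  glcm[q[i, j], q[i + dy, j + dx]] += 1                    # count co-occurring pairs
          p = glcm / glcm.sum()
          i_idx, j_idx = np.indices(p.shape)
          contrast = np.sum(p * (i_idx - j_idx) ** 2)
          homogeneity = np.sum(p / (1.0 + np.abs(i_idx - j_idx)))
          return contrast, homogeneity

      patch = np.random.default_rng(5).integers(0, 256, (32, 32))          # stand-in for a cloud patch
      print(glcm_features(patch))                                          # features fed to the PNN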

  5. Neurale Netwerken en Radarsystemen (Neural Networks and Radar Systems)

    DTIC Science & Technology

    1989-08-01

    Excerpt from the report's reference list: D. E. Rumelhart et al., "General issues in cognitive science," Parallel Distributed Processing, Vol. 1: Foundations, 1986, pp. 110-146; "Neural networks (part 2)," Expert Focus, IEEE Expert, Spring 1988; J. A. Anderson, "Cognitive and Psychological Computations with Neural Models," IEEE; D. H. Ackley, G. E. Hinton, and T. J. Sejnowski, "A Learning Algorithm for Boltzmann Machines," Cognitive Science 9, 147-169.

  6. Estimates on compressed neural networks regression.

    PubMed

    Zhang, Yongquan; Li, Youmei; Sun, Jianyong; Ji, Jiabing

    2015-03-01

    When the number of neural elements n of a neural network is larger than the sample size m, the overfitting problem arises since there are more parameters than actual data (more variables than constraints). In order to overcome the overfitting problem, we propose to reduce the number of neural elements by using a compressed projection A which does not need to satisfy the Restricted Isometry Property (RIP). By applying probability inequalities and approximation properties of feedforward neural networks (FNNs), we prove that solving the FNN regression learning algorithm in the compressed domain instead of the original domain reduces the sample error at the price of an increased (but controlled) approximation error; covering number theory is used to estimate the excess error, and an upper bound on the excess error is given.
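
    The compression idea can be sketched roughly as follows, assuming random hidden features and a Gaussian projection matrix A; this is an illustration of fitting in the compressed domain, not the authors' estimator or error bounds.

      # n random hidden features with n > m invite overfitting; project them to n_comp < m
      # dimensions with a random A and fit the output weights there by least squares.
      import numpy as np

      rng = np.random.default_rng(6)
      m, n, n_comp = 100, 400, 40                   # samples, neural elements, compressed size
      x = rng.uniform(-1, 1, (m, 1))
      y = np.sin(3 * x).ravel() + rng.normal(0, 0.05, m)

      W, b = rng.normal(0, 3, (1, n)), rng.uniform(-1, 1, n)
      H = np.tanh(x @ W + b)                        # hidden-layer outputs of the FNN
      A = rng.normal(0, 1 / np.sqrt(n_comp), (n, n_comp))
      Hc = H @ A                                    # compressed features
      coef, *_ = np.linalg.lstsq(Hc, y, rcond=None)
      rmse = np.sqrt(np.mean((Hc @ coef - y) ** 2))
      print("training RMSE in the compressed domain:", round(float(rmse), 4))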

  7. An adaptive H∞ controller design for bank-to-turn missiles using ridge Gaussian neural networks.

    PubMed

    Lin, Chuan-Kai; Wang, Sheng-De

    2004-11-01

    A new autopilot design for bank-to-turn (BTT) missiles is presented. In the design of the autopilot, a ridge Gaussian neural network with local learning capability and fewer tuning parameters than Gaussian neural networks is proposed to model the controlled nonlinear systems. We prove that the proposed ridge Gaussian neural network, which can be a universal approximator, equals an expansion of rotated and scaled Gaussian functions. Although ridge Gaussian neural networks can approximate nonlinear and complex systems accurately, the small approximation errors may affect the tracking performance significantly. Therefore, by employing H∞ control theory, it is easy to attenuate the effects of the approximation errors of the ridge Gaussian neural networks to a prescribed level. Computer simulation results confirm the effectiveness of the proposed ridge Gaussian neural network-based autopilot with H∞ stabilization.

  8. Flexible body control using neural networks

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1992-01-01

    Progress is reported on the control of the Control Structures Interaction (CSI) suitcase demonstrator (a flexible structure) using neural networks and fuzzy logic. It is concluded that while control by neural nets alone (i.e., allowing the net to design a controller with no human intervention) has yielded less than optimal results, the neural net trained to emulate the existing fuzzy logic controller does produce acceptable system responses for the initial conditions examined. Also, a neural net was found to be very successful in performing the emulation step necessary for the anticipatory fuzzy controller for the CSI suitcase demonstrator. The fuzzy-neural hybrid, which exhibits good robustness and noise rejection properties, shows promise as a controller for practical flexible systems and should be further evaluated.

  9. An annealed chaotic maximum neural network for bipartite subgraph problem.

    PubMed

    Wang, Jiahai; Tang, Zheng; Wang, Ronglong

    2004-04-01

    In this paper, based on the maximum neural network, we propose a new parallel algorithm that can help the maximum neural network escape from local minima by including transient chaotic neurodynamics for the bipartite subgraph problem. The goal of the bipartite subgraph problem, which is an NP-complete problem, is to remove the minimum number of edges in a given graph such that the remaining graph is bipartite. Lee et al. presented a parallel algorithm using the maximum neural model (winner-take-all neuron model) for this NP-complete problem. The maximum neural model always guarantees a valid solution and greatly reduces the search space without burdensome parameter tuning. However, the model has a tendency to converge to a local minimum easily because it is based on the steepest descent method. By adding a negative self-feedback to the maximum neural network, we propose a new parallel algorithm that introduces richer and more flexible chaotic dynamics and can prevent the network from getting stuck at local minima. After the chaotic dynamics vanishes, the proposed algorithm is fundamentally governed by gradient descent dynamics and usually converges to a stable equilibrium point. The proposed algorithm has the advantages of both the maximum neural network and chaotic neurodynamics. A large number of instances have been simulated to verify the proposed algorithm. The simulation results show that our algorithm finds optimum or near-optimum solutions for the bipartite subgraph problem that are superior to those of the best existing parallel algorithms.

  10. A modular architecture for transparent computation in recurrent neural networks.

    PubMed

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, which we refer to as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments.

  11. Adaptive neural networks for mobile robotic control

    NASA Astrophysics Data System (ADS)

    Burnett, Jeff R.; Dagli, Cihan H.

    2001-03-01

    Movement of a differential drive robot has a non-linear dependence on the current position and orientation, so a controller must be able to deal with the non-linearity of the plant: it must either linearize the plant and deal with special cases, or be non-linear itself. Once the controller is designed, implementation on a real robotic platform presents challenges due to the varying parameters of the plant. Robots of the same model may have different motor frictions, the surface the robot maneuvers on may change (e.g., from carpet to tile), and batteries drain, providing less power over time. A feed-forward neural network controller could overcome these challenges: the network could learn the non-linearities of the plant, monitor the error for parameter changes, and adapt to them. In this manner, a single controller can be designed for an ideal robot and then used to populate a multi-robot colony without manually fine-tuning the controller for each robot. This paper demonstrates such a controller, outlining its design in simulation and its implementation on Khepera robotic platforms.

  12. Binaural sound localization using neural networks

    NASA Astrophysics Data System (ADS)

    Craig, Rushby C.

    1991-12-01

    The purpose of this study was to investigate the use of artificial neural networks to localize sound sources from simulated human binaural signals. Only sound sources originating from a circle on the horizontal plane were considered. Experiments were performed to examine the ability of the networks to localize using three different feature sets: time samples of the signals, Fast Fourier Transform magnitude together with cross-correlation data, and auto-correlation together with cross-correlation data. The two types of sound source signals considered were tones and Gaussian noise. The feature set that yielded the best classification accuracy (over 91 percent) for both tones and noise was the auto-correlation and cross-correlation data. These results were achieved using 18 classes (20 per class). The other two feature sets did not produce accuracy results as high or as consistent between the two signal types. When using time samples of the signals as features, it was observed that in order to accurately classify tones of random frequency, it was necessary to train with random-frequency tones rather than with tones of one or a few discrete frequencies.

  13. Prototyping distributed simulation networks

    NASA Technical Reports Server (NTRS)

    Doubleday, Dennis L.

    1990-01-01

    Durra is a declarative language designed to support application-level programming. The use of Durra is illustrated to describe a simple distributed application: a simulation of a collection of networked vehicle simulators. It is shown how the language is used to describe the application, its components and structure, and how the runtime executive provides for the execution of the application.

  14. The Neurally Controlled Animat: Biological Brains Acting with Simulated Bodies.

    PubMed

    Demarse, Thomas B; Wagenaar, Daniel A; Blau, Axel W; Potter, Steve M

    2001-01-01

    The brain is perhaps the most advanced and robust computation system known. We are creating a method to study how information is processed and encoded in living cultured neuronal networks by interfacing them to a computer-generated animal, the Neurally-Controlled Animat, within a virtual world. Cortical neurons from rats are dissociated and cultured on a surface containing a grid of electrodes (multi-electrode arrays, or MEAs) capable of both recording and stimulating neural activity. Distributed patterns of neural activity are used to control the behavior of the Animat in a simulated environment. The computer acts as its sensory system providing electrical feedback to the network about the Animat's movement within its environment. Changes in the Animat's behavior due to interaction with its surroundings are studied in concert with the biological processes (e.g., neural plasticity) that produced those changes, to understand how information is processed and encoded within a living neural network. Thus, we have created a hybrid real-time processing engine and control system that consists of living, electronic, and simulated components. Eventually this approach may be applied to controlling robotic devices, or lead to better real-time silicon-based information processing and control algorithms that are fault tolerant and can repair themselves.

  15. Training Deep Spiking Neural Networks Using Backpropagation

    PubMed Central

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to that of conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations. PMID:27877107
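
    A toy surrogate-gradient sketch follows to illustrate the general idea of differentiating through spikes; it uses a single leaky integrate-and-fire neuron and a fast-sigmoid surrogate derivative, which is a stand-in for, not a reproduction of, the paper's formulation.

      # Train one input weight of a LIF neuron toward a target spike count by giving the
      # hard threshold a smooth surrogate derivative during the backward pass.
      import numpy as np

      rng = np.random.default_rng(7)
      inp = rng.random(200)                       # fixed input sequence over 200 time steps
      target, thresh, decay, lr = 20.0, 1.0, 0.9, 0.01
      w = 0.5

      for epoch in range(300):
          v, spikes, dcount_dw = 0.0, 0.0, 0.0
          for x in inp:
              v = decay * v + w * x
              s = float(v >= thresh)                                   # non-differentiable spike
              surrogate = 1.0 / (1.0 + 10.0 * abs(v - thresh)) ** 2    # smooth proxy for ds/dv
              dcount_dw += surrogate * x                               # ignores coupling through the reset
              spikes += s
              v -= s * thresh                                          # soft reset
          w -= lr * 2.0 * (spikes - target) * dcount_dw / len(inp)     # descend (spikes - target)^2

      print(f"trained weight {w:.3f}, spikes per window {spikes:.0f} (target {target:.0f})")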

  16. Foreign currency rate forecasting using neural networks

    NASA Astrophysics Data System (ADS)

    Pandya, Abhijit S.; Kondo, Tadashi; Talati, Amit; Jayadevappa, Suryaprasad

    2000-03-01

    Neural networks are increasingly being used as a forecasting tool in many forecasting problems. This paper discusses the application of neural networks to predicting daily foreign exchange rates among the USD, GBP, and DEM. We approach the problem from a time-series analysis framework, where future exchange rates are forecasted solely using past exchange rates. This relies on the belief that past prices and future prices are closely related and interdependent. We present the result of training a neural network with historical USD-GBP data. The methodology used is explained, as well as the training process. We discuss the selection of inputs to the network and present a comparison of using the actual exchange rates versus the exchange rate differences as inputs; price and rate differences are the preferred inputs for training neural networks in financial applications. Results of both approaches are presented together for comparison. We show that the network is able to learn the trends in the exchange rate movements correctly, and we present the results of the prediction over several periods of time.

  17. Training Deep Spiking Neural Networks Using Backpropagation.

    PubMed

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to that of conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.

  18. Application of neural networks to flight test diagnostics

    SciTech Connect

    Wheeler, R.M. Jr.; Sheaffer, D.A.

    1991-08-01

    A system has been designed which can provide summary information about specific noisy electric pulses that are generated during flight testing. This is important from a telemetry viewpoint, since limited bandwidth often rules out transmitting all of the pulse data. The system is based on a neural network processing paradigm. The neural network serves as a mapping between pulse data inputs and pulse category outputs. Output categories correspond to presence or type of component failure. Extensive computer simulations have shown that the system can recognize qualitative pulse features which are useful for diagnostic purposes. A second version of the system, also using a neural network, was designed to perform data compression. In this case, an entire pulse is efficiently coded for transmission and the original signal is reconstructed upon receiving the coded transmission. Successful simulations for both systems have demonstrated feasibility and have led to a hardware development effort aimed at prototyping a fieldable system. Based on these results, it appears that the neural network approach may be applicable to other diagnostic and data analysis problems arising in component or system testing. 3 refs., 16 figs., 2 tabs.

  19. Neural Network-Based Sensor Validation for Turboshaft Engines

    NASA Technical Reports Server (NTRS)

    Moller, James C.; Litt, Jonathan S.; Guo, Ten-Huei

    1998-01-01

    Sensor failure detection, isolation, and accommodation using a neural network approach is described. An auto-associative neural network is configured to perform dimensionality reduction on the sensor measurement vector and provide estimated sensor values. The sensor validation scheme is applied in a simulation of the T700 turboshaft engine in closed loop operation. Performance is evaluated based on the ability to detect faults correctly and maintain stable and responsive engine operation. The set of sensor outputs used for engine control forms the network input vector. Analytical redundancy is verified by training networks of successively smaller bottleneck layer sizes. Training data generation and strategy are discussed. The engine maintained stable behavior in the presence of sensor hard failures. With proper selection of fault determination thresholds, stability was maintained in the presence of sensor soft failures.
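
    The auto-associative (bottleneck) idea can be sketched as below, with synthetic correlated channels standing in for the T700 sensor suite and a scikit-learn MLP standing in for the paper's network; a large per-channel reconstruction residual flags the failed sensor.

      # Train a bottlenecked network to reproduce the sensor vector, then inject a bias fault.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(8)
      t = np.linspace(0, 20, 2000)
      base = np.sin(t) + 0.1 * rng.normal(size=t.size)
      X = np.column_stack([base, 0.8 * base + 0.05, base ** 2, np.cos(t)])   # four correlated "sensors"

      auto = MLPRegressor(hidden_layer_sizes=(2,), max_iter=4000, random_state=0).fit(X, X)
      resid = np.abs(auto.predict(X) - X).mean(axis=0)            # nominal residual per sensor

      X_fault = X.copy()
      X_fault[:, 1] += 0.5                                        # bias ("soft") failure on sensor 1
      resid_fault = np.abs(auto.predict(X_fault) - X_fault).mean(axis=0)
      print("nominal residuals:", np.round(resid, 3))
      print("faulted residuals:", np.round(resid_fault, 3))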

  20. Spiking neural networks for higher-level information fusion

    NASA Astrophysics Data System (ADS)

    Bomberger, Neil A.; Waxman, Allen M.; Pait, Felipe M.

    2004-04-01

    This paper presents a novel approach to higher-level (2+) information fusion and knowledge representation using semantic networks composed of coupled spiking neuron nodes. Networks of spiking neurons have been shown to exhibit synchronization, in which sub-assemblies of nodes become phase locked to one another. This phase locking reflects the tendency of biological neural systems to produce synchronized neural assemblies, which have been hypothesized to be involved in feature binding. The approach in this paper embeds spiking neurons in a semantic network, in which a synchronized sub-assembly of nodes represents a hypothesis about a situation. Likewise, multiple synchronized assemblies that are out-of-phase with one another represent multiple hypotheses. The initial network is hand-coded, but additional semantic relationships can be established by associative learning mechanisms. This approach is demonstrated with a simulated scenario involving the tracking of suspected criminal vehicles between meeting places in an urban environment.

  1. Neural network approaches for noisy language modeling.

    PubMed

    Li, Jun; Ouazzane, Karim; Kazemian, Hassan B; Afzal, Muhammad Sajid

    2013-11-01

    Text entry from people is not only grammatical and distinct, but also noisy. For example, a user's typing stream contains all the information about the user's interaction with the computer using a QWERTY keyboard, which may include the user's typing mistakes as well as specific vocabulary, typing habits, and typing performance. These features are particularly evident in disabled users' typing streams. This paper proposes a new concept called noisy language modeling by further developing information theory, and applies neural networks to one of its specific applications: the typing stream. The paper experimentally uses a neural network approach to analyze disabled users' typing streams, both in general and in specific ways, to identify their typing behaviors and subsequently to make typing predictions and typing corrections. A focused time-delay neural network (FTDNN) language model, a time gap model, a prediction model based on time gaps, and a probabilistic neural network (PNN) model are developed. A 38% first hitting rate (HR) and a 53% first-three HR in symbol prediction are obtained based on the analysis of a user's typing history through FTDNN language modeling, while the modeling results using the time gap prediction model and the PNN model demonstrate that the correction rates lie predominantly between 65% and 90% with the current testing samples and that 70% of all test scores are above the basic correction rates, respectively. The modeling process demonstrates that a neural network is a suitable and robust language modeling tool for analyzing the noisy language stream. The research also paves the way for practical application development in areas such as informational analysis, text prediction, and error correction by providing a theoretical basis of neural network approaches for noisy language modeling.

  2. Surrogate population models for large-scale neural simulations.

    PubMed

    Tripp, Bryan P

    2015-06-01

    Because different parts of the brain have rich interconnections, it is not possible to model small parts realistically in isolation. However, it is also impractical to simulate large neural systems in detail. This article outlines a new approach to multiscale modeling of neural systems that involves constructing efficient surrogate models of populations. Given a population of neuron models with correlated activity and with specific, nonrandom connections, a surrogate model is constructed in order to approximate the aggregate outputs of the population. The surrogate model requires less computation than the neural model, but it has a clear and specific relationship with the neural model. For example, approximate spike rasters for specific neurons can be derived from a simulation of the surrogate model. This article deals specifically with neural engineering framework (NEF) circuits of leaky-integrate-and-fire point neurons. Weighted sums of spikes are modeled by interpolating over latent variables in the population activity, and linear filters operate on Gaussian random variables to approximate spike-related fluctuations. It is found that the surrogate models can often closely approximate network behavior with orders-of-magnitude reduction in computational demands, although there are certain systematic differences between the spiking and surrogate models. Since individual spikes are not modeled, some simulations can be performed with much longer step sizes (e.g., 20 ms). Possible extensions to non-NEF networks and to more complex neuron models are discussed.

  3. Neural network system for traffic flow management

    NASA Astrophysics Data System (ADS)

    Gilmore, John F.; Elibiary, Khalid J.; Petersson, L. E. Rickard

    1992-09-01

    Atlanta will be the home of several special events during the next five years ranging from the 1996 Olympics to the 1994 Super Bowl. When combined with the existing special events (Braves, Falcons, and Hawks games, concerts, festivals, etc.), the need to effectively manage traffic flow from surface streets to interstate highways is apparent. This paper describes a system for traffic event response and management for intelligent navigation utilizing signals (TERMINUS) developed at Georgia Tech for adaptively managing special event traffic flows in the Atlanta, Georgia area. TERMINUS (the original name given Atlanta, Georgia based upon its role as a rail line terminating center) is an intelligent surface street signal control system designed to manage traffic flow in Metro Atlanta. The system consists of three components. The first is a traffic simulation of the downtown Atlanta area around Fulton County Stadium that models the flow of traffic when a stadium event lets out. Parameters for the surrounding area include modeling for events during various times of day (such as rush hour). The second component is a computer graphics interface with the simulation that shows the traffic flows achieved based upon intelligent control system execution. The final component is the intelligent control system that manages surface street light signals based upon feedback from control sensors that dynamically adapt the intelligent controller's decision making process. The intelligent controller is a neural network model that allows TERMINUS to control the configuration of surface street signals to optimize the flow of traffic away from special events.

  4. Intelligent neural network classifier for automatic testing

    NASA Astrophysics Data System (ADS)

    Bai, Baoxing; Yu, Heping

    1996-10-01

    This paper is concerned with an application of a multilayer feedforward neural network to the visual inspection of industrial images, and introduces a high-performance image processing and recognition system which can be used for real-time detection of blemishes, streaks, cracks, etc. on the inner walls of high-accuracy pipes. To take full advantage of the capabilities of the artificial neural network, such as distributed information memory, large-scale self-adapting parallel processing, and high fault tolerance, this system uses a multilayer perceptron as a regular detector to extract features of the images to be inspected and classify them.

  5. Implementation aspects of Graph Neural Networks

    NASA Astrophysics Data System (ADS)

    Barcz, A.; Szymański, Z.; Jankowski, S.

    2013-10-01

    This article summarises the results of the implementation of a Graph Neural Network classifier. The Graph Neural Network model is a connectionist model capable of processing various types of structured data, including non-positional and cyclic graphs. In order to operate correctly, the GNN model must implement a transition function that is a contraction map, which is assured by imposing a penalty on the model weights. This article presents research results concerning the impact of the penalty parameter on the model training process and the practical decisions that were made during the GNN implementation process.

  6. Livermore Big Artificial Neural Network Toolkit

    SciTech Connect

    Essen, Brian Van; Jacobs, Sam; Kim, Hyojin; Dryden, Nikoli; Moon, Tim

    2016-07-01

    LBANN is a toolkit designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training, specifically low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open-source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library, which is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  7. Automatic identification of species with neural networks

    PubMed Central

    Jiménez-Segura, Luz Fernanda

    2014-01-01

    A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% true positive fish identifications, 92.87% for plants, and 93.25% for butterflies. Our results highlight how neural networks can serve as a complementary tool for species identification. PMID:25392749

  8. Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks

    SciTech Connect

    Ziaul Huque

    2007-08-31

    This is the final technical report for the project titled 'Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks'. The aim of the project was to develop an efficient chemistry model for combustion simulations. The reduced chemistry model was developed mathematically without the need for extensive knowledge of the chemistry involved. To aid in the development of the model, neural networks (NNs) were used via a new network topology known as Non-linear Principal Components Analysis (NPCA). A commonly used Multilayer Perceptron Neural Network (MLP-NN) was modified to implement NPCA-NN. The training rate of NPCA-NN was improved with the Generalized Regression Neural Network (GRNN), which is based on kernel smoothing techniques. Kernel smoothing provides a simple way of finding structure in a data set without the imposition of a parametric model. The trajectory data of the reaction mechanism were generated using genetic algorithm (GA) optimization techniques. The NPCA-NN algorithm was then used for the reduction of the dimethyl ether (DME) mechanism. DME is a recently discovered fuel made from natural gas (and other feedstocks such as coal, biomass, and urban wastes) which can be used in compression ignition engines as a substitute for diesel. An in-house two-dimensional Computational Fluid Dynamics (CFD) code was developed based on a meshfree technique and a time-marching solution algorithm. The project also provided valuable research experience to two graduate students.

  9. Analog implementation of pulse-coupled neural networks.

    PubMed

    Ota, Y; Wilamowski, B M

    1999-01-01

    This paper presents a compact architecture for analog CMOS hardware implementation of voltage-mode pulse-coupled neural networks (PCNNs). The hardware implementation method shows inherent fault-tolerance properties and high speed, usually more than an order of magnitude over the software counterpart. The computational style described in this article mimics a biological neural network using pulse-stream signaling and analog summation and multiplication. The pulse-stream encoding technique uses pulse streams to carry information and control analog circuitry, while storing further analog information on the time axis. The main feature of the proposed neuron circuit is that the structure is compact, yet it exhibits all the basic properties of natural biological neurons. Functional and structural forms of neural and synaptic functions are presented along with simulation results. Finally, the proposed design is applied to image processing to demonstrate successful restoration of images and their features.

  10. A Squeezed Artificial Neural Network for the Symbolic Network Reliability Functions of Binary-State Networks.

    PubMed

    Yeh, Wei-Chang

    2016-08-18

    Network reliability is an important index for the provision of useful information for decision support in the modern world. There is always a need to calculate symbolic network reliability functions (SNRFs) due to dynamic and rapid changes in network parameters. In this brief, the proposed squeezed artificial neural network (SqANN) approach uses Monte Carlo simulation to estimate the reliability corresponding to a given design matrix from the Box-Behnken design, and the Taguchi method is then implemented to find the appropriate number of neurons and the activation functions of the hidden layer and the output layer of the ANN for evaluating SNRFs. According to the experimental results on the benchmark networks, the comparison appears to support the superiority of the proposed SqANN method over the traditional ANN-based approach, with at least a 16.6% improvement in the median absolute deviation at the cost of an extra 2 s on average for all experiments.

  11. Cryptococcus neoformans Chemotyping by Quantitative Analysis of 1H Nuclear Magnetic Resonance Spectra of Glucuronoxylomannans with a Computer-Simulated Artificial Neural Network

    PubMed Central

    Cherniak, Robert; Valafar, Homayoun; Morris, Laura C.; Valafar, Faramarz

    1998-01-01

    The complete assignment of the proton chemical shifts obtained by nuclear magnetic resonance (NMR) spectroscopy of de-O-acetylated glucuronoxylomannans (GXMs) from Cryptococcus neoformans permitted the high-resolution determination of the total structure of any GXM. Six structural motifs based on an α-(1→3)-mannotriose substituted with variable quantities of 2-O-β- and 4-O-β-xylopyranosyl and 2-O-β-glucopyranosyluronic acid were identified. The chemical shifts of only the anomeric protons of the mannosyl residues served as structure reporter groups (SRG) for the identification and quantitation of the six triads present in any GXM. The assigned protons for the mannosyl residues resonated at clearly distinguishable positions in the spectrum and supplied all the information essential for the assignment of the complete GXM structure. This technique for assigning structure is referred to as the SRG concept. The SRG concept was used to analyze the distribution of the six mannosyl triads of GXMs obtained from 106 isolates of C. neoformans. The six mannosyl triads occurred singularly or in combination with one or more of the other triads. The identification and quantitation of the SRG were simplified by using a computer-simulated artificial neural network (ANN) to automatically analyze the SRG region of the one-dimensional proton NMR spectra. The occurrence and relative distribution of the six mannosyl triads were used to chemotype C. neoformans on the basis of subtle variations in GXM structure determined by analysis of the SRG region of the proton NMR spectrum by the ANN. The data for the distribution of the six SRGs from GXMs of 106 isolates of C. neoformans yielded eight chemotypes, Chem1 through Chem8. PMID:9521136

  12. Seismic response modeling of multi-story buildings using neural networks

    NASA Astrophysics Data System (ADS)

    Conte, Joel P.; Durrani, Ahmad J.; Shelton, Robert O.

    1994-05-01

    A neural network based approach to modeling the seismic response of multi-story frame buildings is presented. The seismic response of the frames is emulated using multi-layer feedforward neural networks with a backpropagation learning algorithm. Actual earthquake accelerograms and the corresponding structural responses obtained from analytical models of buildings are used in training the neural networks. The application of the neural network model is demonstrated by studying one- to six-story building frames subjected to seismic base excitation. Furthermore, the learning ability of the network is examined for the case of multiple inputs, where lateral forces at floor levels are included simultaneously with the base excitation. The effects of the network parameters on learning and on the accuracy of predictions are discussed. Based on this study, it is found that appropriately configured neural network models can successfully learn and simulate the linear elastic dynamic behavior of multi-story buildings.

  13. Mutual information in a dilute, asymmetric neural network model

    NASA Astrophysics Data System (ADS)

    Greenfield, Elliot

    We study the computational properties of a neural network consisting of binary neurons with dilute asymmetric synaptic connections. This simple model allows us to simulate large networks which can reflect more of the architecture and dynamics of real neural networks. Our main goal is to determine the dynamical behavior that maximizes the network's ability to perform computations. To this end, we apply information theory, measuring the average mutual information between pairs of pre- and post-synaptic neurons. Communication of information between neurons is an essential requirement for collective computation. Previous workers have demonstrated that neural networks with asymmetric connections undergo a transition from ordered to chaotic behavior as certain network parameters, such as the connectivity, are changed. We find that the average mutual information has a peak near the order-chaos transition, implying that the network can most efficiently communicate information between cells in this region. The mutual information peak becomes increasingly pronounced when the basic model is extended to incorporate more biologically realistic features, such as a variable threshold and nonlinear summation of inputs. We find that the peak in mutual information near the phase transition is a robust feature of the system for a wide range of assumptions about post-synaptic integration.
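
    The quantity being measured can be sketched directly; the estimator below computes the mutual information between two binary time series from their joint histogram, with a noisy-copy pair standing in for simulated pre- and post-synaptic neurons.

      # Mutual information (in bits) between two binary state sequences.
      import numpy as np

      def mutual_information(a, b):
          p = np.zeros((2, 2))
          for i in (0, 1):
              for j in (0, 1):
                  p[i, j] = np.mean((a == i) & (b == j))          # joint distribution
          pa, pb = p.sum(axis=1), p.sum(axis=0)
          mi = 0.0
          for i in (0, 1):
              for j in (0, 1):
                  if p[i, j] > 0:
                      mi += p[i, j] * np.log2(p[i, j] / (pa[i] * pb[j]))
          return mi

      rng = np.random.default_rng(9)
      pre = rng.integers(0, 2, 10000)
      post = np.where(rng.random(10000) < 0.8, pre, 1 - pre)      # 80%-reliable copy of the input
      print(f"MI = {mutual_information(pre, post):.3f} bits")     # about 0.28 bits for this channel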

  14. Porosity Log Prediction Using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Dwi Saputro, Oki; Lazuardi Maulana, Zulfikar; Dzar Eljabbar Latief, Fourier

    2016-08-01

    Well logging is important in oil and gas exploration. Many physical parameters of a reservoir are derived from well logging measurements. Geophysicists often use well logging to obtain reservoir properties such as porosity, water saturation, and permeability. Most of the time, measuring these reservoir properties is considered expensive. One method to substitute for the measurement is to conduct a prediction using an artificial neural network. In this paper, an artificial neural network is used to predict a porosity log from other log data. Three wells from the 'yy' field are used to conduct the prediction experiment. The log data are sonic, gamma ray, and porosity logs. One of the three wells is used as training data for the artificial neural network, which employs the Levenberg-Marquardt backpropagation algorithm. Through several trials, we found that the optimal training inputs are the sonic log data and gamma ray log data, with 10 hidden neurons. The prediction result in well 1 has a correlation of 0.92 and a mean squared error of 5.67 × 10^-4. The trained network was then applied to the other wells' data. The results show that the correlation in well 2 and well 3 is 0.872 and 0.9077, respectively, and the mean squared error in well 2 and well 3 is 11 × 10^-4 and 9.539 × 10^-4, respectively. From these results we can conclude that the sonic log and gamma ray log can be a good combination for predicting porosity with a neural network.
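
    A rough sketch of the workflow follows; scikit-learn has no Levenberg-Marquardt trainer, so its default MLP stands in, and the synthetic sonic, gamma ray, and porosity logs are invented rather than taken from the 'yy' field.

      # Predict a porosity log from sonic and gamma-ray logs, holding out a "blind" interval.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(10)
      depth = np.arange(2000)
      sonic = 80 + 20 * np.sin(depth / 150) + rng.normal(0, 2, depth.size)    # us/ft
      gamma = 60 + 30 * np.cos(depth / 90) + rng.normal(0, 5, depth.size)     # API
      phi = 0.003 * sonic - 0.001 * gamma + rng.normal(0, 0.01, depth.size)   # fake porosity (frac)

      X = np.column_stack([sonic / 100, gamma / 100])        # crude scaling of the inputs
      split = 1500                                           # "training well" vs "blind well"
      model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
      model.fit(X[:split], phi[:split])
      pred = model.predict(X[split:])
      r = np.corrcoef(pred, phi[split:])[0, 1]
      mse = np.mean((pred - phi[split:]) ** 2)
      print(f"blind-interval correlation {r:.3f}, MSE {mse:.2e}")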

  15. Payload Invariant Control via Neural Networks: Development and Experimental Evaluation

    DTIC Science & Technology

    1989-12-01

    control is proposed and experimentally evaluated. An Adaptive Model-Based Neural Network Controller (AMBNNC) uses multilayer perceptron artificial neural ... networks to estimate the payload during high speed manipulator motion. The payload estimate adapts the feedforward compensator to unmodeled system

  16. The labeled systems of multiple neural networks.

    PubMed

    Nemissi, M; Seridi, H; Akdag, H

    2008-08-01

    This paper proposes an implementation scheme for the K-class classification problem using systems of multiple neural networks. Usually, a multi-class problem is decomposed into simple sub-problems solved independently using similar single neural networks. Because these sub-problems are not equivalent in complexity, we propose a system that includes reinforced networks dedicated to solving the complicated parts of the overall problem. Our approach is inspired by principles of multi-classifier systems and labeled classification, which aim to improve the performance of networks trained by the back-propagation algorithm. We propose two implementation schemes, based on OAO (one-against-one) and OAA (one-against-all) decompositions. The proposed models are evaluated using the iris and human thigh databases.
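
    The two decompositions can be sketched on a toy 3-class problem as below, using ordinary scikit-learn MLPs as the component networks (the reinforced networks and the iris/thigh data of the paper are not reproduced).

      # OAA: one network per class vs. the rest; OAO: one network per class pair, majority vote.
      import numpy as np
      from itertools import combinations
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(11)
      K = 3
      X = np.vstack([rng.normal(c, 0.5, (100, 2)) for c in ((0, 0), (3, 0), (0, 3))])
      y = np.repeat(np.arange(K), 100)

      ova = [MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)
             .fit(X, (y == k).astype(int)) for k in range(K)]
      scores = np.column_stack([m.predict_proba(X)[:, 1] for m in ova])
      print("OAA accuracy:", np.mean(scores.argmax(axis=1) == y))

      votes = np.zeros((len(y), K))
      for a, b in combinations(range(K), 2):
          idx = np.isin(y, (a, b))
          m = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0).fit(X[idx], y[idx])
          pred = m.predict(X)
          for cls in (a, b):
              votes[pred == cls, cls] += 1
      print("OAO accuracy:", np.mean(votes.argmax(axis=1) == y))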

  17. Development of programmable artificial neural networks

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J.

    1993-01-01

    Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time consuming and unpredictable process. A general method is being developed to mate the adaptability of the ANN with the speed and precision of the digital computer. This method was successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.

  18. A neural network based speech recognition system

    NASA Astrophysics Data System (ADS)

    Carroll, Edward J.; Coleman, Norman P., Jr.; Reddy, G. N.

    1990-02-01

    An overview is presented of the development of a neural network based speech recognition system. The two primary tasks involved were the development of a time-invariant speech encoder and a pattern recognizer or detector. The speech encoder uses amplitude normalization and a Fast Fourier Transform to eliminate amplitude and frequency shifts of acoustic cues. The detector consists of a back-propagation network which accepts data from the encoder and identifies individual words. This use of neural networks offers two advantages over conventional algorithmic detectors: the detection time is no more than a few network time constants, and the recognition speed is independent of the number of words in the vocabulary. The completed system has functioned as expected, with high tolerance to input variation and with error rates comparable to a commercial system when used in a noisy environment.
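
    The encoder stage can be sketched as follows; the frame length, bin count, and toy two-tone "utterance" are illustrative assumptions, not parameters from the system described above.

      # Time-invariant encoding: amplitude normalisation plus an FFT magnitude spectrum,
      # so overall level and time (phase) shifts drop out before the back-propagation detector.
      import numpy as np

      def encode(frame, n_bins=64):
          frame = frame - frame.mean()
          frame = frame / (np.max(np.abs(frame)) + 1e-12)       # amplitude normalisation
          spectrum = np.abs(np.fft.rfft(frame))                 # magnitude discards phase
          return spectrum[:n_bins] / (spectrum[:n_bins].max() + 1e-12)

      fs = 8000
      t = np.arange(0, 0.25, 1 / fs)
      word = 0.3 * np.sin(2 * np.pi * 440 * t) + 0.1 * np.sin(2 * np.pi * 880 * t)   # toy "utterance"
      features = encode(word)
      print("feature vector length:", features.shape[0])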

  19. A neural network with modular hierarchical learning

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F. (Inventor); Toomarian, Nikzad (Inventor)

    1994-01-01

    This invention provides a new hierarchical approach to supervised neural learning of time-dependent trajectories. The modular hierarchical methodology leads to architectures which are more structured than fully interconnected networks. The networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamic effects. The advantages include the sparsity of units and connections and the modular organization. A further advantage is that learning is much more circumscribed than in fully interconnected systems. The present invention is embodied by a neural network including a plurality of neural modules, each having a pre-established performance capability, wherein each neural module has an output that provides the present results of the performance capability and an input for changing those present results. For pattern recognition applications, the performance capability may be an oscillation capability producing a repeating wave pattern as the present results. In the preferred embodiment, each of the plurality of neural modules includes a pre-established capability portion and a performance adjustment portion connected to control the pre-established capability portion.

  20. Neural Network Noise Anomaly Recognition System and Method

    DTIC Science & Technology

    2000-10-04

    determine when an input waveform deviates from learned noise characteristics. A plurality of neural networks is preferably provided, which each receives a...plurality of samples of intervals or windows of the input waveform. Each of the neural networks produces an output based on whether an anomaly is...detected with respect to the noise, which the neural network is trained to detect. The plurality of outputs of the neural networks is preferably applied to

  1. Analysis of Wideband Beamformers Designed with Artificial Neural Networks

    DTIC Science & Technology

    1990-12-01

    Technical Report 0-90-1, Analysis of Wideband Beamformers Designed with Artificial Neural Networks, by Cary Cox, Instrumentation Services Division. A brief tutorial on beamformers and neural networks is also provided. Subject terms: artificial neural networks, feedforward networks, beamformers.

  2. Nonlinear system identification based on internal recurrent neural networks.

    PubMed

    Puscasu, Gheorghe; Codres, Bogdan; Stancu, Alexandru; Murariu, Gabriel

    2009-04-01

    A novel approach for nonlinear complex system identification based on internal recurrent neural networks (IRNN) is proposed in this paper. The computational complexity of neural identification can be greatly reduced if the whole system is decomposed into several subsystems. This approach employs internal state estimation when no measurements from the sensors are available for the system states. A modified backpropagation algorithm is introduced in order to train the IRNN for nonlinear system identification. The performance of the proposed design approach is demonstrated on a car simulator case study.

  3. Knowledge learning on fuzzy expert neural networks

    NASA Astrophysics Data System (ADS)

    Fu, Hsin-Chia; Shann, J.-J.; Pao, Hsiao-Tien

    1994-03-01

    The proposed fuzzy expert network is an event-driven, acyclic neural network designed for knowledge learning on a fuzzy expert system. Initially, the network is constructed according to a set of primitive (rough) expert rules, including the input and output linguistic variables and values of the system. Each inference rule corresponds to an inference network containing five types of nodes: Input, Membership-Function, AND, OR, and Defuzzification Nodes. We propose a two-phase learning procedure for the inference network. The first phase is the competitive backpropagation (CBP) training phase, and the second phase is the rule-pruning phase. The CBP learning algorithm in the training phase enables the network to learn the fuzzy rules as precisely as backpropagation-type learning algorithms and yet as quickly as competitive-type learning algorithms. After the CBP training, the rule-pruning process is performed to delete redundant weight connections, yielding a simpler network structure with comparable retrieval performance.

  4. Simplified Learning Scheme For Analog Neural Network

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P.

    1991-01-01

    Synaptic connections adjusted one at a time in small increments. Simplified gradient-descent learning scheme for electronic neural-network processor less efficient than better-known back-propagation scheme, but offers two advantages: easily implemented in circuitry because data-access circuitry separated from learning circuitry; and independence of data-access circuitry makes it possible to implement feedforward as well as feedback networks, including those of multiple-attractor type. Important in such applications as recognition of patterns.
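
    The scheme lends itself to a very small software sketch: adjust one weight at a time by a small increment and keep the change only if the training error drops. The toy network, task, and step size below are assumptions for illustration, not the processor's actual circuitry or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny feedforward net: 2 inputs -> 3 hidden -> 1 output, XOR-style toy task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 3))
W2 = rng.normal(scale=0.5, size=(3, 1))

def forward(W1, W2):
    h = np.tanh(X @ W1)
    return (1.0 / (1.0 + np.exp(-(h @ W2)))).ravel()

def error(W1, W2):
    return np.mean((forward(W1, W2) - y) ** 2)

delta = 0.05  # size of the single-weight increment
for epoch in range(2000):
    for W in (W1, W2):
        for idx in np.ndindex(W.shape):
            base = error(W1, W2)
            for step in (+delta, -delta):   # try a small increment each way
                W[idx] += step
                if error(W1, W2) < base:
                    break                    # keep the change that helps
                W[idx] -= step               # otherwise undo it

print(round(error(W1, W2), 3))   # training error after the incremental adjustments
```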

  5. Are artificial neural networks black boxes?

    PubMed

    Benitez, J M; Castro, J L; Requena, I

    1997-01-01

    Artificial neural networks are efficient computing models which have shown their strengths in solving hard problems in artificial intelligence. They have also been shown to be universal approximators. Notwithstanding, one of the major criticisms is their being black boxes, since no satisfactory explanation of their behavior has been offered. In this paper, we provide such an interpretation of neural networks so that they will no longer be seen as black boxes. This is stated after establishing the equality between a certain class of neural nets and fuzzy rule-based systems. This interpretation is built with fuzzy rules using a new fuzzy logic operator which is defined after introducing the concept of f-duality. In addition, this interpretation offers an automated knowledge acquisition procedure.

  6. Perspective: network-guided pattern formation of neural dynamics.

    PubMed

    Hütt, Marc-Thorsten; Kaiser, Marcus; Hilgetag, Claus C

    2014-10-05

    The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings and lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatio-temporal pattern formation and propose a novel perspective for analysing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics.

  7. Neural Network Classification of Environmental Samples

    DTIC Science & Technology

    1996-12-01

    Biological and Artificial Neural Networks. Air Force Institute of Technology, 1990. 24. Rosenblatt. Principles of Neurodynamics. New York, NY: Spartan...Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press, 1986. 29. Smagt, Patrick P. Van Der. "Minimisation Methods

  8. Psychometric Measurement Models and Artificial Neural Networks

    ERIC Educational Resources Information Center

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. Despite this interest, the…

  9. Localizing Tortoise Nests by Neural Networks.

    PubMed

    Barbuti, Roberto; Chessa, Stefano; Micheli, Alessio; Pucci, Rita

    2016-01-01

    The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging, and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.
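
    As a small illustration of the input-delay idea, the sketch below slices a three-axis accelerometer stream into overlapping windows and flattens each window into a tapped-delay input vector for a feedforward classifier; the window length, step, and normalization are assumptions, not the parameters used by the authors.

```python
import numpy as np

def delay_line_inputs(acc, window=32, step=16):
    """Slice a (T, 3) accelerometer stream into overlapping windows and
    flatten each one into a single input-delay vector, as an IDNN expects."""
    acc = np.asarray(acc, dtype=float)
    segments = []
    for start in range(0, len(acc) - window + 1, step):
        seg = acc[start:start + window]                       # window of delayed samples
        seg = (seg - seg.mean(axis=0)) / (seg.std(axis=0) + 1e-8)
        segments.append(seg.ravel())                          # tapped delay line -> one vector
    return np.stack(segments)

# Example: 10 s of synthetic 3-axis data sampled at 20 Hz.
rng = np.random.default_rng(1)
stream = rng.normal(size=(200, 3))
X = delay_line_inputs(stream)
print(X.shape)   # (11, 96): each row would feed the small classification network
```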

  10. Neural network application to comprehensive engine diagnostics

    NASA Technical Reports Server (NTRS)

    Marko, Kenneth A.

    1994-01-01

    We have previously reported on the use of neural networks for detection and identification of faults in complex microprocessor controlled powertrain systems. The data analyzed in those studies consisted of the full spectrum of signals passing between the engine and the real-time microprocessor controller. The specific task of the classification system was to classify system operation as nominal or abnormal and to identify the fault present. The primary concern in earlier work was the identification of faults in sensors or actuators in the powertrain system as it was exercised over its full operating range. The use of data from a variety of sources, each contributing some potentially useful information to the classification task, is commonly referred to as sensor fusion and typifies the type of problems successfully addressed using neural networks. In this work we explore the application of neural networks to a different diagnostic problem: the diagnosis of faults in newly manufactured engines and the utility of neural networks for process control.

  11. Multidimensional neural growing networks and computer intelligence

    SciTech Connect

    Yashchenko, V.A.

    1995-03-01

    This paper examines information-computation processes in time and in space and some aspects of computer intelligence using multidimensional matrix neural growing networks. In particular, issues of object-oriented "thinking" of computers are considered.

  12. Nonlinear Time Series Analysis via Neural Networks

    NASA Astrophysics Data System (ADS)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with a time series analysis based on neural networks in order to make an effective forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)] pattern recognition. Our goal is to find and recognize important patterns which repeatedly appear in the market history to adapt our trading system behaviour based on them.

  13. Automatic target identification using neural networks

    NASA Astrophysics Data System (ADS)

    Abdallah, Mahmoud A.; Samu, Tayib I.; Grissom, William A.

    1995-10-01

    Neural network theories are applied to attain human-like performance in areas such as speech recognition, statistical mapping, and target recognition or identification. In target identification, one of the difficult tasks has been the extraction of features to be used to train the neural network which is subsequently used for the target's identification. The purpose of this paper is to describe the development of an automatic target identification system using features extracted from a specific class of targets. The extracted features were the graphical representations of the silhouettes of the targets. Image processing techniques and some Fast Fourier Transform (FFT) properties were implemented to extract the features. The FFT eliminates variations in the extracted features due to rotation or scaling. A neural network was trained with the extracted features using the Learning Vector Quantization paradigm. An identification system was set up to test the algorithm. The image processing software was interfaced with the MATLAB Neural Network Toolbox via a computer program written in the C language to automate the target identification process. The system performed well, as it classified the objects used to train it irrespective of rotation, scaling, and translation. This automatic target identification system had a classification success rate of about 95%.
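
    For readers unfamiliar with the paradigm, the following is a minimal LVQ1 sketch on toy feature vectors (the prototype counts, learning rate, and data are invented for illustration): the winning prototype is pulled toward samples of its own class and pushed away from samples of other classes.

```python
import numpy as np

def train_lvq1(X, labels, n_protos_per_class=2, lr=0.1, epochs=50, seed=0):
    """Basic LVQ1: move the winning prototype toward same-class samples
    and away from different-class samples."""
    rng = np.random.default_rng(seed)
    protos, proto_labels = [], []
    for c in np.unique(labels):
        idx = rng.choice(np.flatnonzero(labels == c), n_protos_per_class, replace=False)
        protos.append(X[idx].copy())
        proto_labels += [c] * n_protos_per_class
    protos = np.vstack(protos)
    proto_labels = np.array(proto_labels)

    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            w = np.argmin(d)                                   # winning prototype
            sign = 1.0 if proto_labels[w] == labels[i] else -1.0
            protos[w] += sign * lr * (X[i] - protos[w])
    return protos, proto_labels

def predict(protos, proto_labels, X):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]

# Toy "extracted feature vectors" for two target classes.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
P, PL = train_lvq1(X, y)
print((predict(P, PL, X) == y).mean())   # training accuracy, close to 1.0 on this toy data
```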

  14. Optoelectronic Integrated Circuits For Neural Networks

    NASA Technical Reports Server (NTRS)

    Psaltis, D.; Katz, J.; Kim, Jae-Hoon; Lin, S. H.; Nouhi, A.

    1990-01-01

    Many threshold devices placed on single substrate. Integrated circuits containing optoelectronic threshold elements developed for use as planar arrays of artificial neurons in research on neural-network computers. Mounted with volume holograms recorded in photorefractive crystals serving as dense arrays of variable interconnections between neurons.

  15. Neural networks in the former Soviet Union

    SciTech Connect

    Wunsch, D.C. II.

    1993-01-01

    A brief overview is given of neural networks activities in the former Soviet Union that have potential aerospace applications. Activities at institutes in Moscow, the former Leningrad, Kiev, Taganrog, Rostov-on-Don, and Krasnoyarsk are addressed, including the most important scientists involved. 21 refs.

  16. Localizing Tortoise Nests by Neural Networks

    PubMed Central

    2016-01-01

    The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging, and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition. PMID:26985660

  17. Neural Network Design on the SRC-6 Reconfigurable Computer

    DTIC Science & Technology

    2006-12-01

    speeds of FPGA systems. This thesis explores the use of a Feed-forward, Multi-Layer Perceptron (MLP) Artificial Neural Network (ANN) architecture... Implementation of a Fast Artificial Neural Network Library (FANN), Graduate Project Report, Department of Computer Science, University of Copenhagen (DIKU...NAVAL POSTGRADUATE SCHOOL MONTEREY, CALIFORNIA THESIS Approved for public release; distribution is unlimited NEURAL NETWORK

  18. Hyperspectral Imagery Classification Using a Backpropagation Neural Network

    DTIC Science & Technology

    1993-12-01

    A backpropagation neural network was developed and implemented for classifying AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) hyperspectral...imagery. It is a fully interconnected linkage of three layers of neural network. Fifty input layer neurons take in signals from Bands 41 to 90 of the...moderate AVIRIS pixel resolution of 20 meters by 20 meters. Backpropagation neural network, hyperspectral imagery

  19. Electrically Modifiable Nonvolatile SONOS Synapses for Electronic Neural Networks.

    DTIC Science & Technology

    1992-09-30

    for the electrically reprogrammable analog conductance in an artificial neural network. We have demonstrated the attractive features of this synaptic ...Electrically Modifiable Synaptic Element for VLSI Neural Network Implementation", Proceedings of the 1991 IEEE Nonvolatile Semiconductor Memory Workshop...Nonvolatile Electrically Modifiable Synaptic Element for VLSI Neural Network Implementation", 11th IEEE Nonvolatile Semiconductor Memory Workshop, 1991. 19. A

  20. [Application of artificial neural networks in infectious diseases].

    PubMed

    Xu, Jun-fang; Zhou, Xiao-nong

    2011-02-28

    With the development of information technology, artificial neural networks have been applied to many research fields. Due to special features such as nonlinearity, self-adaptation, and parallel processing, artificial neural networks are applied in medicine and biology. This review summarizes the application of artificial neural networks to the related factors, prediction, and diagnosis of infectious diseases in recent years.

  1. Brain tumor grading based on Neural Networks and Convolutional Neural Networks.

    PubMed

    Yuehao Pan; Weimin Huang; Zhiping Lin; Wanzheng Zhu; Jiayin Zhou; Wong, Jocelyn; Zhongxiang Ding

    2015-08-01

    This paper studies brain tumor grading using multiphase MRI images and compares the results of various configurations of deep learning structures with baseline Neural Networks. The MRI images are fed directly into the learning machine, with some combination operations between multiphase MRIs. Compared with other studies, which involve additional effort to design and choose feature sets, the approach used in this paper leverages the learning capability of the deep learning machine. We present the grading performance on the testing data measured by sensitivity and specificity. The results show a maximum improvement of 18% in grading performance for Convolutional Neural Networks, based on sensitivity and specificity, compared to Neural Networks. We also visualize the kernels trained in different layers and display some self-learned features obtained from the Convolutional Neural Networks.
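
    The paper does not spell out its architecture here, so the following PyTorch sketch is only a hypothetical configuration in the same spirit: the multiphase MRI slices are stacked as input channels of a small CNN with a two-class (low-grade vs. high-grade) output.

```python
import torch
import torch.nn as nn

class TumorGradeCNN(nn.Module):
    """Minimal two-class CNN; the input channels stand in for
    MRI phases stacked per slice (a hypothetical configuration)."""
    def __init__(self, n_phases=3, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_phases, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One training step on a random batch (stand-in for multiphase MRI slices).
model = TumorGradeCNN()
x = torch.randn(4, 3, 64, 64)          # batch of 4 slices, 3 phases each
y = torch.tensor([0, 1, 1, 0])         # low grade vs. high grade labels
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
print(float(loss))
```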

  2. Fault diagnosis via neural networks: The Boltzmann machine

    SciTech Connect

    Marseguerra, M.; Zio, E. . Dept. of Nuclear Engineering)

    1994-07-01

    The Boltzmann machine is a general-purpose artificial neural network that can be used as an associative memory as well as a mapping tool. The usual information entropy is introduced, and a network energy function is suitably defined. The network's training procedure is based on simulated annealing, during which a combination of energy minimization and entropy maximization is achieved. An application in the nuclear reactor field is presented in which the Boltzmann input-output machine is used to detect and diagnose a pipe break in a simulated auxiliary feedwater system feeding two coupled steam generators. The break may occur on either the hot or the cold leg of either of the two steam generators. The binary input data to the network encode only the trends of the thermohydraulic signals, so the network is actually a polarity device. The results indicate that the trained neural network is capable of performing its task. The method appears to be robust enough that it may also be applied with success in the presence of substantial amounts of noise that cause the network to be fed with wrong signals.
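
    The reactor application itself cannot be reproduced here, but the core mechanism (stochastic binary units updated under a falling temperature so the network settles into a stored low-energy state) can be sketched on a toy associative memory; the pattern set, cooling schedule, and sizes below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Store two binary (+/-1) patterns in a symmetric weight matrix (Hebb rule).
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]], dtype=float)
n = patterns.shape[1]
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

def anneal(state, T0=2.0, T_min=0.05, cooling=0.95, sweeps_per_T=20):
    """Stochastic unit updates (Glauber dynamics) while lowering the
    temperature, so the network settles into a low-energy stored pattern."""
    s = state.copy()
    T = T0
    while T > T_min:
        for _ in range(sweeps_per_T):
            i = rng.integers(n)
            field = W[i] @ s
            p_on = 1.0 / (1.0 + np.exp(-2.0 * field / T))
            s[i] = 1.0 if rng.random() < p_on else -1.0
        T *= cooling
    return s

# Start from a corrupted version of the first pattern.
noisy = patterns[0].copy()
noisy[:2] *= -1
print(anneal(noisy))        # usually recovers the first stored pattern
```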

  3. Proceedings of the Neural Network Workshop for the Hanford Community

    SciTech Connect

    Keller, P.E.

    1994-01-01

    These proceedings were generated from a series of presentations made at the Neural Network Workshop for the Hanford Community. The abstracts and viewgraphs of each presentation are reproduced in these proceedings. This workshop was sponsored by the Computing and Information Sciences Department in the Molecular Science Research Center (MSRC) at the Pacific Northwest Laboratory (PNL). Artificial neural networks constitute a new information processing technology that is destined, within the next few years, to provide the world with a vast array of new products. A major reason for this is that artificial neural networks are able to provide solutions to a wide variety of complex problems in a much simpler fashion than is possible using existing techniques. In recognition of these capabilities, many scientists and engineers are exploring the potential application of this new technology to their fields of study. An artificial neural network (ANN) can be a software simulation, an electronic circuit, an optical system, or even an electro-chemical system designed to emulate some of the brain's rudimentary structure as well as some of the learning processes that are believed to take place in the brain. For a very wide range of applications in science, engineering, and information technology, ANNs offer a complementary and potentially superior approach to that provided by conventional computing and conventional artificial intelligence. This is because, unlike conventional computers, which have to be programmed, ANNs essentially learn from experience and can be trained in a straightforward fashion to carry out tasks ranging from the simple to the highly complex.

  4. Predicting physical time series using dynamic ridge polynomial neural networks.

    PubMed

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called Dynamic Ridge Polynomial Neural Network that combines the properties of higher order and recurrent neural networks for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal to noise ratio in comparison to a number of higher order and feedforward neural networks used as benchmarks.

  5. Predicting Physical Time Series Using Dynamic Ridge Polynomial Neural Networks

    PubMed Central

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called Dynamic Ridge Polynomial Neural Network that combines the properties of higher order and recurrent neural networks for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal to noise ratio in comparison to a number of higher order and feedforward neural networks used as benchmarks. PMID:25157950
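
    To make the architecture concrete, here is a deliberately simplified, untrained forward pass in the same spirit: a single dynamic ridge-polynomial unit whose output is a squashed product of a few linear ("ridge") combinations of the tapped-delay input plus its own previous output fed back. Sizes, names, and the toy series are assumptions, not the authors' configuration.

```python
import numpy as np

def drpnn_forward(series, W, b, v_prev=0.0):
    """One-step-ahead predictions from a toy dynamic ridge polynomial unit.

    W has shape (k, d + 1): k ridge (sigma) units over the d-dimensional
    tapped-delay input plus the fed-back previous output; b has shape (k,).
    The unit's output is sigmoid(product of the ridge terms)."""
    preds = []
    for x in series:                     # x: tapped-delay input of length d
        z = np.append(x, v_prev)         # recurrent feedback of the last output
        ridge_terms = W @ z + b          # k linear (ridge) combinations
        v_prev = 1.0 / (1.0 + np.exp(-np.prod(ridge_terms)))
        preds.append(v_prev)
    return np.array(preds)

# Toy example: one-step prediction inputs built from 3 delayed samples of a noisy sine.
rng = np.random.default_rng(4)
t = np.linspace(0, 8 * np.pi, 400)
s = np.sin(t) + 0.05 * rng.normal(size=t.size)
inputs = np.stack([s[i:i + 3] for i in range(len(s) - 3)])
W = rng.normal(scale=0.3, size=(2, 4))   # 2 ridge units over 3 delays + feedback
b = rng.normal(scale=0.1, size=2)
print(drpnn_forward(inputs, W, b)[:5])   # untrained outputs, shown for shape only
```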

  6. Recurrent neural network for non-smooth convex optimization problems with application to the identification of genetic regulatory networks.

    PubMed

    Cheng, Long; Hou, Zeng-Guang; Lin, Yingzi; Tan, Min; Zhang, Wenjun Chris; Wu, Fang-Xiang

    2011-05-01

    A recurrent neural network is proposed for solving the non-smooth convex optimization problem with the convex inequality and linear equality constraints. Since the objective function and inequality constraints may not be smooth, the Clarke's generalized gradients of the objective function and inequality constraints are employed to describe the dynamics of the proposed neural network. It is proved that the equilibrium point set of the proposed neural network is equivalent to the optimal solution of the original optimization problem by using the Lagrangian saddle-point theorem. Under weak conditions, the proposed neural network is proved to be stable, and the state of the neural network is convergent to one of its equilibrium points. Compared with the existing neural network models for non-smooth optimization problems, the proposed neural network can deal with a larger class of constraints and is not based on the penalty method. Finally, the proposed neural network is used to solve the identification problem of genetic regulatory networks, which can be transformed into a non-smooth convex optimization problem. The simulation results show the satisfactory identification accuracy, which demonstrates the effectiveness and efficiency of the proposed approach.
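
    The network in the paper is a continuous-time system built from Clarke generalized gradients and a Lagrangian argument; as a much simpler discrete-time illustration of the same non-smooth idea, the sketch below runs a projected subgradient iteration on a tiny L1 problem with one linear equality constraint. The problem, step sizes, and projection are assumptions made for the example.

```python
import numpy as np

def project_to_hyperplane(x):
    """Project onto the affine constraint sum(x) = 1."""
    return x - (x.sum() - 1.0) / x.size

def solve_l1(x0, steps=500):
    """Projected subgradient iteration for  min ||x||_1  s.t.  sum(x) = 1.
    sign(x) is a (Clarke) subgradient of the non-smooth objective."""
    x = project_to_hyperplane(np.asarray(x0, dtype=float))
    for k in range(1, steps + 1):
        step = 0.5 / k                      # diminishing step size
        x = project_to_hyperplane(x - step * np.sign(x))
    return x

x = solve_l1([2.0, -1.0, 0.5])
print(x, np.abs(x).sum())   # feasible point with L1 norm close to the optimum of 1
```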

  7. Simulating neural systems with Xyce.

    SciTech Connect

    Schiek, Richard Louis; Thornquist, Heidi K.; Mei, Ting; Warrender, Christina E.; Aimone, James Bradley; Teeter, Corinne; Duda, Alex M.

    2012-12-01

    Sandia's parallel circuit simulator, Xyce, can address large scale neuron simulations in a new way, extending the range within which one can perform high-fidelity, multi-compartment neuron simulations. This report documents the implementation of neuron devices in Xyce and their use in the simulation and analysis of neuron systems.

  8. Multiview fusion for activity recognition using deep neural networks

    NASA Astrophysics Data System (ADS)

    Kavi, Rahul; Kulathumani, Vinod; Rohit, Fnu; Kecojevic, Vlad

    2016-07-01

    Convolutional neural networks (ConvNets) coupled with long short term memory (LSTM) networks have been recently shown to be effective for video classification as they combine the automatic feature extraction capabilities of a neural network with additional memory in the temporal domain. This paper shows how multiview fusion can be applied to such a ConvNet LSTM architecture. Two different fusion techniques are presented. The system is first evaluated in the context of a driver activity recognition system using data collected in a multicamera driving simulator. These results show significant improvement in accuracy with multiview fusion and also show that deep learning performs better than a traditional approach using spatiotemporal features even without requiring any background subtraction. The system is also validated on another publicly available multiview action recognition dataset that has 12 action classes and 8 camera views.

  9. A stochastic learning algorithm for layered neural networks

    SciTech Connect

    Bartlett, E.B.; Uhrig, R.E.

    1992-12-31

    The random optimization method typically uses a Gaussian probability density function (PDF) to generate a random search vector. In this paper the random search technique is applied to the neural network training problem and is modified to dynamically seek out the optimal probability density function (OPDF) from which to select the search vector. The dynamic OPDF search process, combined with an auto-adaptive stratified sampling technique and a dynamic node architecture (DNA) learning scheme, completes the modifications of the basic method. The DNA technique determines the appropriate number of hidden nodes needed for a given training problem. By using DNA, researchers do not have to set the neural network architectures before training is initiated. The approach is applied to networks of generalized, fully interconnected, continuous perceptrons. Computer simulation results are given.
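
    Stripped of the OPDF adaptation, stratified sampling, and dynamic node architecture, the underlying random-search idea can be sketched as follows: perturb the whole weight vector with a Gaussian search vector, keep the perturbation only when the training error drops, and adapt the spread according to recent success. The network, task, and adaptation constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Small regression task and a fixed 1-hidden-layer network, weights as one vector.
X = np.linspace(-1, 1, 40).reshape(-1, 1)
y = np.sin(3 * X).ravel()
n_hidden = 8
n_w = n_hidden + n_hidden + n_hidden + 1   # W1, b1, W2, b2

def unpack(w):
    i = 0
    W1 = w[i:i + n_hidden].reshape(1, n_hidden); i += n_hidden
    b1 = w[i:i + n_hidden];                      i += n_hidden
    W2 = w[i:i + n_hidden].reshape(n_hidden, 1); i += n_hidden
    b2 = w[i]
    return W1, b1, W2, b2

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    out = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((out.ravel() - y) ** 2)

w = rng.normal(scale=0.5, size=n_w)
sigma = 0.5
best = mse(w)
for it in range(5000):
    trial = w + rng.normal(scale=sigma, size=n_w)   # Gaussian search vector
    e = mse(trial)
    if e < best:
        w, best = trial, e
        sigma *= 1.05          # widen the search PDF after a success
    else:
        sigma *= 0.999         # slowly narrow it after failures
    sigma = float(np.clip(sigma, 1e-3, 1.0))
print(round(best, 4))          # training MSE typically drops well below its initial value
```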

  10. Metaheuristic Algorithms for Convolution Neural Network

    PubMed Central

    Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. These techniques have managed to solve some optimization problems in the research areas of science, engineering, and industry. However, implementation strategies of metaheuristics for accuracy improvement of convolution neural networks (CNN), a famous deep learning method, are still rarely investigated. Deep learning relates to a type of machine learning technique whose aim is to move closer to the artificial-intelligence goal of creating a machine that can successfully perform any intellectual task that a human can carry out. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performance of these metaheuristic methods in optimizing CNN on classifying the MNIST and CIFAR datasets was evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy has also been improved (by up to 7.14 percent). PMID:27375738

  11. HAWC Energy Reconstruction via Neural Network

    NASA Astrophysics Data System (ADS)

    Marinelli, Samuel; HAWC Collaboration

    2016-03-01

    The High-Altitude Water-Cherenkov (HAWC) γ-ray observatory is located at 4100 m above sea level on the Sierra Negra mountain in the state of Puebla, Mexico. Its 300 water-filled tanks are instrumented with PMTs that detect Cherenkov light produced by charged particles in atmospheric air showers induced by TeV γ-rays. The detector became fully operational in March of 2015. With a 2-sr field of view and duty cycle exceeding 90%, HAWC is a survey instrument sensitive to diverse γ-ray sources, including supernova remnants, pulsar wind nebulae, active galactic nuclei, and others. Particle-acceleration mechanisms at these sources can be inferred by studying their energy spectra, particularly at high energies. We have developed a technique for estimating primary γ-ray energies using an artificial neural network (ANN). Input variables to the ANN are selected to characterize shower multiplicity in the detector, the fraction of the shower contained in the detector, and atmospheric attenuation of the shower. Monte Carlo simulations show that the new estimator has superior performance to the current estimator used in HAWC publications. This work was supported by the National Science Foundation.

  12. Event Discrimination using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Menon, Hareesh; Hughes, Richard; Daling, Alec; Winer, Brian

    2017-01-01

    Convolutional Neural Networks (CNNs) are computational models that have been shown to be effective at classifying different types of images. We present a method to use CNNs to distinguish events involving the production of a top quark pair and a Higgs boson from events involving the production of a top quark pair and several quark and gluon jets. To do this, we generate and simulate data using MADGRAPH and DELPHES for a general purpose LHC detector at 13 TeV. We produce images using a particle flow algorithm by binning the particles geometrically based on their position in the detector and weighting the bins by the energy of each particle within each bin, and by defining channels based on particle types (charged track, neutral hadronic, neutral EM, lepton, heavy flavor). Our classification results are competitive with standard machine learning techniques. We have also looked into the classification of the substructure of the events, in a process known as scene labeling. In this context, we look for the presence of boosted objects (such as top quarks) with substructure encompassed within single jets. Preliminary results on substructure classification will be presented.
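
    The geometric binning step described above can be sketched directly with NumPy: particles are histogrammed in (eta, phi), weighted by their energy, with one image channel per particle type. The bin counts, ranges, and channel names are assumptions for the example, not the analysis configuration.

```python
import numpy as np

def event_image(eta, phi, energy, ptype,
                channels=("track", "neutral_had", "neutral_em"),
                bins=32, eta_range=(-2.5, 2.5)):
    """Bin particles into an (n_channels, bins, bins) image: each pixel is the
    summed energy of particles of that type falling in the (eta, phi) cell."""
    img = np.zeros((len(channels), bins, bins))
    for c, name in enumerate(channels):
        mask = (ptype == name)
        hist, _, _ = np.histogram2d(
            eta[mask], phi[mask], bins=bins,
            range=[eta_range, (-np.pi, np.pi)],
            weights=energy[mask])
        img[c] = hist
    return img

# Toy event with 200 random particles.
rng = np.random.default_rng(6)
n = 200
eta = rng.uniform(-2.5, 2.5, n)
phi = rng.uniform(-np.pi, np.pi, n)
energy = rng.exponential(5.0, n)
ptype = rng.choice(["track", "neutral_had", "neutral_em"], n)
print(event_image(eta, phi, energy, ptype).shape)   # (3, 32, 32) CNN input
```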

  13. Storage capacity of attractor neural networks with depressing synapses

    NASA Astrophysics Data System (ADS)

    Torres, Joaquín J.; Pantic, Lovorka; Kappen, Hilbert J.

    2002-12-01

    We compute the capacity of a binary neural network with dynamic depressing synapses to store and retrieve an infinite number of patterns. We use a biologically motivated model of synaptic depression and a standard mean-field approach. We find that at T=0 the critical storage capacity decreases with the degree of the depression. We confirm the validity of our main mean-field results with numerical simulations.

  14. Existence and uniform stability analysis of fractional-order complex-valued neural networks with time delays.

    PubMed

    Rakkiyappan, R; Cao, Jinde; Velmurugan, G

    2015-01-01

    This paper deals with the problem of existence and uniform stability analysis of fractional-order complex-valued neural networks with constant time delays. Complex-valued recurrent neural networks are an extension of real-valued recurrent neural networks that includes complex-valued states, connection weights, or activation functions. This paper establishes sufficient conditions for the existence and uniform stability of such networks. Three numerical simulations are delineated to substantiate the effectiveness of the theoretical results.

  15. Optical implementation of neural networks

    NASA Astrophysics Data System (ADS)

    Yu, Francis T. S.; Guo, Ruyan

    2002-12-01

    An adaptive optical neuro-computing (ONC) system using inexpensive pocket-size liquid crystal televisions (LCTVs) has been developed by the graduate students in the Electro-Optics Laboratory at The Pennsylvania State University. Although this neuro-computer has only 8×8=64 neurons, it can easily be extended to 16×20=320 neurons. The major advantages of this LCTV architecture, as compared with other reported ONCs, are low cost and operational flexibility. To test the performance, several neural net models are used. These models are Interpattern Association, Hetero-association, and unsupervised learning algorithms. The system design considerations and experimental demonstrations are also included.

  16. Intrinsic adaptation in autonomous recurrent neural networks.

    PubMed

    Marković, Dimitrije; Gros, Claudius

    2012-02-01

    A massively recurrent neural network responds on one side to input stimuli and is autonomously active, on the other side, in the absence of sensory inputs. Stimuli and information processing depend crucially on the quality of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flows, which are quite insensitive to external stimuli, interceded by chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.
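
    The exact entropy-maximizing rule is given in the paper; the sketch below uses a cruder homeostatic stand-in for it, nudging each unit's threshold and gain so that its output drifts toward a target activity level, just to show where such nonsynaptic adaptation sits inside the recurrent update loop. All constants and the adaptation rule itself are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))   # fixed random recurrent weights
gain = np.ones(n)
thresh = np.zeros(n)
x = rng.uniform(0, 1, n)

target_mean = 0.3     # desired average activity per unit
target_var = 0.15     # crude proxy for "stay away from saturation"
eta = 0.01            # adaptation rate of the intrinsic parameters

for t in range(3000):
    h = W @ x
    x = 1.0 / (1.0 + np.exp(-gain * (h - thresh)))     # recurrent neural update
    # Nonsynaptic (intrinsic) adaptation: thresholds integrate the deviation
    # from the target mean; gains shrink when units saturate (x*(1-x) small).
    thresh += eta * (x - target_mean)
    gain += eta * (x * (1 - x) - target_var)
    gain = np.clip(gain, 0.2, 5.0)

print(round(float(x.mean()), 2))   # population activity drifts toward the target mean
```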

  17. Neural networks in windprofiler data processing

    NASA Astrophysics Data System (ADS)

    Weber, H.; Richner, H.; Kretzschmar, R.; Ruffieux, D.

    2003-04-01

    Wind profilers are basically Doppler radars yielding 3-dimensional wind profiles that are deduced from the Doppler shift caused by turbulent elements in the atmosphere. These signals can be contaminated by other airborne elements such as birds or hydrometeors. Using a feed-forward neural network with one hidden layer and one output unit, birds and hydrometeors can be successfully identified in non-averaged single spectra; these are subsequently removed in the wind computation. An infrared camera was used to identify birds in one of the beams of the wind profiler. After training the network with about 6000 contaminated data sets, it was able to identify contaminated data in a test data set with a reliability of 96 percent. The assumption was made that the neural network parameters obtained in the beam for which bird data was collected can be transferred to the other beams (at least three beams are needed for computing wind vectors). Comparing the evolution of a wind field with and without the neural network shows a significant improvement in wind data quality. Current work concentrates on training the network for hydrometeors as well. It is hoped that the instrument's capability can thus be expanded to measure not only correct winds, but also to observe bird migration, estimate precipitation and -- by combining precipitation information with vertical velocity measurement -- monitor the height of the melting layer.

  18. Classifying multispectral data by neural networks

    NASA Technical Reports Server (NTRS)

    Telfer, Brian A.; Szu, Harold H.; Kiang, Richard K.

    1993-01-01

    Several energy functions for synthesizing neural networks are tested on 2-D synthetic data and on Landsat-4 Thematic Mapper data. These new energy functions, designed specifically for minimizing misclassification error, in some cases yield significant improvements in classification accuracy over the standard least mean squares energy function. In addition to operating on networks with one output unit per class, a new energy function is tested for binary encoded outputs, which result in smaller network sizes. The Thematic Mapper data (four bands were used) is classified on a single pixel basis, to provide a starting benchmark against which further improvements will be measured. Improvements are underway to make use of both subpixel and superpixel (i.e. contextual or neighborhood) information in the processing. For single pixel classification, the best neural network result is 78.7 percent, compared with 71.7 percent for a classical nearest neighbor classifier. The 78.7 percent result also improves on several earlier neural network results on this data.

  19. Back propagation neural networks for facial verification

    SciTech Connect

    Garnett, A.E.; Solheim, I.; Payne, T.; Castain, R.H.

    1992-10-01

    We conducted a test to determine the aptitude of neural networks to recognize human faces. The pictures we collected of 511 subjects captured both profiles and many natural expressions. Some of the subjects were wearing glasses, sunglasses, or hats in some of the pictures. The images were compressed by a factor of 100 and converted into image vectors of 1400 pixels. The image vectors were fed into a back propagation neural network with one hidden layer and one output node. The networks were trained to recognize one target person and to reject all other persons. Neural networks for 37 target subjects were trained with 8 different training sets that consisted of different subsets of the data. The networks were then tested on the rest of the data, which consisted of 7000 or more unseen pictures. Results indicate that a false acceptance rate of less than 1 percent can be obtained, and a false rejection rate of 2 percent can be obtained when certain restrictions are followed.

  20. a Heterosynaptic Learning Rule for Neural Networks

    NASA Astrophysics Data System (ADS)

    Emmert-Streib, Frank

    In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points when a synapse is modified. Moreover, the learning rule does not only affect the synapse between pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also affects further remote synapses of the pre- and postsynaptic neurons. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful in training neural networks by learning parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well, even in the presence of noise. Importantly, the mean learning time increases only polynomially with the number of patterns to be learned, indicating efficient learning.