Science.gov

Sample records for neural network simulations

  1. Program Helps Simulate Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.
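    The generalized delta rule mentioned above is the standard back-propagation weight update. A minimal sketch follows; the one-hidden-layer shape, sigmoid activation, and learning rate are illustrative assumptions, not NNETS specifics.

```python
import numpy as np

# Sketch of Rumelhart's generalized delta rule for a one-hidden-layer net.
# Deltas are computed at the output, back-propagated through the hidden
# layer, and weights move down the error gradient.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, t, W1, W2, lr=0.5):
    """One forward pass plus one generalized-delta-rule weight update."""
    h = sigmoid(W1 @ x)                         # hidden activations
    y = sigmoid(W2 @ h)                         # output activations
    d_out = (y - t) * y * (1.0 - y)             # output-layer delta
    d_hid = (W2.T @ d_out) * h * (1.0 - h)      # back-propagated delta
    return W1 - lr * np.outer(d_hid, x), W2 - lr * np.outer(d_out, h)
```

    Repeated application of `train_step` on a training pattern drives the output error down, which is the behavior the predefined back-propagation network exercises.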

  2. Electronic Neural-Network Simulator

    NASA Technical Reports Server (NTRS)

    Moopenn, Alex W.; Thakoor, Anilkumar P.; Lambe, John J.

    1988-01-01

    Experimental circuits faster than simulation programs run on digital computers. Serial shift register routes clock pulses C1 to neurons in sequence. Clock pulses C2 interrogate neurons. Neuron interconnection information stored in simulated synapses. Can be expanded to greater complexity.

  3. Neural network computer simulation of medical aerosols.

    PubMed

    Richardson, C J; Barlow, D J

    1996-06-01

    Preliminary investigations have been conducted to assess the potential for using artificial neural networks to simulate aerosol behaviour, with a view to employing this type of methodology in the evaluation and design of pulmonary drug-delivery systems. Details are presented of the general purpose software developed for these tasks; it implements a feed-forward back-propagation algorithm with weight decay and connection pruning, the user having complete run-time control of the network architecture and mode of training. A series of exploratory investigations is then reported in which different network structures and training strategies are assessed in terms of their ability to simulate known patterns of fluid flow in simple model systems. The first of these involves simulations of cellular automata-generated data for fluid flow through a partially obstructed two-dimensional pipe. The artificial neural networks are shown to be highly successful in simulating the behaviour of this simple linear system, but with important provisos relating to the information content of the training data and the criteria used to judge when the network is properly trained. A second set of investigations is then reported in which similar networks are used to simulate patterns of fluid flow through aerosol generation devices, using training data furnished through rigorous computational fluid dynamics modelling. These more complex three-dimensional systems are modelled with equal success. It is concluded that carefully tailored, well trained networks could provide valuable tools not just for predicting but also for analysing the spatial dynamics of pharmaceutical aerosols. PMID:8832491
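    The two regularization features named above can be sketched in isolation. The decay coefficient and pruning threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Weight decay: each gradient step also shrinks weights toward zero.
# Connection pruning: weights that stay near zero are disconnected.

def decay_update(W, grad, lr=0.1, decay=1e-3):
    """Gradient step with an added weight-decay penalty term."""
    return W - lr * (grad + decay * W)

def prune(W, threshold=1e-2):
    """Zero out (i.e. disconnect) weights with small magnitude."""
    W = W.copy()
    W[np.abs(W) < threshold] = 0.0
    return W
```

    Together these bias training toward sparse networks, which bears on the proviso above about the information content of the training data.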

  4. Simulation of large systems with neural networks

    SciTech Connect

    Paez, T.L.

    1994-09-01

    Artificial neural networks (ANNs) have been shown capable of simulating the behavior of complex, nonlinear systems, including structural systems. Under certain circumstances, it is desirable to simulate structures that are analyzed with the finite element method. For example, when we perform a probabilistic analysis with the Monte Carlo method, we usually perform numerous (hundreds or thousands of) repetitions of a response simulation with different input and system parameters to estimate the chance of specific response behaviors. In such applications, efficiency in computation of response is critical, and response simulation with ANNs can be valuable. However, finite element analyses of complex systems involve the use of models with tens or hundreds of thousands of degrees of freedom, and ANNs are practically limited to simulations that involve far fewer variables. This paper develops a technique for reducing the amount of information required to characterize the response of a general structure. We show how the reduced information can be used to train a recurrent ANN. Then the trained ANN can be used to simulate the reduced behavior of the original system, and the reduction transformation can be inverted to provide a simulation of the original system. A numerical example is presented.
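    The reduce-train-invert pipeline described above can be sketched with an SVD-based reduction; the mode count and the SVD choice are illustrative assumptions, and the recurrent ANN trained on the reduced coordinates is omitted here.

```python
import numpy as np

# Project a high-dimensional response history onto a few dominant modes,
# so a small network can be trained in the reduced space; invert the
# projection to recover a full-field simulation.

def reduce_responses(X, r):
    """X: (n_dof, n_steps) response history -> (modes Phi, reduced history Q)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Phi = U[:, :r]          # reduction transformation (dominant modes)
    return Phi, Phi.T @ X   # Q: what the recurrent ANN would learn

def expand(Phi, Q):
    """Invert the reduction to simulate the original system's response."""
    return Phi @ Q
```

    When the response is dominated by a few modes, the round trip through the reduced space loses little information, which is what makes the small-network training tractable.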

  5. Neural network simulations of the nervous system.

    PubMed

    van Leeuwen, J L

    1990-01-01

    Present knowledge of brain mechanisms is mainly based on anatomical and physiological studies. Such studies are however insufficient to understand the information processing of the brain. The present new focus on neural network studies is the most likely candidate to fill this gap. The present paper reviews some of the history and current status of neural network studies. It signals some of the essential problems for which answers have to be found before substantial progress in the field can be made. PMID:2245130

  6. A neural network simulation package in CLIPS

    NASA Technical Reports Server (NTRS)

    Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John

    1990-01-01

    The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). A very important by-product of this research has been the emergence of an integrated technique for using rule-based systems in conjunction with neural networks to solve complex problems. The system provides a tool kit for integrated use of the two techniques and is also extensible to accommodate other AI techniques such as semantic networks, connectionist networks, and even Petri nets. This integrated technique can be very useful in solving complex AI problems.

  7. Neural Network Simulation Package from Ohio State University

    SciTech Connect

    Wickham, K.L.

    1990-08-01

    This report describes the Neural Network Simulation Package acquired from Ohio State University. The package known as Neural Shell V2.1 was evaluated and benchmarked at the INEL Supercomputing Center (ISC). The emphasis was on the Back Propagation Net which is currently considered one of the more promising types of neural networks. This report also provides additional documentation that may be helpful to anyone using the package.

  8. Simulating and Synthesizing Substructures Using Neural Network and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Liu, Youhua; Kapania, Rakesh K.; VanLandingham, Hugh F.

    1997-01-01

    The feasibility of simulating and synthesizing substructures with computational neural network models is illustrated by investigating a statically indeterminate beam, using both 1-D and 2-D plane-stress modelling. The beam can be decomposed into two cantilevers with free-end loads. By training neural networks to simulate the cantilever responses to different loads, the original beam problem can be solved as a match-up between two subsystems under compatible interface conditions. Genetic algorithms are successfully used to solve the match-up problem. Simulated results are found to be in good agreement with the analytical or FEM solutions.

  9. Artificial neural network simulation of battery performance

    SciTech Connect

    O'Gorman, C.C.; Ingersoll, D.; Jungst, R.G.; Paez, T.L.

    1998-12-31

    Although they appear deceptively simple, batteries embody a complex set of interacting physical and chemical processes. While the discrete engineering characteristics of a battery, such as the physical dimensions of the individual components, are relatively straightforward to define explicitly, their myriad chemical and physical processes, including interactions, are much more difficult to represent accurately. Within this category are the diffusive and solubility characteristics of individual species, reaction kinetics and mechanisms of primary chemical species as well as intermediates, and growth and morphology characteristics of reaction products as influenced by environmental and operational use profiles. For this reason, development of analytical models that can consistently predict the performance of a battery has only been partially successful, even though significant resources have been applied to this problem. As an alternative approach, the authors have begun development of a non-phenomenological model for battery systems based on artificial neural networks. Both recurrent and non-recurrent forms of these networks have been successfully used to develop accurate representations of battery behavior. The connectionist normalized linear spline (CNLS) network has been implemented with a self-organizing layer to model a battery system with the generalized radial basis function net. Concurrently, efforts are under way to use the feedforward back-propagation network to map the "state" of a battery system. Because of the complexity of battery systems, accurate representation of the input and output parameters has proven to be very important. This paper describes these initial feasibility studies as well as the current models and makes comparisons between predicted and actual performance.

  10. F77NNS - A FORTRAN-77 NEURAL NETWORK SIMULATOR

    NASA Technical Reports Server (NTRS)

    Mitchell, P. H.

    1994-01-01

    F77NNS (A FORTRAN-77 Neural Network Simulator) simulates the popular back error propagation neural network. F77NNS is an ANSI-77 FORTRAN program designed to take advantage of vectorization when run on machines having this capability, but it will run on any computer with an ANSI-77 FORTRAN compiler. Artificial neural networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to biological nerve cells. Problems which involve pattern matching or system modeling readily fit the class of problems which F77NNS is designed to solve. The program's formulation trains a neural network using Rumelhart's back-propagation algorithm. Typically the nodes of a network are grouped together into clumps called layers. A network will generally have an input layer through which the various environmental stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to features of the problem being solved. Other layers, which form intermediate stops between the input and output layers, are called hidden layers. The back-propagation training algorithm can require massive computational resources to implement a large network such as a network capable of learning text-to-phoneme pronunciation rules as in the famous Sejnowski experiment. The Sejnowski neural network learns to pronounce 1000 common English words. The standard input data defines the specific inputs that control the type of run to be made, and input files define the NN in terms of the layers and nodes, as well as the input/output (I/O) pairs. The program has a restart capability so that a neural network can be solved in stages suitable to the user's resources and desires. F77NNS allows the user to customize the patterns of connections between layers of a network. The size of the neural network to be solved is limited only by the amount of random access memory (RAM) available to the

  11. Simulation of dynamic processes with adaptive neural networks.

    SciTech Connect

    Tzanos, C. P.

    1998-02-03

    Many industrial processes are highly non-linear and complex. Their simulation with first-principle or conventional input-output correlation models is not satisfactory, either because the process physics is not well understood, or because the process is so complex that direct simulation is not adequately accurate or requires excessive computation time, especially for on-line applications. Artificial intelligence techniques (neural networks, expert systems, fuzzy logic) or their combination with simple process-physics models can be used effectively for the simulation of such processes. Feedforward (static) neural networks (FNNs) can be used effectively to model steady-state processes. They have also been used to model dynamic (time-varying) processes by adding input nodes to the network that represent values of input variables at previous time steps. The number of previous time steps is problem dependent and, in general, can be determined only after extensive testing. This work demonstrates that for dynamic processes that do not vary fast with respect to the retraining time of the neural network, an adaptive feedforward neural network can be an effective simulator that is free of the complexities introduced by the use of input values at previous time steps.
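    The tapped-delay scheme that the adaptive approach avoids can be sketched directly; the delay depth `d` below is the problem-dependent parameter the abstract refers to.

```python
import numpy as np

# Build the augmented input rows [u[k], u[k-1], ..., u[k-d]] that a static
# feedforward network would receive when modeling a dynamic process.

def tapped_delay_matrix(u, d):
    """u: 1-D input series -> one row per time step k >= d."""
    return np.array([[u[k - j] for j in range(d + 1)]
                     for k in range(d, len(u))])
```

    Choosing `d` too small hides the dynamics from the network while choosing it too large inflates the input layer, which is exactly the complexity the retraining-based simulator sidesteps.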

  12. Synthesis of recurrent neural networks for dynamical system simulation.

    PubMed

    Trischler, Adam P; D'Eleuterio, Gabriele M T

    2016-08-01

    We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. PMID:27182811
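    The recasting idea above can be sketched in discretized form: fit an approximator to samples of the system's vector field, then feed its output back as input so it runs as a recurrent simulator. A linear least-squares fit stands in here for the trained feedforward network, and the sample system is an assumed harmonic oscillator.

```python
import numpy as np

# Fit f(x) from (state, derivative) samples, then iterate
# x <- x + dt * f(x): feeding the output back is the recurrence.

def fit_vector_field(X, Xdot):
    """Least-squares A such that Xdot ~= A @ X (columns are state samples)."""
    return Xdot @ np.linalg.pinv(X)

def simulate(A, x0, dt, steps):
    """Forward-Euler recurrent simulation of x' = A x."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (A @ x)
    return x
```

    The continuous-time operation highlighted in the abstract corresponds to the limit of small `dt` in this discretized sketch.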

  13. Design of a neural network simulator on a transputer array

    NASA Technical Reports Server (NTRS)

    Mcintire, Gary; Villarreal, James; Baffes, Paul; Rua, Monica

    1987-01-01

    A brief summary of neural networks is presented which concentrates on the design constraints imposed. Major design issues are discussed together with analysis methods and the chosen solutions. Although the system will be capable of running on most transputer architectures, it currently is being implemented on a 40-transputer system connected to a toroidal architecture. Predictions show a performance level equivalent to that of a highly optimized simulator running on the SX-2 supercomputer.

  14. Integrated Circuit For Simulation Of Neural Network

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.; Khanna, Satish K.

    1988-01-01

    Ballast resistors deposited on top of circuit structure. Cascadable, programmable binary connection matrix fabricated in VLSI form as basic building block for assembly of like units into content-addressable electronic memory matrices operating somewhat like networks of neurons. Connections formed during storage of data, and data recalled from memory by prompting matrix with approximate or partly erroneous signals. Redundancy in pattern of connections causes matrix to respond with correct stored data.

  15. Neural Networks

    SciTech Connect

    Smith, Patrick I.

    2003-09-23

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals are fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. Typically, many techniques are used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results or output. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. It is also important to interpret the word artificial in that definition as meaning computer programs that use calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. 
The human brain contains billions of neural cells that are responsible for processing

  16. Estimating uncertainty of streamflow simulation using Bayesian neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Xuesong; Liang, Faming; Srinivasan, Raghavan; van Liew, Michael

    2009-02-01

    Recent studies have shown that Bayesian neural networks (BNNs) are powerful tools for providing reliable hydrologic prediction and quantifying the prediction uncertainty. The reasonable estimation of the prediction uncertainty, a valuable tool for decision making to address water resources management and design problems, is influenced by the techniques used to deal with different uncertainty sources. In this study, four types of BNNs with different treatments of the uncertainties related to parameters (neural network's weights) and model structures were applied for uncertainty estimation of streamflow simulation in two U.S. Department of Agriculture Agricultural Research Service watersheds (Little River Experimental Watershed in Georgia and Reynolds Creek Experimental Watershed in Idaho). An advanced Markov chain Monte Carlo algorithm, evolutionary Monte Carlo, was used to train the BNNs and to estimate uncertainty limits of streamflow simulation. The results obtained in these two case study watersheds show that the 95% uncertainty limits estimated by different types of BNNs are different from each other. The BNNs that only consider the parameter uncertainty with noninformative prior knowledge contain the least number of observed streamflow data in their 95% uncertainty bound. By considering variable model structure and informative prior knowledge, the BNNs can provide more reasonable quantification of the uncertainty of streamflow simulation. This study stresses the need for improving understanding and quantifying methods of different uncertainty sources for effective estimation of uncertainty of hydrologic simulation using BNNs.

  17. Efficiently passing messages in distributed spiking neural network simulation

    PubMed Central

    Thibeault, Corey M.; Minkovich, Kirill; O'Brien, Michael J.; Harris, Frederick C.; Srinivasa, Narayan

    2013-01-01

    Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased so has the size of the computing systems required to simulate them. In addition, the information exchange of these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with Infiniband hardware is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked. PMID:23772213

  18. Clustering, simulation, and neural networks in real-world applications

    NASA Astrophysics Data System (ADS)

    Padgett, Mary Lou; Josephson, Eleanor M.; White, C. R.; Duffield, Don W.

    1995-04-01

    Real-world applications of neural networks often involve simulation and clustering. Reduction of subjective decisions and increasing the potential for real-time automation of cluster evaluation is a target of the cluster check (CC) method suggested here. CC quantitatively assesses the variation within a cluster, produces a `central' pattern for a cluster which is robust in the presence of wide variation and skewed data, and suggests a measure for the distance between clusters. Such a measure of the effectiveness of a segmentation scheme is helpful in many applications, including traditional analysis, neural systems, fuzzy systems and evolutionary systems. This work reports successful use of the CC and companion analytic procedures to measure the consistency of movement of neuroanatomical tracer down neural pathways associated with injection sites (tract tracing). Opposite injection sites produce distinctive L shaped accumulations of tracer in different locations. Consistency of pathways for particular injection sites varies from 0.10 to 0.20 out of a possible 0.80. The pathway rejected by the nonparametric statistics and subdivided by Kohonen's self organizing map (SOM) measures 0.20. These quantitative results are consistent with the expert qualitative inspection traditionally accepted in the study of neuroanatomy of the rat olfactory bulb and tubercle. This work suggests further application of the CC and companion techniques to fault detection, identification and recovery of systems for control of diabetes and systems for control of missiles. Use of managerial decisions in the supervisory portions of these systems may also be facilitated by the consistency measure and distance metric allowing reinforcement of consistent decisions and focus on areas needing reconsideration. Automation of such procedures may facilitate real-time, robust and fault-tolerant control by adding a capability for evaluation needed for automated reinforcement and/or selection in neural

  19. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam, associated with the external tank of the U.S. space shuttle, has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.

  20. The role of simulation in the design of a neural network chip

    NASA Technical Reports Server (NTRS)

    Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.

    1993-01-01

    An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.

  1. Neural network setpoint control of an advanced test reactor experiment loop simulation

    SciTech Connect

    Cordes, G.A.; Bryan, S.R.; Powell, R.H.; Chick, D.R.

    1990-09-01

    This report describes the design, implementation, and application of artificial neural networks to achieve temperature and flow rate control for a simulation of a typical experiment loop in the Advanced Test Reactor (ATR) located at the Idaho National Engineering Laboratory (INEL). The goal of the project was to research multivariate, nonlinear control using neural networks. A loop simulation code was adapted for the project and used to create a training set and test the neural network controller for comparison with the existing loop controllers. The results for three neural network designs are documented and compared with existing loop controller action. The neural network was shown to be as accurate at loop control as the classical controllers in the operating region represented by the training set. 9 refs., 28 figs., 2 tabs.

  2. Battery Performance Modelling and Simulation: a Neural Network Based Approach

    NASA Astrophysics Data System (ADS)

    Ottavianelli, Giuseppe; Donati, Alessandro

    2002-01-01

    This project has developed against the background of ongoing research within the Control Technology Unit (TOS-OSC) of the Special Projects Division at the European Space Operations Centre (ESOC) of the European Space Agency. The purpose of this research is to develop and validate an Artificial Neural Network tool (ANN) able to model, simulate and predict the Cluster II battery system's performance degradation. (The Cluster II mission comprises four spacecraft flying in tetrahedral formation, aimed at observing and studying the interaction between the Sun and the Earth by passing in and out of our planet's magnetic field.) This prototype tool, named BAPER and developed with a commercial neural network toolbox, could be used to support short- and medium-term mission planning in order to improve and maximise the batteries' lifetime, determining the future best charge/discharge cycles for the batteries given their present states, in view of a Cluster II mission extension. This study focuses on the five Silver-Cadmium batteries onboard Tango, the fourth Cluster II satellite, but time constraints have so far allowed an assessment of only the first battery. In their most basic form, ANNs are hyper-dimensional curve fits for non-linear data. With their remarkable ability to derive meaning from complicated or imprecise history data, ANNs can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. ANNs learn by example, and this is why they can be described as inductive, or data-based, models for the simulation of input/target mappings. A trained ANN can be thought of as an "expert" in the category of information it has been given to analyse, and this expert can then be used, as in this project, to provide projections given new situations of interest and answer "what if" questions. The most appropriate algorithm, in terms of training speed and memory storage requirements, is clearly the Levenberg

  3. Designing laboratory wind simulations using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Križan, Josip; Gašparac, Goran; Kozmar, Hrvoje; Antonić, Oleg; Grisogono, Branko

    2015-05-01

    While experiments in boundary layer wind tunnels remain a major research tool in wind engineering and environmental aerodynamics, designing the modeling hardware required for a proper atmospheric boundary layer (ABL) simulation can be costly and time consuming. Hence, possibilities are sought to speed up this process and make it more time-efficient. In this study, two artificial neural networks (ANNs) are developed to determine an optimal design of the Counihan hardware, i.e., castellated barrier wall, vortex generators, and surface roughness, in order to simulate the ABL flow developing above urban, suburban, and rural terrains, as previous ANN models were created for one terrain type only. A standard procedure is used in developing those two ANNs in order to further enhance best-practice possibilities rather than to improve existing ANN designing methodology. In total, experimental results obtained using 23 different hardware setups are used when creating ANNs. In those tests, basic barrier height, barrier castellation height, spacing density, and height of surface roughness elements are the parameters that were varied to create satisfactory ABL simulations. The first ANN was used for the estimation of mean wind velocity, turbulent Reynolds stress, turbulence intensity, and length scales, while the second one was used for the estimation of the power spectral density of velocity fluctuations. This extensive set of studied flow and turbulence parameters is unmatched in comparison to the previous relevant studies, as it includes here turbulence intensity and power spectral density of velocity fluctuations in all three directions, as well as the Reynolds stress profiles and turbulence length scales. Modeling results agree well with experiments for all terrain types, particularly in the lower ABL within the height range of most engineering structures, while exhibiting sensitivity to abrupt changes and data scattering in profiles of wind-tunnel results. 
The

  4. MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion

    NASA Astrophysics Data System (ADS)

    Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong

    This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several important techniques are employed as follows to simulate such a neural system. 1) Kronecker product of matrices is introduced to transform a matrix-differential-equation (MDE) to a vector-differential-equation (VDE); i.e., finally, a standard ordinary-differential-equation (ODE) is obtained. 2) MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
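    The pipeline above can be sketched concretely. The gradient-based model minimizes ||AX - I||^2, giving the matrix differential equation dX/dt = -gamma A^T (A X - I); the Kronecker product turns this MDE into a standard vector ODE. In this sketch SciPy's solve_ivp (RK45) stands in for MATLAB's ode45, and the gain gamma and test matrix are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# vec(A^T A X) = (I kron A^T A) vec(X) under column-major vectorization,
# so the MDE becomes the linear VDE  dx/dt = -gamma * (M x - b).

def make_vde(A, gamma=10.0):
    n = A.shape[0]
    M = np.kron(np.eye(n), A.T @ A)   # Kronecker form of X -> A^T A X
    b = A.T.flatten(order='F')        # vec(A^T), column-major
    return lambda t, x: -gamma * (M @ x - b)

def invert_online(A, t_end=5.0):
    """Integrate the VDE from X(0) = 0 and un-vectorize the final state."""
    n = A.shape[0]
    sol = solve_ivp(make_vde(A), (0.0, t_end), np.zeros(n * n),
                    rtol=1e-8, atol=1e-10)
    return sol.y[:, -1].reshape(n, n, order='F')
```

    The ODE's equilibrium satisfies A^T A X = A^T, i.e. X converges to A^{-1} for nonsingular A, which is why the network performs online matrix inversion.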

  5. Dynamical system modeling via signal reduction and neural network simulation

    SciTech Connect

    Paez, T.L.; Hunter, N.F.

    1997-11-01

    Many dynamical systems tested in the field and the laboratory display significant nonlinear behavior. Accurate characterization of such systems requires modeling in a nonlinear framework. One construct forming a basis for nonlinear modeling is that of the artificial neural network (ANN). However, when system behavior is complex, the amount of data required to perform training can become unreasonable. The authors reduce the complexity of information present in system response measurements using decomposition via canonical variate analysis. They describe a method for decomposing system responses, then modeling the components with ANNs. A numerical example is presented, along with conclusions and recommendations.

  6. Unified-theory-of-reinforcement neural networks do not simulate the blocking effect.

    PubMed

    Calvin, Nicholas T.; McDowell, J. J.

    2015-11-01

    For the last 20 years the unified theory of reinforcement (Donahoe et al., 1993) has been used to develop computer simulations to evaluate its plausibility as an account for behavior. The unified theory of reinforcement states that operant and respondent learning occurs via the same neural mechanisms. As part of a larger project to evaluate the operant behavior predicted by the theory, this project was the first replication of neural network models based on the unified theory of reinforcement. In the process of replicating these neural network models it became apparent that a previously published finding, namely, that the networks simulate the blocking phenomenon (Donahoe et al., 1993), was a misinterpretation of the data. We show that the apparent blocking produced by these networks is an artifact of the inability of these networks to generate the same conditioned response to multiple stimuli. The piecemeal approach to evaluate the unified theory of reinforcement via simulation is critiqued and alternatives are discussed. PMID:26319369

  7. Simulation of restricted neural networks with reprogrammable neurons

    SciTech Connect

    Hartline, D.K.

    1989-05-01

    This paper describes a network model composed of reprogrammable neurons. It incorporates the following design features: spikes can be generated by a model representing repetitive firing at axonal (and dendritic) trigger zones; active responses (plateau potentials; delaying mechanisms) are simulated with Hodgkin-Huxley-type kinetics; and synaptic interactions, both spike-mediated and non-spiking chemical ('chemotonic'), simulate transmitter release and binding to postsynaptic receptors. Facilitation and antifacilitation of spike-mediated postsynaptic potentials (PSPs) are included. Chemical pools are used to simulate second-messenger systems, trapping of ions in extracellular spaces, and electrogenic pumps, as well as biochemical reaction chains of quite general character. Modulation of any of the parameters of any compartment can be effected through the pools. Intracellular messengers of three kinds are simulated explicitly: those produced by voltage-gated processes (e.g., Ca); those dependent on transmitter (or hormone) binding; and those dependent on other internal messengers (e.g., internally released Ca; enzymatically activated pathways).

  8. Development of a Neural Network Simulator for Studying the Constitutive Behavior of Structural Composite Materials

    DOE PAGES Beta

    Na, Hyuntae; Lee, Seung-Yub; Üstündag, Ersan; Ross, Sarah L.; Ceylan, Halil; Gopalakrishnan, Kasthurirangan

    2013-01-01

    This paper introduces a recent development and application of a noncommercial artificial neural network (ANN) simulator with graphical user interface (GUI) to assist in rapid data modeling and analysis in the engineering diffraction field. The real-time network training/simulation monitoring tool has been customized for the study of constitutive behavior of engineering materials, and it has improved the data mining and forecasting capabilities of neural networks. This software has been used to train and simulate the finite element modeling (FEM) data for a fiber composite system, both forward and inverse. The forward neural network simulation precisely reproduces FEM results several orders of magnitude faster than the original FEM computation. The inverse simulation is more challenging; yet, material parameters can be meaningfully determined with the aid of parameter sensitivity information. The simulator GUI also reveals that the output node size for the material parameters and the input normalization method for the strain data are critical training conditions for the inverse network. The successful use of ANN modeling and the simulator GUI has been validated against engineering neutron diffraction experimental data by determining the constitutive laws of real fiber composite materials via a mathematically rigorous and physically meaningful parameter search process, once the networks are successfully trained from the FEM database.

  9. On Precision of Recurrent Higher-Order Neural Network that Simulates Turing Machines

    NASA Astrophysics Data System (ADS)

    Tanaka, Ken

    When a neural network simulates a Turing machine, the states of the finite state controller and the symbols on the infinite tape are encoded in the continuous output values of neurons. The precision of these outputs is regarded as a space resource in neural computation. We show a sufficient condition on the precision that guarantees the correctness of computations: precision linear in nT suffices, where n is the number of neurons and T is the number of state-update iterations.

  10. Visual NNet: An Educational ANN's Simulation Environment Reusing Matlab Neural Networks Toolbox

    ERIC Educational Resources Information Center

    Garcia-Roselló, Emilio; González-Dacosta, Jacinto; Lado, Maria J.; Méndez, Arturo J.; Garcia Pérez-Schofield, Baltasar; Ferrer, Fátima

    2011-01-01

    Artificial Neural Networks (ANN's) are nowadays a common subject in different curricula of graduate and postgraduate studies. Due to the complex algorithms involved and the dynamic nature of ANN's, simulation software has been commonly used to teach this subject. This software has usually been developed specifically for learning purposes, because…

  11. Application of a neural network to simulate analysis in an optimization process

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Lamarsh, William J., II

    1992-01-01

    A new experimental software package called NETS/PROSSS, aimed at reducing the computing time required to solve a complex design problem, is described. The software combines a neural network for simulating the analysis program with an optimization program. The neural network is applied to approximate the results of a finite element analysis program in order to quickly obtain a near-optimal solution. Results of the NETS/PROSSS optimization process can also be used as an initial design in a normal optimization process, making it possible to converge to an optimum solution with significantly fewer iterations.
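    The surrogate idea behind NETS/PROSSS can be sketched as follows. This is a minimal illustration under assumed names and data, not the NETS/PROSSS code: a toy one-variable function plays the role of the expensive finite element analysis, a tiny hand-rolled MLP plays the role of the NETS network, and the "optimization" is a simple grid search over the cheap surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive finite element analysis: one design
# variable in, one response out (hypothetical objective).
def expensive_analysis(x):
    return np.sin(3 * x) + 0.5 * x**2

# Sample the analysis sparsely, as a real FEM code would only be
# affordable at a limited number of design points.
X = np.linspace(-2, 2, 40).reshape(-1, 1)
y = expensive_analysis(X)

# Tiny one-hidden-layer MLP trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.05
for _ in range(5000):
    h, pred = forward(X)
    err = pred - y                                  # (40, 1)
    # Backpropagate mean-squared-error gradients.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Optimize over the cheap surrogate instead of the analysis code:
# thousands of surrogate evaluations cost less than one FEM run.
candidates = np.linspace(-2, 2, 2001).reshape(-1, 1)
_, surrogate_vals = forward(candidates)
x_opt = candidates[surrogate_vals.argmin(), 0]
```

    In the actual workflow, the near-optimal `x_opt` found on the surrogate would then seed a conventional optimization against the real analysis, cutting the number of expensive iterations.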

  12. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware.

    PubMed

    Knight, James C; Tully, Philip J; Kaplan, Bernhard A; Lansner, Anders; Furber, Steve B

    2016-01-01

    SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061

  13. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware

    PubMed Central

    Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.

    2016-01-01

    SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061

  14. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    PubMed

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite. PMID:25309418

  15. UCLA SFINX - a neural network simulation environment. Research report, 1 Jul 86-30 Jun 87

    SciTech Connect

    Paik, E.; Gungner, D.; Skrzypek, J.

    1987-06-01

    Massively parallel computing architectures are of widespread interest because they can significantly reduce the execution time of some computationally intensive algorithms. There are tasks, such as the guidance of an autonomous robot over an unknown terrain, where a system's survival is dependent on real time interactions with its environment. These time constraints force algorithms to be recast in a form that more closely matches, and thereby takes advantage of, the underlying computing architecture. Similarly, neurophysiology has shown that natural systems derive needed real time functionality from massively parallel networks by organizing structural components around functional goals. SFINX (Structure and Function In Neural connections) is a neural network simulation environment that allows researchers to investigate the behavior of various neural structures. It is designed to easily express and simulate the highly regular patterns often found in large networks, but it is also general enough to model parallel systems of arbitrary interconnectivity. This paper compares SFINX to previous neural network simulators and describes its features and overall organization.

  16. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    PubMed Central

    Cheung, Kit; Schultz, Simon R.; Luk, Wayne

    2016-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542

  17. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors.

    PubMed

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542

  18. OpenSim: A Flexible Distributed Neural Network Simulator with Automatic Interactive Graphics.

    PubMed

    Jarosch, Andreas; Leber, Jean Francois

    1997-06-01

    An object-oriented simulator called OpenSim is presented that achieves a high degree of flexibility by relying on a small set of building blocks. The state variables and algorithms put in this framework can easily be accessed through a command shell. This allows one to distribute a large-scale simulation over several workstations and to generate the interactive graphics automatically. OpenSim opens new possibilities for cooperation among Neural Network researchers. Copyright 1997 Elsevier Science Ltd. PMID:12662864

  19. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method.

    PubMed

    Bernal, Javier; Torres-Jimenez, Jose

    2015-01-01

    SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller's scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller's algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller's algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller's algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data. PMID:26958442
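    SAGRAD's two-phase structure, simulated annealing to choose starting weights, then gradient refinement, can be sketched compactly. This is a hypothetical illustration, not SAGRAD itself: a logistic model on toy data stands in for the classification network, and plain gradient descent stands in for Møller's scaled conjugate gradient algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linearly separable 2-class problem (hypothetical data).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def loss(w):
    # cross-entropy of a logistic unit: weights w[:2], bias w[2]
    p = 1 / (1 + np.exp(-(X @ w[:2] + w[2])))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

# --- Phase 1: simulated annealing to choose starting weights ---
w = rng.normal(size=3)
cur_loss = loss(w)
best, best_loss = w.copy(), cur_loss
T = 1.0
for _ in range(500):
    cand = w + rng.normal(scale=T, size=3)
    cl = loss(cand)
    # Metropolis acceptance: always accept improvements, sometimes
    # accept worse moves so the search can escape local minima.
    if cl < cur_loss or rng.random() < np.exp(-(cl - cur_loss) / T):
        w, cur_loss = cand, cl
        if cl < best_loss:
            best, best_loss = cand.copy(), cl
    T *= 0.99    # geometric cooling schedule

# --- Phase 2: gradient refinement from the annealed start ---
# (plain gradient descent here; SAGRAD uses scaled conjugate gradient)
w = best
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w[:2] + w[2])))
    g = np.concatenate([X.T @ (p - y) / len(y), [(p - y).mean()]])
    w -= 0.5 * g

final_loss = loss(w)
```

    In SAGRAD the annealing phase is also re-invoked whenever the gradient method stalls on a flat region or local minimum, which this sketch omits.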

  20. SAGRAD: A Program for Neural Network Training with Simulated Annealing and the Conjugate Gradient Method

    PubMed Central

    Bernal, Javier; Torres-Jimenez, Jose

    2015-01-01

    SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller’s scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller’s algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller’s algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller’s algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data. PMID:26958442

  1. Electronic Neural Networks

    NASA Technical Reports Server (NTRS)

    Thakoor, Anil

    1990-01-01

    Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.

  2. Simulated apoptosis/neurogenesis regulates learning and memory capabilities of adaptive neural networks.

    PubMed

    Chambers, R Andrew; Potenza, Marc N; Hoffman, Ralph E; Miranker, Willard

    2004-04-01

    Characterization of neuronal death and neurogenesis in the adult brain of birds, humans, and other mammals raises the possibility that neuronal turnover represents a special form of neuroplasticity associated with stress responses, cognition, and the pathophysiology and treatment of psychiatric disorders. Multilayer neural network models capable of learning alphabetic character representations via incremental synaptic connection strength changes were used to assess additional learning and memory effects incurred by simulation of coordinated apoptotic and neurogenic events in the middle layer. Using a consistent incremental learning capability across all neurons and experimental conditions, increasing the number of middle layer neurons undergoing turnover increased network learning capacity for new information, and increased forgetting of old information. Simulations also showed that specific patterns of neural turnover based on individual neuronal connection characteristics, or the temporal-spatial pattern of neurons chosen for turnover during new learning impacts new learning performance. These simulations predict that apoptotic and neurogenic events could act together to produce specific learning and memory effects beyond those provided by ongoing mechanisms of connection plasticity in neuronal populations. Regulation of rates as well as patterns of neuronal turnover may serve an important function in tuning the informatic properties of plastic networks according to novel informational demands. Analogous regulation in the hippocampus may provide for adaptive cognitive and emotional responses to novel and stressful contexts, or operate suboptimally as a basis for psychiatric disorders. The implications of these elementary simulations for future biological and neural modeling research on apoptosis and neurogenesis are discussed. PMID:14702022
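    The middle-layer turnover manipulation can be sketched directly. This is a minimal, hypothetical version of the idea, not the authors' model: `turnover` "kills" a chosen fraction of hidden units and replaces them with naive units, erasing their stored associations while freeing capacity for new learning; all names and weight scales are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 3-layer network weights (input -> hidden -> output).
W_in = rng.normal(size=(8, 16))
W_out = rng.normal(size=(16, 4))

def turnover(W_in, W_out, fraction, rng):
    """Simulate coordinated apoptosis/neurogenesis: a fraction of
    hidden units dies and is replaced by fresh units with small
    random weights, forgetting what those units encoded."""
    n_hidden = W_in.shape[1]
    n_replace = int(fraction * n_hidden)
    victims = rng.choice(n_hidden, size=n_replace, replace=False)
    W_in[:, victims] = rng.normal(scale=0.1, size=(W_in.shape[0], n_replace))
    W_out[victims, :] = rng.normal(scale=0.1, size=(n_replace, W_out.shape[1]))
    return victims

victims = turnover(W_in, W_out, fraction=0.25, rng=rng)
```

    Varying `fraction`, or selecting `victims` by connection characteristics rather than at random, corresponds to the rate and pattern manipulations whose learning/forgetting effects the simulations measure.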

  3. Inverse simulation system for manual-controlled rendezvous and docking based on artificial neural network

    NASA Astrophysics Data System (ADS)

    Zhou, Wanmeng; Wang, Hua; Tang, Guojin; Guo, Shuai

    2016-09-01

    The time-consuming experimental method for handling qualities assessment cannot meet the increasingly fast design requirements of manned space flight. As a tool for aircraft handling qualities research, the model-predictive-control structured inverse simulation (MPC-IS) has potential applications in the aerospace field to guide the astronauts' operations and evaluate the handling qualities more effectively. Therefore, this paper establishes MPC-IS for manual-controlled rendezvous and docking (RVD) and proposes a novel artificial neural network inverse simulation system (ANN-IS) to further decrease the computational cost. The novel system was obtained by replacing the inverse model of MPC-IS with an artificial neural network. The optimal neural network was trained by the genetic Levenberg-Marquardt algorithm, and finally determined by the Levenberg-Marquardt algorithm. In order to validate MPC-IS and ANN-IS, manual-controlled RVD experiments were carried out on the simulator. The comparisons between simulation results and experimental data demonstrated the validity of the two systems and the high computational efficiency of ANN-IS.

  4. Nested Neural Networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1992-01-01

    Report presents analysis of nested neural networks, consisting of interconnected subnetworks. Analysis based on simplified mathematical models more appropriate for artificial electronic neural networks, partly applicable to biological neural networks. Nested structure allows for retrieval of individual subpatterns. Requires fewer wires and connection devices than fully connected networks, and allows for local reconstruction of damaged subnetworks without rewiring entire network.

  5. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node "brain" with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  6. Research on the Simulation of Neural Networks and Semaphores

    NASA Astrophysics Data System (ADS)

    Zhu, Haibo

    In recent years, much research has been devoted to the emulation of the Turing machine; unfortunately, few have enabled the exploration of SMPs. Given the current status of decentralized algorithms, security experts obviously desire the significant unification of wide-area networks and telephony, which embodies the confusing principles of steganography. In this paper, we present new empathic communication (Bam), demonstrating that digital-to-analog converters and checksums are largely incompatible.

  7. A configurable simulation environment for the efficient simulation of large-scale spiking neural networks on graphics processors.

    PubMed

    Nageswaran, Jayram Moorkanikara; Dutt, Nikil; Krichmar, Jeffrey L; Nicolau, Alex; Veidenbaum, Alexander V

    2009-01-01

    Neural network simulators that take into account the spiking behavior of neurons are useful for studying brain mechanisms and for various neural engineering applications. Spiking Neural Networks (SNNs) have traditionally been simulated on large-scale clusters, super-computers, or on dedicated hardware architectures. Alternatively, Compute Unified Device Architecture (CUDA) Graphics Processing Units (GPUs) can provide a low-cost, programmable, and high-performance computing platform for simulation of SNNs. In this paper we demonstrate an efficient, biologically realistic, large-scale SNN simulator that runs on a single GPU. The SNN model includes Izhikevich spiking neurons, detailed models of synaptic plasticity and variable axonal delay. We allow user-defined configuration of the GPU-SNN model by means of a high-level programming interface written in C++ but similar to the PyNN programming interface specification. PyNN is a common programming interface developed by the neuronal simulation community to allow a single script to run on various simulators. The GPU implementation (on NVIDIA GTX-280 with 1 GB of memory) is up to 26 times faster than a CPU version for the simulation of 100K neurons with 50 Million synaptic connections, firing at an average rate of 7 Hz. For simulation of 10 Million synaptic connections and 100K neurons, the GPU SNN model is only 1.5 times slower than real-time. Further, we present a collection of new techniques related to parallelism extraction, mapping of irregular communication, and network representation for effective simulation of SNNs on GPUs. The fidelity of the simulation results was validated on CPU simulations using firing rate, synaptic weight distribution, and inter-spike interval analysis. Our simulator is publicly available to the modeling community so that researchers will have easy access to large-scale SNN simulations. PMID:19615853
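    The Izhikevich neuron model used by this simulator updates every neuron with the same few arithmetic operations, which is exactly the data-parallel form that maps well onto GPU threads. Below is a small vectorized sketch of that update in NumPy (not the authors' CUDA code); the regular-spiking parameters and the noisy drive current are standard textbook choices, assumed here for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1000 regular-spiking Izhikevich neurons, updated in lockstep:
#   v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u)
#   on spike (v >= 30 mV): v <- c, u <- u + d
N = 1000
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v = np.full(N, -65.0)
u = b * v
spike_counts = np.zeros(N, dtype=int)

for step in range(1000):                 # 1 ms steps, 1 s of simulated time
    I = 10.0 + rng.normal(0, 2, N)       # noisy constant drive (arbitrary)
    fired = v >= 30.0
    spike_counts += fired
    v[fired] = c
    u[fired] += d
    # Two 0.5 ms Euler substeps for the voltage, for numerical stability.
    for _ in range(2):
        v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += a * (b * v - u)

mean_rate = spike_counts.mean()          # spikes per simulated second
```

    Because each neuron's update touches only its own state, the loop body parallelizes trivially; the harder GPU problems the paper addresses are the irregular synaptic communication and network representation.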

  8. Accelerating Learning By Neural Networks

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1992-01-01

    Electronic neural networks made to learn faster by use of terminal teacher forcing. Method of supervised learning involves addition of teacher forcing functions to excitations fed as inputs to output neurons. Initially, teacher forcing functions are strong enough to force outputs to desired values; subsequently, these functions decay with time. When learning successfully completed, terminal teacher forcing vanishes, and dynamics of neural network become equivalent to those of conventional neural network. Simulated neural network with terminal teacher forcing learned to produce close approximation of circular trajectory in 400 iterations.

  9. Simulation of the effect of learning on the topology of the functional connectivity of neural networks

    NASA Astrophysics Data System (ADS)

    García, I.; Jiménez, J.; Mujica, R.

    2014-04-01

    We introduce a procedure for simulating adaptive learning in neural networks and the effect this learning has on the way in which the functional connections between the nodes of the network are established. The procedure combines two mechanisms: first, the gradual dilution of the network through the elimination of synaptic weights in increasing order of magnitude, thus reducing the cost of the network structure; second, the training of the network as it is diluted, so as not to compromise its performance on the proposed task. Considering different levels of learning difficulty, we compare the topology of the functional connectivities that result from the application of this procedure with those obtained using fMRI in healthy volunteers. According to our results, the topology of functional connectivities in healthy subjects can be interpreted as the product of a learning process with a specific degree of difficulty.
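    The dilution step, removing synaptic weights in increasing order of magnitude, is ordinary magnitude pruning and can be sketched as follows. This is a minimal illustration under assumed names, with a random weight matrix; the interleaved retraining the procedure requires is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical weight matrix and a mask of surviving synapses.
W = rng.normal(size=(32, 32))
mask = np.ones_like(W, dtype=bool)

def dilute(W, mask, fraction):
    """Zero out the smallest-magnitude fraction of the surviving
    weights, reducing wiring cost; training between dilution steps
    (not shown) would restore task performance."""
    alive = np.abs(W[mask])
    cutoff = np.quantile(alive, fraction)
    mask &= np.abs(W) > cutoff
    W[~mask] = 0.0
    return W, mask

W, mask = dilute(W, mask, 0.5)
sparsity = 1 - mask.mean()      # fraction of synapses eliminated
```

    Repeating `dilute` with small fractions, training in between, yields the gradual dilution whose resulting functional topology is compared with the fMRI data.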

  10. Morphological neural networks

    SciTech Connect

    Ritter, G.X.; Sussner, P.

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
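    The substitution described above is small enough to show side by side. A classical layer computes, for each output i, the sum over j of w_ij · x_j; the morphological layer instead computes the maximum over j of w_ij + x_j. The tiny numbers below are illustrative only.

```python
import numpy as np

def classical_layer(x, W):
    # sum of products: sum_j (w_ij * x_j)
    return W @ x

def morphological_layer(x, W):
    # maximum of sums: max_j (w_ij + x_j) -- nonlinear before any
    # thresholding is applied
    return np.max(W + x[np.newaxis, :], axis=1)

x = np.array([1.0, -2.0, 3.0])
W = np.array([[0.0, 5.0, -1.0],
              [2.0, 0.0, 0.0]])

print(classical_layer(x, W))        # → [-13.   2.]
print(morphological_layer(x, W))    # → [3. 3.]
```

    The max-plus form is a linear operation in the so-called tropical semiring, which is why these networks behave so differently from their sum-of-products counterparts.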

  11. Decoherence and Entanglement Simulation in a Model of Quantum Neural Network Based on Quantum Dots

    NASA Astrophysics Data System (ADS)

    Altaisky, Mikhail V.; Zolnikova, Nadezhda N.; Kaputkina, Natalia E.; Krylov, Victor A.; Lozovik, Yurii E.; Dattani, Nikesh S.

    2016-02-01

    We present the results of the simulation of a quantum neural network based on quantum dots using a numerical method of path integral calculation. In the proposed implementation of the quantum neural network using an array of single-electron quantum dots with dipole-dipole interaction, the coherence is shown to survive up to 0.1 nanosecond in time and up to the liquid nitrogen temperature of 77 K. We study the quantum correlations between the quantum dots by means of calculation of the entanglement of formation in a pair of quantum dots on the GaAs-based substrate with dot sizes on the order of 10^0 ÷ 10^1 nanometers and interdot distances on the order of 10^1 ÷ 10^2 nanometers.

  12. Simulation of Neurocomputing Based on Photophobic Reactions of Euglena: Toward Microbe-Based Neural Network Computing

    NASA Astrophysics Data System (ADS)

    Ozasa, Kazunari; Aono, Masashi; Maeda, Mizuo; Hara, Masahiko

    In order to develop an adaptive computing system, we investigate microscopic optical feedback to a group of microbes (Euglena gracilis in this study) with a neural network algorithm, expecting that the unique characteristics of microbes, especially their strategies to survive and adapt under unfavorable environmental stimuli, will explicitly determine the temporal evolution of the microbe-based feedback system. The photophobic reactions of Euglena are extracted from experiments and built into a Monte-Carlo simulation of microbe-based neurocomputing. The simulation revealed a good performance of Euglena-based neurocomputing. Dynamic transitions among the solutions are discussed from the viewpoint of feedback instability.

  13. Neural Network Development Tool (NETS)

    NASA Technical Reports Server (NTRS)

    Baffes, Paul T.

    1990-01-01

    Artificial neural networks formed from hundreds or thousands of simulated neurons, connected in manner similar to that in human brain. Such network models learning behavior. Using NETS involves translating problem to be solved into input/output pairs, designing network configuration, and training network. Written in C.

  14. Simulation and optimization of a pulsating heat pipe using artificial neural network and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Jokar, Ali; Godarzi, Ali Abbasi; Saber, Mohammad; Shafii, Mohammad Behshad

    2016-01-01

    In this paper, a novel approach is presented to simulate and optimize pulsating heat pipes (PHPs). The pulsating heat pipe setup used was designed and constructed specifically for this study. Because no general mathematical model exists for exact analysis of PHPs, a method based on nature-inspired algorithms has been applied for simulation and optimization. In this approach, the simulator is a multilayer perceptron neural network trained on experimental results obtained from our PHP setup. The results show that the complex behavior of PHPs can be successfully described by the nonlinear structure of this simulator. The input variables of the neural network are the input heat flux to the evaporator (q″), the filling ratio (FR) and the inclination angle (IA), and its output is the thermal resistance of the PHP. Finally, based on the simulation results and the heat pipe's operating constraints, the optimum operating point of the system is obtained using a genetic algorithm (GA). The experimental results show that the optimum FR (38.25 %), input heat flux to the evaporator (39.93 W) and IA (55°) obtained from the GA are acceptable.
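
    The simulate-then-optimize loop described above can be sketched with a genetic algorithm searching over (q, FR, IA); the surrogate function below is a stand-in of mine for the trained perceptron, and the bounds and GA settings are illustrative assumptions chosen so the optimum sits near the reported operating point:

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate_resistance(q, fr, ia):
    # Stand-in for the trained MLP: any smooth function of heat flux (q),
    # filling ratio (fr) and inclination angle (ia) works for this sketch.
    return (q - 40) ** 2 / 400 + (fr - 38) ** 2 / 100 + (ia - 55) ** 2 / 3000 + 0.5

bounds = np.array([[10, 80], [10, 90], [0, 90]])  # q, FR, IA ranges (assumed)

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(30, 3))
for _ in range(60):
    fitness = np.array([surrogate_resistance(*ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:10]]              # truncation selection
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 1.0, (30, 3))
    pop = np.clip(children, bounds[:, 0], bounds[:, 1])  # mutation + bounds

best = pop[np.argmin([surrogate_resistance(*ind) for ind in pop])]
print(best)  # for this surrogate the optimum lies near q=40, FR=38, IA=55
```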

  15. Closed Loop Interactions between Spiking Neural Network and Robotic Simulators Based on MUSIC and ROS

    PubMed Central

    Weidel, Philipp; Djurfeldt, Mikael; Duarte, Renato C.; Morrison, Abigail

    2016-01-01

    In order to properly assess the function and computational properties of simulated neural systems, it is necessary to account for the nature of the stimuli that drive the system. However, providing stimuli that are rich and yet both reproducible and amenable to experimental manipulations is technically challenging, and even more so if a closed-loop scenario is required. In this work, we present a novel approach to solve this problem, connecting robotics and neural network simulators. We implement a middleware solution that bridges the Robotic Operating System (ROS) to the Multi-Simulator Coordinator (MUSIC). This enables any robotic and neural simulators that implement the corresponding interfaces to be efficiently coupled, allowing real-time performance for a wide range of configurations. This work extends the toolset available for researchers in both neurorobotics and computational neuroscience, and creates the opportunity to perform closed-loop experiments of arbitrary complexity to address questions in multiple areas, including embodiment, agency, and reinforcement learning. PMID:27536234

  17. Neural networks for triggering

    SciTech Connect

    Denby, B.; Campbell, M.; Bedeschi, F.; Chriss, N.; Bowers, C.; Nesti, F.

    1990-01-01

    Two types of neural network beauty-trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The B efficiencies and background rejection obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.

  18. Application of artificial neural network in simulating subjective evaluation of tumor segmentation

    NASA Astrophysics Data System (ADS)

    Lv, Dongjiao; Deng, Xiang

    2011-03-01

    Systematic validation of tumor segmentation techniques is very important in ensuring the accuracy and reproducibility of tumor segmentation algorithms in clinical applications. In this paper, we present a new method for evaluating 3D tumor segmentation using an Artificial Neural Network (ANN) and combined objective metrics. In our evaluation method, a three-layer feed-forward backpropagation ANN is first trained to simulate a radiologist's subjective rating using a set of objective metrics. The trained neural network is then used to evaluate the tumor segmentation on a five-point scale in a way similar to an expert's evaluation. The accuracy of segmentation evaluation is quantified using the average correct rank and the frequency of the reference rating in the top ranks of the simulated score list. Experimental results from 93 lesions showed that our evaluation method performs better than individual metrics. The optimal combination of metrics, drawn from normalized volume difference, volume overlap, root mean square symmetric surface distance and maximum symmetric surface distance, showed the smallest average correct rank (1.43) and the highest frequency of the reference rating in the top two places of the simulated rating list (93.55%). Our results also demonstrate that the ANN-based nonlinear combination method showed better evaluation accuracy than the linear combination method in all performance measures. Our evaluation technique has the potential to facilitate large-scale segmentation validation studies by predicting radiologists' ratings, and to assist the development of new tumor segmentation algorithms. It can also be extended to the validation of segmentation algorithms for other applications.

  19. Neural Networks for Readability Analysis.

    ERIC Educational Resources Information Center

    McEneaney, John E.

    This paper describes and reports on the performance of six related artificial neural networks that have been developed for the purpose of readability analysis. Two networks employ counts of linguistic variables that simulate a traditional regression-based approach to readability. The remaining networks determine readability from "visual snapshots"…

  20. Medical image diagnoses by artificial neural networks with image correlation, wavelet transform, simulated annealing

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.

    1993-09-01

    Classical artificial neural networks (ANN) and neurocomputing are reviewed for implementing real-time medical image diagnosis. An algorithm known as the self-reference matched filter, which emulates the spatio-temporal integration ability of the human visual system, might be utilized for multi-frame processing of medical imaging data. A Cauchy machine, implementing a fast simulated annealing schedule, can determine the degree of abnormality by the degree of orthogonality between the patient imagery and the class of features of healthy persons. An automatic inspection process based on multiple-modality image sequences is simulated by incorporating the following new developments: (1) 1-D space-filling Peano curves to preserve the 2-D neighborhood pixel relationships; (2) fast simulated Cauchy annealing for the global optimization of self-feature extraction; and (3) a mini-max energy function for intra- and inter-cluster segregation, useful for top-down ANN designs.

  1. SIMONE: a realistic neural network simulator to reproduce MEA-based recordings.

    PubMed

    Escolá, Ricardo; Pouzat, Christophe; Chaffiol, Antoine; Yvert, Blaise; Magnin, Isabelle E; Guillemaud, Régis

    2008-04-01

    Contemporary multielectrode arrays (MEAs) used to record extracellular activity from neural tissues can deliver data at rates on the order of 100 Mbps. Such rates require efficient data compression and/or preprocessing algorithms implemented on an application-specific integrated circuit (ASIC) close to the MEA. We present SIMONE (Statistical sIMulation Of Neuronal networks Engine), a versatile simulation tool whose parameters can be either fixed or defined by a probability distribution. We validated our tool by simulating data recorded from the first olfactory relay of an insect. Several key aspects make this tool suitable for testing the robustness and accuracy of neural signal processing algorithms (such as the detection, alignment, and classification of spikes). For instance, most of the parameters can be defined by a probability distribution, so tens of simulations can be obtained from the same scenario. This is especially useful when validating the robustness of a processing algorithm. Moreover, the number of active cells and the exact firing activity of each of them are perfectly known, which provides an easy way to test accuracy. PMID:18403283
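
    The fixed-or-distributed parameter idea is easy to sketch: each scenario parameter is either a constant or drawn from a distribution, so one scenario yields many statistically equivalent runs with exactly known ground truth. The parameter names and distributions below are my own toy choices, not SIMONE's:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_scenario():
    # Each parameter is either fixed or drawn from a distribution, so one
    # scenario can generate many distinct but statistically equivalent runs.
    return {
        "n_cells": 8,                                  # fixed
        "rate_hz": rng.gamma(shape=4.0, scale=5.0),    # per-cell firing rate
        "noise_sd": rng.uniform(0.05, 0.2),            # recording noise level
    }

def simulate(scenario, duration_s=1.0, dt=0.001):
    # Toy Poisson-like spiking plus Gaussian recording noise; the
    # ground-truth spike trains are known exactly, easing accuracy tests.
    n_steps = int(duration_s / dt)
    p_spike = scenario["rate_hz"] * dt
    spikes = rng.random((scenario["n_cells"], n_steps)) < p_spike
    trace = spikes.sum(axis=0) + rng.normal(0, scenario["noise_sd"], n_steps)
    return spikes, trace

spikes, trace = simulate(make_scenario())
print(spikes.shape, trace.shape)
```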

  2. A consensual neural network

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Ersoy, O. K.; Swain, P. H.

    1991-01-01

    A neural network architecture called a consensual neural network (CNN) is proposed for the classification of data from multiple sources. Its relation to hierarchical and ensemble neural networks is discussed. CNN is based on the statistical consensus theory and uses nonlinearly transformed input data. The input data are transformed several times, and the different transformed data are applied as if they were independent inputs. The independent inputs are classified using stage neural networks and outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote-sensing data and geographic data are given.
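
    The transform-stages-then-combine structure can be sketched as below; the nonlinear transforms, the softmax stage classifiers, and the fixed consensus weights are illustrative stand-ins of mine (the paper trains its stage networks and derives weights from statistical consensus theory):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def stage_output(x, W):
    # A stand-in stage classifier: linear scores -> class probabilities.
    return softmax(W @ x)

def consensual_decision(x, transforms, stage_weights, consensus_weights):
    # Each nonlinearly transformed copy of the input feeds its own stage
    # network; stage outputs are weighted and combined into one decision.
    outputs = [stage_output(t(x), W) for t, W in zip(transforms, stage_weights)]
    combined = sum(a * o for a, o in zip(consensus_weights, outputs))
    return int(np.argmax(combined))

transforms = [lambda x: x, lambda x: np.tanh(x), lambda x: x ** 2]
rng = np.random.default_rng(0)
stage_weights = [rng.normal(size=(2, 3)) for _ in transforms]  # 2 classes, 3 features
consensus_weights = [0.5, 0.3, 0.2]
print(consensual_decision(np.array([0.2, -1.0, 0.4]),
                          transforms, stage_weights, consensus_weights))
```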

  3. Circuit Design and Simulation of AN Augmented Adaptive Resonance Theory (aart) Neural Network.

    NASA Astrophysics Data System (ADS)

    Ho, Ching-Sung

    This dissertation presents circuit implementations for a binary-input adaptive resonance theory neural network architecture, called the augmented ART-1 neural network (AART1-NN). The AART1-NN is a modification of the popular ART1-NN, developed by Carpenter and Grossberg, and it exhibits the same behavior as the ART1-NN. The AART1-NN is a real-time model and has the ability to classify an arbitrary set of binary input patterns into different clusters. The design of the AART1-NN circuit is based on the set of coupled nonlinear differential equations that constitute the AART1-NN model. Various ways of implementing an efficient and practical AART1-NN in electronic hardware are examined, including circuits built from discrete electronic components (such as operational amplifiers, capacitors, and resistors), digital VLSI circuits, and mixed analog/digital VLSI circuits. The implemented circuit prototypes are verified using the PSpice circuit simulator running on Sun workstations. Results obtained from the PSpice circuit simulations are also compared with results obtained by solving the coupled differential equations numerically. The prototype systems developed in this work can be used as building blocks for larger AART1-NN architectures, as well as for other types of ART architectures that involve the AART1-NN model.

  4. Training Knowledge Bots for Physics-Based Simulations Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.; Wong, Jay Ming

    2014-01-01

    Millions of complex physics-based simulations are required for the design of an aerospace vehicle. These simulations are usually performed by highly trained and skilled analysts, who execute, monitor, and steer each simulation. Analysts rely heavily on broad experience that may have taken 20-30 years to accumulate. In addition, the simulation software is complex in nature, requiring significant computational resources. Simulations of systems of systems become even more complex and are beyond human capacity to learn effectively. IBM has developed machines that can learn and compete successfully with a chess grandmaster and with the most successful Jeopardy! contestants. These machines are capable of learning some complex problems much faster than humans can. In this paper, we propose using artificial neural networks to train knowledge bots to identify the idiosyncrasies of simulation software and recognize patterns that can lead to successful simulations. We examine the use of knowledge bots for applications in computational fluid dynamics (CFD), trajectory analysis, commercial finite-element analysis software, and slosh propellant dynamics. We show that machine learning algorithms can be used to learn the idiosyncrasies of computational simulations and identify regions of instability without any additional information about their mathematical form or applied discretization approaches.

  5. A neural-network-based model for the dynamic simulation of the tire/suspension system while traversing road irregularities.

    PubMed

    Guarneri, Paolo; Rocca, Gianpiero; Gobbi, Massimiliano

    2008-09-01

    This paper deals with the simulation of tire/suspension dynamics using recurrent neural networks (RNNs). RNNs are derived from multilayer feedforward neural networks by adding feedback connections between the output and input layers. The optimal network architecture derives from a parametric analysis based on the optimal tradeoff between network accuracy and size. The neural network can be trained with experimental data obtained in the laboratory from simulated road profiles (cleats). The results obtained from the neural network demonstrate good agreement with the experimental results over a wide range of operating conditions. The NN model can be effectively applied as part of a vehicle system model to accurately predict elastic bushing and tire dynamic behavior. Although the neural network model, as a black-box model, does not provide good insight into the physical behavior of the tire/suspension system, it is a useful tool for assessing vehicle ride and noise, vibration, and harshness (NVH) performance due to its good computational efficiency and accuracy. PMID:18779087
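
    The output-to-input feedback that turns a feedforward network into this kind of RNN can be sketched as follows; the layer sizes, random weights, and sinusoidal "road profile" are illustrative assumptions, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(9)

def rnn_step(u, y_prev, Win, Wfb, Wout, b):
    # A feedforward net whose previous output is fed back as an extra input;
    # this feedback connection is what makes the model recurrent.
    h = np.tanh(Win @ u + Wfb @ y_prev + b)
    return Wout @ h

n_in, n_hidden, n_out = 1, 8, 1
Win = rng.normal(0, 0.5, (n_hidden, n_in))
Wfb = rng.normal(0, 0.5, (n_hidden, n_out))
Wout = rng.normal(0, 0.5, (n_out, n_hidden))
b = np.zeros(n_hidden)

road_profile = np.sin(np.linspace(0, 4 * np.pi, 100))[:, None]  # cleat-like excitation
y = np.zeros(n_out)
response = []
for u in road_profile:
    y = rnn_step(u, y, Win, Wfb, Wout, b)
    response.append(float(y[0]))
print(len(response))
```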

  6. Exploring neural network technology

    SciTech Connect

    Naser, J.; Maulbetsch, J.

    1992-12-01

    EPRI is funding several projects to explore neural network technology, a form of artificial intelligence that some believe may mimic the way the human brain processes information. This research seeks to provide a better understanding of fundamental neural network characteristics and to identify promising utility industry applications. Results to date indicate that the unique attributes of neural networks could lead to improved monitoring, diagnostic, and control capabilities for a variety of complex utility operations. 2 figs.

  7. Simulation studies of data classification by artificial neural networks: potential applications in medical imaging and decision making.

    PubMed

    Wu, Y; Doi, K; Metz, C E; Asada, N; Giger, M L

    1993-05-01

    Artificial neural networks are being investigated in the field of medical imaging as a means to facilitate pattern recognition and patient classification. In the work reported here, the effects of internal structure and the nature of input data on the performance of neural networks were investigated systematically using computer-simulated data. Network performance was evaluated quantitatively by means of receiver operating characteristic analysis and compared with the performance of an ideal statistical decision maker. We found that the relatively simple neural networks investigated in this study can perform at the level of an ideal decision maker. These simple networks were also found to learn accurately even when the training data are extremely unbalanced with respect to the prevalence of actually positive cases and to differentiate input data patterns by recognizing their unique characteristics. PMID:8334172

  8. Flank wear simulation by using back propagation neural network when cutting hardened H-13 steel in CNC End Milling

    NASA Astrophysics Data System (ADS)

    Hazza, Muataz Hazza F. Al; Adesta, Erry Y. T.; Riza, Muhammad

    2013-12-01

    High speed milling has many advantages, such as a higher removal rate and higher productivity. However, higher cutting speeds increase the flank wear rate, thus reducing cutting tool life. Therefore, estimating and predicting the flank wear length at early stages reduces the risk of unacceptable tooling cost. This research presents a neural network model for predicting and simulating flank wear in the CNC end milling process. A set of sparse experimental data for finish end milling on AISI H13 at a hardness of 48 HRC was collected to measure the flank wear length, and the measured data were then used to train the developed model. An artificial neural network (ANN) was applied to predict the flank wear length. The network contains twenty hidden layers with a feed-forward back-propagation hierarchy and was designed with the MATLAB Neural Network Toolbox. The results show a high correlation between the predicted and observed flank wear, which indicates the validity of the model.

  9. Efficient Simulation of Wing Modal Response: Application of 2nd Order Shape Sensitivities and Neural Networks

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Liu, Youhua

    2000-01-01

    At the preliminary design stage of a wing structure, an efficient simulation, one needing little computation but yielding adequately accurate results for various response quantities, is essential in the search for an optimal design in a vast design space. In the present paper, methods using sensitivities up to 2nd order and the direct application of neural networks are explored. The example problem is determining the natural frequencies of a wing given the shape variables of the structure. It is shown that when sensitivities cannot be obtained analytically, the finite difference approach is usually more reliable than a semi-analytical approach, provided an appropriate step size is used. Using second-order sensitivities is shown to yield much better results than using only first-order sensitivities. When neural networks are trained to relate the wing natural frequencies to the shape variables, a negligible computational effort is needed to accurately determine the natural frequencies of a new design.
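
    A quadratic surrogate built from finite-difference sensitivities, in the spirit discussed above, might look like the following sketch; the closed-form freq_model is a stand-in of mine for an expensive finite-element frequency analysis:

```python
import numpy as np

def freq_model(x):
    # Stand-in for an expensive FE natural-frequency analysis of shape variables x.
    return 10.0 + 2.0 * x[0] - x[1] + 0.5 * x[0] * x[1] + 0.3 * x[1] ** 2

def taylor2(f, x0, h=1e-3):
    # Estimate 1st- and 2nd-order sensitivities by central finite differences,
    # then return the quadratic surrogate f(x0) + g.dx + dx.H.dx/2.
    n = len(x0)
    g = np.zeros(n)
    H = np.zeros((n, n))
    f0 = f(x0)
    for i in range(n):
        e = np.zeros(n); e[i] = h
        g[i] = (f(x0 + e) - f(x0 - e)) / (2 * h)
        H[i, i] = (f(x0 + e) - 2 * f0 + f(x0 - e)) / h ** 2
        for j in range(i):
            e2 = np.zeros(n); e2[j] = h
            H[i, j] = H[j, i] = (f(x0 + e + e2) - f(x0 + e - e2)
                                 - f(x0 - e + e2) + f(x0 - e - e2)) / (4 * h ** 2)
    return lambda x: f0 + g @ (x - x0) + 0.5 * (x - x0) @ H @ (x - x0)

x0 = np.array([1.0, 1.0])
surrogate = taylor2(freq_model, x0)
x_new = np.array([1.3, 0.8])
print(surrogate(x_new), freq_model(x_new))  # agree: the toy model is quadratic
```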

  10. Brain without mind: Computer simulation of neural networks with modifiable neuronal interactions

    NASA Astrophysics Data System (ADS)

    Clark, John W.; Rafelski, Johann; Winston, Jeffrey V.

    1985-07-01

    Aspects of brain function are examined in terms of a nonlinear dynamical system of highly interconnected neuron-like binary decision elements. The model neurons operate synchronously in discrete time, according to deterministic or probabilistic equations of motion. Plasticity of the nervous system, which underlies such cognitive collective phenomena as adaptive development, learning, and memory, is represented by temporal modification of interneuronal connection strengths depending on momentary or recent neural activity. A formal basis is presented for the construction of local plasticity algorithms, or connection-modification routines, spanning a large class. To build an intuitive understanding of the behavior of discrete-time network models, extensive computer simulations have been carried out (a) for nets with fixed, quasirandom connectivity and (b) for nets with connections that evolve under one or another choice of plasticity algorithm. From the former experiments, insights are gained concerning the spontaneous emergence of order in the form of cyclic modes of neuronal activity. In the course of the latter experiments, a simple plasticity routine (“brainwashing,” or “anti-learning”) was identified which, applied to nets with initially quasirandom connectivity, creates model networks which provide more felicitous starting points for computer experiments on the engramming of content-addressable memories and on learning more generally. The potential relevance of this algorithm to developmental neurobiology and to sleep states is discussed. The model considered is at the same time a synthesis of earlier synchronous neural-network models and an elaboration upon them; accordingly, the present article offers both a focused review of the dynamical properties of such systems and a selection of new findings derived from computer simulation.
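
    The synchronous binary dynamics with activity-dependent plasticity can be sketched as follows; the Hebbian rule shown is just one simple member of the large class of local plasticity algorithms the paper formalizes, and the network size, threshold, and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 16
W = rng.normal(0, 1.0, (N, N))       # quasirandom fixed connectivity
np.fill_diagonal(W, 0.0)             # no self-connections
state = rng.integers(0, 2, N)        # binary neuron activities

def step(state, W, theta=0.0):
    # All neurons update synchronously from their summed input fields.
    return (W @ state > theta).astype(int)

def hebbian_update(W, state, eta=0.01):
    # A simple local plasticity rule: strengthen connections between
    # momentarily co-active neurons.
    return W + eta * np.outer(state, state)

for _ in range(50):
    state = step(state, W)
    W = hebbian_update(W, state)
print(state)
```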

  11. Deinterlacing using modular neural network

    NASA Astrophysics Data System (ADS)

    Woo, Dong H.; Eom, Il K.; Kim, Yoo S.

    2004-05-01

    Deinterlacing is the conversion from interlaced scanning to progressive scanning. While many previous algorithms based on weighted sums cause blurring in edge regions, deinterlacing using a neural network can reduce this blurring by recovering high-frequency components through the learning process, and is robust to noise. In the proposed algorithm, the input image is divided into edge and smooth regions, and one neural network is assigned to each region. Through this process, each neural network learns only patterns that are similar, which makes learning more effective and estimation more accurate. But even within each region there are various patterns, such as long edges and texture in the edge region. To solve this problem, a modular neural network is proposed, in which two modules are combined at the output node. One handles the low-frequency features of the local area of the input image, and the other handles the high-frequency features. With this structure, each module can learn different patterns while compensating for the drawbacks of its counterpart, and can therefore adapt effectively to the various patterns within each region. In simulations, the proposed algorithm shows better performance than conventional deinterlacing methods and the single-neural-network method.

  12. Neural-Network-Development Program

    NASA Technical Reports Server (NTRS)

    Phillips, Todd A.

    1993-01-01

    NETS, software tool for development and evaluation of neural networks, provides simulation of neural-network algorithms plus computing environment for development of such algorithms. Uses back-propagation learning method for all of networks it creates. Enables user to customize patterns of connections between layers of network. Also provides features for saving, during learning process, values of weights, providing more-precise control over learning process. Written in ANSI standard C language. Machine-independent version (MSC-21588) includes only code for command-line-interface version of NETS 3.0.

  13. Simulated village locations in Thailand: A multi-scale model including a neural network approach

    PubMed Central

    Tang, Wenwu; Malanson, George P.; Entwisle, Barbara

    2010-01-01

    The simulation of rural land use systems, in general, and rural settlement dynamics in particular has developed with synergies of theory and methods for decades. Three current issues are: linking spatial patterns and processes, representing hierarchical relations across scales, and considering nonlinearity to address complex non-stationary settlement dynamics. We present a hierarchical simulation model to investigate complex rural settlement dynamics in Nang Rong, Thailand. This simulation uses sub-models to allocate new villages at three spatial scales. Regional and sub-regional models, which involve a nonlinear space-time autoregressive model implemented in a neural network approach, determine the number of new villages to be established. A dynamic village niche model, establishing a pattern-process link, was designed to enable the allocation of villages into specific locations. Spatiotemporal variability in model performance indicates that the pattern of village locations changes as a settlement frontier advances from the rice-growing lowlands to higher elevations. Experimental results demonstrate that this simulation model can enhance our understanding of settlement development in Nang Rong and thus provide insight into complex land use systems in this area. PMID:21399748

  14. Monthly runoff simulation: Comparing and combining conceptual and neural network models

    NASA Astrophysics Data System (ADS)

    Nilsson, Patrik; Uvo, Cintia B.; Berndtsson, Ronny

    2006-04-01

    Runoff estimation is of high importance for many practical engineering applications so that, e.g., power production, dam safety and water supply can be ensured. The methods and time step relevant for runoff simulations vary depending on the location and the application. Long-term runoff simulation for Scandinavia is of high importance, as its hydropower production is affected by climate variability, which strongly influences winter temperature and precipitation. This work investigates the possibility of modelling monthly runoff for two Norwegian river basins. Two methodologies, artificial neural networks (NN) and conceptual runoff modelling (CM), are compared, and NN offers the best estimates of monthly runoff for both tested basins, with R2=0.82 and 0.71, respectively. The combination of NN and CM, using the snow accumulation and soil moisture calculated by the CM as input to the NN, proved to be an excellent alternative for performing high-quality monthly runoff simulations and improved the simulation skill for both basins (R2=0.86 and 0.75, respectively).

  16. Neural network simulation of habituation and dishabituation in infant speech perception

    NASA Astrophysics Data System (ADS)

    Gauthier, Bruno; Shi, Rushen; Proulx, Robert

    2001-05-01

    The habituation techniques used in infant speech perception studies are based on the fact that infants show renewed interest towards novel stimuli. Recent work has shown the possibility of using artificial neural networks to model habituation and dishabituation (e.g., Schafer and Mareschal, 2001). In our study we examine whether self-organizing feature maps (SOMs) (Kohonen, 1989) are appropriate for modeling short-term habituation to a repeated speech stimulus. We found that although SOMs are particularly useful for simulating categorization, they can be modified to model habituation and dishabituation, so that they can be applied to direct comparisons with behavioral data on infants' speech discrimination abilities. In particular, we modified the SOMs to include additional parameters that control the relation of input similarity, lateral inhibition, and local and lateral activation between neurons. Preliminary results suggest that these parameters are sufficient for the network to simulate the loss of sensitivity of the auditory system due to the presentation of multiple tokens of a speech stimulus, as well as to model the recovery of sensitivity to a novel stimulus. The implications of this approach for infant speech perception research will be considered.
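
    One way to graft habituation onto a SOM-style map, in the spirit described above, is a per-unit sensitivity that decays each time the unit wins and stays high for unvisited units; this is a guess at a minimal mechanism, not the authors' modified network:

```python
import numpy as np

rng = np.random.default_rng(5)

class HabituatingSOM:
    # Minimal sketch: unit responses are scaled by a sensitivity term that
    # decays with repeated activation (habituation) and remains high for
    # unvisited units, so a novel stimulus meets a fresh response.
    def __init__(self, n_units, dim, tau=0.8):
        self.w = rng.normal(0, 1, (n_units, dim))
        self.sensitivity = np.ones(n_units)
        self.tau = tau

    def respond(self, x):
        activation = self.sensitivity * np.exp(-np.linalg.norm(self.w - x, axis=1))
        winner = int(np.argmax(activation))
        self.sensitivity[winner] *= self.tau   # the winning unit habituates
        return winner, activation[winner]

som = HabituatingSOM(n_units=10, dim=2)
stim_a, stim_b = np.array([1.0, 0.0]), np.array([-1.0, 0.5])
responses = [som.respond(stim_a)[1] for _ in range(5)]  # repeated stimulus
novel = som.respond(stim_b)[1]                          # novel stimulus
print(responses, novel)
```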

  17. Model neural networks

    SciTech Connect

    Kepler, T.B.

    1989-01-01

    After a brief introduction to the techniques and philosophy of neural network modeling by spin-glass-inspired systems, the author investigates several properties of these discrete models for autoassociative memory. Memories are represented as patterns of neural activity; their traces are stored in a distributed manner in the matrix of synaptic coupling strengths. Recall is dynamic: an initial state containing partial information about one of the memories evolves toward that memory. Activity in each neuron creates fields at every other neuron, the sum total of which determines its activity. By averaging over the space of interaction matrices, with memory constraints enforced by the choice of measure, he shows that there exist universality classes defined by families of field distributions and the associated network capacities. He demonstrates the dominant role played by the field distribution in determining the size of the domains of attraction and presents, in two independent ways, an expression for this size. He presents a class of convergent learning algorithms which improve upon known algorithms for producing such interaction matrices. He demonstrates that spurious states, or unexperienced memories, may be practically suppressed by the inducement of n-cycles and chaos. He investigates aspects of chaos in these systems, and then leaves discrete modeling to analyze chaotic behavior on a continuous-valued network realized in electronic hardware. In each section he combines analytical calculation and computer simulation.
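
    The distributed storage and dynamic recall described above follow the standard Hebbian autoassociative scheme, which can be sketched as below (pattern count, network size, and the number of corrupted bits are illustrative choices of mine):

```python
import numpy as np

def store(patterns):
    # Hebbian outer-product rule: memory traces are distributed over the
    # whole coupling matrix rather than stored at separate addresses.
    N = patterns.shape[1]
    W = (patterns.T @ patterns) / N
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    # Dynamic recall: a partial cue evolves toward the nearest stored memory
    # under the summed fields each neuron receives from all the others.
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

rng = np.random.default_rng(3)
patterns = rng.choice([-1, 1], size=(3, 64))    # three random +/-1 memories
W = store(patterns)
cue = patterns[0].copy()
flip = rng.choice(64, size=10, replace=False)
cue[flip] *= -1                                 # corrupt 10 of 64 bits
print(np.array_equal(recall(W, cue), patterns[0]))
```

At this low memory load (3 patterns in 64 neurons) the corrupted cue is almost always restored exactly.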

  18. Latent Heat and Sensible Heat Fluxes Simulation in Maize Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Safa, B.

    2015-12-01

    Latent heat (LE) and sensible heat (H) fluxes are two major components of the energy balance at the earth's surface, playing important roles in the water cycle and global warming. There are various methods for their estimation or measurement. Eddy covariance is a direct and accurate technique for measuring them, but some limitations prevent its extensive use; therefore, simulation approaches can be utilized for their estimation. ANNs are information processing systems which can inspect empirical data, investigate the relations (hidden rules) among them, and then form the network structure. In this study, a multi-layer perceptron neural network trained by the steepest-descent back-propagation (BP) algorithm was tested to simulate LE and H fluxes above two maize sites (rain-fed and irrigated) near Mead, Nebraska. Network training and testing used hourly data including year, local time of day (DTime), leaf area index (LAI), soil water content (SWC) at 10 and 25 cm depths, soil temperature (Ts) at 10 cm depth, air temperature (Ta), vapor pressure deficit (VPD), wind speed (WS), irrigation and precipitation (P), net radiation (Rn), and the fraction of incoming Photosynthetically Active Radiation (PAR) absorbed by the canopy (fPAR), selected from days of year (DOY) 169 to 222 for 2001, 2003, 2005, 2007, and 2009. The results showed high correlation between actual and estimated data; the R² values for LE flux in the irrigated and rain-fed sites were 0.9576 and 0.9642, and for H flux 0.8001 and 0.8478, respectively. Furthermore, the RMSE values ranged from 0.0580 to 0.0721 W/m² for LE flux and from 0.0824 to 0.0863 W/m² for H flux. In addition, the sensitivity of the fluxes with respect to each input was analyzed over the growth stages. Thus, the most powerful effects among the inputs for LE flux were identified as net radiation, leaf area index, vapor pressure deficit, wind speed, and for H

  19. Spatial Estimation, Data Assimilation and Stochastic Conditional Simulation using the Counterpropagation Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Besaw, L. E.; Rizzo, D. M.; Boitnoitt, G. N.

    2006-12-01

    Accurate, yet cost-effective, site characterization and analysis of uncertainty are the first steps in remediation efforts at sites with subsurface contamination. From the time of source identification to the monitoring and assessment of a remediation design, the management objectives change, resulting in increased costs and the need for additional data acquisition. Parameter estimation is a key component in reliable site characterization, contaminant flow and transport predictions, plume delineation and many other data management goals. We implement a data-driven parameter estimation technique using a counterpropagation Artificial Neural Network (ANN) that is able to incorporate multiple types of data. The method is applied to the estimation of geophysical properties measured on a slab of Berea sandstone and to the delineation of the leachate plume migrating from a landfill in upstate New York. The estimates generated by the ANN have been found to be statistically similar to estimates generated using conventional geostatistical kriging methods. The associated parameter uncertainty in site characterization, due to sparsely distributed samples (spatial or temporal) and incomplete site knowledge, is of major concern in resource mining and environmental engineering. We also illustrate the ability of the ANN method to perform conditional simulation using the spatial structure of parameters identified with semi-variogram analysis. This method allows for the generation of simulations that respect the observed measurement data, as well as the data's underlying spatial structure. The method of conditional simulation is used in a 3-dimensional application to estimate the uncertainty of soil lithology.

  20. A multiscale modelling of bone ultrastructure elastic properties using finite element simulation and neural network method.

    PubMed

    Barkaoui, Abdelwahed; Tlili, Brahim; Vercher-Martínez, Ana; Hambli, Ridha

    2016-10-01

    Bone is a living material with a complex hierarchical structure which entails exceptional mechanical properties, including high fracture toughness, specific stiffness and strength. Bone tissue is essentially composed of two phases distributed in an approximate 30-70% ratio: an organic phase (mainly type I collagen and cells) and an inorganic phase (hydroxyapatite-HA-and water). The nanostructure of bone can be represented through three scale levels where different repetitive structural units or building blocks are found: at the first level, collagen molecules are arranged in a pentameric structure where mineral crystals grow in specific sites. This primary bone structure constitutes the mineralized collagen microfibril. A structural organization of inter-digitating microfibrils forms the mineralized collagen fibril, which represents the second scale level. The third scale level corresponds to the mineralized collagen fibre, which is formed by the binding of fibrils. The hierarchical nature of bone tissue is largely responsible for its significant mechanical properties; consequently, this is a current outstanding research topic. Few works in the literature correlate the elastic properties across the three scale levels at the bone nanoscale. The main goal of this work is to estimate the elastic properties of bone tissue in a multiscale approach, including a sensitivity analysis of the elastic behaviour at each length scale. This is achieved by means of a novel hybrid multiscale modelling that involves neural network (NN) computations and finite element method (FEM) analysis. The elastic properties are estimated using a neural network simulation that has previously been trained with the database of results from the finite element models. In the results of this work, parametric analysis and averaged elastic constants for each length scale are provided. Likewise, the influence of the elastic constants of the tissue constituents is also depicted. Results highlight

  1. Neural networks for aircraft control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  2. Critical Branching Neural Networks

    ERIC Educational Resources Information Center

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  3. Neural-Network Computer Transforms Coordinates

    NASA Technical Reports Server (NTRS)

    Josin, Gary M.

    1990-01-01

    Numerical simulation demonstrated ability of conceptual neural-network computer to generalize what it has "learned" from few examples. Ability to generalize achieved with even simple neural network (relatively few neurons) and after exposure of network to only few "training" examples. Ability to obtain fairly accurate mappings after only few training examples used to provide solutions to otherwise intractable mapping problems.

  4. Comparison of Artificial Neural Networks and ARIMA statistical models in simulations of target wind time series

    NASA Astrophysics Data System (ADS)

    Kolokythas, Kostantinos; Vasileios, Salamalikis; Athanassios, Argiriou; Kazantzidis, Andreas

    2015-04-01

    The wind is a result of complex interactions of numerous mechanisms taking place in small or large scales, so better knowledge of its behavior is essential in a variety of applications, especially in the field of power production coming from wind turbines. In the literature there is a considerable number of models, either physical or statistical ones, dealing with the problem of simulation and prediction of wind speed. Among others, Artificial Neural Networks (ANNs) are widely used for the purpose of wind forecasting and, in the great majority of cases, outperform other conventional statistical models. In this study, a number of ANNs with different architectures, which have been created and applied to a dataset of wind time series, are compared to Auto Regressive Integrated Moving Average (ARIMA) statistical models. The data consist of mean hourly wind speeds coming from a wind farm on a hilly Greek region and cover a period of one year (2013). The main goal is to evaluate the models' ability to simulate successfully the wind speed at a significant point (target). Goodness-of-fit statistics are computed for the comparison of the different methods. In general, the ANNs showed the best performance in the estimation of wind speed, prevailing over the ARIMA models.
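    For context, the simplest member of the ARIMA family used as the baseline above is an AR(1) model, x_t = φ·x_{t-1} + noise, whose coefficient can be fitted by least squares on lagged pairs. The sketch below uses an invented synthetic series, not the study's wind data.

```python
import numpy as np

rng = np.random.default_rng(3)
phi_true = 0.8
x = np.zeros(500)
for t in range(1, 500):
    # Synthetic autoregressive "wind speed" series
    x[t] = phi_true * x[t - 1] + rng.normal(0, 0.1)

# Least-squares estimate of phi from consecutive (x_{t-1}, x_t) pairs.
phi_hat = float(np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1]))
forecast = phi_hat * x[-1]   # one-step-ahead simulation of the target
```

With a few hundred samples the estimator recovers the true coefficient closely; full ARIMA models add differencing and moving-average terms on top of this autoregressive core.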

  5. Stochastic simulation and spatial estimation with multiple data types using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Besaw, Lance E.; Rizzo, Donna M.

    2007-11-01

    A novel data-driven artificial neural network (ANN) that quantitatively combines large numbers of multiple types of soft data is presented for performing stochastic simulation and/or spatial estimation. A counterpropagation ANN is extended with a radial basis function to estimate parameter fields that reproduce the spatial structure exhibited in autocorrelated parameters. Applications involve using three geophysical properties measured on a slab of Berea sandstone and the delineation of landfill leachate at a site in the Netherlands using electrical formation conductivity as our primary variable and six types of secondary data (e.g., hydrochemistry, archaea, and bacteria). The ANN estimation fields are statistically similar to geostatistical methods (indicator simulation and cokriging) and reference fields (when available). The method is a nonparametric clustering/classification algorithm that can assimilate significant amounts of disparate data types with both continuous and categorical responses without the computational burden associated with the construction of positive definite covariance and cross-covariance matrices. The combination of simplicity and computational speed makes the method ideally suited for environmental subsurface characterization and other Earth science applications with spatially autocorrelated variables.
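    A minimal sketch of counterpropagation-style spatial estimation with a radial basis smoothing, in the spirit of the method above (the details here are illustrative, not the paper's algorithm). Hidden units are prototypes of observed locations, the output layer stores the value measured at each prototype, and a Gaussian RBF turns winner-take-all recall into a smooth estimate.

```python
import numpy as np

def cpn_estimate(train_xy, train_val, query_xy, sigma=1.0):
    """RBF-smoothed counterpropagation recall at query locations."""
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma**2))     # RBF activation of each prototype
    w /= w.sum(axis=1, keepdims=True)    # normalize (Nadaraya-Watson form)
    return w @ train_val                 # outstar recall: weighted values

# Toy autocorrelated field: value varies smoothly with location.
obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
val = np.array([1.0, 2.0, 2.0, 3.0])
est = cpn_estimate(obs, val, np.array([[0.5, 0.5]]), sigma=0.5)
# Equidistant from all four observations, the estimate is their mean (2.0).
```

Because recall needs only distances and a weighted average, no positive definite covariance matrix has to be constructed, which is the computational advantage the abstract notes over cokriging.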

  6. Neural network applications

    NASA Technical Reports Server (NTRS)

    Padgett, Mary L.; Desai, Utpal; Roppel, T.A.; White, Charles R.

    1993-01-01

    A design procedure is suggested for neural networks which accommodates the inclusion of such knowledge-based systems techniques as fuzzy logic and pairwise comparisons. The use of these procedures in the design of applications combines qualitative and quantitative factors with empirical data to yield a model with justifiable design and parameter selection procedures. The procedure is especially relevant to areas of back-propagation neural network design which are highly responsive to the use of precisely recorded expert knowledge.

  7. Iterative Radial Basis Functions Neural Networks as Metamodels of Stochastic Simulations of the Quality of Search Engines in the World Wide Web.

    ERIC Educational Resources Information Center

    Meghabghab, George

    2001-01-01

    Discusses the evaluation of search engines and uses neural networks in stochastic simulation of the number of rejected Web pages per search query. Topics include the iterative radial basis functions (RBF) neural network; precision; response time; coverage; Boolean logic; regression models; crawling algorithms; and implications for search engine…

  8. Simulation of field injection experiments in heterogeneous unsaturated media using cokriging and artificial neural network

    NASA Astrophysics Data System (ADS)

    Ye, Ming; Khaleel, Raziuddin; Schaap, Marcel G.; Zhu, Jianting

    2007-07-01

    Simulations of moisture flow in heterogeneous soils are often hampered by lack of measurements of soil hydraulic parameters, making it necessary to rely on other sources of information. In this paper, we develop a methodology to integrate data that can be easily obtained (for example, initial moisture content, θi, bulk density, and soil texture) with data on soil hydraulic properties via cokriging and Artificial Neural Network (ANN)-based pedotransfer functions. The method is applied to generate heterogeneous soil hydraulic parameters at a field injection site in southeastern Washington State. Stratigraphy at the site consists of imperfectly stratified layers with irregular layer boundaries. Cokriging is first used to generate three-dimensional heterogeneous fields of bulk density and soil texture using an extensive data set of field-measured θi, which carries a signature of site heterogeneity and stratigraphy. Soil texture and bulk density are subsequently input into an ANN-based site-specific pedotransfer function to generate three-dimensional heterogeneous soil hydraulic parameter fields. The stratigraphy at the site is well represented by the estimated pedotransfer variables and soil hydraulic parameters. The parameter estimates are then used to simulate a field injection experiment at the site. A relatively good agreement is obtained between the simulated and observed moisture contents. The spatial distribution pattern of observed moisture content as well as the southeastward moisture movement is captured well in the simulations. In contrast to earlier work using an effective parameter approach (Yeh et al., 2005), we are able to reproduce the observed splitting of the moisture plume in a coarse sand unit that is sandwiched between two fine-textured units. The simple method of combining cokriging and ANN for site characterization provides unbiased prediction of the observed moisture plume and is flexible so that additional measurements of various types can be

  9. Demonstration of Self-Training Autonomous Neural Networks in Space Vehicle Docking Simulations

    NASA Technical Reports Server (NTRS)

    Patrick, M. Clinton; Thaler, Stephen L.; Stevenson-Chavis, Katherine

    2006-01-01

    Neural Networks have been under examination for decades in many areas of research, with varying degrees of success and acceptance. Key goals of computer learning, rapid problem solution, and automatic adaptation have been elusive at best. This paper summarizes efforts at NASA's Marshall Space Flight Center harnessing such technology to autonomous space vehicle docking for the purpose of evaluating applicability to future missions.

  10. Using Multivariate Adaptive Regression Spline and Artificial Neural Network to Simulate Urbanization in Mumbai, India

    NASA Astrophysics Data System (ADS)

    Ahmadlou, M.; Delavar, M. R.; Tayyebi, A.; Shafizadeh-Moghadam, H.

    2015-12-01

    Land use change (LUC) models used for modelling urban growth are different in structure and performance. Local models divide the data into separate subsets and fit distinct models on each of the subsets. Non-parametric models are data driven and usually do not have a fixed model structure, or the model structure is unknown before the modelling process. On the other hand, global models perform modelling using all the available data. In addition, parametric models have a fixed structure before the modelling process and are model driven. Since few studies have compared local non-parametric models with global parametric models, this study compares a local non-parametric model called multivariate adaptive regression spline (MARS) and a global parametric model called artificial neural network (ANN) to simulate urbanization in Mumbai, India. Both models determine the relationship between a dependent variable and multiple independent variables. We used receiver operating characteristic (ROC) analysis to compare the power of both models for simulating urbanization. Landsat images of 1991 (TM) and 2010 (ETM+) were used for modelling the urbanization process. The drivers considered for urbanization in this area were distance to urban areas, urban density, distance to roads, distance to water, distance to forest, distance to railway, distance to central business district, number of agricultural cells in a 7 by 7 neighbourhood, and slope in 1991. The results showed that the area under the ROC curve for MARS and ANN was 94.77% and 95.36%, respectively. Thus, ANN performed slightly better than MARS to simulate urban areas in Mumbai, India.

  11. Linear eddy mixing based tabulation and artificial neural networks for large eddy simulations of turbulent flames

    SciTech Connect

    Sen, Baris Ali; Menon, Suresh

    2010-01-15

    A large eddy simulation (LES) sub-grid model is developed based on the artificial neural network (ANN) approach to calculate the species instantaneous reaction rates for multi-step, multi-species chemical kinetics mechanisms. The proposed methodology depends on training the ANNs off-line on a thermo-chemical database representative of the actual composition and turbulence (but not the actual geometrical problem) of interest, and later using them to replace the stiff ODE solver (direct integration (DI)) to calculate the reaction rates in the sub-grid. The thermo-chemical database is tabulated with respect to the thermodynamic state vector without any reduction in the number of state variables. The thermo-chemistry is evolved by stand-alone linear eddy mixing (LEM) model simulations under both premixed and non-premixed conditions, where the unsteady interaction of turbulence with chemical kinetics is included as a part of the training database. The proposed methodology is tested in LES and in stand-alone LEM studies of three distinct test cases with different reduced mechanisms and conditions. LES of premixed flame-turbulence-vortex interaction provides direct comparison of the proposed ANN method against DI and ANNs trained on thermo-chemical database created using another type of tabulation method. It is shown that the ANN trained on the LEM database can capture the correct flame physics with accuracy comparable to DI, which cannot be achieved by ANN trained on a laminar premix flame database. A priori evaluation of the ANN generality within and outside its training domain is carried out using stand-alone LEM simulations as well. Results in general are satisfactory, and it is shown that the ANN provides considerable amount of memory saving and speed-up with reasonable and reliable accuracy. The speed-up is strongly affected by the stiffness of the reduced mechanism used for the computations, whereas the memory saving is considerable regardless. (author)
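    The core idea above, sampling an expensive chemistry evaluation offline into a thermochemical database and answering online queries with a cheap trained surrogate, can be sketched as follows. A toy Arrhenius law stands in for direct integration of stiff kinetics, and a nearest-table lookup stands in for the trained ANN purely to keep the sketch short; the rate constants are invented.

```python
import numpy as np

def direct_rate(T, Y, A=1e4, Ta=8000.0):
    """'Expensive' reference evaluation (toy Arrhenius reaction rate)."""
    return A * Y * np.exp(-Ta / T)

# Offline: tabulate the database on a grid of (temperature, mass fraction).
Ts = np.linspace(1000.0, 2500.0, 60)
Ys = np.linspace(0.0, 1.0, 40)
TT, YY = np.meshgrid(Ts, Ys, indexing="ij")
table = direct_rate(TT, YY)

def surrogate_rate(T, Y):
    """Cheap online query: nearest entry of the pre-computed table."""
    i = np.abs(Ts - T).argmin()
    j = np.abs(Ys - Y).argmin()
    return table[i, j]

# The surrogate stays close to the direct evaluation inside the table range.
rel_err = abs(surrogate_rate(1800.0, 0.42) - direct_rate(1800.0, 0.42)) \
          / direct_rate(1800.0, 0.42)
```

The speed-up reported in the abstract comes from replacing the stiff ODE solve at every sub-grid cell with this kind of trained lookup, while the accuracy depends, as the authors stress, on the training database covering the actual turbulence-chemistry states encountered.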

  12. Studies of stimulus parameters for seizure disruption using neural network simulations

    PubMed Central

    Kudela, Pawel; Cho, Ryan J.; Bergey, Gregory K.; Franaszczuk, Piotr

    2009-01-01

    A large scale neural network simulation with realistic cortical architecture has been undertaken to investigate the effects of external electrical stimulation on the propagation and evolution of ongoing seizure activity. This is an effort to explore the parameter space of stimulation variables to uncover promising avenues of research for this therapeutic modality. The model consists of an approximately 800 μm × 800 μm region of simulated cortex, and includes seven neuron classes organized by cortical layer, inhibitory or excitatory properties, and electrophysiological characteristics. The cell dynamics are governed by a modified version of the Hodgkin-Huxley equations in single compartment format. Axonal connections are patterned after histological data and published models of local cortical wiring. Stimulation induced action potentials take place at the axon initial segments, according to threshold requirements on the applied electric field distribution. Stimulation induced action potentials in horizontal axonal branches are also separately simulated. The calculations are performed on a 16 node distributed 32-bit processor system. Clear differences in seizure evolution are presented for stimulated versus the undisturbed rhythmic activity. Data are provided for frequency dependent stimulation effects demonstrating a plateau effect of stimulation efficacy as the applied frequency is increased from 60 Hz to 200 Hz. Timing of the stimulation with respect to the underlying rhythmic activity demonstrates a phase dependent sensitivity. Electrode height and position effects are also presented. Using a dipole stimulation electrode arrangement, clear orientation effects of the dipole with respect to the model connectivity are also demonstrated. A sensitivity analysis of these results as a function of the stimulation threshold is also provided. PMID:17619199
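    The single-compartment Hodgkin-Huxley formulation used for the cell dynamics above can be sketched with the classic squid-axon constants and forward-Euler integration (the cortical model's modified parameters are not reproduced here).

```python
import numpy as np

# Classic Hodgkin-Huxley gating rate functions (V in mV, rates in 1/ms).
def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

def simulate(I_ext=10.0, dt=0.01, t_end=50.0):
    """Forward-Euler integration of one HH compartment (C = 1 uF/cm^2)."""
    gNa, gK, gL = 120.0, 36.0, 0.3        # mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.4      # mV
    V, n, m, h = -65.0, 0.317, 0.053, 0.596  # resting state
    Vs = []
    for _ in range(int(t_end / dt)):
        INa = gNa * m**3 * h * (V - ENa)
        IK = gK * n**4 * (V - EK)
        IL = gL * (V - EL)
        V += dt * (I_ext - INa - IK - IL)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        Vs.append(V)
    return np.array(Vs)

V = simulate()
# With 10 uA/cm^2 of injected current the membrane fires action potentials.
```

In the full model, thousands of such compartments are coupled through the patterned axonal connectivity, and external stimulation enters as threshold-triggered spikes at the axon initial segments.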

  13. Centroid calculation using neural networks

    NASA Astrophysics Data System (ADS)

    Himes, Glenn S.; Inigo, Rafael M.

    1992-01-01

    Centroid calculation provides a means of eliminating translation problems, which is useful for automatic target recognition. A neural network implementation of centroid calculation is described that uses a spatial filter and a Hopfield network to determine the centroid location of an object. Spatial filtering of a segmented window creates a result whose peak value occurs at the centroid of the input data set. A Hopfield network then finds the location of this peak and hence gives the location of the centroid. Hardware implementations of the networks are described and simulation results are provided.
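    The filter-then-peak idea can be sketched as follows: score each pixel by the negated sum of squared distances to all object pixels, which peaks exactly at the object's centroid (the mean minimizes the sum of squared distances), and let an argmax stand in for the Hopfield winner-take-all stage. The window size and object are made up.

```python
import numpy as np

def centroid_by_filter(img):
    """Return the pixel where the spatial-filter response peaks."""
    ys, xs = np.nonzero(img)              # object pixel coordinates
    H, W = img.shape
    score = np.full((H, W), -np.inf)
    for i in range(H):
        for j in range(W):
            # Negated sum of squared distances to every object pixel
            score[i, j] = -float(((ys - i) ** 2 + (xs - j) ** 2).sum())
    return np.unravel_index(score.argmax(), score.shape)

img = np.zeros((9, 9))
img[2:5, 3:8] = 1                         # a 3x5 rectangular object
peak = centroid_by_filter(img)
# The peak coincides with the object's true centroid: row 3, column 5.
```

In the hardware version described above, the filtering is done by an analog convolution stage and the peak search by a Hopfield network converging to a single active neuron.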

  14. Application of artificial neural networks in hydrological modeling: A case study of runoff simulation of a Himalayan glacier basin

    NASA Technical Reports Server (NTRS)

    Buch, A. M.; Narain, A.; Pandey, P. C.

    1994-01-01

    The simulation of runoff from a Himalayan Glacier basin using an Artificial Neural Network (ANN) is presented. The performance of the ANN model is found to be superior to the Energy Balance Model and the Multiple Regression model. The RMS Error is used as the figure of merit for judging the performance of the three models, and the RMS Error for the ANN model is the lowest of the three models. The ANN is faster in learning and exhibits excellent system generalization characteristics.

  15. Large eddy simulation of extinction and reignition with artificial neural networks based chemical kinetics

    SciTech Connect

    Sen, Baris Ali; Menon, Suresh; Hawkes, Evatt R.

    2010-03-15

    Large eddy simulation (LES) of a non-premixed, temporally evolving, syngas/air flame is performed with special emphasis on speeding-up the sub-grid chemistry computations using an artificial neural networks (ANN) approach. The numerical setup for the LES is identical to a previous direct numerical simulation (DNS) study, which reported considerable local extinction and reignition physics, and hence, offers a challenging test case. The chemical kinetics modeling with ANN is based on a recent approach, and replaces the stiff ODE solver (DI) to predict the species reaction rates in the subgrid linear eddy mixing (LEM) model based LES (LEMLES). In order to provide a comprehensive evaluation of the current approach, additional information on conditional statistics of some of the key species and temperature are extracted from the previous DNS study and are compared with the LEMLES using ANN (ANN-LEMLES, hereafter). The results show that the current approach can detect the correct extinction and reignition physics with reasonable accuracy compared to the DNS. The syngas flame structure and the scalar dissipation rate statistics obtained by the current ANN-LEMLES are provided to further probe the flame physics. It is observed that, in contrast to H{sub 2}, CO exhibits a smooth variation within the region enclosed by the stoichiometric mixture fraction. The probability density functions (PDFs) of the scalar dissipation rates calculated based on the mixture fraction and CO demonstrate that the mean value of the PDF is insensitive to extinction and reignition. However, this is not the case for the scalar dissipation rate calculated by the OH mass fraction. Overall, ANN provides considerable computational speed-up and memory saving compared to DI, and can be used to investigate turbulent flames in a computationally affordable manner. (author)

  16. Design and simulation of a multiport neural network heteroassociative memory for optical pattern recognitions

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir; Lazarev, Alexander; Grabovlyak, Sveta

    2012-04-01

    Modified matrix equivalency models (MMEMs) of multiport neural network heteroassociative memory (MP_NN_HAM) with double adaptive equivalency weighing (DAEW) are proposed for recognition of 1D and 2D patterns (images). It is shown that, with the proposed MMEMs, the computing process in the MP_NN_HAM reduces to two-step and multi-step algorithms and step-by-step matrix-matrix (tensor-tensor) procedures. The base operations and structural components for constructing the MP_NN_HAM are matrix-matrix multipliers and matrices of nonlinear converters, including threshold transformations. The advantages of such MMEMs for the MP_NN_HAM are shown and confirmed by computer simulation results. The aim of the paper is to study improved models of the MP_NN_HAM for input 1D and 2D signals with unipolar coding and to determine their capacity. The computer simulation results confirm the promise of such models; in particular, a capacity exceeding the number of neurons was obtained for an MP_NN_HAM based on MMEMs. This memory is intended to recognize and refresh, in parallel, P distorted input images (N-element vectors); such an MP_NN_HAM is in effect a combination of P independently functioning NN_HAMs with common memory. Variants of optical realization of MP_NN_HAM architectures are considered in the paper. The whole system consists of two matrix-matrix (for 1D patterns) or two tensor-tensor (for 2D patterns) equivalentors (E) or nonequivalentors (NE) (MME and MMNE, or TTE and TTNE). The proposed E (or NE) architecture with temporal integration offers a larger HAM dimension and a simpler design.

  17. Electronic neural networks for global optimization

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Moopenn, A. W.; Eberhardt, S.

    1990-01-01

    An electronic neural network with feedback architecture, implemented in analog custom VLSI is described. Its application to problems of global optimization for dynamic assignment is discussed. The convergence properties of the neural network hardware are compared with computer simulation results. The neural network's ability to provide optimal or near optimal solutions within only a few neuron time constants, a speed enhancement of several orders of magnitude over conventional search methods, is demonstrated. The effect of noise on the circuit dynamics and the convergence behavior of the neural network hardware is also examined.

  18. Numerical Simulation and Artificial Neural Network Modeling for Predicting Welding-Induced Distortion in Butt-Welded 304L Stainless Steel Plates

    NASA Astrophysics Data System (ADS)

    Narayanareddy, V. V.; Chandrasekhar, N.; Vasudevan, M.; Muthukumaran, S.; Vasantharaja, P.

    2016-02-01

    In the present study, artificial neural network modeling has been employed for predicting welding-induced angular distortions in autogenous butt-welded 304L stainless steel plates. The input data for the neural network have been obtained from a series of three-dimensional finite element simulations of TIG welding for a wide range of plate dimensions. Thermo-elasto-plastic analysis was carried out for 304L stainless steel plates during autogenous TIG welding employing double ellipsoidal heat source. The simulated thermal cycles were validated by measuring thermal cycles using thermocouples at predetermined positions, and the simulated distortion values were validated by measuring distortion using vertical height gauge for three cases. There was a good agreement between the model predictions and the measured values. Then, a multilayer feed-forward back propagation neural network has been developed using the numerically simulated data. Artificial neural network model developed in the present study predicted the angular distortion accurately.

  19. Hyperbolic Hopfield neural networks.

    PubMed

    Kobayashi, M

    2013-02-01

    In recent years, several neural networks using Clifford algebra have been studied. Clifford algebra is also called geometric algebra. Complex-valued Hopfield neural networks (CHNNs) are the most popular neural networks using Clifford algebra. The aim of this brief is to construct hyperbolic HNNs (HHNNs) as an analog of CHNNs. Hyperbolic algebra is a Clifford algebra based on Lorentzian geometry. In this brief, a hyperbolic neuron is defined in a manner analogous to a phasor neuron, which is a typical complex-valued neuron model. HHNNs share common concepts with CHNNs, such as the angle and energy. However, HHNNs and CHNNs are different in several aspects. The states of hyperbolic neurons do not form a circle, and, therefore, the start and end states are not identical. In the quantized version, unlike complex-valued neurons, hyperbolic neurons have an infinite number of states. PMID:24808287
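    The hyperbolic (split-complex) numbers underlying these neurons can be illustrated directly: z = a + b·u with u² = +1, so multiplication by unit elements cosh(t) + u·sinh(t) preserves the Lorentzian norm a² − b². Unlike the complex unit circle, this unit hyperbola is unbounded, which is why the quantized hyperbolic neuron has infinitely many states. The small API below is invented for the sketch.

```python
import math

def hmul(z, w):
    """Product of hyperbolic numbers (a + b*u)(c + d*u), using u^2 = +1."""
    a, b = z; c, d = w
    return (a * c + b * d, a * d + b * c)

def hnorm(z):
    """Lorentzian (indefinite) norm a^2 - b^2."""
    a, b = z
    return a * a - b * b

def hyperbolic_state(t):
    """Unit element cosh(t) + u*sinh(t): hnorm == 1 for every real t."""
    return (math.cosh(t), math.sinh(t))

z = (3.0, 1.0)                              # hnorm(z) == 9 - 1 == 8
rotated = hmul(z, hyperbolic_state(0.7))    # "hyperbolic rotation" of a state
# The norm is invariant under the rotation, analogous to phase rotation
# on the complex unit circle in a CHNN.
```

The energy arguments for HHNNs then play out on this hyperbola in the same way the CHNN energy plays out on the circle, with the angle replaced by the hyperbolic parameter t.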

  20. Self-organization of neural networks

    NASA Astrophysics Data System (ADS)

    Clark, John W.; Winston, Jeffrey V.; Rafelski, Johann

    1984-05-01

    The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm ("brainwashing") is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena.
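    A generic sketch of this kind of development: a discrete-time net whose coupling strengths are nudged each step according to momentary coactivity. The specific "brainwashing" algorithm is not reproduced here; the Hebbian-style rule, network size, and rates below are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20
W = rng.normal(0, 1, (N, N))                 # quasirandom initial couplings
np.fill_diagonal(W, 0.0)
W0 = W.copy()
state = (rng.random(N) < 0.5).astype(float)  # binary neuron activity

eta = 0.05
for _ in range(100):
    state = (W @ state > 0).astype(float)    # synchronous threshold update
    s = 2 * state - 1                        # map {0,1} -> {-1,+1}
    W += eta * np.outer(s, s)                # strengthen coactive couplings
    np.fill_diagonal(W, 0.0)

# The Hebbian increments are symmetric, so the asymmetric part of the
# initial connectivity is preserved while the symmetric part develops.
```

Repeated visits to the same activity patterns deepen the corresponding couplings, which is the mechanism by which such development makes the net suitable for memory simulation.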

  1. Advanced telerobotic control using neural networks

    NASA Technical Reports Server (NTRS)

    Pap, Robert M.; Atkins, Mark; Cox, Chadwick; Glover, Charles; Kissel, Ralph; Saeks, Richard

    1993-01-01

    Accurate Automation is designing and developing adaptive decentralized joint controllers using neural networks. We are implementing these in hardware for the Marshall Space Flight Center PFMA, to be usable also with the Remote Manipulator System (RMS) robot arm. Our design is being realized in hardware after completion of the software simulation. This is implemented using a Functional-Link neural network.

  2. Nested neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1988-01-01

    Nested neural networks, consisting of small interconnected subnetworks, allow for the storage and retrieval of neural state patterns of different sizes. The subnetworks are naturally categorized in layers corresponding to spatial frequencies in the pattern field. The storage capacity and the error correction capability of the subnetworks generally increase with the degree of connectivity between layers (the nesting degree). Storage of only a few subpatterns in each subnetwork results in a vast storage capacity of patterns and subpatterns in the nested network, maintaining high stability and error correction capability.

  3. Neural Networks and Micromechanics

    NASA Astrophysics Data System (ADS)

    Kussul, Ernst; Baidyk, Tatiana; Wunsch, Donald C.

    The title of the book, "Neural Networks and Micromechanics," seems artificial. However, the scientific and technological developments in recent decades demonstrate a very close connection between the two different areas of neural networks and micromechanics. The purpose of this book is to demonstrate this connection. Some artificial intelligence (AI) methods, including neural networks, could be used to improve automation system performance in manufacturing processes. However, the implementation of these AI methods within industry is rather slow because of the high cost of conducting experiments using conventional manufacturing and AI systems. To lower the cost, we have developed special micromechanical equipment that is similar to conventional mechanical equipment but of much smaller size and therefore of lower cost. This equipment could be used to evaluate different AI methods in an easy and inexpensive way. The proved methods could be transferred to industry through appropriate scaling. In this book, we describe the prototypes of low cost microequipment for manufacturing processes and the implementation of some AI methods to increase precision, such as computer vision systems based on neural networks for microdevice assembly and genetic algorithms for microequipment characterization and the increase of microequipment precision.

  4. Generalized Adaptive Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.
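    The idea of adjusting neuron temperatures alongside synaptic weights can be sketched for a single neuron: the temperature sets the steepness of the squashing function, and gradient descent on squared error updates both parameters. The data, rates, and one-neuron setup are illustrative, not the model of the record above.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, (100, 1))
y = np.tanh(1.5 * X)                     # target generated with gain 1.5

w, T = 1.0, 2.0                          # initial weight and temperature
mse0 = float(np.mean((np.tanh(w * X / T) - y) ** 2))
lr = 0.05
for _ in range(5000):
    net = w * X
    out = np.tanh(net / T)
    dnet = (out - y) * (1 - out**2)      # error back through tanh
    w -= lr * float((dnet * X / T).mean())        # d out/d w = (1-out^2) x / T
    T -= lr * float((dnet * -net / T**2).mean())  # d out/d T = -(1-out^2) net / T^2
mse = float(np.mean((np.tanh(w * X / T) - y) ** 2))
# Both w and T move so that the effective gain w/T approaches 1.5.
```

Letting the temperature adapt gives the network a second way to shape each neuron's response, which is the extra degree of freedom the generalized model exploits.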

  5. Improved Autoassociative Neural Networks

    NASA Technical Reports Server (NTRS)

    Hand, Charles

    2003-01-01

    Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application-specific integrated circuit would be needed. A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
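
    The dynamics described above, in which each neuron's next state is the sign of the inner product of the current state vector with its weights, can be sketched as a minimal Hopfield-style autoassociative network. This is an illustrative sketch only; the nexus bit-weight hardware scheme itself is not reproduced here, and the pattern is arbitrary.

```python
def train_hebbian(patterns):
    """Hebbian outer-product weights for binary (+1/-1) patterns."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:          # no self-connection in this variant
                    w[i][j] += p[i] * p[j]
    return w

def step(w, state):
    """Synchronous update: each neuron takes the sign of its weighted input."""
    return [1 if sum(wij * s for wij, s in zip(row, state)) >= 0 else -1
            for row in w]

pattern = [1, -1, 1, -1, 1, -1, 1, -1]
w = train_hebbian([pattern])
probe = pattern[:]
probe[0] = -probe[0]        # corrupt one bit of the stored pattern
recalled = step(w, probe)   # one update step restores the stored pattern
```

    One synchronous step suffices here because only a single pattern is stored; with several stored patterns the network would be iterated until the state vector stops changing.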

  6. Neural networks and applications tutorial

    NASA Astrophysics Data System (ADS)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances of computers and the availability of custom integrated circuits, permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention in artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures that were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.

  7. Overview of artificial neural networks.

    PubMed

    Zou, Jinming; Han, Yi; So, Sung-Sau

    2008-01-01

    The artificial neural network (ANN), or simply neural network, is a machine learning method evolved from the idea of simulating the human brain. The data explosion in modern drug discovery research requires sophisticated analysis methods to uncover the hidden causal relationships between single or multiple responses and a large set of properties. The ANN is one of many versatile tools to meet the demand in drug discovery modeling. Compared to a traditional regression approach, the ANN is capable of modeling complex nonlinear relationships. The ANN also has excellent fault tolerance and is fast and highly scalable with parallel processing. This chapter introduces the background of ANN development and outlines the basic concepts crucially important for understanding more sophisticated ANNs. Several commonly used learning methods and network setups are discussed briefly at the end of the chapter. PMID:19065803

  8. Neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.

    1991-01-01

    A whole new arena of computer technologies is now beginning to form. Still in its infancy, neural network technology is a biologically inspired methodology which draws on nature's own cognitive processes. The Software Technology Branch has provided a software tool, Neural Execution and Training System (NETS), to industry, government, and academia to facilitate and expedite the use of this technology. NETS is written in the C programming language and can be executed on a variety of machines. Once a network has been debugged, NETS can produce C source code that implements the network. This code can then be incorporated into other software systems. Described here are various software projects currently under development with NETS and the anticipated future enhancements to NETS and the technology.

  9. Coherence resonance in bursting neural networks

    NASA Astrophysics Data System (ADS)

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J.

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal—a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  10. Multispectral-image fusion using neural networks

    NASA Astrophysics Data System (ADS)

    Kagel, Joseph H.; Platt, C. A.; Donaven, T. W.; Samstad, Eric A.

    1990-08-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulation results and a description of the prototype system are presented.

  11. Multispectral image fusion using neural networks

    NASA Technical Reports Server (NTRS)

    Kagel, J. H.; Platt, C. A.; Donaven, T. W.; Samstad, E. A.

    1990-01-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  12. Parallel processing neural networks

    SciTech Connect

    Zargham, M.

    1988-09-01

    A model for neural networks based on a particular kind of Petri Net has been introduced. The model has been implemented in C and runs on the Sequent Balance 8000 multiprocessor; however, it can be directly ported to different multiprocessor environments. The potential advantages of using Petri Nets include: (1) the overall system is often easier to understand due to the graphical and precise nature of the representation scheme, (2) the behavior of the system can be analyzed using Petri Net theory. Although the Petri Net is an obvious choice as a basis for the model, the basic Petri Net definition is not adequate to represent the neuronal system. To eliminate certain inadequacies, more information has been added to the Petri Net model. In the model, a token represents either a processor or a postsynaptic potential. Progress through a particular neural network is thus graphically depicted in the movement of the processor tokens through the Petri Net.

  13. Network Simulation

    SciTech Connect

    Fujimoto, Richard; Perumalla, Kalyan S; Riley, George F.

    2006-01-01

    A detailed introduction to the design, implementation, and use of network simulation tools is presented. The requirements and issues faced in the design of simulators for wired and wireless networks are discussed. Abstractions such as packet- and fluid-level network models are covered. Several existing simulators are given as examples, with details and rationales regarding design decisions presented. Issues regarding performance and scalability are discussed in detail, describing how one can utilize distributed simulation methods to increase the scale and performance of a simulation environment. Finally, a case study of two simulation tools that have been developed using distributed simulation techniques is presented. This text is essential to any student, researcher, or network architect desiring a detailed understanding of how network simulation tools are designed, implemented, and used.

  14. Uniformly sparse neural networks

    NASA Astrophysics Data System (ADS)

    Haghighi, Siamack

    1992-07-01

    Application of neural networks to problems with a large number of sensory inputs is severely limited when the processing elements (PEs) need to be fully connected. This paper presents a new network model in which a trade-off between the number of connections to a node and the number of processing layers can be made. This trade-off is an important issue in the VLSI implementation of neural networks. The performance and capability of a hierarchical pyramidal network architecture of limited fan-in PE layers is analyzed. Analysis of this architecture requires the development of a new learning rule, since each PE has access to limited information about the entire network input. A spatially local unsupervised training rule is developed in which each PE optimizes the fraction of its output variance contributed by input correlations, resulting in PEs behaving as adaptive local correlation detectors. It is also shown that the output of a PE optimally represents the mutual information among the inputs to that PE. Applications of the developed model in image compression and motion detection are presented.
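
    The idea of a PE acting as an adaptive local correlation detector can be illustrated with Oja's rule, a standard spatially local variance-maximizing update used here as a stand-in for the paper's training rule. The two-channel data, learning rate, and iteration count are arbitrary illustrative choices.

```python
import math
import random

random.seed(3)

# Two correlated inputs: the second channel echoes the first plus noise.
w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
for _ in range(5000):
    s = random.gauss(0, 1)
    x = [s, 0.9 * s + random.gauss(0, 0.2)]
    y = w[0] * x[0] + w[1] * x[1]            # linear PE output
    # Oja's rule: Hebbian growth plus a normalizing decay term
    w = [wi + 0.01 * y * (xi - y * wi) for wi, xi in zip(w, x)]

# The weight vector converges to (roughly) unit length, aligned with the
# principal correlation direction of the inputs.
norm = math.hypot(w[0], w[1])
```

    With purely local updates the unit ends up projecting its inputs onto their dominant correlated component, which is the behavior the abstract attributes to its PEs.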

  15. Algorithm For A Self-Growing Neural Network

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.

    1996-01-01

    CID3 algorithm simulates self-growing neural network. Constructs decision trees equivalent to hidden layers of neural network. Based on ID3 algorithm, which dynamically generates decision tree while minimizing entropy of information. CID3 algorithm generates feedforward neural network by use of either crisp or fuzzy measure of entropy.

  16. Space-Time Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Shelton, Robert O.

    1992-01-01

    Concept of space-time neural network affords distributed temporal memory enabling such network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace synaptic-connection weights of conventional back-error-propagation neural network.

  17. Prediction of molecular-dynamics simulation results using feedforward neural networks: Reaction of a C2 dimer with an activated diamond (100) surface

    NASA Astrophysics Data System (ADS)

    Agrawal, Paras M.; Samadh, Abdul N. A.; Raff, Lionel M.; Hagan, Martin T.; Bukkapatnam, Satish T.; Komanduri, Ranga

    2005-12-01

    A new approach involving neural networks combined with molecular dynamics has been used for the determination of reaction probabilities as a function of various input parameters for the reactions associated with the chemical-vapor deposition of carbon dimers on a diamond (100) surface. The data generated by the simulations have been used to train and test neural networks. The probabilities of chemisorption, scattering, and desorption as a function of input parameters, such as rotational energy, translational energy, and direction of the incident velocity vector of the carbon dimer, have been considered. The very good agreement obtained between the predictions of neural networks and those provided by molecular dynamics and the fact that, after training the network, the determination of the interpolated probabilities as a function of various input parameters involves only the evaluation of simple analytical expressions rather than computationally intensive algorithms show that neural networks are extremely powerful tools for interpolating the probabilities and rates of chemical reactions. We also find that a neural network fits the underlying trends in the data rather than the statistical variations present in the molecular-dynamics results. Consequently, neural networks can also provide a computationally convenient means of averaging the statistical variations inherent in molecular-dynamics calculations. In the present case the application of this method is found to reduce the statistical uncertainty in the molecular-dynamics results by about a factor of 3.5.

  18. Radar signal categorization using a neural network

    NASA Technical Reports Server (NTRS)

    Anderson, James A.; Gately, Michael T.; Penz, P. Andrew; Collins, Dean R.

    1991-01-01

    Neural networks were used to analyze a complex simulated radar environment which contains noisy radar pulses generated by many different emitters. The neural network used is an energy minimizing network (the BSB model) which forms energy minima - attractors in the network dynamical system - based on learned input data. The system first determines how many emitters are present (the deinterleaving problem). Pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, it is possible to make tentative identifications of them based on their observed parameters. As a test of this idea, a neural network was used to form a small data base that potentially could make emitter identifications.

  19. Co-combustion of peanut hull and coal blends: Artificial neural networks modeling, particle swarm optimization and Monte Carlo simulation.

    PubMed

    Buyukada, Musa

    2016-09-01

    Co-combustion of coal and peanut hull (PH) was investigated using artificial neural networks (ANN), particle swarm optimization, and Monte Carlo simulation as a function of blend ratio, heating rate, and temperature. The best prediction was reached by the ANN61 multi-layer perceptron model with an R² of 0.99994. A blend ratio of 90 to 10 (PH to coal, wt%), a temperature of 305 °C, and a heating rate of 49 °C min⁻¹ were determined as the optimum input values, and a yield of 87.4% was obtained under PSO-optimized conditions. The validation experiments resulted in yields of 87.5 ± 0.2% after three replications. Monte Carlo simulations were used for the probabilistic assessment of stochastic variability and uncertainty associated with explanatory variables of the co-combustion process. PMID:27243606
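
    The optimization step can be sketched with a minimal particle swarm optimizer locating the best temperature of a hypothetical one-dimensional stand-in for the co-combustion response surface. The quadratic objective and every parameter value below are invented for the example and are not taken from the study.

```python
import random

random.seed(7)

def pso_minimize(f, lo, hi, n_particles=20, iters=200):
    """Minimal one-dimensional particle swarm optimizer."""
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                      # personal best positions
    gbest = min(pos, key=f)             # global best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (0.7 * vel[i]                      # inertia
                      + 1.5 * r1 * (pbest[i] - pos[i])  # cognitive pull
                      + 1.5 * r2 * (gbest - pos[i]))    # social pull
            pos[i] += vel[i]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=f)
    return gbest

# Hypothetical yield surface peaking at 305 degrees C (minimize the loss).
best_temp = pso_minimize(lambda t: (t - 305.0) ** 2, 200.0, 400.0)
```

    In practice the objective would be the trained ANN's predicted yield rather than a closed-form function; PSO only needs to evaluate it pointwise.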

  20. Interacting neural networks.

    PubMed

    Metzler, R; Kinzel, W; Kanter, I

    2000-08-01

    Several scenarios of interacting neural networks which are trained either in an identical or in a competitive way are solved analytically. In the case of identical training, each perceptron receives the output of its neighbor. The symmetry of the stationary state as well as the sensitivity to the training algorithm used are investigated. Two competitive perceptrons trained on mutually exclusive learning aims and a perceptron which is trained on the opposite of its own output are examined analytically. An ensemble of competitive perceptrons is used as the decision-making algorithm in a model of a closed market (the El Farol Bar problem, or Minority Game, in which a set of agents must each make a binary decision); each network is trained on the history of minority decisions. This ensemble of perceptrons relaxes to a stationary state whose performance can be better than random. PMID:11088736
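
    A minimal version of this minority-game setup, with perceptron agents trained on the history of minority decisions, can be sketched as follows. The agent count, memory length, learning rate, and step count are arbitrary illustrative choices, not the paper's parameters.

```python
import random

random.seed(0)

N_AGENTS, MEMORY, STEPS, LR = 31, 5, 200, 0.1
weights = [[random.uniform(-1, 1) for _ in range(MEMORY)]
           for _ in range(N_AGENTS)]
history = [random.choice([-1, 1]) for _ in range(MEMORY)]

majority_sizes = []
for _ in range(STEPS):
    # Each perceptron decides from the last MEMORY minority outcomes.
    decisions = [1 if sum(wk * hk for wk, hk in zip(w, history)) >= 0 else -1
                 for w in weights]
    total = sum(decisions)
    minority = -1 if total > 0 else 1          # the less-crowded side wins
    majority_sizes.append((N_AGENTS + abs(total)) // 2)
    for w in weights:                          # every agent learns the outcome
        for k in range(MEMORY):
            w[k] += LR * minority * history[k]
    history = history[1:] + [minority]

# Average number of losing (majority-side) agents per round; random play
# gives roughly N/2 plus a fluctuation term.
mean_majority = sum(majority_sizes) / STEPS
```

    The interesting quantity is how far `mean_majority` stays above the floor of (N+1)/2; the paper's analytical treatment characterizes when the trained ensemble beats the random baseline.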

  1. Warranty optimisation based on the prediction of costs to the manufacturer using neural network model and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Stamenkovic, Dragan D.; Popovic, Vladimir M.

    2015-02-01

    Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of warranty's marketing potential, the manufacturer needs to master the techniques for warranty cost prediction according to the reliability characteristics of the product. In this paper a combination free-replacement and pro-rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such analysis. A neural network model is used to predict light bulb reliability characteristics based on the data from tests of light bulbs in various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. Reliability parameters obtained in this way are later used in Monte Carlo simulation for the prediction of times to failure needed for warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In such a way, the manufacturer can lower the costs and increase the profit.
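
    The Monte Carlo step, sampling times to failure and accumulating costs under a combination free-replacement/pro-rata policy, can be sketched as below. The Weibull life distribution and every numerical parameter are hypothetical stand-ins, not the paper's fitted values.

```python
import random

random.seed(42)

def warranty_cost_mc(n_sims, scale, shape, w1, w, unit_cost):
    """Monte Carlo mean per-unit warranty cost: free replacement before w1,
    pro-rata rebate between w1 and w, no cost after w."""
    total = 0.0
    for _ in range(n_sims):
        t = random.weibullvariate(scale, shape)      # sampled time to failure
        if t < w1:
            total += unit_cost                       # free replacement
        elif t < w:
            total += unit_cost * (w - t) / (w - w1)  # pro-rata share
    return total / n_sims

# Hypothetical light-bulb life (hours) and warranty limits.
cost = warranty_cost_mc(20_000, scale=1500.0, shape=2.0,
                        w1=500.0, w=1000.0, unit_cost=5.0)
```

    In the paper's workflow the Weibull-type parameters would come from the neural network's reliability predictions for the expected operating conditions, and the policy limits `w1` and `w` would be the decision variables being compared.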

  2. Neural network modeling of emotion

    NASA Astrophysics Data System (ADS)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  3. Time-Dependent DIII-D Heat Transport Simulations Using Neural-Network Models

    NASA Astrophysics Data System (ADS)

    Penna, J. M.; Smith, S. P.; Meneghini, O.; Luna, C. J.

    2014-10-01

    The neural network transport model BRAINFUSE has been developed to produce transport fluxes based on local parameters. The BRAINFUSE model has been integrated into the transport modeling framework ONETWO in order to develop time-dependent solutions and has been validated by artificially varying the input neutral beam power and comparing the output to DIII-D scans. These efforts have led to the development of a time-dependent workflow within the OMFIT integrated modeling framework. The new workflow can evolve the electron and ion temperatures as a function of time-dependent sources and equilibria. The effects of different engineering parameters can be explored and optimized in support of DIII-D operations. The efficiency of this workflow enables planning of plasma operations for next-day experiments, as will be required for ITER. Work supported in part by the National Undergraduate Fellowship Program in Plasma Physics and Fusion Energy Sciences and the US Department of Energy under DE-FG02-94ER54235 & DE-FC02-04ER54698.

  4. Constructing prediction interval for artificial neural network rainfall runoff models based on ensemble simulations

    NASA Astrophysics Data System (ADS)

    Kasiviswanathan, K. S.; Cibin, R.; Sudheer, K. P.; Chaubey, I.

    2013-08-01

    This paper presents a method of constructing prediction intervals for artificial neural network (ANN) rainfall-runoff models during calibration with a consideration of generating ensemble predictions. A two-stage optimization procedure is envisaged in this study for construction of the prediction interval for the ANN output. In Stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases. In Stage 2, the possible variability of the ANN parameters (obtained in Stage 1) is optimized so as to create an ensemble of models, with the consideration of minimum residual variance for the ensemble mean, while ensuring that a maximum of the measured data fall within the estimated prediction interval. The width of the prediction interval is also minimized simultaneously. The method is demonstrated using a real-world case study of rainfall-runoff data for an Indian basin. The method was able to produce ensembles with a prediction interval (average width) of 26.49 m3/s, with 97.17% of the total observed data points lying within the interval in validation. One specific advantage of the method is that when the ensemble mean value is taken as a forecast, the peak flows are predicted with improved accuracy compared to a traditional single-point ANN forecast.
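
    The two-stage idea, calibrating a model and then perturbing its parameters to obtain an ensemble whose spread defines a prediction interval, can be sketched on toy data. A one-neuron linear model trained by plain gradient descent stands in for the paper's GA-trained ANN, and all numbers are illustrative.

```python
import random

random.seed(1)

# Toy "rainfall-runoff" data: linear response plus noise (illustrative only).
xs = [i / 10 for i in range(30)]
ys = [0.8 * x + 0.3 + random.gauss(0, 0.05) for x in xs]

# Stage 1: calibrate y = w*x + b (gradient descent in place of the GA).
w, b = 0.0, 0.0
for _ in range(2000):
    for x, y in zip(xs, ys):
        err = (w * x + b) - y
        w -= 0.01 * err * x
        b -= 0.01 * err

# Stage 2: perturb the calibrated parameters to build an ensemble; the
# ensemble spread at each input defines the prediction interval.
ensemble = [(w + random.gauss(0, 0.02), b + random.gauss(0, 0.02))
            for _ in range(50)]
lower = [min(wi * x + bi for wi, bi in ensemble) for x in xs]
upper = [max(wi * x + bi for wi, bi in ensemble) for x in xs]
coverage = sum(lo <= y <= hi
               for y, lo, hi in zip(ys, lower, upper)) / len(ys)
```

    The paper's Stage 2 goes further by optimizing the perturbation magnitude itself, trading off interval width against the fraction of observations covered; here the perturbation scale is simply fixed.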

  5. Dynamic interactions in neural networks

    SciTech Connect

    Arbib, M. A.; Amari, S.

    1989-01-01

    The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.

  6. Optimization neural network for solving flow problems.

    PubMed

    Perfetti, R

    1995-01-01

    This paper describes a neural network for solving flow problems, which are of interest in many areas of application as in fuel, hydro, and electric power scheduling. The neural network consists of two layers: a hidden layer and an output layer. The hidden units correspond to the nodes of the flow graph. The output units represent the branch variables. The network has a linear order of complexity, is easily programmable, and is suited for analog very large scale integration (VLSI) realization. The functionality of the proposed network is illustrated by a simulation example concerning the maximal flow problem. PMID:18263420

  7. Neural network applications in telecommunications

    NASA Technical Reports Server (NTRS)

    Alspector, Joshua

    1994-01-01

    Neural network capabilities include automatic and organized handling of complex information, quick adaptation to continuously changing environments, nonlinear modeling, and parallel implementation. This viewgraph presentation presents Bellcore work on applications, learning chip computational function, learning system block diagram, neural network equalization, broadband access control, calling-card fraud detection, software reliability prediction, and conclusions.

  8. Neural Networks for the Beginner.

    ERIC Educational Resources Information Center

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  9. Using Neural Networks for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William

    1998-01-01

    This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.

  10. Integration of Volterra model with artificial neural networks for rainfall-runoff simulation in forested catchment of northern Iran

    NASA Astrophysics Data System (ADS)

    Kashani, Mahsa H.; Ghorbani, Mohammad Ali; Dinpashoh, Yagob; Shahmorad, Sedaghat

    2016-09-01

    Rainfall-runoff simulation is an important task in water resources management. In this study, an integrated Volterra model with artificial neural networks (IVANN) was presented to simulate the rainfall-runoff process. The proposed integrated model includes the semi-distributed forms of the Volterra and ANN models, which can explore spatial variation in the rainfall-runoff process without requiring physical characteristic parameters of the catchments, while taking advantage of the potential of the Volterra and ANN models in nonlinear mapping. The IVANN model was developed using hourly rainfall and runoff data pertaining to thirteen storms to study short-term responses of a forest catchment in northern Iran, and its performance was compared with that of a semi-distributed integrated ANN (IANN) model and a lumped Volterra model. The Volterra model was applied as a nonlinear model (second-order Volterra (SOV) model) and solved using the ordinary least squares (OLS) method. The models' performance was evaluated and compared using five criteria, namely coefficient of efficiency, root mean square error, error of total volume, relative error of peak discharge, and error in time to peak. Results showed that the IVANN model performs better than the other semi-distributed and lumped models in simulating the rainfall-runoff process. Compared to the integrated models, the lumped SOV model has lower precision in simulating the rainfall-runoff process.

  11. Chlorophyll a Simulation in a Lake Ecosystem Using a Model with Wavelet Analysis and Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Wang, Xuan; Chen, Bin; Zhao, Ying; Yang, Zhifeng

    2013-05-01

    Accurate and reliable forecasting is important for the sustainable management of ecosystems. Chlorophyll a (Chl a) simulation and forecasting can provide early warning information and enable managers to make appropriate decisions for protecting lake ecosystems. In this study, we proposed a method for Chl a simulation in a lake that coupled wavelet analysis with artificial neural networks (WA-ANN). The proposed method had the advantage of data preprocessing, which reduced noise and managed nonstationary data. Fourteen variables relating to hydrologic, ecological, and meteorologic time series data from January 2000 to December 2009 at the Lake Baiyangdian study area, North China, were included in the developed and validated model. The performance of the proposed WA-ANN model for monthly Chl a simulation in the lake ecosystem was compared with a multiple stepwise linear regression (MSLR) model, an autoregressive integrated moving average (ARIMA) model, and a regular ANN model. The results showed that the WA-ANN model was suitable for Chl a simulation, providing more accurate performance than the MSLR, ARIMA, and ANN models. We recommend that the proposed method be widely applied to further facilitate the development and implementation of lake ecosystem management.
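
    The wavelet-preprocessing idea can be sketched with a single level of the Haar transform, which splits a series into a smooth approximation and a detail component; in a WA-ANN scheme each component would then feed a network. This is a pure-Python toy; the study's wavelet choice and the Lake Baiyangdian data are not reproduced.

```python
import random

random.seed(9)

def haar_step(signal):
    """One level of the Haar wavelet transform: approximation + detail."""
    half = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(half)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(half)]
    return approx, detail

# Noisy monthly-style series with a seasonal step (illustrative only).
series = [10 + 5 * (i % 12 >= 6) + random.gauss(0, 1) for i in range(120)]
approx, detail = haar_step(series)

# Reconstruction from the two components is exact: s0 = a + d, s1 = a - d.
recon = []
for a, d in zip(approx, detail):
    recon += [a + d, a - d]
```

    Repeating `haar_step` on the approximation gives the multi-level decomposition usually used for denoising nonstationary series before model fitting.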

  12. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    PubMed

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as a research object, thirteen sample sets from different regions were arranged surrounding the road network, the spatial configuration of which was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model on the basis of the neural network approach was implemented. A comparison between these two models was then carried out. The results revealed that the proposed approach was practicable in optimizing the soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration to study the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provided an effective means as well as a theoretical basis for determining the sampling configuration and displaying the spatial distribution of soil organic matter with low cost and high efficiency. PMID:26211074
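
    The simulated annealing step for sample-configuration optimization can be sketched as below. The objective here, spreading a fixed number of sites along a one-dimensional transect, is a toy stand-in for the paper's soil-landscape criterion, and all parameters are illustrative.

```python
import math
import random

random.seed(5)

def anneal(objective, candidates, n_select, iters=5000, t0=1.0):
    """Simulated annealing over subsets: swap one selected site at a time."""
    selected = random.sample(candidates, n_select)
    rest = [c for c in candidates if c not in selected]
    best, best_val = selected[:], objective(selected)
    cur_val = best_val
    for step in range(iters):
        t = t0 * (1 - step / iters) + 1e-9           # linear cooling schedule
        i, j = random.randrange(n_select), random.randrange(len(rest))
        selected[i], rest[j] = rest[j], selected[i]  # propose a swap
        val = objective(selected)
        if val < cur_val or random.random() < math.exp((cur_val - val) / t):
            cur_val = val                            # accept (Metropolis rule)
            if val < best_val:
                best, best_val = selected[:], val
        else:
            selected[i], rest[j] = rest[j], selected[i]  # reject: undo swap
    return best, best_val

# Toy objective: pick 5 of 40 candidate sites so the smallest pairwise
# distance is as large as possible (minimize its negative).
sites = [random.uniform(0, 100) for _ in range(40)]

def neg_min_gap(sel):
    s = sorted(sel)
    return -min(b - a for a, b in zip(s, s[1:]))

best, best_val = anneal(neg_min_gap, sites, 5)
```

    In the paper the candidate set is constrained to locations near the road network and the objective reflects how well the configuration captures soil-landscape variation; only the objective function and candidate list would change.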

  13. Neural networks for calibration tomography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur

    1993-01-01

    Artificial neural networks are suitable for performing pattern-to-pattern calibrations. These calibrations are potentially useful for facilities operations in aeronautics, the control of optical alignment, and the like. Computed tomography is compared with neural net calibration tomography for estimating density from its x-ray transform. X-ray transforms are measured, for example, in diffuse-illumination, holographic interferometry of fluids. Computed tomography and neural net calibration tomography are shown to have comparable performance for a 10 degree viewing cone and 29 interferograms within that cone. The system of tomography discussed is proposed as a relevant test of neural networks and other parallel processors intended for use with flow visualization data.

  14. Aerodynamic Design Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.

    2003-01-01

    The design of aerodynamic components of aircraft, such as wings or engines, involves a process of obtaining the optimal component shape that can deliver the desired level of component performance, subject to various constraints, e.g., total weight or cost, that the component must satisfy. Aerodynamic design can thus be formulated as an optimization problem that involves the minimization of an objective function subject to constraints. A new aerodynamic design optimization procedure based on neural networks and response surface methodology (RSM) incorporates the advantages of both traditional RSM and neural networks. The procedure uses a strategy, denoted parameter-based partitioning of the design space, to construct a sequence of response surfaces based on both neural networks and polynomial fits to traverse the design space in search of the optimal solution. Some desirable characteristics of the new design optimization procedure include the ability to handle a variety of design objectives, easily impose constraints, and incorporate design guidelines and rules of thumb. It provides an infrastructure for variable fidelity analysis and reduces the cost of computation by using less-expensive, lower fidelity simulations in the early stages of the design evolution. The initial or starting design can be far from optimal. The procedure is easy and economical to use in large-dimensional design space and can be used to perform design tradeoff studies rapidly. Designs involving multiple disciplines can also be optimized. Some practical applications of the design procedure that have demonstrated some of its capabilities include the inverse design of an optimal turbine airfoil starting from a generic shape and the redesign of transonic turbines to improve their unsteady aerodynamic characteristics.

  15. Neural networks for nuclear spectroscopy

    SciTech Connect

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T.

    1995-12-31

    In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response and where precise quantification is less important.
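    The OLAM recall described above amounts to a pseudoinverse mapping from a whole spectrum to isotope activations. A minimal sketch with synthetic spectra (the library, channel count, and mixture fractions are invented for illustration):

```python
import numpy as np

# Hypothetical library: columns are reference gamma-ray spectra of known isotopes
rng = np.random.default_rng(0)
n_channels, n_isotopes = 64, 3
library = np.abs(rng.normal(size=(n_channels, n_isotopes)))

# OLAM weight matrix: maps a spectrum to isotope activations via the
# pseudoinverse of the stored patterns (optimal linear associative memory)
W = np.linalg.pinv(library)

# Unknown sample = linear superposition of known spectra (whole-spectrum fit,
# not individual photo-peaks)
true_mix = np.array([0.5, 0.0, 2.0])
unknown = library @ true_mix
estimated = W @ unknown   # estimated ≈ [0.5, 0.0, 2.0]
```

    Because the recall is a single matrix-vector product, identification after training is fast, matching the "intense computation only during training" point above.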

  16. Electronic device aspects of neural network memories

    NASA Technical Reports Server (NTRS)

    Lambe, J.; Moopenn, A.; Thakoor, A. P.

    1985-01-01

    The basic issues related to the electronic implementation of the neural network model (NNM) for content addressable memories are examined. A brief introduction to the principles of the NNM is followed by an analysis of the information storage of the neural network in the form of a binary connection matrix and the recall capability of such matrix memories based on a hardware simulation study. In addition, materials and device architecture issues involved in the future realization of such networks in VLSI-compatible ultrahigh-density memories are considered. A possible space application of such devices would be in the area of large-scale information storage without mechanical devices.

  17. Neural network training with global optimization techniques.

    PubMed

    Yamazaki, Akio; Ludermir, Teresa B

    2003-04-01

    This paper presents an approach that uses Simulated Annealing and Tabu Search for the simultaneous optimization of neural network architectures and weights. The problem considered is odor recognition in an artificial nose. Both methods have produced networks with high classification performance and low complexity. Generalization has been improved by using the backpropagation algorithm for fine tuning. The combination of simple and traditional search methods has been shown to be very suitable for generating compact and efficient networks. PMID:12923920

  18. Ozone Modeling Using Neural Networks.

    NASA Astrophysics Data System (ADS)

    Narasimhan, Ramesh; Keller, Joleen; Subramaniam, Ganesh; Raasch, Eric; Croley, Brandon; Duncan, Kathleen; Potter, William T.

    2000-03-01

    Ozone models for the city of Tulsa were developed using neural network modeling techniques. The neural models were developed using meteorological data from the Oklahoma Mesonet and ozone, nitric oxide, and nitrogen dioxide (NO2) data from Environmental Protection Agency monitoring sites in the Tulsa area. An initial model trained with only eight surface meteorological input variables and NO2 was able to simulate ozone concentrations with a correlation coefficient of 0.77. The trained model was then used to evaluate the sensitivity to the primary variables that affect ozone concentrations. The most important variables (NO2, temperature, solar radiation, and relative humidity) showed response curves with strong nonlinear codependencies. Incorporation of ozone concentrations from the previous 3 days into the model increased the correlation coefficient to 0.82. As expected, the ozone concentrations correlated best with the most recent (1-day previous) values. The model's correlation coefficient was increased to 0.88 by the incorporation of upper-air data from the National Weather Service's Nested Grid Model. Sensitivity analysis for the upper-air variables indicated unusual positive correlations between ozone and the relative humidity from 500 hPa to the tropopause in addition to the other expected correlations with upper-air temperatures, vertical wind velocity, and 1000-500-hPa layer thickness. The neural model results are encouraging for the further use of these systems to evaluate complex parameter cosensitivities, and for the use of these systems in automated ozone forecast systems.

  19. Further validation of artificial neural network-based emissions simulation models for conventional and hybrid electric vehicles.

    PubMed

    Tóth-Nagy, Csaba; Conley, John J; Jarrett, Ronald P; Clark, Nigel N

    2006-07-01

    With the advent of hybrid electric vehicles, computer-based vehicle simulation becomes more useful to the engineer and designer trying to optimize the complex combination of control strategy, power plant, drive train, vehicle, and driving conditions. With the desire to incorporate emissions as a design criterion, researchers at West Virginia University have developed artificial neural network (ANN) models for predicting emissions from heavy-duty vehicles. The ANN models were trained on engine and exhaust emissions data collected from transient dynamometer tests of heavy-duty diesel engines and then used to predict emissions based on engine speed and torque data from simulated operation of a tractor truck and hybrid electric bus. Simulated vehicle operation was performed with the ADVISOR software package. Predicted emissions (carbon dioxide [CO2] and oxides of nitrogen [NO(x)]) were then compared with actual emissions data collected from chassis dynamometer tests of similar vehicles. This paper expands on previous research to include different driving cycles for the hybrid electric bus and varying weights of the conventional truck. Results showed that different hybrid control strategies had a significant effect on engine behavior (and, thus, emissions) and may affect emissions during different driving cycles. The ANN models underpredicted emissions of CO2 and NO(x) in the case of a class-8 truck but were more accurate as the truck weight increased. PMID:16878583

  20. Modular, Hierarchical Learning By Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    Modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks more structured than networks in which all neurons are fully interconnected. These networks utilize general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning streamlined by imitating some aspects of biological neural networks.

  1. Teaching Students to Model Neural Circuits and Neural Networks Using an Electronic Spreadsheet Simulator. Microcomputing Working Paper Series.

    ERIC Educational Resources Information Center

    Hewett, Thomas T.

    There are a number of areas in psychology where an electronic spreadsheet simulator can be used to study and explore functional relationships among a number of parameters. For example, when dealing with sensation, perception, and pattern recognition, it is sometimes desirable for students to understand both the basic neurophysiology and the…

  2. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor-data-to-fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor-data-to-fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Second, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.

  3. Neural Networks Of VLSI Components

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P.

    1991-01-01

    Concept for design of electronic neural network calls for assembly of very-large-scale integrated (VLSI) circuits of few standard types. Each VLSI chip, which contains both analog and digital circuitry, used in modular or "building-block" fashion by interconnecting it in any of variety of ways with other chips. Feedforward neural network in typical situation operates under control of host computer and receives inputs from, and sends outputs to, other equipment.

  4. Interval neural networks

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Traditional neural networks such as multi-layered perceptrons (MLPs) use example patterns, i.e., pairs of real-valued observation vectors (x, y), to approximate a function f(x) = y. To determine the parameters of the approximation, a special version of the gradient descent method called back-propagation is widely used. In many situations, observations of the input and output variables are not precise; instead, we usually have intervals of possible values. The imprecision could be due to the limited accuracy of the measuring instrument or could reflect genuine uncertainty in the observed variables. In such situations, the input and output data consist of mixed data types: intervals and precise numbers. Function approximation in interval domains is considered in this paper. We discuss a modification of the classical back-propagation learning algorithm for interval domains. Results are presented for simple examples demonstrating a few properties of nonlinear interval mapping, such as noise resistance and finding a set of solutions to the function approximation problem.
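    A forward pass through a single neuron under interval arithmetic can be sketched as follows. This only illustrates how interval inputs propagate through a linear unit and a monotone activation; the paper's interval backpropagation rules are not reproduced here, and the weights and inputs are invented:

```python
import math

def interval_dot(weights, intervals, bias=0.0):
    """Propagate input intervals through a linear unit: the sign of each
    weight decides which endpoint contributes to the lower/upper bound."""
    lo = hi = bias
    for w, (a, b) in zip(weights, intervals):
        if w >= 0:
            lo += w * a
            hi += w * b
        else:
            lo += w * b
            hi += w * a
    return lo, hi

def interval_sigmoid(lo, hi):
    """The sigmoid is monotone, so it maps interval endpoints to endpoints."""
    s = lambda z: 1.0 / (1.0 + math.exp(-z))
    return s(lo), s(hi)

# Hypothetical neuron with mixed data types: a precise value is simply a
# degenerate interval [x, x]
w = [0.8, -1.2]
x = [(1.0, 1.2), (0.5, 0.5)]   # first input imprecise, second precise
lo, hi = interval_sigmoid(*interval_dot(w, x, bias=0.1))
```

    The output interval [lo, hi] bounds every activation the neuron could produce for inputs inside the given intervals.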

  5. Correlational Neural Networks.

    PubMed

    Chandar, Sarath; Khapra, Mitesh M; Larochelle, Hugo; Ravindran, Balaraman

    2016-02-01

    Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based approaches and autoencoder (AE)-based approaches. CCA-based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE-based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA-based approaches outperform AE-based approaches for the task of transfer learning, they are not as scalable as the latter. In this work, we propose an AE-based approach, correlational neural network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than AE and CCA with respect to its ability to learn correlated common representations. We employ CorrNet for several cross-language tasks and show that the representations learned using it perform better than the ones learned using other state-of-the-art approaches. PMID:26654210

  6. Comparing artificial and biological dynamical neural networks

    NASA Astrophysics Data System (ADS)

    McAulay, Alastair D.

    2006-05-01

    Modern computers can be made more friendly and otherwise improved by making them behave more like humans. Perhaps we can learn how to do this from biology in which human brains evolved over a long period of time. Therefore, we first explain a commonly used biological neural network (BNN) model, the Wilson-Cowan neural oscillator, that has cross-coupled excitatory (positive) and inhibitory (negative) neurons. The two types of neurons are used for frequency modulation communication between neurons which provides immunity to electromagnetic interference. We then evolve, for the first time, an artificial neural network (ANN) to perform the same task. Two dynamical feed-forward artificial neural networks use cross-coupling feedback (like that in a flip-flop) to form an ANN nonlinear dynamic neural oscillator with the same equations as the Wilson-Cowan neural oscillator. Finally we show, through simulation, that the equations perform the basic neural threshold function, switching between stable zero output and a stable oscillation, that is a stable limit cycle. Optical implementation with an injected laser diode and future research are discussed.
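    The cross-coupled excitatory/inhibitory dynamics described above can be sketched with a simple Euler integration of Wilson-Cowan-type equations. The coupling constants and the response function's gain and threshold are illustrative choices, not the paper's fitted values:

```python
import math

def sigmoid(x, a=1.0, theta=4.0):
    """Wilson-Cowan-style response function (illustrative gain/threshold)."""
    return 1.0 / (1.0 + math.exp(-a * (x - theta)))

def wilson_cowan(E0=0.1, I0=0.05, P=1.25, steps=5000, dt=0.01):
    """Euler integration of cross-coupled excitatory (E) and inhibitory (I)
    populations; E excites I, and I feeds back negatively onto E."""
    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0   # illustrative couplings
    E, I, trace = E0, I0, []
    for _ in range(steps):
        dE = -E + sigmoid(c1 * E - c2 * I + P)   # P: external drive
        dI = -I + sigmoid(c3 * E - c4 * I)
        E, I = E + dt * dE, I + dt * dI
        trace.append(E)
    return trace

activity = wilson_cowan()
```

    Depending on the drive P and the couplings, such a pair either settles to a fixed point or enters a stable limit cycle, which is the threshold/oscillation behavior the record refers to.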

  7. Block-based neural networks.

    PubMed

    Moon, S W; Kong, S G

    2001-01-01

    This paper presents a novel block-based neural network (BBNN) model and the optimization of its structure and weights based on a genetic algorithm. The architecture of the BBNN consists of a 2D array of fundamental blocks with four variable input/output nodes and connection weights. Each block can have one of four different internal configurations depending on the structure settings. The BBNN model includes some restrictions, such as the 2D array structure and integer weights, in order to allow easier implementation with reconfigurable hardware such as field-programmable gate arrays (FPGAs). The structure and weights of the BBNN are encoded as bit strings which correspond to the configuration bits of the FPGA. The configuration bits are optimized globally using a genetic algorithm with 2D encoding and modified genetic operators. Simulations show that the optimized BBNN can solve engineering problems such as pattern classification and mobile robot control. PMID:18244385
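    The genetic optimization loop over configuration bits can be sketched as follows (tournament selection, one-point crossover, bit-flip mutation). A stand-in bit-count fitness replaces the BBNN's task-specific fitness, which would require decoding the bits into a block structure and scoring it on the classification task:

```python
import random

def evolve(fitness, n_bits=32, pop_size=40, generations=60,
           p_mut=0.02, seed=0):
    """Minimal genetic algorithm over bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut)   # bit-flip mutation
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Stand-in fitness: count of 1-bits; a BBNN would instead decode the string
# into block configurations and integer weights and evaluate the network
best = evolve(fitness=sum, n_bits=32)
```

    The bit-string chromosome maps directly onto FPGA configuration bits, which is what makes this encoding attractive for reconfigurable hardware.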

  8. The use of artificial neural network (ANN) for the prediction and simulation of oil degradation in wastewater by AOP.

    PubMed

    Mustafa, Yasmen A; Jaid, Ghydaa M; Alwared, Abeer I; Ebrahim, Mothana

    2014-06-01

    The application of an advanced oxidation process (AOP) in the treatment of wastewater contaminated with oil was investigated in this study. The AOP investigated is the homogeneous photo-Fenton (UV/H2O2/Fe(+2)) process. The reaction is influenced by the input concentration of hydrogen peroxide H2O2, the amount of the iron catalyst Fe(+2), pH, temperature, irradiation time, and the concentration of oil in the wastewater. The removal efficiency for the system at the optimal operational parameters (H2O2 = 400 mg/L, Fe(+2) = 40 mg/L, pH = 3, irradiation time = 150 min, and temperature = 30 °C) for a 1,000 mg/L oil load was found to be 72%. The study examined the implementation of an artificial neural network (ANN) for the prediction and simulation of oil degradation in aqueous solution by the photo-Fenton process. The multilayered feed-forward networks were trained using a backpropagation algorithm; a three-layer network with 22 neurons in the hidden layer gave optimal results. The results show that the ANN model can predict the experimental results with a high correlation coefficient (R^2 = 0.9949). The sensitivity analysis showed that all studied variables (H2O2, Fe(+2), pH, irradiation time, temperature, and oil concentration) have a strong effect on the oil degradation. The pH was found to be the most influential parameter, with a relative importance of 20.6%. PMID:24595749

  9. Uncertainty analysis of a combined Artificial Neural Network - Fuzzy logic - Kriging system for spatial and temporal simulation of Hydraulic Head.

    NASA Astrophysics Data System (ADS)

    Tapoglou, Evdokia; Karatzas, George P.; Trichakis, Ioannis C.; Varouchakis, Emmanouil A.

    2015-04-01

    The purpose of this study is to evaluate the uncertainty, using various methodologies, in a combined Artificial Neural Network (ANN) - Fuzzy logic - Kriging system, which can simulate the hydraulic head in an aquifer spatially and temporally. This system uses ANNs for the temporal prediction of hydraulic head at various locations, one ANN for every location with available data, and Kriging for the spatial interpolation of the ANNs' results. Fuzzy logic is used for the interconnection of these two methodologies. The full description of the initial system and its functionality can be found in Tapoglou et al. (2014). Two methodologies were used for the calculation of uncertainty in the implementation of the algorithm in a study area. First, the uncertainty of the Kriging parameters was examined using a Bayesian bootstrap methodology. In this case the variogram is first calculated using the traditional methodology of Ordinary Kriging. Using the parameters derived and the covariance function of the model, the covariance matrix is constructed. A common method for testing a statistical model is the use of artificial data. The first step in this procedure is the generation of normal random numbers; multiplying these by the decomposed covariance matrix yields correlated random numbers (a sample set). These random values are then fitted to a variogram and the value at an unknown location is estimated using Kriging. The distribution of the values simulated by Kriging from different sets of correlated random values can be used to derive the prediction intervals of the process. In this study 500 variograms were constructed for every time step and prediction point using the method described above, and the results are presented as the 95th and 5th percentiles of the predictions. The second methodology involved the uncertainty of ANN training. In this case, for all the data points, 300 different trainings were implemented, with a different training dataset each time
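    The correlated-random-number step and the percentile prediction intervals can be sketched as follows. A hypothetical one-dimensional exponential covariance stands in for the fitted variogram model, and the ensemble size mirrors the 500 simulations mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical exponential covariance between 5 prediction locations
coords = np.array([0.0, 1.0, 2.5, 4.0, 6.0])
dist = np.abs(coords[:, None] - coords[None, :])
cov = 0.3 ** 2 * np.exp(-dist / 2.0)

# Correlated random numbers: multiply iid standard normals by the
# Cholesky ("decomposed") factor of the covariance matrix
L = np.linalg.cholesky(cov)
ensemble = 12.0 + (L @ rng.normal(size=(5, 500))).T   # 500 draws x 5 points

# 5th/95th percentile prediction band at each location
lo, hi = np.percentile(ensemble, [5, 95], axis=0)
```

    In the study each draw would additionally pass through a variogram fit and a Kriging prediction before the percentiles are taken; the sketch keeps only the sampling and interval steps.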

  10. Signal dispersion within a hippocampal neural network

    NASA Technical Reports Server (NTRS)

    Horowitz, J. M.; Mates, J. W. B.

    1975-01-01

    A model network is described, representing two neural populations coupled so that one population is inhibited by activity it excites in the other. Parameters and operations within the model represent EPSPs, IPSPs, neural thresholds, conduction delays, background activity and spatial and temporal dispersion of signals passing from one population to the other. Simulations of single-shock and pulse-train driving of the network are presented for various parameter values. Neuronal events from 100 to 300 msec following stimulation are given special consideration in model calculations.

  11. Reducing neural network training time with parallel processing

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1995-01-01

    Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer to determine the number of nodes on the hidden layer of the smaller neural networks, to choose the initial training weights, and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.

  12. Classification of radar clutter using neural networks.

    PubMed

    Haykin, S; Deng, C

    1991-01-01

    A classifier that incorporates both preprocessing and postprocessing procedures as well as a multilayer feedforward network (based on the back-propagation algorithm) in its design to distinguish between several major classes of radar returns including weather, birds, and aircraft is described. The classifier achieves an average classification accuracy of 89% on generalization for data collected during a single scan of the radar antenna. The procedures of feature selection for neural network training, the classifier design considerations, the learning algorithm development, the implementation, and the experimental results of the neural clutter classifier, which is simulated on a Warp systolic computer, are discussed. A comparative evaluation of the multilayer neural network with a traditional Bayes classifier is presented. PMID:18282874

  13. Critical and resonance phenomena in neural networks

    NASA Astrophysics Data System (ADS)

    Goltsev, A. V.; Lopes, M. A.; Lee, K.-E.; Mendes, J. F. F.

    2013-01-01

    Brain rhythms contribute to every aspect of brain function. Here, we study critical and resonance phenomena that precede the emergence of brain rhythms. Using an analytical approach and simulations of a cortical circuit model of neural networks with stochastic neurons in the presence of noise, we show that spontaneous appearance of network oscillations occurs as a dynamical (non-equilibrium) phase transition at a critical point determined by the noise level, network structure, the balance between excitatory and inhibitory neurons, and other parameters. We find that the relaxation time of neural activity to a steady state, response to periodic stimuli at the frequency of the oscillations, amplitude of damped oscillations, and stochastic fluctuations of neural activity are dramatically increased when approaching the critical point of the transition.

  14. Auto-associative nanoelectronic neural network

    SciTech Connect

    Nogueira, C. P. S. M.; Guimarães, J. G.

    2014-05-15

    In this paper, an auto-associative neural network using single-electron tunneling (SET) devices is proposed and simulated at low temperature. The nanoelectronic auto-associative network is able to converge to a stable state, previously stored during training. The recognition of the pattern involves decreasing the energy of the input state until it achieves a point of local minimum energy, which corresponds to one of the stored patterns.
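    The convergence-by-energy-descent behavior that the SET circuit implements in hardware can be sketched in software as a Hopfield-style recall (the patterns and network size are illustrative):

```python
import numpy as np

def train_hebbian(patterns):
    """Hebbian connection matrix storing +/-1 patterns (zero diagonal)."""
    P = np.array(patterns, dtype=float)
    W = P.T @ P / len(P)
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, s):
    """Network energy; asynchronous updates never increase it."""
    return -0.5 * s @ W @ s

def recall(W, s, sweeps=10):
    """Asynchronous threshold updates until the state settles."""
    s = np.array(s, dtype=float)
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

stored = [[1, -1, 1, -1, 1, -1, 1, -1],
          [1, 1, 1, 1, -1, -1, -1, -1]]
W = train_hebbian(stored)
noisy = np.array([1, -1, 1, -1, 1, -1, 1, 1], dtype=float)  # one bit flipped
restored = recall(W, noisy)   # settles on the nearest stored pattern
```

    Each update lowers the energy until a local minimum, i.e. one of the stored patterns, is reached, which mirrors the convergence described for the nanoelectronic network.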

  15. A Physics-driven Neural Networks-based Simulation System (PhyNNeSS) for multimodal interactive virtual environments involving nonlinear deformable objects

    PubMed Central

    De, Suvranu; Deo, Dhannanjay; Sankaranarayanan, Ganesh; Arikatla, Venkata S.

    2012-01-01

    Background While an update rate of 30 Hz is considered adequate for real time graphics, a much higher update rate of about 1 kHz is necessary for haptics. Physics-based modeling of deformable objects, especially when large nonlinear deformations and complex nonlinear material properties are involved, at these very high rates is one of the most challenging tasks in the development of real time simulation systems. While some specialized solutions exist, there is no general solution for arbitrary nonlinearities. Methods In this work we present PhyNNeSS - a Physics-driven Neural Networks-based Simulation System - to address this long-standing technical challenge. The first step is an off-line pre-computation step in which a database is generated by applying carefully prescribed displacements to each node of the finite element models of the deformable objects. In the next step, the data is condensed into a set of coefficients describing neurons of a Radial Basis Function network (RBFN). During real-time computation, these neural networks are used to reconstruct the deformation fields as well as the interaction forces. Results We present realistic simulation examples from interactive surgical simulation with real time force feedback. As an example, we have developed a deformable human stomach model and a Penrose-drain model used in the Fundamentals of Laparoscopic Surgery (FLS) training tool box. Conclusions A unique computational modeling system has been developed that is capable of simulating the response of nonlinear deformable objects in real time. The method distinguishes itself from previous efforts in that a systematic physics-based pre-computational step allows training of neural networks which may be used in real time simulations. We show, through careful error analysis, that the scheme is scalable, with the accuracy being controlled by the number of neurons used in the simulation. PhyNNeSS has been integrated into SoFMIS (Software Framework for Multimodal
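    The RBFN reconstruction step common to the precomputation and runtime phases can be sketched as follows. The Gaussian basis functions and the synthetic displacement-to-force response are illustrative; in PhyNNeSS the training pairs come from the offline finite element precomputation:

```python
import numpy as np

def rbf_fit(centers, values, width=0.4):
    """Solve for coefficients so the Gaussian RBF network reproduces the
    sampled responses at the training centers."""
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * width ** 2))
    return np.linalg.solve(Phi, values)

def rbf_eval(centers, coeffs, x, width=0.4):
    """Evaluate the trained network at query points x (the fast online step)."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2)) @ coeffs

# Hypothetical pre-computed database: prescribed nodal displacements mapped
# to a scalar reaction force (stand-in for the FE-generated samples)
rng = np.random.default_rng(0)
centers = rng.uniform(-1, 1, size=(15, 2))
forces = np.sin(centers[:, 0]) * np.cos(centers[:, 1])
coeffs = rbf_fit(centers, forces)
approx = rbf_eval(centers, coeffs, centers)
```

    At runtime only `rbf_eval` is needed, a dense matrix-vector product, which is what makes kilohertz haptic rates attainable; accuracy is controlled by the number of neurons (centers), as the error analysis in the record states.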

  17. Design and simulation of programmable relational optoelectronic time-pulse coded processors as base elements for sorting neural networks

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lazareva, Maria V.

    2010-05-01

    In this paper we show that the biologically motivated concept of time-pulse encoding offers a number of advantages (a single methodological basis, universality, simplicity of tuning, learning and programming, among others) in the creation and design of sensor systems with parallel input-output and processing for 2D-structure hybrid and next-generation neuro-fuzzy neurocomputers. We present design principles for programmable relational optoelectronic time-pulse-coded processors based on continuous logic, order logic, and temporal wave processes. We consider a structure that performs analog signal extraction and the sorting of analog and time-pulse-coded variables. We propose an optoelectronic realization of such a basic relational order-logic element, consisting of time-pulse-coded photoconverters (pulse-width and pulse-phase modulators) with direct and complementary outputs, a sorting network built from logic elements, and programmable commutation blocks. By simulation and experimental research we estimate the technical parameters of devices and processors built on such base elements: optical input signal power of 0.2-20 uW, processing time of 1-10 us, supply voltage of 1-3 V, power consumption of 10-100 uW, extended functional possibilities, and learning capability. We discuss possible rules and principles for learning and for programmable tuning to a required function or relational operation, and the realization of hardware blocks for modifications of such processors. We show that sorting machines, neural networks, and hybrid data-processing systems with untraditional numerical systems and picture operands can be created on the basis of such quasi-universal hardware blocks with flexible programmable tuning.

  18. Multiprocessor Neural Network in Healthcare.

    PubMed

    Godó, Zoltán Attila; Kiss, Gábor; Kocsis, Dénes

    2015-01-01

    A possible way of creating a multiprocessor artificial neural network is by the use of microcontrollers. The RISC processors' high performance and large number of I/O ports make them highly suitable for creating such a system. During our research, we wanted to see if it is possible to efficiently create interaction between the artificial neural network and the natural nervous system. To achieve as much analogy to the living nervous system as possible, we created a frequency-modulated analog connection between the units. Our system is connected to the living nervous system through 128 microelectrodes. Two-way communication is provided through A/D conversion, which is even capable of testing psychopharmacons. The microcontroller-based analog artificial neural network can play a great role in medical signal processing, such as ECG, EEG etc. PMID:26152990

  19. Neural network simulations of the primate oculomotor system. II. Frames of reference.

    PubMed

    Moschovakis, A K

    1996-01-01

    Theories of motor control often assume that the location of visual stimuli is expressed in nonretinotopic frames of reference. The saccadic system is known in enough detail for us to examine the evidential basis of this assumption. The organization of the neural circuit that controls saccades is first summarized. It is shown to consist of at least two interconnected modules. The first one is the burst generator, which resides in the reticular formation, and is entrusted with the tasks of impedance matching, synergist coactivation and reciprocal inhibition between antagonists. The second is a metric computer, which resides in the superior colliculus and the cerebral cortex, and computes the size and direction of the desired movement. Alternative models of the burst generator are presented and their "verisimilitude" is assessed in the light of evidence concerning saccadic trajectories, neuronal discharge patterns, interneuronal connections, as well as the results of lesion and stimulation experiments. Several models of the "metric computer" in the superior colliculus are then examined; their performance is again evaluated in the light of psychophysical, anatomical, physiological, and clinical evidence. It is demonstrated that the location of visual stimuli need not be expressed in nonretinotopic frames of reference for either the burst generator or the metric computer to issue appropriate commands to move the eyes. Instead, using information concerning intervening movements of the eyes to update the location of visual stimuli in a retinotopic frame of reference suffices for the planning and execution of correct saccades. More generally, it is proposed that the location of sensory stimuli need not be expressed in higher order frames of reference (e.g., centered in the body or even in extrapersonal space) provided that their location in a sensorium specific map is updated on the basis of effector movements. PMID:8886356

  20. Neural network ultrasound image analysis

    NASA Astrophysics Data System (ADS)

    Schneider, Alexander C.; Brown, David G.; Pastel, Mary S.

    1993-09-01

    Neural network based analysis of ultrasound image data was carried out on liver scans of normal subjects and those diagnosed with diffuse liver disease. In a previous study, ultrasound images from a group of normal volunteers, Gaucher's disease patients, and hepatitis patients were obtained by Garra et al., who used classical statistical methods to distinguish among these three classes. In the present work, neural network classifiers were employed with the same image features found useful in the previous study for this task. Both standard backpropagation neural networks and a recently developed biologically-inspired network called Dystal were used. Classification performance as measured by the area under a receiver operating characteristic curve was generally excellent for the backpropagation networks and was roughly comparable to that of classical statistical discriminators tested on the same data set and documented in the earlier study. Performance of the Dystal network was significantly inferior; however, this may be due to the choice of network parameters. Potential methods for enhancing network performance were identified.

  1. Use of Artificial Neural Network for the Simulation of Radon Emission Concentration of Granulated Blast Furnace Slag Mortar.

    PubMed

    Jang, Hong-Seok; Xing, Shuli; Lee, Malrey; Lee, Young-Keun; So, Seung-Young

    2016-05-01

    In this study, an artificial neural network study was carried out to predict the quantity of radon emitted by Granulated Blast Furnace Slag (GBFS) cement mortar. A data set from laboratory work, in which a total of 3 mortars were produced, was utilized in the Artificial Neural Networks (ANNs) study. The mortar mixture parameters were three different GBFS ratios (0%, 20%, 40%). Radon emission of moist-cured specimens was measured at 3, 10, 30, 100 and 365 days by sensing technology for continuous monitoring of indoor air quality (IAQ). An ANN model was constructed, trained and tested using these data. The data used in the ANN model are arranged in a format of two input parameters, which cover the cement, GBFS and age of the samples, and an output parameter, which is the concentration of radon emitted by the mortar. The results showed that an ANN can be an alternative approach for predicting the radon concentration of GBFS mortar using mortar ingredients as input parameters. PMID:27483913

  2. Projection of future climate change conditions using IPCC simulations, neural networks and Bayesian statistics. Part 2: Precipitation mean state and seasonal cycle in South America

    NASA Astrophysics Data System (ADS)

    Boulanger, Jean-Philippe; Martinez, Fernando; Segura, Enrique C.

    2007-02-01

    Evaluating the response of climate to greenhouse gas forcing is a major objective of the climate community, and the use of large ensembles of simulations is considered a significant step toward that goal. The present paper thus discusses a new methodology, based on neural networks, for mixing ensembles of climate model simulations. Our analysis uses one simulation from each of seven Atmosphere-Ocean Global Climate Models, which participated in the IPCC Project and provided at least one simulation for the twentieth century (20c3m) and one simulation for each of three SRES scenarios: A2, A1B and B1. Our statistical method based on neural networks and Bayesian statistics computes a transfer function between models and observations. Such a transfer function was then used to project future conditions and to derive what we would call the optimal ensemble combination for twenty-first century climate change projections. Our approach is therefore based on one statement and one hypothesis. The statement is that an optimal ensemble projection should be built by giving larger weights to models which have more skill in representing present climate conditions. The hypothesis is that our method based on neural networks actually weights the models that way. While the statement is actually an open question, whose answer may vary according to the region or climate signal under study, our results demonstrate that the neural network approach indeed allows weighting models according to their skills. As such, our method is an improvement on existing Bayesian methods developed to mix ensembles of simulations. However, the generally low skill of climate models in simulating precipitation mean climatology implies that the final projection maps (whatever the method used to compute them) may change significantly in the future as models improve. Therefore, the projection results for late twenty-first century conditions are presented as possible projections based on the “state-of-the-art” of

  3. Plant Growth Models Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Bubenheim, David

    1997-01-01

    In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.

  4. Experimental fault characterization of a neural network

    NASA Technical Reports Server (NTRS)

    Tan, Chang-Huong

    1990-01-01

    The effects of a variety of faults on a neural network are quantified via simulation. The neural network consists of a single-layered clustering network and a three-layered classification network. The percentage of vectors mistagged by the clustering network, the percentage of vectors misclassified by the classification network, the time taken for the network to stabilize, and the output values are all measured. The results show that both transient and permanent faults have a significant impact on the performance of the measured network. The corresponding mistag and misclassification percentages are typically within 5 to 10 percent of each other. The average mistag percentage and the average misclassification percentage are both about 25 percent. After relearning, the percentage of misclassifications is reduced to 9 percent. In addition, transient faults are found to cause the network to be increasingly unstable as the duration of a transient is increased. The impact of link faults is relatively insignificant in comparison with node faults (1 versus 19 percent misclassified after relearning). There is a linear increase in the mistag and misclassification percentages with decreasing hardware redundancy. In addition, the mistag and misclassification percentages linearly decrease with increasing network size.
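
A fault-injection experiment of this kind can be sketched in a few lines; the network, data and fault sites below are hypothetical stand-ins, not the report's actual clustering/classification pair:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in classifier: 10 inputs feeding 4 output nodes.
# The fault-free outputs define the reference labels.
W = rng.normal(size=(4, 10))
X = rng.normal(size=(200, 10))
labels = np.argmax(X @ W.T, axis=1)

def misclassification(W_faulty):
    """Fraction of vectors classified differently from the fault-free network."""
    return float(np.mean(np.argmax(X @ W_faulty.T, axis=1) != labels))

# Permanent link fault: a single weight stuck at zero.
W_link = W.copy()
W_link[0, 0] = 0.0

# Permanent node fault: every weight into one output node stuck at zero.
W_node = W.copy()
W_node[0, :] = 0.0
```

Under this toy model the node fault disrupts far more classifications than the single link fault, mirroring the report's observation that link faults are relatively insignificant compared with node faults.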

  5. Neural Networks for Flight Control

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1996-01-01

    Neural networks are being developed at NASA Ames Research Center to permit real-time adaptive control of time varying nonlinear systems, enhance the fault-tolerance of mission hardware, and permit online system reconfiguration. In general, the problem of controlling time varying nonlinear systems with unknown structures has not been solved. Adaptive neural control techniques show considerable promise and are being applied to technical challenges including automated docking of spacecraft, dynamic balancing of the space station centrifuge, online reconfiguration of damaged aircraft, and reducing cost of new air and spacecraft designs. Our experiences have shown that neural network algorithms solved certain problems that conventional control methods have been unable to effectively address. These include damage mitigation in nonlinear reconfiguration flight control, early performance estimation of new aircraft designs, compensation for damaged planetary mission hardware by using redundant manipulator capability, and space sensor platform stabilization. This presentation explored these developments in the context of neural network control theory. The discussion began with an overview of why neural control has proven attractive for NASA application domains. The more important issues in control system development were then discussed with references to significant technical advances in the literature. Examples of how these methods have been applied were given, followed by projections of emerging application needs and directions.

  6. Parameter extraction with neural networks

    NASA Astrophysics Data System (ADS)

    Cazzanti, Luca; Khan, Mumit; Cerrina, Franco

    1998-06-01

    In semiconductor processing, the modeling of the process is becoming more and more important. While the ultimate goal is that of developing a set of tools for designing a complete process (Technology CAD), it is also necessary to have modules to simulate the various technologies and, in particular, to optimize specific steps. This need is particularly acute in lithography, where the continuous decrease in CD forces the technologies to operate near their limits. In the development of a 'model' for a physical process, we face several levels of challenges. First, it is necessary to develop a 'physical model,' i.e. a rational description of the process itself on the basis of known physical laws. Second, we need an 'algorithmic model' to represent in a virtual environment the behavior of the 'physical model.' After a 'complete' model has been developed and verified, it becomes possible to do performance analysis. In many cases the input parameters are poorly known or not accessible directly to experiment. It would be extremely useful to obtain the values of these 'hidden' parameters from experimental results by comparing model to data. The problem is particularly severe because the complexity and costs associated with semiconductor processing make a simple 'trial-and-error' approach infeasible and cost-inefficient. Even when computer models of the process already exist, obtaining data through simulations may be time consuming. Neural networks (NN) are powerful computational tools to predict the behavior of a system from an existing data set. They are able to adaptively 'learn' input/output mappings and to act as universal function approximators. In this paper we use artificial neural networks to build a mapping from the input parameters of the process to output parameters which are indicative of the performance of the process. Once the NN has been 'trained,' it is also possible to observe the process 'in reverse,' and to extract the values of the inputs which yield outputs

  7. An efficient neural network approach to dynamic robot motion planning.

    PubMed

    Yang, S X; Meng, M

    2000-03-01

    In this paper, a biologically inspired neural network approach to real-time collision-free motion planning of mobile robots or robot manipulators in a nonstationary environment is proposed. Each neuron in the topologically organized neural network has only local connections, whose neural dynamics is characterized by a shunting equation. Thus the computational complexity linearly depends on the neural network size. The real-time robot motion is planned through the dynamic activity landscape of the neural network without any prior knowledge of the dynamic environment, without explicitly searching over the free workspace or the collision paths, and without any learning procedures. Therefore it is computationally efficient. The global stability of the neural network is guaranteed by qualitative analysis and the Lyapunov stability theory. The effectiveness and efficiency of the proposed approach are demonstrated through simulation studies. PMID:10935758
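
The shunting dynamics at the heart of this approach can be illustrated numerically. In a minimal 1-D workspace (the grid size, gains and lateral weights below are illustrative, not the paper's values), the target injects excitation, the obstacle injects inhibition, only positive activity propagates to neighbours, and the shunting form of the equation keeps every activity bounded:

```python
import numpy as np

def shunting_step(x, excit, inhib, A=10.0, B=1.0, D=1.0, dt=0.01):
    """One Euler step of the shunting equation
    dx/dt = -A*x + (B - x)*S_e - (D + x)*S_i,
    whose form keeps every activity bounded inside [-D, B]."""
    return x + dt * (-A * x + (B - x) * excit - (D + x) * inhib)

# Toy 1-D workspace: the target excites its cell, the obstacle inhibits its cell.
n, target, obstacle, w = 10, 9, 4, 0.5
x = np.zeros(n)
for _ in range(2000):
    ext_e = np.zeros(n); ext_e[target] = 5.0
    ext_i = np.zeros(n); ext_i[obstacle] = 5.0
    lateral = np.zeros(n)
    lateral[1:] += w * np.maximum(x[:-1], 0.0)   # only positive activity spreads
    lateral[:-1] += w * np.maximum(x[1:], 0.0)
    x = shunting_step(x, ext_e + lateral, ext_i)
# A robot at any cell moves to the neighbouring cell with the highest activity,
# climbing the landscape toward the target while being repelled by the obstacle.
```

No learning and no search are involved: the activity landscape itself encodes the collision-free path, which is why the method suits nonstationary environments.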

  8. Modelling of dissolved oxygen in the Danube River using artificial neural networks and Monte Carlo Simulation uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Antanasijević, Davor; Pocajt, Viktor; Perić-Grujić, Aleksandra; Ristić, Mirjana

    2014-11-01

    This paper describes the training, validation, testing and uncertainty analysis of general regression neural network (GRNN) models for the forecasting of dissolved oxygen (DO) in the Danube River. The main objectives of this work were to determine the optimum data normalization and input selection techniques, the determination of the relative importance of uncertainty in different input variables, as well as the uncertainty analysis of model results using the Monte Carlo Simulation (MCS) technique. Min-max, median, z-score, sigmoid and tanh were validated as normalization techniques, whilst the variance inflation factor, correlation analysis and genetic algorithm were tested as input selection techniques. As inputs, the GRNN models used 19 water quality variables, measured in the river water each month at 17 different sites over a period of 9 years. The best results were obtained using min-max normalized data and the input selection based on the correlation between DO and dependent variables, which provided the most accurate GRNN model and, in combination, the smallest number of inputs: Temperature, pH, HCO3-, SO42-, NO3-N, Hardness, Na, Cl-, Conductivity and Alkalinity. The results show that the correlation coefficient between measured and predicted DO values is 0.85. The inputs with the greatest effect on the GRNN model (arranged in descending order) were T, pH, HCO3-, SO42- and NO3-N. Of all inputs, variability of temperature had the greatest influence on the variability of DO content in the river body, with the DO decreasing at a rate similar to the theoretical DO decreasing rate relating to temperature. The uncertainty analysis of the model results demonstrates that the GRNN can effectively forecast the DO content, since the distribution of model results is very similar to the corresponding distribution of real data.
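
A GRNN of the kind used here is a one-pass kernel regressor, so "training" amounts to storing the data. A minimal sketch of min-max normalization followed by a GRNN prediction (the synthetic data and the smoothing parameter sigma are illustrative, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for normalized water-quality inputs and DO targets;
# in this toy set, DO falls with the first variable (think temperature).
Xtr = rng.uniform(size=(100, 3))
ytr = 10.0 - 4.0 * Xtr[:, 0] + 1.0 * Xtr[:, 1]

def min_max(a, lo, hi):
    """Min-max normalization to [0, 1], the scheme the study found best."""
    return (a - lo) / (hi - lo)

def grnn_predict(Xtr, ytr, xq, sigma=0.15):
    """General regression neural network: the prediction is a Gaussian-kernel
    weighted average of the training targets (Nadaraya-Watson form), so no
    iterative training is required."""
    d2 = np.sum((Xtr - xq) ** 2, axis=1)
    kernel = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(np.sum(kernel * ytr) / np.sum(kernel))

y_hat = grnn_predict(Xtr, ytr, np.array([0.5, 0.5, 0.5]))
```

The smoothing parameter sigma is the only free parameter; in practice it is chosen by validation on held-out data.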

  9. Adaptive Neural Networks for Automatic Negotiation

    SciTech Connect

    Sakas, D. P.; Vlachos, D. S.; Simos, T. E.

    2007-12-26

    The use of fuzzy logic and fuzzy neural networks has been found effective for the modelling of the uncertain relations between the parameters of a negotiation procedure. The problem with these configurations is that they are static; that is, any new knowledge from theory or experiment leads to the construction of entirely new models. To overcome this difficulty, we apply in this work an adaptive neural topology to model the negotiation process. Finally, a simple simulation is carried out in order to test the new method.

  10. Artificial neural networks in medicine

    SciTech Connect

    Keller, P.E.

    1994-07-01

    This Technology Brief provides an overview of artificial neural networks (ANN). A definition and explanation of an ANN is given and situations in which an ANN is used are described. ANN applications to medicine specifically are then explored and the areas in which it is currently being used are discussed. Included are medical diagnostic aides, biochemical analysis, medical image analysis and drug development.

  11. Neural networks for handwriting recognition

    NASA Astrophysics Data System (ADS)

    Kelly, David A.

    1992-09-01

    The market for a product that can read handwritten forms, such as insurance applications, re-order forms, or checks, is enormous. Companies could save millions of dollars each year if they had an effective and efficient way to read handwritten forms into a computer without human intervention. Urged on by the potential gold mine that an adequate solution would yield, a number of companies and researchers have developed, and are developing, neural network-based solutions to this long-standing problem. This paper briefly outlines the current state-of-the-art in neural network-based handwriting recognition research and products. The first section of the paper examines the potential market for this technology. The next section outlines the steps in the recognition process, followed by a number of the basic issues that need to be dealt with to solve the recognition problem in a real-world setting. Next, an overview of current commercial solutions and research projects shows the different ways that neural networks are applied to the problem. This is followed by a breakdown of the current commercial market and the future outlook for neural network-based handwriting recognition technology.

  12. How Neural Networks Learn from Experience.

    ERIC Educational Resources Information Center

    Hinton, Geoffrey E.

    1992-01-01

    Discusses computational studies of learning in artificial neural networks and findings that may provide insights into the learning abilities of the human brain. Describes efforts to test theories about brain information processing, using artificial neural networks. Vignettes include information concerning how a neural network represents…

  13. Model Of Neural Network With Creative Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Barhen, Jacob

    1993-01-01

    Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitably designed neural network; believed to be related to spontaneity and creativity of biological neural networks.

  14. Fuzzy logic and neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Lea, Robert N.; Savely, Robert T.

    1992-01-01

    Applications of fuzzy logic technologies in NASA projects are reviewed to examine their advantages in the development of neural networks for aerospace and commercial expert systems and control. Examples of fuzzy-logic applications include a 6-DOF spacecraft controller, collision-avoidance systems, and reinforcement-learning techniques. The commercial applications examined include a fuzzy autofocusing system, an air conditioning system, and an automobile transmission application. The practical use of fuzzy logic is set in the theoretical context of artificial neural systems (ANSs) to give the background for an overview of ANS research programs at NASA. The research and application programs include the Network Execution and Training Simulator and faster training algorithms such as the Difference Optimized Training Scheme. The networks are well suited for pattern-recognition applications such as predicting sunspots, controlling posture maintenance, and conducting adaptive diagnoses.

  15. Toward modeling a dynamic biological neural network.

    PubMed

    Ross, M D; Dayhoff, J E; Mugler, D H

    1990-01-01

    Mammalian macular endorgans are linear bioaccelerometers located in the vestibular membranous labyrinth of the inner ear. In this paper, the organization of the endorgan is interpreted on physical and engineering principles. This is a necessary prerequisite to mathematical and symbolic modeling of information processing by the macular neural network. Mathematical notations that describe the functioning system were used to produce a novel, symbolic model. The model is six-tiered and is constructed to mimic the neural system. Initial simulations show that the network functions best when some of the detecting elements (type I hair cells) are excitatory and others (type II hair cells) are weakly inhibitory. The simulations also illustrate the importance of disinhibition of receptors located in the third tier in shaping nerve discharge patterns at the sixth tier in the model system. PMID:11538873

  16. Foetal ECG recovery using dynamic neural networks.

    PubMed

    Camps-Valls, Gustavo; Martínez-Sober, Marcelino; Soria-Olivas, Emilio; Magdalena-Benedito, Rafael; Calpe-Maravilla, Javier; Guerrero-Martínez, Juan

    2004-07-01

    Non-invasive electrocardiography has proven to be a very interesting method for obtaining information about the foetus state and thus to assure its well-being during pregnancy. One of the main applications in this field is foetal electrocardiogram (ECG) recovery by means of automatic methods. Evident problems found in the literature are the limited number of available registers, the lack of performance indicators, and the limited use of non-linear adaptive methods. In order to circumvent these problems, we first introduce the generation of synthetic registers and discuss the influence of different kinds of noise on the modelling. Second, a method which is based on numerical (correlation coefficient) and statistical (analysis of variance, ANOVA) measures allows us to select the best recovery model. Finally, finite impulse response (FIR) and gamma neural networks are included in the adaptive noise cancellation (ANC) scheme in order to provide highly non-linear, dynamic capabilities to the recovery model. Neural networks are benchmarked with classical adaptive methods such as the least mean squares (LMS) and the normalized LMS (NLMS) algorithms in simulated and real registers and some conclusions are drawn. For synthetic registers, the most determinant factor in the identification of the models is the foetal-maternal signal-to-noise ratio (SNR). In addition, as the electromyogram contribution becomes more relevant, neural networks clearly outperform the LMS-based algorithm. From the ANOVA test, we found statistical differences between LMS-based models and neural models when complex situations (high foetal-maternal and foetal-noise SNRs) were present. These conclusions were confirmed after doing robustness tests on synthetic registers, visual inspection of the recovered signals and calculation of the recognition rates of foetal R-peaks for real situations. Finally, the best compromise between model complexity and outcomes was provided by the FIR neural network. Both

  17. Neural Networks For Visual Telephony

    NASA Astrophysics Data System (ADS)

    Gottlieb, A. M.; Alspector, J.; Huang, P.; Hsing, T. R.

    1988-10-01

    By considering how an image is processed by the eye and brain, we may find ways to simplify the task of transmitting complex video images over a telecommunication channel. Just as the retina and visual cortex reduce the amount of information sent to other areas of the brain, electronic systems can be designed to compress visual data, encode features, and adapt to new scenes for video transmission. In this talk, we describe a system inspired by models of neural computation that may, in the future, augment standard digital processing techniques for image compression. In the next few years it is expected that a compact low-cost full motion video telephone operating over an ISDN basic access line (144 KBits/sec) will be shown to be feasible. These systems will likely be based on a standard digital signal processing approach. In this talk, we discuss an alternative method that does not use standard digital signal processing but instead uses electronic neural networks to realize the large compression necessary for a low bit-rate video telephone. This neural network approach is not being advocated as a near term solution for visual telephony. However, low bit rate visual telephony is an area where neural network technology may, in the future, find a significant application.

  18. Validation and regulation of medical neural networks.

    PubMed

    Rodvold, D M

    2001-01-01

    Using artificial neural networks (ANNs) in medical applications can be challenging because of the often-experimental nature of ANN construction and the "black box" label that is frequently attached to them. In the US, medical neural networks are regulated by the Food and Drug Administration. This article briefly discusses the documented FDA policy on neural networks and the various levels of formal acceptance that neural network development groups might pursue. To assist medical neural network developers in creating robust and verifiable software, this paper provides a development process model targeted specifically to ANNs for critical applications. PMID:11790274

  19. Modeling and simulation of Streptomyces peucetius var. caesius N47 cultivation and epsilon-rhodomycinone production with kinetic equations and neural networks.

    PubMed

    Kiviharju, Kristiina; Salonen, Kalle; Leisola, Matti; Eerikäinen, Tero

    2006-11-10

    This study focuses on comparing different kinetic growth models and the use of neural networks in the batch cultivation of Streptomyces peucetius var. caesius producing epsilon-rhodomycinone. Contois, Monod and Teissier microbial growth models were used as well as the logistic growth modeling approach, which was found best in the simulations of growth and glucose consumption in the batch growth phase. The lag phase was included in the kinetic model with a CO2 trigger and a delay factor. Substrate consumption and product formation were included as Luedeking-Piret and logistic type equations, respectively. Biomass formation was modeled successfully with a 6-8-2 network, and the network was capable of biomass prediction with an R2-value of 0.983. Epsilon-rhodomycinone production was successfully modeled with a recursive 8-3-1 network capable of epsilon-rhodomycinone prediction with an R2-value of 0.903. The predictive power of the neural networks was superior to the kinetic models, which could not be used in predictive modeling of arbitrary batch cultivations. PMID:16797766
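
The kinetic skeleton the paper compares the networks against can be sketched with the logistic growth law plus a Luedeking-Piret-type substrate balance (parameter values below are illustrative, not fitted to the S. peucetius data):

```python
import numpy as np

def simulate_batch(mu=0.2, Xmax=10.0, a=2.0, b=0.02, X0=0.1, S0=30.0,
                   dt=0.1, t_end=60.0):
    """Euler integration of logistic growth with Luedeking-Piret-type
    substrate consumption:
        dX/dt  = mu * X * (1 - X / Xmax)
        -dS/dt = a * dX/dt + b * X
    (a: growth-associated term, b: non-growth-associated term)."""
    n = int(t_end / dt)
    X = np.empty(n + 1); S = np.empty(n + 1)
    X[0], S[0] = X0, S0
    for k in range(n):
        dX = mu * X[k] * (1.0 - X[k] / Xmax)
        X[k + 1] = X[k] + dt * dX
        S[k + 1] = max(S[k] - dt * (a * dX + b * X[k]), 0.0)
    return X, S

X, S = simulate_batch()
```

A fitted kinetic model of this shape interpolates one batch well; the paper's point is that the recursive neural networks generalize to arbitrary batches, which this fixed-parameter model cannot.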

  20. Neural networks as a control methodology

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1990-01-01

    While conventional computers must be programmed in a logical fashion by a person who thoroughly understands the task to be performed, the motivation behind neural networks is to develop machines which can train themselves to perform tasks, using available information about desired system behavior and learning from experience. There are three goals of this fellowship program: (1) to evaluate various neural net methods and generate computer software to implement those deemed most promising on a personal computer equipped with Matlab; (2) to evaluate methods currently in the professional literature for system control using neural nets to choose those most applicable to control of flexible structures; and (3) to apply the control strategies chosen in (2) to a computer simulation of a test article, the Control Structures Interaction Suitcase Demonstrator, which is a portable system consisting of a small flexible beam driven by a torque motor and mounted on springs tuned to the first flexible mode of the beam. Results of each are discussed.

  1. On lateral competition in dynamic neural networks

    SciTech Connect

    Bellyustin, N.S.

    1995-02-01

    Artificial neural networks connected homogeneously, which use retinal image processing methods, are considered. We point out that there are probably two different types of lateral inhibition of each neural element by the neighboring ones: one due to negative connection coefficients between elements, and one due to the decreasing response of a neuron to a too-high input signal. The first case is characterized by stable dynamics, given by the Lyapunov function, while in the second case stability is absent and two-dimensional dynamic chaos occurs if the time step in the integration of the model equations is large enough. The continuous neural medium approximation is used for analytical estimation in both cases. The result is the partition of the parameter space into domains with qualitatively different dynamic modes. Computer simulations confirm the estimates and show that joining two-dimensional chaos with symmetries provided by the initial and boundary conditions may produce patterns which are genuine pieces of art.

  2. Predicting hourly air pollutant levels using artificial neural networks coupled with uncertainty analysis by Monte Carlo simulations.

    PubMed

    Arhami, Mohammad; Kamali, Nima; Rajabi, Mohammad Mahdi

    2013-07-01

    Recent progress in developing artificial neural network (ANN) metamodels has paved the way for reliable use of these models in the prediction of air pollutant concentrations in urban atmosphere. However, improvement of prediction performance, proper selection of input parameters and model architecture, and quantification of model uncertainties remain key challenges to their practical use. This study has three main objectives: to select an ensemble of input parameters for ANN metamodels consisting of meteorological variables that are predictable by conventional weather forecast models and variables that properly describe the complex nature of pollutant source conditions in a major city, to optimize the ANN models to achieve the most accurate hourly prediction for a case study (city of Tehran), and to examine a methodology to analyze uncertainties based on ANN and Monte Carlo simulations (MCS). In the current study, the ANNs were constructed to predict criteria pollutants of nitrogen oxides (NOx), nitrogen dioxide (NO2), nitrogen monoxide (NO), ozone (O3), carbon monoxide (CO), and particulate matter with aerodynamic diameter of less than 10 μm (PM10) in Tehran based on the data collected at a monitoring station in the densely populated central area of the city. The best combination of input variables was comprehensively investigated taking into account the predictability of meteorological input variables and the study of model performance, correlation coefficients, and spectral analysis. Among numerous meteorological variables, wind speed, air temperature, relative humidity and wind direction were chosen as input variables for the ANN models. The complex nature of pollutant source conditions was reflected through the use of hour of the day and month of the year as input variables and the development of different models for each day of the week. After that, ANN models were constructed and validated, and a methodology of computing prediction intervals (PI) and
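
The Monte Carlo uncertainty step can be sketched independently of the trained networks: perturb the inputs with their assumed error distributions, push each sample through the model, and read prediction intervals off the percentiles. The surrogate model and the input standard deviations below are hypothetical, standing in for a trained ANN and its meteorological inputs:

```python
import numpy as np

rng = np.random.default_rng(1)

def ann_surrogate(x):
    """Hypothetical stand-in for a trained ANN metamodel mapping
    [wind speed, temperature] to a pollutant concentration."""
    return 50.0 - 8.0 * x[0] + 1.5 * x[1]

def mcs_prediction_interval(x, sigma, n=5000, level=95.0):
    """Monte Carlo simulation of input uncertainty: sample perturbed inputs,
    run the model on each sample, and report percentile-based bounds."""
    samples = rng.normal(loc=x, scale=sigma, size=(n, len(x)))
    preds = np.array([ann_surrogate(s) for s in samples])
    half = (100.0 - level) / 2.0
    lo, hi = np.percentile(preds, [half, 100.0 - half])
    return lo, hi

# 95% prediction interval at wind speed 2.0 m/s, temperature 20 C.
lo, hi = mcs_prediction_interval(np.array([2.0, 20.0]), np.array([0.5, 1.0]))
```

The width of the resulting interval directly reflects how strongly each uncertain input drives the prediction, which is how the study ranks input-variable uncertainty.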

  3. Terminal attractors in neural networks

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.
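
    The key mechanism above (violating the Lipschitz condition at a fixed point yields convergence in finite rather than infinite time) can be seen in a one-dimensional sketch. The flow dx/dt = -x^(1/3) below is an illustrative example of a terminal attractor, not the paper's network equations; its solution reaches x = 0 exactly at t = (3/2) x0^(2/3), while ordinary exponential decay only approaches 0 asymptotically:

```python
import math

# Euler-integrate two flows from x(0) = 1:
#   terminal: dx/dt = -x**(1/3)  (Lipschitz condition violated at x = 0)
#   regular:  dx/dt = -x         (ordinary exponential decay)
dt, T = 1e-3, 2.0
x_term, x_reg = 1.0, 1.0
for _ in range(int(T / dt)):
    x_term -= dt * math.copysign(abs(x_term) ** (1 / 3), x_term)
    x_reg -= dt * x_reg

# Theory: the terminal flow hits 0 at t = 1.5; the regular flow is still
# at e**-2 ≈ 0.135 when the simulation ends at t = 2.
print(abs(x_term), x_reg)
```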

  4. UCLA PUNNS -- A Neural Network Machine For Computer Vision

    NASA Astrophysics Data System (ADS)

    Gungner, David; Skrzypek, Josef

    1987-06-01

    The sequential processing paradigm limits current solutions for computer vision by restricting the number of functions which naturally map onto Von Neumann computing architectures. A variety of physical computing structures underlie the massive parallelism inherent in many visual functions. Therefore, further advances in general purpose vision must assume inseparability of function from structure. To combine function and structure we are investigating connectionist architectures using PUNNS (Perception Using Neural Network Simulation). Our approach is inspired and constrained by the analysis of visual functions that are computed in the neural networks of living things. PUNNS represents a massively parallel computer architecture which is evolving to allow the execution of certain visual functions in constant time, regardless of the size and complexity of the image. Due to the complexity and cost of building a neural net machine, a flexible neural net simulator is needed to invent, study and understand the behavior of complex vision algorithms. Some of the issues involved in building a simulator are how to compactly describe the interconnectivity of the neural network, how to input image data, how to program the neural network, and how to display the results of the network. This paper describes the implementation of PUNNS. Simulation examples and a comparison of PUNNS to other neural net simulators will be presented.

  5. Tests of track segment and vertex finding with neural networks

    SciTech Connect

    Denby, B.; Lessner, E.; Lindsey, C.S.

    1990-04-01

    Feed forward neural networks have been trained, using back-propagation, to find the slopes of simulated track segments in a straw chamber and to find the vertex of tracks from both simulated and real events in a more conventional drift chamber geometry. Network architectures, training, and performance are presented. 12 refs., 7 figs.

  6. The LILARTI neural network system

    SciTech Connect

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    This Technical Memorandum provides the reader with conceptual and technical background on the LILARTI neural network system, in sufficient detail to convey an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  7. The hysteretic Hopfield neural network.

    PubMed

    Bharitkar, S; Mendel, J M

    2000-01-01

    A new neuron activation function based on a property found in physical systems--hysteresis--is proposed. We incorporate this neuron activation in a fully connected dynamical system to form the hysteretic Hopfield neural network (HHNN). We then present an analog implementation of this architecture and its associated dynamical equation and energy function. We proceed to prove Lyapunov stability for this new model, and then solve a combinatorial optimization problem (i.e., the N-queen problem) using this network. We demonstrate the advantages of hysteresis by showing increased frequency of convergence to a solution, when the parameters associated with the activation function are varied. PMID:18249816
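
    The defining property of a hysteretic activation is that its output depends on the neuron's history, not just its current input. A minimal binary sketch of that idea (an illustrative assumption; the paper's actual activation and its analog implementation are continuous):

```python
def hysteretic_step(u, state, width=0.5):
    """Binary activation with hysteresis: the switching threshold depends
    on the neuron's current state, so the output is history-dependent."""
    if state > 0 and u < -width:
        return -1
    if state < 0 and u > width:
        return +1
    return state  # inside the hysteresis band: hold the previous output

# The same input u = 0.0 gives different outputs for different histories:
up = hysteretic_step(0.0, state=+1)    # stays +1
down = hysteretic_step(0.0, state=-1)  # stays -1
print(up, down)
```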

  8. Image texture segmentation using a neural network

    NASA Astrophysics Data System (ADS)

    Sayeh, Mohammed R.; Athinarayanan, Ragu; Dhali, Pushpuak

    1992-09-01

    In this paper we use a neural network called the Lyapunov associative memory (LYAM) system to segment image texture into different categories or clusters. The LYAM system is constructed from a set of ordinary differential equations which are simulated on a digital computer. In the simplest model, the clustering can be achieved with a single tuning parameter. Pattern classes are represented by the stable equilibrium states of the system. Design of the system is based on synthesizing two local energy functions, namely, the learning and recall energy functions. Before the segmentation process is implemented, a Gauss-Markov random field (GMRF) model is applied to the raw image. This application suitably reduces the image data and prepares the texture information for the neural network process. We give a simple image example illustrating the capability of the technique. The GMRF-generated features are also used for clustering based on the Euclidean distance.

  9. Application of neural networks in space construction

    NASA Technical Reports Server (NTRS)

    Thilenius, Stephen C.; Barnes, Frank

    1990-01-01

    When trying to decide which tasks should be done by robots and which should be done by humans in space construction, there has been one decisive barrier which ultimately divides the tasks: can a computer do the job? Von Neumann type computers have great difficulty with problems that the human brain seems to handle instantaneously and with little effort. Some of these problems are pattern recognition, speech recognition, content addressable memories, and command interpretation. In an attempt to simulate these talents of the human brain, much research is currently being done into the operation and construction of artificial neural networks. The efficiency of the interface between man and machine, robots in particular, can therefore be greatly improved with the use of neural networks. For example, wouldn't it be easier to command a robot to 'fetch an object' rather than having to control the entire operation by remote control?

  10. Neural Flows in Hopfield Network Approach

    NASA Astrophysics Data System (ADS)

    Ionescu, Carmen; Panaitescu, Emilian; Stoicescu, Mihai

    2013-12-01

    In most of the applications involving neural networks, the main problem consists in finding an optimal procedure for reducing the real neuron to simpler models which still express the biological complexity but allow highlighting the main characteristics of the system. We effectively investigate a simple reduction procedure which leads from complex models of Hodgkin-Huxley type to very convenient binary models of Hopfield type. The reduction allows us to describe the neuron interconnections in a quite large network and to obtain information concerning its symmetry and stability. Both cases, of homogeneous voltage across the membrane and of inhomogeneous voltage along the axon, will be tackled. A few numerical simulations of the neural flow based on the cable equation will also be presented.

  11. VLSI implementable neural networks for target tracking

    NASA Astrophysics Data System (ADS)

    Himes, Glenn S.; Inigo, Rafael M.; Narathong, Chiewcharn

    1991-08-01

    This paper describes part of an integrated system for target tracking. The image is acquired, edge detected, and segmented by a subsystem not discussed in this paper. Algorithms to determine the centroid of a windowed target using neural networks are developed. Further, once the target centroid is determined, it is continuously updated in order to track the trajectory, since the centroid location is not dependent on scaling or rotation on the optical axis. The image is then mapped to a log-spiral grid. A conformal transformation is used to map the log-spiral grid to a computation plane in which rotations and scalings are transformed to displacements along the vertical and horizontal axes, respectively. The images in this plane are used for recognition. The recognition algorithms are the subject of another paper. A second neural network, also described in this paper, is then used to determine object rotation and scaling. The algorithm used by this network is an original line correlator tracker which, as the name indicates, uses linear instead of 2D correlations. Simulation results using ICBM images are presented for both the centroid neural net and the rotation-scaling detection network.

  12. Inversion of surface parameters using fast learning neural networks

    NASA Technical Reports Server (NTRS)

    Dawson, M. S.; Olvera, J.; Fung, A. K.; Manry, M. T.

    1992-01-01

    A neural network approach to the inversion of surface scattering parameters is presented. Simulated data sets based on a surface scattering model are used so that the data may be viewed as taken from a completely known randomly rough surface. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) are tested on the simulated backscattering data. The RMS error of training the FL network is found to be less than one half the error of the BP network while requiring one to two orders of magnitude less CPU time. When applied to inversion of parameters from a statistically rough surface, the FL method is successful at recovering the surface permittivity, the surface correlation length, and the RMS surface height in less time and with less error than the BP network. Further applications of the FL neural network to the inversion of parameters from backscatter measurements of an inhomogeneous layer above a half space are shown.

  13. Load forecasting using artificial neural networks

    SciTech Connect

    Pham, K.D.

    1995-12-31

    Artificial neural networks, modeled after their biological counterparts, have been successfully applied in many diverse areas including speech and pattern recognition, remote sensing, electrical power engineering, robotics, and stock market forecasting. The most commonly used neural networks are those that gain knowledge from experience. Experience is presented to the network in the form of training data. Once trained, the neural network can recognize data that it has not seen before. This paper presents a fundamental introduction to the manner in which neural networks work and how to use them in load forecasting.

  14. Fault detection and diagnosis using neural network approaches

    NASA Technical Reports Server (NTRS)

    Kramer, Mark A.

    1992-01-01

    Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used, the first based on training networks using data representing both normal and abnormal modes of process behavior, and the second based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, to produce real-time estimation of missing or failed sensors based on the correlations codified in the neural network.

  15. Nonlinear PLS modeling using neural networks

    SciTech Connect

    Qin, S.J.; McAvoy, T.J.

    1994-12-31

    This paper discusses the embedding of neural networks into the framework of the PLS (partial least squares) modeling method, resulting in a neural net PLS modeling approach. By using the universal approximation property of neural networks, the PLS modeling method is generalized to a nonlinear framework. The resulting model uses neural networks to capture the nonlinearity and retains the PLS projection to attain a robust generalization property. In this paper, the standard PLS modeling method is briefly reviewed. Then a neural net PLS (NNPLS) modeling approach is proposed which incorporates feedforward networks into the PLS modeling. A multi-input-multi-output nonlinear modeling task is decomposed into linear outer relations and simple nonlinear inner relations which are performed by a number of single-input-single-output networks. Since only a small network is trained at one time, the over-parametrized problem of the direct neural network approach is circumvented even when the training data are very sparse. A conjugate gradient learning method is employed to train the network. It is shown, by analyzing the NNPLS algorithm, that the global NNPLS model is equivalent to a multilayer feedforward network. Finally, applications of the proposed NNPLS method are presented with comparison to the standard linear PLS method and the direct neural network approach. The proposed neural net PLS method gives better prediction results than the PLS modeling method and the direct neural network approach.
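
    The outer/inner decomposition described above can be sketched in a few lines: project the inputs onto one PLS direction (the linear outer relation), then fit a single-input-single-output nonlinear inner relation on the score. For brevity the SISO neural network is replaced here by a cubic least-squares fit; the data, the one-component restriction, and that substitution are all illustrative assumptions:

```python
import math

# Synthetic data: y depends nonlinearly on one latent direction of X.
X = [[i / 10, (20 - i) / 10] for i in range(21)]   # 21 samples, 2 inputs
y = [math.tanh(x1 - x2) for x1, x2 in X]

# Outer relation (one PLS component): weight w ∝ X^T y, score t = X w.
w = [sum(row[j] * yi for row, yi in zip(X, y)) for j in range(2)]
norm = math.hypot(*w)
w = [wj / norm for wj in w]
t = [row[0] * w[0] + row[1] * w[1] for row in X]

def lstsq_fit(features):
    """Tiny normal-equations least squares for a 2-parameter model of y."""
    a11 = sum(f[0] * f[0] for f in features)
    a12 = sum(f[0] * f[1] for f in features)
    a22 = sum(f[1] * f[1] for f in features)
    b1 = sum(f[0] * yi for f, yi in zip(features, y))
    b2 = sum(f[1] * yi for f, yi in zip(features, y))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# Inner relation: linear vs. nonlinear (cubic term standing in for a SISO net).
c1, c0 = lstsq_fit([[ti, 1.0] for ti in t])
d1, d3 = lstsq_fit([[ti, ti ** 3] for ti in t])
mse_lin = sum((c1 * ti + c0 - yi) ** 2 for ti, yi in zip(t, y)) / len(y)
mse_nl = sum((d1 * ti + d3 * ti ** 3 - yi) ** 2 for ti, yi in zip(t, y)) / len(y)
print(mse_lin, mse_nl)  # the nonlinear inner relation fits better
```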

  16. Learning evasive maneuvers using evolutionary algorithms and neural networks

    NASA Astrophysics Data System (ADS)

    Kang, Moung Hung

    In this research, evolutionary algorithms and recurrent neural networks are combined to evolve control knowledge to help pilots avoid being struck by a missile, based on a two-dimensional air combat simulation model. The recurrent neural network is used to represent the pilot's control knowledge, and evolutionary algorithms (i.e., Genetic Algorithms, Evolution Strategies, and Evolutionary Programming) are used to optimize the weights and/or topology of the recurrent neural network. The simulation model of the two-dimensional evasive maneuver problem is used to evaluate the performance of the evolved recurrent neural network. Five typical air combat conditions were selected to evaluate the performance of the recurrent neural networks evolved by the evolutionary algorithms. Analysis of Variance (ANOVA) tests and response graphs were used to analyze the results. Overall, there was little difference in the performance of the three evolutionary algorithms used to evolve the control knowledge. However, the number of generations each algorithm required to obtain the best performance differed significantly: ES converges the fastest, followed by EP and then by GA. The recurrent neural networks evolved by the evolutionary algorithms provided better performance than the traditional recommendation for evasive maneuvers, the maximum gravitational turn, in each air combat condition. Furthermore, the recommended actions of the recurrent neural networks are reasonable and can be used for pilot training.

  17. A classifier neural network for rotordynamic systems

    NASA Astrophysics Data System (ADS)

    Ganesan, R.; Jionghua, Jin; Sankar, T. S.

    1995-07-01

    A feedforward backpropagation neural network is formed to identify the stability characteristics of a high-speed rotordynamic system. The principal focus resides in accounting for the instability due to bearing clearance effects. The abnormal operating condition of 'normal-loose' Coulomb rub, which arises in units supported by hydrodynamic bearings or rolling element bearings, is analysed in detail. The multiple-parameter stability problem is formulated and converted to a set of three-parameter algebraic inequality equations. These three parameters map the wider range of physical parameters of commonly used rotordynamic systems into a narrow closed region, which is used in the supervised learning of the neural network. A binary-type state of the system is expressed through these inequalities, which are deduced from the analytical simulation of the rotor system. Both hidden-layer and functional-link networks are formed, and the superiority of the functional-link network is established. Considering the real-time interpretation and control of the rotordynamic system, the network reliability and the learning time are used as the evaluation criteria to assess the superiority of the functional-link network. This functional-link network is further trained using the parameter values of selected rotor systems, and the classifier network is formed. The success rate of stability status identification is obtained to assess the potential of this classifier network. It is shown that the classifier network can also be used, for control purposes, as an 'advisory' system that suggests the optimum way of adjusting parameters.

  18. On-line lower-order modeling via neural networks.

    PubMed

    Ho, H F; Rad, A B; Wong, Y K; Lo, W L

    2003-10-01

    This paper presents a novel method to determine the parameters of a first-order plus dead-time model using neural networks. The outputs of the neural networks are the gain, dominant time constant, and apparent time delay. By combining this algorithm with a conventional PI or PID controller, we also present an adaptive controller which requires very little a priori knowledge about the plant under control. The simplicity of the scheme for real-time control provides a new approach for implementing neural network applications for a variety of on-line industrial control problems. Simulation and experimental results demonstrate the feasibility and adaptive property of the proposed scheme. PMID:14582882
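
    The three outputs the networks estimate (gain K, dominant time constant tau, and apparent time delay theta) fully determine a first-order-plus-dead-time step response. A sketch of that response, useful for seeing how each parameter shows up in the data the networks learn from (parameter values are arbitrary examples):

```python
# Unit-step response of a first-order-plus-dead-time (FOPDT) process:
# y(t) = 0 for t < theta, else K * (1 - exp(-(t - theta) / tau)).
def fopdt_step(K=2.0, tau=5.0, theta=1.0, dt=0.01, t_end=30.0):
    y, out = 0.0, []
    for k in range(int(t_end / dt)):
        t = k * dt
        u = 1.0 if t >= theta else 0.0   # delayed step input
        y += dt * (K * u - y) / tau      # Euler integration of tau*y' = K*u - y
        out.append((t, y))
    return out

resp = fopdt_step()
# The response reaches 63.2% of the gain K at t = theta + tau.
t63 = next(t for t, y in resp if y >= 0.632 * 2.0)
print(t63)  # ≈ 6.0 = theta + tau
```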

  19. Robust neural network with applications to credit portfolio data analysis

    PubMed Central

    Feng, Yijia; Li, Runze; Sudjianto, Agus; Zhang, Yiyun

    2011-01-01

    In this article, we study nonparametric conditional quantile estimation via a neural network structure. We propose an estimation method that combines quantile regression and a neural network (the robust neural network, RNN). It provides good smoothing performance in the presence of outliers and can be used to construct prediction bands. A Majorization-Minimization (MM) algorithm is developed for the optimization. A Monte Carlo simulation study is conducted to assess the performance of the RNN. Comparisons with other nonparametric regression methods (e.g., local linear regression and regression splines) in a real data application demonstrate the advantage of the newly proposed procedure. PMID:21687821
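
    Quantile regression owes its robustness to the check (pinball) loss: minimizing it over a constant recovers the empirical quantile, so a single gross outlier barely moves the fit. A minimal sketch (the grid search over a constant is an illustrative stand-in for training the RNN):

```python
def pinball(residual, tau):
    """Check loss: asymmetric absolute error used in quantile regression."""
    return tau * residual if residual >= 0 else (tau - 1) * residual

data = [1.0, 2.0, 3.0, 4.0, 100.0]  # one gross outlier
tau = 0.5                           # tau = 0.5 targets the median

# Grid-search the constant c minimizing the total pinball loss of (y - c).
candidates = [c / 10 for c in range(0, 1100)]
best = min(candidates, key=lambda c: sum(pinball(y - c, tau) for y in data))
print(best)  # 3.0, the median: the outlier barely moves the fit
```

A squared-error fit of the same data would be pulled toward the outlier (the mean is 22.0), which is the robustness argument behind the RNN.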

  20. Prediction of Aerodynamic Coefficients using Neural Networks for Sparse Data

    NASA Technical Reports Server (NTRS)

    Rajkumar, T.; Bardina, Jorge; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Basic aerodynamic coefficients are modeled as functions of angles of attack and sideslip with vehicle lateral symmetry and compressibility effects. Most of the aerodynamic parameters can be well-fitted using polynomial functions. In this paper a fast, reliable way of predicting aerodynamic coefficients is produced using a neural network. The training data for the neural network is derived from wind tunnel test and numerical simulations. The coefficients of lift, drag, pitching moment are expressed as a function of alpha (angle of attack) and Mach number. The results produced from preliminary neural network analysis are very good.

  1. Neural networks for aircraft system identification

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.

    1991-01-01

    Artificial neural networks offer some interesting possibilities for use in control. Our current research is on the use of neural networks on an aircraft model. The model can then be used in a nonlinear control scheme. The effectiveness of network training is demonstrated.

  2. Neural networks and MIMD-multiprocessors

    NASA Technical Reports Server (NTRS)

    Vanhala, Jukka; Kaski, Kimmo

    1990-01-01

    Two artificial neural network models are compared. They are the Hopfield Neural Network Model and the Sparse Distributed Memory model. Distributed algorithms for both of them are designed and implemented. The run time characteristics of the algorithms are analyzed theoretically and tested in practice. The storage capacities of the networks are compared. Implementations are done using a distributed multiprocessor system.

  3. Neural Networks in Nonlinear Aircraft Control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.

    1990-01-01

    Recent research indicates that artificial neural networks offer interesting learning or adaptive capabilities. The current research focuses on the potential for application of neural networks in a nonlinear aircraft control law. The current work has been to determine which networks are suitable for such an application and how they will fit into a nonlinear control law.

  4. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks), that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four-step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  5. Constructive neural network learning algorithms

    SciTech Connect

    Parekh, R.; Yang, Jihoon; Honavar, V.

    1996-12-31

    Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad hoc a priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., the pocket algorithm, the thermal perceptron, and the barycentric correction procedure). Several constructive algorithms, including tower, pyramid, tiling, upstart, and perceptron cascade, have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
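
    A minimal sketch of the pocket algorithm mentioned above: run ordinary perceptron updates, but keep the best-so-far weights "in the pocket" so a late bad update cannot destroy a good solution. The toy OR-style dataset and cyclic (rather than random) sample order are illustrative simplifications:

```python
# Linearly separable 2-input OR-like task; inputs carry a bias input of 1.
data = [((0, 0, 1), -1), ((0, 1, 1), 1), ((1, 0, 1), 1), ((1, 1, 1), 1)]

def accuracy(w):
    return sum((sum(wi * xi for wi, xi in zip(w, x)) > 0) == (t > 0)
               for x, t in data) / len(data)

w = [0.0, 0.0, 0.0]
pocket, pocket_acc = list(w), accuracy(w)
for _ in range(50):
    for x, t in data:
        if (sum(wi * xi for wi, xi in zip(w, x)) > 0) != (t > 0):
            w = [wi + t * xi for wi, xi in zip(w, x)]   # perceptron update
            if accuracy(w) > pocket_acc:                # keep the best seen
                pocket, pocket_acc = list(w), accuracy(w)
print(pocket, pocket_acc)  # the pocket weights classify all samples: 1.0
```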

  6. Sea ice classification using fast learning neural networks

    NASA Technical Reports Server (NTRS)

    Dawson, M. S.; Fung, A. K.; Manry, M. T.

    1992-01-01

    A fast learning neural network approach to the classification of sea ice is presented. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) were tested on simulated data sets based on the known dominant scattering characteristics of the target class. Four classes were used in the data simulation: open water, thick lossy saline ice, thin saline ice, and multiyear ice. The BP network was unable to consistently converge to less than 25 percent error while the FL method yielded an average error of approximately 1 percent on the first iteration of training. The fast learning method presented can significantly reduce the CPU time necessary to train a neural network as well as consistently yield higher classification accuracy than BP networks.

  7. Adaptive optimization and control using neural networks

    SciTech Connect

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  8. File access prediction using neural networks.

    PubMed

    Patra, Prashanta Kumar; Sahu, Muktikanta; Mohapatra, Subasish; Samantray, Ronak Kumar

    2010-06-01

    One of the most vexing issues in the design of a high-speed computer is the wide gap of access times between the memory and the disk. To solve this problem, static file access predictors have been used. In this paper, we propose dynamic file access predictors using neural networks to significantly improve upon the accuracy, success-per-reference, and effective-success-rate-per-reference, with proper tuning of the neural-network-based predictor. In particular, we verified that incorrect predictions are reduced from 53.11% to 43.63% when the proposed neural network prediction method with a standard configuration replaces the recent popularity (RP) method. With manual tuning for each trace, we are able to improve further upon the misprediction rate and effective-success-rate-per-reference of the standard configuration. Simulations on distributed file system (DFS) traces reveal that the exact fit radial basis function (RBF) network gives better prediction on high-end systems, whereas a multilayer perceptron (MLP) trained with Levenberg-Marquardt (LM) backpropagation performs best on systems with good computational capability. Probabilistic and competitive predictors are the most suitable for workstations with limited resources, and the former predictor is more efficient than the latter for servers handling the most system calls. Finally, we conclude that the MLP with the LM backpropagation algorithm has a better success rate of file prediction than simple perceptron, last successor, stable successor, and best k out of m predictors. PMID:20421183
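
    The last-successor baseline the paper compares against is simple enough to sketch directly: predict that the next access after file F will be whatever followed F last time. An illustrative toy trace (not from the paper's DFS data):

```python
def last_successor_accuracy(trace):
    """Predict each access as the file that last followed the current file;
    return the hit rate over predictions where a successor was known."""
    succ, hits, total = {}, 0, 0
    for prev, cur in zip(trace, trace[1:]):
        if prev in succ:
            total += 1
            hits += succ[prev] == cur
        succ[prev] = cur
    return hits / total

# A trace with a stable repeating access pattern is predicted perfectly
# once each successor has been seen once.
trace = ["a", "b", "c", "a", "b", "c", "a", "b", "c", "a"]
print(last_successor_accuracy(trace))  # 1.0
```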

  9. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor communications. This paper considers the simulation of only feed-forward neural networks, although this method is extendable to recurrent networks.
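
    The O(log p) global summation mentioned above comes from a pairwise tree reduction: each round halves the number of partial sums, so p processors need ceil(log2(p)) communication rounds. A serial sketch of the reduction pattern (the actual CM-2 implementation is hardware-specific):

```python
def tree_sum(values):
    """Pairwise tree reduction: each round halves the number of partial sums."""
    rounds = 0
    while len(values) > 1:
        values = [values[i] + (values[i + 1] if i + 1 < len(values) else 0)
                  for i in range(0, len(values), 2)]
        rounds += 1
    return values[0], rounds

# 64 "processors", each holding one partial sum:
total, rounds = tree_sum(list(range(64)))
print(total, rounds)  # 2016 in log2(64) = 6 rounds
```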

  10. Complexity matching in neural networks

    NASA Astrophysics Data System (ADS)

    Usefie Mafahim, Javad; Lambert, David; Zare, Marzieh; Grigolini, Paolo

    2015-01-01

    In the wide literature on brain and neural network dynamics the notion of criticality is being adopted by an increasing number of researchers, with no general agreement on its theoretical definition, but with consensus that criticality makes the brain very sensitive to external stimuli. We adopt the complexity matching principle that the maximal efficiency of communication between two complex networks is realized when both of them are at criticality. We use this principle to establish the value of the neuronal interaction strength at which criticality occurs, yielding perfect agreement with the adoption of temporal complexity as a criticality indicator. The emergence of a scale-free distribution of avalanche size is proved to occur in a supercritical regime. We use an integrate-and-fire model where the randomness of each neuron is only due to the random choice of a new initial condition after firing. The new model shares with that proposed by Izhikevich the property of generating excessive periodicity, and with it the annihilation of temporal complexity at supercritical values of the interaction strength. We find that the concentration of inhibitory links can be used as a control parameter and that for a sufficiently large concentration of inhibitory links criticality is recovered. Finally, we show that the response of a neural network at criticality to a harmonic stimulus is very weak, in accordance with the complexity matching principle.
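
    The abstract's modeling choice (randomness entering only through the initial condition drawn after each firing) can be sketched for a single leaky integrate-and-fire neuron. All parameter values are illustrative assumptions, and the sketch omits the network coupling and avalanche statistics of the actual study:

```python
import random

random.seed(0)

def lif_spike_times(current=1.5, tau=10.0, threshold=1.0,
                    dt=0.01, t_end=200.0):
    """Leaky integrate-and-fire neuron; after each spike the membrane is
    reset to a *random* sub-threshold value (the model's only randomness)."""
    v, spikes = 0.0, []
    for k in range(int(t_end / dt)):
        v += dt * (current - v) / tau       # Euler step of tau*v' = I - v
        if v >= threshold:
            spikes.append(k * dt)
            v = random.uniform(0.0, 0.5)    # random initial condition after firing
    return spikes

spikes = lif_spike_times()
print(len(spikes))  # inter-spike intervals vary only through the random resets
```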

  11. Advances in neural networks research: an introduction.

    PubMed

    Kozma, Robert; Bressler, Steven; Perlovsky, Leonid; Venayagamoorthy, Ganesh Kumar

    2009-01-01

    The present Special Issue "Advances in Neural Networks Research: IJCNN2009" provides a state-of-art overview of the field of neural networks. It includes 39 papers from selected areas of the 2009 International Joint Conference on Neural Networks (IJCNN2009). IJCNN2009 took place on June 14-19, 2009 in Atlanta, Georgia, USA, and it represents an exemplary collaboration between the International Neural Networks Society and the IEEE Computational Intelligence Society. Topics in this issue include neuroscience and cognitive science, computational intelligence and machine learning, hybrid techniques, nonlinear dynamics and chaos, various soft computing technologies, intelligent signal processing and pattern recognition, bioinformatics and biomedicine, and engineering applications. PMID:19632811

  12. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

    A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve the steps of reading training data into a memory and determining neural network weighting values until target outputs close to the neural network output are achieved. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs; signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the industrial process. 33 figs.

  13. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, Richard B.; Gross, Kenneth C.; Wegerich, Stephan W.

    1998-01-01

    A method and system for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into a memory and determining neural network weighting values until the neural network outputs are close to the target outputs. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs; signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the industrial process.

  14. Neural network modeling of distillation columns

    SciTech Connect

    Baratti, R.; Vacca, G.; Servida, A.

    1995-06-01

    Neural network modeling (NNM) was implemented for monitoring and control applications on two actual distillation columns: the butane splitter tower and the gasoline stabilizer. The two distillation columns are in operation at the SARAS refinery. Results show that, with proper implementation techniques, NNM can significantly improve column operation. The common belief that neural networks can be used as black-box process models is not completely true. Effective implementation always requires a minimum degree of process knowledge to identify the relevant inputs to the net. After background and generalities on neural network modeling, the paper describes efforts in the development of neural networks for the two distillation units.

  15. The Neural Network Method of Corrosion Diagnosis for Grounding Grid

    SciTech Connect

    Hou Zaien; Duan Fujian; Zhang Kecun

    2008-11-06

    Safety of persons, protection of equipment and continuity of power supply are the main objectives of the grounding system of a large electrical installation. To assess its working status accurately, it is essential to determine every branch resistance in the system. In this paper, we present a corrosion-diagnosis method for the grounding grid based on neural network theory. The feasibility of this method is discussed by means of its application to a simulated grounding grid.

  16. An efficient annealing in Boltzmann machine in Hopfield neural network

    NASA Astrophysics Data System (ADS)

    Kin, Teoh Yeong; Hasan, Suzanawati Abu; Bulot, Norhisam; Ismail, Mohammad Hafiz

    2012-09-01

    This paper proposes and implements a Boltzmann machine in a Hopfield neural network for logic programming based on energy minimization. Temperature scheduling in the Boltzmann machine enhances the performance of logic programming in the Hopfield neural network. The optimal temperature is determined by observing the ratio of global solutions and the final Hamming distance in computer simulations. The study shows that the Boltzmann machine model is more stable and competent in terms of representing and solving difficult combinatorial problems.
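
    The temperature-scheduled Boltzmann update the abstract builds on can be sketched as follows. This is a plain Hebbian pattern-retrieval task rather than the paper's logic-programming energy function, and all temperatures and rates are illustrative; the final Hamming distance to the stored pattern (0 or n, since the inverted pattern is an equivalent energy minimum) indicates retrieval.

```python
import math
import random

def anneal(pattern, t0=5.0, cooling=0.9, sweeps=60, seed=0):
    """Stochastic Hopfield updates with a cooling schedule; returns Hamming distance."""
    rng = random.Random(seed)
    n = len(pattern)
    # Hebbian weights for one stored pattern, no self-coupling
    w = [[pattern[i] * pattern[j] if i != j else 0.0 for j in range(n)]
         for i in range(n)]
    s = [rng.choice((-1, 1)) for _ in range(n)]        # random initial state
    t = t0
    for _ in range(sweeps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))  # local field on neuron i
            arg = -2.0 * h / t
            p_on = 0.0 if arg > 700 else 1.0 / (1.0 + math.exp(arg))  # Boltzmann rule
            s[i] = 1 if rng.random() < p_on else -1
        t *= cooling                                   # temperature schedule
    return sum(a != b for a, b in zip(s, pattern))     # final Hamming distance

pattern = [1 if i % 3 else -1 for i in range(16)]
dist = anneal(pattern)
```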

  17. Wear Scar Similarities between Retrieved and Simulator-Tested Polyethylene TKR Components: An Artificial Neural Network Approach.

    PubMed

    Orozco Villaseñor, Diego A; Wimmer, Markus A

    2016-01-01

    The aim of this study was to determine how representative wear scars of simulator-tested polyethylene (PE) inserts compare with retrieved PE inserts from total knee replacement (TKR). By means of a nonparametric self-organizing feature map (SOFM), wear scar images of 21 postmortem- and 54 revision-retrieved components were compared with six simulator-tested components that were tested either in displacement or in load control according to ISO protocols. The SOFM network was then trained with the wear scar images of postmortem-retrieved components since those are considered well-functioning at the time of retrieval. Based on this training process, eleven clusters were established, suggesting considerable variability among wear scars despite an uncomplicated loading history inside their hosts. The remaining components (revision-retrieved and simulator-tested) were then assigned to these established clusters. Five of the six simulator-tested components were clustered together, suggesting that the network was able to identify similarities in loading history. However, the simulator-tested components ended up in a cluster at the fringe of the map containing only 10.8% of retrieved components. This may suggest that current ISO testing protocols were not fully representative of this TKR population, and protocols that better resemble patients' gait after TKR containing activities other than walking may be warranted. PMID:27597955
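
    The clustering step can be illustrated with a tiny one-dimensional self-organizing feature map (SOFM) trained on two-dimensional points standing in for wear-scar image features; the node count, rates and data below are invented for illustration and are not the study's settings.

```python
import random

def train_sofm(data, n_nodes=4, epochs=50, lr=0.5, seed=0):
    """Train a 1-D SOFM on 2-D points; returns the node weight vectors."""
    rng = random.Random(seed)
    nodes = [[rng.random(), rng.random()] for _ in range(n_nodes)]
    for epoch in range(epochs):
        rate = lr * (1.0 - epoch / epochs)          # decaying learning rate
        for x in data:
            # best-matching unit = node nearest to the sample
            bmu = min(range(n_nodes),
                      key=lambda i: sum((nodes[i][d] - x[d]) ** 2 for d in range(2)))
            for i in range(n_nodes):
                # neighborhood weight falls off with distance along the 1-D map
                h = 1.0 if i == bmu else (0.5 if abs(i - bmu) == 1 else 0.0)
                for d in range(2):
                    nodes[i][d] += rate * h * (x[d] - nodes[i][d])
    return nodes

def assign(nodes, x):
    """Cluster assignment = index of the best-matching unit."""
    return min(range(len(nodes)),
               key=lambda i: sum((nodes[i][d] - x[d]) ** 2 for d in range(2)))

# two well-separated synthetic groups standing in for wear-scar features
data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.85, 0.95]]
nodes = train_sofm(data)
```

    After training, `assign` plays the role of placing a new (e.g. simulator-tested) component into one of the established clusters.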

  18. Wear Scar Similarities between Retrieved and Simulator-Tested Polyethylene TKR Components: An Artificial Neural Network Approach

    PubMed Central

    2016-01-01

    The aim of this study was to determine how representative wear scars of simulator-tested polyethylene (PE) inserts compare with retrieved PE inserts from total knee replacement (TKR). By means of a nonparametric self-organizing feature map (SOFM), wear scar images of 21 postmortem- and 54 revision-retrieved components were compared with six simulator-tested components that were tested either in displacement or in load control according to ISO protocols. The SOFM network was then trained with the wear scar images of postmortem-retrieved components since those are considered well-functioning at the time of retrieval. Based on this training process, eleven clusters were established, suggesting considerable variability among wear scars despite an uncomplicated loading history inside their hosts. The remaining components (revision-retrieved and simulator-tested) were then assigned to these established clusters. Five of the six simulator-tested components were clustered together, suggesting that the network was able to identify similarities in loading history. However, the simulator-tested components ended up in a cluster at the fringe of the map containing only 10.8% of retrieved components. This may suggest that current ISO testing protocols were not fully representative of this TKR population, and protocols that better resemble patients' gait after TKR containing activities other than walking may be warranted. PMID:27597955

  19. Neural Network Classifies Teleoperation Data

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Giancaspro, Antonio; Losito, Sergio; Pasquariello, Guido

    1994-01-01

    Prototype artificial neural network, implemented in software, identifies phases of telemanipulator tasks in real time by analyzing feedback signals from force sensors on manipulator hand. Prototype is early, subsystem-level product of continuing effort to develop automated system that assists in training and supervising human control operator: provides symbolic feedback (e.g., warnings of impending collisions or evaluations of performance) to operator in real time during successive executions of same task. Also simplifies transition between teleoperation and autonomous modes of telerobotic system.

  20. Storage capacity and retrieval time of small-world neural networks

    SciTech Connect

    Oshima, Hiraku; Odagaki, Takashi

    2007-09-15

    To understand the influence of structure on the function of neural networks, we study the storage capacity and the retrieval time of Hopfield-type neural networks for four network structures: regular, small-world, and random networks generated by the Watts-Strogatz (WS) model, and the neural network of the nematode Caenorhabditis elegans. Using computer simulations, we find that (1) as the randomness of the network is increased, its storage capacity is enhanced; (2) the retrieval time of WS networks does not depend on the network structure, but the retrieval time of the C. elegans neural network is longer than that of WS networks; (3) the storage capacity of the C. elegans network is smaller than that of networks generated by the WS model, though the neural network of C. elegans is considered to be a small-world network.
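
    A sketch of the setup under stated assumptions (a single stored pattern, illustrative sizes): Hebbian couplings restricted to the edges of a Watts-Strogatz graph, whose rewiring probability p interpolates between a regular ring (p = 0) and a random network (p = 1).

```python
import random

def ws_adjacency(n, k, p, rng):
    """Watts-Strogatz ring lattice with k neighbors per node and rewiring prob p."""
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            b = (i + j) % n
            if rng.random() < p:                    # rewire this edge
                cand = rng.randrange(n)
                if cand != i and (min(i, cand), max(i, cand)) not in edges:
                    b = cand
            edges.add((min(i, b), max(i, b)))
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    return adj

def retrieval_overlap(pattern, p=0.1, k=6, flips=2, sweeps=20, seed=3):
    """Hopfield retrieval of one Hebbian pattern stored on the graph's edges."""
    rng = random.Random(seed)
    n = len(pattern)
    adj = ws_adjacency(n, k, p, rng)
    s = pattern[:]
    for i in rng.sample(range(n), flips):           # corrupt the cue
        s[i] = -s[i]
    for _ in range(sweeps):
        for i in range(n):
            h = sum(pattern[i] * pattern[j] * s[j] for j in adj[i])
            if h:
                s[i] = 1 if h > 0 else -1
    return sum(a == b for a, b in zip(s, pattern)) / n

pattern = [1 if i % 2 else -1 for i in range(20)]
overlap = retrieval_overlap(pattern)
```

    Sweeping `p` and the number of stored patterns in such a setup is how the dependence of capacity on randomness can be probed numerically.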

  1. The Laplacian spectrum of neural networks

    PubMed Central

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
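
    The quantity being compared across species is the eigenvalue spectrum of the normalized Laplacian, L = I - D^{-1/2} A D^{-1/2}. A minimal sketch on a toy graph (the adjacency matrix is illustrative, not one of the paper's connectomes):

```python
import numpy as np

def normalized_laplacian(adj):
    """L = I - D^{-1/2} A D^{-1/2} for an unweighted adjacency matrix."""
    a = np.asarray(adj, dtype=float)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return np.eye(len(a)) - d_inv_sqrt @ a @ d_inv_sqrt

# path graph on 4 nodes as a stand-in for a connectome
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
spectrum = np.linalg.eigvalsh(normalized_laplacian(adj))  # ascending eigenvalues
```

    The spectrum lies in [0, 2], its smallest eigenvalue is 0 for a connected graph, and its shape (not any single node's property) is the system-level fingerprint the paper examines.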

  2. Three dimensional living neural networks

    NASA Astrophysics Data System (ADS)

    Linnenberger, Anna; McLeod, Robert R.; Basta, Tamara; Stowell, Michael H. B.

    2015-08-01

    We investigate holographic optical tweezing combined with step-and-repeat maskless projection micro-stereolithography for fine control of 3D positioning of living cells within a 3D microstructured hydrogel grid. Samples were fabricated using three different cell lines: PC12, NT2/D1 and iPSC. PC12 cells are a rat cell line capable of differentiation into neuron-like cells. NT2/D1 cells are a human cell line that exhibits biochemical and developmental properties similar to those of an early embryo; when exposed to retinoic acid the cells differentiate into human neurons useful for studies of human neurological disease. Finally, induced pluripotent stem cells (iPSC) were utilized with the goal of future studies of neural networks fabricated from human iPSC-derived neurons. Cells are positioned in the monomer solution with holographic optical tweezers at 1064 nm and are then encapsulated by photopolymerization of polyethylene glycol (PEG) hydrogels formed by thiol-ene photo-click chemistry via projection of a 512x512 spatial light modulator (SLM) illuminated at 405 nm. Fabricated samples are incubated in differentiation media such that cells cease to divide and begin to form axons or axon-like structures. By controlling the position of the cells within the encapsulating hydrogel structure, the formation of the neural circuits is controlled. The samples fabricated with this system are a useful model for future studies of neural circuit formation, neurological disease, cellular communication, plasticity, and repair mechanisms.

  3. DC motor speed control using neural networks

    NASA Astrophysics Data System (ADS)

    Tai, Heng-Ming; Wang, Junli; Kaveh, Ashenayi

    1990-08-01

    This paper presents a scheme that uses a feedforward neural network for the learning and generalization of the dynamic characteristics for the starting of a dc motor. The goal is to build an intelligent motor starter with a versatility equivalent to that possessed by a human operator. To attain a fast and safe start from stall for a dc motor, a maximum armature current should be maintained during the starting period. This can be achieved by properly adjusting the armature voltage. The network is trained to learn the inverse dynamics of the motor starting characteristics and outputs a proper armature voltage. Simulation was performed to demonstrate the feasibility and effectiveness of the model. This study also addresses the network performance as a function of the number of hidden units and the number of training samples.
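
    The learning scheme can be sketched with a one-hidden-layer network trained by backpropagation on a stand-in smooth target; the true inverse dynamics of the motor are not reproduced here, and the sizes, rates and target function are all illustrative.

```python
import math
import random

def train(samples, hidden=8, lr=0.1, epochs=500, seed=0):
    """SGD backpropagation for a 1-input, 1-output, one-hidden-layer tanh net."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(hidden)]  # [weight, bias]
    w2 = [rng.uniform(-1, 1) for _ in range(hidden + 1)]                    # readout + bias

    def forward(x):
        h = [math.tanh(w[0] * x + w[1]) for w in w1]
        return h, sum(w2[i] * h[i] for i in range(hidden)) + w2[-1]

    def mse():
        return sum((forward(x)[1] - t) ** 2 for x, t in samples) / len(samples)

    before = mse()
    for _ in range(epochs):
        for x, t in samples:
            h, y = forward(x)
            err = y - t
            for i in range(hidden):
                grad = err * w2[i] * (1.0 - h[i] ** 2)   # backprop through tanh
                w1[i][0] -= lr * grad * x
                w1[i][1] -= lr * grad
                w2[i] -= lr * err * h[i]
            w2[-1] -= lr * err
    return before, mse()

# stand-in target for the inverse-dynamics mapping (input -> armature voltage)
samples = [(x / 10.0, math.sin(x / 10.0)) for x in range(-10, 11)]
before, after = train(samples)
```

    Varying `hidden` and the number of samples reproduces the kind of performance study the abstract mentions.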

  4. Dynamic Artificial Neural Networks with Affective Systems

    PubMed Central

    Schuman, Catherine D.; Birdwell, J. Douglas

    2013-01-01

    Artificial neural networks (ANNs) are processors that are trained to perform particular tasks. We couple a computational ANN with a simulated affective system in order to explore the interaction between the two. In particular, we design a simple affective system that adjusts the threshold values in the neurons of our ANN. The aim of this paper is to demonstrate that this simple affective system can control the firing rate of the ensemble of neurons in the ANN, as well as to explore the coupling between the affective system and the processes of long term potentiation (LTP) and long term depression (LTD), and the effect of the parameters of the affective system on its performance. We apply our networks with affective systems to a simple pole balancing example and briefly discuss the effect of affective systems on network performance. PMID:24303015

  5. Finite time stabilization of delayed neural networks.

    PubMed

    Wang, Leimin; Shen, Yi; Ding, Zhixia

    2015-10-01

    In this paper, the problem of finite time stabilization for a class of delayed neural networks (DNNs) is investigated. The general conditions on the feedback control law are provided to ensure the finite time stabilization of DNNs. Then some specific conditions are derived by designing two different controllers which include the delay-dependent and delay-independent ones. In addition, the upper bound of the settling time for stabilization is estimated. Under fixed control strength, discussions of the extremum of settling time functional are made and a switched controller is designed to optimize the settling time. Finally, numerical simulations are carried out to demonstrate the effectiveness of the obtained results. PMID:26264170
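
    A toy illustration of finite-time (as opposed to merely asymptotic) stabilization, on a scalar system rather than the paper's delayed networks: the feedback u = -k sign(x)|x|^a with 0 < a < 1 drives the state to zero by the analytic settling time x0^(1-a)/(k(1-a)), here 1.0. All parameters are illustrative.

```python
def settling_time(x0=1.0, k=2.0, a=0.5, dt=0.001, t_end=3.0, tol=1e-4):
    """Euler-simulate dx/dt = -k*sign(x)*|x|**a; return time |x| first drops below tol."""
    x, t = x0, 0.0
    while t < t_end:
        if abs(x) < tol:
            return t                       # settled within tolerance
        u = -k * (1.0 if x > 0 else -1.0) * abs(x) ** a
        x += dt * u                        # explicit Euler step
        t += dt
    return None                            # did not settle within t_end

t_settle = settling_time()
```

    A plain linear feedback u = -k*x only converges exponentially and would not reach the same tolerance by t = 3 with these constants; the fractional power is what makes the settling time finite and estimable, as in the paper's upper-bound analysis.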

  6. Artificial neural networks in neurosurgery.

    PubMed

    Azimi, Parisa; Mohammadi, Hasan Reza; Benzel, Edward C; Shahzadi, Sohrab; Azhari, Shirzad; Montazeri, Ali

    2015-03-01

    Artificial neural networks (ANNs) effectively analyze non-linear data sets. The aim was to review the published articles that focused on the application of ANNs as a tool for assisting clinical decision-making in neurosurgery. A literature review of all full publications in English biomedical journals (1993-2013) was undertaken. The strategy included a combination of the key words 'artificial neural networks', 'prognostic', 'brain', 'tumor tracking', 'head', 'tumor', 'spine', 'classification' and 'back pain' in the title and abstract of the manuscripts using the PubMed search engine. The major findings are summarized, with a focus on the application of ANNs for diagnostic and prognostic purposes. Finally, the future of ANNs in neurosurgery is explored. A total of 1093 citations were identified and screened. In all, 57 citations were found to be relevant. Of these, 50 articles were eligible for inclusion in this review. The synthesis of the data showed several applications of ANNs in neurosurgery, including: (1) diagnosis and assessment of disease progression in low back pain, brain tumours and primary epilepsy; (2) enhancing clinically relevant information extraction from radiographic images, intracranial pressure processing, low back pain and real-time tumour tracking; (3) outcome prediction in epilepsy, brain metastases, lumbar spinal stenosis, lumbar disc herniation, childhood hydrocephalus, trauma mortality, and the occurrence of symptomatic cerebral vasospasm in patients with aneurysmal subarachnoid haemorrhage; (4) use in biomechanical assessments of spinal disease. ANNs can be effectively employed for diagnosis, prognosis and outcome prediction in neurosurgery. PMID:24987050

  7. Computational acceleration using neural networks

    NASA Astrophysics Data System (ADS)

    Cadaret, Paul

    2008-04-01

    The author's recent participation in the Small Business Innovative Research (SBIR) program has resulted in the development of a patent pending technology that enables the construction of very large and fast artificial neural networks. Through the use of UNICON's CogniMax pattern recognition technology we believe that systems can be constructed that exploit the power of "exhaustive learning" for the benefit of certain types of complex and slow computational problems. This paper presents a theoretical study that describes one potentially beneficial application of exhaustive learning. It describes how a very large and fast Radial Basis Function (RBF) artificial Neural Network (NN) can be used to implement a useful computational system. Viewed another way, it presents an unusual method of transforming a complex, always-precise, and slow computational problem into a fuzzy pattern recognition problem where other methods are available to effectively improve computational performance. The method described recognizes that the need for computational precision in a problem domain sometimes varies throughout the domain's Feature Space (FS) and high precision may only be needed in limited areas. These observations can then be exploited to the benefit of overall computational performance. Addressing computational reliability, we describe how existing always-precise computational methods can be used to reliably train the NN to perform the computational interpolation function. The author recognizes that the method described is not applicable to every situation, but over the last 8 months we have been surprised at how often this method can be applied to enable interesting and effective solutions.

  8. Marginalization in Random Nonlinear Neural Networks

    NASA Astrophysics Data System (ADS)

    Vasudeva Raju, Rajkumar; Pitkow, Xaq

    2015-03-01

    Computations involved in tasks like causal reasoning in the brain require a type of probabilistic inference known as marginalization. Marginalization corresponds to averaging over irrelevant variables to obtain the probability of the variables of interest. This is a fundamental operation that arises whenever input stimuli depend on several variables, but only some are task-relevant. Animals often exhibit behavior consistent with marginalizing over some variables, but the neural substrate of this computation is unknown. It has been previously shown (Beck et al. 2011) that marginalization can be performed optimally by a deterministic nonlinear network that implements a quadratic interaction of neural activity with divisive normalization. We show that a simpler network can perform essentially the same computation. These Random Nonlinear Networks (RNN) are feedforward networks with one hidden layer, sigmoidal activation functions, and normally-distributed weights connecting the input and hidden layers. We train the output weights connecting the hidden units to an output population, such that the output model accurately represents a desired marginal probability distribution without significant information loss compared to optimal marginalization. Simulations for the case of linear coordinate transformations show that the RNN model has good marginalization performance, except for highly uncertain inputs that have low amplitude population responses. Behavioral experiments, based on these results, could then be used to identify if this model does indeed explain how the brain performs marginalization.
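
    A sketch of the random-nonlinear-network construction under stated assumptions: one hidden layer with fixed, normally distributed input weights and sigmoidal units, where only the output weights are trained (here by least squares). The target function is an illustrative stand-in for a marginal readout, not the paper's population-coding task.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 200).reshape(-1, 1)        # illustrative input variable
target = np.tanh(2 * x).ravel()                   # stand-in for the desired readout

w_in = rng.normal(size=(1, 60))                   # random, untrained input weights
b = rng.normal(size=60)
hidden = 1.0 / (1.0 + np.exp(-(x @ w_in + b)))    # sigmoidal hidden layer

# train only the output weights, by ordinary least squares
w_out, *_ = np.linalg.lstsq(hidden, target, rcond=None)

pred = hidden @ w_out
mse = float(np.mean((pred - target) ** 2))
```

    The point is that a fixed random nonlinear expansion plus a trained linear readout can already represent the desired function accurately, echoing the abstract's claim that a simpler network suffices.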

  9. A new formulation for feedforward neural networks.

    PubMed

    Razavi, Saman; Tolson, Bryan A

    2011-10-01

    Feedforward neural networks are among the most commonly used function approximation techniques and have been applied to a wide variety of problems arising from various disciplines. However, neural networks are black-box models having multiple challenges/difficulties associated with training and generalization. This paper initially looks into the internal behavior of neural networks and develops a detailed interpretation of the neural network functional geometry. Based on this geometrical interpretation, a new set of variables describing neural networks is proposed as a more effective and geometrically interpretable alternative to the traditional set of network weights and biases. Then, this paper develops a new formulation for neural networks with respect to the newly defined variables; this reformulated neural network (ReNN) is equivalent to the common feedforward neural network but has a less complex error response surface. To demonstrate the learning ability of ReNN, two training methods involving a derivative-based (a variation of backpropagation) and a derivative-free optimization algorithm are employed. Moreover, a new measure of regularization on the basis of the developed geometrical interpretation is proposed to evaluate and improve the generalization ability of neural networks. The value of the proposed geometrical interpretation, the ReNN approach, and the new regularization measure are demonstrated across multiple test problems. Results show that ReNN can be trained more effectively and efficiently compared to common neural networks, and the proposed regularization measure is an effective indicator of how a network will perform in terms of generalization. PMID:21859600

  10. Apparent damage accumulation in cancellous bone using neural networks.

    PubMed

    Hambli, Ridha

    2011-08-01

    In this paper, a neural network model is developed to simulate the accumulation of apparent fatigue damage of 3D trabecular bone architecture at a given bone site during cyclic loading. The method is based on five steps: (i) performing suitable numerical experiments to simulate fatigue accumulation in 3D micro-CT trabecular bone samples taken from the proximal femur for different combinations of loading conditions; (ii) averaging the sample outputs in terms of apparent damage at the whole-specimen level based on local tissue damage; (iii) preparing a proper set of corresponding input-output data to train the network to identify apparent damage evolution; (iv) training the neural network based on the results of step (iii); (v) applying the neural network as a tool to rapidly estimate the apparent damage evolution at a given bone site. The proposed NN model can be incorporated into finite element codes to perform fatigue damage simulation at the continuum level, including some morphological factors and some bone material properties. The proposed neural network based multiscale approach is the first model, to the author's knowledge, that incorporates both finite element analysis and neural network computation to rapidly simulate multilevel fatigue of bone. This is beneficial for developing enhanced finite element models to investigate the role of damage accumulation in bone damage repair during remodelling. PMID:21616468

  11. Drift chamber tracking with neural networks

    SciTech Connect

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.

  12. Extrapolation limitations of multilayer feedforward neural networks

    NASA Technical Reports Server (NTRS)

    Haley, Pamela J.; Soloway, Donald

    1992-01-01

    The limitations of backpropagation used as a function extrapolator were investigated. Four common functions were used to investigate the network's extrapolation capability. The purpose of the experiment was to determine whether neural networks are capable of extrapolation and, if so, to determine the range for which networks can extrapolate. The authors show that neural networks cannot extrapolate and offer an explanation to support this result.

  13. Neural Network Modeling of Developmental Effects in Discrimination Shifts.

    ERIC Educational Resources Information Center

    Sirois, Sylvain; Shultz, Thomas R.

    1998-01-01

    Presents a theoretical account of human shift learning with the use of neural network tools. Details simulations using the cascade-correlation algorithm, which show that networks can capture the regularities of the discrimination shift literature better than existing psychological theories. Suggests that human developmental differences in shift…

  14. From Classical Neural Networks to Quantum Neural Networks

    NASA Astrophysics Data System (ADS)

    Tirozzi, B.

    2013-09-01

    First I give a brief description of the classical Hopfield model, introducing the fundamental concepts of patterns, retrieval, pattern recognition, neural dynamics and capacity, and describe the fundamental results obtained in this field by Amit, Gutfreund and Sompolinsky,1 using the non-rigorous replica method, and the rigorous version given by Pastur, Shcherbina and Tirozzi2 using the cavity method. Then I give a formulation of the theory of Quantum Neural Networks (QNN) in terms of the XY model with Hebbian interaction. The problem of retrieval and storage is discussed. The retrieval states are the states of minimum energy. I apply the estimates found by Lieb,3 which give lower and upper bounds on the free energy and the expectation of the observables of the quantum model. I also discuss some experiments and the search for the ground state using Monte Carlo dynamics applied to the equivalent classical two-dimensional Ising model constructed by Suzuki et al.6 The paper ends with a list of open problems.

  15. Neural Network Algorithm for Particle Loading

    SciTech Connect

    J. L. V. Lewandowski

    2003-04-25

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.

  16. Adaptive Neurons For Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In improved mathematical model of neural-network processor, temperature of neurons (in addition to connection strengths, also called weights, of synapses) varied during supervised-learning phase of operation according to mathematical formalism and not heuristic rule. Evidence that biological neural networks also process information at neuronal level.

  17. Radiation Behavior of Analog Neural Network Chip

    NASA Technical Reports Server (NTRS)

    Langenbacher, H.; Zee, F.; Daud, T.; Thakoor, A.

    1996-01-01

    A neural network experiment was conducted for the Space Technology Research Vehicle (STRV-1b), launched in June 1994. Identical sets of analog feed-forward neural network chips were used to study and compare the effects of space and ground radiation on the chips. Three failure mechanisms are noted.

  18. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. PMID:20713305

  19. Creativity in design and artificial neural networks

    SciTech Connect

    Neocleous, C.C.; Esat, I.I.; Schizas, C.N.

    1996-12-31

    The creativity phase is identified as an integral part of the design phase. The characteristics of creative persons which are relevant to designing artificial neural networks manifesting aspects of creativity, are identified. Based on these identifications, a general framework of artificial neural network characteristics to implement such a goal are proposed.

  20. Neural network based architectures for aerospace applications

    NASA Technical Reports Server (NTRS)

    Ricart, Richard

    1987-01-01

    A brief history of the field of neural networks research is given and some simple concepts are described. In addition, some neural network based avionics research and development programs are reviewed. The need for the United States Air Force and NASA to assume a leadership role in supporting this technology is stressed.

  1. Applications of Neural Networks in Finance.

    ERIC Educational Resources Information Center

    Crockett, Henry; Morrison, Ronald

    1994-01-01

    Discusses research with neural networks in the area of finance. Highlights include bond pricing, theoretical exposition of primary bond pricing, bond pricing regression model, and an example that created networks with corporate bonds and NeuralWare Neuralworks Professional H software using the back-propagation technique. (LRW)

  2. A Survey of Neural Network Publications.

    ERIC Educational Resources Information Center

    Vijayaraman, Bindiganavale S.; Osyk, Barbara

    This paper is a survey of publications on artificial neural networks published in business journals for the period ending July 1996. Its purpose is to identify and analyze trends in neural network research during that period. This paper shows which topics have been heavily researched, when these topics were researched, and how that research has…

  3. Introduction to Concepts in Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.
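
    The basic unit the introduction describes, a single artificial neuron, computes a weighted sum of its inputs plus a bias and passes it through an activation function. A minimal sketch (the weights and inputs are arbitrary illustrative numbers):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum + bias through a sigmoid activation."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias   # net input
    return 1.0 / (1.0 + math.exp(-s))                        # sigmoid activation

out = neuron([0.5, -1.0], [0.8, 0.2], 0.1)   # net input = 0.4 - 0.2 + 0.1 = 0.3
```

    Supervised learning adjusts the weights and bias from labeled examples; unsupervised learning adapts them from the input statistics alone, as the taxonomy in the introduction lays out.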

  4. Artificial neural network model for material characterization by indentation

    NASA Astrophysics Data System (ADS)

    Tho, K. K.; Swaddiwudhipong, S.; Liu, Z. S.; Hua, J.

    2004-09-01

    Analytical methods to interpret the indentation load-displacement curves are difficult to formulate and solve due to material and geometric nonlinearities as well as complex contact interactions. In this study, large strain-large deformation finite element analyses were carried out to simulate indentation experiments. An artificial neural network model was constructed for the interpretation of indentation load-displacement curves. The data from finite element analyses were used to train and validate the artificial neural network model. The artificial neural network model was able to accurately determine the material properties when presented with the load-displacement curves that were not used in the training process. The proposed artificial neural network model is robust and directly relates the characteristics of the indentation load-displacement curve to the elasto-plastic material properties.

  5. Blur identification by multilayer neural network based on multivalued neurons.

    PubMed

    Aizenberg, Igor; Paliy, Dmitriy V; Zurada, Jacek M; Astola, Jaakko T

    2008-05-01

    A multilayer neural network based on multivalued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific features. Its backpropagation learning algorithm is derivative-free. The functionality of MLMVN is superior to that of traditional feedforward neural networks and of a variety of kernel-based networks. Its greater flexibility and faster adaptation to the target mapping make it possible to model complex problems using simpler networks. In this paper, the MLMVN is used to identify both the type and the parameters of the point spread function, whose precise identification is of crucial importance for image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones. PMID:18467216
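The core of a multivalued neuron is a complex weighted sum followed by a discrete activation onto the k-th roots of unity. The sketch below is a minimal illustration of that activation, not the authors' implementation, and it omits the derivative-free learning rule the abstract describes; the function names are assumptions.

```python
import cmath
import math

def mvn_activation(z, k):
    """Discrete activation of a multivalued neuron: map the complex
    weighted sum z onto one of k equal sectors of the unit circle and
    return the k-th root of unity labeling that sector."""
    angle = cmath.phase(z) % (2 * math.pi)   # argument in [0, 2*pi)
    j = int(k * angle / (2 * math.pi))       # sector index 0..k-1
    return cmath.exp(2j * math.pi * j / k)

def mvn_output(weights, inputs, k):
    """Complex weighted sum followed by the discrete activation."""
    z = sum(w * x for w, x in zip(weights, inputs))
    return mvn_activation(z, k)
```

Because the output always lies on the unit circle, learning can adjust weights by rotating the weighted sum toward the desired sector, which is what makes a derivative-free rule possible.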

  6. Comparative evaluation of numerical model and artificial neural network for simulating groundwater flow in Kathajodi-Surua Inter-basin of Odisha, India

    NASA Astrophysics Data System (ADS)

    Mohanty, S.; Jha, Madan K.; Kumar, Ashwani; Panda, D. K.

    2013-07-01

    In view of worldwide concern for the sustainability of groundwater resources, basin-wide modeling of groundwater flow is essential for the efficient planning and management of groundwater resources in a groundwater basin. The objective of the present study is to evaluate the performance of the finite difference-based numerical model MODFLOW and the artificial neural network (ANN) model developed in this study in simulating groundwater levels in an alluvial aquifer system. Calibration of MODFLOW was done using weekly groundwater level data covering 2 years and 4 months (February 2004 to May 2006), and validation of the model was done using 1 year of groundwater level data (June 2006 to May 2007). Calibration was performed by a combination of the trial-and-error method and the automated calibration code PEST, with a mean RMSE (root mean squared error) of 0.62 m and a mean NSE (Nash-Sutcliffe efficiency) of 0.915. Groundwater levels at 18 observation wells were simulated for the validation period. Moreover, artificial neural network models were developed to predict groundwater levels in the 18 observation wells in the basin one time step (i.e., one week) ahead. The inputs to the ANN model consisted of weekly rainfall, evaporation, river stage, water level in the drain, pumping rate of the tubewells and groundwater levels in these wells at the previous time step. The time periods used in MODFLOW were also used for the training and testing of the developed ANN models. Out of the 174 data sets, 122 were used for training and 52 for testing. The groundwater levels simulated by MODFLOW and the ANN model were compared with the observed groundwater levels. It was found that the ANN model provided better prediction of groundwater levels in the study area than the numerical model for short time-horizon predictions.

  7. Phase diagram of spiking neural networks

    PubMed Central

    Seyed-allaei, Hamed

    2015-01-01

    In computer simulations of spiking neural networks, it is often assumed that any two neurons of the network are connected with a probability of 2%, and that 20% of the neurons are inhibitory and 80% excitatory. These common values are based on experiments, observations, and trial and error, but here I take a different perspective. Inspired by evolution, I systematically simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable. I stimulate networks with pulses and then measure their dynamic range, the dominant frequency of population activity, the total duration of activity, the maximum population rate and the time at which that maximum occurs. The results are organized in a phase diagram. This phase diagram gives an insight into the space of parameters: excitatory-to-inhibitory ratio, sparseness of connections and synaptic weights. It can be used to decide the parameters of a model. The phase diagrams show that networks configured according to the common values have a good dynamic range in response to an impulse, that this dynamic range is robust with respect to synaptic weights, and that for some synaptic weights they oscillate in α or β frequencies, independent of external stimuli. PMID:25788885
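The "common values" the abstract starts from (2% connection probability, 20% inhibitory neurons) can be sketched as a random connectivity generator. This is an illustrative reconstruction, not the paper's simulator; the weight values and function name are assumptions.

```python
import random

def build_network(n, p_connect=0.02, inhibitory_fraction=0.2,
                  w_exc=1.0, w_inh=-4.0, seed=0):
    """Random directed connectivity in the spirit of the abstract's
    common values: each ordered neuron pair is connected with probability
    p_connect, and a fixed fraction of neurons is inhibitory (all their
    outgoing weights are negative). Weight magnitudes are illustrative."""
    rng = random.Random(seed)
    n_inh = int(n * inhibitory_fraction)
    is_inhibitory = [i < n_inh for i in range(n)]  # first n_inh inhibit
    weights = {}
    for pre in range(n):
        for post in range(n):
            if pre != post and rng.random() < p_connect:
                weights[(pre, post)] = w_inh if is_inhibitory[pre] else w_exc
    return weights, is_inhibitory
```

Sweeping `p_connect`, `inhibitory_fraction` and the weight magnitudes over a grid of such networks is exactly the kind of parameter-space exploration the phase diagrams summarize.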

  8. Enhancing neural-network performance via assortativity.

    PubMed

    de Franciscis, Sebastiano; Johnson, Samuel; Torres, Joaquín J

    2011-03-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations--assortativity--on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information. PMID:21517565
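Assortativity, as used in this abstract, is the Pearson correlation between the degrees at the two ends of each edge. A minimal pure-Python version for an undirected edge list (an illustration, not the authors' analytical method) is:

```python
import math

def degree_assortativity(edges):
    """Pearson correlation between the degrees at the two endpoints of
    each undirected edge (each edge counted in both directions), i.e.
    the degree-degree correlation called assortativity."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)
```

A star graph (hubs wired only to low-degree leaves) is maximally disassortative (coefficient -1), while a graph whose high-degree nodes connect to each other scores positively; the paper's result is that positive values improve robustness to noise.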

  9. Enhancing neural-network performance via assortativity

    SciTech Connect

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-03-15

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.

  10. Neural network and letter recognition

    SciTech Connect

    Lee, Hue Yeon.

    1989-01-01

    Neural net architectures and learning algorithms that recognize 36 hand-written alphanumeric characters are studied. Thin-line input patterns written in a 32 x 32 binary array are used. The system comprises two major components, viz. a preprocessing unit and a recognition unit. The preprocessing unit in turn consists of three layers of neurons: the U-layer, the V-layer, and the C-layer. The function of the U-layer is to extract local features by template matching. The correlation between the detected local features is considered. Through correlating neurons in a plane with their neighboring neurons, the V-layer thickens the on-cells, or lines that are groups of on-cells, of the previous layer. These two correlations yield some deformation tolerance and some of the rotational tolerance of the system. The C-layer then compresses the data through the Gabor transform. Pattern-dependent choice of the centers and wavelengths of the Gabor filters is the source of the shift and scale tolerance of the system. Three different learning schemes were investigated in the recognition unit, namely error back-propagation learning with hidden units, simple perceptron learning, and competitive learning. Their performances were analyzed and compared. Since the network sometimes fails to distinguish between two letters that are inherently similar, additional ambiguity-resolving neural nets are introduced on top of the main neural net. The two-dimensional Fourier transform is used as the preprocessing and a perceptron as the recognition unit of the ambiguity resolver. One hundred different persons' handwriting sets were collected. Some of these are used as training sets and the remainder as test sets.

  11. The use of artificial neural networks in experimental data acquisition and aerodynamic design

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J., Jr.

    1991-01-01

    It is proposed that an artificial neural network be used to construct an intelligent data acquisition system. The artificial neural networks (ANN) model has a potential for replacing traditional procedures as well as for use in computational fluid dynamics validation. Potential advantages of the ANN model are listed. As a proof of concept, the author modeled a NACA 0012 airfoil at specific conditions, using the neural network simulator NETS, developed by James Baffes of the NASA Johnson Space Center. The neural network predictions were compared to the actual data. It is concluded that artificial neural networks can provide an elegant and valuable class of mathematical tools for data analysis.

  12. Neural network model for extracting optic flow.

    PubMed

    Tohyama, Kazuya; Fukushima, Kunihiko

    2005-01-01

    When we travel through an environment, an optic flow is generated on the retina. Neurons in area MST of macaque monkeys are reported to have very large receptive fields and to analyze optic flows on the retina. Many MST-cells respond selectively to rotation, expansion/contraction and planar motion of the optic flow. Many of them show position-invariant responses to optic flow, that is, their responses are maintained when the center of the optic flow shifts. It has long been suggested mathematically that vector-field calculus is useful for analyzing the optic flow field; biologically plausible neural network models based on this idea, however, have rarely been proposed. This paper, based on the vector-field hypothesis, proposes a neural network model for extracting optic flows. Our model consists of hierarchically connected layers: retina, V1, MT and MST. V1-cells measure local velocity. There are two kinds of MT-cells: one extracts absolute velocities, the other extracts relative velocities through antagonistic inputs. Collecting signals from MT-cells, MST-cells respond selectively to various types of optic flows. We demonstrate through a computer simulation that this simple network is sufficient to explain a variety of results from neurophysiological experiments. PMID:16112546

  13. Sunspot prediction using neural networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Baffes, Paul

    1990-01-01

    The earliest known systematic observation of sunspot activity was made by the Chinese in 1382, during the Ming Dynasty (1368 to 1644), when spots on the sun were noticed by looking at it through thick forest-fire smoke. Not until after the 18th century did sunspot levels become more than a source of wonderment and curiosity. Since 1834, reliable sunspot data have been collected by the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Naval Observatory. Recently, considerable effort has been devoted to the study of the effects of sunspots on the ecosystem and the space environment. The efforts of the Artificial Intelligence Section of the Mission Planning and Analysis Division of the Johnson Space Center toward the prediction of sunspot activity using neural network technologies are described.

  14. Neural networks: a biased overview

    SciTech Connect

    Domany, E.

    1988-06-01

    An overview of recent activity in the field of neural networks is presented. The long-range aim of this research is to understand how the brain works. First some of the problems are stated and terminology defined; then an attempt is made to explain why physicists are drawn to the field, and their main potential contribution. In particular, in recent years some interesting models have been introduced by physicists. A small subset of these models is described, with particular emphasis on those that are analytically soluble. Finally a brief review of the history and recent developments of single- and multilayer perceptrons is given, bringing the situation up to date regarding the central immediate problem of the field: search for a learning algorithm that has an associated convergence theorem.

  15. Wavelet differential neural network observer.

    PubMed

    Chairez, Isaac

    2009-09-01

    State estimation for uncertain systems affected by external noise is an important problem in control theory. This paper deals with a state observation problem in which the dynamic model of a plant contains uncertainties or is completely unknown. A differential neural network (NN) approach is applied in this uninformative situation, but with activation functions described by wavelets. A new learning law, containing an adaptive adjustment rate, is suggested to imply the stability condition for the free parameters of the observer. Nominal weights are adjusted during a preliminary training process using the least mean square (LMS) method. Lyapunov theory is used to obtain the upper bounds for the weight dynamics as well as for the mean squared estimation error. Two numerical examples illustrate this approach: first, a nonlinear electric system governed by Chua's equation, and second, the Lorenz oscillator. Both systems are assumed to be affected by external perturbations and their parameters are unknown. PMID:19674951

  16. Introduction to artificial neural networks.

    PubMed

    Grossi, Enzo; Buscema, Massimo

    2007-12-01

    The coupling of computer science with theoretical bases such as nonlinear dynamics and chaos theory allows the creation of 'intelligent' agents, such as artificial neural networks (ANNs), able to adapt themselves dynamically to problems of high complexity. ANNs are able to reproduce the dynamic interaction of multiple factors simultaneously, allowing the study of complexity; they can also draw conclusions on an individual basis rather than as average trends. These tools can offer specific advantages with respect to classical statistical techniques. This article is designed to acquaint gastroenterologists with concepts and paradigms related to ANNs. The family of ANNs, when appropriately selected and used, permits the maximization of what can be derived from available data and from complex, dynamic, and multidimensional phenomena, which are often poorly predictable in the traditional 'cause and effect' philosophy. PMID:17998827

  17. Simulative and experimental investigation on stator winding turn and unbalanced supply voltage fault diagnosis in induction motors using Artificial Neural Networks.

    PubMed

    Lashkari, Negin; Poshtan, Javad; Azgomi, Hamid Fekri

    2015-11-01

    The three-phase shift between line current and phase voltage of induction motors can be used as an efficient fault indicator to detect and locate inter-turn stator short-circuit (ITSC) faults. However, unbalanced supply voltage is one of the contributing factors that inevitably affect the stator currents and therefore the three-phase shift. Thus, it is necessary to propose a method that can identify whether the unbalance of the three currents is caused by an ITSC fault or by a supply voltage fault. This paper presents a feedforward multilayer-perceptron Neural Network (NN), trained by back propagation, based on monitoring the negative-sequence voltage and the three-phase shift. The data required for training and testing the NN are generated using a simulated model of the stator. Experimental results are presented to verify the superior accuracy of the proposed method. PMID:26412499
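The phase shift between a sampled current and voltage waveform, used here as the fault indicator, can be estimated by correlating each signal with a complex exponential at the line frequency. This is a generic signal-processing sketch under assumed names and sampling setup, not the paper's measurement chain.

```python
import cmath
import math

def phase_shift(signal_a, signal_b, freq, sample_rate):
    """Estimate the phase of each sampled sinusoid at frequency `freq`
    by correlating it with a complex exponential, then return the phase
    difference (a - b) in radians, wrapped to [-pi, pi)."""
    def phase_of(sig):
        acc = sum(s * cmath.exp(-2j * math.pi * freq * i / sample_rate)
                  for i, s in enumerate(sig))
        return cmath.phase(acc)
    d = phase_of(signal_a) - phase_of(signal_b)
    return (d + math.pi) % (2 * math.pi) - math.pi

```

Feeding such per-phase shifts, together with the negative-sequence voltage, into a small multilayer perceptron is the structure the abstract describes.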

  18. Facial expression recognition using constructive neural networks

    NASA Astrophysics Data System (ADS)

    Ma, Liying; Khorasani, Khashayar

    2001-08-01

    The computer-based recognition of facial expressions has been an active area of research for a long time. The ultimate goal is to realize intelligent and transparent communication between human beings and machines. Neural network (NN) based recognition methods have been found to be particularly promising, since NNs are capable of implementing the mapping from the feature space of face images to the facial expression space. However, finding a proper network size has always been a frustrating and time-consuming experience for NN developers. In this paper, we propose to use constructive one-hidden-layer feedforward neural networks (OHL-FNNs) to overcome this problem. The constructive OHL-FNN obtains, in a systematic way, a proper network size as required by the complexity of the problem being considered. Furthermore, the computational cost involved in network training can be considerably reduced when compared to standard back-propagation (BP) based FNNs. In our proposed technique, the 2-dimensional discrete cosine transform (2-D DCT) is applied over the entire difference face image to extract relevant features for recognition. The lower-frequency 2-D DCT coefficients obtained are then used to train a constructive OHL-FNN. An input-side pruning technique previously proposed by the authors is also incorporated into the constructive learning process to reduce the network size without sacrificing the performance of the resulting network. The proposed technique is applied to a database consisting of images of 60 men, each having 5 facial expression images (neutral, smile, anger, sadness, and surprise). Images of 40 men are used for network training, and the remaining images are used for generalization and
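The 2-D DCT feature extraction step described above can be sketched in pure Python: compute the transform of an image block and keep only the lowest-frequency coefficients as the classifier's input vector. This is a naive O(n^4) textbook DCT-II for illustration (real systems use a fast transform), and the helper names are assumptions.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an n x n block, orthonormal scaling."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

def low_freq_features(block, k):
    """Keep the k x k lowest-frequency DCT coefficients, flattened
    row-wise, as a compact feature vector for the network."""
    c = dct2(block)
    return [c[u][v] for u in range(k) for v in range(k)]
```

Truncating to the low-frequency corner is what makes the feature vector small enough for a compact constructive network while retaining the coarse structure of the difference image.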

  19. Neural networks for damage identification

    SciTech Connect

    Paez, T.L.; Klenke, S.E.

    1997-11-01

    Efforts to optimize the design of mechanical systems for preestablished use environments and to extend the durations of use cycles establish a need for in-service health monitoring. Numerous studies have proposed measures of structural response for the identification of structural damage, but few have suggested systematic techniques to guide the decision as to whether or not damage has occurred based on real data. Such techniques are necessary because in field applications the environments in which systems operate and the measurements that characterize system behavior are random. This paper investigates the use of artificial neural networks (ANNs) to identify damage in mechanical systems. Two probabilistic neural networks (PNNs) are developed and used to judge whether or not damage has occurred in a specific mechanical system, based on experimental measurements. The first PNN is a classical type that casts Bayesian decision analysis into an ANN framework; it uses exemplars measured from the undamaged and damaged system to establish whether system response measurements of unknown origin come from the former class (undamaged) or the latter class (damaged). The second PNN establishes the character of the undamaged system in terms of a kernel density estimator of measures of system response; when presented with system response measures of unknown origin, it makes a probabilistic judgment as to whether or not the data come from the undamaged population. The physical system used to carry out the experiments is an aerospace system component, and the environment used to excite the system is a stationary random vibration. The results of the damage identification experiments are presented along with conclusions regarding the effectiveness of the approaches.
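The first PNN described above scores each class by a kernel (Parzen-window) density estimate over its exemplars and picks the most likely class. The sketch below is a minimal version of that idea under assumed names and an assumed Gaussian kernel width; it is not the paper's network.

```python
import math

def pnn_classify(x, exemplars_by_class, sigma=1.0):
    """Minimal probabilistic-neural-network sketch: score each class by
    a Parzen (Gaussian-kernel) density estimate over its exemplars and
    return the class with the highest score."""
    def kernel(a, b):
        d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return math.exp(-d2 / (2 * sigma ** 2))
    scores = {
        label: sum(kernel(x, e) for e in exemplars) / len(exemplars)
        for label, exemplars in exemplars_by_class.items()
    }
    return max(scores, key=scores.get)
```

The second PNN in the abstract differs only in using a density estimate of the undamaged class alone and thresholding it, rather than comparing two class scores.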

  20. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2003-12-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NO{sub x} formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. 
The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate

  1. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2004-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. 
The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate around

  2. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2004-03-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing co-funding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. 
The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate around

  3. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2002-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NO{sub x} formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, online, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate

  4. Tuning of power system stabilizers using an artificial neural network

    SciTech Connect

    Hsu, Y.Y.; Chen, C.R.

    1991-12-01

    This paper reports on the tuning of power system stabilizers (PSS) using an artificial neural network (ANN). To obtain good damping characteristics over a wide range of operating conditions, it is desirable to adapt the PSS parameters in real time based on generator loading conditions. To do this, a pair of on-line measurements, i.e. generator real power output (P) and power factor (PF), which are representative of the generator operating condition, are chosen as the input signals to the neural net. The outputs of the neural net are the desired PSS parameters. The neural net, once trained on a set of input-output patterns in the training set, can yield proper PSS parameters under any generator loading condition. Digital simulations of a synchronous machine subject to a major disturbance, a three-phase fault, under different operating conditions are performed to demonstrate the effectiveness of the proposed neural network.

  5. VLSI Cells Placement Using the Neural Networks

    SciTech Connect

    Azizi, Hacene; Zouaoui, Lamri; Mokhnache, Salah

    2008-06-12

    Artificial neural networks have been studied for several years. Their effectiveness gives reason to expect high performance. The privileged fields of these techniques remain recognition and classification. Various optimization applications have also been studied from the perspective of artificial neural networks, which make it possible to apply distributed heuristic algorithms. In this article, a solution to the problem of placing the various cells during the realization of an integrated circuit is proposed using the Kohonen network.
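A Kohonen self-organizing map underlies the placement approach above: grid nodes hold weight vectors, and for each sample the best-matching node and its grid neighbours move toward it, with learning rate and neighbourhood radius decaying over time. The sketch below is a generic SOM under illustrative parameters, not the article's placement algorithm; in the placement setting, cells would be the samples and grid positions the placement sites.

```python
import math
import random

def train_som(data, grid_w, grid_h, steps=2000, seed=0):
    """Train a Kohonen self-organizing map on a list of equal-length
    tuples; returns the grid's weight vectors (row-major order)."""
    rng = random.Random(seed)
    dim = len(data[0])
    nodes = [[rng.random() for _ in range(dim)]
             for _ in range(grid_w * grid_h)]
    for t in range(steps):
        frac = t / steps
        lr = 0.5 * (1 - frac)                          # decaying rate
        radius = max(grid_w, grid_h) / 2 * (1 - frac) + 0.5
        x = rng.choice(data)
        # best-matching unit: node closest to the sample
        bmu = min(range(len(nodes)),
                  key=lambda i: sum((nodes[i][d] - x[d]) ** 2
                                    for d in range(dim)))
        bx, by = bmu % grid_w, bmu // grid_w
        for i, w in enumerate(nodes):
            gx, gy = i % grid_w, i // grid_w
            dist2 = (gx - bx) ** 2 + (gy - by) ** 2
            h = math.exp(-dist2 / (2 * radius ** 2))   # neighbourhood
            for d in range(dim):
                w[d] += lr * h * (x[d] - w[d])
    return nodes
```

The topology-preserving property of the map (nearby grid nodes end up with nearby weight vectors) is what makes it attractive for placement, since strongly connected cells land on neighbouring sites.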

  6. Neural networks and orbit control in accelerators

    SciTech Connect

    Bozoki, E.; Friedman, A.

    1994-07-01

    An overview of the architecture, workings and training of neural networks is given. We stress the aspects which are important for the use of neural networks for orbit control in accelerators and storage rings, especially their ability to cope with the nonlinear behavior of the orbit response to 'kicks' and with the slow drift in the orbit response during long-term operation. Results obtained for the two NSLS storage rings with several network architectures, and with various training methods for each architecture, are given.

  7. Stochastic cellular automata model of neural networks.

    PubMed

    Goltsev, A V; de Abreu, F V; Dorogovtsev, S N; Mendes, J F F

    2010-06-01

    We propose a stochastic dynamical model of noisy neural networks with complex architectures and discuss activation of neural networks by a stimulus, pacemakers, and spontaneous activity. This model has a complex phase diagram with self-organized active neural states, hybrid phase transitions, and a rich array of behaviors. We show that if spontaneous activity (noise) reaches a threshold level then global neural oscillations emerge. Stochastic resonance is a precursor of this dynamical phase transition. These oscillations are an intrinsic property of even small groups of 50 neurons. PMID:20866454
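The model class described above can be illustrated with a minimal discrete-time simulation: a neuron fires if enough of its neighbours fired on the previous step, or spontaneously with a small noise probability. This is a toy in the spirit of the abstract, with all parameters illustrative, not the authors' model.

```python
import random

def simulate(n=50, steps=200, threshold=3, p_noise=0.1, p_edge=0.2, seed=0):
    """Toy stochastic network: each neuron fires if at least `threshold`
    neighbours fired on the previous step, or spontaneously with
    probability p_noise. Returns the active fraction per step."""
    rng = random.Random(seed)
    neigh = [[j for j in range(n) if j != i and rng.random() < p_edge]
             for i in range(n)]
    active = [rng.random() < p_noise for _ in range(n)]
    history = []
    for _ in range(steps):
        nxt = []
        for i in range(n):
            drive = sum(active[j] for j in neigh[i])
            nxt.append(drive >= threshold or rng.random() < p_noise)
        active = nxt
        history.append(sum(active) / n)
    return history
```

Sweeping `p_noise` in such a simulation is how one would look for the threshold noise level at which sustained global activity or oscillations emerge.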

  8. Genetic-algorithm-based tri-state neural networks

    NASA Astrophysics Data System (ADS)

    Uang, Chii-Maw; Chen, Wen-Gong; Horng, Ji-Bin

    2002-09-01

    A new method, using genetic algorithms, for constructing a tri-state neural network is presented. The global searching features of genetic algorithms are adopted to help find the interconnection weight matrix of a bipolar neural network easily. The construction method is based on biological nervous systems, which evolve the parameters encoded in genes. Taking advantage of conventional (binary) genetic algorithms, a two-level chromosome structure is proposed for training the tri-state neural network. A Matlab program is developed for simulating the network's performance. The results show that the proposed genetic-algorithm method not only constructs the interconnection weight matrix accurately but also yields better network performance.
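The idea of searching for a tri-state interconnection matrix with a genetic algorithm can be sketched as follows: evolve matrices with entries in {-1, 0, +1} so that given bipolar patterns become fixed points of a Hopfield-style update. This is a simplified toy (elitism plus tournament selection and single-entry mutation), not the paper's two-level chromosome encoding; all parameters are illustrative.

```python
import random

def evolve_weights(patterns, pop_size=60, generations=300, seed=1):
    """Toy GA search for a tri-state matrix W (entries in {-1, 0, +1})
    making each bipolar pattern p satisfy sign(W p) = p componentwise."""
    rng = random.Random(seed)
    n = len(patterns[0])

    def sign(v):
        return 1 if v >= 0 else -1

    def fitness(w):
        # count pattern components reproduced by one synchronous update
        good = 0
        for p in patterns:
            for i in range(n):
                s = sum(w[i][j] * p[j] for j in range(n) if j != i)
                good += (sign(s) == p[i])
        return good

    def random_matrix():
        return [[rng.choice((-1, 0, 1)) for _ in range(n)] for _ in range(n)]

    pop = [random_matrix() for _ in range(pop_size)]
    best = max(pop, key=fitness)
    target = len(patterns) * n
    for _ in range(generations):
        new_pop = [best]                      # elitism keeps the best
        while len(new_pop) < pop_size:
            a, b = rng.sample(pop, 2)         # tournament selection
            parent = a if fitness(a) >= fitness(b) else b
            child = [row[:] for row in parent]
            i, j = rng.randrange(n), rng.randrange(n)
            child[i][j] = rng.choice((-1, 0, 1))   # point mutation
            new_pop.append(child)
        pop = new_pop
        best = max(pop, key=fitness)
        if fitness(best) == target:
            break
    return best, fitness(best)
```

The GA's appeal here, as the abstract notes, is that it searches the discrete weight space globally instead of relying on a gradient, which does not exist for tri-state weights.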

  9. Neural network regulation driven by autonomous neural firings

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2016-07-01

    Biological neurons fire spontaneously owing to the presence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified by neural firings. Here, we study the effect of autonomous firings on network formation. Under temporally asymmetric Hebbian learning, bidirectional connections easily lose their balance and become unidirectional. Defining the difference between reciprocal connections as new variables, we can express the learning dynamics as if the new variables were interacting Ising spins in a magnetic system. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and identify tendencies of autonomous neural network regulation.
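    The temporally asymmetric Hebbian rule can be sketched with a standard STDP-style update (the exact rule, amplitudes, and time constants here are illustrative assumptions, not taken from the paper). When one neuron consistently fires before its partner, the forward connection is potentiated and the reverse one depressed, so an initially balanced reciprocal pair becomes unidirectional:

```python
import math

def stdp(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    # Temporally asymmetric Hebbian rule: potentiate when the presynaptic
    # spike precedes the postsynaptic one (delta_t = t_post - t_pre > 0).
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)

# Neuron A consistently fires 5 ms before neuron B (times in ms);
# only same-cycle spike pairs are considered in this sketch.
spikes_a = [10.0 + 50.0 * k for k in range(20)]
spikes_b = [15.0 + 50.0 * k for k in range(20)]

w_ab = w_ba = 0.5                   # initially balanced reciprocal pair
for ta, tb in zip(spikes_a, spikes_b):
    w_ab += stdp(tb - ta)           # A -> B: pre leads, so potentiation
    w_ba += stdp(ta - tb)           # B -> A: pre lags, so depression
w_ab = min(max(w_ab, 0.0), 1.0)     # clip weights to [0, 1]
w_ba = min(max(w_ba, 0.0), 1.0)
```

    After twenty pairings the pair has saturated at one extreme in each direction, i.e. the bidirectional connection has become unidirectional, as the abstract describes.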

  10. Coronary Artery Diagnosis Aided by Neural Network

    NASA Astrophysics Data System (ADS)

    Stefko, Kamil

    2007-01-01

    Coronary artery disease is due to atheromatous narrowing and subsequent occlusion of the coronary vessel. Application of an optimised feed-forward multi-layer back-propagation neural network (MLBP) for detection of narrowing in coronary artery vessels is presented in this paper. The research was performed using 580 data records from traditional ECG exercise tests confirmed by coronary arteriography results. Each record of the training database included a description of the state of a patient, providing input data for the neural network. The level and slope of the ST segment of a 12-lead ECG signal recorded at rest and after effort (48 floating-point values) formed the main component of the input data for the neural network. Coronary arteriography results (verifying the existence or absence of more than 50% stenosis of particular coronary vessels) were used as the correct training output pattern. More than 96% of cases were correctly recognised by the specially optimised and thoroughly verified neural network. The leave-one-out method was used for verification, so all 580 data records could be used for training as well as for verification of the neural network.
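    The leave-one-out scheme used for verification can be sketched generically. The classifier below is a stand-in nearest-neighbour rule on made-up two-feature records, not the paper's MLBP network; the point is how each record serves once as the test case and otherwise as training data:

```python
def leave_one_out_accuracy(data, labels, classify):
    # Each record is held out once; the model "trains" on the rest.
    correct = 0
    for i in range(len(data)):
        train_x = data[:i] + data[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        correct += classify(train_x, train_y, data[i]) == labels[i]
    return correct / len(data)

def nearest_neighbour(train_x, train_y, query):
    # Stand-in classifier: label of the closest training record.
    dists = [sum((a - b) ** 2 for a, b in zip(x, query)) for x in train_x]
    return train_y[dists.index(min(dists))]

# Invented two-feature records (e.g. ST level and slope) with ground truth.
records = [(0.0, 0.1), (0.1, 0.0), (0.9, 1.0), (1.0, 0.9)]
labels = ['normal', 'normal', 'stenosis', 'stenosis']
acc = leave_one_out_accuracy(records, labels, nearest_neighbour)
```

    With 580 records, the same loop runs 580 times, which is why every record can contribute to both training and verification.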

  11. General regression neural network and Monte Carlo simulation model for survival and growth of Salmonella on raw chicken skin as a function of serotype, temperature and time for use in risk assessment

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A general regression neural network and Monte Carlo simulation model for predicting survival and growth of Salmonella on raw chicken skin as a function of serotype (Typhimurium, Kentucky, Hadar), temperature (5 to 50C) and time (0 to 8 h) was developed. Poultry isolates of Salmonella with natural r...

  12. Adaptive Neural Network Based Control of Noncanonical Nonlinear Systems.

    PubMed

    Zhang, Yanjun; Tao, Gang; Chen, Mou

    2016-09-01

    This paper presents a new study on the adaptive neural network-based control of a class of noncanonical nonlinear systems with large parametric uncertainties. Unlike commonly studied canonical form nonlinear systems whose neural network approximation system models have explicit relative degree structures, which can directly be used to derive parameterized controllers for adaptation, noncanonical form nonlinear systems usually do not have explicit relative degrees, and thus their approximation system models are also in noncanonical forms. It is well-known that the adaptive control of noncanonical form nonlinear systems involves the parameterization of system dynamics. As demonstrated in this paper, it is also the case for noncanonical neural network approximation system models. Effective control of such systems is an open research problem, especially in the presence of uncertain parameters. This paper shows that it is necessary to reparameterize such neural network system models for adaptive control design, and that such reparameterization can be realized using a relative degree formulation, a concept yet to be studied for general neural network system models. This paper then derives the parameterized controllers that guarantee closed-loop stability and asymptotic output tracking for noncanonical form neural network system models. An illustrative example is presented with the simulation results to demonstrate the control design procedure, and to verify the effectiveness of such a new design method. PMID:26285223

  13. Artificial neural network for multifunctional areas.

    PubMed

    Riccioli, Francesco; El Asmar, Toufic; El Asmar, Jean-Pierre; Fagarazzi, Claudio; Casini, Leonardo

    2016-01-01

    The issues related to the appropriate planning of the territory are particularly pronounced in highly inhabited areas (urban areas), where in addition to protecting the environment, it is important to consider an anthropogenic (urban) development placed in the context of sustainable growth. This work aims at mathematically simulating the changes in the land use, by implementing an artificial neural network (ANN) model. More specifically, it will analyze how the increase of urban areas will develop and whether this development would impact on areas with particular socioeconomic and environmental value, defined as multifunctional areas. The simulation is applied to the Chianti Area, located in the province of Florence, in Italy. Chianti is an area with a unique landscape, and its territorial planning requires a careful examination of the territory in which it is inserted. PMID:26718948

  14. Neural network chips for trigger purposes in high energy physics

    SciTech Connect

    Gemmeke, H.; Eppler, W.; Fischer, T.

    1996-12-31

    Two novel neural chips, SAND (Simple Applicable Neural Device) and SIOP (Serial Input - Operating Parallel), are described. Both are well suited to hardware triggers in particle physics. The chips are optimized for a high input data rate at very low cost. The performance of a single SAND chip is 200 MOPS, owing to four parallel 16-bit multipliers and 40-bit adders working in one clock cycle. The chip can implement feedforward neural networks, Kohonen feature maps, and radial basis function networks. Four chips will be implemented on a PCI board for simulation and on a VUE board for triggering and on- and off-line analysis. For small feedforward neural networks, the bit-serial neuro-chip SIOP may achieve even smaller latencies because each synaptic connection is implemented by its own bit-serial multiplier and adder.

  15. Damselfly Network Simulator

    Energy Science and Technology Software Center (ESTSC)

    2014-04-01

    Damselfly is a model-based parallel network simulator. It can simulate communication patterns of High Performance Computing applications on different network topologies. It outputs steady-state network traffic for a communication pattern, which can help in studying network congestion and its impact on performance.

  16. VLSI synthesis of digital application specific neural networks

    NASA Technical Reports Server (NTRS)

    Beagles, Grant; Winters, Kel

    1991-01-01

    Neural networks tend to fall into two general categories: (1) software simulations, or (2) custom hardware that must be trained. The scope of this project is the merger of these two classifications into a system whereby a software model of a network is trained to perform a specific task and the results used to synthesize a standard cell realization of the network using automated tools.

  17. Neural networks within multi-core optic fibers

    PubMed Central

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-01-01

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks. PMID:27383911

  18. Neural networks within multi-core optic fibers.

    PubMed

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-01-01

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks. PMID:27383911

  19. Neural networks within multi-core optic fibers

    NASA Astrophysics Data System (ADS)

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-07-01

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks.

  20. Data compression using artificial neural networks

    SciTech Connect

    Watkins, B.E.

    1991-09-01

    This thesis investigates the application of artificial neural networks to the compression of image data. An algorithm is developed using the competitive learning paradigm, which takes advantage of the parallel processing and classification capability of neural networks to produce an efficient implementation of vector quantization. Multi-stage, tree-searched, and classification vector quantization codebook designs are adapted to the neural network design to reduce the computational cost and hardware requirements. The results show that the new algorithm provides a substantial reduction in computational costs and an improvement in performance.
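    The competitive-learning route to vector quantization can be sketched as follows; the learning rate, epoch count, and toy "image block" vectors are invented for illustration. Each input moves only its closest codeword (winner-take-all), so the codebook settles onto cluster centroids:

```python
def train_codebook(vectors, k, lr=0.2, epochs=50):
    # Winner-take-all competitive learning: move only the closest codeword.
    codebook = [list(v) for v in vectors[:k]]   # seed with the first k inputs
    for _ in range(epochs):
        for v in vectors:
            dists = [sum((c - x) ** 2 for c, x in zip(cw, v)) for cw in codebook]
            w = dists.index(min(dists))
            codebook[w] = [c + lr * (x - c) for c, x in zip(codebook[w], v)]
    return codebook

def quantize(v, codebook):
    # Encode a vector as the index of its nearest codeword.
    dists = [sum((c - x) ** 2 for c, x in zip(cw, v)) for cw in codebook]
    return dists.index(min(dists))

# Toy 2-D "image blocks": two dark and two bright vectors.
blocks = [[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]]
cb = train_codebook(blocks, k=2)
```

    Compression comes from transmitting only the codeword index per block; the tree-searched and multi-stage variants mentioned in the abstract reduce the cost of the nearest-codeword search itself.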

  1. Description of interatomic interactions with neural networks

    NASA Astrophysics Data System (ADS)

    Hajinazar, Samad; Shao, Junping; Kolmogorov, Aleksey N.

    Neural networks are a promising alternative to traditional classical potentials for describing interatomic interactions. Recent research in the field has demonstrated how arbitrary atomic environments can be represented with sets of general functions which serve as an input for the machine learning tool. We have implemented a neural network formalism in the MAISE package and developed a protocol for automated generation of accurate models for multi-component systems. Our tests illustrate the performance of neural networks and known classical potentials for a range of chemical compositions and atomic configurations. Supported by NSF Grant DMR-1410514.

  2. Neural network with formed dynamics of activity

    SciTech Connect

    Dunin-Barkovskii, V.L.; Osovets, N.B.

    1995-03-01

    The problem of developing a neural network with a given pattern of the state sequence is considered. A neural network structure and an algorithm for forming its bond matrix, which lead to an approximate but robust solution of the problem, are proposed and discussed. Limiting characteristics of the serviceability of the proposed structure are studied. Various methods of visualizing dynamic processes in a neural network are compared. Possible applications of the results obtained for interpretation of neurophysiological data and in neuroinformatics systems are discussed.

  3. Stock market index prediction using neural networks

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok

    1994-03-01

    A neural network approach to stock market index prediction is presented. Actual data from the Wall Street Journal's Dow Jones Industrial Index was used as a benchmark in our experiments, in which radial basis function neural networks were designed to model the index over the period from January 1988 to December 1992. Notable success was achieved, with the proposed model producing over 90% prediction accuracy on monthly Dow Jones Industrial Index predictions. The model also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrate that the radial basis function neural network is an excellent candidate for predicting a stock market index.
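    A minimal radial basis function model of this kind can be sketched as below, assuming Gaussian basis functions centred on the training points and output weights obtained by solving the interpolation system directly; the tiny "monthly index" series is invented for illustration:

```python
import math

def gauss(x, c, s):
    # Gaussian radial basis function centred at c with width s.
    return math.exp(-((x - c) ** 2) / (2 * s * s))

def fit_rbf(xs, ys, centers, width):
    # Solve the square interpolation system A w = y by Gaussian elimination.
    n = len(centers)
    M = [[gauss(x, c, width) for c in centers] + [y] for x, y in zip(xs, ys)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]          # partial pivoting
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):               # back substitution
        w[r] = (M[r][n] - sum(M[r][k] * w[k] for k in range(r + 1, n))) / M[r][r]
    return w

def rbf_predict(x, centers, width, w):
    # Output is a weighted sum of the basis activations.
    return sum(wi * gauss(x, c, width) for wi, c in zip(w, centers))

months = [0.0, 1.0, 2.0]            # toy monthly time axis
index = [100.0, 110.0, 105.0]       # invented index values
w = fit_rbf(months, index, months, 1.0)
```

    With one centre per training point the Gaussian kernel matrix is invertible, so the fitted model interpolates the training series exactly; in practice far fewer centres than points would be used.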

  4. A neural network prototyping package within IRAF

    NASA Technical Reports Server (NTRS)

    Bazell, D.; Bankman, I.

    1992-01-01

    We outline our plans for incorporating a Neural Network Prototyping Package into the IRAF environment. The package we are developing will allow the user to choose between different types of networks and to specify the details of the particular architecture chosen. Neural networks consist of a highly interconnected set of simple processing units. The strengths of the connections between units are determined by weights, which are adaptively set as the network 'learns'. In some cases, learning is a separate phase of the user cycle of the network, while in other cases the network learns continuously. Neural networks have been found to be very useful in pattern recognition and image processing applications. They can form very general 'decision boundaries' to differentiate between objects in pattern space, and they can be used for associative recall of patterns based on partial cues and for adaptive filtering. We discuss the different architectures we plan to use and give examples of what they can do.

  5. Nonequilibrium landscape theory of neural networks

    PubMed Central

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attraction represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye-movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments. PMID:24145451

  6. Neural Networks for Signal Processing and Control

    NASA Astrophysics Data System (ADS)

    Hesselroth, Ted Daniel

    Neural networks are developed for controlling a robot-arm and camera system and for processing images. The networks are based upon computational schemes that may be found in the brain. In the first network, a neural map algorithm is employed to control a five-joint pneumatic robot arm and gripper through feedback from two video cameras. The pneumatically driven robot arm employed shares essential mechanical characteristics with skeletal muscle systems. To control the position of the arm, 200 neurons formed a network representing the three-dimensional workspace embedded in a four-dimensional system of coordinates from the two cameras, and learned a set of pressures corresponding to the end effector positions, as well as a set of Jacobian matrices for interpolating between these positions. Because of the properties of the rubber-tube actuators of the arm, the position as a function of supplied pressure is nonlinear, nonseparable, and exhibits hysteresis. Nevertheless, through the neural network learning algorithm the position could be controlled to an accuracy of about one pixel (~3 mm) after two hundred learning steps. Applications of repeated corrections in each step via the Jacobian matrices leads to a very robust control algorithm since the Jacobians learned by the network have to satisfy the weak requirement that they yield a reduction of the distance between gripper and target. The second network is proposed as a model for the mammalian vision system in which backward connections from the primary visual cortex (V1) to the lateral geniculate nucleus play a key role. The application of hebbian learning to the forward and backward connections causes the formation of receptive fields which are sensitive to edges, bars, and spatial frequencies of preferred orientations. The receptive fields are learned in such a way as to maximize the rate of transfer of information from the LGN to V1. 
Orientational preferences are organized into a feature map in the primary visual

  7. Microarray data classified by artificial neural networks.

    PubMed

    Linder, Roland; Richards, Tereza; Wagner, Mathias

    2007-01-01

    Systems biology has enjoyed explosive growth in both the number of people participating in this area of research and the number of publications on the topic. The field of systems biology encompasses the in silico analysis of high-throughput data as provided by DNA or protein microarrays. Along with the increasing availability of microarray data, attention is focused on methods of analyzing the expression rates. One important type of analysis is the classification task, for example, distinguishing different types of cell functions or tumors. Recently, interest has been awakened toward artificial neural networks (ANN), which have many appealing characteristics such as an exceptional degree of accuracy. Nonlinear relationships or independence from certain assumptions regarding the data distribution are also considered. The current work reviews advantages as well as disadvantages of neural networks in the context of microarray analysis. Comparisons are drawn to alternative methods. Selected solutions are discussed, and finally algorithms for the effective combination of multiple ANNs are presented. The development of approaches to use ANN-processed microarray data applicable to run cell and tissue simulations may be slated for future investigation. PMID:18220242

  8. A neural network model of harmonic detection

    NASA Astrophysics Data System (ADS)

    Lewis, Clifford F.

    2003-04-01

    Harmonic detection theories postulate that a virtual pitch is perceived when a sufficient number of harmonics is present. The harmonics need not be consecutive, but higher harmonics contribute less than lower harmonics [J. Raatgever and F. A. Bilsen, in Auditory Physiology and Perception, edited by Y. Cazals, K. Horner, and L. Demany (Pergamon, Oxford, 1992), pp. 215-222; M. K. McBeath and J. F. Wayand, Abstracts of the Psychonom. Soc. 3, 55 (1998)]. A neural network model is presented that has the potential to simulate this operation. Harmonics are first passed through a bank of rounded exponential filters with lateral inhibition. The results are used as inputs for an autoassociator neural network. The model is trained using harmonic data for symphonic musical instruments, in order to test whether it can self-organize by learning associations between co-occurring harmonics. It is shown that the trained model can complete the pattern for missing-fundamental sounds. The performance of the model in harmonic detection will be compared with experimental results for humans.
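    The autoassociator's pattern-completion step can be sketched with a Hopfield-style network trained by Hebbian outer products (an illustrative stand-in for the paper's model, which also includes the filter bank and lateral inhibition). Harmonic spectra are stored as binary vectors over 16 harmonic bins, and a cue with a component removed is completed:

```python
def train_autoassociator(patterns):
    # Hebbian outer-product weights over bipolar states; diagonal zeroed.
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        s = [2 * x - 1 for x in p]          # map {0,1} -> {-1,+1}
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += s[i] * s[j]
    return W

def recall(W, cue, steps=5):
    # Synchronous threshold updates until (in practice) a fixed point.
    s = [2 * x - 1 for x in cue]
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return [(x + 1) // 2 for x in s]

N = 16                                      # bins for harmonics 1..16
def comb(f0):
    # Binary spectrum containing every multiple of f0.
    return [1 if (i + 1) % f0 == 0 else 0 for i in range(N)]

pattern_a, pattern_b = comb(2), comb(3)
W = train_autoassociator([pattern_a, pattern_b])
cue = pattern_a[:]
cue[1] = 0                                  # delete the comb's lowest component
```

    Recall from the incomplete cue restores the deleted component, which is the sense in which an autoassociator can "fill in" a missing fundamental.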

  9. Network dissection of neural networks used in optimal groundwater remediation

    SciTech Connect

    Rogers, L.L.; Johnson, V.M.; Dowla, F.U.

    1992-12-01

    We have been using an innovative computational approach for optimal groundwater management which involves use of artificial neural networks (ANNs) and the genetic algorithm (GA). In this approach, the ANN is trained to predict a particular aspect of the outcome of the flow and transport simulation. Then the GA directs a search, based on the mechanics of genetics and natural selection, through possible management solutions, in this case patterns or realizations of pumping. These pumping realizations are presented to the trained ANN, which predicts the outcome of the pumping realizations. The primary advantages of the ANN approach are parallel processing for the flow and transport simulations and the ability to 'recycle' or reuse the base of knowledge formed by these flow and transport simulations.
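    The ANN-plus-GA loop can be sketched as follows. The surrogate function below stands in for the trained ANN, with invented per-well benefits and pumping costs; the GA then searches binary on/off pumping patterns against that cheap surrogate instead of re-running the flow and transport simulation:

```python
import random

def surrogate(pattern):
    # Stand-in for the trained ANN: predicts net remediation benefit of a
    # pumping pattern (1 = well on). Values are invented; wells 2 and 5
    # are made the most effective.
    value = [1.0, 2.0, 8.0, 1.5, 0.5, 7.0, 1.0, 2.5]
    benefit = sum(v * p for v, p in zip(value, pattern))
    cost = 3.0 * sum(pattern)          # each active well costs pumping energy
    return benefit - cost

def ga_search(n_wells=8, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_wells)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=surrogate, reverse=True)
        elite = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_wells)
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.2:                 # bit-flip mutation
                i = rng.randrange(n_wells)
                child[i] ^= 1
            children.append(child)
        pop = elite + children
    return max(pop, key=surrogate)
```

    Every fitness evaluation here is a surrogate call rather than a simulation run, which is exactly where the "recycled" base of knowledge pays off.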

  10. Network dissection of neural networks used in optimal groundwater remediation

    SciTech Connect

    Rogers, L.L.; Johnson, V.M.; Dowla, F.U.

    1992-12-01

    We have been using an innovative computational approach for optimal groundwater management which involves use of artificial neural networks (ANNs) and the genetic algorithm (GA). In this approach, the ANN is trained to predict a particular aspect of the outcome of the flow and transport simulation. Then the GA directs a search, based on the mechanics of genetics and natural selection, through possible management solutions, in this case patterns or realizations of pumping. These pumping realizations are presented to the trained ANN, which predicts the outcome of the pumping realizations. The primary advantages of the ANN approach are parallel processing for the flow and transport simulations and the ability to 'recycle' or reuse the base of knowledge formed by these flow and transport simulations.

  11. Wavelet neural network for detection of signals in communications

    NASA Astrophysics Data System (ADS)

    Gomez-Sanchez, Raquel; Andina, Diego

    1998-03-01

    Our objective is the design and simulation of a system for the detection of signals in communications that is efficient in terms of speed and computational complexity. The proposed scheme takes advantage of two powerful frameworks in signal processing: wavelets and neural networks. The decision system makes a decision based on computation of the a priori probabilities of the input signal. For the estimation of these probability density functions, a wavelet neural network was chosen. This choice rests on the following considerations: (a) neural networks are established as a general approximation tool for fitting nonlinear models from input/output data, and (b) the wavelet decomposition is increasingly popular as a powerful approximation tool. The integration of these factors leads to the wavelet neural network concept. This network preserves the universal approximation property of wavelet series, with the advantage of the speed and efficient computation of a neural network architecture. The topology and learning algorithm of the network provide an efficient approximation to the required probability density functions.

  12. Reconstructing input for artificial neural networks based on embedding theory and mutual information to simulate soil pore water salinity in tidal floodplain

    NASA Astrophysics Data System (ADS)

    Zheng, Fawen; Wan, Yongshan; Song, Keunyea; Sun, Detong; Hedgepeth, Marion

    2016-01-01

    Soil pore water salinity plays an important role in the distribution of vegetation and in biogeochemical processes in coastal floodplain ecosystems. In this study, artificial neural networks (ANNs) were applied to simulate the pore water salinity of a tidal floodplain in Florida. We present an approach, based on embedding theory with mutual information, to reconstruct ANN model input time series from one system state variable. Mutual information between system output and input was computed, and the local minimum mutual information points were used to determine a time-lag vector for time series embedding and reconstruction, with which a mutual-information-weighted averaging method was developed to compute the components of the reconstructed time series. The optimal embedding dimension was obtained by optimizing model performance. The method was applied to simulate soil pore water salinity dynamics at 12 probe locations in the tidal floodplain influenced by saltwater intrusion using four years (2005-2008) of data, in which adjacent river water salinity was used to reconstruct the model input. The simulated electrical conductivity of the pore water showed close agreement with field observations (RMSE and ), suggesting the input reconstructed by the proposed approach provided adequate information for ANN modeling. A multiple linear regression model, a partial mutual information algorithm for input variable selection, a k-NN algorithm, and simple time-delay embedding were also used to further verify the merit of the proposed approach.
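    The embedding-with-mutual-information idea can be sketched generically: estimate mutual information between the series and its lagged copy with a histogram, take the first local minimum as the delay, then build standard time-delay embedding vectors. Bin counts and lag ranges here are illustrative choices, and this sketch omits the authors' mutual-information-weighted averaging step:

```python
import math

def mutual_information(x, y, bins=8):
    # Histogram-based MI estimate for two equal-length series.
    lo, hi = min(min(x), min(y)), max(max(x), max(y))
    def b(v):
        return min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1)
    n = len(x)
    pxy, px, py = {}, {}, {}
    for xi, yi in zip(x, y):
        i, j = b(xi), b(yi)
        pxy[(i, j)] = pxy.get((i, j), 0) + 1
        px[i] = px.get(i, 0) + 1
        py[j] = py.get(j, 0) + 1
    mi = 0.0
    for (i, j), c in pxy.items():
        p = c / n
        mi += p * math.log(p * n * n / (px[i] * py[j]))
    return mi

def first_min_lag(series, max_lag=30):
    # Classic delay choice: first local minimum of MI(x(t), x(t+lag)).
    mis = [mutual_information(series[:-t], series[t:]) for t in range(1, max_lag + 1)]
    for t in range(1, len(mis) - 1):
        if mis[t] < mis[t - 1] and mis[t] < mis[t + 1]:
            return t + 1            # lags are 1-indexed
    return len(mis)

def embed(series, dim, lag):
    # Time-delay embedding: [x(t), x(t+lag), ..., x(t+(dim-1)*lag)].
    span = (dim - 1) * lag
    return [[series[i + k * lag] for k in range(dim)]
            for i in range(len(series) - span)]
```

    The embedding vectors would then serve as the ANN input at each time step, with the embedding dimension tuned by model performance as in the paper.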

  13. An artificial neural network controller for intelligent transportation systems applications

    SciTech Connect

    Vitela, J.E.; Hanebutte, U.R.; Reifman, J.

    1996-04-01

    An Autonomous Intelligent Cruise Control (AICC) has been designed using a feedforward artificial neural network, as an example for utilizing artificial neural networks for nonlinear control problems arising in intelligent transportation systems applications. The AICC is based on a simple nonlinear model of the vehicle dynamics. A Neural Network Controller (NNC) code developed at Argonne National Laboratory to control discrete dynamical systems was used for this purpose. In order to test the NNC, an AICC-simulator containing graphical displays was developed for a system of two vehicles driving in a single lane. Two simulation cases are shown, one involving a lead vehicle with constant velocity and the other a lead vehicle with varying acceleration. More realistic vehicle dynamic models will be considered in future work.

  14. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    ERIC Educational Resources Information Center

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the characteristics…

  15. Results of the neural network investigation

    NASA Astrophysics Data System (ADS)

    Uvanni, Lee A.

    1992-04-01

    Rome Laboratory has designed and implemented a neural network based automatic target recognition (ATR) system under contract F30602-89-C-0079 with Booz, Allen & Hamilton (BAH), Inc., of Arlington, Virginia. The system utilizes a combination of neural network paradigms and conventional image processing techniques in a parallel environment on the IE-2000 SUN 4 workstation at Rome Laboratory. The IE-2000 workstation was designed to assist the Air Force and Department of Defense to derive the needs for image exploitation and image exploitation support for the late 1990s - year 2000 time frame. The IE-2000 consists of a developmental testbed and an applications testbed, both with the goal of solving real-world problems on real-world facilities for image exploitation. To fully exploit the parallel nature of neural networks, 18 Inmos T800 transputers were utilized, in an attempt to provide a near-linear speed-up for each subsystem component implemented on them. The initial design contained three well-known neural network paradigms, each modified by BAH to some extent: the Selective Attention Neocognitron (SAN), the Binary Contour System/Feature Contour System (BCS/FCS), and Adaptive Resonance Theory 2 (ART-2), and one neural network designed by BAH called the Image Variance Exploitation Network (IVEN). Through rapid prototyping, the initial system evolved into a completely different final design, called the Neural Network Image Exploitation System (NNIES), where the final system consists of two basic components: the Double Variance (DV) layer and the Multiple Object Detection And Location System (MODALS). A rapid-prototyping neural network CAD tool, designed by Booz, Allen & Hamilton, was used to rapidly build and emulate the neural network paradigms. Evaluation of the completed ATR system included probability of detection and probability of false alarm, among other measures.

  16. Adaptive Optimization of Aircraft Engine Performance Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Long, Theresa W.

    1995-01-01

    Preliminary results are presented on the development of an adaptive neural network based control algorithm to enhance aircraft engine performance. This work builds upon a previous National Aeronautics and Space Administration (NASA) effort known as Performance Seeking Control (PSC). PSC is an adaptive control algorithm which contains a model of the aircraft's propulsion system which is updated on-line to match the operation of the aircraft's actual propulsion system. Information from the on-line model is used to adapt the control system during flight to allow optimal operation of the aircraft's propulsion system (inlet, engine, and nozzle) to improve aircraft engine performance without compromising reliability or operability. Performance Seeking Control has been shown to yield reductions in fuel flow, increases in thrust, and reductions in engine fan turbine inlet temperature. The neural network based adaptive control, like PSC, will contain a model of the propulsion system which will be used to calculate optimal control commands on-line. Hopes are that it will be able to provide some additional benefits above and beyond those of PSC. The PSC algorithm is computationally intensive, it is valid only at near steady-state flight conditions, and it has no way to adapt or learn on-line. These issues are being addressed in the development of the optimal neural controller. Specialized neural network processing hardware is being developed to run the software, the algorithm will be valid at steady-state and transient conditions, and will take advantage of the on-line learning capability of neural networks. Future plans include testing the neural network software and hardware prototype against an aircraft engine simulation. In this paper, the proposed neural network software and hardware is described and preliminary neural network training results are presented.

  17. Neural dynamics in superconducting networks

    NASA Astrophysics Data System (ADS)

    Segall, Kenneth; Schult, Dan; Crotty, Patrick; Miller, Max

    2012-02-01

    We discuss the use of Josephson junction networks as analog models for simulating neuron behaviors. A single unit called a "Josephson junction neuron," composed of two Josephson junctions [1], displays behavior that shows characteristics of single neurons such as action potentials, thresholds, and refractory periods. Synapses can be modeled as passive filters and can be used to connect neurons together. The sign of the bias current to the Josephson neuron can be used to determine whether the neuron is excitatory or inhibitory. Due to the intrinsic speed of Josephson junctions and their scaling properties as analog models, a large network of Josephson neurons measured over typical lab times contains dynamics which would essentially be impossible to calculate on a computer. We discuss the operating principle of the Josephson neuron, the coupling of Josephson neurons together to make large networks, and the Kuramoto-like synchronization of a system of disordered junctions. [1] "Josephson junction simulation of neurons," P. Crotty, D. Schult and K. Segall, Physical Review E 82, 011914 (2010).
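
The Kuramoto-like synchronization of disordered oscillators mentioned above can be illustrated with a minimal sketch of the standard Kuramoto model. This is a generic numerical illustration, not the authors' Josephson-junction circuit equations; all parameter values are assumptions chosen for demonstration.

```python
import math, random

def kuramoto_order(n=50, coupling=2.0, spread=0.5, dt=0.01, steps=4000, seed=1):
    """Euler-integrate dtheta_i/dt = w_i + (K/N) sum_j sin(theta_j - theta_i)
    for n oscillators with Gaussian-disordered natural frequencies, and return
    the final order parameter r in [0, 1] (r ~ 1 means synchronized)."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, spread) for _ in range(n)]          # disorder
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        # mean-field form: r * e^{i psi} = (1/N) sum_j e^{i theta_j}
        cx = sum(math.cos(t) for t in theta) / n
        sx = sum(math.sin(t) for t in theta) / n
        r, psi = math.hypot(cx, sx), math.atan2(sx, cx)
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    cx = sum(math.cos(t) for t in theta) / n
    sx = sum(math.sin(t) for t in theta) / n
    return math.hypot(cx, sx)
```

With coupling well above the critical value the order parameter approaches 1 despite the frequency disorder; with zero coupling it stays near the incoherent value of order 1/sqrt(N).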

  18. Decision-making differences: novices, experts, and a neural network

    NASA Astrophysics Data System (ADS)

    Manning, David; Bunting, Sam; Leach, John

    2000-04-01

    We investigated the decision-making performance of trained radiographers, novice radiographers, and a neural network in the detection of fractures. Ground truth was established by the independent agreement of experienced radiologists for 740 single-view digitized radiographs of the wrist. The images were categorized into negatives and positives; 520 of these were used to train the three-layer back-propagation neural network in a supervised mode, and the remainder were used to create a test bank. The test was presented to 20 novice observers, 12 experienced radiographers trained in the detection of skeletal trauma, and then to the trained neural network. ROC Az values for all the decision makers were not significantly different (p > 0.1), but there were significant differences in the values of the True Positive and True Negative Fractions. The neural network showed a greater aptitude for distinguishing the normals. By filtering the neural net decisions through the human data we simulated the effect of assisted reporting. Results suggest that if fracture prevalence is very low in a population, a neural network demonstrating high specificity may have utility in reducing the number of images which must be reviewed by human experts.
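
The quantities compared above can be made concrete with a short sketch of how Az and the True Positive/True Negative Fractions are computed from rating scores. This is the standard Mann-Whitney construction of the ROC area, not code from the study; the example scores in the usage are made up.

```python
def roc_az(scores_pos, scores_neg):
    """Area under the ROC curve (Az) via the Mann-Whitney statistic:
    the probability that a randomly chosen positive case outscores a
    randomly chosen negative case (ties count one half)."""
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def fractions(scores_pos, scores_neg, threshold):
    """True Positive and True Negative Fractions at a decision threshold."""
    tpf = sum(1 for s in scores_pos if s >= threshold) / len(scores_pos)
    tnf = sum(1 for s in scores_neg if s < threshold) / len(scores_neg)
    return tpf, tnf
```

Two observers can share the same Az yet sit at different operating points, which is exactly why the TPF/TNF comparison above can differ significantly while the Az values do not.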

  19. Supervised learning in hierarchical neural networks for edge enhancement

    NASA Astrophysics Data System (ADS)

    Lu, Si W.; Szeto, Anthony

    1992-09-01

    Hierarchical artificial neural networks are designed to enhance edge measurement. The neural network comprises four subnets: the Edge Contour Detection subnet, the Maximum Detection subnet, the Gradient Adjustment subnet, and the Orientation Determination subnet. The interconnections between these subnets are fashioned in a hierarchical manner. In order for the neural network system to perform correctly and accurately, each of the neural subnets must be given suitable weights by learning. The learning is very difficult for hierarchical neural networks because of the complicated hierarchical structure. In our learning algorithm, modularity is introduced for fast learning and good generalization, based on the analysis of the local concept and the distributed concept represented by the module. The amount of information which the nets need to learn is drastically reduced. Therefore, only a small number of training patterns are required to train the nets and still derive suitable weights for the nets to perform accurately and efficiently. The neural network is simulated on a MIPS M120-S machine running UNIX. For test images degraded by random noise up to 20%, the true edges are detected and enhanced, the false edges are suppressed, the noise is eliminated, the weak edges are reinforced, and the missing edge elements are interpolated.

  20. Artificial neural network modeling of dissolved oxygen in reservoir.

    PubMed

    Chen, Wei-Bo; Liu, Wen-Cheng

    2014-02-01

    The water quality of reservoirs is one of the key factors in the operation and water quality management of reservoirs. Dissolved oxygen (DO) in the water column is essential for microorganisms and a significant indicator of the state of aquatic ecosystems. In this study, two artificial neural network (ANN) models, a back-propagation neural network (BPNN) and an adaptive neural-based fuzzy inference system (ANFIS), and a multilinear regression (MLR) model were developed to estimate the DO concentration in the Feitsui Reservoir of northern Taiwan. The input variables of the neural network are water temperature, pH, conductivity, turbidity, suspended solids, total hardness, total alkalinity, and ammonium nitrogen. The performance of the ANN models and the MLR model was assessed through the mean absolute error, root mean square error, and correlation coefficient computed from the measured and model-simulated DO values. The results reveal that the ANN estimation performances were superior to those of the MLR. Comparing the BPNN and ANFIS models against these performance criteria, the ANFIS model is better than the BPNN model for predicting the DO values. The study results show that the neural network, particularly the ANFIS model, is able to predict the DO concentrations with reasonable accuracy, suggesting that the neural network is a valuable tool for reservoir management in Taiwan. PMID:24078053
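
The three performance criteria used to compare the models above can be sketched directly. This is a generic implementation of the stated metrics (mean absolute error, root mean square error, Pearson correlation), not the study's code; the values in the usage are made up.

```python
import math

def fit_metrics(measured, predicted):
    """Mean absolute error, root-mean-square error, and Pearson correlation
    coefficient between measured and model-simulated values."""
    n = len(measured)
    mae = sum(abs(m - p) for m, p in zip(measured, predicted)) / n
    rmse = math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n)
    mean_m = sum(measured) / n
    mean_p = sum(predicted) / n
    cov = sum((m - mean_m) * (p - mean_p) for m, p in zip(measured, predicted))
    var_m = sum((m - mean_m) ** 2 for m in measured)
    var_p = sum((p - mean_p) ** 2 for p in predicted)
    r = cov / math.sqrt(var_m * var_p)
    return mae, rmse, r
```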

  1. Imbibition well stimulation via neural network design

    DOEpatents

    Weiss, William

    2007-08-14

    A method for stimulation of hydrocarbon production via imbibition by utilization of surfactants. The method includes use of fuzzy logic and neural network architecture constructs to determine surfactant use.

  2. Constructive Autoassociative Neural Network for Facial Recognition

    PubMed Central

    Fernandes, Bruno J. T.; Cavalcanti, George D. C.; Ren, Tsang I.

    2014-01-01

    Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature. PMID:25542018

  3. Sensor failure detection and recovery by neural networks

    NASA Technical Reports Server (NTRS)

    Guo, Ten-Huei; Nurre, J.

    1991-01-01

    A new method of sensor failure detection, isolation, and accommodation using a neural network approach is described. In a propulsion system such as the Space Shuttle Main Engine, the number of sensors is usually much higher than the order of the system. This built-in redundancy of the sensors can be utilized to detect and correct sensor failure problems. The goal of the proposed scheme is to train a neural network to identify the sensor whose measurement is not consistent with the other sensor outputs. Another neural network is trained to recover the value of critical variables when their measurements fail. Techniques for training the network with a limited amount of data are developed. The proposed scheme is tested using simulated data from the Space Shuttle Main Engine (SSME) inflight sensor group.
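
The consistency check at the heart of the scheme can be sketched in simplified form. In the paper the per-sensor estimators are trained neural networks; here they are arbitrary callables (a plain average of the redundant sensors in the usage below), and the names and threshold are illustrative assumptions.

```python
def detect_failed_sensor(readings, predictors, tolerance):
    """Flag the sensor whose reading is least consistent with the value
    reconstructed from the other sensors.  predictors[i] maps the other
    readings to an estimate of sensor i (in the paper, this role is played
    by a trained neural network).  Returns the index of the failed sensor,
    or None if every residual is within tolerance."""
    worst, worst_residual = None, 0.0
    for i, reading in enumerate(readings):
        others = readings[:i] + readings[i + 1:]
        residual = abs(reading - predictors[i](others))
        if residual > tolerance and residual > worst_residual:
            worst, worst_residual = i, residual
    return worst
```

A second estimator of the same form can then substitute the reconstructed value for the failed reading, which is the accommodation step described above.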

  4. Vein matching using artificial neural network in vein authentication systems

    NASA Astrophysics Data System (ADS)

    Noori Hoshyar, Azadeh; Sulaiman, Riza

    2011-10-01

    Personal identification technology for security systems is developing rapidly. Traditional authentication modes like keys, passwords, and cards are not safe enough because they can be stolen or easily forgotten. Biometrics, as a developed technology, has been applied to a wide range of systems. According to different researchers, the vein is a good biometric candidate among other traits such as fingerprint, hand geometry, voice, and DNA for authentication systems. Vein authentication systems can be designed by different methodologies. All of the methodologies include a matching stage, which is essential for the final verification of the system. The neural network is an effective methodology for matching and recognizing individuals in authentication systems. Therefore, this paper explains and implements the neural network methodology for a finger vein authentication system. A neural network is trained in Matlab to match the vein features of the authentication system. The network simulation shows a matching accuracy of 95%, which is a good performance for authentication system matching.

  5. HVAC pipe/duct sizing using artificial neural networks

    SciTech Connect

    Yeh, S.J.D.; Wong, K.F.V.

    1995-12-31

    The main objective of this study is to demonstrate that artificial neural networks (ANNs) serve as useful aids to Heating, Ventilating and Air-Conditioning (HVAC) system design. In the present work, the design process for sizing fluid systems in HVAC is simulated by using ANNs. Four ANNs have been constructed on a personal computer, one for air duct sizing and three for pipe sizing. The air duct network was trained to output the friction rate and duct size. The three pipe-sizing neural networks produce pressure drops and pipe diameters. By using the trained artificial neural networks, data can be obtained instantly with errors of less than 3%. Thus, ANNs have been shown to simplify traditional methods and procedures in HVAC pipe and air duct sizing.

  6. Speech transmission index from running speech: A neural network approach

    NASA Astrophysics Data System (ADS)

    Li, F. F.; Cox, T. J.

    2003-04-01

    Speech transmission index (STI) is an important objective parameter concerning speech intelligibility for sound transmission channels. It is normally measured with specific test signals to ensure high accuracy and good repeatability. Measurement with running speech was previously proposed, but accuracy is compromised and hence applications limited. A new approach that uses artificial neural networks to accurately extract the STI from received running speech is developed in this paper. Neural networks are trained on a large set of transmitted speech examples with prior knowledge of the transmission channels' STIs. The networks perform complicated nonlinear function mappings and spectral feature memorization to enable accurate objective parameter extraction from transmitted speech. Validations via simulations demonstrate the feasibility of this new method on a one-net-one-speech extract basis. In this case, accuracy is comparable with normal measurement methods. This provides an alternative to standard measurement techniques, and it is intended that the neural network method can facilitate occupied room acoustic measurements.

  7. ANNarchy: a code generation approach to neural simulations on parallel hardware

    PubMed Central

    Vitay, Julien; Dinkelbach, Helge Ü.; Hamker, Fred H.

    2015-01-01

    Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows rate-coded and spiking networks, as well as combinations of both, to be easily defined and simulated. The interface in Python has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions. PMID:26283957

  9. Using neural networks in software repositories

    NASA Technical Reports Server (NTRS)

    Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.

    1992-01-01

    The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.

  10. Limitations of opto-electronic neural networks

    NASA Technical Reports Server (NTRS)

    Yu, Jeffrey; Johnston, Alan; Psaltis, Demetri; Brady, David

    1989-01-01

    Consideration is given to the limitations of implementing neurons, weights, and connections in neural networks for electronics and optics. It is shown that the advantages of each technology are utilized when electronically fabricated neurons are included and a combination of optics and electronics are employed for the weights and connections. The relationship between the types of neural networks being constructed and the choice of technologies to implement the weights and connections is examined.

  11. Neural-Network Controller For Vibration Suppression

    NASA Technical Reports Server (NTRS)

    Boussalis, Dhemetrios; Wang, Shyh Jong

    1995-01-01

    Neural-network-based adaptive-control system proposed for vibration suppression of flexible space structures. Controller features three-layer neural network and utilizes output feedback. Measurements generated by various sensors on structure. Feed forward path also included to speed up response in case plant exhibits predominantly linear dynamic behavior. System applicable to single-input single-output systems. Work extended to multiple-input multiple-output systems as well.

  12. Simulation of an array-based neural net model

    NASA Technical Reports Server (NTRS)

    Barnden, John A.

    1987-01-01

    Research in cognitive science suggests that much of cognition involves the rapid manipulation of complex data structures. However, it is very unclear how this could be realized in neural networks or connectionist systems. A core question is: how could the interconnectivity of items in an abstract-level data structure be neurally encoded? The answer appeals mainly to positional relationships between activity patterns within neural arrays, rather than directly to neural connections in the traditional way. The new method was initially devised to account for abstract symbolic data structures, but it also supports cognitively useful spatial analogue, image-like representations. As the neural model is based on massive, uniform, parallel computations over 2D arrays, the massively parallel processor is a convenient tool for simulation work, although there are complications in using the machine to the fullest advantage. An MPP Pascal simulation program for a small pilot version of the model is running.

  13. Neural networks using two-component Bose-Einstein condensates

    PubMed Central

    Byrnes, Tim; Koyama, Shinsuke; Yan, Kai; Yamamoto, Yoshihisa

    2013-01-01

    The authors previously considered a method of solving optimization problems by using an interconnected network of two-component Bose-Einstein condensates (Byrnes, Yan, Yamamoto, New J. Phys. 13, 113025 (2011)). The use of bosonic particles gives a reduced solution time, proportional to the number of bosons N, for solving Ising model Hamiltonians by taking advantage of enhanced bosonic cooling rates. Here we consider the same system in terms of neural networks. We find that, up to the accelerated cooling of the bosons, the previously proposed system is equivalent to a stochastic continuous Hopfield network. This makes it clear that the BEC network is a physical realization of a simulated annealing algorithm, with an additional speedup due to bosonic enhancement. We discuss the BEC network in terms of neural network tasks such as learning and pattern recognition and find that the latter process may be accelerated by a factor of N. PMID:23989391
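
The simulated-annealing interpretation above can be illustrated with a conventional classical annealing sketch for a small Ising Hamiltonian. This is a generic algorithm, not the BEC dynamics themselves; the cooling schedule and all parameter values are illustrative assumptions.

```python
import math, random

def anneal_ising(h, J, steps=20000, t0=2.0, t1=0.01, seed=0):
    """Simulated annealing for an Ising energy
        E(s) = -sum_i h[i]*s[i] - sum_{i<j} J[(i,j)]*s[i]*s[j],
    with spins s[i] in {-1, +1}.  Returns (final spins, final energy)."""
    rng = random.Random(seed)
    n = len(h)
    s = [rng.choice((-1, 1)) for _ in range(n)]

    def energy(state):
        e = -sum(h[i] * state[i] for i in range(n))
        e -= sum(c * state[i] * state[j] for (i, j), c in J.items())
        return e

    e = energy(s)
    for k in range(steps):
        temp = t0 * (t1 / t0) ** (k / steps)      # geometric cooling schedule
        i = rng.randrange(n)
        s[i] = -s[i]                               # propose a single spin flip
        e_new = energy(s)
        if e_new <= e or rng.random() < math.exp((e - e_new) / temp):
            e = e_new                              # accept (Metropolis rule)
        else:
            s[i] = -s[i]                           # reject: undo the flip
    return s, e
```

For a two-spin problem with fields h = [1, -1] and coupling J[(0,1)] = 0.5, the unique ground state is s = [+1, -1] with energy -1.5, which the anneal finds reliably.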

  14. Speech synthesis with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Weijters, Ton; Thole, Johan

    1992-10-01

    The application of neural nets to speech synthesis is considered. In speech synthesis, the main efforts so far have been to master the grapheme to phoneme conversion. During this conversion symbols (graphemes) are converted into other symbols (phonemes). Neural networks, however, are especially competitive for tasks in which complex nonlinear transformations are needed and sufficient domain specific knowledge is not available. The conversion of text into speech parameters appropriate as input for a speech generator seems such a task. Results of a pilot study in which an attempt is made to train a neural network for this conversion are presented.

  15. Continuous neural network with windowed Hebbian learning.

    PubMed

    Fotouhi, M; Heidari, M; Sharifitabar, M

    2015-06-01

    We introduce an extension of the classical neural field equation where the dynamics of the synaptic kernel satisfies the standard Hebbian type of learning (synaptic plasticity). Here, a continuous network in which changes in the weight kernel occur in a specified time window is considered. A novelty of this model is that it admits synaptic weight decrease as well as the usual weight increase resulting from correlated activity. The resulting equation leads to a delay-type rate model for which the existence and stability of solutions such as the rest state, bumps, and traveling fronts are investigated. Some relations between the length of the time window and the bump width are derived. In addition, the effect of the delay parameter on the stability of solutions is shown. Numerical simulations of the solutions and their stability are also presented. PMID:25677526

  16. Microturbine control based on fuzzy neural network

    NASA Astrophysics Data System (ADS)

    Yan, Shijie; Bian, Chunyuan; Wang, Zhiqiang

    2006-11-01

    The microturbine generator (MTG) is a clean, efficient, low-cost, and reliable energy supply system. Judging from its external characteristics, the MTG is a multivariable, time-varying, coupled system, so it is difficult to identify on-line, and the conventional control laws adopted previously cannot achieve the desired results. A novel fuzzy neural network (FNN) control algorithm is proposed in combination with conventional PID control. In the paper, IF-THEN rules for tuning are applied by a first-order Sugeno fuzzy model with seven fuzzy rules, and the membership function is given as the continuous Gaussian function. Some sample data were utilized to train the FNN. By continually adjusting the shape of the membership functions and the weights, the objective of auto-tuning the fuzzy rules can be achieved. The FNN algorithm has been applied to a "100 kW microturbine control and power converter system." The simulation and experimental results show that the algorithm works very well.

  17. Delayed switching applied to memristor neural networks

    NASA Astrophysics Data System (ADS)

    Wang, Frank Z.; Helian, Na; Wu, Sining; Yang, Xiao; Guo, Yike; Lim, Guan; Rashid, Md Mamunur

    2012-04-01

    Magnetic flux and electric charge are linked in a memristor. We reported recently that a memristor has a peculiar effect in which the switching takes place with a time delay because a memristor possesses a certain inertia. This effect was named the "delayed switching effect." In this work, we elaborate on the importance of delayed switching in a brain-like computer using memristor neural networks. The effect is used to control the switching of a memristor synapse between two neurons that fire together (the Hebbian rule). A theoretical formula is found, and the design is verified by a simulation. We have also built an experimental setup consisting of electronic memristive synapses and electronic neurons.
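
The delayed-switching idea, that a memristor integrates charge and only switches state once an accumulated threshold is crossed, can be sketched numerically. This is a minimal illustrative model with an assumed charge threshold, not the theoretical formula derived in the paper.

```python
def switching_delay(current, q_threshold, dt=1e-6, t_max=1.0):
    """Integrate the charge q = ∫ i dt delivered by `current` (a function of
    time, in amperes) and report the time at which |q| first reaches the
    switching threshold.  Returns None if no switch occurs by t_max.
    The nonzero return time is the 'delayed switching' inertia: the device
    does not switch at the instant current is applied."""
    q, t = 0.0, 0.0
    while t < t_max:
        q += current(t) * dt
        t += dt
        if abs(q) >= q_threshold:
            return t
    return None
```

Under a constant 1 mA drive with a 100 µC threshold the switch fires only after about 0.1 s, so a synaptic memristor between two neurons changes state only if their joint activity persists long enough, which is how the effect implements the Hebbian "fire together" condition.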

  18. Associated neural network independent component analysis structure

    NASA Astrophysics Data System (ADS)

    Kim, Keehoon; Kostrzweski, Andrew

    2006-05-01

    Detection, classification, and localization of potential security breaches in extremely high-noise environments are important for perimeter protection and threat detection, both for homeland security and for military force protection. Physical Optics Corporation has developed a threat detection system to separate acoustic signatures from unknown, mixed sources embedded in extremely high-noise environments where signal-to-noise ratios (SNRs) are very low. Associated neural network structures based on independent component analysis are designed to detect/separate new acoustic sources and to provide reliability information. The structures are tested through computer simulations for each critical component, including a spontaneous detection algorithm for potential threat detection without a predefined knowledge base, a fast target separation algorithm, and a nonparametric methodology for a quantified confidence measure. The results show that the method discussed can separate hidden acoustic sources in noisy environments with an SNR of 5 dB with an accuracy of 80%.

  19. Hopf bifurcation stability in Hopfield neural networks.

    PubMed

    Marichal, R L; González, E J; Marichal, G N

    2012-12-01

    In this paper we consider a simple discrete Hopfield neural network model and analyze local stability using the associated characteristic model. In order to study the dynamic behavior of the quasi-periodic orbit, the Hopf bifurcation must be determined. For the case of two neurons, we find one necessary condition that yields the Hopf bifurcation. In addition, we determine the stability and direction of the Hopf bifurcation by applying normal form theory and the center manifold theorem. An example is given and a numerical simulation is performed to illustrate the results. We analyze the influence of bias weights on the stability of the quasi-periodic orbit and study the phase-locking phenomena, observed as Arnold tongues in certain experimental results, for a particular weight configuration. PMID:23037776
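
A minimal sketch of the kind of system analyzed: a two-neuron discrete Hopfield-type map whose rest state loses stability through a Neimark-Sacker (discrete Hopf) bifurcation, after which the orbit settles onto a closed invariant curve (the quasi-periodic orbit). The particular rotational weight parameterization below is an assumption chosen so the bifurcation condition is transparent, not the paper's model.

```python
import math

def final_radius(a, b, steps=2000, x0=(0.01, 0.0)):
    """Iterate the two-neuron discrete map
        x1' = tanh(a*x1 - b*x2),   x2' = tanh(b*x1 + a*x2).
    The Jacobian at the origin is [[a, -b], [b, a]] with eigenvalues a ± ib,
    so the rest state loses stability (Neimark-Sacker bifurcation) when
    a^2 + b^2 crosses 1.  Returns the final distance from the origin."""
    x1, x2 = x0
    for _ in range(steps):
        x1, x2 = math.tanh(a * x1 - b * x2), math.tanh(b * x1 + a * x2)
    return math.hypot(x1, x2)
```

Below the bifurcation (a = b = 0.5, eigenvalue modulus ≈ 0.707) the orbit decays to the rest state; above it (a = 0.5, b = 1.0, modulus ≈ 1.118) the orbit leaves the origin and circulates on an invariant curve at finite amplitude.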

  20. Delayed switching applied to memristor neural networks

    SciTech Connect

    Wang, Frank Z.; Yang Xiao; Lim Guan; Helian Na; Wu Sining; Guo Yike; Rashid, Md Mamunur

    2012-04-01

    Magnetic flux and electric charge are linked in a memristor. We reported recently that a memristor has a peculiar effect in which the switching takes place with a time delay because a memristor possesses a certain inertia. This effect was named the "delayed switching effect." In this work, we elaborate on the importance of delayed switching in a brain-like computer using memristor neural networks. The effect is used to control the switching of a memristor synapse between two neurons that fire together (the Hebbian rule). A theoretical formula is found, and the design is verified by a simulation. We have also built an experimental setup consisting of electronic memristive synapses and electronic neurons.

  1. Optimization of a polymer composite employing molecular mechanic simulations and artificial neural networks for a novel intravaginal bioadhesive drug delivery device.

    PubMed

    Ndesendo, Valence M K; Pillay, Viness; Choonara, Yahya E; du Toit, Lisa C; Kumar, Pradeep; Buchmann, Eckhart; Meyer, Leith C R; Khan, Riaz A

    2012-01-01

    This study aimed at elucidating an optimal synergistic polymer composite for achieving a desirable molecular bioadhesivity and Matrix Erosion of a bioactive-loaded Intravaginal Bioadhesive Polymeric Device (IBPD), employing Molecular Mechanic Simulations and Artificial Neural Networks (ANN). Fifteen lead caplet-shaped devices were formulated by direct compression with the model bioactives zidovudine and polystyrene sulfonate. The Matrix Erosion was analyzed in simulated vaginal fluid to assess the critical integrity. Blueprinting of the molecular mechanics of bioadhesion between vaginal epithelial glycoprotein (EGP), mucin (MUC), and the IBPD was performed on HyperChem 8.0.8 software (MM+ and AMBER force fields) for the quantification and characterization of correlative molecular interactions during molecular bioadhesion. Results proved that the bioadhesivity of the IBPD hinged on the conformation, orientation, and poly(acrylic acid) (PAA) composition that interacted with EGP and MUC present on the vaginal epithelium due to heterogeneous surface residue distributions (free energy = -46.33 kcal mol(-1)). ANN sensitivity testing as a connectionist model enabled strategic polymer selection for developing an IBPD with an optimally prolonged Matrix Erosion and superior molecular bioadhesivity (ME = 1.21-7.68%; BHN = 2.687-4.981 N/mm(2)). Molecular modeling aptly supported the EGP-MUC-PAA molecular interaction at the vaginal epithelium, confirming the role of PAA in the bioadhesion of the IBPD once inserted into the posterior fornix of the vagina. PMID:21231902

  2. Modelling personal exposure to particulate air pollution: an assessment of time-integrated activity modelling, Monte Carlo simulation & artificial neural network approaches.

    PubMed

    McCreddin, A; Alam, M S; McNabola, A

    2015-01-01

    An experimental assessment of personal exposure to PM10 in 59 office workers was carried out in Dublin, Ireland. 255 samples of 24-h personal exposure were collected in real time over a 28 month period. A series of modelling techniques were subsequently assessed for their ability to predict 24-h personal exposure to PM10. Artificial neural network modelling, Monte Carlo simulation and time-activity based models were developed and compared. The results of the investigation showed that using the Monte Carlo technique to randomly select concentrations from statistical distributions of exposure concentrations in typical microenvironments encountered by office workers produced the most accurate results, based on 3 statistical measures of model performance. The Monte Carlo simulation technique was also shown to have the greatest potential utility over the other techniques, in terms of predicting personal exposure without the need for further monitoring data. Over the 28 month period only a very weak correlation was found between background air quality and personal exposure measurements, highlighting the need for accurate models of personal exposure in epidemiological studies. PMID:25260856
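
The Monte Carlo technique described, randomly sampling concentrations from statistical distributions for each microenvironment and time-weighting them into a 24-h average, can be sketched as follows. The lognormal distributions and the time-activity profile are illustrative assumptions, not the study's fitted values.

```python
import random

def simulate_exposure(profile, n_runs=5000, seed=42):
    """Monte Carlo estimate of the 24-h time-weighted average PM10 exposure.
    `profile` maps a microenvironment name to (hours spent there,
    (mu, sigma) of a lognormal concentration distribution in ug/m3).
    Hours are assumed to sum to 24.  Each run draws one concentration per
    microenvironment; the mean over runs estimates expected daily exposure."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        daily = sum(hours * rng.lognormvariate(mu, sigma)
                    for hours, (mu, sigma) in profile.values())
        total += daily / 24.0
    return total / n_runs
```

Because each run resamples every microenvironment independently, the spread of the simulated daily averages also yields an exposure distribution for the population, which is what makes the approach usable without further personal monitoring data.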

  3. A neural network for visual pattern recognition

    SciTech Connect

    Fukushima, K.

    1988-03-01

    A modeling approach, which is a synthetic approach using neural network models, continues to gain importance. In the modeling approach, the authors study how to interconnect neurons to synthesize a brain model, which is a network with the same functions and abilities as the brain. The relationship between modeling neural networks and neurophysiology resembles that between theoretical physics and experimental physics. Modeling takes a synthetic approach, while neurophysiology or psychology takes an analytical approach. Modeling neural networks is useful in explaining the brain and also in engineering applications. It brings the results of neurophysiological and psychological research to engineering applications in the most direct way possible. This article discusses a neural network model thus obtained, a model with selective attention in visual pattern recognition.

  4. Linear circuits for neural networks and affective computing.

    PubMed

    Frenger, P

    1999-01-01

    Biological phenomena are often modeled with software on digital computers, even though the events may be analog in nature. The author describes the use of linear circuitry in two areas of biological simulation: artificial neural networks and affective computing. The operational amplifier, with the assistance of some new analog chips and simple digital microcontrollers, is featured prominently in these linear designs. PMID:11143356
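
The op-amp building block behind such linear designs can be summarized in one line: the inverting summer, whose per-input resistors set the synaptic weights w_i = Rf/R_i of an analog neuron. This is textbook ideal-op-amp behavior, not a circuit from the article.

```python
def opamp_summer(v_in, r_in, r_feedback):
    """Ideal inverting op-amp summer: V_out = -Rf * sum(V_i / R_i).
    Choosing each input resistor R_i sets the weight w_i = Rf / R_i,
    so the circuit computes the (negated) weighted sum at the heart of
    an analog artificial neuron."""
    return -r_feedback * sum(v / r for v, r in zip(v_in, r_in))
```

A nonlinear activation stage (e.g., a diode limiter or comparator) downstream of the summer then completes the neuron.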

  5. Autoshaping and Automaintenance: A Neural-Network Approach

    ERIC Educational Resources Information Center

    Burgos, Jose E.

    2007-01-01

    This article presents an interpretation of autoshaping, and positive and negative automaintenance, based on a neural-network model. The model makes no distinction between operant and respondent learning mechanisms, and takes into account knowledge of hippocampal and dopaminergic systems. Four simulations were run, each one using an "A-B-A" design…

  6. Precision of a radial basis function neural network tracking method

    NASA Technical Reports Server (NTRS)

    Hanan, J.; Zhou, H.; Chao, T. H.

    2003-01-01

    The precision of a radial basis function (RBF) neural network based tracking method has been assessed against real targets. Precision was assessed against traditionally measured frame-by-frame measurements from the recorded data set. The results show the potential limit for the technique and reveal intricacies associated with empirical data not necessarily observed in simulations.

  7. Adaptive control of mobile robots using a neural network.

    PubMed

    de Sousa Júnior, C; Hermerly, E M

    2001-06-01

    A neural-network-based control approach for mobile robots is proposed. The weight adaptation is made on-line, without previous learning. Several possible situations in robot navigation are considered, including uncertainties in the model and the presence of disturbances. Weight adaptation laws are presented, as well as simulation results. PMID:11574958

  8. Nonlinear signal processing using neural networks: Prediction and system modelling

    SciTech Connect

    Lapedes, A.; Farber, R.

    1987-06-01

    The backpropagation learning algorithm for neural networks is developed into a formalism for nonlinear signal processing. We illustrate the method by selecting two common topics in signal processing, prediction and system modelling, and show that nonlinear applications can be handled extremely well by using neural networks. The formalism is a natural, nonlinear extension of the linear Least Mean Squares algorithm commonly used in adaptive signal processing. Simulations are presented that document the additional performance achieved by using nonlinear neural networks. First, we demonstrate that the formalism may be used to predict points in a highly chaotic time series with orders of magnitude increase in accuracy over conventional methods including the Linear Predictive Method and the Gabor-Volterra-Wiener Polynomial Method. Deterministic chaos is thought to be involved in many physical situations including the onset of turbulence in fluids, chemical reactions and plasma physics. Secondly, we demonstrate the use of the formalism in nonlinear system modelling by providing a graphic example in which it is clear that the neural network has accurately modelled the nonlinear transfer function. It is interesting to note that the formalism provides explicit, analytic, global, approximations to the nonlinear maps underlying the various time series. Furthermore, the neural net seems to be extremely parsimonious in its requirements for data points from the time series. We show that the neural net is able to perform well because it globally approximates the relevant maps by performing a kind of generalized mode decomposition of the maps. 24 refs., 13 figs.
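    As a rough illustration of the prediction formalism described above (not the authors' code), a small feed-forward network can be trained by plain backpropagation to predict the next point of a chaotic series. The logistic map, network size, and learning rate below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
# Chaotic logistic-map series: x_{t+1} = 4 x_t (1 - x_t)
x = np.empty(400)
x[0] = 0.3
for t in range(399):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
X, y = x[:-1, None], x[1:, None]        # predict next value from current one

H = 16                                   # hidden units
W1 = rng.normal(0.0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(4000):
    h = np.tanh(X @ W1 + b1)             # forward pass
    out = h @ W2 + b2
    err = out - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)    # backpropagate MSE gradient
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(mse)  # should be far below the ~0.125 variance of the series
```

    One-step-ahead prediction of the logistic map amounts to fitting a smooth parabola, which is exactly the kind of global nonlinear map the abstract says the network approximates.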

  9. Ca^2+ Dynamics and Propagating Waves in Neural Networks with Excitatory and Inhibitory Neurons.

    NASA Astrophysics Data System (ADS)

    Bondarenko, Vladimir E.

    2008-03-01

    The dynamics of neural spikes, intracellular Ca^2+, and Ca^2+ in intracellular stores were investigated both in isolated Chay-model neurons and in neurons coupled in networks. Three types of neural networks were studied: a purely excitatory neural network, with only excitatory (AMPA) synapses; a purely inhibitory neural network, with only inhibitory (GABA) synapses; and a hybrid neural network, with both AMPA and GABA synapses. In the hybrid neural network, the ratio of excitatory to inhibitory neurons was 4:1. For each case, we considered two types of connections: "all-with-all" and 20 connections per neuron. Each neural network contained 100 neurons with randomly distributed connection strengths. In the neural networks with "all-with-all" connections and AMPA/GABA synapses, an increase in average synaptic strength yielded bursting activity with an increased/decreased number of spikes per burst. The neural bursts and Ca^2+ transients were synchronous at relatively large connection strengths despite random connection strengths. Simulations of the neural networks with 20 connections per neuron and with only AMPA synapses showed synchronous oscillations, while the neural networks with GABA or hybrid synapses generated propagating waves of membrane potential and Ca^2+ transients.

  10. The H1 neural network trigger project

    NASA Astrophysics Data System (ADS)

    Kiesling, C.; Denby, B.; Fent, J.; Fröchtenicht, W.; Garda, P.; Granado, B.; Grindhammer, G.; Haberer, W.; Janauschek, L.; Kobler, T.; Koblitz, B.; Nellen, G.; Prevotet, J.-C.; Schmidt, S.; Tzamariudaki, E.; Udluft, S.

    2001-08-01

    We present a short overview of neuromorphic hardware and some of the physics projects making use of such devices. As a concrete example we describe an innovative project within the H1-Experiment at the electron-proton collider HERA, instrumenting hardwired neural networks as pattern recognition machines to discriminate between wanted physics and uninteresting background at the trigger level. The decision time of the system is less than 20 microseconds, typical for a modern second level trigger. The neural trigger has been successfully running for the past four years and has turned out new physics results from H1 unobtainable so far with other triggering schemes. We describe the concepts and the technical realization of the neural network trigger system, present the most important physics results, and motivate an upgrade of the system for the future high luminosity running at HERA. The upgrade concentrates on "intelligent preprocessing" of the neural inputs which help to strongly improve the networks' discrimination power.

  11. Optical neural stimulation modeling on degenerative neocortical neural networks

    NASA Astrophysics Data System (ADS)

    Zverev, M.; Fanjul-Vélez, F.; Salas-García, I.; Arce-Diego, J. L.

    2015-07-01

    Neurodegenerative diseases usually appear at an advanced age. Medical advances make people live longer, and as a consequence the number of neurodegenerative diseases continuously grows. There is still no cure for these diseases, but several brain stimulation techniques have been proposed to improve patients' condition. One of them is Optical Neural Stimulation (ONS), which is based on the application of optical radiation over specific brain regions. The outer cerebral zones can be noninvasively stimulated, without the common drawbacks associated with surgical procedures. This work focuses on the analysis of ONS effects in stimulated neurons to determine their influence on neuronal activity. For this purpose a neural network model has been employed. The results show the neural network behavior when the stimulation is provided by means of different optical radiation sources, and constitute a first approach to adjusting the optical light source parameters to stimulate specific neocortical areas.

  12. Artificial Astrocytes Improve Neural Network Performance

    PubMed Central

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analyses of the performance of NN with different numbers of neurons or different architectures indicate that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  13. Artificial astrocytes improve neural network performance.

    PubMed

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analyses of the performance of NN with different numbers of neurons or different architectures indicate that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  14. CAISSON: Interconnect Network Simulator

    NASA Technical Reports Server (NTRS)

    Springer, Paul L.

    2006-01-01

    Cray response to HPCS initiative. Model future petaflop computer interconnect. Parallel discrete event simulation techniques for large scale network simulation. Built on WarpIV engine. Run on laptop and Altix 3000. Can be sized up to 1000 simulated nodes per host node. Good parallel scaling characteristics. Flexible: multiple injectors, arbitration strategies, queue iterators, network topologies.
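    The discrete event simulation machinery a tool like CAISSON builds on can be illustrated, in a much-simplified serial form, with a priority queue of timestamped events. The two event kinds and the fixed link delay below are invented for the example:

```python
import heapq

def simulate(events, link_delay=2.0):
    """Tiny discrete-event loop: each ('send', t, src) event schedules
    a 'recv' at t + link_delay; returns (time, node) deliveries in order."""
    pq = [(t, 'send', src) for (t, src) in events]
    heapq.heapify(pq)
    received = []
    while pq:
        t, kind, node = heapq.heappop(pq)   # always advance to earliest event
        if kind == 'send':
            heapq.heappush(pq, (t + link_delay, 'recv', node))
        else:
            received.append((t, node))
    return received

print(simulate([(0.0, 'a'), (1.0, 'b'), (0.5, 'c')]))
# → [(2.0, 'a'), (2.5, 'c'), (3.0, 'b')]
```

    A parallel discrete event engine such as WarpIV distributes this queue across processors and adds synchronization; the event-driven core is the same.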

  15. 3-D flame temperature field reconstruction with multiobjective neural network

    NASA Astrophysics Data System (ADS)

    Wan, Xiong; Gao, Yiqing; Wang, Yuanmei

    2003-02-01

    A novel 3-D temperature field reconstruction method is proposed in this paper, which is based on multiwavelength thermometry and Hopfield neural network computed tomography. A mathematical model of multiwavelength thermometry is formulated, and a neural network algorithm based on multiobjective optimization is developed. Through computer simulation and comparison with the algebraic reconstruction technique (ART) and the filtered back-projection algorithm (FBP), the reconstruction result of the new method is discussed in detail. The study shows that the new method always gives the best reconstruction results. Finally, the temperature distribution of a cross-section of a four-peak candle flame is reconstructed with this novel method.
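    For context, the ART baseline mentioned above is the cyclic Kaczmarz projection scheme: the image estimate is repeatedly projected onto the hyperplane of each ray-sum equation. A toy sketch on a 2x2 "image" with row and column ray sums (values and iteration count chosen for illustration):

```python
import numpy as np

def art(A, b, iters=200, relax=1.0):
    """Algebraic reconstruction (Kaczmarz): cycle through the rays,
    projecting the image estimate onto each measurement equation."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for ai, bi in zip(A, b):
            x += relax * (bi - ai @ x) / (ai @ ai) * ai
    return x

# toy 2x2 "image" flattened to 4 pixels, measured by 4 ray sums
truth = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([[1.0, 1.0, 0.0, 0.0],    # row 1 sum
              [0.0, 0.0, 1.0, 1.0],    # row 2 sum
              [1.0, 0.0, 1.0, 0.0],    # col 1 sum
              [0.0, 1.0, 0.0, 1.0]])   # col 2 sum
b = A @ truth
rec = art(A, b)
print(np.round(rec, 3))
```

    Starting from zero, Kaczmarz converges to the minimum-norm solution consistent with the measurements, which for this toy system coincides with the original pixel values.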

  16. Fuzzy logic and neural networks

    SciTech Connect

    Loos, J.R.

    1994-11-01

    Combine fuzzy logic's fuzzy sets, fuzzy operators, fuzzy inference, and fuzzy rules - like defuzzification - with neural networks and you can arrive at very unfuzzy real-time control. Fuzzy logic, cursed with a very whimsical title, simply means multivalued logic, which includes not only the conventional two-valued (true/false) crisp logic, but also the logic of three or more values. This means one can assign logic values of true, false, and somewhere in between. This is where fuzziness comes in. Multi-valued logic avoids the black-and-white, all-or-nothing assignment of true or false to an assertion. Instead, it permits the assignment of shades of gray. When assigning a value of true or false to an assertion, the numbers typically used are "1" or "0". This is the case for programmed systems. If "0" means "false" and "1" means "true," then "shades of gray" are any numbers between 0 and 1. Therefore, "nearly true" may be represented by 0.8 or 0.9, "nearly false" may be represented by 0.1 or 0.2, and "your guess is as good as mine" may be represented by 0.5. The flexibility available to one is limitless. One can associate any meaning, such as "nearly true", to any value of any granularity, such as 0.9999. 2 figs.
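    The "shades of gray" idea translates directly into code. The triangular membership functions and the fan-speed rule outputs below are invented for illustration:

```python
def triangular(x, a, b, c):
    """Triangular membership: 0 at a, rising to 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# "cold", "warm", "hot" degrees of truth for one temperature reading
temp = 22.0
cold = triangular(temp, -10, 0, 20)
warm = triangular(temp, 10, 20, 30)
hot = triangular(temp, 20, 30, 45)
print(cold, warm, hot)  # values between 0 and 1, not just true/false

# centroid defuzzification: membership-weighted average of rule outputs
settings = {0.0: cold, 0.5: warm, 1.0: hot}   # fan speed proposed by each rule
crisp = sum(v * m for v, m in settings.items()) / sum(settings.values())
print(crisp)
```

    The final weighted average is the defuzzification step the abstract mentions: many partially true rules collapse into one crisp control output.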

  17. Massively parallel neural network intelligent browse

    NASA Astrophysics Data System (ADS)

    Maxwell, Thomas P.; Zion, Philip M.

    1992-04-01

    A massively parallel neural network architecture is currently being developed as a potential component of a distributed information system in support of NASA's Earth Observing System. This architecture can be trained, via an iterative learning process, to recognize objects in images based on texture features, allowing scientists to search for all patterns which are similar to a target pattern in a database of images. It may facilitate scientific inquiry by allowing scientists to automatically search for physical features of interest in a database through computer pattern recognition, alleviating the need for exhaustive visual searches through possibly thousands of images. The architecture is implemented on a Connection Machine such that each physical processor contains a simulated 'neuron' which views a feature vector derived from a subregion of the input image. Each of these neurons is trained, via the perceptron rule, to identify the same pattern. The network output gives a probability distribution over the input image of finding the target pattern in a given region. In initial tests the architecture was trained to separate regions containing clouds from clear regions in 512 by 512 pixel AVHRR images. We found that in about 10 minutes we can train a network to perform with high accuracy in recognizing clouds which were texturally similar to a target cloud group. These promising results suggest that this type of architecture may play a significant role in coping with the forthcoming flood of data from the Earth-monitoring missions of the major space-faring nations.
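    The perceptron rule used to train each simulated neuron can be sketched in a few lines. The toy "texture" feature vectors and labels below are invented, not AVHRR data:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Perceptron rule: nudge the weights by the error on each example."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (yi - pred) * xi      # no change when pred == yi
            b += lr * (yi - pred)
    return w, b

# toy feature vectors: class 1 (e.g. "cloud") has high mean and contrast
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
y = np.array([1, 1, 0, 0])
w, b = train_perceptron(X, y)
preds = [(1 if xi @ w + b > 0 else 0) for xi in X]
print(preds)  # → [1, 1, 0, 0]
```

    On a Connection Machine the same rule runs in every processor at once, each neuron seeing the feature vector of its own image subregion.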

  18. Modeling fluctuations in default-mode brain network using a spiking neural network.

    PubMed

    Yamanishi, Teruya; Liu, Jian-Qin; Nishimura, Haruhiko

    2012-08-01

    Recently, numerous attempts have been made to understand the dynamic behavior of complex brain systems using neural network models. The fluctuations in blood-oxygen-level-dependent (BOLD) brain signals at less than 0.1 Hz have been observed by functional magnetic resonance imaging (fMRI) for subjects in a resting state. This phenomenon is referred to as a "default-mode brain network." In this study, we model the default-mode brain network by functionally connecting neural communities composed of spiking neurons in a complex network. Through computational simulations of the model, including transmission delays and complex connectivity, the network dynamics of the neural system and its behavior are discussed. The results show that the power spectrum of the modeled fluctuations in the neuron firing patterns is consistent with the default-mode brain network's BOLD signals when transmission delays, a characteristic property of the brain, have finite values in a given range. PMID:22830966
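    The spectral comparison described, checking modeled fluctuations against sub-0.1 Hz BOLD signals, reduces to an FFT power spectrum. The synthetic firing-rate signal below is an assumption made for illustration:

```python
import numpy as np

# Synthetic slow-fluctuation signal sampled at 1 Hz: a 0.05 Hz component
# plus noise, standing in for a modeled firing-rate time series.
fs = 1.0
t = np.arange(0, 1024) / fs
rng = np.random.default_rng(3)
signal = np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.normal(size=t.size)

# FFT-based power spectrum of the mean-removed signal
spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)  # dominant frequency, close to 0.05 Hz (below 0.1 Hz)
```

    Locating the dominant power below 0.1 Hz is the same kind of check the study uses to match the model's fluctuations to resting-state BOLD data.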

  19. On sparsely connected optimal neural networks

    SciTech Connect

    Beiu, V.; Draghici, S.

    1997-10-01

    This paper uses two different approaches to show that VLSI- and size-optimal discrete neural networks are obtained for small fan-in values. These have applications to hardware implementations of neural networks, but also reveal an intrinsic limitation of digital VLSI technology: its inability to cope with highly connected structures. The first approach is based on implementing F_{n,m} functions. The authors show that this class of functions can be implemented in VLSI-optimal (i.e., minimizing AT^2) neural networks of small constant fan-ins. In order to estimate the area (A) and the delay (T) of such networks, the following cost functions will be used: (i) the connectivity and the number of bits for representing the weights and thresholds, for good estimates of the area; and (ii) the fan-ins and the length of the wires, for good approximations of the delay. The second approach is based on implementing Boolean functions for which the classical Shannon's decomposition can be used. Such a solution has already been used to prove bounds on the size of fan-in-2 neural networks. They will generalize the result presented there to arbitrary fan-in, and prove that the size is minimized by small fan-in values. Finally, a size-optimal neural network of small constant fan-ins will be suggested for F_{n,m} functions.

  20. Numerical analysis of modeling based on improved Elman neural network.

    PubMed

    Jie, Shao; Li, Wang; WeiSong, Zhao; YaQin, Zhong; Malekian, Reza

    2014-01-01

    A model based on the improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with the memory effect. The hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared errors (SSE) varying with the number of hidden neurons and the iteration step are studied to determine the number of hidden layer neurons. Simulation results of the half-bridge class-D power amplifier (CDPA) with a two-tone signal and broadband signals as input have shown that the proposed behavioral model can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance. PMID:25054172
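    The Chebyshev orthogonal basis that replaces the sigmoid activations can be generated with the standard three-term recurrence. A minimal sketch of just the basis (the Elman network itself is omitted):

```python
import numpy as np

def chebyshev_basis(x, n):
    """First n Chebyshev polynomials T_0..T_{n-1} on [-1, 1], via the
    recurrence T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x)."""
    T = [np.ones_like(x), x]
    for _ in range(2, n):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[:n], axis=-1)

x = np.linspace(-1, 1, 5)
B = chebyshev_basis(x, 4)      # shape (5, 4): T_0..T_3 at each sample
print(B[4])                    # at x = 1, every T_k equals 1
```

    In a model of this kind, each hidden neuron would output one such polynomial of its net input, so the hidden layer spans an orthogonal function basis rather than a family of shifted sigmoids.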

  1. Neural network tracking and extension of positive tracking periods

    NASA Technical Reports Server (NTRS)

    Hanan, Jay C.; Chao, Tien-Hsin; Moreels, Pierre

    2004-01-01

    Feature detectors have been considered for the role of supplying additional information to a neural network tracker. The feature detector focuses on areas of the image with significant information. Basically, if a picture is worth a thousand words, the feature detectors are looking for the key phrases (keypoints). These keypoints are rotationally invariant and may be matched across frames. Application of these advanced feature detectors to the neural network tracking system at JPL has promising potential. As part of an ongoing program, an advanced feature detector was tested for augmentation of a neural-network-based tracker. The advanced feature detector extended tracking periods in test sequences including aircraft tracking, rover tracking, and simulated Martian landing. Future directions of research are also discussed.

  2. Applications of neural networks in chemical engineering: Hybrid systems

    SciTech Connect

    Ferrada, J.J.; Osborne-Lee, I.W.; Grizzaffi, P.A.

    1990-01-01

    Expert systems are known to be useful in capturing expertise and applying knowledge to chemical engineering problems such as diagnosis, process control, process simulation, and process advisory. However, expert system applications are traditionally limited to knowledge domains that are heuristic and involve only simple mathematics. Neural networks, on the other hand, represent an emerging technology capable of rapid recognition of patterned behavior without regard to mathematical complexity. Although useful in problem identification, neural networks are not very efficient in providing in-depth solutions and typically do not promote full understanding of the problem or the reasoning behind its solutions. Hence, applications of neural networks have certain limitations. This paper explores the potential for expanding the scope of chemical engineering areas where neural networks might be utilized by incorporating expert systems and neural networks into the same application, a process called hybridization. In addition, hybrid applications are compared with those using more traditional approaches, the results of the different applications are analyzed, and the feasibility of converting the preliminary prototypes described herein into useful final products is evaluated. 12 refs., 8 figs.

  3. Artificial Neural Networks and Instructional Technology.

    ERIC Educational Resources Information Center

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  4. Neural-Network Modeling Of Arc Welding

    NASA Technical Reports Server (NTRS)

    Anderson, Kristinn; Barnett, Robert J.; Springfield, James F.; Cook, George E.; Strauss, Alvin M.; Bjorgvinsson, Jon B.

    1994-01-01

    Artificial neural networks considered for use in monitoring and controlling gas/tungsten arc-welding processes. Relatively simple network, using 4 welding equipment parameters as inputs, estimates 2 critical weld-bead parameters within 5 percent. Advantage is computational efficiency.

  5. Higher-Order Neural Networks Recognize Patterns

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Spirkovska, Lilly; Ochoa, Ellen

    1996-01-01

    Networks of higher order have enhanced capabilities to distinguish between different two-dimensional patterns and to recognize those patterns. Also enhanced capabilities to "learn" patterns to be recognized: "trained" with far fewer examples and, therefore, in less time than necessary to train comparable first-order neural networks.
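    Higher-order units gain their power from product terms over the inputs. A classic illustration (not from the record) is that a single second-order unit separates XOR, which no first-order unit can:

```python
import numpy as np

def second_order_features(x):
    """Augment the inputs with all pairwise products x_i * x_j (i <= j),
    the extra terms a second-order (higher-order) unit sees."""
    n = len(x)
    pairs = [x[i] * x[j] for i in range(n) for j in range(i, n)]
    return np.concatenate([x, pairs])

# XOR is not linearly separable in x1, x2 alone, but the product term
# x1*x2 makes it computable by one higher-order unit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
F = np.array([second_order_features(x) for x in X])   # [x1, x2, x1^2, x1*x2, x2^2]
# weights chosen by hand: x1 + x2 - 2*x1*x2 reproduces XOR exactly
w = np.array([1.0, 1.0, 0.0, -2.0, 0.0])
print(F @ w)  # → [0. 1. 1. 0.]
```

    The same built-in product structure is what lets such networks encode pattern relationships (and hence learn from far fewer examples) than first-order nets.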

  6. Orthogonal Patterns In A Binary Neural Network

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1991-01-01

    Report presents some recent developments in theory of binary neural networks. Subject matter relevant to associate (content-addressable) memories and to recognition of patterns - both of considerable importance in advancement of robotics and artificial intelligence. When probed by any pattern, network converges to one of stored patterns.
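    The convergence behavior described, probing with a pattern and settling onto a stored one, can be shown with a minimal Hebbian Hopfield network. The +/-1 patterns below are invented for the sketch (and chosen orthogonal):

```python
import numpy as np

def hopfield_recall(patterns, probe, steps=5):
    """Hebbian-weight binary Hopfield net: probe with a corrupted
    pattern and iterate synchronous sign updates until it settles."""
    P = np.array(patterns, dtype=float)     # rows are +/-1 patterns
    n = P.shape[1]
    W = (P.T @ P) / n                       # Hebbian outer-product weights
    np.fill_diagonal(W, 0.0)                # no self-connections
    s = np.array(probe, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

stored = [[1, 1, 1, 1, -1, -1, -1, -1],
          [1, -1, 1, -1, 1, -1, 1, -1]]
noisy = [1, 1, 1, -1, -1, -1, -1, -1]       # pattern 0 with one bit flipped
recalled = hopfield_recall(stored, noisy)
print(recalled)
```

    With orthogonal stored patterns the crosstalk terms vanish, which is why orthogonality (the subject of the report) matters for reliable content-addressable recall.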

  7. Target detection using multilayer feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Scherf, Alan V.; Scott, Peter A.

    1991-08-01

    Multilayer feedforward neural networks have been integrated with conventional image processing techniques to form a hybrid target detection algorithm for use in the F/A-18 FLIR pod advanced air-to-air track-while-scan mode. The network has been trained to detect and localize small targets in infrared imagery. The comparative performance of this target detection technique is evaluated.

  8. Improving neural network performance on SIMD architectures

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Ilin, Dmitry; Nikolaev, Dmitry

    2015-12-01

    Neural network calculations for image recognition problems can be very time consuming. In this paper we propose three methods of increasing neural network performance on SIMD architectures. The usage of SIMD extensions is a way to speed up neural network processing available for a number of modern CPUs. In our experiments, we use ARM NEON as the SIMD architecture example. The first method deals with the half-float data type for matrix computations. The second method describes a fixed-point data type for the same purpose. The third method considers a vectorized implementation of activation functions. For each method we set up a series of experiments for convolutional and fully connected networks designed for the image recognition task.
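    The fixed-point idea can be emulated in scalar code to see the accuracy trade-off. This sketch quantizes to 8 fractional bits and accumulates in 32-bit integers, the pattern SIMD integer units favor; the parameters are illustrative, not the paper's:

```python
import numpy as np

def fixed_point_dot(a, b, frac_bits=8):
    """Emulate a fixed-point dot product: quantize to integers with
    `frac_bits` fractional bits, accumulate in int32, rescale once."""
    scale = 1 << frac_bits
    qa = np.round(a * scale).astype(np.int32)
    qb = np.round(b * scale).astype(np.int32)
    acc = int(qa @ qb)                  # pure-integer accumulation
    return acc / (scale * scale)        # single rescale back to real units

rng = np.random.default_rng(2)
a = rng.uniform(-1, 1, 64)
b = rng.uniform(-1, 1, 64)
approx = fixed_point_dot(a, b)
exact = float(a @ b)
print(abs(approx - exact))  # quantization error stays small
```

    Matrix products in a network are built from exactly such dot products, so the quantization error here bounds the per-neuron error the fixed-point method introduces.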

  9. Object Recognition by a Hopfield Neural Network

    NASA Astrophysics Data System (ADS)

    Li, Wei; Nasrabadi, Nasser M.

    1990-03-01

    A model-based object recognition technique is introduced in this paper to identify and locate an object in any position and orientation. The test scenes could consist of an isolated object or several partially overlapping objects. A cooperative feature matching technique is proposed which is implemented by a Hopfield neural network. The proposed matching technique uses the parallelism of the neural network to globally match all the objects (they may be overlapping or touching) in the input scene against all the object models in the model-database at the same time. For each model, distinct features such as curvature points (corners) are extracted and a graph consisting of a number of nodes connected by arcs is constructed. Each node in the graph represents a feature which has a numerical feature value and is connected to other nodes by an arc representing the relationship or compatibility between them. Object recognition is formulated as matching a global model graph, representing all the object models, with an input scene graph representing a single object or several overlapping objects. A 2-dimensional Hopfield binary neural network is implemented to perform a subgraph isomorphism to obtain the optimal compatible matching features between the two graphs. The synaptic interconnection weights between neurons are designed such that matched features belonging to the same model receive excitatory supports, and matched features belonging to different models receive an inhibitory support or a mutual support depending on whether the input scene is an isolated object or several overlapping objects. The coordinate transformation for mapping each pair of matched nodes from the model onto the input scene is calculated, followed by a simple clustering technique to eliminate any false matches. The orientation and the position of objects in the scene are then calculated by averaging the transformation of correct matched nodes. 
Some simulation results are shown to illustrate the effectiveness of the proposed matching technique.

  10. A neural network approach to cloud classification

    NASA Technical Reports Server (NTRS)

    Lee, Jonathan; Weger, Ronald C.; Sengupta, Sailes K.; Welch, Ronald M.

    1990-01-01

    It is shown that, using high-spatial-resolution data, very high cloud classification accuracies can be obtained with a neural network approach. A texture-based neural network classifier using only single-channel visible Landsat MSS imagery achieves an overall cloud identification accuracy of 93 percent. Cirrus can be distinguished from boundary layer cloudiness with an accuracy of 96 percent, without the use of an infrared channel. Stratocumulus is retrieved with an accuracy of 92 percent, cumulus at 90 percent. The use of the neural network does not improve cirrus classification accuracy. Rather, its main effect is in the improved separation between stratocumulus and cumulus cloudiness. While most cloud classification algorithms rely on linear parametric schemes, the present study is based on a nonlinear, nonparametric four-layer neural network approach. A three-layer neural network architecture, the nonparametric K-nearest-neighbor approach, and the linear stepwise discriminant analysis procedure are compared. A significant finding is that markedly higher accuracies are attained with the nonparametric approaches using only 20 percent of the database as training data, compared to 67 percent of the database in the linear approach.
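    The nonparametric K-nearest-neighbor baseline used in the comparison is easy to state exactly. A toy sketch with invented 2-D feature vectors:

```python
import numpy as np

def knn_predict(Xtr, ytr, x, k=3):
    """Nonparametric k-nearest-neighbor vote: label a point by the
    majority class among its k closest training examples."""
    d = np.linalg.norm(Xtr - x, axis=1)
    nearest = ytr[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

Xtr = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
ytr = np.array([0, 0, 1, 1])
print(knn_predict(Xtr, ytr, np.array([0.95, 1.0])))  # → 1
```

    Like the neural network, KNN makes no assumption of linear class boundaries, which is consistent with the finding that the nonparametric methods needed far less training data.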

  11. Neural network technologies for image classification

    NASA Astrophysics Data System (ADS)

    Korikov, A. M.; Tungusova, A. V.

    2015-11-01

    We analyze the classes of problems with an objective necessity to use neural network technologies, i.e. representation and resolution problems in the neural network logical basis. Among these problems, image recognition takes an important place, in particular the classification of multi-dimensional data based on information about textural characteristics. These problems occur in aerospace and seismic monitoring, materials science, medicine, and other fields. We reviewed different approaches to texture description: statistical, structural, and spectral. We developed a neural network technology for resolving a practical problem of cloud image classification for satellite snapshots from the spectroradiometer MODIS. The cloud texture is described by the statistical characteristics of the GLCM (Gray-Level Co-Occurrence Matrix) method. From the range of neural network models that might be applied for image classification, we chose the probabilistic neural network model (PNN) and developed an implementation which performs the classification of the main types and subtypes of clouds. We also experimentally chose the optimal architecture and parameters for the PNN model used for image classification.
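    A GLCM for a single pixel offset, plus the contrast statistic derived from it, can be computed directly. The tiny two-level image and horizontal offset below are illustrative:

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Gray-Level Co-Occurrence Matrix for one pixel offset: counts how
    often gray level i occurs next to gray level j, then normalizes."""
    M = np.zeros((levels, levels))
    h, w = image.shape
    for r in range(h - dy):
        for c in range(w - dx):
            M[image[r, c], image[r + dy, c + dx]] += 1
    M /= M.sum()                        # joint probabilities
    contrast = sum(M[i, j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    return M, contrast

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [1, 1, 0]])
M, contrast = glcm(img, levels=2)
print(M, contrast)
```

    Statistics such as contrast, energy, and homogeneity read off this matrix form the texture feature vector that a classifier like the PNN then consumes.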

  12. Using Neural Networks to Describe Tracer Correlations

    NASA Technical Reports Server (NTRS)

    Lary, D. J.; Mueller, M. D.; Mussa, H. Y.

    2003-01-01

    Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 to the present. The neural network Fortran code used is available for download.

  13. Using neural networks for process planning

    NASA Astrophysics Data System (ADS)

    Huang, Samuel H.; Zhang, HongChao

    1995-08-01

    Process planning has been recognized as an interface between computer-aided design and computer-aided manufacturing. Since the late 1960s, computer techniques have been used to automate process planning activities. AI-based techniques are designed for capturing, representing, organizing, and utilizing knowledge by computers, and are extremely useful for automated process planning. To date, most of the AI-based approaches used in automated process planning are some variation of knowledge-based expert systems. Due to their knowledge acquisition bottleneck, expert systems are not sufficient for solving process planning problems. Fortunately, AI has developed other techniques that are useful for knowledge acquisition, e.g., neural networks. Neural networks have several advantages over expert systems that are desirable in today's manufacturing practice. However, very few neural network applications in process planning have been reported. We present this paper in order to stimulate research on using neural networks for process planning. This paper also identifies the problems with neural networks and suggests some possible solutions, which will provide guidelines for research and implementation.

  14. Neural network-based nonlinear model predictive control vs. linear quadratic Gaussian control

    USGS Publications Warehouse

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic Gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.

  15. In vitro--in silico--in vivo drug absorption model development based on mechanistic gastrointestinal simulation and artificial neural networks: nifedipine osmotic release tablets case study.

    PubMed

    Ilić, Marija; Ðuriš, Jelena; Kovačević, Ivan; Ibrić, Svetlana; Parojčić, Jelena

    2014-10-01

    In vitro--in vivo correlations (IVIVC) are generally accepted as a valuable tool in modified release formulation development aimed at (i) quantifying the in vivo drug delivery profile and formulation-related effects on absorption; (ii) establishing clinically relevant dissolution specifications and (iii) supporting biowaiver claims. The aim of the present study was to develop relevant IVIVC models based on mechanistic gastrointestinal simulation (GIS) and artificial neural network (ANN) analysis and to evaluate their applicability and usefulness in biopharmaceutical drug characterisation. Nifedipine osmotic release tablets were selected as the model drug product on the basis of their robustness, dissolution-limited drug absorption and the availability of relevant literature data. Although osmotic release tablets are designed to be robust against the influence of physiological conditions in the gastrointestinal tract, notable differences in nifedipine dissolution kinetics were observed depending on the in vitro experimental conditions employed. The results obtained indicate that both the GIS and the ANN models developed were sensitive to the input kinetics represented by the in vitro profiles obtained under various experimental conditions. Different in silico approaches may be successfully employed in in vitro--in silico--in vivo model development. However, the results obtained may differ, and the relevant outcomes are sensitive to the methodology employed. PMID:24911992

  16. Fuzzy neural network with fast backpropagation learning

    NASA Astrophysics Data System (ADS)

    Wang, Zhiling; De Sario, Marco; Guerriero, Andrea; Mugnuolo, Raffaele

    1995-03-01

    Neural filters built on multilayer backpropagation networks have been shown to be able to realize most linear and non-linear filters. Because of the slow convergence of these networks, however, their range of application has been limited. In this paper, fuzzy logic is introduced to adjust the learning rate and momentum parameter depending upon the output errors and training times. This greatly improves the convergence of the network. Test curves are shown to demonstrate the fast filter's performance.
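    The error-driven adaptation of learning rate and momentum can be sketched with a crisp stand-in rule. In this toy single-neuron fit, the 1.05/0.7 scale factors, the linear target and the momentum decay are illustrative assumptions, not the paper's fuzzy rule base:

```python
# Sketch: speed up gradient training by adapting the learning rate and
# momentum from the trend of the output error (a crisp stand-in for
# fuzzy adjustment rules; all constants here are assumptions).

def train_adaptive(samples, epochs=200):
    w, b = 0.0, 0.0            # a single linear "neuron"
    lr, mom = 0.05, 0.5        # initial learning rate and momentum
    vw = vb = 0.0              # momentum velocity terms
    prev_err = float("inf")
    for _ in range(epochs):
        err = gw = gb = 0.0
        for x, t in samples:
            y = w * x + b
            err += (y - t) ** 2
            gw += 2 * (y - t) * x
            gb += 2 * (y - t)
        if err < prev_err:                 # error shrinking: speed up
            lr = min(lr * 1.05, 0.5)
        else:                              # error growing: back off
            lr *= 0.7
            mom *= 0.9
        prev_err = err
        vw = mom * vw - lr * gw / len(samples)
        vb = mom * vb - lr * gb / len(samples)
        w += vw
        b += vb
    return w, b, prev_err

samples = [(x / 10, 2.0 * (x / 10) + 1.0) for x in range(11)]  # y = 2x + 1
w, b, err = train_adaptive(samples)
```

The point of the sketch is only the outer if/else: the step size ramps up while training improves and contracts when it overshoots, which is the behaviour the fuzzy controller in the paper automates.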

  17. Stability of Stochastic Neutral Cellular Neural Networks

    NASA Astrophysics Data System (ADS)

    Chen, Ling; Zhao, Hongyong

    In this paper, we study a class of stochastic neutral cellular neural networks. By constructing a suitable Lyapunov functional and employing the nonnegative semi-martingale convergence theorem, we give some sufficient conditions ensuring the almost sure exponential stability of the networks. The results obtained are helpful for designing stable networks when stochastic noise is taken into consideration. Finally, two examples are provided to show the correctness of our analysis.

  18. Flexible body control using neural networks

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1992-01-01

    Progress is reported on the control of the Control Structures Interaction suitcase demonstrator (a flexible structure) using neural networks and fuzzy logic. It is concluded that while control by neural nets alone (i.e., allowing the net to design a controller with no human intervention) has yielded less than optimal results, the neural net trained to emulate the existing fuzzy logic controller does produce acceptable system responses for the initial conditions examined. Also, a neural net was found to be very successful in performing the emulation step necessary for the anticipatory fuzzy controller for the CSI suitcase demonstrator. The fuzzy neural hybrid, which exhibits good robustness and noise rejection properties, shows promise as a controller for practical flexible systems, and should be further evaluated.

  19. Ultrasonographic Diagnosis of Cirrhosis Based on Preprocessing Using Pyramid Recurrent Neural Network

    NASA Astrophysics Data System (ADS)

    Lu, Jianming; Liu, Jiang; Zhao, Xueqin; Yahagi, Takashi

    In this paper, a pyramid recurrent neural network is applied to characterize hepatic parenchymal diseases in ultrasonic B-scan texture. The cirrhotic parenchymal diseases are classified into 4 types according to the size of hypoechoic nodular lesions. The B-mode patterns are wavelet transformed, and the compressed data are then fed into a pyramid neural network to diagnose the type of cirrhotic disease. Compared with 3-layer neural networks, the performance of the proposed pyramid recurrent neural network is improved by utilizing the lower layer effectively. The simulation result shows that the proposed system is suitable for the diagnosis of cirrhotic diseases.

  20. Digit and command interpretation for electronic book using neural network and genetic algorithm.

    PubMed

    Lam, H K; Leung, Frank H F

    2004-12-01

    This paper presents the interpretation of digits and commands using a modified neural network and the genetic algorithm. The modified neural network exhibits a node-to-node relationship which enhances its learning and generalization abilities. A digit-and-command interpreter constructed by the modified neural networks is proposed to recognize handwritten digits and commands. A genetic algorithm is employed to train the parameters of the modified neural networks of the digit-and-command interpreter. The proposed digit-and-command interpreter is successfully realized in an electronic book. Simulation and experimental results will be presented to show the applicability and merits of the proposed approach. PMID:15619928
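    Training network parameters with a genetic algorithm, as described above, can be sketched roughly as follows. The tiny two-parameter linear "network", the truncation selection, and the fitness task are illustrative assumptions, not the authors' modified node-to-node architecture:

```python
# Sketch: a genetic algorithm searching network parameters directly,
# instead of gradient training. The "network" here is a 2-parameter
# linear model fitting y = 3x + 2 (an assumed toy task).
import random

random.seed(0)
DATA = [(x / 10, 3.0 * (x / 10) + 2.0) for x in range(11)]

def fitness(ch):
    w, b = ch
    return -sum((w * x + b - t) ** 2 for x, t in DATA)  # higher is better

def evolve(pop_size=40, gens=60):
    pop = [[random.uniform(-5, 5), random.uniform(-5, 5)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, c = random.sample(elite, 2)
            cut = random.randint(0, 1)            # one-point crossover
            child = a[:cut] + c[cut:]
            if random.random() < 0.3:             # Gaussian mutation
                child = child[:]
                child[random.randrange(2)] += random.gauss(0, 0.3)
            children.append(child)
        pop = elite + children                    # elitism preserves best
    return max(pop, key=fitness)

w, b = evolve()
```

Elitism makes the best fitness monotone over generations, which is one reason GA training is attractive when the modified network's error surface is awkward for gradient methods.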

  1. An Improved Transiently Chaotic Neural Network with Application to the Maximum Clique Problems

    NASA Astrophysics Data System (ADS)

    Xu, Xinshun; Tang, Zheng; Wang, Jiahai

    By analyzing the dynamic behaviors of the transiently chaotic neural network, we present an improved transiently chaotic neural network (TCNN) model for combinatorial optimization problems and test it on the maximum clique problem. Extensive simulations are performed, and the results show that the improved transiently chaotic neural network model can yield satisfactory results both on graphs from the DIMACS clique instances of the second DIMACS challenge and on p-random graphs. It is superior to other algorithms in terms of solution quality and CPU time. Moreover, the improved model uses fewer steps to converge to saturated states than the original transiently chaotic neural network.
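    A transiently chaotic neuron can be sketched from the Chen-Aihara-style dynamics that TCNN models build on: a self-feedback term z(t) drives a chaotic search phase and is annealed away over time. The specific constants and the input term `alpha * (1 - x)` below are illustrative assumptions, not the paper's improved model:

```python
# Sketch of a single transiently chaotic neuron: internal state y,
# output x through a steep sigmoid, and a decaying self-feedback z
# (all constants are assumed, illustrative values).
import math

def simulate(steps=300, k=0.9, alpha=0.015, beta=0.02, z0=0.08, i0=0.65):
    x, y, z = 0.5, 0.0, z0
    history = []
    for _ in range(steps):
        # internal state: damping + assumed input term - chaotic self-feedback
        y = k * y + alpha * (1.0 - x) - z * (x - i0)
        x = 1.0 / (1.0 + math.exp(-y / 0.004))    # steep sigmoid output
        z *= (1.0 - beta)                         # anneal the self-feedback
        history.append(x)
    return history, z

history, z_final = simulate()
```

The annealing of z is the "transient" part: early iterations can wander, and as z decays the dynamics approach an ordinary (convergent) Hopfield-type neuron.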

  2. Stability analysis of switched stochastic neural networks with time-varying delays.

    PubMed

    Wu, Xiaotai; Tang, Yang; Zhang, Wenbing

    2014-03-01

    This paper is concerned with the global exponential stability of switched stochastic neural networks with time-varying delays. Firstly, the stability of switched stochastic delayed neural networks with stable subsystems is investigated by utilizing the mathematical induction method, the piecewise Lyapunov function and the average dwell time approach. Secondly, by utilizing the extended comparison principle from impulsive systems, the stability of stochastic switched delayed neural networks with both stable and unstable subsystems is analyzed, and several easy-to-verify conditions are derived to ensure the exponential mean square stability of switched delayed neural networks with stochastic disturbances. The effectiveness of the proposed results is illustrated by two simulation examples. PMID:24365535

  3. Application of artificial neural network coupled with genetic algorithm and simulated annealing to solve groundwater inflow problem to an advancing open pit mine

    NASA Astrophysics Data System (ADS)

    Bahrami, Saeed; Doulati Ardejani, Faramarz; Baafi, Ernest

    2016-05-01

    In this study, hybrid models are designed to predict groundwater inflow to an advancing open pit mine and the hydraulic head (HH) in observation wells at different distances from the centre of the pit during its advance. Hybrid methods coupling an artificial neural network (ANN) with genetic algorithm (GA) methods (ANN-GA) and simulated annealing (SA) methods (ANN-SA) were utilised. Ratios of the depth of pit penetration in the aquifer to aquifer thickness, pit bottom radius to its top radius, the inverse of pit advance time, and the HH in the observation wells to the distance of the observation wells from the centre of the pit were used as inputs to the networks. To achieve this objective, two hybrid models, ANN-GA and ANN-SA, with a 4-5-3-1 arrangement were designed. In addition, by switching the last argument of the input layer with the argument of the output layer of the two earlier models, two new models were developed to predict the HH in the observation wells for the period of the mining process. The accuracy and reliability of the models are verified by field data, the results of a numerical finite element model using SEEP/W, the outputs of simple ANNs and some well-known analytical solutions. Predicted results obtained by the hybrid methods are closer to the field data than the outputs of the analytical and simple ANN models. Results show that despite the use of fewer and simpler parameters by the hybrid models, the ANN-GA and, to some extent, the ANN-SA have the ability to compete with the numerical models.

  4. Can neural networks compete with process calculations

    SciTech Connect

    Blaesi, J.; Jensen, B.

    1992-12-01

    Neural networks have been called a real alternative to rigorous theoretical models. A theoretical model for the calculation of refinery coker naphtha end point and coker furnace oil 90% point already was in place on the combination tower of a coking unit. Considerable data had been collected on the theoretical model during the commissioning phase and benefit analysis of the project. A neural net developed for the coker fractionator has equalled the accuracy of the theoretical models and shown the capability to handle normal operating conditions. One disadvantage of a neural network is the amount of data needed to create a good model. Anywhere from 100 to thousands of cases are needed to create a neural network model. Overall, the correlation between the theoretical and neural net models for both the coker naphtha end point and the coker furnace oil 90% point was about 0.80; the average deviation was about 4 degrees. This indicates that the neural net model was at least as capable as the theoretical model in calculating inferred properties. 3 figs.

  5. Artificial neural networks for small dataset analysis.

    PubMed

    Pasini, Antonello

    2015-05-01

    Artificial neural networks (ANNs) are usually considered as tools which can help to analyze cause-effect relationships in complex systems within a big-data framework. On the other hand, health sciences undergo complexity more than any other scientific discipline, and in this field large datasets are seldom available. In this situation, I show how a particular neural network tool, which is able to handle small datasets of experimental or observational data, can help in identifying the main causal factors leading to changes in some variable which summarizes the behaviour of a complex system, for instance the onset of a disease. A detailed description of the neural network tool is given and its application to a specific case study is shown. Recommendations for a correct use of this tool are also supplied. PMID:26101654

  6. Kannada character recognition system using neural network

    NASA Astrophysics Data System (ADS)

    Kumar, Suresh D. S.; Kamalapuram, Srinivasa K.; Kumar, Ajay B. R.

    2013-03-01

    Handwriting recognition has been one of the active and challenging research areas in the field of pattern recognition. It has numerous applications, including reading aids for the blind, bank cheque processing and conversion of any handwritten document into structured text form. There is not yet a sufficient body of work on Indian-language character recognition, especially for the Kannada script, one of the 15 major scripts in India. In this paper an attempt is made to recognize handwritten Kannada characters using feedforward neural networks. A handwritten Kannada character is resized to 20x30 pixels. The resized character is used for training the neural network. Once the training process is completed, the same character is given as input to the neural network with different numbers of neurons in the hidden layer, and the recognition accuracy rates for different Kannada characters have been calculated and compared. The results show that the proposed system yields good recognition accuracy rates, comparable to those of other handwritten character recognition systems.
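    At the shape level, the recognizer described above might look like the following forward pass. The hidden-layer width, the class count and the random weights are placeholder assumptions; trained weights would replace them:

```python
# Shape-level sketch: a 20x30 pixel character flattened into 600 inputs,
# one tanh hidden layer, and a softmax over an assumed set of classes.
# Weights are random placeholders here, standing in for trained ones.
import math
import random

random.seed(1)
N_IN, N_HID, N_CLASSES = 20 * 30, 40, 10   # hidden/class sizes are assumed

W1 = [[random.gauss(0, 0.05) for _ in range(N_IN)] for _ in range(N_HID)]
W2 = [[random.gauss(0, 0.05) for _ in range(N_HID)] for _ in range(N_CLASSES)]

def forward(pixels):
    hidden = [math.tanh(sum(w * p for w, p in zip(row, pixels)))
              for row in W1]
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in W2]
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]       # numerically stable softmax
    s = sum(exps)
    return [e / s for e in exps]

image = [random.choice([0.0, 1.0]) for _ in range(N_IN)]  # fake character
probs = forward(image)
```

Varying `N_HID` and re-measuring accuracy is exactly the hidden-neuron comparison the abstract describes.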

  7. Web traffic prediction with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Gluszek, Adam; Kekez, Michal; Rudzinski, Filip

    2005-02-01

    The main aim of the paper is to present an application of the artificial neural network to web traffic prediction. First, the general problem of time series modelling and forecasting is briefly described. Next, the details of building models of dynamic processes with neural networks are discussed. At this point, determination of the model structure in terms of its inputs and outputs is the most important question, because this structure is a rough approximation of the dynamics of the modelled process. The following section of the paper presents the results obtained by applying an artificial neural network (a classical multilayer perceptron trained with the backpropagation algorithm) to real-world web traffic prediction. Finally, we discuss the results, describe the weak points of the presented method and propose some alternative approaches.
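    The input-structure question discussed above, choosing which past values feed the network, amounts to building lagged training pairs. A minimal sliding-window sketch with synthetic traffic values:

```python
# Sketch: turn a traffic time series into supervised training pairs,
# where each example uses the previous `lags` values to predict the next.
# The traffic numbers are synthetic, purely for illustration.

def make_lagged(series, lags):
    inputs, targets = [], []
    for t in range(lags, len(series)):
        inputs.append(series[t - lags:t])   # window of past observations
        targets.append(series[t])           # next value to predict
    return inputs, targets

traffic = [100, 120, 90, 130, 150, 110, 140]  # e.g. requests per hour
X, y = make_lagged(traffic, lags=3)
```

The choice of `lags` is the "rough approximation of the dynamics" the abstract mentions: too few lags and the model cannot see the process memory, too many and it overfits.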

  8. Artificial neural networks for small dataset analysis

    PubMed Central

    2015-01-01

    Artificial neural networks (ANNs) are usually considered as tools which can help to analyze cause-effect relationships in complex systems within a big-data framework. On the other hand, health sciences undergo complexity more than any other scientific discipline, and in this field large datasets are seldom available. In this situation, I show how a particular neural network tool, which is able to handle small datasets of experimental or observational data, can help in identifying the main causal factors leading to changes in some variable which summarizes the behaviour of a complex system, for instance the onset of a disease. A detailed description of the neural network tool is given and its application to a specific case study is shown. Recommendations for a correct use of this tool are also supplied. PMID:26101654

  9. Neural network based adaptive control of nonlinear plants using random search optimization algorithms

    NASA Technical Reports Server (NTRS)

    Boussalis, Dhemetrios; Wang, Shyh J.

    1992-01-01

    This paper presents a method for utilizing artificial neural networks for direct adaptive control of dynamic systems with poorly known dynamics. The neural network weights (controller gains) are adapted in real time using state measurements and a random search optimization algorithm. The results are demonstrated via simulation using two highly nonlinear systems.
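    The random-search weight adaptation described above can be sketched as follows. The one-gain proportional "network" and first-order plant are toy stand-ins for the paper's neural controllers and nonlinear systems:

```python
# Sketch: random-search adaptation of a controller gain. Perturb the
# gain, simulate the closed loop, and keep the perturbation only if the
# tracking cost improves. Plant and cost are assumed toy choices.
import random

random.seed(2)

def tracking_cost(gain, ref=1.0, steps=50, dt=0.1):
    x, cost = 0.0, 0.0
    for _ in range(steps):
        u = gain * (ref - x)           # proportional "network" output
        x += dt * (-x + u)             # simple first-order plant
        cost += (ref - x) ** 2         # accumulated tracking error
    return cost

gain = 0.0
best = tracking_cost(gain)
for _ in range(200):
    cand = gain + random.gauss(0, 0.5)  # random perturbation
    c = tracking_cost(cand)
    if c < best:                        # keep only improvements
        gain, best = cand, c
```

Because only improving perturbations are accepted, the scheme needs no gradient information from the plant, which is the appeal when the dynamics are poorly known.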

  10. Training a Neural Network Via Large-Eddy Simulation for Autonomous Location and Quantification of CH4 Leaks at Natural Gas Facilities

    NASA Astrophysics Data System (ADS)

    Sauer, J.; Travis, B. J.; Munoz-Esparza, D.; Dubey, M. K.

    2015-12-01

    Fugitive methane (CH4) leaks from oil and gas production fields are a potential significant source of atmospheric methane. US DOE's ARPA-E MONITOR program is supporting research to locate and quantify fugitive methane leaks at natural gas facilities in order to achieve a 90% reduction in CH4 emissions. LANL, Aeris and Rice University are developing an LDS (leak detection system) that employs a compact laser absorption methane sensor and sonic anemometer coupled to an artificial neural network (ANN)-based source attribution algorithm. LANL's large-eddy simulation model, HIGRAD, provides high-fidelity simulated wind fields and turbulent CH4 plume dispersion data for various scenarios used in training the ANN. Numerous inverse solution methodologies have been applied over the last decade to assessment of greenhouse gas emissions. ANN learning is well suited to problems in which the training and observed data are noisy, or correspond to complex sensor data as is typical of meteorological and sensor data over a site. ANNs have been shown to achieve higher accuracy with more efficiency than other inverse modeling approaches in studies at larger scales, in urban environments, over short time scales, and even at small spatial scales for efficient source localization of indoor airborne contaminants. Our ANN is intended to characterize fugitive leaks rapidly, given site-specific, real-time, wind and CH4 concentration time-series data at multiple sensor locations, leading to a minimum time-to-detection and providing a first order improvement with respect to overall minimization of methane loss. Initial studies with the ANN on a variety of source location, sensor location, and meteorological condition scenarios are presented and discussed.

  11. Engine With Regression and Neural Network Approximators Designed

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.

    2001-01-01

    At the NASA Glenn Research Center, the NASA engine performance program (NEPP, ref. 1) and the design optimization testbed COMETBOARDS (ref. 2) with regression and neural network analysis-approximators have been coupled to obtain a preliminary engine design methodology. The solution to a high-bypass-ratio subsonic waverotor-topped turbofan engine, which is shown in the preceding figure, was obtained by the simulation depicted in the following figure. This engine is made of 16 components mounted on two shafts with 21 flow stations. The engine is designed for a flight envelope with 47 operating points. The design optimization utilized both neural network and regression approximations, along with the cascade strategy (ref. 3). The cascade used three algorithms in sequence: the method of feasible directions, the sequence of unconstrained minimizations technique, and sequential quadratic programming. The normalized optimum thrusts obtained by the three methods are shown in the following figure: the cascade algorithm with regression approximation is represented by a triangle, a circle is shown for the neural network solution, and a solid line indicates original NEPP results. The solutions obtained from both approximate methods lie within one standard deviation of the benchmark solution for each operating point. The simulation improved the maximum thrust by 5 percent. The performance of the linear regression and neural network methods as alternate engine analyzers was found to be satisfactory for the analysis and operation optimization of air-breathing propulsion engines (ref. 4).

  12. Application of neural networks to range-Doppler imaging

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoqing; Zhu, Zhaoda

    1991-10-01

    The use of neural networks is investigated for 2-D range-Doppler microwave imaging. The range resolution of the microwave image is obtained by transmitting a wideband signal, and the cross-range resolution is achieved by the Doppler frequency gradient in the same range bin. Hopfield neural networks are used to estimate the Doppler spectrum to enhance the cross-range resolution and reduce the processing time. A large number of neurons is needed for high cross-range resolution. In order to cut down the number of neurons, the reflectivities are replaced with their minimum-norm estimates. The original Hopfield networks often converge to a local minimum instead of the global minimum. Simulated annealing is applied to control the gain of the Hopfield networks to yield better convergence to the global minimum. Results of imaging a model airplane from real microwave data are presented.
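    The annealing idea can be sketched on a toy one-dimensional energy with a local and a global minimum; a high starting "temperature" lets the search leave the local basin, analogous to annealing the network gain. The energy function and cooling schedule are illustrative, not the paper's Hopfield energy:

```python
# Sketch of simulated annealing: accept uphill moves with probability
# exp(-delta/T) so the search can escape local minima while T is high.
# The energy landscape and schedule are assumed toy choices.
import math
import random

random.seed(3)

def energy(x):
    # local minimum near x = 2 (E ~ 1), global minimum near x = -2 (E ~ 0)
    return min((x - 2) ** 2 + 1, (x + 2) ** 2)

x = 2.0                          # start at the bottom of the local basin
temp = 2.0
for _ in range(2000):
    cand = x + random.gauss(0, 0.5)
    delta = energy(cand) - energy(x)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = cand                 # downhill always; uphill with prob exp(-d/T)
    temp = max(temp * 0.995, 1e-3)  # geometric cooling with a floor
```

As temp falls, uphill acceptances vanish and the search freezes near a minimum, which is the behaviour exploited to pull the Hopfield spectrum estimate toward the global minimum.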

  13. Neural Network-Based Sensor Validation for Turboshaft Engines

    NASA Technical Reports Server (NTRS)

    Moller, James C.; Litt, Jonathan S.; Guo, Ten-Huei

    1998-01-01

    Sensor failure detection, isolation, and accommodation using a neural network approach is described. An auto-associative neural network is configured to perform dimensionality reduction on the sensor measurement vector and provide estimated sensor values. The sensor validation scheme is applied in a simulation of the T700 turboshaft engine in closed loop operation. Performance is evaluated based on the ability to detect faults correctly and maintain stable and responsive engine operation. The set of sensor outputs used for engine control forms the network input vector. Analytical redundancy is verified by training networks of successively smaller bottleneck layer sizes. Training data generation and strategy are discussed. The engine maintained stable behavior in the presence of sensor hard failures. With proper selection of fault determination thresholds, stability was maintained in the presence of sensor soft failures.
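    The detection/isolation/accommodation logic described above can be sketched independently of the network itself. Here the auto-associative network's reconstruction is stubbed as a fixed vector, and the threshold and sensor values are illustrative assumptions:

```python
# Sketch: residual-based sensor validation. Each raw reading is compared
# with the (stubbed) auto-associative network reconstruction; a large
# residual isolates the failed sensor, whose estimate is substituted.

def validate(measured, estimated, threshold=0.5):  # threshold is assumed
    validated, faults = [], []
    for i, (m, e) in enumerate(zip(measured, estimated)):
        if abs(m - e) > threshold:
            faults.append(i)            # isolate the failed sensor
            validated.append(e)         # accommodate: use the estimate
        else:
            validated.append(m)
    return validated, faults

measured  = [100.2, 57.0, 13.9, 0.0]    # sensor 3 shows a hard failure
estimated = [100.0, 56.8, 14.0, 8.7]    # stand-in network reconstruction
vals, faults = validate(measured, estimated)
```

Threshold selection is the step the abstract flags as critical: too tight and noise trips false faults, too loose and soft failures pass through.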

  14. Neural network system for traffic flow management

    NASA Astrophysics Data System (ADS)

    Gilmore, John F.; Elibiary, Khalid J.; Petersson, L. E. Rickard

    1992-09-01

    Atlanta will be the home of several special events during the next five years ranging from the 1996 Olympics to the 1994 Super Bowl. When combined with the existing special events (Braves, Falcons, and Hawks games, concerts, festivals, etc.), the need to effectively manage traffic flow from surface streets to interstate highways is apparent. This paper describes a system for traffic event response and management for intelligent navigation utilizing signals (TERMINUS) developed at Georgia Tech for adaptively managing special event traffic flows in the Atlanta, Georgia area. TERMINUS (the original name given Atlanta, Georgia based upon its role as a rail line terminating center) is an intelligent surface street signal control system designed to manage traffic flow in Metro Atlanta. The system consists of three components. The first is a traffic simulation of the downtown Atlanta area around Fulton County Stadium that models the flow of traffic when a stadium event lets out. Parameters for the surrounding area include modeling for events during various times of day (such as rush hour). The second component is a computer graphics interface with the simulation that shows the traffic flows achieved based upon intelligent control system execution. The final component is the intelligent control system that manages surface street light signals based upon feedback from control sensors that dynamically adapt the intelligent controller's decision making process. The intelligent controller is a neural network model that allows TERMINUS to control the configuration of surface street signals to optimize the flow of traffic away from special events.

  15. Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks

    SciTech Connect

    Ziaul Huque

    2007-08-31

    This is the final technical report for the project titled 'Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks'. The aim of the project was to develop an efficient chemistry model for combustion simulations. The reduced chemistry model was developed mathematically without the need of having extensive knowledge of the chemistry involved. To aid in the development of the model, Neural Networks (NN) was used via a new network topology known as Non-linear Principal Components Analysis (NPCA). A commonly used Multilayer Perceptron Neural Network (MLP-NN) was modified to implement NPCA-NN. The training rate of NPCA-NN was improved with the GEneralized Regression Neural Network (GRNN) based on kernel smoothing techniques. Kernel smoothing provides a simple way of finding structure in data set without the imposition of a parametric model. The trajectory data of the reaction mechanism was generated based on the optimization techniques of genetic algorithm (GA). The NPCA-NN algorithm was then used for the reduction of Dimethyl Ether (DME) mechanism. DME is a recently discovered fuel made from natural gas, (and other feedstock such as coal, biomass, and urban wastes) which can be used in compression ignition engines as a substitute for diesel. An in-house two-dimensional Computational Fluid Dynamics (CFD) code was developed based on Meshfree technique and time marching solution algorithm. The project also provided valuable research experience to two graduate students.

  16. Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks

    SciTech Connect

    Nelson Butuk

    2004-12-01

    This is an annual technical report for the work done over the last year (period ending 9/30/2004) on the project titled ''Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks''. The aim of the project is to develop an efficient chemistry model for combustion simulations. The reduced chemistry model will be developed mathematically without the need of having extensive knowledge of the chemistry involved. To aid in the development of the model, Neural Networks (NN) will be used via a new network topology known as Non-linear Principal Components Analysis (NPCA). We report on the development of a procedure to speed up the training of NPCA. The developed procedure is based on the non-parametric statistical technique of kernel smoothing. When this smoothing technique is implemented as a neural network, it is known as a Generalized Regression Neural Network (GRNN). We present results of implementing GRNN on a test problem. In addition, we present results of an in-house-developed 2-D CFD code that will be used throughout the project period.
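    Assuming the standard Gaussian-kernel form of Specht's GRNN, the kernel-smoothing prediction reduces to a weighted average of the stored training targets, with no iterative training at all, which is why it speeds up the NPCA training loop:

```python
# Sketch of a GRNN prediction (Nadaraya-Watson kernel regression):
# the output is the kernel-weighted average of training targets.
# Bandwidth sigma and the sample data are assumed values.
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2))
               for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

train_x = [0.0, 1.0, 2.0, 3.0]
train_y = [0.0, 1.0, 4.0, 9.0]          # samples of y = x**2
pred = grnn_predict(1.0, train_x, train_y)
```

The only free parameter is the bandwidth `sigma`, reflecting the report's point that kernel smoothing finds structure without a parametric model.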

  17. Autonomous robot behavior based on neural networks

    NASA Astrophysics Data System (ADS)

    Grolinger, Katarina; Jerbic, Bojan; Vranjes, Bozo

    1997-04-01

    The purpose of an autonomous robot is to solve various tasks while adapting its behavior to a variable environment; it is expected to navigate much as a human would, including handling uncertain and unexpected obstacles. To achieve this, the robot has to be able to find solutions to unknown situations, to learn from experience, i.e., to acquire action procedures together with corresponding knowledge of the workspace structure, and to recognize its working environment. The planning of intelligent robot behavior presented in this paper implements reinforcement learning based on strategic and random attempts for finding solutions, and a neural network approach for memorizing and recognizing the workspace structure (the structural assignment problem). Some of the well-known neural networks based on unsupervised learning are considered with regard to the structural assignment problem. The adaptive fuzzy shadowed neural network is developed. It has an additional shadowed hidden layer, a specific learning rule and an initialization phase. The developed neural network combines the advantages of networks based on the Adaptive Resonance Theory and, using the shadowed hidden layer, provides the ability to recognize slightly translated or rotated obstacles in any direction.

  18. A neural network with modular hierarchical learning

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F. (Inventor); Toomarian, Nikzad (Inventor)

    1994-01-01

    This invention provides a new hierarchical approach for supervised neural learning of time dependent trajectories. The modular hierarchical methodology leads to architectures which are more structured than fully interconnected networks. The networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamic effects. The advantages include the sparsity of units and connections and the modular organization. A further advantage is that learning is much more circumscribed than in fully interconnected systems. The present invention is embodied by a neural network including a plurality of neural modules, each having a pre-established performance capability, wherein each neural module has an output outputting present results of the performance capability and an input for changing the present results of the performance capability. For pattern recognition applications, the performance capability may be an oscillation capability producing a repeating wave pattern as the present results. In the preferred embodiment, each of the plurality of neural modules includes a pre-established capability portion and a performance adjustment portion connected to control the pre-established capability portion.

  19. Neural network tomography: network replication from output surface geometry.

    PubMed

    Minnett, Rupert C J; Smith, Andrew T; Lennon, William C; Hecht-Nielsen, Robert

    2011-06-01

    Multilayer perceptron networks whose outputs consist of affine combinations of hidden units using the tanh activation function are universal function approximators and are used for regression, typically by reducing the MSE with backpropagation. We present a neural network weight learning algorithm that directly positions the hidden units within input space by numerically analyzing the curvature of the output surface. Our results show that under some sampling requirements, this method can reliably recover the parameters of a neural network used to generate a data set. PMID:21377326

  20. An introduction to neural networks: A tutorial

    SciTech Connect

    Walker, J.L.; Hill, E.V.K.

    1994-12-31

    Neural networks are a powerful set of mathematical techniques used for solving linear and nonlinear classification and prediction (function approximation) problems. Inspired by studies of the brain, these series and parallel combinations of simple functional units called artificial neurons have the ability to learn or be trained to solve very complex problems. Fundamental aspects of artificial neurons are discussed, including their activation functions, their combination into multilayer feedforward networks with hidden layers, and the use of bias neurons to reduce training time. The back propagation (of errors) paradigm for supervised training of feedforward networks is explained. Then, the architecture and mathematics of a Kohonen self organizing map for unsupervised learning are discussed. Two example problems are given. The first is for the application of a back propagation neural network to learn the correct response to an input vector using supervised training. The second is a classification problem using a self organizing map and unsupervised training.
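    The supervised back-propagation example in the tutorial can be boiled down to a single sigmoid neuron trained by the delta rule; this minimal sketch learns logical AND (the learning rate, epoch count and task are arbitrary illustrative choices):

```python
# Sketch of supervised training with the delta rule: one sigmoid neuron
# (plus bias) learns logical AND from labelled examples.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, bias, lr = [0.0, 0.0], 0.0, 0.5

for _ in range(5000):
    for x, t in data:
        y = sigmoid(w[0] * x[0] + w[1] * x[1] + bias)
        delta = (y - t) * y * (1 - y)       # error times sigmoid slope
        w[0] -= lr * delta * x[0]
        w[1] -= lr * delta * x[1]
        bias -= lr * delta                  # bias acts as the bias neuron

preds = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + bias)) for x, _ in data]
```

Back-propagation in the multilayer case is this same delta computation chained backwards through the hidden layers, as the tutorial describes.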

  1. Development of programmable artificial neural networks

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J.

    1993-01-01

    Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time consuming and unpredictable process. A general method is being developed to mate the adaptability of the ANN with the speed and precision of the digital computer. This method was successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.

  2. Simulation and prediction of suprapermafrost groundwater level variation in response to climate change using a neural network model

    NASA Astrophysics Data System (ADS)

    Chang, Juan; Wang, Genxu; Mao, Tianxu

    2015-10-01

    Suprapermafrost groundwater plays an important role in the hydrologic cycle of the permafrost region. However, owing to the notably harsh environmental conditions, there is little field monitoring data on groundwater systems, which has limited our understanding of permafrost groundwater dynamics. There is still no effective mathematical method or theory for modeling and forecasting variation in permafrost groundwater. Two ANN models, one with three input variables (previous groundwater level, temperature and precipitation) and another with two input variables (temperature and precipitation only), were developed to simulate and predict the site-specific suprapermafrost groundwater level on the slope scale. The results indicate that the three-input-variable ANN model has superior real-time site-specific prediction capability and produces excellent accuracy in simulating and forecasting the variation in the suprapermafrost groundwater level. However, if there are no field observations of the suprapermafrost groundwater level, the ANN model developed using only the two input variables of the accessible climate data also has good accuracy and high validity in simulating and forecasting the suprapermafrost groundwater level variation, overcoming the data limitations and parameter uncertainty. Under scenarios of the temperature increasing by 0.5 or 1.0 °C per 10 years, the suprapermafrost groundwater level is predicted to increase by 1.2-1.4% or 2.5-2.6% per year with precipitation increases of 10-20%, respectively. There were spatial variations in the responses of the suprapermafrost groundwater level to climate change on the slope scale. The variation ratio and the amplitude of the suprapermafrost groundwater level downslope are larger than those on the upper slope under climate warming. The obvious vulnerability and spatial variability of the suprapermafrost groundwater to climate change will impose intensive effects on the water
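
    Constructing the supervised training pairs for a three-input model of this kind (previous groundwater level, temperature, precipitation as inputs; the next level as target) might look as follows; the variable names and toy numbers are illustrative, not the paper's data:

```python
# Supervised (input, target) pairs for the three-variable model: inputs are the
# previous groundwater level plus current temperature and precipitation; the
# target is the current level. Names and numbers are illustrative only.
levels = [1.10, 1.12, 1.15, 1.13, 1.18, 1.21]   # groundwater level (m)
temps  = [3.2, 4.1, 5.0, 4.8, 5.5, 6.0]         # air temperature (deg C)
precs  = [0.0, 2.1, 0.0, 1.4, 3.3, 0.0]         # precipitation (mm)

pairs = [([levels[t - 1], temps[t], precs[t]], levels[t])
         for t in range(1, len(levels))]
```

    Dropping the first element of each pair's input list yields the two-input (climate-only) variant used when no level observations exist.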

  3. An optimization methodology for neural network weights and architectures.

    PubMed

    Ludermir, Teresa B; Yamazaki, Akio; Zanchettin, Cleber

    2006-11-01

    This paper introduces a methodology for neural network global optimization. The aim is the simultaneous optimization of multilayer perceptron (MLP) network weights and architectures, in order to generate topologies with few connections and high classification performance for any data set. The approach combines the advantages of simulated annealing, tabu search and the backpropagation training algorithm in order to generate an automatic process for producing networks with high classification performance and low complexity. Experimental results obtained with four classification problems and one prediction problem were better than those obtained by the most commonly used optimization techniques. PMID:17131660
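
    The simulated annealing component of such a search can be sketched as a Metropolis accept/reject loop over a weight vector. Here the "network" is a single logistic neuron on AND data, and the cooling schedule and step size are illustrative assumptions, not the paper's settings:

```python
import math, random

random.seed(2)

# Simulated annealing over the weights of a single logistic neuron
# (a stand-in for an MLP); the cost is squared error on AND data.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

def cost(w):
    err = 0.0
    for x, t in data:
        y = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + w[2])))
        err += (y - t) ** 2
    return err

w = [random.uniform(-1, 1) for _ in range(3)]        # two weights + bias
start = cost(w)
best, temp = start, 1.0
for _ in range(2000):
    cand = [wi + random.gauss(0.0, 0.3) for wi in w]  # random neighbour
    delta = cost(cand) - cost(w)
    if delta < 0 or random.random() < math.exp(-delta / temp):  # Metropolis rule
        w = cand
    best = min(best, cost(w))
    temp *= 0.995                                     # geometric cooling
```

    Early on (high temperature) uphill moves are often accepted, which is what lets the search escape poor weight regions before the schedule turns it into hill climbing.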

  4. Constructive approximate interpolation by neural networks

    NASA Astrophysics Data System (ADS)

    Llanas, B.; Sainz, F. J.

    2006-04-01

    We present a type of single-hidden-layer feedforward neural network with a sigmoidal nondecreasing activation function. We call them ai-nets. They can approximately interpolate, with arbitrary precision, any set of distinct data in one or several dimensions. They can uniformly approximate any continuous function of one variable and can be used for constructing uniform approximants of continuous functions of several variables. All these capabilities are based on a closed expression of the networks.

  5. Digital Neural Networks for New Media

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Lambert; Malki, Suleyman

    Neural Networks perform computationally intensive tasks offering smart solutions for many new media applications. A number of analog and mixed digital/analog implementations have been proposed to smooth the algorithmic gap. But gradually, the digital implementation has become feasible, and the dedicated neural processor is on the horizon. A notable example is the Cellular Neural Network (CNN). The analog direction has matured for low-power, smart vision sensors; the digital direction is gradually being shaped into an IP-core for algorithm acceleration, especially for use in FPGA-based high-performance systems. The chapter discusses the next step towards a flexible and scalable multi-core engine using Application-Specific Integrated Processors (ASIP). This topographic engine can serve many new media tasks, as illustrated by novel applications in Homeland Security. We conclude with a view on the CNN kaleidoscope for the year 2020.

  6. LavaNet—Neural network development environment in a general mine planning package

    NASA Astrophysics Data System (ADS)

    Kapageridis, Ioannis Konstantinou; Triantafyllou, A. G.

    2011-04-01

    LavaNet is a series of scripts written in Perl that gives access to a neural network simulation environment inside a general mine planning package. A well-known and very popular neural network development environment, the Stuttgart Neural Network Simulator, is used as the base for the development of neural networks. LavaNet runs inside VULCAN™—a complete mine planning package with advanced database, modelling and visualisation capabilities. LavaNet takes advantage of VULCAN's Perl-based scripting environment, Lava, to bring all the benefits of neural network development and application to geologists, mining engineers and other users of the specific mine planning package. LavaNet enables easy development of neural network training data sets using information from any of the data and model structures available, such as block models and drillhole databases. Neural networks can be trained inside VULCAN™ and the results used to generate new models that can be visualised in 3D. Direct comparison of developed neural network models with conventional and geostatistical techniques is now possible within the same mine planning software package. LavaNet supports Radial Basis Function networks, Multi-Layer Perceptrons and Self-Organising Maps.

  7. Optoelectronic Integrated Circuits For Neural Networks

    NASA Technical Reports Server (NTRS)

    Psaltis, D.; Katz, J.; Kim, Jae-Hoon; Lin, S. H.; Nouhi, A.

    1990-01-01

    Many threshold devices placed on single substrate. Integrated circuits containing optoelectronic threshold elements developed for use as planar arrays of artificial neurons in research on neural-network computers. Mounted with volume holograms recorded in photorefractive crystals serving as dense arrays of variable interconnections between neurons.

  8. Psychometric Measurement Models and Artificial Neural Networks

    ERIC Educational Resources Information Center

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. Despite this interest, the…

  9. Active Sampling in Evolving Neural Networks.

    ERIC Educational Resources Information Center

    Parisi, Domenico

    1997-01-01

    Comments on Raftopoulos article (PS 528 649) on facilitative effect of cognitive limitation in development and connectionist models. Argues that the use of neural networks within an "Artificial Life" perspective can more effectively contribute to the study of the role of cognitive limitations in development and their genetic basis than can using…

  10. Localizing Tortoise Nests by Neural Networks

    PubMed Central

    2016-01-01

    The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition. PMID:26985660
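
    An input delay neural network (IDNN) feeds a static classifier a sliding window of recent samples rather than a single reading. A minimal sketch of that windowing step, with a made-up window length and accelerometer values:

```python
# Input-delay windowing: each classifier input is the concatenation of the
# last K accelerometer samples, so a static network sees a short history.
# The window length and (x, y, z) values are made up for illustration.
K = 3
accel = [(0.1, 0.0, 9.8), (0.2, 0.1, 9.7), (0.1, 0.2, 9.9),
         (0.3, 0.1, 9.6), (0.2, 0.0, 9.8)]

def windows(samples, k):
    out = []
    for t in range(k - 1, len(samples)):
        flat = [v for s in samples[t - k + 1:t + 1] for v in s]  # flatten k samples
        out.append(flat)
    return out

feats = windows(accel, K)   # one 9-value feature vector per time step
```

    Each window then becomes one input vector for the (small, embeddable) network that labels the segment as digging or non-digging.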

  11. Neural network application to comprehensive engine diagnostics

    NASA Technical Reports Server (NTRS)

    Marko, Kenneth A.

    1994-01-01

    We have previously reported on the use of neural networks for detection and identification of faults in complex microprocessor controlled powertrain systems. The data analyzed in those studies consisted of the full spectrum of signals passing between the engine and the real-time microprocessor controller. The specific task of the classification system was to classify system operation as nominal or abnormal and to identify the fault present. The primary concern in earlier work was the identification of faults in sensors or actuators in the powertrain system as it was exercised over its full operating range. The use of data from a variety of sources, each contributing some potentially useful information to the classification task, is commonly referred to as sensor fusion and typifies the type of problems successfully addressed using neural networks. In this work we explore the application of neural networks to a different diagnostic problem, the diagnosis of faults in newly manufactured engines and the utility of neural networks for process control.

  12. Localizing Tortoise Nests by Neural Networks.

    PubMed

    Barbuti, Roberto; Chessa, Stefano; Micheli, Alessio; Pucci, Rita

    2016-01-01

    The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition. PMID:26985660

  13. Nonlinear Time Series Analysis via Neural Networks

    NASA Astrophysics Data System (ADS)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with time series analysis based on neural networks in order to achieve effective forex-market pattern recognition [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)]. Our goal is to find and recognize important patterns which repeatedly appear in the market history, and to adapt our trading system's behaviour based on them.

  14. Negative transfer problem in neural networks

    NASA Astrophysics Data System (ADS)

    Abunawass, Adel M.

    1992-07-01

    Harlow, 1949, observed that when human subjects were trained to perform simple discrimination tasks over a sequence of successive training sessions (trials), their performance improved as a function of the successive sessions. Harlow called this phenomenon `learning-to-learn.' The subjects acquired knowledge and improved their ability to learn in future training sessions. It seems that previous training sessions contribute positively to the current one. Abunawass & Maki, 1989, observed that when a neural network (using the back-propagation model) is trained over successive sessions, the performance and learning ability of the network degrade as a function of the training sessions. In some cases this leads to a complete paralysis of the network. Abunawass & Maki called this phenomenon the `negative transfer' problem, since previous training sessions contribute negatively to the current one. The effect of the negative transfer problem is in clear contradiction to that reported by Harlow on human subjects. Since the ability to model human cognition and learning is one of the most important goals (and claims) of neural networks, the negative transfer problem represents a clear limitation to this ability. This paper describes a new neural network sequential learning model known as Adaptive Memory Consolidation, in which the network uses its past learning experience to enhance its future learning ability. Adaptive Memory Consolidation has led to the elimination and reversal of the effect of the negative transfer problem, producing a `positive transfer' effect similar to Harlow's learning-to-learn phenomenon.

  15. Optimal input sizes for neural network de-interlacing

    NASA Astrophysics Data System (ADS)

    Choi, Hyunsoo; Seo, Guiwon; Lee, Chulhee

    2009-02-01

    Neural network de-interlacing has shown promising results among various de-interlacing methods. In this paper, we investigate the effects of input size for neural networks for various video formats when the neural networks are used for de-interlacing. In particular, we investigate optimal input sizes for CIF, VGA and HD video formats.

  16. [Application of artificial neural networks in infectious diseases].

    PubMed

    Xu, Jun-fang; Zhou, Xiao-nong

    2011-02-28

    With the development of information technology, artificial neural networks have been applied to many research fields. Due to special features such as nonlinearity, self-adaptation, and parallel processing, artificial neural networks are applied in medicine and biology. This review summarizes the application of artificial neural networks to the related factors, prediction and diagnosis of infectious diseases in recent years. PMID:21823326

  17. Perspective: network-guided pattern formation of neural dynamics.

    PubMed

    Hütt, Marc-Thorsten; Kaiser, Marcus; Hilgetag, Claus C

    2014-10-01

    The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings and lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatio-temporal pattern formation and propose a novel perspective for analysing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics. PMID:25180302

  18. Proceedings of the Neural Network Workshop for the Hanford Community

    SciTech Connect

    Keller, P.E.

    1994-01-01

    These proceedings were generated from a series of presentations made at the Neural Network Workshop for the Hanford Community. The abstracts and viewgraphs of each presentation are reproduced in these proceedings. This workshop was sponsored by the Computing and Information Sciences Department in the Molecular Science Research Center (MSRC) at the Pacific Northwest Laboratory (PNL). Artificial neural networks constitute a new information processing technology that is destined, within the next few years, to provide the world with a vast array of new products. A major reason for this is that artificial neural networks are able to provide solutions to a wide variety of complex problems in a much simpler fashion than is possible using existing techniques. In recognition of these capabilities, many scientists and engineers are exploring the potential application of this new technology to their fields of study. An artificial neural network (ANN) can be a software simulation, an electronic circuit, an optical system, or even an electro-chemical system designed to emulate some of the brain's rudimentary structure as well as some of the learning processes that are believed to take place in the brain. For a very wide range of applications in science, engineering, and information technology, ANNs offer a complementary and potentially superior approach to that provided by conventional computing and conventional artificial intelligence. This is because, unlike conventional computers, which have to be programmed, ANNs essentially learn from experience and can be trained in a straightforward fashion to carry out tasks ranging from the simple to the highly complex.

  19. A neural network approach to job-shop scheduling.

    PubMed

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, namely job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on a quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, namely the traveling-salesman-problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of the quality of the solution and the network complexity. PMID:18276371

  20. Predicting physical time series using dynamic ridge polynomial neural networks.

    PubMed

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science, and it has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, to the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks. PMID:25157950
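
    A ridge polynomial network computes a sum of pi-sigma blocks, where block i is the product of i ridge (affine) functions of the input. A minimal sketch of the forward pass; the dimensions, weight ranges, and output sigmoid are illustrative assumptions, and the dynamic (recurrent) extension is omitted:

```python
import math, random

random.seed(3)

# Forward pass of a ridge polynomial network of order N: the output is a sum
# of pi-sigma blocks, block i being the product of i ridge (affine) functions
# of the input vector x.
N, DIM = 3, 2
W = [[([random.uniform(-0.5, 0.5) for _ in range(DIM)], random.uniform(-0.5, 0.5))
      for _ in range(i + 1)] for i in range(N)]   # W[i] holds i + 1 ridge functions

def rpn(x):
    total = 0.0
    for block in W:
        prod = 1.0
        for w, b in block:
            prod *= sum(wi * xi for wi, xi in zip(w, x)) + b   # one ridge function
        total += prod
    return 1.0 / (1.0 + math.exp(-total))

y = rpn([0.4, -0.2])
```

    Because each block multiplies affine functions, the network realizes polynomial terms of increasing order without explicitly enumerating monomials.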

  1. Predicting Physical Time Series Using Dynamic Ridge Polynomial Neural Networks

    PubMed Central

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science, and it has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, to the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks. PMID:25157950

  2. A stochastic learning algorithm for layered neural networks

    SciTech Connect

    Bartlett, E.B.; Uhrig, R.E.

    1992-12-31

    The random optimization method typically uses a Gaussian probability density function (PDF) to generate a random search vector. In this paper the random search technique is applied to the neural network training problem and is modified to dynamically seek out the optimal probability density function (OPDF) from which to select the search vector. The dynamic OPDF search process, combined with an auto-adaptive stratified sampling technique and a dynamic node architecture (DNA) learning scheme, completes the modifications of the basic method. The DNA technique determines the appropriate number of hidden nodes needed for a given training problem. By using DNA, researchers do not have to set the neural network architectures before training is initiated. The approach is applied to networks of generalized, fully interconnected, continuous perceptrons. Computer simulation results are given.
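
    The basic random-optimization step, drawing a Gaussian search vector and keeping it only if the training error improves, can be sketched as follows; the quadratic toy loss stands in for a network's error surface, and no OPDF adaptation or DNA scheme is attempted:

```python
import random

random.seed(4)

# One random-optimization training loop: draw a Gaussian search vector,
# keep the move only if the error improves. The quadratic toy loss is a
# stand-in for a network's training error.
def loss(w):
    return (w[0] - 2.0) ** 2 + (w[1] + 1.0) ** 2

w, sigma = [0.0, 0.0], 0.5
history = [loss(w)]
for _ in range(500):
    cand = [wi + random.gauss(0.0, sigma) for wi in w]  # Gaussian search vector
    if loss(cand) < loss(w):                            # accept improvements only
        w = cand
    history.append(loss(w))
```

    The paper's modification would, in effect, adapt the sampling distribution (here a fixed sigma) as the search proceeds.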

  3. A stochastic learning algorithm for layered neural networks

    SciTech Connect

    Bartlett, E.B. (Dept. of Mechanical Engineering); Uhrig, R.E. (Dept. of Nuclear Engineering)

    1992-01-01

    The random optimization method typically uses a Gaussian probability density function (PDF) to generate a random search vector. In this paper the random search technique is applied to the neural network training problem and is modified to dynamically seek out the optimal probability density function (OPDF) from which to select the search vector. The dynamic OPDF search process, combined with an auto-adaptive stratified sampling technique and a dynamic node architecture (DNA) learning scheme, completes the modifications of the basic method. The DNA technique determines the appropriate number of hidden nodes needed for a given training problem. By using DNA, researchers do not have to set the neural network architectures before training is initiated. The approach is applied to networks of generalized, fully interconnected, continuous perceptrons. Computer simulation results are given.

  4. Cellular neural networks for welding arc thermograms segmentation

    NASA Astrophysics Data System (ADS)

    Jamrozik, Wojciech

    2014-09-01

    Machine vision systems are used in many areas for the monitoring of technological processes. Among these processes, welding takes an important place, and infrared cameras are often used to monitor it. Besides reliable hardware, successful application of vision systems requires suitable software based on proper algorithms. One of the most important groups of image processing algorithms is connected with image segmentation. Obtaining the exact boundary of an object that changes shape in time, such as the welding arc, as represented on a thermogram is not a trivial task. In the paper a supervised segmentation method based on cellular neural networks is presented. Simulated annealing and a genetic algorithm were used for training of the network (template optimization). A comparison of the proposed method with a well-established segmentation method based on the region-growing approach was made. The obtained results prove that the cellular neural network can be a valuable tool for the segmentation of infrared welding pool images.

  5. Artificial neural network for location estimation in wireless communication systems.

    PubMed

    Chen, Chien-Sheng

    2012-01-01

    In a wireless communication system, wireless location is the technique used to estimate the location of a mobile station (MS). To enhance the accuracy of MS location prediction, we propose a novel algorithm that utilizes time of arrival (TOA) measurements and the angle of arrival (AOA) information to locate the MS when three base stations (BSs) are available. Artificial neural networks (ANNs) are techniques widely used in various areas to overcome the problem of exclusive and nonlinear relationships. When the MS is heard by only three BSs, the proposed algorithm utilizes the intersections of three TOA circles (and the AOA line), based on various neural networks, to estimate the MS location in non-line-of-sight (NLOS) environments. Simulations were conducted to evaluate the performance of the algorithm for different NLOS error distributions. The numerical analysis and simulation results show that the proposed algorithm can obtain more precise location estimates under different NLOS environments. PMID:22736978
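
    With three noise-free TOA ranges, subtracting the first range circle's equation from the other two linearizes the intersection into a small linear system for (x, y); the paper's ANN refines this kind of geometric estimate under NLOS error. The station layout below is made up for illustration:

```python
import math

# Linearized TOA positioning with three base stations: subtracting the first
# range circle's equation from the other two gives two linear equations in (x, y).
# Station coordinates and the true position are made up for illustration.
bs = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]    # base-station coordinates
true_pos = (3.0, 4.0)
r = [math.dist(b, true_pos) for b in bs]       # noise-free TOA ranges

(x1, y1), r1 = bs[0], r[0]
rows, rhs = [], []
for (xi, yi), ri in zip(bs[1:], r[1:]):
    rows.append((2.0 * (xi - x1), 2.0 * (yi - y1)))
    rhs.append(xi ** 2 - x1 ** 2 + yi ** 2 - y1 ** 2 - (ri ** 2 - r1 ** 2))

(a, b), (c, d) = rows
det = a * d - b * c
x = (rhs[0] * d - b * rhs[1]) / det            # Cramer's rule
y = (a * rhs[1] - c * rhs[0]) / det
```

    With NLOS-inflated ranges the circles no longer meet at a point, which is why the paper feeds the intersection geometry to a neural network rather than solving exactly.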

  6. Intrinsic adaptation in autonomous recurrent neural networks.

    PubMed

    Marković, Dimitrije; Gros, Claudius

    2012-02-01

    A massively recurrent neural network responds to input stimuli on the one hand and, on the other, is autonomously active in the absence of sensory inputs. Stimuli and information processing depend crucially on the quality of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flows, which are quite insensitive to external stimuli, interceded by chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics. PMID:22091667

  7. Groundwater remediation optimization using artificial neural networks

    SciTech Connect

    Rogers, L. L., LLNL

    1998-05-01

    One continuing point of research in optimizing groundwater quality management is reduction of the computational burden, which is particularly limiting in field-scale applications. Often evaluation of a single pumping strategy, i.e. one call to the groundwater flow and transport model (GFTM), may take several hours on a reasonably fast workstation. For computational flexibility and efficiency, optimal groundwater remediation design at Lawrence Livermore National Laboratory (LLNL) has relied on artificial neural networks (ANNs) trained to approximate the outcome of 2-D field-scale, finite difference/finite element GFTMs. The search itself has been directed primarily by the genetic algorithm (GA) or the simulated annealing (SA) algorithm. This approach has the advantages of (1) up to a million-fold increase in speed of remediation pattern assessment during the searches and sensitivity analyses for the 2-D LLNL work, (2) freedom from sequential runs of the GFTM (which enables workstation farming), and (3) recycling of the knowledge base (i.e. the runs of the GFTM necessary to train the ANNs). Reviewed here are the background and motivation for such work, recent applications, and continuing issues of research.
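
    The surrogate-driven search pattern can be sketched as a genetic algorithm whose fitness function is a cheap stand-in for the flow and transport model; the toy cost, well count, and GA settings here are assumptions, not the LLNL setup:

```python
import random

random.seed(5)

# A genetic-algorithm search whose fitness calls a cheap surrogate instead of
# the expensive flow/transport model. The "surrogate" here is a toy cost
# (cleanup shortfall plus pumping cost); in the LLNL work it would be a
# trained ANN approximating the GFTM.
def surrogate_cost(rates):                      # pumping rates at 3 wells
    shortfall = max(0.0, 5.0 - sum(rates))      # unmet cleanup target
    return 10.0 * shortfall + sum(r ** 2 for r in rates)

def mutate(ind):
    return [max(0.0, r + random.gauss(0.0, 0.2)) for r in ind]

pop = [[random.uniform(0.0, 3.0) for _ in range(3)] for _ in range(20)]
for _ in range(40):                             # generations
    pop.sort(key=surrogate_cost)
    parents = pop[:10]                          # truncation selection (elitist)
    pop = parents + [mutate(random.choice(parents)) for _ in range(10)]
best = min(pop, key=surrogate_cost)
```

    Because each fitness call is nearly free, the GA can afford thousands of evaluations, which is the source of the speedup the abstract describes.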

  8. HAWC Energy Reconstruction via Neural Network

    NASA Astrophysics Data System (ADS)

    Marinelli, Samuel; HAWC Collaboration

    2016-03-01

    The High-Altitude Water-Cherenkov (HAWC) γ-ray observatory is located at 4100 m above sea level on the Sierra Negra mountain in the state of Puebla, Mexico. Its 300 water-filled tanks are instrumented with PMTs that detect Cherenkov light produced by charged particles in atmospheric air showers induced by TeV γ-rays. The detector became fully operational in March of 2015. With a 2-sr field of view and a duty cycle exceeding 90%, HAWC is a survey instrument sensitive to diverse γ-ray sources, including supernova remnants, pulsar wind nebulae, active galactic nuclei, and others. Particle-acceleration mechanisms at these sources can be inferred by studying their energy spectra, particularly at high energies. We have developed a technique for estimating primary γ-ray energies using an artificial neural network (ANN). Input variables to the ANN are selected to characterize shower multiplicity in the detector, the fraction of the shower contained in the detector, and atmospheric attenuation of the shower. Monte Carlo simulations show that the new estimator has superior performance to the current estimator used in HAWC publications. This work was supported by the National Science Foundation.

  9. Metaheuristic Algorithms for Convolution Neural Network.

    PubMed

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in the research areas of science, engineering, and industry. However, implementation strategies of metaheuristics for accuracy improvement on convolution neural networks (CNN), a famous deep learning method, are still rarely investigated. Deep learning is a type of machine learning technique whose aim is to move closer to the goal of artificial intelligence: creating a machine that could successfully perform any intellectual task that can be carried out by a human. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on classifying the MNIST and CIFAR datasets were evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy has also been improved (by up to 7.14 percent). PMID:27375738

  10. Metaheuristic Algorithms for Convolution Neural Network

    PubMed Central

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have solved a range of optimization problems in science, engineering, and industry. However, implementation strategies of metaheuristics for improving the accuracy of convolutional neural networks (CNNs), a prominent deep learning method, are still rarely investigated. Deep learning is a branch of machine learning that aims to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task a human can. In this paper, we propose implementation strategies for three popular metaheuristics, namely simulated annealing, differential evolution, and harmony search, to optimize a CNN. The performance of these metaheuristics in optimizing the CNN on the MNIST and CIFAR datasets was evaluated and compared, and the proposed methods were also compared with the original CNN. Although the proposed methods increase computation time, they also improve accuracy (by up to 7.14 percent). PMID:27375738

  11. Classifying multispectral data by neural networks

    NASA Technical Reports Server (NTRS)

    Telfer, Brian A.; Szu, Harold H.; Kiang, Richard K.

    1993-01-01

    Several energy functions for synthesizing neural networks are tested on 2-D synthetic data and on Landsat-4 Thematic Mapper data. These new energy functions, designed specifically for minimizing misclassification error, in some cases yield significant improvements in classification accuracy over the standard least mean squares energy function. In addition to operating on networks with one output unit per class, a new energy function is tested for binary encoded outputs, which result in smaller network sizes. The Thematic Mapper data (four bands were used) is classified on a single pixel basis, to provide a starting benchmark against which further improvements will be measured. Improvements are underway to make use of both subpixel and superpixel (i.e. contextual or neighborhood) information in the processing. For single pixel classification, the best neural network result is 78.7 percent, compared with 71.7 percent for a classical nearest neighbor classifier. The 78.7 percent result also improves on several earlier neural network results on this data.

  12. Color control of printers by neural networks

    NASA Astrophysics Data System (ADS)

    Tominaga, Shoji

    1998-07-01

    A method is proposed for solving the mapping problem from the 3D color space to the 4D CMYK space of printer ink signals by means of a neural network. The CIE-L*a*b* color system is used as the device-independent color space. The color reproduction problem is considered as the problem of controlling an unknown static system with four inputs and three outputs. A controller determines the CMYK signals necessary to produce the desired L*a*b* values with a given printer. Our solution method for this control problem is based on a two-phase procedure which eliminates the need for UCR and GCR. The first phase determines a neural network as a model of the given printer, and the second phase determines the combined neural network system by combining the printer model and the controller in such a way that it represents an identity mapping in the L*a*b* color space. Then the network of the controller part realizes the mapping from the L*a*b* space to the CMYK space. Practical algorithms are presented in the form of multilayer feedforward networks. The feasibility of the proposed method is shown in experiments using a dye sublimation printer and an ink jet printer.
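
    The control problem described above (find the ink signals that reproduce a desired colour) can be sketched without any network at all, by inverting a toy forward printer model numerically. The forward model and all parameter values below are hypothetical stand-ins, not the paper's trained printer model.

```python
def invert_printer(forward, target, dims=4, steps=400, lr=0.05, h=1e-4):
    """Find ink signals x in [0,1]^dims whose predicted colour matches
    `target`, by finite-difference gradient descent on the squared error."""
    x = [0.5] * dims

    def err(v):
        return sum((p - t) ** 2 for p, t in zip(forward(v), target))

    for _ in range(steps):
        for i in range(dims):
            xp = list(x)
            xp[i] += h
            g = (err(xp) - err(x)) / h                # finite-difference gradient
            x[i] = min(1.0, max(0.0, x[i] - lr * g))  # stay in gamut [0, 1]
    return x

# Hypothetical linear "printer": each of C, M, Y subtracts from one channel.
toy_forward = lambda v: [1.0 - v[0], 1.0 - v[1], 1.0 - v[2]]
cmyk = invert_printer(toy_forward, [0.3, 0.6, 0.9])
```

    The paper's two-phase approach replaces both pieces with networks: a learned forward model of the printer, and a controller trained so that their composition is the identity map in L*a*b* space.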

  13. A Topological Perspective of Neural Network Structure

    NASA Astrophysics Data System (ADS)

    Sizemore, Ann; Giusti, Chad; Cieslak, Matthew; Grafton, Scott; Bassett, Danielle

    The wiring patterns of white matter tracts between brain regions inform functional capabilities of the neural network. Indeed, densely connected and cyclically arranged cognitive systems may communicate and thus perform distinctly. However, previously employed graph theoretical statistics are local in nature and thus insensitive to such global structure. Here we present an investigation of the structural neural network in eight healthy individuals using persistent homology. An extension of homology to weighted networks, persistent homology records both circuits and cliques (all-to-all connected subgraphs) through a repetitive thresholding process, thus perceiving structural motifs. We report structural features found across patients and discuss brain regions responsible for these patterns, finally considering the implications of such motifs in relation to cognitive function.
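
    The repetitive thresholding at the heart of persistent homology can be sketched in its 0-dimensional form: sweep a threshold over edge weights and count connected components with a union-find. Higher-dimensional features (circuits, cliques) need a full persistence library; this is only the simplest slice of the idea.

```python
def components_at_threshold(weights, n, t):
    """Count connected components of the graph on n nodes whose edges
    have weight >= t (one slice of the weighted-network filtration)."""
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for (i, j), w in weights.items():
        if w >= t:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
    return len({find(i) for i in range(n)})

# Toy weighted triangle: strong, medium, and weak connections.
weights = {(0, 1): 0.9, (1, 2): 0.5, (0, 2): 0.2}
bars = [components_at_threshold(weights, 3, t) for t in (0.1, 0.6, 1.0)]
```

    Tracking how `bars` changes as the threshold sweeps is exactly the 0-dimensional persistence information: components that survive many thresholds correspond to robust structure.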

  14. a Heterosynaptic Learning Rule for Neural Networks

    NASA Astrophysics Data System (ADS)

    Emmert-Streib, Frank

    In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points when a synapse is modified. Moreover, the learning rule does not only affect the synapse between pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also affects more remote synapses of the pre- and postsynaptic neurons. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful in training neural networks by learning parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well even in the presence of noise. Importantly, the mean learning time increases polynomially with the number of patterns to be learned, indicating efficient learning.
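
    The paper's exact rule is not reproduced here, but two of its key ingredients, a Hebbian product gated by a reinforcement signal and a stochastic choice of which time points to update, can be sketched as follows; all parameter values are illustrative.

```python
import random

def stochastic_hebb(w, pre, post, reward, eta=0.1, p_update=0.5, rng=None):
    """One reward-modulated Hebbian step, applied to each synapse only
    with probability p_update (the stochastic-selection idea; the
    paper's heterosynaptic rule additionally reaches more remote synapses)."""
    rng = rng or random.Random(0)
    return [wi + eta * reward * xi * post if rng.random() < p_update else wi
            for wi, xi in zip(w, pre)]

# A positive reward strengthens co-active synapses on the selected subset:
w1 = stochastic_hebb([0.0] * 4, [1.0] * 4, post=1.0, reward=1.0)
```

    With `p_update=1.0` the rule reduces to ordinary reward-modulated Hebbian learning; lowering it spreads updates out over time, as in the stochastic selection described above.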

  15. Neural networks: Application to medical imaging

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.

    1994-01-01

    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  16. Controlling neural network responsiveness: tradeoffs and constraints

    PubMed Central

    Keren, Hanna; Marom, Shimon

    2014-01-01

    In recent years much effort has been invested in means to control neural population responses at the whole-brain level, within the context of developing advanced medical applications. The tradeoffs and constraints involved, however, remain elusive due to the obvious complications entailed by studying whole-brain dynamics. Here, we present effective control of response features (probability and latency) of cortical networks in vitro over many hours, and offer this approach as an experimental toy for studying the controllability of neural networks in the wider context. Exercising this approach, we show that enforcement of stable high activity rates by means of closed-loop control may enhance alteration of underlying global input–output relations and activity-dependent dispersion of neuronal pairwise correlations across the network. PMID:24808860

  17. Computationally Efficient Neural Network Intrusion Security Awareness

    SciTech Connect

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly-based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly-based systems. Several test cases executed on the ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced to less than 1 second from 37 seconds.

  18. Neural network construction via back-propagation

    SciTech Connect

    Burwick, T.T.

    1994-06-01

    A method is presented that combines back-propagation with multi-layer neural network construction. Back-propagation is used to adjust not only the weights but also the signal functions. Going from one network to an equivalent one that has additional linear units, the non-linearity of these units, and thus their effective presence, is then introduced via back-propagation (weight-splitting). The back-propagated error causes the network to include new units in order to minimize the error function. We also show how this formalism allows the network to escape local minima.

  19. Multiscale Modeling of Cortical Neural Networks

    NASA Astrophysics Data System (ADS)

    Torben-Nielsen, Benjamin; Stiefel, Klaus M.

    2009-09-01

    In this study, we describe efforts at modeling the electrophysiological dynamics of cortical networks in a multi-scale manner. Specifically, we describe the implementation of a network model composed of simple single-compartmental neuron models, in which a single complex multi-compartmental model of a pyramidal neuron is embedded. The network is capable of generating Δ (2 Hz, observed during deep sleep states) and γ (40 Hz, observed during wakefulness) oscillations, which are then imposed onto the multi-compartmental model, thus providing realistic, dynamic boundary conditions. We furthermore discuss the challenges and opportunities involved in multi-scale modeling of neural function.

  20. Learning-induced pattern classification in a chaotic neural network

    NASA Astrophysics Data System (ADS)

    Li, Yang; Zhu, Ping; Xie, Xiaoping; He, Guoguang; Aihara, Kazuyuki

    2012-01-01

    In this Letter, we propose a Hebbian learning rule with passive forgetting (HLRPF) for use in a chaotic neural network (CNN). We then define indices based on the Euclidean distance to investigate the evolution of the weights in a simplified way. Numerical simulations demonstrate that, under suitable external stimulations, the CNN with the proposed HLRPF acts as a fuzzy-like pattern classifier that performs much better than an ordinary CNN. The results imply a relationship between learning and recognition.
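
    The qualitative behaviour of a Hebbian rule with passive forgetting can be sketched in one line: co-activity grows a weight while a decay term pulls every weight toward zero, giving a stable fixed point at eta*x*y/decay. The constants are illustrative, not the Letter's values.

```python
def hebb_forget(w, x, y, eta=0.05, decay=0.01):
    """One Hebbian step with passive forgetting: co-activity (x*y) grows
    each weight; the (1 - decay) factor slowly forgets it."""
    return [(1.0 - decay) * wi + eta * xi * y for wi, xi in zip(w, x)]

# Under constant pre/post activity the weight settles at eta*x*y/decay = 5.0:
w = [0.0]
for _ in range(2000):
    w = hebb_forget(w, [1.0], 1.0)
```

    The forgetting term is what keeps the weights bounded; without it, the plain Hebbian product grows without limit under sustained stimulation.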

  1. Tuning the stator resistance of induction motors using artificial neural network

    SciTech Connect

    Cabrera, L.A.; Elbuluk, M.E.; Husain, I.

    1997-09-01

    Tuning the stator resistance of induction motors is very important, especially when it is used to implement direct torque control (DTC), in which the stator resistance is a main parameter. In this paper, an artificial neural network (ANN) is used to tune the stator resistance of an induction motor. The parallel recursive prediction error and back-propagation training algorithms were used to train the neural network for the simulation and experimental results, respectively. The neural network used to tune the stator resistance was trained on-line, making the DTC strategy more robust and accurate. Simulation results are presented for three different neural-network configurations, showing the efficiency of the tuning process. Experimental results were obtained for one of the three neural-network configurations. Both simulation and experimental results showed that the ANN tuned the stator resistance in the controller to track the actual resistance of the machine.

  2. Tumor Diagnosis Using Backpropagation Neural Network Method

    NASA Astrophysics Data System (ADS)

    Ma, Lixing; Looney, Carl; Sukuta, Sydney; Bruch, Reinhard; Afanasyeva, Natalia

    1998-05-01

    For the characterization of skin cancer, an artificial neural network (ANN) method has been developed to diagnose normal tissue, benign tumor, and melanoma. The pattern recognition is based on a three-layer neural network fuzzy learning system. In this study, the input neuron data set is the Fourier Transform infrared (FT-IR) spectrum obtained by a new Fiberoptic Evanescent Wave Fourier Transform Infrared (FEW-FTIR) spectroscopy method in the range of 1480 to 1850 cm-1. Ten input features are extracted from the absorbance values in this region. A single hidden layer of neural nodes with sigmoid activation functions clusters the feature space into small subclasses, and the output nodes are separated into different nonconvex classes to permit nonlinear discrimination of disease states. The output is classified into three classes: normal tissue, benign tumor, and melanoma. The results obtained from the neural network pattern recognition are shown to be consistent with traditional medical diagnosis. Input features have also been extracted from the absorbance spectra using chemical factor analysis. These abstract features, or factors, are also used in the classification.

  3. Neural networks in the process industries

    SciTech Connect

    Ben, L.R.; Heavner, L.

    1996-12-01

    Neural networks, or more precisely, artificial neural networks (ANNs), are rapidly gaining in popularity. They first began to appear on the process-control scene in the early 1990s, but have been a research focus for more than 30 years. Neural networks are empirical models that approximate the way neurons in the human brain are thought to work. Neural-net technology is not trying to produce computerized clones, but to model nature in an effort to mimic some of the brain's capabilities. Modeling, for the purposes of this article, means developing a mathematical description of physical phenomena. The physics and chemistry of industrial processes are usually quite complex and sometimes poorly understood. Our process understanding, and our imperfect ability to describe complexity in mathematical terms, limit the fidelity of first-principle models. Computational requirements for executing these complex models are a further limitation. It is often not possible to execute first-principle model algorithms at the high rate required for online control. Nevertheless, rigorous first-principle models are commonplace design tools. Process control is another matter. Important model inputs are often not available as process measurements, making real-time application difficult. In fact, engineers often use models to infer unavailable measurements. 5 figs.

  4. Prototyping distributed simulation networks

    NASA Technical Reports Server (NTRS)

    Doubleday, Dennis L.

    1990-01-01

    Durra is a declarative language designed to support application-level programming. The use of Durra is illustrated to describe a simple distributed application: a simulation of a collection of networked vehicle simulators. It is shown how the language is used to describe the application, its components and structure, and how the runtime executive provides for the execution of the application.

  5. Pruning Neural Networks with Distribution Estimation Algorithms

    SciTech Connect

    Cantu-Paz, E

    2003-01-15

    This paper describes the application of four evolutionary algorithms to the pruning of neural networks used in classification problems. Besides a simple genetic algorithm (GA), the paper considers three distribution estimation algorithms (DEAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine whether the DEAs present advantages over the simple GA in terms of accuracy or speed on this problem. The experiments used a feed-forward neural network trained with standard back-propagation and public-domain and artificial data sets. The pruned networks seemed to have accuracy better than or equal to that of the original fully-connected networks; only in a few cases did pruning result in less accurate networks. We found few differences in the accuracy of the networks pruned by the four EAs, but important differences in execution time. The results suggest that a simple GA with a small population might be the best algorithm for pruning networks on the data sets we tested.
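
    The simple-GA variant can be sketched directly: evolve binary masks over the connections, where bit i decides whether weight i is kept. The fitness function below is a toy stand-in for "accuracy of the pruned network", and all GA settings are illustrative.

```python
import random

def ga_prune(n_weights, fitness, pop=20, gens=30, seed=1):
    """Evolve binary pruning masks with truncation selection, one-point
    crossover, and single-bit mutation (a plain GA, not the DEAs)."""
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(n_weights)] for _ in range(pop)]
    for _ in range(gens):
        parents = sorted(popn, key=fitness, reverse=True)[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_weights)     # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_weights)] ^= 1  # flip one bit (mutation)
            children.append(child)
        popn = parents + children                 # elitist replacement
    return max(popn, key=fitness)

# Toy fitness: pretend accuracy is best when weights 0-4 stay and 5-9 go.
fitness = lambda m: sum(m[:5]) + sum(1 - b for b in m[5:])
best = ga_prune(10, fitness)
```

    A DEA such as the compact GA would replace the crossover/mutation step with sampling from an explicit probability model over the mask bits.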

  6. Membership generation using multilayer neural network

    NASA Technical Reports Server (NTRS)

    Kim, Jaeseok

    1992-01-01

    There has been intensive research in neural network applications to pattern recognition problems. Particularly, the back-propagation network has attracted many researchers because of its outstanding performance in pattern recognition applications. In this section, we describe a new method to generate membership functions from training data using a multilayer neural network. The basic idea behind the approach is as follows. The output values of a sigmoid activation function of a neuron bear remarkable resemblance to membership values. Therefore, we can regard the sigmoid activation values as the membership values in fuzzy set theory. Thus, in order to generate class membership values, we first train a suitable multilayer network using a training algorithm such as the back-propagation algorithm. After the training procedure converges, the resulting network can be treated as a membership generation network, where the inputs are feature values and the outputs are membership values in the different classes. This method allows fairly complex membership functions to be generated because the network is highly nonlinear in general. Also, it is to be noted that the membership functions are generated from a classification point of view. For pattern recognition applications, this is highly desirable, although the membership values may not be indicative of the degree of typicality of a feature value in a particular class.
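
    The core idea, reading sigmoid outputs directly as membership values, can be shown with a forward pass alone. The two-layer network below is hand-set for illustration; in the method described above the weights would come from back-propagation training.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def memberships(x, layers):
    """Forward pass of an MLP; the final sigmoid activations are taken
    as the input's membership values in each class."""
    a = x
    for W, b in layers:
        a = [sigmoid(sum(w * ai for w, ai in zip(row, a)) + bi)
             for row, bi in zip(W, b)]
    return a

# Hand-set weights: two features -> two hidden units -> two classes.
layers = [
    ([[4.0, -4.0], [-4.0, 4.0]], [0.0, 0.0]),  # hidden layer
    ([[3.0, -3.0], [-3.0, 3.0]], [0.0, 0.0]),  # output (membership) layer
]
m = memberships([1.0, 0.0], layers)
```

    Because every output passes through a sigmoid, each value already lies in [0, 1], which is what lets it be interpreted as a fuzzy membership grade.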

  7. Computational capabilities of recurrent NARX neural networks.

    PubMed

    Siegelmann, H T; Horne, B G; Giles, C L

    1997-01-01

    Recently, fully connected recurrent neural networks have been proven to be computationally rich, at least as powerful as Turing machines. This work focuses on another network which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have a limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by y(t) = Psi(u(t-n(u)), ..., u(t-1), u(t), y(t-n(y)), ..., y(t-1)), where u(t) and y(t) represent the input and output of the network at time t, n(u) and n(y) are the input and output orders, and the function Psi is the mapping performed by a Multilayer Perceptron. We constructively prove that NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines. We conclude that in theory one can use NARX models rather than conventional recurrent networks without any computational loss, even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent, and what restrictions on feedback limit computational power. PMID:18255858
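
    The recurrence in the abstract is easy to state in code: keep sliding windows of past inputs and outputs and apply Psi at each step. Here Psi is a linear stand-in for the multilayer perceptron.

```python
def run_narx(psi, u_seq, nu, ny, y0=0.0):
    """Drive the recurrence y(t) = Psi(u(t-nu..t), y(t-ny..t-1)) over an
    input sequence; psi is any user-supplied map of the two histories."""
    u_hist = [0.0] * (nu + 1)  # window of current + nu past inputs
    y_hist = [y0] * ny         # window of ny past outputs
    out = []
    for u in u_seq:
        u_hist = u_hist[1:] + [u]
        y = psi(u_hist, y_hist)
        y_hist = y_hist[1:] + [y]
        out.append(y)
    return out

# A linear stand-in for the MLP Psi (illustrative, not a trained network):
psi = lambda uh, yh: 0.5 * yh[-1] + uh[-1]
ys = run_narx(psi, [1.0, 0.0, 0.0], nu=0, ny=1)
```

    The limited feedback is visible in the signature: only past outputs `y_hist` feed back, never hidden state, which is exactly the restriction whose computational cost the paper analyzes.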

  8. Optical neural network system for pose determination of spinning satellites

    NASA Technical Reports Server (NTRS)

    Lee, Andrew; Casasent, David

    1990-01-01

    An optical neural network architecture and algorithm based on a Hopfield optimization network are presented for multitarget tracking. This tracker utilizes a neuron for every possible target track, and a quadratic energy function of neural activities which is minimized using gradient descent neural evolution. The neural net tracker is demonstrated as part of a system for determining position and orientation (pose) of spinning satellites with respect to a robotic spacecraft. The input to the system is time sequence video from a single camera. Novelty detection and filtering are utilized to locate and segment novel regions from the input images. The neural net multitarget tracker determines the correspondences (or tracks) of the novel regions as a function of time, and hence the paths of object (satellite) parts. The path traced out by a given part or region is approximately elliptical in image space, and the position, shape and orientation of the ellipse are functions of the satellite geometry and its pose. Having a geometric model of the satellite, and the elliptical path of a part in image space, the three-dimensional pose of the satellite is determined. Digital simulation results using this algorithm are presented for various satellite poses and lighting conditions.

  9. Generalization of features in the assembly neural networks.

    PubMed

    Goltsev, Alexander; Wunsch, Donald C

    2004-02-01

    The purpose of the paper is an experimental study of the formation of class descriptions, taking place during learning, in assembly neural networks. The assembly neural network is artificially partitioned into several sub-networks according to the number of classes that the network has to recognize. The features extracted from input data are represented in neural column structures of the sub-networks. Hebbian neural assemblies are formed in the column structure of the sub-networks by weight adaptation. A specific class description is formed in each sub-network of the assembly neural network due to intersections between the neural assemblies. The process of formation of class descriptions in the sub-networks is interpreted as feature generalization. A set of special experiments is performed to study this process, on a task of character recognition using the MNIST database. PMID:15034946

  10. Neural network submodel as an abstraction tool: relating network performance to combat outcome

    NASA Astrophysics Data System (ADS)

    Jablunovsky, Greg; Dorman, Clark; Yaworsky, Paul S.

    2000-06-01

    Simulation of Command and Control (C2) networks has historically emphasized individual system performance with little architectural context or credible linkage to 'bottom-line' measures of combat outcomes. Renewed interest in modeling C2 effects and relationships stems from emerging network-intensive operational concepts. This demands improved methods to span the analytical hierarchy between C2 system performance models and theater-level models. Neural network technology offers a modeling approach that can abstract the essential behavior of higher-resolution C2 models within a campaign simulation. The proposed methodology uses off-line learning of the relationships between network state and campaign-impacting performance of a complex C2 architecture, and then approximation of that performance as a time-varying parameter in an aggregated simulation. Ultimately, this abstraction tool offers an increased fidelity of C2 system simulation that captures dynamic network dependencies within a campaign context.

  11. Associative learning in random environments using neural networks.

    PubMed

    Narendra, K S; Mukhopadhyay, S

    1991-01-01

    Associative learning is investigated using neural networks and concepts based on learning automata. The behavior of a single decision-maker containing a neural network is studied in a random environment using reinforcement learning. The objective is to determine the optimal action corresponding to a particular state. Since decisions have to be made throughout the context space based on a countable number of experiments, generalization is inevitable. Many different approaches can be followed to generate the desired discriminant function. Three different methods which use neural networks are discussed and compared. In the most general method, the output of the network determines the probability with which one of the actions is to be chosen. The weights of the network are updated on the basis of the actions and the response of the environment. The extension of similar concepts to decentralized decision-making in a context space is also introduced. Simulation results are included. Modifications in the implementations of the most general method to make it practically viable are also presented. All the methods suggested are feasible and the choice of a specific method depends on the accuracy desired as well as on the available computational power. PMID:18276348

  12. Review On Applications Of Neural Network To Computer Vision

    NASA Astrophysics Data System (ADS)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models, and their applications in this field. A variety of neural models, such as associative memory, the multilayer back-propagation perceptron, the self-stabilized adaptive resonance network, the hierarchically structured neocognitron, high-order correlators, networks with gating control, and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking, and other vision processes. Most of the algorithms have been simulated on computers. Some have been implemented with special hardware. Some systems use features of images, such as edges and profiles, as the data form for input. Other systems use raw data as input signals to the networks. We present some novel ideas contained in these approaches and provide a comparison of the methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of some human vision models and neural network models are analyzed.

  13. The relevance of network micro-structure for neural dynamics.

    PubMed

    Pernice, Volker; Deger, Moritz; Cardanobile, Stefano; Rotter, Stefan

    2013-01-01

    The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm to construct networks based on a quasi-fractal probability measure which are much more variable than commonly used network models, and which therefore promise to sample the space of recurrent networks in a more exhaustive fashion than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics like spike-train irregularity or correlations and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits. PMID:23761758

  14. Functional expansion representations of artificial neural networks

    NASA Technical Reports Server (NTRS)

    Gray, W. Steven

    1992-01-01

    In the past few years, significant interest has developed in using artificial neural networks to model and control nonlinear dynamical systems. While there exist many proposed schemes for accomplishing this and a wealth of supporting empirical results, most approaches to date tend to be ad hoc in nature and rely mainly on heuristic justifications. The purpose of this project was to further develop analytical tools for representing nonlinear discrete-time input-output systems, which when applied to neural networks would give insight into architecture selection, pruning strategies, and learning algorithms. A long-term goal is to determine in what sense, if any, a neural network can be used as a universal approximator for nonlinear input-output maps with memory (i.e., those realized by a dynamical system). This property is well known for the case of static or memoryless input-output maps. The general architecture under consideration in this project was a single-input, single-output recurrent feedforward network.

  15. Correcting wave predictions with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Makarynskyy, O.; Makarynska, D.

    2003-04-01

    The prediction of wind waves at different lead times is necessary in a large scope of coastal and open-ocean activities. Numerical wave models, which usually provide this information, are based on deterministic equations that do not entirely account for the complexity and uncertainty of the wave generation and dissipation processes. An attempt to improve short-term forecasts of wave parameters using artificial neural networks is reported. In recent years, artificial neural networks have been used in a number of coastal engineering applications due to their ability to approximate nonlinear mathematical behavior without a priori knowledge of the interrelations among the elements within a system. Common multilayer feed-forward networks, with nonlinear transfer functions in the hidden layers, were developed and employed to forecast the wave characteristics over one-hour intervals from one up to 24 hours ahead, and to correct these predictions. Three non-overlapping data sets of wave characteristics, all from a buoy moored roughly 60 miles west of the Aran Islands off the west coast of Ireland, were used to train and validate the neural nets involved. The networks were trained with the error back-propagation algorithm. Time-series plots and scatterplots of the wave characteristics, as well as tables of statistics, show an improvement in the results achieved due to the correction procedure employed.
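
    The correction step can be sketched in its simplest form: learn the numerical model's systematic error from past observations. A one-term linear corrector fitted by gradient descent is shown below with hypothetical data; the study itself uses multilayer feed-forward networks trained by back-propagation.

```python
def train_corrector(preds, obs, eta=0.1, epochs=2000):
    """Fit obs ~ a*pred + b by batch gradient descent on squared error."""
    a, b = 1.0, 0.0
    n = float(len(preds))
    for _ in range(epochs):
        ga = gb = 0.0
        for p, y in zip(preds, obs):
            e = a * p + b - y   # residual of the corrected forecast
            ga += e * p / n
            gb += e / n
        a -= eta * ga
        b -= eta * gb
    return a, b

# Hypothetical model forecasts and buoy observations with a systematic bias:
a, b = train_corrector([1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0])
# Corrected forecast for a new model output u is then a * u + b.
```

    A neural corrector generalizes this in the obvious way: the affine map a*u + b is replaced by a trained network, so nonlinear systematic errors can be removed as well.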

  16. Convolutional Neural Network Based dem Super Resolution

    NASA Astrophysics Data System (ADS)

    Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang

    2016-06-01

    DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of learning examples. A nonlocal algorithm was introduced to address the problem, and many experiments showed that the strategy is feasible. In that publication, the learning examples were defined as parts of the original DEM and their related high-resolution measurements, because this avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this strategy, the learning examples should be diverse and easy to obtain; yet this may cause problems of incompatibility and lack of robustness. To overcome these, we investigate a convolutional neural network based method. The input of the convolutional neural network is a low-resolution DEM and the output is expected to be its high-resolution counterpart. A three-layer model is adopted: the first layer detects features from the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. Given this structure, a set of training DEMs is used to optimize the network by minimizing the error between its output and the expected high-resolution DEM. In practical applications, a test DEM is input to the convolutional neural network and a super-resolved DEM is obtained. Many experiments show that the CNN-based method obtains better reconstructions than many classic interpolation methods.
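
    The three-layer structure can be sketched in one dimension: a feature-detection convolution, a nonlinear mapping, and a reconstruction convolution. The kernels below are hand-picked identities, not learned; a real model would train them by minimizing reconstruction error as described above.

```python
def conv1d(x, k):
    """'Valid' 1-D correlation, a one-dimensional stand-in for a
    convolutional layer."""
    n = len(k)
    return [sum(x[i + j] * k[j] for j in range(n))
            for i in range(len(x) - n + 1)]

def relu(v):
    return [max(0.0, a) for a in v]

def sr_pipeline(dem_row, k1, k2, k3):
    """Feature extraction -> nonlinear mapping -> reconstruction."""
    return conv1d(relu(conv1d(relu(conv1d(dem_row, k1)), k2)), k3)

# With single-tap identity kernels the pipeline passes the row through:
row = sr_pipeline([1.0, 2.0, 3.0], [1.0], [1.0], [1.0])
```

    The trained 2-D version differs only in scale: each layer holds many learned kernels over image patches, and the loss compares the output against the known high-resolution DEM.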

  17. Subsonic Aircraft With Regression and Neural-Network Approximators Designed

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.

    2004-01-01

    At the NASA Glenn Research Center, NASA Langley Research Center's Flight Optimization System (FLOPS) and the design optimization testbed COMETBOARDS with regression and neural-network-analysis approximators have been coupled to obtain a preliminary aircraft design methodology. For a subsonic aircraft, the optimal design, that is the airframe-engine combination, is obtained by the simulation. The aircraft is powered by two high-bypass-ratio engines with a nominal thrust of about 35,000 lbf. It is to carry 150 passengers at a cruise speed of Mach 0.8 over a range of 3000 n mi and to operate on a 6000-ft runway. The aircraft design utilized a neural network and a regression-approximations-based analysis tool, along with a multioptimizer cascade algorithm that uses sequential linear programming, sequential quadratic programming, the method of feasible directions, and then sequential quadratic programming again. Optimal aircraft weight versus the number of design iterations is shown. The central processing unit (CPU) time to solution is given. It is shown that the regression-method-based analyzer exhibited a smoother convergence pattern than the FLOPS code. The optimum weight obtained by the approximation technique and the FLOPS code differed by 1.3 percent. Prediction by the approximation technique exhibited no error for the aircraft wing area and turbine entry temperature, whereas it was within 2 percent for most other parameters. Cascade strategy was required by FLOPS as well as the approximators. The regression method had a tendency to hug the data points, whereas the neural network exhibited a propensity to follow a mean path. The performance of the neural network and regression methods was considered adequate. It was at about the same level for small, standard, and large models with redundancy ratios (defined as the number of input-output pairs to the number of unknown coefficients) of 14, 28, and 57, respectively. In an SGI octane workstation (Silicon Graphics

  18. Character Recognition Using Genetically Trained Neural Networks

    SciTech Connect

    Diniz, C.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-10-01

    Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three-layer feed-forward architecture. To facilitate the input of characters and symbols, a graphic user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period, the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases when genetically training the net. And, the amount of
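    Genetic training of network weights, as opposed to gradient descent, can be illustrated with a deliberately shrunken, hypothetical stand-in for the setup above (4-pixel "bitmaps" and a single output instead of 8 x 8 bitmaps and five output nodes):

```python
import random

random.seed(1)

# Tiny stand-in "bitmaps": 4-pixel patterns for two character classes.
DATA = [([1, 0, 1, 0], 0), ([1, 1, 1, 0], 0),
        ([0, 1, 0, 1], 1), ([0, 1, 1, 1], 1)]

def classify(weights, pixels):
    # weights[:-1] are pixel weights, weights[-1] is a bias term
    s = weights[-1] + sum(w * p for w, p in zip(weights, pixels))
    return 1 if s > 0 else 0

def error(weights):
    return sum(classify(weights, px) != label for px, label in DATA)

def ga_train(pop_size=20, generations=40, mut=0.3):
    """Genetic training: fitness is low classification error; the best
    half survives unchanged (elitism) and mutated copies fill the rest."""
    pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        pop.sort(key=error)
        history.append(error(pop[0]))
        parents = pop[:pop_size // 2]
        children = [[w + random.gauss(0, mut) for w in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    pop.sort(key=error)
    return pop[0], history

best, history = ga_train()
```

    Because of elitism, the best error in the population can never increase from one generation to the next.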

  19. Neural network for tsunami and runup forecast

    NASA Astrophysics Data System (ADS)

    Namekar, Shailesh; Yamazaki, Yoshiki; Cheung, Kwok Fai

    2009-04-01

    This paper examines the use of a neural network to model nonlinear tsunami processes for forecasting of coastal waveforms and runup. The three-layer network utilizes a radial basis function in the hidden, middle layer for nonlinear transformation of input waveforms near the tsunami source. Events based on the 2006 Kuril Islands tsunami demonstrate the implementation and capability of the network. Division of the Kamchatka-Kuril subduction zone into a number of subfaults facilitates development of a representative tsunami dataset using a nonlinear long-wave model. The computed waveforms near the tsunami source serve as the input, and the far-field waveforms and runup provide the target output for training of the network through a back-propagation algorithm. The trained network reproduces the resonance of tsunami waves and the topography-dominated runup patterns at Hawaii's coastlines from input water-level data off the Aleutian Islands.
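    The radial-basis hidden layer at the core of such a network can be sketched as below. The centres and read-out weights are hypothetical; in the study they would be fitted by back-propagation on source-region waveforms against far-field targets:

```python
import math

def rbf_hidden(x, centers, width):
    """Radial-basis hidden layer: Gaussian response to the distance
    between the input vector and each centre."""
    return [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                     / (2.0 * width ** 2)) for c in centers]

def rbf_forecast(x, centers, width, w_out):
    """Linear read-out over the radial-basis activations."""
    phi = rbf_hidden(x, centers, width)
    return sum(w * p for w, p in zip(w_out, phi))

# Hypothetical centres and weights for a 2-D toy input.
centers = [[0.0, 0.0], [1.0, 1.0]]
w_out = [0.3, 0.7]
y = rbf_forecast([1.0, 1.0], centers, 0.5, w_out)
```

    An input sitting exactly on a centre drives that hidden unit to its maximum response of 1.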

  20. Exponential stabilization and synchronization for fuzzy model of memristive neural networks by periodically intermittent control.

    PubMed

    Yang, Shiju; Li, Chuandong; Huang, Tingwen

    2016-03-01

    The problem of exponential stabilization and synchronization for a fuzzy model of memristive neural networks (MNNs) is investigated by using periodically intermittent control in this paper. Based on the knowledge of memristors and recurrent neural networks, the model of MNNs is formulated. Some novel and useful stabilization criteria and synchronization conditions are then derived by using Lyapunov functionals and differential inequality techniques. It is worth noting that the methods used in this paper can also be applied to fuzzy models of complex networks and general neural networks. Numerical simulations are also provided to verify the effectiveness of the theoretical results. PMID:26797471

  1. Maximus-AI: Using Elman Neural Networks for Implementing a SLMR Trading Strategy

    NASA Astrophysics Data System (ADS)

    Marques, Nuno C.; Gomes, Carlos

    This paper presents a stop-loss - maximum return (SLMR) trading strategy based on improving the classic moving average technical indicator with neural networks. We propose an improvement in the efficiency of the long-term moving average by using the limited recursion in Elman Neural Networks, jointly with a hybrid neuro-symbolic neural network, while still fully keeping all the learning capabilities of the non-recursive parts of the network. Simulations using the Eurostoxx50 financial index illustrate the potential of such a strategy for avoiding negative asset returns and decreasing investment risk.
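    One way to see how limited recursion can replace a long moving-average window: a single linear Elman-style context unit with recurrent weight a computes an exponentially weighted moving average, h_t = (1 - a) x_t + a h_{t-1}, without storing any price history. This is a hedged illustration of the idea, not the paper's hybrid network:

```python
def elman_ema(prices, a):
    """A single linear Elman-style unit whose context (recurrent) weight
    a yields an exponential moving average of the input series."""
    h = prices[0]
    out = [h]
    for x in prices[1:]:
        h = (1.0 - a) * x + a * h   # blend new price with remembered state
        out.append(h)
    return out

# Illustrative price series: a level shift from 100 to 110.
prices = [100.0] * 5 + [110.0] * 50
smoothed = elman_ema(prices, 0.9)
```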

  2. Simulating neural systems with Xyce.

    SciTech Connect

    Schiek, Richard Louis; Thornquist, Heidi K.; Mei, Ting; Warrender, Christina E.; Aimone, James Bradley; Teeter, Corinne; Duda, Alex M.

    2012-12-01

    Sandia's parallel circuit simulator, Xyce, can address large-scale neuron simulations in a new way, extending the range within which one can perform high-fidelity, multi-compartment neuron simulations. This report documents the implementation of neuron devices in Xyce and their use in the simulation and analysis of neuron systems.

  3. Fuzzy neural networks for classification and detection of anomalies.

    PubMed

    Meneganti, M; Saviello, F S; Tagliaferri, R

    1998-01-01

    In this paper, a new learning algorithm for Simpson's fuzzy min-max neural network is presented. It overcomes some undesired properties of Simpson's model: specifically, in it there are neither thresholds that bound the dimension of the hyperboxes nor sensitivity parameters. Our new algorithm improves the network performance: in fact, the classification result does not depend on the presentation order of the patterns in the training set, and at each step, the classification error on the training set cannot increase. The new neural model is particularly useful in classification problems, as is shown by comparison with some fuzzy neural nets cited in the literature (Simpson's min-max model, the fuzzy ARTMAP proposed by Carpenter, Grossberg et al. in 1992, and the adaptive fuzzy systems introduced by Wang in his book) and the classical multilayer perceptron neural network with the backpropagation learning algorithm. The tests were executed on three different classification problems: the first with two-dimensional synthetic data, the second with realistic data generated by a simulator to find anomalies in the cooling system of a blast furnace, and the third with real data for industrial diagnosis. The experiments were made following some recent evaluation criteria known in the literature and using the Microsoft Visual C++ development environment on personal computers. PMID:18255771
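    The hyperbox membership at the heart of min-max networks can be sketched as below. This is a simplified, hypothetical form of Simpson's original membership function, kept with an explicit sensitivity parameter gamma for illustration (the improved algorithm above dispenses with such parameters):

```python
def hyperbox_membership(x, v_min, v_max, gamma=4.0):
    """Simplified fuzzy min-max membership: 1 inside the hyperbox
    [v_min, v_max], decaying linearly with distance outside it
    (gamma plays the role of a sensitivity parameter)."""
    m = 1.0
    for xi, lo, hi in zip(x, v_min, v_max):
        outside = max(0.0, xi - hi) + max(0.0, lo - xi)
        m = min(m, max(0.0, 1.0 - gamma * outside))
    return m

inside = hyperbox_membership([0.5, 0.5], [0.2, 0.2], [0.8, 0.8])
near = hyperbox_membership([0.9, 0.5], [0.2, 0.2], [0.8, 0.8])
far = hyperbox_membership([5.0, 5.0], [0.2, 0.2], [0.8, 0.8])
```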

  4. Equipment and wafer modeling of batch furnaces by neural networks

    NASA Astrophysics Data System (ADS)

    Benesch, N.; Schneider, Claus; Lehnert, Wolfgang; Pfitzner, Lothar; Ryssel, Heiner

    1999-04-01

    In semiconductor manufacturing there is a great demand for innovations towards higher cost-effectiveness. The increasing employment of advanced control systems for process and equipment control is one means to improve manufacturing processes effectively and, hence, to lower costs. A precondition for accurate and fast control is the availability of process models. In this paper, neural networks are applied to non-linear system identification as an alternative or addition to physical models. Neural empirical models are developed with the help of measured input and output data of a system or process. After a brief summary of the theory of neural networks, their application to system identification is described in detail. The capabilities of the neural network models are demonstrated by several examples. The temperature dynamics of a vertical furnace for the oxidation of 300 mm wafers as well as the zone temperatures of a 150 mm LPCVD furnace are simulated, and the results are verified by measurements. Moreover, in order to control wafer temperatures in batch furnaces, an appropriate model was developed and implemented in a model-based controller.

  5. Application of neural networks to seismic active control

    SciTech Connect

    Tang, Yu

    1995-07-01

    An exploratory study on seismic active control using an artificial neural network (ANN) is presented, in which a single-degree-of-freedom (SDF) structural system is controlled by a trained neural network. A feed-forward neural network and the backpropagation training method are used in the study. In backpropagation training, the learning rate is determined by ensuring the decrease of the error function at each training cycle. The training patterns for the neural net are generated randomly. Then, the trained ANN is used to compute the control force according to the control algorithm. The control strategy proposed herein is to apply the control force at every time step to destroy the build-up of the system response. The ground motions considered in the simulations are the N21E and N69W components of the Lake Hughes No. 12 record from the earthquake that occurred in the San Fernando Valley, California, on February 9, 1971. Significant reduction of the structural response, by one order of magnitude, is observed. Also, it is shown that the proposed control strategy has the ability to reduce the peak that occurs during the first few cycles of the time history. These promising results assert the potential of applying ANNs to active structural control under seismic loads.

  6. An Improved Maximum Neural Network with Stochastic Dynamics Characteristic for Maximum Clique Problem

    NASA Astrophysics Data System (ADS)

    Yang, Gang; Tang, Zheng; Dai, Hongwei

    Through analyzing the dynamics characteristics of a maximum neural network with an added vertex, we find that the solution quality is mainly determined by the added vertex weights. In order to increase the maximum neural network's ability, a stochastic nonlinear self-feedback and a flexible annealing strategy are embedded in the maximum neural network, which makes the network more powerful at escaping local minima and independent of the initial values. Simultaneously, we show that the solving ability of the maximum neural network is problem-dependent. We introduce a new parameter into our network to improve its solving ability. Simulations on k random graphs and some DIMACS clique instances from the second DIMACS challenge show that our improved network is superior to other algorithms in light of solution quality and CPU time.

  7. Analysis of Stochastic Response of Neural Networks with Stochastic Input

    Energy Science and Technology Software Center (ESTSC)

    1996-10-10

    Software permits the user to extend the capability of his/her neural network to include probabilistic characteristics of input parameters. The user inputs the topology and weights associated with the neural network along with the distributional characteristics of the input parameters. Network response is provided via a cumulative density function of the network response variable.
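    The underlying idea (propagate an input distribution through a fixed, already-trained network and summarize the response as an empirical CDF) can be sketched by Monte Carlo sampling. The toy network and Gaussian input distribution below are illustrative assumptions, not the software's actual model:

```python
import math
import random

random.seed(42)

def network(x):
    """A fixed, already-trained toy network (weights are illustrative)."""
    h1 = math.tanh(0.8 * x[0] - 0.5 * x[1])
    h2 = math.tanh(-0.3 * x[0] + 0.9 * x[1])
    return 0.6 * h1 + 0.4 * h2

# Propagate the input distribution through the net by sampling.
samples = sorted(network([random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)])
                 for _ in range(2000))

def response_cdf(t):
    """Empirical cumulative density function of the network response."""
    return sum(s <= t for s in samples) / len(samples)
```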

  8. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems

    PubMed Central

    Ranganayaki, V.; Deepa, S. N.

    2016-01-01

    Various criteria are proposed to select the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model based wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, which include the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. The random selection of the number of hidden neurons in an artificial neural network results in an overfitting or underfitting problem. This paper aims to avoid the occurrence of overfitting and underfitting problems. The selection of the number of hidden neurons is done in this paper by employing 102 criteria; these evolved criteria are verified by the various computed error values. The proposed criteria for fixing hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied to a wind speed prediction application considering real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error value to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with the earlier models available in the literature. PMID:27034973
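    The ensemble step itself is simple averaging of the member forecasts. As a hedged sketch, the four models below are hypothetical stand-ins (constant callables) for the trained MLP, Madaline, BPN, and PNN predictors:

```python
def ensemble_wind_forecast(x, models):
    """Average the forecasts of several trained models for one input."""
    forecasts = [m(x) for m in models]
    return sum(forecasts) / len(forecasts)

# Hypothetical stand-ins for four trained predictors (m/s outputs).
models = [lambda x: 7.9, lambda x: 8.3, lambda x: 8.0, lambda x: 8.2]
speed = ensemble_wind_forecast([290.0, 12.0], models)  # e.g. wind dir, temp
```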

  10. Training Data Requirement for a Neural Network to Predict Aerodynamic Coefficients

    NASA Technical Reports Server (NTRS)

    Korsmeyer, David (Technical Monitor); Rajkumar, T.; Bardina, Jorge

    2003-01-01

    Basic aerodynamic coefficients are modeled as functions of angle of attack, speed brake deflection angle, Mach number, and side slip angle. Most of the aerodynamic parameters can be well fitted using polynomial functions. We previously demonstrated that a neural network is a fast, reliable way of predicting aerodynamic coefficients. We encountered a few underfitted and/or overfitted results during prediction. The training data for the neural network are derived from wind tunnel test measurements and numerical simulations. The basic questions that arise are: how many training data points are required to produce an efficient neural network prediction, and which type of transfer function should be used between the input-hidden layer and hidden-output layer. In this paper, a comparative study of the efficiency of neural network prediction based on different transfer functions and training dataset sizes is presented. The results of the neural network prediction reflect the sensitivity of the architecture, transfer functions, and training dataset size.

  11. Synchronization criteria for generalized reaction-diffusion neural networks via periodically intermittent control

    NASA Astrophysics Data System (ADS)

    Gan, Qintao; Lv, Tianshi; Fu, Zhenhua

    2016-04-01

    In this paper, the synchronization problem for a class of generalized neural networks with time-varying delays and reaction-diffusion terms is investigated concerning Neumann boundary conditions in terms of p-norm. The proposed generalized neural networks model includes reaction-diffusion local field neural networks and reaction-diffusion static neural networks as its special cases. By establishing a new inequality, some simple and useful conditions are obtained analytically to guarantee the global exponential synchronization of the addressed neural networks under the periodically intermittent control. According to the theoretical results, the influences of diffusion coefficients, diffusion space, and control rate on synchronization are analyzed. Finally, the feasibility and effectiveness of the proposed methods are shown by simulation examples, and by choosing different diffusion coefficients, diffusion spaces, and control rates, different controlled synchronization states can be obtained.

  12. Real-time neural network based camera localization and its extension to mobile robot control.

    PubMed

    Choi, D H; Oh, S Y

    1997-06-01

    The feasibility of using neural networks for camera localization and mobile robot control is investigated here. This approach has the advantages of eliminating the laborious and error-prone process of imaging system modeling and calibration procedures. Basically, two different approaches of using neural networks are introduced of which one is a hybrid approach combining neural networks and the pinhole-based analytic solution while the other is purely neural network based. These techniques have been tested and compared through both simulation and real-time experiments and are shown to yield more precise localization than analytic approaches. Furthermore, this neural localization method is also shown to be directly applicable to the navigation control of an experimental mobile robot along the hallway purely guided by a dark wall strip. It also facilitates multi-sensor fusion through the use of multiple sensors of different types for control due to the network's capability of learning without models. PMID:9427102

  14. Training neural networks with heterogeneous data.

    PubMed

    Drakopoulos, John A; Abdulkader, Ahmad

    2005-01-01

    Data pruning and ordered training are two methods, and the results of a small theory, that attempt to formalize neural network training with heterogeneous data. Data pruning is a simple process that attempts to remove noisy data. Ordered training is a more complex method that partitions the data into a number of categories and assigns training times to each, assuming that data size and training time have a polynomial relation. Both methods derive from a set of premises that form the 'axiomatic' basis of our theory. Both methods have been applied to a time-delay neural network, which is one of the main learners in Microsoft's Tablet PC handwriting recognition system. Their effect is presented in this paper along with a rough estimate of their effect on the overall multi-learner system. The handwriting data and the chosen language are Italian. PMID:16095874

  15. A Novel Higher Order Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Xu, Shuxiang

    2010-05-01

    In this paper a new Higher Order Neural Network (HONN) model is introduced and applied in several data mining tasks. Data Mining extracts hidden patterns and valuable information from large databases. A hyperbolic tangent function is used as the neuron activation function for the new HONN model. Experiments are conducted to demonstrate the advantages and disadvantages of the new HONN model, when compared with several conventional Artificial Neural Network (ANN) models: Feedforward ANN with the sigmoid activation function; Feedforward ANN with the hyperbolic tangent activation function; and Radial Basis Function (RBF) ANN with the Gaussian activation function. The experimental results seem to suggest that the new HONN holds higher generalization capability as well as abilities in handling missing data.
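    A second-order HONN unit of the general kind described augments the usual linear sum with pairwise product terms before the hyperbolic tangent activation. The weights below are illustrative assumptions, not the paper's model:

```python
import math
from itertools import combinations

def honn_unit(x, w1, w2, b):
    """Second-order higher-order unit: linear terms plus weighted
    pairwise products of inputs, squashed with tanh."""
    s = b + sum(w * xi for w, xi in zip(w1, x))
    s += sum(w * x[i] * x[j]
             for w, (i, j) in zip(w2, combinations(range(len(x)), 2)))
    return math.tanh(s)

x = [0.5, -1.0, 2.0]
w1 = [0.1, 0.2, -0.1]
w2 = [0.3, -0.2, 0.1]   # weights for x0*x1, x0*x2, x1*x2
y = honn_unit(x, w1, w2, 0.0)
```

    The product terms let a single unit represent input interactions that a first-order unit would need a hidden layer to capture.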

  16. Privacy-preserving backpropagation neural network learning.

    PubMed

    Chen, Tingting; Zhong, Sheng

    2009-10-01

    With the development of distributed computing environments, many learning problems now have to deal with distributed input data. To enhance cooperation in learning, it is important to address the privacy concern of each data holder by extending the privacy preservation notion to original learning algorithms. In this paper, we focus on preserving privacy in an important learning model, multilayer neural networks. We present a privacy-preserving two-party distributed algorithm of backpropagation which allows a neural network to be trained without requiring either party to reveal her data to the other. We provide complete correctness and security analysis of our algorithms. The effectiveness of our algorithms is verified by experiments on various real world data sets. PMID:19709975

  17. Automatic breast density classification using neural network

    NASA Astrophysics Data System (ADS)

    Arefan, D.; Talebpour, A.; Ahmadinejhad, N.; Kamali Asl, A.

    2015-12-01

    According to studies, the risk of breast cancer is directly associated with breast density. Much research has been done on the automatic diagnosis of breast density using mammography. In the current study, artifacts in mammograms are removed using image processing techniques, and with the method presented here, which detects points on the pectoral muscle edges and estimates the edge using regression techniques, the pectoral muscle is detected with high accuracy and the breast tissue is fully automatically extracted. In order to classify mammography images into three categories (fatty, glandular, dense), a feature based on the difference in gray levels between hard tissue and soft tissue in mammograms has been used in addition to the statistical features, with a neural network classifier with a hidden layer. The image database used in this research is the mini-MIAS database, and the maximum accuracy of the system in classifying images has been reported as 97.66% with 8 hidden layers in the neural network.

  18. On analog implementations of discrete neural networks

    SciTech Connect

    Beiu, V.; Moore, K.R.

    1998-12-01

    The paper will show that in order to obtain minimum-size neural networks (i.e., size-optimal) for implementing any Boolean function, the nonlinear activation function of the neurons has to be the identity function. The authors shall shortly present many results dealing with the approximation capabilities of neural networks, and detail several bounds on the size of threshold gate circuits. Based on a constructive solution for Kolmogorov's superpositions, they will show that implementing Boolean functions can be done using neurons having an identity nonlinear function. It follows that size-optimal solutions can be obtained only using analog circuitry. Conclusions and several comments on the required precision end the paper.

  19. Neural Networks For Demodulation Of Phase-Modulated Signals

    NASA Technical Reports Server (NTRS)

    Altes, Richard A.

    1995-01-01

    Hopfield neural networks proposed for demodulating quadrature phase-shift-keyed (QPSK) signals carrying digital information. Networks solve nonlinear integral equations that prior demodulation circuits cannot solve. Consists of set of N operational amplifiers connected in parallel, with weighted feedback from output terminal of each amplifier to input terminals of other amplifiers. Used to solve signal-processing problems. Implemented as analog very-large-scale integrated circuit that achieves rapid convergence. Alternatively, implemented as digital simulation of such circuit. Also used to improve phase-estimation performance over that of phase-locked loop.
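    The rapid convergence mentioned comes from the Hopfield network's energy function: with a symmetric, zero-diagonal weight matrix, asynchronous updates can never raise the energy, so the state settles. This digital-simulation sketch uses small illustrative weights, not a QPSK demodulation matrix:

```python
def energy(state, W, theta):
    """Hopfield energy: E = -0.5 * sum_ij W_ij s_i s_j + sum_i theta_i s_i."""
    n = len(state)
    e = -0.5 * sum(W[i][j] * state[i] * state[j]
                   for i in range(n) for j in range(n))
    return e + sum(t * s for t, s in zip(theta, state))

def hopfield_run(state, W, theta, sweeps=5):
    """Asynchronous sign updates; with symmetric W and zero diagonal
    the energy is non-increasing, so the network settles."""
    energies = [energy(state, W, theta)]
    for _ in range(sweeps):
        for i in range(len(state)):
            h = sum(W[i][j] * state[j] for j in range(len(state))) - theta[i]
            state[i] = 1 if h >= 0 else -1
            energies.append(energy(state, W, theta))
    return state, energies

W = [[0, 1, -1], [1, 0, 1], [-1, 1, 0]]   # symmetric, zero diagonal
theta = [0.0, 0.0, 0.0]
final, energies = hopfield_run([1, -1, 1], W, theta)
```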

  20. Backstepping Control Augmented by Neural Networks For Robot Manipulators

    NASA Astrophysics Data System (ADS)

    Belkheiri, Mohammed; Boudjema, Farès

    2008-06-01

    A new control approach is proposed to address the tracking problem of robot manipulators. In this approach, one relies first on a partially known model to the system to be controlled using a backstepping control strategy. The obtained controller is then augmented by an online neural network that serves as an approximator for the neglected dynamics and modeling errors. The proposed approach is systematic, and exploits the known nonlinear dynamics to derive the stepwise virtual stabilizing control laws. At the final step, an augmented Lyapunov function is introduced to derive the adaptation laws of the network weights. The effectiveness of the proposed controller is demonstrated through computer simulation on PUMA 560 robot.

  1. Hybrid interior point training of modular neural networks.

    PubMed

    Szymanski, P T; Lemmon, M; Bett, C J

    1998-03-01

    Modular neural networks use a single gating neuron to select the outputs of a collection of agent neurons. Expectation-maximization (EM) algorithms provide one way of training modular neural networks to approximate non-linear functionals. This paper introduces a hybrid interior-point (HIP) algorithm for training modular networks. The HIP algorithm combines an interior-point linear programming (LP) algorithm with a Newton-Raphson iteration in such a way that the computational efficiency of the interior point LP methods is preserved. The algorithm is formally proven to converge asymptotically to locally optimal networks with a total computational cost that scales in a polynomial manner with problem size. Simulation experiments show that the HIP algorithm produces networks whose average approximation error is better than that of EM-trained networks. These results also demonstrate that the computational cost of the HIP algorithm scales at a slower rate than the EM-procedure and that, for small-size networks, the total computational costs of both methods are comparable. PMID:12662833
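    The gating arrangement described (one gating neuron selecting among agent outputs) can be sketched with a softmax gate mixing the agents. The agents and gate below are hypothetical stand-in callables, not networks trained by the HIP algorithm:

```python
import math

def modular_net(x, agents, gate):
    """Gating neuron produces softmax mixing weights over agent outputs."""
    scores = [g(x) for g in gate]
    mx = max(scores)                       # shift for numerical stability
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    return sum((e / z) * agent(x) for e, agent in zip(exps, agents))

# Two hypothetical agents; the gate strongly prefers agent 0 for
# negative inputs and agent 1 for positive inputs.
agents = [lambda x: -1.0, lambda x: 1.0]
gate = [lambda x: -20.0 * x, lambda x: 20.0 * x]
y_neg = modular_net(-1.0, agents, gate)
y_pos = modular_net(1.0, agents, gate)
```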

  2. Chimera-like States in Modular Neural Networks

    PubMed Central

    Hizanidis, Johanne; Kouvaris, Nikos E.; Zamora-López, Gorka; Díaz-Guilera, Albert; Antonopoulos, Chris G.

    2016-01-01

    Chimera states, namely the coexistence of coherent and incoherent behavior, were previously analyzed in complex networks. However, they have not been extensively studied in modular networks. Here, we consider a neural network inspired by the connectome of the C. elegans soil worm, organized into six interconnected communities, where neurons obey chaotic bursting dynamics. Neurons are assumed to be connected with electrical synapses within their communities and with chemical synapses across them. As our numerical simulations reveal, the coaction of these two types of coupling can shape the dynamics in such a way that chimera-like states can happen. They consist of a fraction of synchronized neurons which belong to the larger communities, and a fraction of desynchronized neurons which are part of smaller communities. In addition to the Kuramoto order parameter ρ, we also employ other measures of coherence, such as the chimera-like χ and metastability λ indices, which quantify the degree of synchronization among communities and along time, respectively. We perform the same analysis for networks that share common features with the C. elegans neural network. Similar results suggest that under certain assumptions, chimera-like states are prominent phenomena in modular networks, and might provide insight for the behavior of more complex modular networks. PMID:26796971
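    The Kuramoto order parameter ρ used above has a compact definition: the magnitude of the mean phase vector, equal to 1 under full synchrony and near 0 for incoherent phases. A minimal sketch (illustrative phases, not the C. elegans simulation data):

```python
import cmath
import math

def order_parameter(phases):
    """Kuramoto order parameter: |mean of exp(i * phase)|.
    1 for identical phases, ~0 for phases spread uniformly."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

sync = order_parameter([0.3] * 8)                                  # coherent
spread = order_parameter([2 * math.pi * k / 8 for k in range(8)])  # incoherent
```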

  3. Neural network error correction for solving coupled ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
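    The quantity such a network learns is the per-step residual between the numerical integrator and the true solution. As a hedged sketch on a model problem (y' = -y rather than the MD system of the record), the residuals collected below are exactly what would serve as training targets for the correction network:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Model problem y' = -y with exact solution y(t) = exp(-t).
f = lambda t, y: -y
h, y, t = 0.1, 1.0, 0.0
residuals = []   # exact - numerical: the correction targets
for _ in range(10):
    y = rk4_step(f, t, y, h)
    t += h
    residuals.append(math.exp(-t) - y)
```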

  4. Neural Simulations on Multi-Core Architectures

    PubMed Central

    Eichner, Hubert; Klug, Tobias; Borst, Alexander

    2009-01-01

    Neuroscience is witnessing increasing knowledge about the anatomy and electrophysiological properties of neurons and their connectivity, leading to an ever increasing computational complexity of neural simulations. At the same time, a rather radical change in personal computer technology emerges with the establishment of multi-cores: high-density, explicitly parallel processor architectures for both high-performance and standard desktop computers. This work introduces strategies for the parallelization of biophysically realistic neural simulations based on the compartmental modeling technique and results of such an implementation, with a strong focus on multi-core architectures and automation, i.e., user-transparent load balancing. PMID:19636393

  5. Neural network with dynamically adaptable neurons

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    This invention is an adaptive neuron for use in neural network processors. The adaptive neuron participates in the supervised learning phase of operation on a co-equal basis with the synapse matrix elements by adaptively changing its gain in a similar manner to the change of weights in the synapse elements. In this manner, training time is decreased by as much as three orders of magnitude.

  6. Reconstructing irregularly sampled images by neural networks

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Yellott, John I., Jr.

    1989-01-01

    Neural-network-like models of receptor position learning and interpolation function learning are being developed as models of how the human nervous system might handle the problems of keeping track of the receptor positions and interpolating the image between receptors. These models may also be of interest to designers of image processing systems desiring the advantages of a retina-like image sampling array.

  7. Artificial neural network cardiopulmonary modeling and diagnosis

    DOEpatents

    Kangas, Lars J.; Keller, Paul E.

    1997-01-01

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.

  8. Artificial neural network cardiopulmonary modeling and diagnosis

    DOEpatents

    Kangas, L.J.; Keller, P.E.

    1997-10-28

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis. 12 figs.

  9. Analog hardware for learning neural networks

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P. (Inventor)

    1991-01-01

    This is a recurrent or feedforward analog neural network processor having a multi-level neuron array and a synaptic matrix for storing weighted analog values of synaptic connection strengths which is characterized by temporarily changing one connection strength at a time to determine its effect on system output relative to the desired target. That connection strength is then adjusted based on the effect, whereby the processor is taught the correct response to training examples connection by connection.
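The connection-by-connection scheme can be sketched in software (a hedged analogue of the perturb-and-measure idea, not the patented analog hardware; the toy task is an illustrative assumption): each connection strength is temporarily changed, its effect on the loss is measured, and the weight is then adjusted in the improving direction.

```python
import numpy as np

# Hedged software analogue of the perturb-and-measure idea: each
# connection strength is temporarily changed, its effect on the loss is
# measured, and the weight is then adjusted in the improving direction.
# The toy task (a 2-input linear map) is an illustrative assumption.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (40, 2))
y = X[:, :1] + 0.5 * X[:, 1:]           # target outputs

W = rng.normal(0.0, 0.1, (2, 1))        # connection strengths

def loss(W):
    return float(((X @ W - y) ** 2).mean())

delta, lr = 1e-3, 0.5
for _ in range(200):                    # training sweeps
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            base = loss(W)
            W[i, j] += delta            # temporary perturbation
            effect = (loss(W) - base) / delta
            W[i, j] -= delta            # restore the connection
            W[i, j] -= lr * effect      # adjust based on measured effect
print(loss(W) < 1e-3)
```

The finite difference plays the role of the hardware measurement of "effect on system output relative to the desired target"; no analytic gradient is ever formed.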

  10. Hybrid pyramid/neural network object recognition

    NASA Astrophysics Data System (ADS)

    Anandan, P.; Burt, Peter J.; Pearson, John C.; Spence, Clay D.

    1994-02-01

    This work concerns computationally efficient computer vision methods for the search for and identification of small objects in large images. The approach combines neural network pattern recognition with pyramid-based coarse-to-fine search, in a way that eliminates the drawbacks of each method when used by itself and, in addition, improves object identification through learning and exploiting the low-resolution image context associated with the objects. The presentation will describe the system architecture and the performance on illustrative problems.

  11. Nonvolatile Array Of Synapses For Neural Network

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Elements of array programmed with help of ultraviolet light. A 32 x 32 very-large-scale integrated-circuit array of electronic synapses serves as building-block chip for analog neural-network computer. Synaptic weights stored in nonvolatile manner. Makes information content of array invulnerable to loss of power, and, by eliminating need for circuitry to refresh volatile synaptic memory, makes architecture simpler and more compact.

  12. Diagnosing process faults using neural network models

    SciTech Connect

    Buescher, K.L.; Jones, R.D.; Messina, M.J.

    1993-11-01

    In order to be of use for realistic problems, a fault diagnosis method should have the following three features. First, it should apply to nonlinear processes. Second, it should not rely on extensive amounts of data regarding previous faults. Lastly, it should detect faults promptly. The authors present such a scheme for static (i.e., non-dynamic) systems. It involves using a neural network to create an associative memory whose fixed points represent the normal behavior of the system.
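The fixed-point idea can be illustrated with a linear autoassociative sketch (my construction, not the authors' network): fit the subspace of normal behavior from fault-free data only, then flag readings whose reconstruction residual is large.

```python
import numpy as np

# Hedged linear sketch (my construction, not the authors' network): the
# normal behavior of a static system lies on a low-dimensional manifold;
# a fault is flagged when a reading sits far from its auto-associative
# reconstruction.
rng = np.random.default_rng(3)
normal = rng.normal(0.0, 1.0, (500, 1)) @ np.array([[1.0, 2.0, -1.0]])
normal += rng.normal(0.0, 0.01, normal.shape)   # sensor noise

# Principal-subspace auto-association learned from normal data only.
U, _, _ = np.linalg.svd(normal.T @ normal)
P = U[:, :1] @ U[:, :1].T               # projector onto normal subspace

def residual(x):
    return float(np.linalg.norm(x - x @ P))

ok = np.array([2.0, 4.0, -2.0])         # obeys the learned relation
fault = np.array([2.0, 4.0, 2.0])       # violates it
print(residual(fault) > 10.0 * residual(ok))
```

This matches the abstract's three requirements in miniature: no fault library is needed (only normal data), detection is immediate, and a nonlinear associative memory could replace the linear projector.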

  13. Learning in Neural Networks: VLSI Implementation Strategies

    NASA Technical Reports Server (NTRS)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  14. Adaptive Filtering Using Recurrent Neural Networks

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.

  15. Applying neural networks to optimize instrumentation performance

    SciTech Connect

    Start, S.E.; Peters, G.G.

    1995-06-01

    Well calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature sensitive level probe used in Experimental Breeder Reactor II (EBR-II) that was difficult to calibrate.

  16. Neural network architectures to analyze OPAD data

    NASA Technical Reports Server (NTRS)

    Whitaker, Kevin W.

    1992-01-01

    A prototype Optical Plume Anomaly Detection (OPAD) system is now installed on the space shuttle main engine (SSME) Technology Test Bed (TTB) at MSFC. The OPAD system requirements dictate the need for fast, efficient data processing techniques. To address this need of the OPAD system, a study was conducted into how artificial neural networks could be used to assist in the analysis of plume spectral data.

  17. Neural Network Solves "Traveling-Salesman" Problem

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.

    1990-01-01

    Experimental electronic neural network solves "traveling-salesman" problem. Plans round trip of minimum distance among N cities, visiting every city once and only once (without backtracking). This problem is paradigm of many problems of global optimization (e.g., routing or allocation of resources) occurring in industry, business, and government. Applied to large number of cities (or resources), circuits of this kind expected to solve problem faster and more cheaply.
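The optimization target can be stated compactly (a hedged sketch of the problem, not the electronic circuit): with 4 cities at the corners of a unit square, the minimum-length closed tour that the network is meant to settle into is the square's perimeter, which a brute-force check over all visit orders confirms.

```python
import numpy as np
from itertools import permutations

# Hedged sketch of the optimization target (not the electronic circuit):
# with 4 cities at the corners of a unit square, the shortest closed tour
# that the network is meant to settle into is the square's perimeter.
cities = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.0]])
D = np.linalg.norm(cities[:, None] - cities[None, :], axis=-1)

def tour_length(order):
    return sum(D[order[i], order[(i + 1) % len(order)]]
               for i in range(len(order)))

best = min(permutations(range(4)), key=tour_length)
print(round(tour_length(best), 3))      # → 4.0, the square's perimeter
```

Exhaustive search is feasible only for tiny N; the point of the analog network is to approach such minima quickly when N is large.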

  18. Program PSNN (Plasma Spectroscopy Neural Network)

    SciTech Connect

    Morgan, W.L.; Larsen, J.T.

    1993-08-01

    This program uses the standard ``delta rule`` back-propagation supervised training algorithm for multi-layer neural networks. The inputs are line intensities in arbitrary units, which are then normalized within the program. The outputs are T_e (eV), N_e (cm^-3), and a fractional ionization, which, in our testing using H- and He-like spectra, was N(He)/[N(H) + N(He)].

  19. Analysis of IMS spectra using neural networks

    SciTech Connect

    Bell, S.E.

    1992-09-01

    Ion mobility spectrometry (IMS) has been used for over 20 years, and IMS coupled to gas chromatography (GC/IMS) has been used for over 10 years. There still is no systematic approach to IMS spectral interpretation such as exists for mass spectrometry and infrared spectrometry. Neural networks, a form of adaptive pattern recognition, were examined as a method of data reduction for IMS and GC/IMS. A wide variety of volatile organics were analyzed using IMS and GC/IMS and submitted to different networks for identification. Several different networks and data preprocessing algorithms were studied. A network was linked to a simple rule-based expert system and analyzed. The expert system was used to filter out false positive identifications made by the network using retention indices. The various network configurations were compared to other pattern recognition techniques, including human experts. The network performance was comparable to human experts, but responded much faster. Preliminary comparison of the network to other pattern recognition showed comparable performance. Linkage of the network output to the rule-based retention index system yielded the best performance.

  20. Analysis of IMS spectra using neural networks

    SciTech Connect

    Bell, S.E.

    1992-01-01

    Ion mobility spectrometry (IMS) has been used for over 20 years, and IMS coupled to gas chromatography (GC/IMS) has been used for over 10 years. There still is no systematic approach to IMS spectral interpretation such as exists for mass spectrometry and infrared spectrometry. Neural networks, a form of adaptive pattern recognition, were examined as a method of data reduction for IMS and GC/IMS. A wide variety of volatile organics were analyzed using IMS and GC/IMS and submitted to different networks for identification. Several different networks and data preprocessing algorithms were studied. A network was linked to a simple rule-based expert system and analyzed. The expert system was used to filter out false positive identifications made by the network using retention indices. The various network configurations were compared to other pattern recognition techniques, including human experts. The network performance was comparable to human experts, but responded much faster. Preliminary comparison of the network to other pattern recognition showed comparable performance. Linkage of the network output to the rule-based retention index system yielded the best performance.

  1. Neural Network Approach To Sensory Fusion

    NASA Astrophysics Data System (ADS)

    Pearson, John C.; Gelfand, Jack J.; Sullivan, W. E.; Peterson, Richard M.; Spence, Clay D.

    1988-08-01

    We present a neural network model for sensory fusion based on the design of the visual/acoustic target localization system of the barn owl. This system adaptively fuses its separate visual and acoustic representations of object position into a single joint representation used for head orientation. The building block in this system, as in much of the brain, is the neuronal map. Neuronal maps are large arrays of locally interconnected neurons that represent information in a map-like form, that is, parameter values are systematically encoded by the position of neural activation in the array. The computational load is distributed to a hierarchy of maps, and the computation is performed in stages by transforming the representation from map to map via the geometry of the projections between the maps and the local interactions within the maps. For example, azimuthal position is computed from the frequency and binaural phase information encoded in the signals of the acoustic sensors, while elevation is computed in a separate stream using binaural intensity information. These separate streams are merged in their joint projection onto the external nucleus of the inferior colliculus, a two dimensional array of cells which contains a map of acoustic space. This acoustic map, and the visual map of the retina, jointly project onto the optic tectum, creating a fused visual/acoustic representation of position in space that is used for object localization. In this paper we describe our mathematical model of the stage of visual/acoustic fusion in the optic tectum. The model assumes that the acoustic projection from the external nucleus onto the tectum is roughly topographic and one-to-many, while the visual projection from the retina onto the tectum is topographic and one-to-one. A simple process of self-organization alters the strengths of the acoustic connections, effectively forming a focused beam of strong acoustic connections whose inputs are coincident with the visual inputs.

  2. An Improved Back Propagation Neural Network Algorithm on Classification Problems

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Ransing, R. S.; Salleh, Mohd Najib Mohd; Ghazali, Rozaida; Hamid, Norhamreeza Abdul

    The back propagation algorithm is one of the most popular algorithms to train feed forward neural networks. However, the convergence of this algorithm is slow, mainly because of the gradient descent algorithm it employs. Previous research demonstrated that in the 'feed forward' algorithm, the slope of the activation function is directly influenced by a parameter referred to as 'gain'. This research proposed an algorithm for improving the performance of the back propagation algorithm by introducing an adaptive gain of the activation function. The gain values change adaptively for each node. The influence of the adaptive gain on the learning ability of a neural network is analysed. Multilayer feed forward neural networks have been assessed. A physical interpretation of the relationship between the gain value and the learning rate and weight values is given. The efficiency of the proposed algorithm is compared with the conventional gradient descent method and verified by means of simulation on four classification problems. In learning the patterns, the simulation results demonstrate that the proposed method converged faster on the Wisconsin breast cancer data set with an improvement ratio of nearly 2.8, 1.76 on the diabetes problem, 65% better on the thyroid data sets and 97% faster on the IRIS classification problem. The results clearly show that the proposed algorithm significantly improves the learning speed of the conventional back-propagation algorithm.
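The gain parameter's effect on the slope can be checked directly (an illustration of the relationship, not the paper's training algorithm; the sample values are arbitrary):

```python
import numpy as np

# Illustration of the gain/slope relationship (not the paper's training
# algorithm): with gain c, f(x) = 1 / (1 + exp(-c x)) and the slope is
# df/dx = c f (1 - f), so adapting c rescales each node's sensitivity.
def sigmoid(x, c):
    return 1.0 / (1.0 + np.exp(-c * x))

x, c = 0.3, 2.0
f = sigmoid(x, c)
analytic = c * f * (1.0 - f)
numeric = (sigmoid(x + 1e-6, c) - sigmoid(x - 1e-6, c)) / 2e-6
print(abs(analytic - numeric) < 1e-6)
```

Because the slope multiplies the backpropagated error, a per-node gain acts like a per-node learning-rate factor, which is the relationship the paper exploits.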

  3. Incremental communication for multilayer neural networks: error analysis.

    PubMed

    Ghorbani, A A; Bhavsar, V C

    1998-01-01

    Artificial neural networks (ANNs) involve a large amount of internode communication. To reduce the communication cost as well as the time of the learning process in ANNs, we earlier proposed (1995) an incremental internode communication method. In the incremental communication method, instead of communicating the full magnitude of the output value of a node, only the increment or decrement to its previous value is sent over a communication link. In this paper, the effects of the limited precision incremental communication method on the convergence behavior and performance of multilayer neural networks are investigated. The nonlinear aspects of representing the incremental values with reduced (limited) precision for the commonly used error backpropagation training algorithm are analyzed. It is shown that the nonlinear effect of small perturbations in the input(s)/output of a node does not cause instability. The analysis is supported by simulation studies of two problems. The simulation results demonstrate that the limited precision errors are bounded and do not seriously affect the convergence of multilayer neural networks. PMID:18252431
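A minimal sketch of the incremental scheme (the precision step is an illustrative assumption, not a value from the paper): the sender transmits only quantized increments, and because the sender mirrors the receiver's accumulated state, the error never grows beyond half the quantization step.

```python
import numpy as np

# Hedged sketch of incremental communication: a node transmits only the
# (reduced-precision) change since its last sent value; the receiver
# accumulates increments. The quantization step is an illustrative choice.
def quantize(v, step):
    return np.round(v / step) * step

rng = np.random.default_rng(0)
outputs = np.cumsum(rng.normal(0.0, 0.1, 100))  # node output over time
step = 2.0 ** -8                                # increment precision

sent, recv, recv_hist = 0.0, 0.0, []
for v in outputs:
    inc = quantize(v - sent, step)    # only the increment is transmitted
    recv += inc
    sent += inc                       # sender mirrors the receiver state
    recv_hist.append(recv)

err = np.abs(np.array(recv_hist) - outputs).max()
print(err <= step / 2)                # the error stays bounded
```

This bounded-error behavior is the property the paper's convergence analysis rests on.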

  4. The next generation of neural network chips

    SciTech Connect

    Beiu, V.

    1997-08-01

    There have been many national and international neural networks research initiatives: USA (DARPA, NIBS), Canada (IRIS), Japan (HFSP) and Europe (BRAIN, GALATEA, NERVES, ELENA, NERVES 2) -- just to mention a few. Recent developments in the field of neural networks, cognitive science, bioengineering and electrical engineering have made it possible to understand more about the functioning of large ensembles of identical processing elements. There are more research papers than ever proposing solutions and hardware implementations are by no means an exception. Two fields (computing and neuroscience) are interacting in ways nobody could imagine just several years ago, and -- with the advent of new technologies -- researchers are focusing on trying to copy the Brain. Such an exciting confluence may quite shortly lead to revolutionary new computers and it is the aim of this invited session to bring to light some of the challenging research aspects dealing with the hardware realizability of future intelligent chips. Present-day (conventional) technology is (still) mostly digital and, thus, occupies wider areas and consumes much more power than the solutions envisaged. The innovative algorithmic and architectural ideas should represent important breakthroughs, paving the way towards making neural network chips available to the industry at competitive prices, in relatively small packages and consuming a fraction of the power required by equivalent digital solutions.

  5. CALIBRATION OF ONLINE ANALYZERS USING NEURAL NETWORKS

    SciTech Connect

    Rajive Ganguli; Daniel E. Walsh; Shaohai Yu

    2003-12-05

    Neural networks were used to calibrate an online ash analyzer at the Usibelli Coal Mine, Healy, Alaska, by relating the Americium and Cesium counts to the ash content. A total of 104 samples were collected from the mine, with 47 being from screened coal, and the rest being from unscreened coal. Each sample corresponded to 20 seconds of coal on the running conveyor belt. Neural network modeling used the quick stop training procedure. Therefore, the samples were split into training, calibration and prediction subsets. Special techniques, using genetic algorithms, were developed to representatively split the sample into the three subsets. Two separate approaches were tried. In one approach, the screened and unscreened coal was modeled separately. In another, a single model was developed for the entire dataset. No advantage was seen from modeling the two subsets separately. The neural network method performed very well on average but not individually, i.e., though each prediction was unreliable, the average of a few predictions was close to the true average. Thus, the method demonstrated that the analyzers were accurate at 2-3 minute intervals (average of 6-9 samples), but not at 20 seconds (each prediction).

  6. Efficient implementation of neural network deinterlacing

    NASA Astrophysics Data System (ADS)

    Seo, Guiwon; Choi, Hyunsoo; Lee, Chulhee

    2009-02-01

    Interlaced scanning has been widely used in most broadcasting systems. However, there are some undesirable artifacts such as jagged patterns, flickering, and line twitter. Moreover, most recent TV monitors utilize flat panel display technologies such as LCD or PDP, and these monitors require progressive formats. Consequently, the conversion of interlaced video into progressive video is required in many applications, and a number of deinterlacing methods have been proposed. Recently, deinterlacing methods based on neural networks have been proposed with good results. On the other hand, with high-resolution video contents such as HDTV, the amount of video data to be processed is very large. As a result, the processing time and hardware complexity become an important issue. In this paper, we propose an efficient implementation of neural network deinterlacing using polynomial approximation of the sigmoid function. Experimental results show that these approximations provide equivalent performance with a considerable reduction of complexity. This implementation of neural network deinterlacing can be efficiently incorporated in hardware implementations.
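A hedged sketch of the idea (the degree, range, and coefficients are fitted here, not taken from the paper): replace the sigmoid with a low-order polynomial on a bounded input range, avoiding the exponential that is expensive in hardware.

```python
import numpy as np

# Hedged sketch: fit a degree-5 polynomial to the sigmoid on [-4, 4].
# The degree and range are illustrative choices, not the paper's values.
x = np.linspace(-4.0, 4.0, 401)
sig = 1.0 / (1.0 + np.exp(-x))
coeffs = np.polyfit(x, sig, 5)       # least-squares polynomial fit
approx = np.polyval(coeffs, x)
max_err = float(np.abs(approx - sig).max())
print(max_err < 0.05)                # small error, no exp() required
```

In a fixed-point pipeline the polynomial reduces to a handful of multiply-accumulates per activation, which is the complexity reduction the paper reports.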

  7. Shale Gas reservoirs characterization using neural network

    NASA Astrophysics Data System (ADS)

    Ouadfeul, Sid-Ali; Aliouane, Leila

    2014-05-01

    In this paper, an attempt to enhance shale gas reservoir characterization from well-log data using a neural network is established. The goal is to predict the total organic carbon (TOC) in boreholes where no TOC core rock or TOC well-log measurement exists. A multilayer perceptron (MLP) neural network with three layers is established. The MLP input layer consists of five neurons corresponding to the bulk density, neutron porosity, sonic P-wave slowness and photoelectric absorption coefficient. The hidden layer is formed with nine neurons and the output layer with one neuron corresponding to the TOC log. Application to two boreholes located in the Barnett shale formation, where well A is used as a pilot and well B for propagation, clearly shows the efficiency of the neural network method in improving shale gas reservoir characterization. The established formalism plays an important role in the economics of shale gas plays and in long-term gas energy production.

  8. Analysis of complex systems using neural networks

    SciTech Connect

    Uhrig, R.E. (Dept. of Nuclear Engineering; Oak Ridge National Lab., TN)

    1992-01-01

    The application of neural networks, alone or in conjunction with other advanced technologies (expert systems, fuzzy logic, and/or genetic algorithms), to some of the problems of complex engineering systems has the potential to enhance the safety, reliability, and operability of these systems. Typically, the measured variables from the systems are analog variables that must be sampled and normalized to expected peak values before they are introduced into neural networks. Often data must be processed to put it into a form more acceptable to the neural network (e.g., a fast Fourier transformation of the time-series data to produce a spectral plot of the data). Specific applications described include: (1) Diagnostics: State of the Plant (2) Hybrid System for Transient Identification, (3) Sensor Validation, (4) Plant-Wide Monitoring, (5) Monitoring of Performance and Efficiency, and (6) Analysis of Vibrations. Although specific examples described deal with nuclear power plants or their subsystems, the techniques described can be applied to a wide variety of complex engineering systems.

  9. Analysis of complex systems using neural networks

    SciTech Connect

    Uhrig, R.E.

    1992-12-31

    The application of neural networks, alone or in conjunction with other advanced technologies (expert systems, fuzzy logic, and/or genetic algorithms), to some of the problems of complex engineering systems has the potential to enhance the safety, reliability, and operability of these systems. Typically, the measured variables from the systems are analog variables that must be sampled and normalized to expected peak values before they are introduced into neural networks. Often data must be processed to put it into a form more acceptable to the neural network (e.g., a fast Fourier transformation of the time-series data to produce a spectral plot of the data). Specific applications described include: (1) Diagnostics: State of the Plant (2) Hybrid System for Transient Identification, (3) Sensor Validation, (4) Plant-Wide Monitoring, (5) Monitoring of Performance and Efficiency, and (6) Analysis of Vibrations. Although specific examples described deal with nuclear power plants or their subsystems, the techniques described can be applied to a wide variety of complex engineering systems.

  10. Multiresolution training of Kohonen neural networks

    NASA Astrophysics Data System (ADS)

    Tamir, Dan E.

    2007-09-01

    This paper analyses a trade-off between convergence rate and distortion obtained through multi-resolution training of a Kohonen competitive neural network. Empirical results show that a multi-resolution approach can improve the training stage of several unsupervised pattern classification algorithms including K-means clustering, LBG vector quantization, and competitive neural networks. While previous research concentrated on the convergence rate of on-line unsupervised training, new results reported in this paper show that the multi-resolution approach can be used to improve training quality (measured as a derivative of the rate distortion function) at the expense of convergence speed. The probability of achieving a desired point in the quality/convergence-rate space of Kohonen competitive neural networks (KCNN) is evaluated using a detailed Monte Carlo set of experiments. It is shown that multi-resolution can reduce the distortion by a factor of 1.5 to 6 while maintaining the convergence rate of traditional KCNN. Alternatively, the convergence rate can be improved without loss of quality. The experiments include a controlled set of synthetic data as well as image data. Experimental results are reported and evaluated.
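The coarse-to-fine idea can be sketched with Lloyd-style competitive updates (a simplification of the paper's KCNN, with illustrative data): train the codebook on subsampled data first, then refine at full resolution; the full-resolution refinement never increases distortion.

```python
import numpy as np

# Hedged sketch of multi-resolution training (not the paper's KCNN):
# a codebook trained on a coarse (subsampled) version of the data is
# used to initialize training at full resolution.
rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(m, 0.1, (200, 2))
                       for m in (-1.0, 0.0, 1.0)])

def lloyd(data, centers, iters):
    for _ in range(iters):
        idx = np.argmin(((data[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(len(centers)):
            if np.any(idx == k):
                centers[k] = data[idx == k].mean(0)
    return centers

def distortion(data, centers):
    return ((data[:, None] - centers[None]) ** 2).sum(-1).min(1).mean()

init = rng.normal(0.0, 1.0, (3, 2))
coarse = lloyd(data[::8].copy(), init.copy(), 10)   # low-resolution pass
fine = lloyd(data.copy(), coarse.copy(), 10)        # full-resolution refine
print(distortion(data, fine) <= distortion(data, coarse) + 1e-12)
```

The coarse pass is cheap (an eighth of the data per iteration), which is where the convergence-rate/quality trade-off studied in the paper comes from.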

  11. Deep learning in neural networks: an overview.

    PubMed

    Schmidhuber, Jürgen

    2015-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks. PMID:25462637

  12. Neural network method for characterizing video cameras

    NASA Astrophysics Data System (ADS)

    Zhou, Shuangquan; Zhao, Dazun

    1998-08-01

    This paper presents a neural network method for characterizing a color video camera. A multilayer feedforward network, with the error back-propagation learning rule for training, is used as a nonlinear transformer to model the camera, realizing a mapping from the CIELAB color space to RGB color space. With a SONY video camera, D65 illuminant, Pritchard spectroradiometer, 410 JIS color charts as training data and 36 charts as testing data, results show that the mean error on the training data is 2.9 and that on the testing data is 4.0 in a 256^3 RGB space.

  13. A space-time neural network

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Shelton, Robert O.

    1991-01-01

    Introduced here is a novel technique which adds the dimension of time to the well known back propagation neural network algorithm. Cited here are several reasons why the inclusion of automated spatial and temporal associations are crucial to effective systems modeling. An overview of other works which also model spatiotemporal dynamics is furnished. A detailed description is given of the processes necessary to implement the space-time network algorithm. Several demonstrations that illustrate the capabilities and performance of this new architecture are given.

  14. Prediction of Universal Time using the artificial neural network

    NASA Astrophysics Data System (ADS)

    Richard, J. Y.; Lopes, P.; Barache, C.; Bizouard, C.; Gambis, D.

    2014-12-01

    The monitoring of the Earth Orientation Parameters (EOP) variations is the main task of the Earth Orientation Center of the IERS. In addition, for various applications linked in particular to navigation, precise orbit determination or leap-second announcements, short- and long-term predictions are required. The method currently applied for predictions is based on deterministic processes, least-squares fitting and autoregressive filtering (Gambis and Luzum 2011). We present an alternative method, Artificial Neural Networks (ANN), which have already been successfully applied for pattern recognition. They have been tested as well by various authors for EOP predictions, but with so far no real improvement compared to the current methods (Schuh et al. 2002). New formalisms recently developed allow reconsidering the use of neural networks for EOP predictions. Series of simulations were performed for both short- and long-term predictions. Statistics and comparisons with the current methods are presented.

  15. Relabeling exchange method (REM) for learning in neural networks

    NASA Astrophysics Data System (ADS)

    Wu, Wen; Mammone, Richard J.

    1994-02-01

    The supervised training of neural networks requires the use of output labels, which are usually arbitrarily assigned. In this paper it is shown that there is a significant difference in the rms error of learning when `optimal' label assignment schemes are used. We have investigated two efficient random search algorithms to solve the relabeling problem: simulated annealing and the genetic algorithm. However, we found them to be computationally expensive. Therefore we introduce a new heuristic algorithm called the Relabeling Exchange Method (REM), which is computationally more attractive and produces optimal performance. REM has been used to organize the optimal structure for multi-layered perceptrons and neural tree networks. The method is a general one and can be implemented as a modification to standard training algorithms. The motivation for the new relabeling strategy is based on the present interpretation of dyslexia as an encoding problem.

  16. Neural Network and Regression Approximations Used in Aircraft Design

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Lavelle, Thomas M.

    1999-01-01

    NASA Lewis Research Center's CometBoards Test Bed was used to create regression and neural-network models for a High-Speed Civil Transport (HSCT) aircraft. Both approximation models, which replace the actual analysis tool, predict the aircraft response with trivial computational effort. The models allow engineers to quickly study the effects of design variables on constraint and objective values for a given aircraft configuration. For example, an engineer can change the engine size by 1000 pounds of thrust and immediately see how this change affects all the output values without rerunning the entire simulation. Likewise, an engineer can change a constraint and use the approximation models to quickly reoptimize the configuration. Generating the neural-network and regression models is a time-consuming process, but this exercise has to be carried out only once. Furthermore, an automated process can reduce the calculations substantially.
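The surrogate-model workflow described above can be illustrated in miniature (this is a generic sketch, not the CometBoards models): sample an "expensive" analysis at a handful of design points once, fit a cheap regression, then sweep the design variable instantly through the surrogate.

```python
import numpy as np

def expensive_analysis(thrust):
    # Stand-in for a full aircraft simulation (hypothetical response curve)
    return 0.002 * thrust ** 2 - 1.5 * thrust + 400.0

# One-time cost: run the analysis tool at design-of-experiments points
samples = np.linspace(100.0, 900.0, 25)
responses = expensive_analysis(samples)

# Fit a quadratic regression surrogate to the sampled responses
coef = np.polyfit(samples, responses, deg=2)
surrogate = np.poly1d(coef)

# Thereafter, design sweeps query the surrogate instead of the analysis
quick = surrogate(np.array([250.0, 500.0, 750.0]))
```

A trained neural network would fill the same role as `surrogate` here; the key point is that the expensive sampling is done once, after which every query is essentially free.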

  17. Model for a neural network structure and signal transmission

    NASA Astrophysics Data System (ADS)

    Kotsavasiloglou, C.; Kalampokis, A.; Argyrakis, P.; Baloyannis, S.

    1997-10-01

    We present a model of a neural network based on the diffusion-limited-aggregation (DLA) structure from fractal physics. A single neuron is one DLA cluster, while a large number of interconnected clusters make up the neural network. Using simulation techniques, a signal is randomly generated and traced through its transmission inside a neuron and from neuron to neuron through the synapses. The activity of the entire network is monitored as a function of time. The model's characteristics include, among others, a threshold for firing, the excitatory or inhibitory character of each synapse, the synaptic delay, and the refractory period. The system activity results in ``noisy'' time series that exhibit an oscillatory character. Standard power spectra are evaluated and fractal analyses are performed, showing that the system is not chaotic, but that varying the parameters can be associated with specific values of the fractal dimension. It is found that the network activity is not linear in the system parameters, e.g., in the number of active synapses. The details of this behavior may have interesting repercussions from the neurological point of view.
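A single DLA cluster of the kind used here to represent one neuron can be grown with the textbook random-walker construction. The following minimal on-lattice sketch (illustrative parameters, not the authors' code) launches walkers just outside the cluster and freezes them on contact:

```python
import numpy as np

def grow_dla(n_particles=30, size=33, seed=0):
    """Grow one diffusion-limited-aggregation cluster on a square lattice."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=bool)
    c = size // 2
    grid[c, c] = True                         # seed site at the center
    rmax = 0                                  # current cluster radius
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    stuck = 1
    while stuck < n_particles:
        # Launch a walker on a circle just outside the cluster
        r_launch = min(rmax + 4, c - 2)
        ang = rng.uniform(0.0, 2.0 * np.pi)
        x = c + int(round(r_launch * np.cos(ang)))
        y = c + int(round(r_launch * np.sin(ang)))
        while 0 < x < size - 1 and 0 < y < size - 1:
            if grid[x - 1:x + 2, y - 1:y + 2].any():
                grid[x, y] = True             # stick next to the cluster
                stuck += 1
                rmax = max(rmax, abs(x - c), abs(y - c))
                break
            dx, dy = steps[rng.integers(4)]   # unbiased lattice step
            x, y = x + dx, y + dy             # walker escaping the box is relaunched
    return grid
```

The resulting boolean grid is the branched fractal "neuron"; in the model above, many such clusters are then interconnected through synapses.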

  18. Neural networks for sensor fusion of meteorological measurements

    NASA Astrophysics Data System (ADS)

    Yee, Young P.; Measure, Edward M.; Cogan, James L.; Bleiweis, Max

    2001-03-01

    The collection and management of vast quantities of meteorological data, including satellite-based as well as ground-based measurements, presents great challenges for the optimal use of this information. To address these issues, the Army Laboratory has developed neural networks for combining multi-sensor meteorological data for Army battlefield weather-forecasting models. As a demonstration of this data-fusion methodology, multi-sensor data were taken from the Meteorological Measurement Set Profiler (MMSP-POC) system and from satellites with orbits coinciding with the geographical locations of interest. The MMS Profiler-POC comprises a suite of remote-sensing instrumentation and surface measuring devices. Neural-network techniques were used to retrieve temperature and wind information from a combination of polar-orbiter and/or geostationary satellite observations and ground-based measurements. Back-propagation neural networks were constructed that take satellite radiances, simulated microwave-radiometer measurements, and other ground-based measurements as inputs and produce temperature and wind profiles as outputs. The networks were trained with rawinsonde measurements as truth values. The final outcome will be an integrated, merged temperature/wind profile from the surface up to the upper troposphere.
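The fusion idea can be shown in miniature with a hand-rolled back-propagation network (synthetic data, not the Profiler system): two noisy "sensor" readings of the same quantity are combined by a small network into a single estimate that is better than either sensor alone.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.uniform(-1.0, 1.0, size=(500, 1))          # quantity to retrieve
# Two sensors observing the same truth with different noise levels
X = np.hstack([truth + 0.1 * rng.standard_normal(truth.shape),
               truth + 0.3 * rng.standard_normal(truth.shape)])

# One hidden layer of 8 tanh units, linear output
W1 = 0.5 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                 # forward pass: hidden layer
    y = h @ W2 + b2                          # forward pass: output
    err = y - truth
    # Back-propagate the mean-squared-error gradient
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)       # tanh derivative
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

rmse = float(np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - truth) ** 2)))
```

Here the noisier sensor alone has an rms error of about 0.3; the trained network fuses both channels into a noticeably better estimate, which is the essence of the retrieval scheme described above.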

  19. Desynchronization in diluted neural networks

    SciTech Connect

    Zillmer, Ruediger; Livi, Roberto; Politi, Antonio; Torcini, Alessandro

    2006-09-15

    The dynamical behavior of a weakly diluted, fully inhibitory network of pulse-coupled spiking neurons is investigated. Upon increasing the coupling strength, a transition from a regular to a stochastic-like regime is observed. In the weak-coupling phase, a periodic dynamics is rapidly approached, with all neurons firing at the same rate and mutually phase locked. The strong-coupling phase is characterized by an irregular pattern, even though the maximum Lyapunov exponent is negative. The paradox is resolved by drawing an analogy with the phenomenon of 'stable chaos', i.e., by observing that the stochastic-like behavior is 'limited' to a transient that is exponentially long in the system size. Remarkably, the transient dynamics turns out to be stationary.
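A pulse-coupled, fully inhibitory network can be caricatured with a simple event-driven phase model (a schematic stand-in with illustrative parameters, not the authors' model): each neuron's phase grows linearly, and when it reaches threshold the neuron fires, resets, and pulls every other phase down by the coupling strength.

```python
import numpy as np

def simulate(n=20, g=0.02, n_spikes=200, seed=3):
    """Event-driven simulation of n pulse-coupled inhibitory phase oscillators."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 1.0, n)          # random initial phases in [0, 1)
    spike_times, t = [], 0.0
    for _ in range(n_spikes):
        k = int(np.argmax(phase))             # next neuron to reach threshold
        dt = 1.0 - phase[k]                   # time until it fires
        t += dt
        phase += dt                           # all phases advance together
        phase[k] = 0.0                        # firing neuron resets
        mask = np.arange(n) != k
        phase[mask] = np.maximum(phase[mask] - g, 0.0)  # inhibitory pulse
        spike_times.append((t, k))
    return spike_times
```

With weak coupling such a model settles into the phase-locked regime described above; probing stronger coupling and random dilution of the connections is what produces the long stochastic-like transients studied in the paper.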

  20. S.E.U. experiments on an artificial neural network implemented by means of digital processors

    SciTech Connect

    Velazco, R.; Assoum, A.; Cheynet, P.; Olmos, M.

    1996-12-01

    The SEU sensitivity of an artificial neural network intended for use in space to detect protonic whistlers is investigated. The authors evaluate its behavior in the presence of SEU-like faults for a hardware implementation that associates a general-purpose microprocessor with a dedicated neural processor. Experimental results (SEU simulations and heavy-ion ground tests) show the robustness of this implementation.
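Software SEU injection of the kind used in such evaluations amounts to flipping single bits in stored parameters and observing the output deviation. The sketch below is a hedged illustration (a toy perceptron, not the authors' setup): one bit of a 32-bit weight is flipped and the dot-product output is compared before and after the upset.

```python
import struct
import numpy as np

def flip_bit(value, bit):
    """Flip one bit in the IEEE-754 single-precision encoding of value."""
    (as_int,) = struct.unpack('<I', struct.pack('<f', float(value)))
    (flipped,) = struct.unpack('<f', struct.pack('<I', as_int ^ (1 << bit)))
    return flipped

# Toy single-neuron weights and input (illustrative values)
weights = np.array([0.5, -0.25, 1.0], dtype=np.float32)
x = np.array([1.0, 2.0, 0.5], dtype=np.float32)
clean = float(weights @ x)

# Inject an upset into a high exponent bit of the first weight
faulty = weights.copy()
faulty[0] = flip_bit(faulty[0], 30)
upset = float(faulty @ x)
```

Flipping a high exponent bit turns a modest weight into an astronomically large one, so the output deviates wildly; flipping a low mantissa bit would barely perturb it, which is why fault-injection campaigns sweep over bit positions.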