Science.gov

Sample records for cellular neural networks

  1. Realization problem of multi-layer cellular neural networks.

    PubMed

    Ban, Jung-Chao; Chang, Chih-Hung

    2015-10-01

    This paper investigates whether the output space of a multi-layer cellular neural network can be realized by a single-layer cellular neural network, in the sense that a finite-to-one map exists from one output space to the other. Whenever such a realization exists, the phenomena exhibited in the output space of the revealed single-layer cellular neural network are at most a constant multiple of the phenomena exhibited in the output space of the original multi-layer cellular neural network. Meanwhile, the computational complexity of a single-layer system is much lower than that of a multi-layer system; namely, one can trade the precision of the results for the execution time. We remark that a routine extension of the proposed methodology can be applied to the substitution of hidden spaces, although the detailed illustration is omitted.
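
    As background for the records on this page: a standard single-layer (Chua-Yang) cellular neural network evolves each cell's state under shared local feedback (A) and control (B) templates. The sketch below is a generic illustration, not the construction from this paper; the templates, grid size, and step size are arbitrary assumptions.

```python
import numpy as np

def sat(x):
    # standard CNN output nonlinearity: piecewise-linear saturation to [-1, 1]
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def conv3(img, t):
    # 3x3 template applied with periodic boundary (for brevity)
    return sum(t[i, j] * np.roll(np.roll(img, 1 - i, 0), 1 - j, 1)
               for i in range(3) for j in range(3))

def cnn_step(x, u, A, B, z, dt=0.05):
    # forward-Euler step of dx/dt = -x + A*y + B*u + z, with y = sat(x)
    return x + dt * (-x + conv3(sat(x), A) + conv3(u, B) + z)

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, (8, 8))                        # input image
x = np.zeros((8, 8))                                  # initial state
A = np.array([[0, 0, 0], [0, 2.0, 0], [0, 0, 0]])     # self-feedback only (uncoupled)
B = np.array([[0, 0, 0], [0, 1.0, 0], [0, 0, 0]])
for _ in range(400):
    x = cnn_step(x, u, A, B, z=0.0)
y = sat(x)                                            # outputs saturate to +/-1
```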

  2. Shunting inhibitory cellular neural networks with chaotic external inputs

    NASA Astrophysics Data System (ADS)

    Akhmet, M. U.; Fen, M. O.

    2013-06-01

    Taking advantage of external inputs, it is shown that shunting inhibitory cellular neural networks behave chaotically. The analysis is based on the Li-Yorke definition of chaos. Appropriate illustrations which support the theoretical results are depicted.

  3. Shunting inhibitory cellular neural networks with chaotic external inputs.

    PubMed

    Akhmet, M U; Fen, M O

    2013-06-01

    Taking advantage of external inputs, it is shown that shunting inhibitory cellular neural networks behave chaotically. The analysis is based on the Li-Yorke definition of chaos. Appropriate illustrations which support the theoretical results are depicted.
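
    A minimal numerical illustration of the setting above (not the authors' construction): a shunting inhibitory CNN of the Bouzerdoum-Pinter type, dx_ij/dt = -a*x_ij - c*(sum of f over the neighborhood)*x_ij + L_ij(t), driven by a chaotic logistic-map external input. All parameter values are illustrative assumptions.

```python
import numpy as np

def logistic_seq(x0, n, r=3.9):
    # chaotic logistic map supplying the external input L(t)
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

def sicnn_simulate(steps=2000, dt=0.01, a=1.0, c=0.1):
    """Euler simulation of a 3x3 shunting inhibitory CNN with f = tanh and a
    logistic-map input held constant between samples (periodic boundary)."""
    rng = np.random.default_rng(1)
    x = rng.uniform(-0.1, 0.1, (3, 3))
    L = logistic_seq(0.4, steps)
    traj = []
    for t in range(steps):
        f = np.tanh(x)
        inhib = sum(np.roll(np.roll(f, di, 0), dj, 1)
                    for di in (-1, 0, 1) for dj in (-1, 0, 1))
        x = x + dt * (-a * x - c * inhib * x + L[t])
        traj.append(x.copy())
    return np.array(traj)

traj = sicnn_simulate()   # bounded trajectories driven by a chaotic input
```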

  4. Template learning of cellular neural network using genetic programming.

    PubMed

    Radwan, Elsayed; Tazaki, Eiichiro

    2004-08-01

    A new learning algorithm for space-invariant uncoupled cellular neural networks is introduced. Learning is formulated as an optimization problem. Genetic programming has been selected for creating new knowledge because it allows the system to find new rules both near to good ones and far from them, looking for unknown good control actions. According to the lattice cellular neural network architecture, genetic programming is used to derive the cloning template. Exploration of any stable domain is possible with the current approach. Details of the algorithm are discussed and several application results are shown.
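
    The record gives no details of the genetic programming operators, so the sketch below substitutes a plain elitist mutation search (explicitly not GP) to show template learning cast as optimization: a 3x3 cloning template is fitted so that its thresholded convolution output matches a hypothetical target edge map.

```python
import numpy as np

rng = np.random.default_rng(42)

# toy task: white square on black; the target is its edge map
u = np.zeros((16, 16)); u[4:12, 4:12] = 1.0
lap = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]])

def conv3(img, t):
    # 3x3 template with periodic boundary
    return sum(t[i, j] * np.roll(np.roll(img, 1 - i, 0), 1 - j, 1)
               for i in range(3) for j in range(3))

target = (conv3(u, lap) > 0).astype(float)

def fitness(B):
    # uncoupled-CNN-style thresholded steady output vs. the target
    out = (conv3(u, B) > 0).astype(float)
    return np.mean(out != target)          # error rate to minimize

best = rng.normal(0, 1, (3, 3))
best_err = fitness(best)
errs = [best_err]
for _ in range(300):                       # elitist mutation search
    cand = best + rng.normal(0, 0.3, (3, 3))
    e = fitness(cand)
    if e < best_err:
        best, best_err = cand, e
    errs.append(best_err)
```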

  5. Modeling a Nonlinear Liquid Level System by Cellular Neural Networks

    NASA Astrophysics Data System (ADS)

    Hernandez-Romero, Norberto; Seck-Tuoh-Mora, Juan Carlos; Gonzalez-Hernandez, Manuel; Medina-Marin, Joselito; Flores-Romero, Juan Jose

    This paper presents the analogue simulation of a nonlinear liquid level system composed of two tanks; the system is controlled using the methodology of exact linearization via state feedback, implemented by cellular neural networks (CNNs). The relevance of this manuscript is to show how a block diagram representing the analogue modeling and control of a nonlinear dynamical system can be implemented and regulated by CNNs, whose cells may contain numerical values or arithmetic and control operations. In this way the dynamical system is modeled by a set of locally interacting elements without the need for a central supervisor.

  6. Video sequence compression via supervised training on cellular neural networks.

    PubMed

    Rodríguez, L; Zufiria, P J; Berzal, J A

    1997-02-01

    In this paper, a novel approach for video sequence compression using Cellular Neural Networks (CNN's) is presented. CNN's are nets characterized by local interconnections between neurons (usually called cells), and can be modeled as dynamical systems. From among many different types, a CNN model operating in discrete-time (DT-CNN) has been chosen, its parameters being defined so that they are shared among all the cells in the network. The compression process proposed in this work is based on the possibility of replicating a given video sequence as a trajectory generated by the DT-CNN. In order for the CNN to follow a prescribed trajectory, a supervised training algorithm is implemented. Compression is achieved due to the fact that all the information contained in the sequence can be stored into a small number of parameters and initial conditions once training is stopped. Different improvements upon the basic formulation are analyzed and issues such as feasibility and complexity of the compression problem are also addressed. Finally, some examples with real video sequences illustrate the applicability of the method.
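
    The core idea above, storing a sequence as the trajectory of a trained DT-CNN, can be sketched as follows; the templates here are random (untrained), so this only shows the trajectory generator, not the compression quality.

```python
import numpy as np

def sat(x):
    # piecewise-linear saturation, the standard CNN output function
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

def dtcnn_trajectory(x0, u, A, B, z, steps):
    """Discrete-time CNN: x(t+1) = A*y(t) + B*u + z with y = sat(x), the
    templates shared by all cells (space invariance, periodic boundary)."""
    conv = lambda img, t: sum(
        t[i, j] * np.roll(np.roll(img, 1 - i, 0), 1 - j, 1)
        for i in range(3) for j in range(3))
    x, frames = x0, []
    for _ in range(steps):
        x = conv(sat(x), A) + conv(u, B) + z
        frames.append(sat(x))             # one "video frame" per iteration
    return frames

rng = np.random.default_rng(0)
A = rng.normal(0, 0.2, (3, 3))
B = rng.normal(0, 0.2, (3, 3))
frames = dtcnn_trajectory(np.zeros((8, 8)), rng.uniform(-1, 1, (8, 8)),
                          A, B, 0.1, 10)
```

    Compression then amounts to keeping only the templates, bias, and initial condition instead of the frames themselves.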

  7. When are two multi-layer cellular neural networks the same?

    PubMed

    Ban, Jung-Chao; Chang, Chih-Hung

    2016-07-01

    This paper aims to characterize whether a multi-layer cellular neural network is of deep architecture; namely, when can an n-layer cellular neural network be replaced by an m-layer cellular neural network for m < n? Whenever such a replacement is possible, the essential structure of the network is revealed.

  8. Effects of cellular homeostatic intrinsic plasticity on dynamical and computational properties of biological recurrent neural networks.

    PubMed

    Naudé, Jérémie; Cessac, Bruno; Berry, Hugues; Delord, Bruno

    2013-09-18

    Homeostatic intrinsic plasticity (HIP) is a ubiquitous cellular mechanism regulating neuronal activity, cardinal for the proper functioning of nervous systems. In invertebrates, HIP is critical for orchestrating stereotyped activity patterns. The functional impact of HIP remains more obscure in vertebrate networks, where higher order cognitive processes rely on complex neural dynamics. The hypothesis has emerged that HIP might control the complexity of activity dynamics in recurrent networks, with important computational consequences. However, conflicting results about the causal relationships between cellular HIP, network dynamics, and computational performance have arisen from machine-learning studies. Here, we assess how cellular HIP effects translate into collective dynamics and computational properties in biological recurrent networks. We develop a realistic multiscale model including a generic HIP rule regulating the neuronal threshold with actual molecular signaling pathways kinetics, Dale's principle, sparse connectivity, synaptic balance, and Hebbian synaptic plasticity (SP). Dynamic mean-field analysis and simulations unravel that HIP sets a working point at which inputs are transduced by large derivative ranges of the transfer function. This cellular mechanism ensures increased network dynamics complexity, robust balance with SP at the edge of chaos, and improved input separability. Although critically dependent upon balanced excitatory and inhibitory drives, these effects display striking robustness to changes in network architecture, learning rates, and input features. Thus, the mechanism we unveil might represent a ubiquitous cellular basis for complex dynamics in neural networks. Understanding this robustness is an important challenge to unraveling principles underlying self-organization around criticality in biological recurrent neural networks. PMID:24048833

  10. Exponential synchronization for fuzzy cellular neural networks with time-varying delays and nonlinear impulsive effects.

    PubMed

    Pu, Hao; Liu, Yanmin; Jiang, Haijun; Hu, Cheng

    2015-08-01

    In this paper, the global exponential synchronization of delayed fuzzy cellular neural networks with nonlinear impulsive effects is investigated. By utilizing inequality techniques and the Lyapunov functional method, some sufficient conditions for exponential synchronization are obtained based on the [Formula: see text]-norm. Finally, a simulation example is given to illustrate the effectiveness of the theoretical results.
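
    A toy drive-response example (an assumption-laden stand-in for the paper's delayed fuzzy CNN model) showing the kind of exponential synchronization such conditions guarantee: a linear control k*(x - y) makes the response unit track the drive unit.

```python
import numpy as np

def simulate_sync(k=5.0, dt=0.01, steps=3000):
    """Drive-response pair of scalar CNN-like units (illustrative model, not
    the paper's network): the response adds a linear synchronizing control
    k*(x - y), forcing exponential decay of the synchronization error."""
    x, y = 0.5, -0.8                       # mismatched initial states
    err = []
    for n in range(steps):
        t = n * dt
        dx = -x + 2.0 * np.tanh(x) + np.sin(t)
        dy = -y + 2.0 * np.tanh(y) + np.sin(t) + k * (x - y)
        x, y = x + dt * dx, y + dt * dy    # forward Euler
        err.append(abs(x - y))
    return err

err = simulate_sync()                      # error shrinks exponentially
```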

  11. Functional recognition imaging using artificial neural networks: applications to rapid cellular identification via broadband electromechanical response.

    PubMed

    Nikiforov, M P; Reukov, V V; Thompson, G L; Vertegel, A A; Guo, S; Kalinin, S V; Jesse, S

    2009-10-01

    Functional recognition imaging in scanning probe microscopy (SPM) using artificial neural network identification is demonstrated. This approach utilizes statistical analysis of complex SPM responses at a single spatial location to identify the target behavior, which is reminiscent of associative thinking in the human brain, obviating the need for analytical models. We demonstrate, as an example of recognition imaging, rapid identification of cellular organisms using the difference in electromechanical activity over a broad frequency range. Single-pixel identification of model Micrococcus lysodeikticus and Pseudomonas fluorescens bacteria is achieved, demonstrating the viability of the method.

  12. New color image encryption algorithm based on compound chaos mapping and hyperchaotic cellular neural network

    NASA Astrophysics Data System (ADS)

    Li, Jinqing; Bai, Fengming; Di, Xiaoqiang

    2013-01-01

    We propose an image encryption/decryption algorithm based on chaotic control parameters and a hyperchaotic system with a composite permutation-diffusion structure. Compound chaos mapping is used to generate control parameters in the permutation stage, where the high correlation between pixels is shuffled away. In the diffusion stage, compound chaos mapping with different initial conditions and control parameters generates the diffusion parameters, which are applied to hyperchaotic cellular neural networks. The diffusion key stream obtained by this process carries out the diffusion of the pixels. Both simulation and statistical analysis show that, compared with existing methods, the proposed algorithm performs well against attacks and meets the corresponding security level.
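
    A simplified, invertible permutation-diffusion round trip (logistic maps stand in here for both the compound chaos mapping and the hyperchaotic-CNN keystream, and this toy diffusion stage omits ciphertext chaining):

```python
import numpy as np

def keystream(x0, n, r=3.99):
    # logistic-map sequence standing in for the chaotic generators
    out, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

def encrypt(img, key=(0.31, 0.57)):
    """Toy permutation-diffusion cipher: one chaotic sequence orders the pixel
    permutation, a second is quantized into an XOR keystream."""
    flat = img.flatten()
    perm = np.argsort(keystream(key[0], flat.size))          # permutation stage
    ks = (keystream(key[1], flat.size) * 256).astype(np.uint8)
    return flat[perm] ^ ks, perm, ks                         # diffusion stage

def decrypt(cipher, perm, ks):
    shuffled = cipher ^ ks                                   # undo diffusion
    flat = np.empty_like(shuffled)
    flat[perm] = shuffled                                    # undo permutation
    return flat

rng = np.random.default_rng(7)
img = rng.integers(0, 256, (8, 8), dtype=np.uint8)
cipher, perm, ks = encrypt(img)
recovered = decrypt(cipher, perm, ks).reshape(8, 8)
```

    In the real scheme the keystream would come from the hyperchaotic CNN state rather than a single logistic map, and the diffusion would chain ciphertext pixels for avalanche behavior.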

  13. Functional Recognition Imaging Using Artificial Neural Networks: Applications to Rapid Cellular Identification by Broadband Electromechanical Response

    PubMed Central

    Nikiforov, M.P.; Reukov, V.V.; Thompson, G.L.; Vertegel, A.A.; Guo, S.; Jesse, S.; Kalinin, S.V.

    2010-01-01

    Functional recognition imaging in Scanning Probe Microscopy (SPM) using artificial neural network identification is demonstrated. This approach utilizes statistical analysis of complex SPM responses to identify the target behavior, reminiscent of associative thinking in the human brain and obviating the need for analytical models. As an example of recognition imaging, we demonstrate rapid identification of cellular organisms using difference in electromechanical activity in a broad frequency range. Single-pixel identification of model Micrococcus lysodeikticus and Pseudomonas fluorescens bacteria is achieved, demonstrating the viability of the method. PMID:19752493

  14. Segmentation algorithm via Cellular Neural/Nonlinear Network: implementation on Bio-inspired hardware platform

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Vecchio, Pietro; Grassi, Giuseppe

    2011-12-01

    The Bio-inspired (Bi-i) Cellular Vision System is a computing platform consisting of sensing, array sensing-processing, and digital signal processing. The platform is based on the Cellular Neural/Nonlinear Network (CNN) paradigm. This article presents the implementation of a novel CNN-based segmentation algorithm on the Bi-i system. Each part of the algorithm, along with the corresponding implementation on the hardware platform, is carefully described throughout the article. The experimental results, carried out for the Foreman and Car-phone video sequences, highlight the feasibility of the approach, which provides a frame rate of about 26 frames/s. Comparisons with existing CNN-based methods show that the conceived approach is more accurate, thus representing a good trade-off between real-time requirements and accuracy.

  15. Condition monitoring of 3G cellular networks through competitive neural models.

    PubMed

    Barreto, Guilherme A; Mota, João C M; Souza, Luis G M; Frota, Rewbenio A; Aguayo, Leonardo

    2005-09-01

    We develop an unsupervised approach to condition monitoring of cellular networks using competitive neural algorithms. Training is carried out with state vectors representing the normal functioning of a simulated CDMA2000 network. Once training is completed, global and local normality profiles (NPs) are built from the distribution of quantization errors of the training state vectors and their components, respectively. The global NP is used to evaluate the overall condition of the cellular system. If abnormal behavior is detected, local NPs are used in a component-wise fashion to find abnormal state variables. Anomaly detection tests are performed via percentile-based confidence intervals computed over the global and local NPs. We compared the performance of four competitive algorithms [winner-take-all (WTA), frequency-sensitive competitive learning (FSCL), self-organizing map (SOM), and neural-gas algorithm (NGA)] and the results suggest that the joint use of global and local NPs is more efficient and more robust than current single-threshold methods.
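
    The global normality-profile idea can be sketched with a winner-take-all-style prototype set: quantization errors of normal state vectors define a percentile threshold, and probes exceeding it are flagged. The data, prototype count, and percentile below are illustrative assumptions, not the paper's CDMA2000 setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# "normal" state vectors: two synthetic operating clusters around -1 and +1
train = rng.normal(0.0, 0.2, (500, 4)) + rng.choice([-1.0, 1.0], (500, 1))

# WTA-style prototypes refined by a few batch nearest-mean updates
protos = train[rng.choice(len(train), 8, replace=False)].copy()
for _ in range(5):
    d = np.linalg.norm(train[:, None] - protos[None], axis=2)
    win = d.argmin(1)
    for j in range(len(protos)):
        if np.any(win == j):
            protos[j] = train[win == j].mean(0)

def qe(v):
    # quantization error: distance to the best-matching prototype
    return np.linalg.norm(protos - v, axis=1).min()

errs = np.array([qe(v) for v in train])
threshold = np.percentile(errs, 99)        # upper percentile of the normality profile
is_abnormal = lambda v: qe(v) > threshold  # anomaly test for a new state vector
```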

  16. Noise-robust realization of Turing-complete cellular automata by using neural networks with pattern representation

    NASA Astrophysics Data System (ADS)

    Oku, Makito; Aihara, Kazuyuki

    2010-11-01

    A modularly-structured neural network model is considered. Each module, which we call a 'cell', consists of two parts: a Hopfield neural network model and a multilayered perceptron. An array of such cells is used to simulate the Rule 110 cellular automaton with high accuracy even when all the units of the neural networks are replaced by stochastic binary ones. We also find that noise not only degrades but also facilitates computation if the outputs of the multilayered perceptrons are below the threshold required to update the states of the cells, which is a stochastic resonance in computation.
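
    Rule 110, the Turing-complete automaton simulated by the cell array, is easy to state directly; the deterministic update below (without the stochastic neural units) is the ground truth such a network must reproduce.

```python
def rule110_step(row):
    """One synchronous update of elementary CA Rule 110 (periodic boundary)."""
    table = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
             (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    n = len(row)
    return [table[(row[(i - 1) % n], row[i], row[(i + 1) % n])]
            for i in range(n)]

row = [0] * 10 + [1] + [0] * 10     # single live cell
history = [row]
for _ in range(15):
    row = rule110_step(row)
    history.append(row)             # characteristic left-growing triangle
```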

  17. Stability analysis of switched cellular neural networks: A mode-dependent average dwell time approach.

    PubMed

    Huang, Chuangxia; Cao, Jie; Cao, Jinde

    2016-10-01

    This paper addresses the exponential stability of switched cellular neural networks by using the mode-dependent average dwell time (MDADT) approach. This method differs from the traditional average dwell time (ADT) method in permitting each subsystem to have its own average dwell time. Detailed investigations have been carried out for two cases: one in which all subsystems are stable, and one in which stable subsystems coexist with unstable subsystems. By employing Lyapunov functionals, linear matrix inequalities (LMIs), a Jessen-type inequality, a Wirtinger-based inequality, and the reciprocally convex approach, we derive some novel and less conservative conditions on the exponential stability of the networks. Compared with ADT, the proposed MDADT approach shows that the minimal dwell time of each subsystem is smaller and the switched system stabilizes faster. The obtained results extend and improve some existing ones. Moreover, the validity and effectiveness of these results are demonstrated through numerical simulations.

  18. Attractivity analysis of memristor-based cellular neural networks with time-varying delays.

    PubMed

    Guo, Zhenyuan; Wang, Jun; Yan, Zheng

    2014-04-01

    This paper presents new theoretical results on the invariance and attractivity of memristor-based cellular neural networks (MCNNs) with time-varying delays. First, sufficient conditions to assure the boundedness and global attractivity of the networks are derived. Using state-space decomposition and some analytic techniques, it is shown that the number of equilibria located in the saturation regions of the piecewise-linear activation functions of an n-neuron MCNN with time-varying delays increases significantly from 2^n to 2^(2n^2+n) (a factor of 2^(2n^2)) compared with that without a memristor. In addition, sufficient conditions for the invariance and local or global attractivity of equilibria or attractive sets in any designated region are derived. Finally, two illustrative examples are given to elaborate the characteristics of the results in detail.
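
    Reading the abstract's garbled exponent counts as 2^n without memristors and 2^(2n^2+n) with them (an interpretation, given the damaged notation), the stated 2^(2n^2)-fold increase checks out arithmetically:

```python
# equilibria in saturation regions for an n-neuron MCNN, as read from the
# abstract: 2**n without memristors, 2**(2*n**2 + n) with them
for n in (1, 2, 3):
    without = 2 ** n
    with_memristor = 2 ** (2 * n ** 2 + n)
    factor = with_memristor // without
    assert factor == 2 ** (2 * n ** 2)   # the claimed multiplicative increase
```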

  19. Application of neural networks to channel assignment for cellular CDMA networks with multiple services and mobile base stations

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    1996-03-01

    The use of artificial neural networks for the channel assignment problem in cellular code-division multiple access (CDMA) telecommunications systems is considered. CDMA takes advantage of voice activity and spatial isolation because its capacity is only interference limited, unlike time-division multiple access (TDMA) and frequency-division multiple access (FDMA), where capacities are bandwidth limited. Any reduction in interference in CDMA translates linearly into increased capacity. FDMA and TDMA use a frequency reuse pattern as a method to increase capacity, while CDMA reuses the same frequency for all cells and gains a reuse efficiency by means of orthogonal codes. The latter method can improve system capacity by factors of four to six over digital TDMA or FDMA. Cellular carriers are planning to provide multiple communication services using CDMA in the next generation cellular system infrastructure. The approach of this study is to use neural network methods for automatic and local network control, based on traffic behavior in specific cell sites and demand history. The goal is to address certain problems associated with the management of mobile and personal communication services in a cellular radio communications environment. In planning a cellular radio network, the operator assigns channels to the radio cells so that the probability of the processed carrier-to-interference ratio, C/I, exceeding a predefined value is sufficiently low. The RF propagation, determined from the topography and infrastructure in the operating area, is used in conjunction with the densities of expected communications traffic to formulate interference constraints. These constraints state which radio cells may use the same code (channel) or adjacent channels at a time. The traffic loading and the number of service grades can also be used to calculate the number of required channels (codes) for each cell. The general assignment problem is the task of assigning the required number

  20. Smart-Pixel Array Processors Based on Optimal Cellular Neural Networks for Space Sensor Applications

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Sheu, Bing J.; Venus, Holger; Sandau, Rainer

    1997-01-01

    A smart-pixel cellular neural network (CNN) with hardware annealing capability, digitally programmable synaptic weights, and a multisensor parallel interface has been under development for advanced space sensor applications. The smart-pixel CNN architecture is a programmable multi-dimensional array of optoelectronic neurons, each locally connected to its neighboring neurons and associated active-pixel sensors. Integration of the neuroprocessor in each processor node of a scalable multiprocessor system offers orders-of-magnitude computing performance enhancements for on-board real-time intelligent multisensor processing and control tasks of advanced small satellites. The smart-pixel CNN operation theory, architecture, design and implementation, and system applications are investigated in detail. The VLSI (Very Large Scale Integration) implementation feasibility was illustrated by a prototype smart-pixel 5x5 neuroprocessor array chip of active dimensions 1380 micron x 746 micron in a 2-micron CMOS technology.

  1. Pesticide residue screening using a novel artificial neural network combined with a bioelectric cellular biosensor.

    PubMed

    Ferentinos, Konstantinos P; Yialouris, Costas P; Blouchos, Petros; Moschopoulou, Georgia; Kintzios, Spyridon

    2013-01-01

    We developed a novel artificial neural network (ANN) system able to detect and classify pesticide residues. The novel ANN is coupled, in a customized way, to a cellular biosensor operating on the bioelectric recognition assay (BERA) and able to assay eight samples simultaneously in three minutes. The system was developed using the data (time series) of the electrophysiological responses of three different cultured cell lines to three different pesticide groups (carbamates, pyrethroids, and organophosphates). Using the novel system, we were able to correctly classify the presence of the investigated pesticide groups with an overall success rate of 83.6%. Considering that only 70,000-80,000 samples are tested annually in Europe with current conventional technologies (an extremely minor fraction of the actual screening needs), the system reported in the present study could become a milestone toward future screening systems in food safety control.

  2. Convergence and attractivity of memristor-based cellular neural networks with time delays.

    PubMed

    Qin, Sitian; Wang, Jun; Xue, Xiaoping

    2015-03-01

    This paper presents theoretical results on the convergence and attractivity of memristor-based cellular neural networks (MCNNs) with time delays. Based on a realistic memristor model, an MCNN is modeled using a differential inclusion. The essential boundedness of its global solutions is proven. The state of MCNNs is further proven to be convergent to a critical-point set located in saturated region of the activation function, when the initial state locates in a saturated region. It is shown that the state convergence time period is finite and can be quantitatively estimated using given parameters. Furthermore, the positive invariance and attractivity of state in non-saturated regions are also proven. The simulation results of several numerical examples are provided to substantiate the results.

  3. Modeling of tropospheric ozone concentrations using genetically trained multi-level cellular neural networks

    NASA Astrophysics Data System (ADS)

    Ozcan, H. Kurtulus; Bilgili, Erdem; Sahin, Ulku; Ucan, O. Nuri; Bayat, Cuma

    2007-09-01

    Concentrations of tropospheric ozone, an important air pollutant, are modeled by the use of an artificial intelligence structure. Data obtained from air pollution measurement stations in the city of Istanbul are utilized in constituting the model. A supervised algorithm for the evaluation of ozone concentration using a genetically trained multi-level cellular neural network (ML-CNN) is introduced, developed, and applied to real data. A genetic algorithm is used in the optimization of CNN templates. The model results and the actual measurement results are compared and statistically evaluated. It is observed that seasonal changes in ozone concentrations are reflected effectively by the concentrations estimated by the multi-level CNN model structure, with a correlation of 0.57 ascertained between actual and model results. It is shown that the multi-level CNN modeling technique is as satisfactory as other modeling techniques for associating the data in a complex medium in air pollution applications.

  4. Global detection of live virtual machine migration based on cellular neural networks.

    PubMed

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through analysis of the detection process, the parameter relationship of the CNN is mapped to an optimization problem, in which an improved particle swarm optimization algorithm based on bubble sort is used to solve the problem. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence indicates that it is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing the VM migration detection to be performed better. PMID:24959631
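
    The bubble-sort-based improvement is not described in the record; the sketch below is plain global-best particle swarm optimization on a toy objective, just to make the optimization step concrete. The bounds, coefficients, and sphere objective are illustrative assumptions.

```python
import numpy as np

def pso(f, dim=4, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain PSO minimizing f over [-5, 5]^dim (the paper's bubble-sort
    variant is not reproduced here)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))          # particle positions
    v = np.zeros((n, dim))                    # velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

g, best = pso(lambda p: np.sum(p ** 2))       # sphere function, optimum at 0
```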

  5. An Asynchronous Recurrent Network of Cellular Automaton-Based Neurons and Its Reproduction of Spiking Neural Network Activities.

    PubMed

    Matsubara, Takashi; Torikai, Hiroyuki

    2016-04-01

    Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, the traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, field-programmable gate array (FPGA) implementations confirm that the presented network requires lower computational resources.

  6. Workshop on neural networks

    SciTech Connect

    Uhrig, R.E.; Emrich, M.L.

    1990-01-01

    The topics covered in this report are: Learning, Memory, and Artificial Neural Systems; Emerging Neural Network Technology; Neural Networks; Digital Signal Processing and Neural Networks; Application of Neural Networks to In-Core Fuel Management; Neural Networks in Process Control; Neural Network Applications in Image Processing; Neural Networks for Multi-Sensor Information Fusion; Neural Network Research in Instruments Controls Division; Neural Networks Research in the ORNL Engineering Physics and Mathematics Division; Neural Network Applications for Linear Programming; Neural Network Applications to Signal Processing and Diagnostics; Neural Networks in Filtering and Control; Neural Network Research at Tennessee Technological University; and Global Minima within the Hopfield Hypercube.

  7. A universal concept based on cellular neural networks for ultrafast and flexible solving of differential equations.

    PubMed

    Chedjou, Jean Chamberlain; Kyamakya, Kyandoghere

    2015-04-01

    This paper develops and validates a comprehensive and universally applicable computational concept for solving nonlinear differential equations (NDEs) through a neurocomputing concept based on cellular neural networks (CNNs). High precision, stability, convergence, and lowest-possible memory requirements are ensured by the CNN processor architecture. A significant challenge solved in this paper is that all these cited computing features are ensured in all system states (regular or chaotic ones) and in all bifurcation conditions that may be experienced by NDEs. One particular quintessence of this paper is to develop and demonstrate a solver concept that shows and ensures that CNN processors (realized either in hardware or in software) are universal solvers of NDE models. The solving logic or algorithm of given NDEs (possible examples are: Duffing, Mathieu, Van der Pol, Jerk, Chua, Rössler, Lorenz, Burgers, and the transport equations) through a CNN processor system is provided by a set of templates that are computed by our comprehensive templates calculation technique that we call nonlinear adaptive optimization. This paper is therefore a significant contribution and represents a cutting-edge real-time computational engineering approach, especially while considering the various scientific and engineering applications of this ultrafast, energy- and memory-efficient, and high-precision NDE solver concept. For illustration purposes, three NDE models are demonstratively solved, and related CNN templates are derived and used: the periodically excited Duffing equation, the Mathieu equation, and the transport equation. PMID:25794380
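
    To make the solver idea concrete with the transport-equation example mentioned above: discretizing u_t + c*u_x = 0 in space turns each grid node into a cell coupled only to its left neighbor, i.e. a nearest-neighbor (CNN-style) template. The upwind scheme below is a generic sketch, not the authors' template calculation.

```python
import numpy as np

# transport equation u_t + c*u_x = 0 on a periodic grid: after spatial
# discretization each node is a "cell" coupled only to its left neighbor,
# which is exactly the role a CNN template plays for this PDE
c, dx, dt, n = 1.0, 0.1, 0.02, 100
x = np.arange(n) * dx
u = np.exp(-((x - 2.0) / 0.3) ** 2)     # Gaussian profile centered at 2.0
u0 = u.copy()
for _ in range(100):                     # integrate to t = 2.0 (CFL = 0.2)
    u = u + dt * (-c) * (u - np.roll(u, 1)) / dx
# the profile has advected to x = 4.0, with some upwind numerical diffusion
```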

  9. A Proposal for Energy-Efficient Cellular Neural Network Based on Spintronic Devices

    NASA Astrophysics Data System (ADS)

    Pan, Chenyun; Naeemi, Azad

    2016-09-01

    Owing to its massively parallel computing capability and outstanding image- and signal-processing performance, the cellular neural network (CNN) is a promising type of non-Boolean computing system that can outperform traditional digital logic computation and mitigate the physical scaling limits of conventional CMOS technology. The CNN was originally implemented in VLSI analog technology, with operational amplifiers and operational transconductance amplifiers serving as neurons and synapses, respectively, which is power- and area-consuming. In this paper, we propose a hybrid structure that implements the CNN with magnetic components and CMOS peripherals, including complete driving and sensing circuitry. In addition, we propose a digitally programmable magnetic synapse that can realize both positive and negative template values. After rigorous performance analyses and comparisons, optimal energy is achieved across various design parameters, including the driving voltage and the CMOS driver size. At a comparable footprint area and operating speed, a spintronic CNN is projected to achieve more than an order of magnitude energy reduction per operation compared to its CMOS counterpart.

  10. Memristor-based cellular nonlinear/neural network: design, analysis, and applications.

    PubMed

    Duan, Shukai; Hu, Xiaofang; Dong, Zhekang; Wang, Lidan; Mazumder, Pinaki

    2015-06-01

    Cellular nonlinear/neural network (CNN) has been recognized as a powerful massively parallel architecture capable of solving complex engineering problems by performing trillions of analog operations per second. The memristor was theoretically predicted in the early seventies, but it garnered renewed research interest only after the recent, much-acclaimed discovery of nanocrossbar memories by engineers at the Hewlett-Packard Laboratory. The memristor is expected to be co-integrated with nanoscale CMOS technology to revolutionize conventional von Neumann as well as neuromorphic computing. In this paper, a compact CNN model based on memristors is presented along with its performance analysis and applications. In the new CNN design, the memristor bridge circuit acts as the synaptic circuit element and substitutes for the complex multiplication circuit used in traditional CNN architectures. In addition, the negative differential resistance and nonlinear current-voltage characteristics of the memristor have been leveraged to replace the linear resistor in conventional CNNs. The proposed CNN design has several merits, for example, high density, nonvolatility, and programmability of synaptic weights. The operation of the proposed memristor-based CNN in several image-processing functions is illustrated through simulation and contrasted with conventional CNNs. Monte-Carlo simulation has been used to demonstrate the behavior of the proposed CNN under variations in the memristor synaptic weights.
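
    The memristor-bridge synapse mentioned in the abstract can be sketched with the standard Wheatstone-bridge relation: four memristances form a bridge whose differential output scales the input by a weight that can be positive, zero, or negative. The formula is the textbook bridge voltage divider; the component values below are illustrative, not taken from the paper.

```python
# Differential output of a four-memristor bridge: the synaptic weight is the
# difference of the two voltage-divider ratios, so it can take either sign.
def bridge_weight(m1, m2, m3, m4):
    return m2 / (m1 + m2) - m4 / (m3 + m4)

# A bridge synapse multiplies the input voltage by the (signed) bridge weight.
def synapse(v_in, m1, m2, m3, m4):
    return bridge_weight(m1, m2, m3, m4) * v_in
```

A balanced bridge (all memristances equal) yields a zero weight; programming the memristances away from balance sets the magnitude and sign of the synaptic weight.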

  11. Modeling urban land use changes in Lanzhou based on artificial neural network and cellular automata

    NASA Astrophysics Data System (ADS)

    Xu, Xibao; Zhang, Jianming; Zhou, Xiaojian

    2008-10-01

    This paper presents a model to simulate urban land-use changes based on an artificial neural network (ANN) and cellular automata (CA). The model operates at the intra-urban level with a fine land-use categorization; it was developed in Matlab 7.2 and loosely coupled with GIS. An urban land-use system is a complicated non-linear social system influenced by many factors. In this paper, a total of 17 factors in four categories (physical, socio-economic, neighborhood, and policy) were considered together. The ANN calibrates the CA model: through training it acquires the multitudinous parameters that substitute for explicit, complex transition rules. A stochastic perturbation parameter v was added to the model, and five scenarios with different values of v and of the threshold were designed for simulations and predictions to explore their effects on urban land-use changes. Simulations for 2005 and predictions for 2015 under the five scenarios were made and evaluated. Finally, the advantages and disadvantages of the model are discussed.
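
    The ANN-CA coupling described here can be sketched in a few lines: a trained network scores each cell's conversion probability from site factors, a stochastic perturbation v is mixed in, and cells whose score exceeds a threshold change state. The logistic "ANN" and all factor values below are stand-ins for the trained network and the 17 real variables of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Logistic stand-in for the trained ANN: maps each cell's factor vector to a
# conversion probability in (0, 1).
def ann_score(factors, weights, bias):
    z = factors @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))

# One CA step: perturb the ANN score with the stochastic parameter v and
# convert cells (state 1 = urban) whose score exceeds the threshold.
def ca_step(state, factors, weights, bias, v=0.1, threshold=0.8):
    p = ann_score(factors, weights, bias)
    p = p * (1.0 + v * rng.standard_normal(p.shape))
    return np.where(p > threshold, 1, state)

state = np.zeros(100, dtype=int)          # 100 non-urban cells
factors = rng.random((100, 4))            # 4 illustrative site factors per cell
new_state = ca_step(state, factors, np.ones(4), 0.0)
```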

  12. A new approach to the structural features of the Aegean Sea: Cellular neural network

    NASA Astrophysics Data System (ADS)

    Aydogan, Davut; Elmas, Ali; Albora, A. Muhittin; Ucan, Osman N.

    2005-03-01

    In this study, structural features of the Aegean Sea were investigated by applying Cellular Neural Network (CNN) and cross-correlation methods to the gravity anomaly map. CNN is a stochastic image-processing technique based on template optimization using the neighbourhood relationships of pixels and the probabilistic properties of two-dimensional (2-D) input data. The performance of CNN can be evaluated on various real applications in geophysics, such as edge detection, data enhancement, and the separation of regional/residual potential anomaly maps. In this study, CNN is used for edge detection of geological bodies close to the surface that are masked by other structures of various depths and dimensions. CNN was first tested on (prismatic) synthetic examples, and satisfactory results were obtained. Subsequently, CNN/cross-correlation maps and bathymetric features were evaluated together to obtain a new structural map for most of the Aegean Sea. In our structural map, the locations of the faults and basins are generally in accordance with previous maps of restricted areas based on seismic data. In the southern and southeastern parts of the Aegean Sea, E-W trending faults cut NE-SW trending basins and faults, as in onshore Western Anatolia. Also, in the western, central and northern parts of the Aegean Sea, all of these structures are truncated by NE-trending faults.
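
    The CNN edge-detection use described above can be sketched as a single feedforward pass with the textbook Laplacian-like control template B and a saturating output nonlinearity, applied to a synthetic prismatic anomaly. The template values and bias are the classic edge-detection choice, not the optimized templates of the paper.

```python
import numpy as np

# Textbook edge-detection control template B (center-surround).
B = np.array([[-1, -1, -1],
              [-1,  8, -1],
              [-1, -1, -1]], dtype=float)

# One feedforward CNN-style pass: correlate with B, add the bias, and apply
# the CNN's saturating output nonlinearity (clip to [-1, 1]).
def cnn_edge(u, bias=-0.5):
    h, w = u.shape
    up = np.pad(u, 1)
    x = np.zeros_like(u, dtype=float)
    for i in range(h):
        for j in range(w):
            x[i, j] = np.sum(B * up[i:i+3, j:j+3]) + bias
    return np.clip(x, -1, 1)

img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0    # synthetic prismatic anomaly
edges = cnn_edge(img)                           # saturates to +1 on the rim
```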

  13. Neural Networks

    SciTech Connect

    Smith, Patrick I.

    2003-09-23

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals are fed into pattern-recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. Many techniques are used in this pattern-recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results. Before a neural network can be implemented, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]; the word "artificial" here refers to computer programs that carry out calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing
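
    The "learning by representative examples" idea in this abstract can be illustrated with the smallest possible case: a single perceptron trained on labeled points. This is generic textbook code, not the JAS3 tool kit the report describes.

```python
# Train a single perceptron: on each labeled example, nudge the weights and
# bias in proportion to the prediction error.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]; b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1; w[1] += lr * err * x2; b += lr * err
    return w, b

# Learn logical OR from its four representative examples.
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
target = [0, 1, 1, 1]
w, b = train_perceptron(data, target)
```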

  14. Intrinsic Cellular Properties and Connectivity Density Determine Variable Clustering Patterns in Randomly Connected Inhibitory Neural Networks

    PubMed Central

    Rich, Scott; Booth, Victoria; Zochowski, Michal

    2016-01-01

    The plethora of inhibitory interneurons in the hippocampus and cortex plays a pivotal role in generating rhythmic activity by clustering and synchronizing cell firing. Results of our simulations demonstrate that both the intrinsic cellular properties of neurons and the degree of network connectivity affect the characteristics of clustered dynamics exhibited in randomly connected, heterogeneous inhibitory networks. We quantify intrinsic cellular properties by the neuron's current-frequency relation (IF curve) and phase response curve (PRC), a measure of how perturbations given at various phases of a neuron's firing cycle affect subsequent spike timing. We analyze network bursting properties of networks of neurons with Type I or Type II properties in both excitability and PRC profile; Type I PRCs strictly show phase advances, and Type I IF curves exhibit frequencies arbitrarily close to zero at firing threshold, while Type II PRCs display both phase advances and delays, and Type II IF curves have a non-zero frequency at threshold. Type II neurons whose properties arise with or without an M-type adaptation current are considered. We analyze network dynamics under different levels of cellular heterogeneity and as the intrinsic cellular firing frequency and the time scale of decay of synaptic inhibition are varied. Many of the dynamics exhibited by these networks diverge from the predictions of the interneuron network gamma (ING) mechanism, as well as from results in all-to-all connected networks. Our results show that randomly connected networks of Type I neurons synchronize into a single cluster of active neurons, while networks of Type II neurons organize into two mutually exclusive clusters segregated by the cells' intrinsic firing frequencies. Networks of Type II neurons containing the adaptation current behave similarly to networks of either Type I or Type II neurons depending on network parameters; however, the adaptation current creates differences in the cluster dynamics
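
    The PRC distinction used in this abstract can be sketched with the canonical textbook forms: a Type I PRC is non-negative (a perturbation can only advance the next spike), while a Type II PRC changes sign (it can advance or delay depending on the phase at which it arrives). The functional forms and amplitude below are standard illustrations, not the PRCs measured in the paper.

```python
import math

# Canonical Type I PRC: proportional to (1 - cos), so phase advances only.
def prc_type1(phi, eps=0.1):
    return eps * (1 - math.cos(2 * math.pi * phi))

# Canonical Type II PRC: proportional to -sin, so advances and delays.
def prc_type2(phi, eps=0.1):
    return -eps * math.sin(2 * math.pi * phi)

# Phase of the oscillator after a perturbation arriving at phase phi.
def next_phase(phi, prc):
    return (phi + prc(phi)) % 1.0
```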

  15. A New Approach for Border Detection of the Dumluca (Turkey) Iron Ore Area: Wavelet Cellular Neural Networks

    NASA Astrophysics Data System (ADS)

    Albora, A. Muhittin; Bal, Abdullah; Ucan, Osman N.

    2007-01-01

    Anomaly analysis is used for various geophysics applications such as determination of geophysical structure's location and border detections. Besides the classical geophysical techniques, artificial intelligence based image processing algorithms have been found attractive for geophysical anomaly analysis. Recently, cellular neural networks (CNN) have been applied to geophysical data and satisfactory results are reported. CNN provides fast and parallel computational capability for geophysical image processing applications due to its filtering structure. The behavior of CNN is defined by two template matrices that are adjusted by a properly supervised learning algorithm. After training stage for geophysical data, Bouguer anomaly maps can be processed and analyzed sequentially. In this paper, CNN learning and processing capability have been improved, combining Wavelet functions and backpropagation learning algorithms. The new architecture is denoted as Wavelet-Cellular Neural networks (Wave-CNN) and it is employed to analyze Bouguer anomaly maps which are important to extract useful information in geophysics. At first, Wave-CNN performance is tested on synthetic geophysical data, which are created by a computer environment. Then, Bouguer anomaly maps of the Dumluca iron ore field have been analyzed and results are reported in comparison to real drilling results.

  16. A cardiac electrical activity model based on a cellular automata system in comparison with neural network model.

    PubMed

    Khan, Muhammad Sadiq Ali; Yousuf, Sidrah

    2016-03-01

    Cardiac electrical activity is distributed through the three dimensions of cardiac tissue (the myocardium) and evolves over time. Indicators of heart disease can occur at random at any time of day, so heart rate, conduction, and each electrical event of the cardiac cycle should be monitored non-invasively to assess regular ("action potential") and irregular ("arrhythmia") rhythms. Many heart conditions can be examined through automata models such as cellular automata. This paper addresses the different states of cardiac rhythm using cellular automata, in comparison with a neural network model, and provides a fast and highly effective simulation of the contraction of the cardiac muscles of the atria resulting from the genesis of an electrical spark or wave. The formulated model, named "States of Automaton: Proposed Model for CEA (Cardiac Electrical Activity)" and built with the cellular-automata methodology, captures the three conduction states of cardiac tissue: (i) Resting (relaxed and excitable); (ii) ARP (excited but absolutely refractory, i.e., unable to excite neighboring cells); and (iii) RRP (excited but relatively refractory, i.e., able to excite neighboring cells). The results indicate efficient modeling of the action potential during the pumping of blood in the cardiac cycle, with little computational burden.
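
    The three-state automaton named in this abstract can be sketched on a 1-D ring of cells cycling RESTING -> ARP -> RRP -> RESTING. Following the abstract, only RRP cells excite resting neighbours; the exact update rule below is a minimal guess for illustration, not the paper's specification.

```python
RESTING, ARP, RRP = 0, 1, 2

# One synchronous update of the ring: resting cells with an RRP neighbour
# become excited (ARP), ARP cells relax to RRP, RRP cells return to rest.
def step(cells):
    n = len(cells)
    out = []
    for i, c in enumerate(cells):
        if c == RESTING:
            left, right = cells[i - 1], cells[(i + 1) % n]
            out.append(ARP if RRP in (left, right) else RESTING)
        elif c == ARP:
            out.append(RRP)
        else:
            out.append(RESTING)
    return out

wave = [RRP] + [RESTING] * 7    # a single excited site on a ring of 8 cells
wave = step(wave)               # excitation spreads to both neighbours
```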

  17. A cardiac electrical activity model based on a cellular automata system in comparison with neural network model.

    PubMed

    Khan, Muhammad Sadiq Ali; Yousuf, Sidrah

    2016-03-01

    Cardiac electrical activity is distributed through the three dimensions of cardiac tissue (the myocardium) and evolves over time. Indicators of heart disease can occur at random at any time of day, so heart rate, conduction, and each electrical event of the cardiac cycle should be monitored non-invasively to assess regular ("action potential") and irregular ("arrhythmia") rhythms. Many heart conditions can be examined through automata models such as cellular automata. This paper addresses the different states of cardiac rhythm using cellular automata, in comparison with a neural network model, and provides a fast and highly effective simulation of the contraction of the cardiac muscles of the atria resulting from the genesis of an electrical spark or wave. The formulated model, named "States of Automaton: Proposed Model for CEA (Cardiac Electrical Activity)" and built with the cellular-automata methodology, captures the three conduction states of cardiac tissue: (i) Resting (relaxed and excitable); (ii) ARP (excited but absolutely refractory, i.e., unable to excite neighboring cells); and (iii) RRP (excited but relatively refractory, i.e., able to excite neighboring cells). The results indicate efficient modeling of the action potential during the pumping of blood in the cardiac cycle, with little computational burden. PMID:27087101

  18. Adaptive handoff algorithms based on self-organizing neural networks to enhance the quality of service of nonstationary traffic in hierarchical cellular networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2000-03-01

    Third-generation (3G) wireless networks, based on a hierarchical cellular structure, support tiered levels of multimedia services. These services can be categorized as real-time and delay-sensitive, or non-real-time and delay-insensitive. Each call carries demand for one or more services in parallel, each with a guaranteed quality of service (QoS). Roaming is handled by handoff procedures between base stations (BSs) and the mobile subscribers (MSs) within the network. Metrics such as the probabilities of handoff failure, dropped calls, and blocked calls; handoff transition time; and handoff rate are used to evaluate handoff schemes, and these metrics also directly affect QoS. Previous researchers have proposed a fuzzy logic system (FLS) with neural encoding of the rule base, and a probabilistic neural network, to solve the handoff decision as a pattern-recognition problem over the set of MS signal measurements and mobility amid fading-path uncertainties. Both neural approaches evaluate only voice traffic in a closed, single-layer network of uniform cells. This paper proposes a new topology-preserving, self-organizing neural network (SONN) for both handoff and admission control as part of an overall resource allocation (RA) problem to support QoS in a three-layer, wideband CDMA HCS with dynamic loading of multimedia services. MS profiles include simultaneous service requirements, which are mapped to a new set of variables defined in terms of the network radio resources (RRs). Simulations of the new SONN-based algorithms under various operating scenarios of MS mobility, dynamic loading, active-set size, and RR bounds, using published traffic models of 3G services, compare their performance with earlier approaches.

  19. Electronic Neural Networks

    NASA Technical Reports Server (NTRS)

    Thakoor, Anil

    1990-01-01

    Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.

  20. Nested Neural Networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1992-01-01

    Report presents analysis of nested neural networks, consisting of interconnected subnetworks. Analysis based on simplified mathematical models more appropriate for artificial electronic neural networks, partly applicable to biological neural networks. Nested structure allows for retrieval of individual subpatterns. Requires fewer wires and connection devices than fully connected networks, and allows for local reconstruction of damaged subnetworks without rewiring entire network.

  1. A 181 GOPS AKAZE Accelerator Employing Discrete-Time Cellular Neural Networks for Real-Time Feature Extraction.

    PubMed

    Jiang, Guangli; Liu, Leibo; Zhu, Wenping; Yin, Shouyi; Wei, Shaojun

    2015-01-01

    This paper proposes a real-time feature extraction VLSI architecture for high-resolution images based on the accelerated KAZE algorithm. Firstly, a new system architecture is proposed. It increases the system throughput, provides flexibility in image resolution, and offers trade-offs between speed and scaling robustness. The architecture consists of a two-dimensional pipeline array that fully utilizes computational similarities in octaves. Secondly, a substructure (block-serial discrete-time cellular neural network) that can realize a nonlinear filter is proposed. This structure decreases the memory demand through the removal of data dependency. Thirdly, a hardware-friendly descriptor is introduced in order to overcome the hardware design bottleneck through the polar sample pattern; a simplified method to realize rotation invariance is also presented. Finally, the proposed architecture is designed in TSMC 65 nm CMOS technology. The experimental results show a performance of 127 fps in full HD resolution at 200 MHz frequency. The peak performance reaches 181 GOPS and the throughput is double the speed of other state-of-the-art architectures. PMID:26404305

  2. A 181 GOPS AKAZE Accelerator Employing Discrete-Time Cellular Neural Networks for Real-Time Feature Extraction

    PubMed Central

    Jiang, Guangli; Liu, Leibo; Zhu, Wenping; Yin, Shouyi; Wei, Shaojun

    2015-01-01

    This paper proposes a real-time feature extraction VLSI architecture for high-resolution images based on the accelerated KAZE algorithm. Firstly, a new system architecture is proposed. It increases the system throughput, provides flexibility in image resolution, and offers trade-offs between speed and scaling robustness. The architecture consists of a two-dimensional pipeline array that fully utilizes computational similarities in octaves. Secondly, a substructure (block-serial discrete-time cellular neural network) that can realize a nonlinear filter is proposed. This structure decreases the memory demand through the removal of data dependency. Thirdly, a hardware-friendly descriptor is introduced in order to overcome the hardware design bottleneck through the polar sample pattern; a simplified method to realize rotation invariance is also presented. Finally, the proposed architecture is designed in TSMC 65 nm CMOS technology. The experimental results show a performance of 127 fps in full HD resolution at 200 MHz frequency. The peak performance reaches 181 GOPS and the throughput is double the speed of other state-of-the-art architectures. PMID:26404305

  3. Parallel Consensual Neural Networks

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Sveinsson, J. R.; Ersoy, O. K.; Swain, P. H.

    1993-01-01

    A new neural network architecture is proposed and applied in classification of remote sensing/geographic data from multiple sources. The new architecture is called the parallel consensual neural network and its relation to hierarchical and ensemble neural networks is discussed. The parallel consensual neural network architecture is based on statistical consensus theory. The input data are transformed several times and the different transformed data are applied as if they were independent inputs and are classified using stage neural networks. Finally, the outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote sensing data and geographic data are given. The performance of the consensual neural network architecture is compared to that of a two-layer (one hidden layer) conjugate-gradient backpropagation neural network. The results with the proposed neural network architecture compare favorably in terms of classification accuracy to the backpropagation method.
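
    The consensual scheme described above can be sketched compactly: the input is transformed several ways, each transformed copy is classified by its own "stage" model, and the stage outputs are combined by weighted voting. The transforms, stage models, and weights below are toy stand-ins for the trained stage networks of the paper.

```python
import numpy as np

# A minimal "stage network": a linear classifier on the transformed input.
def stage_classifier(x, w):
    return 1 if float(np.dot(x, w)) > 0 else 0

# Consensus: weight each stage's vote and threshold at half the total weight.
def consensual_predict(x, transforms, stage_weights, consensus_weights):
    votes = [stage_classifier(t(x), w) for t, w in zip(transforms, stage_weights)]
    score = sum(cw * v for cw, v in zip(consensus_weights, votes))
    return 1 if score > 0.5 * sum(consensus_weights) else 0

transforms = [lambda x: x, lambda x: x ** 2]              # two input transformations
stage_weights = [np.array([1.0, -1.0]), np.array([1.0, -1.0])]
consensus_weights = [0.6, 0.4]                            # illustrative weights
```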

  4. Hierarchical random cellular neural networks for system-level brain-like signal processing.

    PubMed

    Kozma, Robert; Puljic, Marko

    2013-09-01

    Sensory information processing and cognition in brains are modeled using dynamic systems theory. The brain's dynamic state is described by a trajectory evolving in a high-dimensional state space. We introduce a hierarchy of random cellular automata as the mathematical tool for describing the spatio-temporal dynamics of the cortex. The corresponding brain model is called neuropercolation; it has distinct advantages over traditional models using differential equations, especially in describing spatio-temporal discontinuities in the form of phase transitions. Phase transitions demarcate singularities in brain operations at critical conditions, which are viewed as hallmarks of higher cognition and awareness experience. Monte-Carlo simulations carried out by parallel computing point to the importance of computer implementations using very-large-scale integration (VLSI) and analog platforms.
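
    A random cellular automaton in the neuropercolation spirit can be sketched with a generic noisy majority-vote rule: each site copies its local majority but flips with a small probability eps, and varying eps moves the lattice between ordered and disordered regimes. This is a standard majority-vote model, not the paper's exact hierarchy.

```python
import random

# One synchronous update of a 1-D ring: each site takes the majority of its
# 3-cell neighbourhood, then flips with probability eps (the noise level).
def majority_step(cells, eps, rng):
    n = len(cells)
    out = []
    for i in range(n):
        neigh = cells[i - 1] + cells[i] + cells[(i + 1) % n]
        maj = 1 if neigh >= 2 else 0
        out.append(1 - maj if rng.random() < eps else maj)
    return out

rng = random.Random(1)
cells = [1] * 50                           # fully ordered initial state
cells_no_noise = majority_step(cells, 0.0, rng)   # eps=0 preserves the order
```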

  5. Electronic neural networks

    SciTech Connect

    Howard, R.E.; Jackel, L.D.; Graf, H.P.

    1988-02-01

    The use of electronic neural networks to handle some complex computing problems is discussed. A simple neural model is shown and discussed in terms of its computational aspects. The use of electronic neural networks in machine pattern recognition and classification and in machine learning is examined. CMOS programmable networks are discussed. 15 references.

  6. Nanostructured cellular networks.

    PubMed

    Moriarty, P; Taylor, M D R; Brust, M

    2002-12-01

    Au nanocrystals spin-coated onto silicon from toluene form cellular networks. A quantitative statistical crystallography analysis shows that intercellular correlations drive the networks far from statistical equilibrium. Spin-coating from hexane does not produce cellular structure, yet a strong correlation is retained in the positions of nanocrystal aggregates. Mechanisms based on Marangoni convection alone cannot account for the variety of patterns observed, and we argue that spinodal decomposition plays an important role in foam formation.

  7. Foundations of neural networks

    SciTech Connect

    Simpson, P.K.

    1994-12-31

    Building intelligent systems that can model human behavior has captured the attention of the world for years. So, it is not surprising that a technology such as neural networks has generated great interest. This paper will provide an evolutionary introduction to neural networks by beginning with the key elements and terminology of neural networks, and developing the topologies, learning laws, and recall dynamics from this infrastructure. The perspective taken in this paper is largely that of an engineer, emphasizing the application potential of neural networks and drawing comparisons with other techniques that have similar motivations. As such, mathematics will be relied upon in many of the discussions to make points as precise as possible. The paper begins with a review of what neural networks are and why they are so appealing. A typical neural network is immediately introduced to illustrate several of the key features. With this network as a reference, the evolutionary introduction to neural networks is then pursued. The fundamental elements of a neural network, such as input and output patterns, processing element, connections, and threshold operations, are described, followed by descriptions of neural network topologies, learning algorithms, and recall dynamics. A taxonomy of neural networks is presented that uses two of the key characteristics of learning and recall. Finally, a comparison of neural networks and similar nonneural information processing methods is presented.

  8. Modeling land use and land cover changes in a vulnerable coastal region using artificial neural networks and cellular automata.

    PubMed

    Qiang, Yi; Lam, Nina S N

    2015-03-01

    As one of the most vulnerable coasts in the continental USA, the Lower Mississippi River Basin (LMRB) region has endured numerous hazards over the past decades. The sustainability of this region has drawn great attention from the international, national, and local communities, which want to understand how the region develops as a system under the intense interplay between natural and human factors. A major problem in this deltaic region is significant land loss over the years due to a combination of natural and human factors. The main scientific and management questions are: what factors contribute to the land use and land cover (LULC) changes in this region, can we model the changes, and what will the LULC look like in the future given the current factors? This study analyzed the LULC changes of the region between 1996 and 2006 by utilizing an artificial neural network (ANN) to derive the LULC change rules from 15 human and natural variables. The rules were then used to simulate future scenarios in a cellular automata model. A stochastic element was added to the model to represent factors not included in the current model. The analysis was conducted for two sub-regions of the study area for comparison. The results show that the derived ANN models could simulate the LULC changes with a high degree of accuracy (above 92% on average). A total loss of 263 km(2) of wetlands from 2006 to 2016 was projected, whereas the trend of forest loss is expected to cease. These scenarios provide useful information to decision makers for better planning and management of the region. PMID:25647797

  9. Modeling land use and land cover changes in a vulnerable coastal region using artificial neural networks and cellular automata.

    PubMed

    Qiang, Yi; Lam, Nina S N

    2015-03-01

    As one of the most vulnerable coasts in the continental USA, the Lower Mississippi River Basin (LMRB) region has endured numerous hazards over the past decades. The sustainability of this region has drawn great attention from the international, national, and local communities, which want to understand how the region develops as a system under the intense interplay between natural and human factors. A major problem in this deltaic region is significant land loss over the years due to a combination of natural and human factors. The main scientific and management questions are: what factors contribute to the land use and land cover (LULC) changes in this region, can we model the changes, and what will the LULC look like in the future given the current factors? This study analyzed the LULC changes of the region between 1996 and 2006 by utilizing an artificial neural network (ANN) to derive the LULC change rules from 15 human and natural variables. The rules were then used to simulate future scenarios in a cellular automata model. A stochastic element was added to the model to represent factors not included in the current model. The analysis was conducted for two sub-regions of the study area for comparison. The results show that the derived ANN models could simulate the LULC changes with a high degree of accuracy (above 92% on average). A total loss of 263 km(2) of wetlands from 2006 to 2016 was projected, whereas the trend of forest loss is expected to cease. These scenarios provide useful information to decision makers for better planning and management of the region.

  10. A consensual neural network

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Ersoy, O. K.; Swain, P. H.

    1991-01-01

    A neural network architecture called a consensual neural network (CNN) is proposed for the classification of data from multiple sources. Its relation to hierarchical and ensemble neural networks is discussed. CNN is based on the statistical consensus theory and uses nonlinearly transformed input data. The input data are transformed several times, and the different transformed data are applied as if they were independent inputs. The independent inputs are classified using stage neural networks and outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote-sensing data and geographic data are given.

  11. Neural-Network Simulator

    NASA Technical Reports Server (NTRS)

    Mitchell, Paul H.

    1991-01-01

    F77NNS (FORTRAN 77 Neural Network Simulator) computer program simulates popular back-error-propagation neural network. Designed to take advantage of vectorization when used on computers having this capability, also used on any computer equipped with ANSI-77 FORTRAN Compiler. Problems involving matching of patterns or mathematical modeling of systems fit class of problems F77NNS designed to solve. Program has restart capability so neural network solved in stages suitable to user's resources and desires. Enables user to customize patterns of connections between layers of network. Size of neural network F77NNS applied to limited only by amount of random-access memory available to user.
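
    The back-error-propagation network that F77NNS simulates can be sketched in a few lines: one hidden layer of sigmoid units trained by plain gradient descent on the XOR pattern-matching problem. Layer sizes, learning rate, and iteration count are illustrative choices, not F77NNS defaults.

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR targets

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)        # 2-4-1 network
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))

def loss():
    return float(np.mean((sig(sig(X @ W1 + b1) @ W2 + b2) - Y) ** 2))

first = loss()
for _ in range(2000):
    H = sig(X @ W1 + b1)                               # forward pass
    O = sig(H @ W2 + b2)
    dO = (O - Y) * O * (1 - O)                         # output-layer error
    dH = (dO @ W2.T) * H * (1 - H)                     # back-propagated error
    W2 -= 0.5 * H.T @ dO; b2 -= 0.5 * dO.sum(0)        # gradient-descent update
    W1 -= 0.5 * X.T @ dH; b1 -= 0.5 * dH.sum(0)
final = loss()
```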

  12. Exploring neural network technology

    SciTech Connect

    Naser, J.; Maulbetsch, J.

    1992-12-01

    EPRI is funding several projects to explore neural network technology, a form of artificial intelligence that some believe may mimic the way the human brain processes information. This research seeks to provide a better understanding of fundamental neural network characteristics and to identify promising utility industry applications. Results to date indicate that the unique attributes of neural networks could lead to improved monitoring, diagnostic, and control capabilities for a variety of complex utility operations. 2 figs.

  13. Parallel architectures and neural networks

    SciTech Connect

Calianiello, E.R.

    1989-01-01

    This book covers parallel computer architectures and neural networks. Topics include: neural modeling, use of ADA to simulate neural networks, VLSI technology, implementation of Boltzmann machines, and analysis of neural nets.

  14. Global Exponential Stability of Almost Periodic Solution for Neutral-Type Cohen-Grossberg Shunting Inhibitory Cellular Neural Networks with Distributed Delays and Impulses

    PubMed Central

    Xu, Lijun; Jiang, Qi; Gu, Guodong

    2016-01-01

    A kind of neutral-type Cohen-Grossberg shunting inhibitory cellular neural networks with distributed delays and impulses is considered. Firstly, by using the theory of impulsive differential equations and the contracting mapping principle, the existence and uniqueness of the almost periodic solution for the above system are obtained. Secondly, by constructing a suitable Lyapunov functional, the global exponential stability of the unique almost periodic solution is also investigated. The work in this paper improves and extends some results in recent years. As an application, an example and numerical simulations are presented to demonstrate the feasibility and effectiveness of the main results. PMID:27190502
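    For orientation, the base shunting inhibitory cellular neural network (SICNN) underlying this class of models is usually written in the standard Bouzerdoum-Pinter form (a sketch of the base model only; the neutral-type Cohen-Grossberg system in the paper adds amplification functions, distributed delays, and impulses to it):

    ```latex
    x'_{ij}(t) = -a_{ij}\, x_{ij}(t)
      - \sum_{C_{kl} \in N_r(i,j)} C_{ij}^{kl}\, f\big(x_{kl}(t)\big)\, x_{ij}(t)
      + L_{ij}(t)
    ```

    where \(x_{ij}\) is the activity of cell \(C_{ij}\), \(a_{ij} > 0\) its passive decay rate, \(C_{ij}^{kl} \ge 0\) the strength of the shunting (multiplicative) inhibition from cells in the \(r\)-neighborhood \(N_r(i,j)\), \(f\) the activation function, and \(L_{ij}\) the external input.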

  15. Neural network machine vision

    SciTech Connect

    Fox, R.O.; Czerniejewski, F.; Fluet, F.; Mitchell, E.

    1988-09-01

    Gould, Inc. and Nestor, Inc. cooperated on a joint development project to combine machine vision technology with neural network technology. The result is a machine vision system which can be trained by an inexperienced operator to perform qualitative classification. The hardware preprocessor reduces the information in the 2D camera image from 122,880 (i.e. 512 x 240) bytes to several hundred bytes in 64 milliseconds. The output of the preprocessor, which is in the format of connected lines, is fed to the first neural network. This neural network performs feature recognition. The output of the first neural network is a probability map. This map is fed to the input of the second neural network which performs object verification. The output of the second neural network is the object location and classification in the field of view. This information can optionally be fed into a third neural network which analyzes spatial relationships of objects in the field of view. The final output is a classification, by quality level, or by style. The system has been tested on applications ranging from the grading of plywood and the grading of paper to the sorting of fabricated metal parts. Specific application examples are presented.

  16. Neural networks for aircraft control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  17. Critical Branching Neural Networks

    ERIC Educational Resources Information Center

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  18. Processing the Bouguer anomaly map of Biga and the surrounding area by the cellular neural network: application to the southwestern Marmara region

    NASA Astrophysics Data System (ADS)

    Aydogan, D.

    2007-04-01

    An image processing technique called the cellular neural network (CNN) approach is used in this study to locate geological features giving rise to gravity anomalies such as faults or the boundary of two geologic zones. CNN is a stochastic image processing technique based on template optimization using the neighborhood relationships of cells. These cells can be characterized by a functional block diagram that is typical of neural network theory. The functionality of CNN is described in its entirety by a number of small matrices (A, B and I) called the cloning template. CNN can also be considered to be a nonlinear convolution of these matrices. This template describes the strength of the nearest neighbor interconnections in the network. The recurrent perceptron learning algorithm (RPLA) is used in optimization of cloning template. The CNN and standard Canny algorithms were first tested on two sets of synthetic gravity data with the aim of checking the reliability of the proposed approach. The CNN method was compared with classical derivative techniques by applying the cross-correlation method (CC) to the same anomaly map as this latter approach can detect some features that are difficult to identify on the Bouguer anomaly maps. This approach was then applied to the Bouguer anomaly map of Biga and its surrounding area, in Turkey. Structural features in the area between Bandirma, Biga, Yenice and Gonen in the southwest Marmara region are investigated by applying the CNN and CC to the Bouguer anomaly map. Faults identified by these algorithms are generally in accordance with previously mapped surface faults. These examples show that the geologic boundaries can be detected from Bouguer anomaly maps using the cloning template approach. A visual evaluation of the outputs of the CNN and CC approaches is carried out, and the results are compared with each other. This approach provides quantitative solutions based on just a few assumptions, which makes the method more
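    A minimal sketch of the cloning-template dynamics described above, i.e., the Chua-Yang cell equation x' = -x + A*y + B*u + I, where * is a 2-D convolution over the 3x3 neighborhood (the template values below are illustrative edge-detection-style assumptions, not the RPLA-optimized templates of the paper):

    ```python
    import numpy as np

    def cnn_step(x, u, A, B, I, dt=0.1):
        """One Euler step of the cellular neural network state equation
        x' = -x + A*y + B*u + I, with the standard piecewise-linear output
        y = f(x) and zero (fixed) boundary cells."""
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))   # output nonlinearity
        yp = np.pad(y, 1)                            # zero-padded neighbors
        up = np.pad(u, 1)
        feedback = np.zeros_like(x)
        feedforward = np.zeros_like(x)
        n, m = x.shape
        for i in range(n):
            for j in range(m):
                feedback[i, j] = np.sum(A * yp[i:i + 3, j:j + 3])      # A template
                feedforward[i, j] = np.sum(B * up[i:i + 3, j:j + 3])   # B template
        return x + dt * (-x + feedback + feedforward + I)

    # Illustrative cloning template (A, B, I) for edge extraction.
    A = np.zeros((3, 3)); A[1, 1] = 2.0
    B = np.array([[-1., -1., -1.],
                  [-1.,  8., -1.],
                  [-1., -1., -1.]])
    I = -0.5
    u = np.zeros((8, 8)); u[:, 4:] = 1.0   # step input, standing in for an anomaly map
    x = np.zeros_like(u)
    for _ in range(200):                   # iterate to (near) steady state
        x = cnn_step(x, u, A, B, I)
    ```

    At convergence the output saturates to +1 along the boundary between the two input regions and to -1 inside uniform regions, which is the sense in which the template detects geologic boundaries on an anomaly map.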

  19. Nested neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1988-01-01

Nested neural networks, consisting of small interconnected subnetworks, allow for the storage and retrieval of neural state patterns of different sizes. The subnetworks are naturally categorized by layers corresponding to spatial frequencies in the pattern field. The storage capacity and the error correction capability of the subnetworks generally increase with the degree of connectivity between layers (the nesting degree). Storage of only a few subpatterns in each subnetwork results in a vast storage capacity of patterns and subpatterns in the nested network, maintaining high stability and error correction capability.

  20. Critical branching neural networks.

    PubMed

    Kello, Christopher T

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical branching and, in doing so, simulates observed scaling laws as pervasive to neural and behavioral activity. These scaling laws are related to neural and cognitive functions, in that critical branching is shown to yield spiking activity with maximal memory and encoding capacities when analyzed using reservoir computing techniques. The model is also shown to account for findings of pervasive 1/f scaling in speech and cued response behaviors that are difficult to explain by isolable causes. Issues and questions raised by the model and its results are discussed from the perspectives of physics, neuroscience, computer and information sciences, and psychological and cognitive sciences.
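    The branching ratio at the heart of this model is the expected number of descendant spikes per ancestor spike; criticality corresponds to a ratio of 1. A toy cascade simulation (all sizes and probabilities are illustrative assumptions, not the paper's self-tuning model):

    ```python
    import random

    random.seed(42)

    def run_cascade(sigma, n_units=200, steps=50):
        """Toy branching cascade: each spiking unit excites each of its
        10 postsynaptic targets with probability sigma/10, so the expected
        number of descendants per ancestor (the branching ratio) is sigma."""
        p = sigma / 10.0
        active = set(range(10))            # seed spikes
        sizes = [len(active)]
        for _ in range(steps):
            nxt = set()
            for unit in active:
                for _ in range(10):        # 10 targets per unit
                    if random.random() < p:
                        nxt.add(random.randrange(n_units))
            active = nxt
            sizes.append(len(active))
            if not active:                 # cascade extinguished
                break
        return sizes

    sub = run_cascade(0.5)    # subcritical: activity dies out quickly
    crit = run_cascade(1.0)   # near-critical: activity is marginally sustained
    ```

    At the critical ratio, cascade sizes and durations follow the heavy-tailed (power-law) distributions the abstract refers to; the self-tuning in the paper adjusts synapses so the network sits at that point.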

  1. Electronic Neural Networks

    NASA Technical Reports Server (NTRS)

    Lambe, John; Moopen, Alexander; Thakoor, Anilkumar P.

    1988-01-01

Memory based on neural network models content-addressable and fault-tolerant. System includes electronic equivalent of synaptic network; in particular, matrix of programmable binary switching elements over which data distributed. Switches programmed in parallel by outputs of serial-input/parallel-output shift registers. Input and output terminals of bank of high-gain nonlinear amplifiers connected in nonlinear-feedback configuration by switches and by memory-prompting shift registers.

  2. Generalized Adaptive Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  3. Improved Autoassociative Neural Networks

    NASA Technical Reports Server (NTRS)

    Hand, Charles

    2003-01-01

Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application-specific integrated circuit would be needed. A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
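    The conventional autoassociative dynamics described above — binary state vectors updated by inner products with weights — can be sketched with a small Hopfield-style network (a generic illustration of autoassociative recall, not the nexus design, whose pattern sizes are assumptions):

    ```python
    import numpy as np

    def hebbian_weights(patterns):
        """Outer-product (Hebbian) weights storing +/-1 patterns in a fully
        connected autoassociative net, with self-connections zeroed."""
        n = patterns.shape[1]
        W = sum(np.outer(p, p) for p in patterns).astype(float)
        np.fill_diagonal(W, 0.0)
        return W / n

    def recall(W, state, steps=10):
        """Discrete-time synchronous updates: each neuron fires on the sign
        of the inner product of its inputs with its weights."""
        for _ in range(steps):
            state = np.where(W @ state >= 0, 1, -1)
        return state

    pats = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
    W = hebbian_weights(pats)
    noisy = pats[0].copy()
    noisy[0] = -noisy[0]            # corrupt one bit of the stored pattern
    restored = recall(W, noisy)     # error-correcting recall of pats[0]
    ```

    The corrupted vector is pulled back to the stored pattern, which is the "landscape of possibilities" behavior the abstract refers to.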

  4. Chaotic Neural Networks and Beyond

    NASA Astrophysics Data System (ADS)

    Aihara, Kazuyuki; Yamada, Taiji; Oku, Makito

    2013-01-01

A chaotic neuron model which is closely related to deterministic chaos observed experimentally with squid giant axons is explained, and used to construct a chaotic neural network model. Further, such a chaotic neural network is extended to different chaotic models such as a large-scale memory relation network, a locally connected network, a vector-valued network, and a quaternionic-valued neuron.

  5. Practical emotional neural networks.

    PubMed

    Lotfi, Ehsan; Akbarzadeh-T, M-R

    2014-11-01

In this paper, we propose a limbic-based artificial emotional neural network (LiAENN) for a pattern recognition problem. LiAENN is a novel computational neural model of the emotional brain that models emotional situations such as anxiety and confidence in the learning process, the short paths, the forgetting processes, and inhibitory mechanisms of the emotional brain. In the model, the learning weights are adjusted by the proposed anxious confident decayed brain emotional learning rules (ACDBEL). In engineering applications, LiAENN is utilized in facial detection and emotion recognition. According to the comparative results on ORL and Yale datasets, LiAENN shows a higher accuracy than other applied emotional networks such as brain emotional learning (BEL) and emotional back propagation (EmBP) based networks.

  6. Neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.

    1991-01-01

    A whole new arena of computer technologies is now beginning to form. Still in its infancy, neural network technology is a biologically inspired methodology which draws on nature's own cognitive processes. The Software Technology Branch has provided a software tool, Neural Execution and Training System (NETS), to industry, government, and academia to facilitate and expedite the use of this technology. NETS is written in the C programming language and can be executed on a variety of machines. Once a network has been debugged, NETS can produce a C source code which implements the network. This code can then be incorporated into other software systems. Described here are various software projects currently under development with NETS and the anticipated future enhancements to NETS and the technology.

  7. Rule generation from neural networks

    SciTech Connect

    Fu, L.

    1994-08-01

    The neural network approach has proven useful for the development of artificial intelligence systems. However, a disadvantage with this approach is that the knowledge embedded in the neural network is opaque. In this paper, we show how to interpret neural network knowledge in symbolic form. We lay down required definitions for this treatment, formulate the interpretation algorithm, and formally verify its soundness. The main result is a formalized relationship between a neural network and a rule-based system. In addition, it has been demonstrated that the neural network generates rules of better performance than the decision tree approach in noisy conditions. 7 refs.

  8. Parallel processing neural networks

    SciTech Connect

    Zargham, M.

    1988-09-01

A model for Neural Networks based on a particular kind of Petri Net has been introduced. The model has been implemented in C and runs on the Sequent Balance 8000 multiprocessor; however, it can be directly ported to different multiprocessor environments. The potential advantages of using Petri Nets include: (1) the overall system is often easier to understand due to the graphical and precise nature of the representation scheme, (2) the behavior of the system can be analyzed using Petri Net theory. Though the Petri Net is an obvious choice as a basis for the model, the basic Petri Net definition is not adequate to represent the neuronal system. To eliminate certain inadequacies, more information has been added to the Petri Net model. In the model, a token represents either a processor or a post synaptic potential. Progress through a particular Neural Network is thus graphically depicted in the movement of the processor tokens through the Petri Net.

  9. Neural networks for triggering

    SciTech Connect

Denby, B.; Campbell, M.; Bedeschi, F.; Chriss, N.; Bowers, C.; Nesti, F.

    1990-01-01

Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and the background rejection obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.

  10. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node "brain" with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  11. Program Helps Simulate Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.

  12. Kohonen neural networks and language.

    PubMed

    Anderson, B

    1999-10-15

    Kohonen neural networks are a type of self-organizing network that recognizes the statistical characteristics of input datasets. The application of this type of neural network to language theory is demonstrated in the present note by showing three brief applications: recognizing word borders, learning the limited phonemes of one's native tongue, and category-specific naming impairments.
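    The self-organization described above can be sketched in a few lines: nodes compete for each input, and the winner and its neighbors are pulled toward it, so the node weights come to reflect the statistics of the data (a generic 1-D Kohonen map; the map size, schedules, and two-cluster data are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def train_som(data, n_nodes=10, epochs=40):
        """1-D Kohonen self-organizing map trained by competitive learning."""
        w = rng.uniform(data.min(), data.max(), size=(n_nodes, data.shape[1]))
        for epoch in range(epochs):
            lr = 0.5 * (1 - epoch / epochs)                      # decaying learning rate
            radius = max(1, int(n_nodes / 2 * (1 - epoch / epochs)))  # shrinking neighborhood
            for x in rng.permutation(data):
                bmu = np.argmin(((w - x) ** 2).sum(axis=1))      # best-matching unit
                for j in range(n_nodes):
                    if abs(j - bmu) <= radius:
                        w[j] += lr * (x - w[j])                  # pull neighborhood toward x
        return w

    # Two 1-D clusters; after training, node weights settle near them,
    # mirroring the statistical structure of the inputs.
    data = np.concatenate([rng.normal(0.0, 0.1, 50),
                           rng.normal(5.0, 0.1, 50)]).reshape(-1, 1)
    w = train_som(data)
    ```

    The language applications in the note (word borders, native phonemes, category structure) rely on exactly this property: frequent input regularities claim map territory.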

  13. Space-Time Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Shelton, Robert O.

    1992-01-01

    Concept of space-time neural network affords distributed temporal memory enabling such network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace synaptic-connection weights of conventional back-error-propagation neural network.

  14. Local coupled feedforward neural network.

    PubMed

    Sun, Jianye

    2010-01-01

In this paper, the local coupled feedforward neural network is presented. Its connection structure is the same as that of a Multilayer Perceptron with one hidden layer. In the local coupled feedforward neural network, each hidden node is assigned an address in an input space, and each input activates only the hidden nodes near it. For each input, only the activated hidden nodes take part in the forward and backward propagation processes. Theoretical analysis and simulation results show that this neural network owns the "universal approximation" property and can solve the learning problem of feedforward neural networks. In addition, its characteristic of local coupling makes knowledge accumulation possible.
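    A minimal sketch of the local-coupling idea in the forward pass: each hidden node carries an address, and only nodes whose address lies near the input participate (the class name, addresses, and radius below are illustrative assumptions, not the paper's formulation):

    ```python
    import numpy as np

    class LocalCoupledNet:
        """Sketch of a locally coupled feedforward net: MLP-like structure,
        but each hidden node has an address in input space and only nodes
        within `radius` of the input take part in propagation."""
        def __init__(self, addresses, rng):
            self.addr = addresses                       # (H, d) hidden-node addresses
            h, d = addresses.shape
            self.W1 = rng.normal(size=(h, d)) * 0.1
            self.b1 = np.zeros(h)
            self.W2 = rng.normal(size=h) * 0.1

        def forward(self, x, radius=1.5):
            dist = np.linalg.norm(self.addr - x, axis=1)
            active = dist < radius                      # only nearby hidden nodes fire
            h = np.tanh(self.W1 @ x + self.b1) * active # inactive nodes contribute nothing
            return self.W2 @ h, active

    rng = np.random.default_rng(0)
    addresses = np.array([[0.0], [1.0], [2.0], [3.0]])
    net = LocalCoupledNet(addresses, rng)
    y, active = net.forward(np.array([0.2]))            # activates only the two nearest nodes
    ```

    In training, the same `active` mask would restrict backpropagation, so learning a new input region leaves distant nodes untouched, which is the "knowledge accumulation" property mentioned above.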

  15. [Artificial neural networks in Neurosciences].

    PubMed

    Porras Chavarino, Carmen; Salinas Martínez de Lecea, José María

    2011-11-01

This article shows that artificial neural networks are used for confirming the relationships between physiological and cognitive changes. Specifically, we explore the influence of a decrease of neurotransmitters on the behaviour of old people in recognition tasks. This artificial neural network recognizes learned patterns. When we change the threshold of activation in some units, the artificial neural network simulates the experimental results of old people in recognition tasks. However, the main contributions of this paper are the design of an artificial neural network whose operation is inspired by the nervous system, the way the inputs are coded, and the process of orthogonalization of patterns.

  16. Accelerating Learning By Neural Networks

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1992-01-01

Electronic neural networks made to learn faster by use of terminal teacher forcing. Method of supervised learning involves addition of teacher forcing functions to excitations fed as inputs to output neurons. Initially, teacher forcing functions are strong enough to force outputs to desired values; subsequently, these functions decay with time. When learning successfully completed, terminal teacher forcing vanishes, and dynamics of neural network become equivalent to those of conventional neural network. Simulated neural network with terminal teacher forcing learned to produce close approximation of circular trajectory in 400 iterations.

  17. MSAT and cellular hybrid networking

    NASA Technical Reports Server (NTRS)

    Baranowsky, Patrick W., II

    1993-01-01

Westinghouse Electric Corporation is developing both the Communications Ground Segment and the Series 1000 Mobile Phone for American Mobile Satellite Corporation's (AMSC's) Mobile Satellite (MSAT) system. The success of the voice services portion of this system depends, to some extent, upon the interoperability of the cellular network and the satellite communication circuit switched communication channels. This paper will describe the set of user-selectable cellular interoperable modes (cellular first/satellite second, etc.) provided by the Mobile Phone and describe how they are implemented with the ground segment. Topics, including roaming registration and cellular-to-satellite 'seamless' call handoff, will be discussed, along with the relevant Interim Standard IS-41 Revision B Cellular Radiotelecommunications Intersystem Operations and IOS-553 Mobile Station - Land Station Compatibility Specification.

  18. Three dimensional living neural networks

    NASA Astrophysics Data System (ADS)

    Linnenberger, Anna; McLeod, Robert R.; Basta, Tamara; Stowell, Michael H. B.

    2015-08-01

We investigate holographic optical tweezing combined with step-and-repeat maskless projection micro-stereolithography for fine control of 3D positioning of living cells within a 3D microstructured hydrogel grid. Samples were fabricated using three different cell lines: PC12, NT2/D1 and iPSC. PC12 cells are a rat cell line capable of differentiation into neuron-like cells. NT2/D1 cells are a human cell line that exhibits biochemical and developmental properties similar to those of an early embryo; when exposed to retinoic acid, the cells differentiate into human neurons useful for studies of human neurological disease. Finally, induced pluripotent stem cells (iPSC) were utilized with the goal of future studies of neural networks fabricated from human iPSC derived neurons. Cells are positioned in the monomer solution with holographic optical tweezers at 1064 nm and then are encapsulated by photopolymerization of polyethylene glycol (PEG) hydrogels formed by thiol-ene photo-click chemistry via projection of a 512x512 spatial light modulator (SLM) illuminated at 405 nm. Fabricated samples are incubated in differentiation media such that cells cease to divide and begin to form axons or axon-like structures. By controlling the position of the cells within the encapsulating hydrogel structure the formation of the neural circuits is controlled. The samples fabricated with this system are a useful model for future studies of neural circuit formation, neurological disease, cellular communication, plasticity, and repair mechanisms.

  19. Dynamic interactions in neural networks

    SciTech Connect

Arbib, M.A.; Amari, S.

    1989-01-01

    The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.

  20. Neural Networks for the Beginner.

    ERIC Educational Resources Information Center

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  1. Neural network applications in telecommunications

    NASA Technical Reports Server (NTRS)

    Alspector, Joshua

    1994-01-01

    Neural network capabilities include automatic and organized handling of complex information, quick adaptation to continuously changing environments, nonlinear modeling, and parallel implementation. This viewgraph presentation presents Bellcore work on applications, learning chip computational function, learning system block diagram, neural network equalization, broadband access control, calling-card fraud detection, software reliability prediction, and conclusions.

  2. Neural Network Development Tool (NETS)

    NASA Technical Reports Server (NTRS)

    Baffes, Paul T.

    1990-01-01

    Artificial neural networks formed from hundreds or thousands of simulated neurons, connected in manner similar to that in human brain. Such network models learning behavior. Using NETS involves translating problem to be solved into input/output pairs, designing network configuration, and training network. Written in C.

  3. Neural networks in astronomy.

    PubMed

    Tagliaferri, Roberto; Longo, Giuseppe; Milano, Leopoldo; Acernese, Fausto; Barone, Fabrizio; Ciaramella, Angelo; De Rosa, Rosario; Donalek, Ciro; Eleuteri, Antonio; Raiconi, Giancarlo; Sessa, Salvatore; Staiano, Antonino; Volpicelli, Alfredo

    2003-01-01

In the last decade, the use of neural networks (NN) and of other soft computing methods has begun to spread also in the astronomical community which, due to the required accuracy of the measurements, is usually reluctant to use automatic tools to perform even the most common tasks of data reduction and data mining. The federation of heterogeneous large astronomical databases which is foreseen in the framework of the astrophysical virtual observatory and national virtual observatory projects is, however, posing unprecedented data mining and visualization problems which will find a rather natural and user-friendly answer in artificial intelligence tools based on NNs, fuzzy sets or genetic algorithms. This review is aimed at both astronomers (who often have little knowledge of the methodological background) and computer scientists (who often know little about potentially interesting applications), and therefore will be structured as follows: after giving a short introduction to the subject, we shall summarize the methodological background and focus our attention on some of the most interesting fields of application, namely: object extraction and classification, time series analysis, noise identification, and data mining. Most of the original work described in the paper has been performed in the framework of the AstroNeural collaboration (Napoli-Salerno).

  4. Neural networks for calibration tomography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur

    1993-01-01

    Artificial neural networks are suitable for performing pattern-to-pattern calibrations. These calibrations are potentially useful for facilities operations in aeronautics, the control of optical alignment, and the like. Computed tomography is compared with neural net calibration tomography for estimating density from its x-ray transform. X-ray transforms are measured, for example, in diffuse-illumination, holographic interferometry of fluids. Computed tomography and neural net calibration tomography are shown to have comparable performance for a 10 degree viewing cone and 29 interferograms within that cone. The system of tomography discussed is proposed as a relevant test of neural networks and other parallel processors intended for using flow visualization data.

  5. Modular, Hierarchical Learning By Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    Modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks more structured than neural networks in which all neurons fully interconnected. These networks utilize general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, sparsity of modular units and connections, and fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning streamlined by imitating some aspects of biological neural networks.

  6. Neural Networks for Readability Analysis.

    ERIC Educational Resources Information Center

    McEneaney, John E.

    This paper describes and reports on the performance of six related artificial neural networks that have been developed for the purpose of readability analysis. Two networks employ counts of linguistic variables that simulate a traditional regression-based approach to readability. The remaining networks determine readability from "visual snapshots"…

  7. Neural Networks Of VLSI Components

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P.

    1991-01-01

    Concept for design of electronic neural network calls for assembly of very-large-scale integrated (VLSI) circuits of few standard types. Each VLSI chip, which contains both analog and digital circuitry, used in modular or "building-block" fashion by interconnecting it in any of variety of ways with other chips. Feedforward neural network in typical situation operates under control of host computer and receives inputs from, and sends outputs to, other equipment.

  8. Interval neural networks

    SciTech Connect

    Patil, R.B.

    1995-05-01

Traditional neural networks like multi-layered perceptrons (MLP) use example patterns, i.e., pairs of real-valued observation vectors (x, y), to approximate a function f(x) = y. To determine the parameters of the approximation, a special version of the gradient descent method called back-propagation is widely used. In many situations, observations of the input and output variables are not precise; instead, we usually have intervals of possible values. The imprecision could be due to the limited accuracy of the measuring instrument or could reflect genuine uncertainty in the observed variables. In such situations, the input and output data consist of mixed data types: intervals and precise numbers. Function approximation in interval domains is considered in this paper. We discuss a modification of the classical back-propagation learning algorithm to interval domains. Results are presented with simple examples demonstrating a few properties of nonlinear interval mapping, such as noise resistance and finding sets of solutions to the function approximation problem.
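    The core of propagating intervals through a network layer is interval arithmetic on the affine map followed by a monotone activation (a minimal sketch of the idea, not the paper's modified back-propagation; the weights and input intervals below are illustrative):

    ```python
    import numpy as np

    def interval_affine(lo, hi, W, b):
        """Propagate the input box [lo, hi] through y = W x + b using
        interval arithmetic: positive weights keep the same bound,
        negative weights swap the bounds."""
        Wp = np.maximum(W, 0.0)
        Wn = np.minimum(W, 0.0)
        y_lo = Wp @ lo + Wn @ hi + b
        y_hi = Wp @ hi + Wn @ lo + b
        return y_lo, y_hi

    def interval_sigmoid(lo, hi):
        # The sigmoid is monotone, so it maps interval bounds to interval bounds.
        s = lambda z: 1.0 / (1.0 + np.exp(-z))
        return s(lo), s(hi)

    W = np.array([[1.0, -2.0],
                  [0.5,  0.5]])
    b = np.array([0.0, -0.5])
    lo = np.array([0.0, 0.0])          # imprecise observations: each input
    hi = np.array([1.0, 1.0])          # lies somewhere in [0, 1]
    a_lo, a_hi = interval_affine(lo, hi, W, b)
    o_lo, o_hi = interval_sigmoid(a_lo, a_hi)   # output interval of the layer
    ```

    Precise numbers are the degenerate case lo = hi, so mixed interval/precise data flow through the same machinery; an interval learning rule then penalizes the mismatch between output intervals and target intervals.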

  9. Neural-Network-Development Program

    NASA Technical Reports Server (NTRS)

    Phillips, Todd A.

    1993-01-01

    NETS, software tool for development and evaluation of neural networks, provides simulation of neural-network algorithms plus computing environment for development of such algorithms. Uses back-propagation learning method for all of networks it creates. Enables user to customize patterns of connections between layers of network. Also provides features for saving, during learning process, values of weights, providing more-precise control over learning process. Written in ANSI standard C language. Machine-independent version (MSC-21588) includes only code for command-line-interface version of NETS 3.0.

  10. Neural networks in clinical medicine.

    PubMed

    Penny, W; Frost, D

    1996-01-01

    Neural networks are parallel, distributed, adaptive information-processing systems that develop their functionality in response to exposure to information. This paper is a tutorial for researchers intending to use neural nets for medical decision-making applications. It includes detailed discussion of the issues particularly relevant to medical data as well as wider issues relevant to any neural net application. The article is restricted to back-propagation learning in multilayer perceptrons, as this is the neural net model most widely used in medical applications.

  11. Cellular recurrent deep network for image registration

    NASA Astrophysics Data System (ADS)

    Alam, M.; Vidyaratne, L.; Iftekharuddin, Khan M.

    2015-09-01

Image registration using Artificial Neural Network (ANN) remains a challenging learning task. Registration can be posed as a two-step problem: parameter estimation and actual alignment/transformation using the estimated parameters. To date ANN based image registration techniques only perform the parameter estimation, while affine equations are used to perform the actual transformation. In this paper, we propose a novel deep ANN based rigid image registration that combines parameter estimation and transformation as a simultaneous learning task. Our previous work shows that a complex universal approximator known as Cellular Simultaneous Recurrent Network (CSRN) can successfully approximate affine transformations with known transformation parameters. This study introduces a deep ANN that combines a feed forward network with a CSRN to perform full rigid registration. Layer-wise training is used to pre-train the feed forward network for parameter estimation, followed by the CSRN for image transformation. The deep network is then fine-tuned to perform the final registration task. Our results show that the proposed deep ANN architecture achieves comparable registration accuracy to that of image affine transformation using CSRN with known parameters. We also demonstrate the efficacy of our novel deep architecture by a performance comparison with a deep clustered MLP.

  12. Preliminary Analysis of the efficacy of Artificial neural Network (ANN) and Cellular Automaton (CA) based Land Use Models in Urban Land-Use Planning

    NASA Astrophysics Data System (ADS)

    Harun, R.

    2013-05-01

This research provides an opportunity for collaboration between urban planners and modellers by providing clear theoretical foundations on the two most widely used urban land use models, and assessing the effectiveness of applying the models in an urban planning context. Understanding urban land cover change is an essential element for sustainable urban development as it affects ecological functioning in urban ecosystems. Rapid urbanization due to the growing inclination of people to settle in urban areas has increased the complexity of predicting the shape and size to which cities will grow. The dynamic changes in the spatial pattern of urban landscapes have exposed policy makers and environmental scientists to great challenges. But geographic science has grown in step with advances in computer science. Models and tools have been developed to support urban planning by analyzing the causes and consequences of land use changes and projecting the future. Of all the different types of land use models available today, researchers have found that the most frequently used models are Cellular Automaton (CA) and Artificial Neural Networks (ANN) models. But studies have demonstrated that the existing land use models have not been able to meet the needs of planners and policy makers. Two primary causes have been identified behind this problem. First, there is inadequate understanding of the fundamental theories and application of the models in an urban planning context, i.e., there is a gap in communication between modellers and urban planners. Second, the existing models exclude many key drivers that guide urban spatial pattern in the process of simplifying the complex urban system. Thus the models end up being effective in assessing the impacts of certain land use policies, but cannot contribute to new policy formulation. This paper is an attempt to increase the knowledge base of planners on the most frequently used land use model and also assess the

  13. Information complexity of neural networks.

    PubMed

    Kon, M A; Plaskota, L

    2000-04-01

This paper studies the question of lower bounds on the number of neurons and examples necessary to program a given task into feed forward neural networks. We introduce the notion of information complexity of a network to complement that of neural complexity. Neural complexity deals with lower bounds for neural resources (numbers of neurons) needed by a network to perform a given task within a given tolerance. Information complexity measures lower bounds for the information (i.e. number of examples) needed about the desired input-output function. We study the interaction of the two complexities, and so lower bounds for the complexity of building and then programming feed-forward nets for given tasks. We show something unexpected a priori--the interaction of the two can be simply bounded, so that they can be studied essentially independently. We construct radial basis function (RBF) algorithms of order n³ that are information-optimal, and give example applications.

  14. Antenna analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern
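The forward problem that the trained network inverts here is computing a far-field pattern from element excitations. A minimal sketch of that array factor for the 20-element linear array the abstract mentions, assuming half-wavelength element spacing (the spacing is not stated in the abstract):

```python
import numpy as np

N = 20                        # element count, per the abstract
d_over_lambda = 0.5           # assumed element spacing (d / wavelength)
theta = np.linspace(-np.pi / 2, np.pi / 2, 41)  # 41 pattern samples
a = np.ones(N, dtype=complex)                   # uniform excitation

# Array factor: AF(theta) = sum_n a_n * exp(j*2*pi*(d/lambda)*n*sin(theta))
n = np.arange(N)
phase = 2j * np.pi * d_over_lambda * np.outer(np.sin(theta), n)
AF = np.abs(np.exp(phase) @ a)

# Uniform excitation peaks at broadside (theta = 0), where |AF| = N.
assert np.isclose(AF[20], N)
```

In the paper's setup the network runs this mapping in reverse: 41 desired pattern samples in, 40 real/imaginary excitation values out.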

  15. Nonlinear control with neural networks

    SciTech Connect

    Malik, S.A.

    1996-12-31

    Research results are presented to show the successful industrial application of neural networks in closed loop. Two distillation columns are used to demonstrate the effectiveness of nonlinear controllers. The two columns chosen for this purpose are very dissimilar in operating characteristics, and dynamic behavior. One of the columns is a crude column, and the second, a depropaniser, is a smaller column in a vapor recovery unit. In earlier work, neural networks had been presented as general function estimators, for prediction of stream compositions and the suitability of the various network architectures for this task had been investigated. This report reviews the successful application of neural networks, as feedback controllers, to large industrial distillation columns. 21 refs.

  16. Micromechanics of cellularized biopolymer networks

    PubMed Central

    Jones, Christopher A. R.; Cibula, Matthew; Feng, Jingchen; Krnacik, Emma A.; McIntyre, David H.; Levine, Herbert; Sun, Bo

    2015-01-01

Collagen gels are widely used in experiments on cell mechanics because they mimic the extracellular matrix in physiological conditions. Collagen gels are often characterized by their bulk rheology; however, variations in the collagen fiber microstructure and cell adhesion forces cause the mechanical properties to be inhomogeneous at the cellular scale. We study the mechanics of type I collagen on the scale of tens to hundreds of microns by using holographic optical tweezers to apply pN forces to microparticles embedded in the collagen fiber network. We find that in response to optical forces, particle displacements are inhomogeneous, anisotropic, and asymmetric. Gels prepared at 21 °C and 37 °C show qualitative differences in their micromechanical characteristics. We also demonstrate that contracting cells remodel the micromechanics of their surrounding extracellular matrix in a strain- and distance-dependent manner. To further understand the micromechanics of cellularized extracellular matrix, we have constructed a computational model which reproduces the main experimental findings. PMID:26324923

  17. Neural network computer simulation of medical aerosols.

    PubMed

    Richardson, C J; Barlow, D J

    1996-06-01

    Preliminary investigations have been conducted to assess the potential for using artificial neural networks to simulate aerosol behaviour, with a view to employing this type of methodology in the evaluation and design of pulmonary drug-delivery systems. Details are presented of the general purpose software developed for these tasks; it implements a feed-forward back-propagation algorithm with weight decay and connection pruning, the user having complete run-time control of the network architecture and mode of training. A series of exploratory investigations is then reported in which different network structures and training strategies are assessed in terms of their ability to simulate known patterns of fluid flow in simple model systems. The first of these involves simulations of cellular automata-generated data for fluid flow through a partially obstructed two-dimensional pipe. The artificial neural networks are shown to be highly successful in simulating the behaviour of this simple linear system, but with important provisos relating to the information content of the training data and the criteria used to judge when the network is properly trained. A second set of investigations is then reported in which similar networks are used to simulate patterns of fluid flow through aerosol generation devices, using training data furnished through rigorous computational fluid dynamics modelling. These more complex three-dimensional systems are modelled with equal success. It is concluded that carefully tailored, well trained networks could provide valuable tools not just for predicting but also for analysing the spatial dynamics of pharmaceutical aerosols.
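The training machinery described here, back-propagation with weight decay and connection pruning, can be sketched as a single update step. This is a generic illustration of those two regularization mechanisms, not the paper's software; the learning rate, decay coefficient, and threshold are illustrative:

```python
import numpy as np

def sgd_step_with_decay(w, grad, lr=0.1, decay=1e-3):
    """One back-propagation update with weight decay:
    w <- w - lr * (grad + decay * w), shrinking unused weights toward zero."""
    return w - lr * (grad + decay * w)

def prune_connections(w, threshold=1e-2):
    """Connection pruning: remove (zero out) weights whose magnitude
    has decayed below the threshold."""
    w = w.copy()
    w[np.abs(w) < threshold] = 0.0
    return w

w = np.array([0.5, -0.005, 0.2])
w = sgd_step_with_decay(w, grad=np.zeros(3))
w = prune_connections(w)
# With zero gradient, decay shrinks all weights; only the tiny one is pruned.
assert w[1] == 0.0 and w[0] > 0.0 and w[2] > 0.0
```

Together the two mechanisms bias training toward smaller, sparser networks, which is one standard guard against overfitting sparse training data.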

  18. Plant Growth Models Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Bubenheim, David

    1997-01-01

In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.

  19. Neural Networks for Flight Control

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1996-01-01

    Neural networks are being developed at NASA Ames Research Center to permit real-time adaptive control of time varying nonlinear systems, enhance the fault-tolerance of mission hardware, and permit online system reconfiguration. In general, the problem of controlling time varying nonlinear systems with unknown structures has not been solved. Adaptive neural control techniques show considerable promise and are being applied to technical challenges including automated docking of spacecraft, dynamic balancing of the space station centrifuge, online reconfiguration of damaged aircraft, and reducing cost of new air and spacecraft designs. Our experiences have shown that neural network algorithms solved certain problems that conventional control methods have been unable to effectively address. These include damage mitigation in nonlinear reconfiguration flight control, early performance estimation of new aircraft designs, compensation for damaged planetary mission hardware by using redundant manipulator capability, and space sensor platform stabilization. This presentation explored these developments in the context of neural network control theory. The discussion began with an overview of why neural control has proven attractive for NASA application domains. The more important issues in control system development were then discussed with references to significant technical advances in the literature. Examples of how these methods have been applied were given, followed by projections of emerging application needs and directions.

  20. Neural language networks at birth

    PubMed Central

    Perani, Daniela; Saccuman, Maria C.; Scifo, Paola; Anwander, Alfred; Spada, Danilo; Baldoli, Cristina; Poloniato, Antonella; Lohmann, Gabriele; Friederici, Angela D.

    2011-01-01

    The ability to learn language is a human trait. In adults and children, brain imaging studies have shown that auditory language activates a bilateral frontotemporal network with a left hemispheric dominance. It is an open question whether these activations represent the complete neural basis for language present at birth. Here we demonstrate that in 2-d-old infants, the language-related neural substrate is fully active in both hemispheres with a preponderance in the right auditory cortex. Functional and structural connectivities within this neural network, however, are immature, with strong connectivities only between the two hemispheres, contrasting with the adult pattern of prevalent intrahemispheric connectivities. Thus, although the brain responds to spoken language already at birth, thereby providing a strong biological basis to acquire language, progressive maturation of intrahemispheric functional connectivity is yet to be established with language exposure as the brain develops. PMID:21896765

  1. Artificial neural networks in medicine

    SciTech Connect

    Keller, P.E.

    1994-07-01

    This Technology Brief provides an overview of artificial neural networks (ANN). A definition and explanation of an ANN is given and situations in which an ANN is used are described. ANN applications to medicine specifically are then explored and the areas in which it is currently being used are discussed. Included are medical diagnostic aides, biochemical analysis, medical image analysis and drug development.

  2. How Neural Networks Learn from Experience.

    ERIC Educational Resources Information Center

    Hinton, Geoffrey E.

    1992-01-01

    Discusses computational studies of learning in artificial neural networks and findings that may provide insights into the learning abilities of the human brain. Describes efforts to test theories about brain information processing, using artificial neural networks. Vignettes include information concerning how a neural network represents…

  3. Model Of Neural Network With Creative Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Barhen, Jacob

    1993-01-01

Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitably designed neural network; believed to be related to spontaneity and creativity of biological neural networks.

  4. Statistical Mechanics of Neural Networks

    NASA Astrophysics Data System (ADS)

    Rau, Albrecht

    1992-01-01

    Available from UMI in association with The British Library. Requires signed TDF. In this thesis we study neural networks using tools from the statistical mechanics of systems with quenched disorder. We apply these tools to two structurally different types of networks, feed-forward and feedback networks, whose properties we first review. After reviewing the use of feed-forward networks to infer unknown rules from sets of examples, we demonstrate how practical considerations can be incorporated into the analysis and how, as a consequence, existing learning theories have to be modified. To do so, we analyse the learning of rules which cannot be learnt perfectly due to constraints on the networks used. We present and analyse a model of multi-class classification and mention how it can be used. Finally we give an analytical treatment of a "learning by query" algorithm, for which the rule is extracted from queries which are not random but selected to increase the information gain. In this thesis feedback networks are used as associative memories. Our study centers on an analysis of specific features of the basins of attraction and the structure of weight space of optimized neural networks. We investigate the pattern selectivity of optimized networks, i.e. their ability to differentiate similar but distinct patterns, and show how the basins of attraction may be enlarged using external stimulus fields. Using a new method of analysis we study the weight space organization of optimized neural networks and show how the insights gained can be used to classify different groups of networks.

  5. Effects of nerve injury and segmental regeneration on the cellular correlates of neural morphallaxis.

    PubMed

    Martinez, Veronica G; Manson, Josiah M B; Zoran, Mark J

    2008-09-15

    Functional recovery of neural networks after injury requires a series of signaling events similar to the embryonic processes that governed initial network construction. Neural morphallaxis, a form of nervous system regeneration, involves reorganization of adult neural connectivity patterns. Neural morphallaxis in the worm, Lumbriculus variegatus, occurs during asexual reproduction and segmental regeneration, as body fragments acquire new positional identities along the anterior-posterior axis. Ectopic head (EH) formation, induced by ventral nerve cord lesion, generated morphallactic plasticity including the reorganization of interneuronal sensory fields and the induction of a molecular marker of neural morphallaxis. Morphallactic changes occurred only in segments posterior to an EH. Neither EH formation, nor neural morphallaxis was observed after dorsal body lesions, indicating a role for nerve cord injury in morphallaxis induction. Furthermore, a hierarchical system of neurobehavioral control was observed, where anterior heads were dominant and an EH controlled body movements only in the absence of the anterior head. Both suppression of segmental regeneration and blockade of asexual fission, after treatment with boric acid, disrupted the maintenance of neural morphallaxis, but did not block its induction. Therefore, segmental regeneration (i.e., epimorphosis) may not be required for the induction of morphallactic remodeling of neural networks. However, on-going epimorphosis appears necessary for the long-term consolidation of cellular and molecular mechanisms underlying the morphallaxis of neural circuitry. PMID:18561185

  6. Collective modes in neural networks.

    PubMed

    Parikh, J C; Satyan, V; Pratap, R

    1989-02-01

    A theoretical model, based on response of a neural network to an external stimulus, was constructed to determine its collective modes. It is suggested that the waves observed in EEG records reflect the cooperative electrical activity of a large number of neurons. Further, an actual EEG time series was analyzed to deduce two dynamic parameters, dimension d of phase space of the neural system and the minimum number of variables nc necessary to describe the EEG pattern. We find d = 6.2 and nc = 11. PMID:2722419

  7. Cellular mechanisms of posterior neural tube morphogenesis in the zebrafish.

    PubMed

    Harrington, Michael J; Chalasani, Kavita; Brewster, Rachel

    2010-03-01

    The zebrafish is a well established model system for studying neural development, yet neurulation remains poorly understood in this organism. In particular, the morphogenetic movements that shape the posterior neural tube (PNT) have not been described. Using tools for imaging neural tissue and tracking the behavior of cells in real time, we provide the first comprehensive analysis of the cellular events shaping the PNT. We observe that this tissue is formed in a stepwise manner, beginning with merging of presumptive neural domains in the tailbud (Stage 1); followed by neural convergence and infolding to shape the neural rod (Stage 2); and continued elongation of the PNT, in absence of further convergence (Stage 3). We further demonstrate that cell proliferation plays only a minimal role in PNT elongation. Overall, these mechanisms resemble those previously described in anterior regions, suggesting that, in contrast to amniotes, neurulation is a fairly uniform process in zebrafish.

  8. Discontinuities in recurrent neural networks.

    PubMed

    Gavaldá, R; Siegelmann, H T

    1999-04-01

    This article studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of finite number of neurons; each neuron computes a polynomial net function and a sigmoid-like continuous activation function. We introduce arithmetic networks as ARNN augmented with a few simple discontinuous (e.g., threshold or zero test) neurons. We argue that even with weights restricted to polynomial time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. We identify many types of neural networks that are at least as powerful as arithmetic nets, some of which are not in fact discontinuous, but they boost other arithmetic operations in the net function (e.g., neurons that can use divisions and polynomial net functions inside sigmoid-like continuous activation functions). These arithmetic networks are equivalent to the Blum-Shub-Smale model, when the latter is restricted to a bounded number of registers. With respect to implementation on digital computers, we show that arithmetic networks with rational weights can be simulated with exponential precision, but even with polynomial-time computable real weights, arithmetic networks are not subject to any fixed precision bounds. This is in contrast with the ARNN that are known to demand precision that is linear in the computation time. When nontrivial periodic functions (e.g., fractional part, sine, tangent) are added to arithmetic networks, the resulting networks are computationally equivalent to a massively parallel machine. Thus, these highly discontinuous networks can solve the presumably intractable class of PSPACE-complete problems in polynomial time.

  9. Terminal attractors in neural networks

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.
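The defining property above, finite-time convergence via violation of the Lipschitz condition, can be demonstrated numerically with the simplest example of a terminal attractor, dx/dt = -x^(1/3), compared against the ordinary linear decay dx/dt = -x (a standard illustration; the specific dynamics here are not taken from the paper):

```python
def integrate_to_zero(f, x0, dt=1e-4, t_max=5.0):
    """Forward-Euler integration, returning the time at which x
    (numerically) reaches the attractor at the origin."""
    x, t = x0, 0.0
    while x > 1e-9 and t < t_max:
        x = max(x + dt * f(x), 0.0)
        t += dt
    return t

# Regular attractor: dx/dt = -x decays exponentially and only
# approaches 0 asymptotically (the loop gives up at t_max).
t_regular = integrate_to_zero(lambda x: -x, 1.0)

# Terminal attractor: dx/dt = -x**(1/3) violates the Lipschitz
# condition at x = 0 and reaches it in finite time,
# t* = 1.5 * x0**(2/3) = 1.5 for x0 = 1.
t_terminal = integrate_to_zero(lambda x: -x ** (1.0 / 3.0), 1.0)

assert t_terminal < t_regular
assert abs(t_terminal - 1.5) < 0.05
```

This finite-time arrival is what makes terminal attractors attractive for content-addressable memory: a trajectory actually reaches the stored pattern instead of converging to it asymptotically.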

  10. International joint conference on neural networks

    SciTech Connect

    Not Available

    1989-01-01

    This book contains papers on neural networks. Included are the following topics: A self-training visual inspection system with a neural network classifier; A bifurcation theory approach to vector field programming for periodic attractors; and construction of neural nets using the radon transform.

  11. The LILARTI neural network system

    SciTech Connect

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system, in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  12. The hysteretic Hopfield neural network.

    PubMed

    Bharitkar, S; Mendel, J M

    2000-01-01

A new neuron activation function based on a property found in physical systems--hysteresis--is proposed. We incorporate this neuron activation in a fully connected dynamical system to form the hysteretic Hopfield neural network (HHNN). We then present an analog implementation of this architecture and its associated dynamical equation and energy function. We proceed to prove Lyapunov stability for this new model, and then solve a combinatorial optimization problem (i.e., the N-queen problem) using this network. We demonstrate the advantages of hysteresis by showing increased frequency of convergence to a solution, when the parameters associated with the activation function are varied. PMID:18249816
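The key idea, an activation whose output depends on input history rather than on the instantaneous input alone, can be sketched with a binary hysteretic neuron. This is a simplified discrete illustration with made-up thresholds; the paper's analog HHNN model is continuous:

```python
class HystereticNeuron:
    """Binary neuron with hysteresis: the threshold for switching on
    (alpha) is higher than the threshold for switching off (beta),
    so the output depends on the input's history, not just its value.
    Parameters are illustrative, not taken from the paper."""

    def __init__(self, alpha=0.5, beta=-0.5, state=-1):
        self.alpha, self.beta, self.state = alpha, beta, state

    def __call__(self, u):
        if self.state < 0 and u > self.alpha:
            self.state = 1
        elif self.state > 0 and u < self.beta:
            self.state = -1
        return self.state

n = HystereticNeuron()
trace = [n(u) for u in [0.0, 0.6, 0.0, -0.6, 0.0]]
# The same input u = 0.0 yields different outputs depending on history.
assert trace == [-1, 1, 1, -1, -1]
```

The gap between the two thresholds is what discourages the rapid back-and-forth switching that can trap a Hopfield-style optimizer, which is consistent with the convergence advantage the abstract reports.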

  13. Imaging of the islet neural network.

    PubMed

    Tang, S-C; Peng, S-J; Chien, H-J

    2014-09-01

    The islets of Langerhans receive signals from the circulation and nerves to modulate hormone secretion in response to physiological cues. Although the rich islet innervation has been documented in the literature dating as far back as Paul Langerhans' discovery of islets in the pancreas, it remains a challenging task for researchers to acquire detailed islet innervation patterns in health and disease due to the dispersed nature of the islet neurovascular network. In this article, we discuss the recent development of 3-dimensional (3D) islet neurohistology, in which transparent pancreatic specimens were prepared by optical clearing to visualize the islet microstructure, vasculature and innervation with deep-tissue microscopy. Mouse islets were used as an example to illustrate how to apply this 3D imaging approach to characterize (i) the islet parasympathetic innervation, (ii) the islet sympathetic innervation and its reinnervation after transplantation under the kidney capsule and (iii) the reactive cellular response of the Schwann cell network in islet injury. While presenting and characterizing the innervation patterns, we also discuss how to apply the signals derived from transmitted light microscopy, vessel painting and immunostaining of neural markers to verify the location and source of tissue information. In summary, the systematic development of tissue labelling, clearing and imaging methods to reveal the islet neuroanatomy offers insights to help study the neural-islet regulatory mechanisms and the role of neural tissue remodelling in the development of diabetes.

  14. The recent excitement about neural networks.

    PubMed

    Crick, F

    1989-01-12

    The remarkable properties of some recent computer algorithms for neural networks seemed to promise a fresh approach to understanding the computational properties of the brain. Unfortunately most of these neural nets are unrealistic in important respects.

  15. Remote Energy Monitoring System via Cellular Network

    NASA Astrophysics Data System (ADS)

    Yunoki, Shoji; Tamaki, Satoshi; Takada, May; Iwaki, Takashi

Recently, improving power efficiency and cost efficiency by monitoring the operation status of various facilities over the network has gained attention. Wireless networks, especially cellular networks, have advantages in mobility, coverage, and scalability. On the other hand, they have the disadvantage of low reliability, due to rapid changes in the available bandwidth. We propose a transmission control scheme based on data priority and instantaneous available bandwidth to realize a highly reliable remote monitoring system via cellular network. We have developed our proposed monitoring system and evaluated the effectiveness of our scheme, and shown that it reduces the maximum transmission delay of sensor status to 1/10 compared to best-effort transmission.
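The proposed control scheme combines two signals: data priority and the bandwidth currently available. A minimal sketch of that policy, with hypothetical sensor readings and byte budgets (the authors' actual protocol is not specified in the abstract):

```python
import heapq

def schedule(readings, bandwidth_bytes):
    """Send the highest-priority sensor readings that fit within the
    instantaneous bandwidth budget; defer the rest to the next cycle.
    Priority 0 is most urgent; sizes are in bytes. Illustrative only."""
    heap = [(prio, size, name) for name, prio, size in readings]
    heapq.heapify(heap)
    sent, deferred, budget = [], [], bandwidth_bytes
    while heap:
        prio, size, name = heapq.heappop(heap)
        if size <= budget:
            budget -= size
            sent.append(name)
        else:
            deferred.append(name)
    return sent, deferred

readings = [("alarm", 0, 40), ("status", 1, 80), ("log", 2, 200)]
sent, deferred = schedule(readings, bandwidth_bytes=150)
assert sent == ["alarm", "status"] and deferred == ["log"]
```

When the cellular link degrades, the budget shrinks and low-priority data is deferred rather than delaying urgent readings, which bounds the worst-case delay of high-priority sensor status.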

  16. Alternative learning algorithms for feedforward neural networks

    SciTech Connect

    Vitela, J.E.

    1996-03-01

The efficiency of the back propagation algorithm in training feed forward multilayer neural networks has given rise to the erroneous belief among many neural network users that this is the only possible way to obtain the gradient of the error in this type of network. The purpose of this paper is to show how alternative algorithms can be obtained within the framework of ordered partial derivatives. Two alternative forward-propagating algorithms are derived in this work which are mathematically equivalent to the BP algorithm. This systematic way of obtaining learning algorithms, illustrated here with this particular type of neural network, can also be used with other types such as recurrent neural networks.
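The claim that a forward pass can yield the same gradient as back-propagation is easy to verify on a toy network. The sketch below uses plain forward accumulation (one pass per weight) on a hypothetical two-weight network and checks the result against finite differences; it illustrates the general point, not the paper's specific ordered-partial-derivatives algorithms:

```python
import numpy as np

def net(w, x):
    """Toy feed-forward net: y = tanh(w[1] * tanh(w[0] * x))."""
    return np.tanh(w[1] * np.tanh(w[0] * x))

def forward_grad(w, x):
    """Gradient of the output w.r.t. each weight by forward accumulation:
    propagate (value, derivative) pairs input-to-output, one pass per weight."""
    grads = []
    for i in range(len(w)):
        h = np.tanh(w[0] * x)
        dh = (x if i == 0 else 0.0) * (1.0 - h ** 2)   # d h / d w[i]
        z = w[1] * h
        dz = (h if i == 1 else 0.0) + w[1] * dh        # d z / d w[i]
        grads.append((1.0 - np.tanh(z) ** 2) * dz)     # d y / d w[i]
    return np.array(grads)

w, x = np.array([0.7, -1.2]), 0.9
g = forward_grad(w, x)

# Check against central finite differences.
eps = 1e-6
for i in range(2):
    wp, wm = w.copy(), w.copy()
    wp[i] += eps; wm[i] -= eps
    assert abs(g[i] - (net(wp, x) - net(wm, x)) / (2 * eps)) < 1e-6
```

Forward accumulation costs one pass per parameter, so back-propagation remains cheaper for many-parameter networks; the point here is only mathematical equivalence of the resulting gradient.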

  17. Neural network modeling of emotion

    NASA Astrophysics Data System (ADS)

    Levine, Daniel S.

    2007-03-01

This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  18. Neural-Network Computer Transforms Coordinates

    NASA Technical Reports Server (NTRS)

    Josin, Gary M.

    1990-01-01

    Numerical simulation demonstrated ability of conceptual neural-network computer to generalize what it has "learned" from few examples. Ability to generalize achieved with even simple neural network (relatively few neurons) and after exposure of network to only few "training" examples. Ability to obtain fairly accurate mappings after only few training examples used to provide solutions to otherwise intractable mapping problems.

  19. Neural networks and MIMD-multiprocessors

    NASA Technical Reports Server (NTRS)

    Vanhala, Jukka; Kaski, Kimmo

    1990-01-01

    Two artificial neural network models are compared. They are the Hopfield Neural Network Model and the Sparse Distributed Memory model. Distributed algorithms for both of them are designed and implemented. The run time characteristics of the algorithms are analyzed theoretically and tested in practice. The storage capacities of the networks are compared. Implementations are done using a distributed multiprocessor system.
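    For readers unfamiliar with the first of the two models, a minimal Hopfield network can be sketched as follows. This is a toy illustration, not the paper's distributed multiprocessor implementation: patterns are stored with the Hebbian outer-product rule and recalled by iterated sign updates; the pattern below is invented.

```python
# Minimal Hopfield network over bipolar (+1/-1) patterns:
# Hebbian storage, sequential asynchronous recall.

def train(patterns, n):
    # Hebbian outer-product rule with zero diagonal.
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, sweeps=5):
    # Deterministic sequential updates: each unit takes the sign of its
    # local field until the state settles into a stored attractor.
    n = len(state)
    state = list(state)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(W[i][j] * state[j] for j in range(n))
            state[i] = 1 if h >= 0 else -1
    return state

pattern = [1, -1, 1, 1, -1, -1, 1, -1]
W = train([pattern], len(pattern))
noisy = list(pattern)
noisy[0] = -noisy[0]        # corrupt one bit
out = recall(W, noisy)      # converges back to the stored pattern
print(out)
```

With a single stored pattern and one flipped bit, the corrupted unit's local field points back toward the stored value, so recall restores the pattern exactly.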

  20. Adaptive optimization and control using neural networks

    SciTech Connect

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  1. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, Richard B.; Gross, Kenneth C.; Wegerich, Stephan W.

    1998-01-01

    A method and system for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into memory and determining neural network weighting values until the network outputs closely match a desired set of target outputs. If the match is inadequate, wavelet parameters are determined so as to bring the neural network outputs close to the desired target outputs. Signals characteristic of an industrial process are then provided and compared with the neural network output to evaluate the operating state of the industrial process.

  2. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

    A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into memory and determining neural network weighting values until the network outputs closely match a desired set of target outputs. If the match is inadequate, wavelet parameters are determined so as to bring the neural network outputs close to the desired target outputs. Signals characteristic of an industrial process are then provided and compared with the neural network output to evaluate the operating state of the industrial process. 33 figs.

  3. Advances in neural networks research: an introduction.

    PubMed

    Kozma, Robert; Bressler, Steven; Perlovsky, Leonid; Venayagamoorthy, Ganesh Kumar

    2009-01-01

    The present Special Issue "Advances in Neural Networks Research: IJCNN2009" provides a state-of-the-art overview of the field of neural networks. It includes 39 papers from selected areas of the 2009 International Joint Conference on Neural Networks (IJCNN2009). IJCNN2009 took place on June 14-19, 2009 in Atlanta, Georgia, USA, and it represents an exemplary collaboration between the International Neural Networks Society and the IEEE Computational Intelligence Society. Topics in this issue include neuroscience and cognitive science, computational intelligence and machine learning, hybrid techniques, nonlinear dynamics and chaos, various soft computing technologies, intelligent signal processing and pattern recognition, bioinformatics and biomedicine, and engineering applications. PMID:19632811

  4. Advances in neural networks research: an introduction.

    PubMed

    Kozma, Robert; Bressler, Steven; Perlovsky, Leonid; Venayagamoorthy, Ganesh Kumar

    2009-01-01

    The present Special Issue "Advances in Neural Networks Research: IJCNN2009" provides a state-of-the-art overview of the field of neural networks. It includes 39 papers from selected areas of the 2009 International Joint Conference on Neural Networks (IJCNN2009). IJCNN2009 took place on June 14-19, 2009 in Atlanta, Georgia, USA, and it represents an exemplary collaboration between the International Neural Networks Society and the IEEE Computational Intelligence Society. Topics in this issue include neuroscience and cognitive science, computational intelligence and machine learning, hybrid techniques, nonlinear dynamics and chaos, various soft computing technologies, intelligent signal processing and pattern recognition, bioinformatics and biomedicine, and engineering applications.

  5. Neural network modeling of distillation columns

    SciTech Connect

    Baratti, R.; Vacca, G.; Servida, A.

    1995-06-01

    Neural network modeling (NNM) was implemented for monitoring and control applications on two actual distillation columns: the butane splitter tower and the gasoline stabilizer. The two distillation columns are in operation at the SARAS refinery. Results show that with proper implementation techniques NNM can significantly improve column operation. The common belief that neural networks can be used as black-box process models is not completely true. Effective implementation always requires a minimum degree of process knowledge to identify the relevant inputs to the net. After background and generalities on neural network modeling, the paper describes efforts on the development of neural networks for the two distillation units.

  6. Electronic neural networks for global optimization

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Moopenn, A. W.; Eberhardt, S.

    1990-01-01

    An electronic neural network with feedback architecture, implemented in custom analog VLSI, is described. Its application to problems of global optimization for dynamic assignment is discussed. The convergence properties of the neural network hardware are compared with computer simulation results. The neural network's ability to provide optimal or near-optimal solutions within only a few neuron time constants, a speed enhancement of several orders of magnitude over conventional search methods, is demonstrated. The effect of noise on the circuit dynamics and the convergence behavior of the neural network hardware is also examined.

  7. Neural Networks for Rapid Design and Analysis

    NASA Technical Reports Server (NTRS)

    Sparks, Dean W., Jr.; Maghami, Peiman G.

    1998-01-01

    Artificial neural networks have been employed for rapid and efficient dynamics and control analysis of flexible systems. Specifically, feedforward neural networks are designed to approximate nonlinear dynamic components over prescribed input ranges, and are used in simulations as a means to speed up the overall time response analysis process. To capture the recursive nature of dynamic components with artificial neural networks, recurrent networks, which use state feedback with the appropriate number of time delays as inputs, are employed. Once properly trained, neural networks can give very good approximations to nonlinear dynamic components, and by their judicious use in simulations, offer the analyst the potential to speed up the analysis process considerably. To illustrate this potential speed-up, an existing simulation model of a spacecraft reaction wheel system is executed, first conventionally, and then with an artificial neural network in place.
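    The tapped-delay construction described above can be sketched briefly. This is a hedged illustration, not the authors' code: the helper builds training pairs whose inputs combine the current command with delayed copies of the output, here for an invented first-order plant.

```python
# Build training pairs for a feedforward net that must capture a
# recursive dynamic component: input = (u[t], y[t-1], ..., y[t-d]),
# target = y[t]. The plant below is a toy first-order system.

def make_training_pairs(u_seq, y_seq, delays=2):
    pairs = []
    for t in range(delays, len(u_seq)):
        x = [u_seq[t]] + [y_seq[t - k] for k in range(1, delays + 1)]
        pairs.append((x, y_seq[t]))
    return pairs

# Toy plant: y[t] = 0.8*y[t-1] + 0.2*u[t], step input.
u = [1.0] * 10
y = [0.0]
for t in range(1, 10):
    y.append(0.8 * y[t - 1] + 0.2 * u[t])

pairs = make_training_pairs(u, y, delays=2)
print(len(pairs), pairs[0])
```

A feedforward network trained on such pairs can then be run recursively in simulation, feeding its own delayed outputs back as inputs.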

  8. Aerodynamic Design Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.

    2003-01-01

    The design of aerodynamic components of aircraft, such as wings or engines, involves a process of obtaining the most optimal component shape that can deliver the desired level of component performance, subject to various constraints, e.g., total weight or cost, that the component must satisfy. Aerodynamic design can thus be formulated as an optimization problem that involves the minimization of an objective function subject to constraints. A new aerodynamic design optimization procedure based on neural networks and response surface methodology (RSM) incorporates the advantages of both traditional RSM and neural networks. The procedure uses a strategy, denoted parameter-based partitioning of the design space, to construct a sequence of response surfaces based on both neural networks and polynomial fits to traverse the design space in search of the optimal solution. Some desirable characteristics of the new design optimization procedure include the ability to handle a variety of design objectives, easily impose constraints, and incorporate design guidelines and rules of thumb. It provides an infrastructure for variable fidelity analysis and reduces the cost of computation by using less-expensive, lower fidelity simulations in the early stages of the design evolution. The initial or starting design can be far from optimal. The procedure is easy and economical to use in large-dimensional design space and can be used to perform design tradeoff studies rapidly. Designs involving multiple disciplines can also be optimized. Some practical applications of the design procedure that have demonstrated some of its capabilities include the inverse design of an optimal turbine airfoil starting from a generic shape and the redesign of transonic turbines to improve their unsteady aerodynamic characteristics.
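    The response-surface step at the core of such procedures can be sketched in miniature. This is a hedged illustration, not the paper's method (which uses neural network and polynomial surfaces over partitioned design spaces): evaluate the expensive objective at a few designs, fit a cheap quadratic surrogate through them, and jump to the surrogate's minimizer. The toy objective is invented.

```python
# One response-surface step: a Newton-form quadratic through three
# sampled (x, y) points, then the vertex of that parabola as the next
# candidate design.

def surrogate_minimizer(samples):
    (x1, y1), (x2, y2), (x3, y3) = samples
    f12 = (y2 - y1) / (x2 - x1)          # first divided difference
    f23 = (y3 - y2) / (x3 - x2)
    f123 = (f23 - f12) / (x3 - x1)       # second divided difference
    # Vertex of y1 + f12*(x-x1) + f123*(x-x1)*(x-x2).
    return 0.5 * (x1 + x2) - f12 / (2.0 * f123)

def expensive_objective(x):
    # Stand-in for a costly simulation (e.g., a CFD run).
    return (x - 2.0) ** 2 + 1.0

xs = [0.0, 1.0, 3.0]
x_star = surrogate_minimizer([(x, expensive_objective(x)) for x in xs])
print(x_star)
```

Because the toy objective is itself quadratic, the surrogate's minimizer lands exactly on the true optimum at x = 2; in practice the step is repeated as new samples refine the surface.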

  9. Bayesian regularization of neural networks.

    PubMed

    Burden, Frank; Winkler, Dave

    2008-01-01

    Bayesian regularized artificial neural networks (BRANNs) are more robust than standard back-propagation nets and can reduce or eliminate the need for lengthy cross-validation. Bayesian regularization is a mathematical process that converts a nonlinear regression into a "well-posed" statistical problem in the manner of a ridge regression. The advantage of BRANNs is that the models are robust and the validation process, which scales as O(N²) in normal regression methods, such as back-propagation, is unnecessary. These networks provide solutions to a number of problems that arise in QSAR modeling, such as choice of model, robustness of model, choice of validation set, size of validation effort, and optimization of network architecture. They are difficult to overtrain, since evidence procedures provide an objective Bayesian criterion for stopping training. They are also difficult to overfit, because the BRANN calculates and trains on a number of effective network parameters or weights, effectively turning off those that are not relevant. This effective number is usually considerably smaller than the number of weights in a standard fully connected back-propagation neural net. Automatic relevance determination (ARD) of the input variables can be used with BRANNs, and this allows the network to "estimate" the importance of each input. The ARD method ensures that irrelevant or highly correlated indices used in the modeling are neglected as well as showing which are the most important variables for modeling the activity data. This chapter outlines the equations that define the BRANN method plus a flowchart for producing a BRANN-QSAR model. Some results of the use of BRANNs on a number of data sets are illustrated and compared with other linear and nonlinear models.
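    The ridge-like penalty at the heart of Bayesian regularization can be illustrated in one dimension. This is a hedged sketch, not the BRANN equations: minimizing the sum-of-squares error plus alpha times the squared weight shrinks the weight, just as the Bayesian prior shrinks a network's effective parameter count. The data and alpha values are invented.

```python
# Closed-form ridge solution for the one-parameter model y ~ w * x:
# minimizing sum((w*x - y)^2) + alpha * w^2 gives
#   w = (sum x*y) / (sum x*x + alpha).

def ridge_fit(xs, ys, alpha):
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + alpha)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 0.9, 2.1, 2.9]

w_ols = ridge_fit(xs, ys, 0.0)   # ordinary least squares (alpha = 0)
w_reg = ridge_fit(xs, ys, 5.0)   # regularized: shrunk toward zero
print(w_ols, w_reg)
```

Increasing alpha always moves the weight toward zero; in a BRANN the analogous hyperparameter is set automatically by the evidence procedure rather than by hand.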

  10. Neural networks for nuclear spectroscopy

    SciTech Connect

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T.

    1995-12-31

    In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.
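    The linear-superposition idea behind the OLAM-style identification can be sketched directly. This is a hedged toy illustration, not the paper's system: an unknown spectrum is decomposed as a weighted sum of two known reference spectra by solving the 2x2 normal equations; the "spectra" below are invented.

```python
# Whole-spectrum decomposition: find coefficients c1, c2 minimizing
# ||unknown - c1*s1 - c2*s2||^2 via the 2x2 normal equations.

def decompose(unknown, s1, s2):
    a11 = sum(a * a for a in s1)
    a12 = sum(a * b for a, b in zip(s1, s2))
    a22 = sum(b * b for b in s2)
    b1 = sum(a * u for a, u in zip(s1, unknown))
    b2 = sum(b * u for b, u in zip(s2, unknown))
    det = a11 * a22 - a12 * a12
    c1 = (b1 * a22 - b2 * a12) / det    # Cramer's rule
    c2 = (a11 * b2 - a12 * b1) / det
    return c1, c2

s1 = [1.0, 2.0, 0.5, 0.0]   # toy reference spectrum, isotope A
s2 = [0.0, 0.5, 2.0, 1.0]   # toy reference spectrum, isotope B
mix = [3.0 * a + 0.5 * b for a, b in zip(s1, s2)]   # 3.0*A + 0.5*B

c1, c2 = decompose(mix, s1, s2)
print(c1, c2)
```

Because the whole spectrum enters the fit, the decomposition works even when individual photo-peaks are poorly resolved, which is the point made above for low-resolution spectrometers.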

  11. Neural Network model for memory

    NASA Astrophysics Data System (ADS)

    Vipin, Meena; Srivastava, Vipin; Granato, Enzo

    1992-10-01

    We propose a model for memory within the framework of Neural Networks which is akin to realistic memory, in that it tends to forget upon learning more, and has both long-term and short-term memories. It has a great advantage over the existing models proposed by Parisi and Gordon, which have only short-term and long-term memories, respectively. Our model resorts to learning within bounds like the previous two models; however, the essential difference lies in the reinitialization of the synaptic efficacy after it accumulates up to a preassigned value.

  12. Neural Network Classifies Teleoperation Data

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Giancaspro, Antonio; Losito, Sergio; Pasquariello, Guido

    1994-01-01

    Prototype artificial neural network, implemented in software, identifies phases of telemanipulator tasks in real time by analyzing feedback signals from force sensors on manipulator hand. Prototype is early, subsystem-level product of continuing effort to develop automated system that assists in training and supervising human control operator: provides symbolic feedback (e.g., warnings of impending collisions or evaluations of performance) to operator in real time during successive executions of same task. Also simplifies transition between teleoperation and autonomous modes of telerobotic system.

  13. Neural Network Controlled Visual Saccades

    NASA Astrophysics Data System (ADS)

    Johnson, Jeffrey D.; Grogan, Timothy A.

    1989-03-01

    The paper to be presented will discuss research on a computer vision system controlled by a neural network capable of learning through classical (Pavlovian) conditioning. Through the use of unconditional stimuli (reward and punishment) the system will develop scan patterns of eye saccades necessary to differentiate and recognize members of an input set. By foveating only those portions of the input image that the system has found to be necessary for recognition the drawback of computational explosion as the size of the input image grows is avoided. The model incorporates many features found in animal vision systems, and is governed by understandable and modifiable behavior patterns similar to those reported by Pavlov in his classic study. These behavioral patterns are a result of a neuronal model, used in the network, explicitly designed to reproduce this behavior.

  14. Damage Assessment Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Zapico, J. L.; González, M. P.; Worden, K.

    2003-01-01

    In this paper, a method of damage assessment based on neural networks (NNs) is presented and applied to the Steelquake structure. The method is intended to assess the overall damage at each floor in composite frames caused by seismic loading. A neural network is used to calibrate the initial undamaged structure, and another to predict the damage. The natural frequencies of the structure are used as inputs of the NNs. The data used to train the NNs were obtained through a finite element (FE) model. Many previous approaches have exhibited a relatively poor capacity for generalisation. In order to overcome this problem, an FE model more suitable to the definition of damage is tried herein. Further work in this paper is concerned with the validation of the method. To this end, the damage levels of the structure were obtained through the trained NNs from the available experimental modal data. Then, the stiffness matrices of the structure predicted by the method were compared with those identified from pseudo-dynamic tests. Results are excellent. The new FE model definition allows the NNs to generalise much better. The obtained values of the terms of the stiffness matrix of the undamaged structure are almost exact when compared with the experimental ones, while the absolute differences are lower than 8.6% for the damaged structure.

  15. Artificial neural networks in neurosurgery.

    PubMed

    Azimi, Parisa; Mohammadi, Hasan Reza; Benzel, Edward C; Shahzadi, Sohrab; Azhari, Shirzad; Montazeri, Ali

    2015-03-01

    Artificial neural networks (ANNs) effectively analyze non-linear data sets. The aim was to review the relevant published articles that focused on the application of ANNs as a tool for assisting clinical decision-making in neurosurgery. A literature review of all full publications in English biomedical journals (1993-2013) was undertaken. The strategy included a combination of key words 'artificial neural networks', 'prognostic', 'brain', 'tumor tracking', 'head', 'tumor', 'spine', 'classification' and 'back pain' in the title and abstract of the manuscripts using the PubMed search engine. The major findings are summarized, with a focus on the application of ANNs for diagnostic and prognostic purposes. Finally, the future of ANNs in neurosurgery is explored. A total of 1093 citations were identified and screened. In all, 57 citations were found to be relevant. Of these, 50 articles were eligible for inclusion in this review. The synthesis of the data showed several applications of ANN in neurosurgery, including: (1) diagnosis and assessment of disease progression in low back pain, brain tumours and primary epilepsy; (2) enhancing clinically relevant information extraction from radiographic images, intracranial pressure processing, low back pain and real-time tumour tracking; (3) outcome prediction in epilepsy, brain metastases, lumbar spinal stenosis, lumbar disc herniation, childhood hydrocephalus, trauma mortality, and the occurrence of symptomatic cerebral vasospasm in patients with aneurysmal subarachnoid haemorrhage; (4) use in the biomechanical assessments of spinal disease. ANNs can be effectively employed for diagnosis, prognosis and outcome prediction in neurosurgery.

  16. A new formulation for feedforward neural networks.

    PubMed

    Razavi, Saman; Tolson, Bryan A

    2011-10-01

    The feedforward neural network is one of the most commonly used function approximation techniques and has been applied to a wide variety of problems arising from various disciplines. However, neural networks are black-box models having multiple challenges/difficulties associated with training and generalization. This paper initially looks into the internal behavior of neural networks and develops a detailed interpretation of the neural network functional geometry. Based on this geometrical interpretation, a new set of variables describing neural networks is proposed as a more effective and geometrically interpretable alternative to the traditional set of network weights and biases. Then, this paper develops a new formulation for neural networks with respect to the newly defined variables; this reformulated neural network (ReNN) is equivalent to the common feedforward neural network but has a less complex error response surface. To demonstrate the learning ability of ReNN, two training methods are employed in this paper: a derivative-based algorithm (a variation of backpropagation) and a derivative-free optimization algorithm. Moreover, a new measure of regularization on the basis of the developed geometrical interpretation is proposed to evaluate and improve the generalization ability of neural networks. The value of the proposed geometrical interpretation, the ReNN approach, and the new regularization measure are demonstrated across multiple test problems. Results show that ReNN can be trained more effectively and efficiently compared to the common neural networks and the proposed regularization measure is an effective indicator of how a network would perform in terms of generalization.

  17. Drift chamber tracking with neural networks

    SciTech Connect

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.

  18. Extrapolation limitations of multilayer feedforward neural networks

    NASA Technical Reports Server (NTRS)

    Haley, Pamela J.; Soloway, Donald

    1992-01-01

    The limitations of backpropagation used as a function extrapolator were investigated. Four common functions were used to investigate the network's extrapolation capability. The purpose of the experiment was to determine whether neural networks are capable of extrapolation and, if so, to determine the range for which networks can extrapolate. The authors show that neural networks cannot extrapolate and offer an explanation to support this result.
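    One reason networks with saturating activations cannot extrapolate unbounded functions can be shown directly. This is a hedged illustration of that single point, not the authors' experiments: whatever the weights, a one-hidden-layer tanh network's output is bounded, so it must flatten outside the training range. The weights below are arbitrary invented values.

```python
import math

# Output of a one-hidden-layer tanh network is bounded by
# |bias| + sum(|output weight|), since |tanh| <= 1 everywhere.

def net(x, hidden, v, b):
    # hidden: list of (input weight, input bias); v: output weights.
    return b + sum(vi * math.tanh(w * x + c)
                   for (w, c), vi in zip(hidden, v))

hidden = [(1.0, 0.0), (-0.5, 1.0), (2.0, -1.5)]
v = [3.0, -2.0, 1.5]
b = 0.25
bound = abs(b) + sum(abs(vi) for vi in v)   # hard output ceiling

for x in (10.0, 100.0, 1000.0):
    assert abs(net(x, hidden, v, b)) <= bound
print(bound)
```

No choice of weights can lift the output past this ceiling, so a function that keeps growing (such as x squared) cannot be tracked outside the fitted region.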

  19. Coherence resonance in bursting neural networks

    NASA Astrophysics Data System (ADS)

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J.

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal—a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  20. Coherence resonance in bursting neural networks.

    PubMed

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal, a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  1. Neural network based architectures for aerospace applications

    NASA Technical Reports Server (NTRS)

    Ricart, Richard

    1987-01-01

    A brief history of the field of neural networks research is given and some simple concepts are described. In addition, some neural network based avionics research and development programs are reviewed. The need for the United States Air Force and NASA to assume a leadership role in supporting this technology is stressed.

  2. Evolving neural networks for detecting breast cancer.

    PubMed

    Fogel, D B; Wasson, E C; Boughton, E M

    1995-09-01

    Artificial neural networks are applied to the problem of detecting breast cancer from histologic data. Evolutionary programming is used to train the networks. This stochastic optimization method reduces the chance of becoming trapped in locally optimal weight sets. Preliminary results indicate that very parsimonious neural nets can outperform other methods reported in the literature on the same data. The results are statistically significant.
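    The stochastic training idea described above can be sketched in miniature. This is a hedged illustration, not the authors' setup or data: a population of weight vectors is mutated with Gaussian noise and the fittest survive, with no gradients involved, which is what reduces the chance of becoming trapped in a poor local optimum. The toy task (logical AND with a single sigmoid neuron) and all settings are invented.

```python
import math
import random

# Evolutionary-programming-style training of one sigmoid neuron.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(w, x):
    s = w[0] * x[0] + w[1] * x[1] + w[2]     # w[2] is the bias
    return 1.0 / (1.0 + math.exp(-s))

def error(w):
    return sum((predict(w, x) - t) ** 2 for x, t in DATA)

rng = random.Random(1)
pop = [[rng.gauss(0, 1) for _ in range(3)] for _ in range(20)]
for _ in range(300):
    # Each parent produces one Gaussian-mutated offspring.
    children = [[wi + rng.gauss(0, 0.3) for wi in w] for w in pop]
    # Elitist truncation selection over parents + children.
    pop = sorted(pop + children, key=error)[:20]

best = pop[0]
print(error(best), [round(predict(best, x)) for x, _ in DATA])
```

Because selection is elitist, the best error never increases; after a few hundred generations the surviving neuron reproduces the AND truth table.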

  3. SEU fault tolerance in artificial neural networks

    SciTech Connect

    Velazco, R.; Assoum, A.; Radi, N.E.; Ecoffet, R.; Botey, X.

    1995-12-01

    In this paper the authors investigate the robustness of Artificial Neural Networks when encountering transient modification of information bits related to the network operation. These kinds of faults are likely to occur as a consequence of interaction with radiation. Results of tests performed to evaluate the fault tolerance properties of two different digital neural circuits are presented.

  4. Applications of Neural Networks in Finance.

    ERIC Educational Resources Information Center

    Crockett, Henry; Morrison, Ronald

    1994-01-01

    Discusses research with neural networks in the area of finance. Highlights include bond pricing, theoretical exposition of primary bond pricing, bond pricing regression model, and an example that created networks with corporate bonds and NeuralWare NeuralWorks Professional II software using the back-propagation technique. (LRW)

  5. Neural Network Computing and Natural Language Processing.

    ERIC Educational Resources Information Center

    Borchardt, Frank

    1988-01-01

    Considers the application of neural network concepts to traditional natural language processing and demonstrates that neural network computing architecture can: (1) learn from actual spoken language; (2) observe rules of pronunciation; and (3) reproduce sounds from the patterns derived by its own processes. (Author/CB)

  6. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.

  7. Radiation Behavior of Analog Neural Network Chip

    NASA Technical Reports Server (NTRS)

    Langenbacher, H.; Zee, F.; Daud, T.; Thakoor, A.

    1996-01-01

    A neural network experiment was conducted for the Space Technology Research Vehicle (STRV-1b), launched in June 1994. Identical sets of analog feed-forward neural network chips were used to study and compare the effects of space and ground radiation on the chips. Three failure mechanisms are noted.

  8. A Survey of Neural Network Publications.

    ERIC Educational Resources Information Center

    Vijayaraman, Bindiganavale S.; Osyk, Barbara

    This paper is a survey of publications on artificial neural networks published in business journals for the period ending July 1996. Its purpose is to identify and analyze trends in neural network research during that period. This paper shows which topics have been heavily researched, when these topics were researched, and how that research has…

  9. Adaptive Neurons For Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In improved mathematical model of neural-network processor, temperature of neurons (in addition to connection strengths, also called weights, of synapses) varied during supervised-learning phase of operation according to mathematical formalism and not heuristic rule. Evidence that biological neural networks also process information at neuronal level.
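    The temperature knob mentioned above can be made concrete. This is a hedged sketch of the general idea, not the modeled processor: a neuron's activation sigmoid(s / T) is shallow and nearly linear at high temperature and sharp and nearly binary at low temperature, which is the quantity such schemes vary during learning. The values below are invented.

```python
import math

# Temperature-dependent neuron: output = sigmoid(s / T).
# Large T flattens the response toward 0.5; small T sharpens it
# toward a hard threshold.

def activation(s, T):
    return 1.0 / (1.0 + math.exp(-s / T))

s = 1.0
hot = activation(s, 10.0)    # shallow response, close to 0.5
cold = activation(s, 0.1)    # sharp response, close to 1.0
print(hot, cold)
```

Lowering T during training gradually commits the neuron to near-binary decisions, analogous to annealing the connection weights alone.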

  10. Self-organization of neural networks

    NASA Astrophysics Data System (ADS)

    Clark, John W.; Winston, Jeffrey V.; Rafelski, Johann

    1984-05-01

    The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm ("brainwashing") is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena.

  11. Creativity in design and artificial neural networks

    SciTech Connect

    Neocleous, C.C.; Esat, I.I.; Schizas, C.N.

    1996-12-31

    The creativity phase is identified as an integral part of the design phase. The characteristics of creative persons that are relevant to designing artificial neural networks manifesting aspects of creativity are identified. Based on these identifications, a general framework of artificial neural network characteristics to implement such a goal is proposed.

  12. Neural Network Algorithm for Particle Loading

    SciTech Connect

    J. L. V. Lewandowski

    2003-04-25

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.

  13. Introduction to Concepts in Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  14. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.

    PubMed

    Ly, Cheng

    2015-12-01

    Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature in neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and simpler analytic descriptions based on this dimension reduction method are developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.

  15. Inferring cellular networks – a review

    PubMed Central

    Markowetz, Florian; Spang, Rainer

    2007-01-01

    In this review we give an overview of computational and statistical methods to reconstruct cellular networks. Although this area of research is vast and fast developing, we show that most currently used methods can be organized by a few key concepts. The first part of the review deals with conditional independence models including Gaussian graphical models and Bayesian networks. The second part discusses probabilistic and graph-based methods for data from experimental interventions and perturbations. PMID:17903286

  16. Enhancing neural-network performance via assortativity

    SciTech Connect

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-03-15

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.
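    The degree-degree correlation (assortativity) studied above is, in its simplest form, the Pearson correlation between the degrees found at the two ends of each edge. A minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def degree_assortativity(edges):
    """Pearson correlation of the degrees at either end of each edge,
    counting both orientations (i.e. treating the graph as undirected)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    return float(np.corrcoef(xs, ys)[0, 1])

# a star graph is maximally disassortative: the high-degree hub
# connects only to degree-1 leaves
star = [(0, i) for i in range(1, 6)]
print(degree_assortativity(star))  # strongly negative (close to -1)
```

    Positive values indicate assortative (hub-to-hub) wiring, which the paper finds enhances robustness to noise when hubs store the information.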

  17. A neural network approach to burst detection.

    PubMed

    Mounce, S R; Day, A J; Wood, A S; Khan, A; Widdop, P D; Machell, J

    2002-01-01

    This paper describes how hydraulic and water quality data from a distribution network may be used to provide a more efficient leakage management capability for the water industry. The research presented concerns the application of artificial neural networks to the issue of detection and location of leakage in treated water distribution systems. An architecture for an Artificial Neural Network (ANN) based system is outlined. The neural network uses time series data produced by sensors to directly construct an empirical model for prediction and classification of leaks. Results are presented using data from an experimental site in Yorkshire Water's Keighley distribution system. PMID:11936639

  18. Enhancing neural-network performance via assortativity.

    PubMed

    de Franciscis, Sebastiano; Johnson, Samuel; Torres, Joaquín J

    2011-03-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations--assortativity--on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.

  19. Introduction to artificial neural networks.

    PubMed

    Grossi, Enzo; Buscema, Massimo

    2007-12-01

    The coupling of computer science and theoretical bases such as nonlinear dynamics and chaos theory allows the creation of 'intelligent' agents, such as artificial neural networks (ANNs), able to adapt themselves dynamically to problems of high complexity. ANNs are able to reproduce the dynamic interaction of multiple factors simultaneously, allowing the study of complexity; they can also draw conclusions on individual basis and not as average trends. These tools can offer specific advantages with respect to classical statistical techniques. This article is designed to acquaint gastroenterologists with concepts and paradigms related to ANNs. The family of ANNs, when appropriately selected and used, permits the maximization of what can be derived from available data and from complex, dynamic, and multidimensional phenomena, which are often poorly predictable in the traditional 'cause and effect' philosophy. PMID:17998827

  20. Block-based neural networks.

    PubMed

    Moon, S W; Kong, S G

    2001-01-01

    This paper presents a novel block-based neural network (BBNN) model and the optimization of its structure and weights based on a genetic algorithm. The architecture of the BBNN consists of a 2D array of fundamental blocks with four variable input/output nodes and connection weights. Each block can have one of four different internal configurations depending on the structure settings. The BBNN model includes some restrictions, such as the 2D array and integer weights, in order to allow easier implementation with reconfigurable hardware such as field-programmable gate arrays (FPGAs). The structure and weights of the BBNN are encoded with bit strings which correspond to the configuration bits of the FPGA. The configuration bits are optimized globally using a genetic algorithm with 2D encoding and modified genetic operators. Simulations show that the optimized BBNN can solve engineering problems such as pattern classification and mobile robot control. PMID:18244385
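    The genetic optimization described operates on configuration bit strings. A toy sketch of that loop, with a placeholder fitness function standing in for the BBNN evaluation (population size, rates, and the fitness are all assumptions for illustration):

```python
import random

random.seed(1)
BITS = 32            # stand-in for the BBNN configuration bits
POP, GENS = 20, 60

def fitness(bits):
    # placeholder: count of ones; the paper would instead build and
    # evaluate a BBNN from the configuration bits
    return sum(bits)

def crossover(a, b):
    cut = random.randrange(1, BITS)     # one-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, p=0.02):
    return [b ^ (random.random() < p) for b in bits]  # rare bit flips

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 2]              # keep the better half
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]

best = max(pop, key=fitness)
```

    The paper additionally uses a 2D encoding and modified genetic operators matched to the block array; this sketch shows only the generic select/crossover/mutate cycle.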

  1. Sunspot prediction using neural networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Baffes, Paul

    1990-01-01

    The earliest systematic observation of sunspot activity is attributed to the Chinese in 1382, during the Ming Dynasty (1368 to 1644), when spots on the sun were noticed by looking at it through thick forest-fire smoke. Not until after the 18th century did sunspot levels become more than a source of wonderment and curiosity. Since 1834, reliable sunspot data have been collected by the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Naval Observatory. Recently, considerable effort has been placed upon the study of the effects of sunspots on the ecosystem and the space environment. The efforts of the Artificial Intelligence Section of the Mission Planning and Analysis Division of the Johnson Space Center involving the prediction of sunspot activity using neural network technologies are described.

  2. Neural networks for damage identification

    SciTech Connect

    Paez, T.L.; Klenke, S.E.

    1997-11-01

    Efforts to optimize the design of mechanical systems for preestablished use environments and to extend the durations of use cycles establish a need for in-service health monitoring. Numerous studies have proposed measures of structural response for the identification of structural damage, but few have suggested systematic techniques to guide the decision as to whether or not damage has occurred based on real data. Such techniques are necessary because in field applications the environments in which systems operate and the measurements that characterize system behavior are random. This paper investigates the use of artificial neural networks (ANNs) to identify damage in mechanical systems. Two probabilistic neural networks (PNNs) are developed and used to judge whether or not damage has occurred in a specific mechanical system, based on experimental measurements. The first PNN is a classical type that casts Bayesian decision analysis into an ANN framework; it uses exemplars measured from the undamaged and damaged system to establish whether system response measurements of unknown origin come from the former class (undamaged) or the latter class (damaged). The second PNN establishes the character of the undamaged system in terms of a kernel density estimator of measures of system response; when presented with system response measures of unknown origin, it makes a probabilistic judgment whether or not the data come from the undamaged population. The physical system used to carry out the experiments is an aerospace system component, and the environment used to excite the system is a stationary random vibration. The results of damage identification experiments are presented along with conclusions rating the effectiveness of the approaches.
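    The second PNN's kernel-density judgment can be illustrated with a Parzen-window classifier: estimate a class-conditional density from exemplars and pick the class with the higher density at the query point. A hedged sketch with synthetic exemplars (the data, bandwidth, and feature dimension are assumptions, not the paper's experiment):

```python
import numpy as np

def pnn_classify(x, exemplars_by_class, sigma=0.5):
    """Gaussian Parzen-window density estimate per class; returns the
    class whose estimated density at x is highest (Bayes rule with
    equal priors)."""
    best, best_p = None, -1.0
    for label, ex in exemplars_by_class.items():
        d2 = ((ex - x) ** 2).sum(axis=1)            # squared distances
        p = np.exp(-d2 / (2 * sigma ** 2)).mean()   # kernel density at x
        if p > best_p:
            best, best_p = label, p
    return best

rng = np.random.default_rng(0)
undamaged = rng.normal(0.0, 0.3, (40, 2))  # synthetic response measures
damaged = rng.normal(1.5, 0.3, (40, 2))
classes = {"undamaged": undamaged, "damaged": damaged}

print(pnn_classify(np.array([1.4, 1.6]), classes))  # -> damaged
```

    The first PNN in the paper compares densities of two known classes as above; the second uses the undamaged-class density alone to flag outliers.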

  3. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2003-12-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NO{sub x} formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. 
    The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate.

  4. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2004-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. 
    The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate around

  5. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2004-03-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing co-funding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. 
    The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate around

  6. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2002-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NO{sub x} formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, online, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate

  7. Neural networks and orbit control in accelerators

    SciTech Connect

    Bozoki, E.; Friedman, A.

    1994-07-01

    An overview of the architecture, workings and training of Neural Networks is given. We stress the aspects which are important for the use of Neural Networks for orbit control in accelerators and storage rings, especially its ability to cope with the nonlinear behavior of the orbit response to 'kicks' and the slow drift in the orbit response during long-term operation. Results obtained for the two NSLS storage rings with several network architectures and various training methods for each architecture are given.

  8. VLSI Cells Placement Using the Neural Networks

    SciTech Connect

    Azizi, Hacene; Zouaoui, Lamri; Mokhnache, Salah

    2008-06-12

    Artificial neural networks have been studied for several years, and their effectiveness makes it possible to expect high performance. The privileged fields of these techniques remain recognition and classification. Various optimization applications are also studied from the angle of artificial neural networks, which make it possible to apply distributed heuristic algorithms. In this article, a solution to the problem of placing the various cells during the realization of an integrated circuit is proposed using the Kohonen network.

  9. Nonlinear programming with feedforward neural networks.

    SciTech Connect

    Reifman, J.

    1999-06-02

    We provide a practical and effective method for solving constrained optimization problems by successively training a multilayer feedforward neural network in a coupled neural-network/objective-function representation. Nonlinear programming problems are easily mapped into this representation which has a simpler and more transparent method of solution than optimization performed with Hopfield-like networks and poses very mild requirements on the functions appearing in the problem. Simulation results are illustrated and compared with an off-the-shelf optimization tool.

  10. Neural network models for optical computing

    SciTech Connect

    Athale, R.A.; Davis, J.

    1988-01-01

    This volume comprises the record of the conference on neural network models for optical computing. In keeping with the interdisciplinary nature of the field, the invited papers are from diverse research areas, such as neuroscience, parallel architectures, neural modeling, and perception. The papers consist of three major classes: applications of optical neural nets for pattern classification, analysis, and image formation; development and analysis of neural net models that are particularly suited for optical implementation; experimental demonstrations of optical neural nets, particularly with adaptive interconnects.

  11. Neural network regulation driven by autonomous neural firings

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified by neural firings. Here, we study the effect of autonomous firings on network formation. Under temporally asymmetric Hebbian learning, bidirectional connections easily lose their balance and become unidirectional. Defining the difference between reciprocal connections as a new variable, we can express the learning dynamics as if the new variables were Ising spins interacting as in a magnetic system. We present a theoretical method to estimate the interaction between these new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.
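    The loss of balance between reciprocal connections under a temporally asymmetric Hebbian rule can be sketched for a single pair of neurons (the rule and rates below are illustrative assumptions, not the paper's equations):

```python
# temporally asymmetric Hebbian rule for one reciprocally connected pair:
# here neuron i consistently fires just before neuron j, so the forward
# weight w_ij is potentiated while the reverse weight w_ji is depressed
eta = 0.05                      # learning rate (assumed)
w_ij, w_ji = 0.5, 0.5           # initially balanced reciprocal weights

for _ in range(100):
    w_ij += eta * (1.0 - w_ij)  # potentiation, saturating at 1
    w_ji -= eta * w_ji          # depression, decaying toward 0

# the pair has become effectively unidirectional; the difference
# d plays the role of the paper's new spin-like variable
d = w_ij - w_ji
```

    With spontaneous (noisy) firings the sign of d per pair fluctuates and interacts with neighboring pairs, which is what motivates the Ising-spin analogy in the abstract.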

  12. Neural network based temporal video segmentation.

    PubMed

    Cao, X; Suganthan, P N

    2002-01-01

    The organization of video information in video databases requires automatic temporal segmentation with minimal user interaction. As neural networks are capable of learning the characteristics of various video segments and clustering them accordingly, in this paper, a neural network based technique is developed to segment the video sequence into shots automatically and with a minimum number of user-defined parameters. We propose to employ growing neural gas (GNG) networks and integrate multiple frame difference features to efficiently detect shot boundaries in the video. Experimental results are presented to illustrate the good performance of the proposed scheme on real video sequences. PMID:12370954

  13. Multispectral-image fusion using neural networks

    NASA Astrophysics Data System (ADS)

    Kagel, Joseph H.; Platt, C. A.; Donaven, T. W.; Samstad, Eric A.

    1990-08-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulation results and a description of the prototype system are presented.

  14. Multispectral image fusion using neural networks

    NASA Technical Reports Server (NTRS)

    Kagel, J. H.; Platt, C. A.; Donaven, T. W.; Samstad, E. A.

    1990-01-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  15. Neural networks techniques applied to reservoir engineering

    SciTech Connect

    Flores, M.; Barragan, C.

    1995-12-31

    Neural Networks are considered the greatest technological advance since the transistor. They are expected to be a common household item by the year 2000. An attempt has been made to apply Neural Networks to an important geothermal problem: predicting well production and well completion during drilling in a geothermal field. This was done in the Los Humeros geothermal field, using two common types of Neural Network models available in commercial software. Results show the learning capacity of the developed model and its precision in the predictions that were made.

  16. Attitude control of spacecraft using neural networks

    NASA Technical Reports Server (NTRS)

    Vadali, Srinivas R.; Krishnan, S.; Singh, T.

    1993-01-01

    This paper investigates the use of radial basis function neural networks for adaptive attitude control and momentum management of spacecraft. In the first part of the paper, neural networks are trained to learn from a family of open-loop optimal controls parameterized by the initial states and times-to-go. The trained network is then used for closed-loop control. In the second part of the paper, neural networks are used for direct adaptive control in the presence of unmodeled effects and parameter uncertainty. The control and learning laws are derived using the method of Lyapunov.

  17. Neural network with formed dynamics of activity

    SciTech Connect

    Dunin-Barkovskii, V.L.; Osovets, N.B.

    1995-03-01

    The problem of developing a neural network with a given pattern of the state sequence is considered. A neural network structure and an algorithm for forming its bond matrix, which lead to an approximate but robust solution of the problem, are proposed and discussed. Limiting characteristics of the serviceability of the proposed structure are studied. Various methods of visualizing dynamic processes in a neural network are compared. Possible applications of the results obtained for interpretation of neurophysiological data and in neuroinformatics systems are discussed.

  18. Vectorized algorithms for spiking neural network simulation.

    PubMed

    Brette, Romain; Goodman, Dan F M

    2011-06-01

    High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages. PMID:21395437
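    The core idea, advancing all neurons with array operations instead of per-neuron loops, can be sketched with a leaky integrate-and-fire population (all parameters are assumptions for illustration; this is not Brian's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps = 500, 300
dt, tau = 1e-3, 20e-3                 # 1 ms step, 20 ms membrane constant
v_th, v_reset, i_ext = 1.0, 0.0, 1.1  # threshold, reset, constant drive
v = rng.random(n)                     # all membrane potentials in one array
w = rng.normal(0.0, 0.02, (n, n))     # random synaptic weights (assumed)
spike_count = 0

for _ in range(steps):
    spiked = v >= v_th                # boolean spike vector, no Python loop
    spike_count += int(spiked.sum())
    v[spiked] = v_reset               # vectorized reset
    # leak + drive for every neuron, plus a single matrix-vector product
    # that delivers all synaptic inputs at once
    v = v + dt / tau * (i_ext - v) + w @ spiked
```

    Each time step costs one boolean comparison and one matrix-vector product over the whole population, which is the kind of operation interpreted languages execute at compiled speed through the array library.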

  19. A neural network prototyping package within IRAF

    NASA Technical Reports Server (NTRS)

    Bazell, D.; Bankman, I.

    1992-01-01

    We outline our plans for incorporating a Neural Network Prototyping Package into the IRAF environment. The package we are developing will allow the user to choose between different types of networks and to specify the details of the particular architecture chosen. Neural networks consist of a highly interconnected set of simple processing units. The strengths of the connections between units are determined by weights which are adaptively set as the network 'learns'. In some cases, learning can be a separate phase of the use cycle of the network, while in other cases the network learns continuously. Neural networks have been found to be very useful in pattern recognition and image processing applications. They can form very general 'decision boundaries' to differentiate between objects in pattern space, and they can be used for associative recall of patterns based on partial cues and for adaptive filtering. We discuss the different architectures we plan to use and give examples of what they can do.

  20. Adaptive holographic implementation of a neural network

    NASA Astrophysics Data System (ADS)

    Downie, John D.; Hine, Butler P., III; Reid, Max B.

    1990-07-01

    A holographic implementation for neural networks is proposed and demonstrated as an alternative to the optical matrix-vector multiplier architecture. In comparison, the holographic architecture makes more efficient use of the system space-bandwidth product for certain types of neural networks. The principal network component is a thermoplastic hologram, used to provide both interconnection weights and beam redirection. Given the updatable nature of this type of hologram, adaptivity or network learning is possible in the optical system. Two networks with fixed weights are experimentally implemented and verified, and for one of these examples we demonstrate the advantage of the holographic implementation with respect to the matrix-vector processor.

  1. Nonequilibrium landscape theory of neural networks.

    PubMed

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-11-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attraction represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found that the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulation are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degree of asymmetry of the connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring, and the flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye-movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments.

  2. [Do neural networks revolutionize our predictive abilities?].

    PubMed

    Brause, R W

    1999-01-01

    This paper argues for a rational treatment of patient data. It investigates data analysis with the aid of artificial neural networks. Successful example applications show that human diagnosis abilities are significantly worse than those of neural diagnosis systems. Using a newer architecture based on RBF nets as an example, the basic functionality is explained and it is shown how human and neural expertise can be coupled. Finally, the applications and problems of this kind of system are discussed.

  3. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    ERIC Educational Resources Information Center

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the characteristics…

  4. Applications of neural networks for gene finding.

    PubMed

    Sherriff, A; Ott, J

    2001-01-01

    A basic description of artificial neural networks is given and applications of neural nets to problems in human gene mapping are discussed. Specifically, three data types are considered: (1) affected sibpair data for nonparametric linkage analysis, (2) case-control data for disequilibrium analysis based on genetic markers, and (3) family data with trait and marker phenotypes and possibly environmental effects.

  5. Radar signal categorization using a neural network

    NASA Technical Reports Server (NTRS)

    Anderson, James A.; Gately, Michael T.; Penz, P. Andrew; Collins, Dean R.

    1991-01-01

    Neural networks were used to analyze a complex simulated radar environment which contains noisy radar pulses generated by many different emitters. The neural network used is an energy minimizing network (the BSB model) which forms energy minima - attractors in the network dynamical system - based on learned input data. The system first determines how many emitters are present (the deinterleaving problem). Pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, it is possible to make tentative identifications of them based on their observed parameters. As a test of this idea, a neural network was used to form a small data base that potentially could make emitter identifications.

  6. Using Neural Networks for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William

    1998-01-01

    This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.

  7. Imbibition well stimulation via neural network design

    DOEpatents

    Weiss, William

    2007-08-14

    A method for stimulation of hydrocarbon production via imbibition by utilization of surfactants. The method includes use of fuzzy logic and neural network architecture constructs to determine surfactant use.

  8. [Medical use of artificial neural networks].

    PubMed

    Molnár, B; Papik, K; Schaefer, R; Dombóvári, Z; Fehér, J; Tulassay, Z

    1998-01-01

    The main aim of research in medical diagnostics is to develop more exact, cost-effective and convenient systems, procedures and methods for supporting clinicians. In their paper the authors introduce a method that has recently come into focus, referred to as artificial neural networks. Based on the literature of the past 5-6 years, they give a brief review, highlighting the most important applications, of the idea behind neural networks and what they are used for in the medical field. The definition, structure and operation of neural networks are discussed. In the application part they collect examples in order to give an insight into neural network application research. It is emphasised that in the near future basically new diagnostic equipment can be developed based on this new technology in the field of ECG, EEG and macroscopic and microscopic image analysis systems.

  9. WD40 proteins propel cellular networks.

    PubMed

    Stirnimann, Christian U; Petsalaki, Evangelia; Russell, Robert B; Müller, Christoph W

    2010-10-01

    Recent findings indicate that WD40 domains play central roles in biological processes by acting as hubs in cellular networks; however, they have been studied less intensely than other common domains, such as the kinase, PDZ or SH3 domains. As suggested by various interactome studies, they are among the most promiscuous interactors. Structural studies suggest that this property stems from their ability, as scaffolds, to interact with diverse proteins, peptides or nucleic acids using multiple surfaces or modes of interaction. A general scaffolding role is supported by the fact that no WD40 domain has been found with intrinsic enzymatic activity despite often being part of large molecular machines. We discuss the WD40 domain distributions in protein networks and structures of WD40-containing assemblies to demonstrate their versatility in mediating critical cellular functions.

  10. Neural-Network Controller For Vibration Suppression

    NASA Technical Reports Server (NTRS)

    Boussalis, Dhemetrios; Wang, Shyh Jong

    1995-01-01

    Neural-network-based adaptive-control system proposed for vibration suppression of flexible space structures. Controller features three-layer neural network and utilizes output feedback. Measurements generated by various sensors on structure. Feedforward path also included to speed up response in case plant exhibits predominantly linear dynamic behavior. System applicable to single-input single-output systems. Work extended to multiple-input multiple-output systems as well.

  11. Wavelet neural networks for stock trading

    NASA Astrophysics Data System (ADS)

    Zheng, Tianxing; Fataliyev, Kamaladdin; Wang, Lipo

    2013-05-01

    This paper explores the application of a wavelet neural network (WNN), whose hidden layer is comprised of neurons with adjustable wavelets as activation functions, to stock prediction. We discuss some basic rationales behind technical analysis, based on which the inputs of the prediction system are carefully selected. This system is tested on the Istanbul Stock Exchange National 100 Index and compared with traditional neural networks. The results show that the WNN can achieve very good prediction accuracy.

  12. Using neural networks in software repositories

    NASA Technical Reports Server (NTRS)

    Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.

    1992-01-01

    The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.

  13. Neural network simulations of the nervous system.

    PubMed

    van Leeuwen, J L

    1990-01-01

    Present knowledge of brain mechanisms is mainly based on anatomical and physiological studies. Such studies are however insufficient to understand the information processing of the brain. The present new focus on neural network studies is the most likely candidate to fill this gap. The present paper reviews some of the history and current status of neural network studies. It signals some of the essential problems for which answers have to be found before substantial progress in the field can be made. PMID:2245130

  14. Neural network segmentation of magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Frederick, Blaise

    1990-07-01

    Neural networks are well adapted to the task of grouping input patterns into subsets which share some similarity. Moreover, once trained, they can generalize their classification rules to classify new data sets. Sets of pixel intensities from magnetic resonance (MR) images provide a natural input to a neural network: by varying imaging parameters, MR images can reflect various independent physical parameters of tissues in their pixel intensities. A neural net can then be trained to classify physically similar tissue types based on sets of pixel intensities resulting from different imaging studies on the same subject. A neural network classifier for image segmentation was implemented on a Sun 4/60 and was tested on the task of classifying tissues of canine head MR images. Four images of a transaxial slice with different imaging sequences were taken as input to the network (three spin-echo images and an inversion recovery image). The training set consisted of 691 representative samples of gray matter, white matter, cerebrospinal fluid, bone, and muscle preclassified by a neuroscientist. The network was trained using a fast backpropagation algorithm to derive the decision criteria to classify any location in the image by its pixel intensities, and the image was subsequently segmented by the classifier. The classifier's performance was evaluated as a function of network size, number of network layers, and length of training. A single layer neural network performed quite well at

  15. Dynamic recurrent neural networks: a dynamical analysis.

    PubMed

    Draye, J S; Pavisic, D A; Cheron, G A; Libert, G A

    1996-01-01

    In this paper, we explore the dynamical features of a neural network model which presents two types of adaptive parameters: the classical weights between the units and the time constants associated with each artificial neuron. The purpose of this study is to provide a strong theoretical basis for modeling and simulating dynamic recurrent neural networks. In order to achieve this, we study the effect of the statistical distribution of the weights and of the time constants on the network dynamics, and we make a statistical analysis of the neural transformation. We examine the network power spectra (to draw some conclusions about the frequency behaviour of the network) and we compute the stability regions to explore the stability of the model. We show that the network is sensitive to variations of the mean values of the weights and the time constants (because of the temporal aspects of the learned tasks). Nevertheless, our results highlight the improvements in the network dynamics due to the introduction of adaptive time constants and indicate that dynamic recurrent neural networks can bring new powerful features to the field of neural computing.

  16. A neural network simulation package in CLIPS

    NASA Technical Reports Server (NTRS)

    Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John

    1990-01-01

    The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). A very important by-product of this research has been the emergence of an integrated technique of using rule-based systems in conjunction with neural networks to solve complex problems. The system provides a tool kit for an integrated use of the two techniques and is also extendible to accommodate other AI techniques like semantic networks, connectionist networks, and even Petri nets. This integrated technique can be very useful in solving complex AI problems.

  17. Optimization neural network for solving flow problems.

    PubMed

    Perfetti, R

    1995-01-01

    This paper describes a neural network for solving flow problems, which are of interest in many areas of application, as in fuel, hydro, and electric power scheduling. The neural network consists of two layers: a hidden layer and an output layer. The hidden units correspond to the nodes of the flow graph. The output units represent the branch variables. The network has a linear order of complexity, it is easily programmable, and it is suited for analog very large scale integration (VLSI) realization. The functionality of the proposed network is illustrated by a simulation example concerning the maximal flow problem. PMID:18263420

  18. Logarithmic learning for generalized classifier neural network.

    PubMed

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network has been introduced as an efficient classifier among others. Unless the initial smoothing parameter value is close to the optimal one, however, the generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of squared error. Minimization of this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets, and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operation range of the radial basis function included in the generalized classifier neural network, the proposed logarithmic approach and its derivative have continuous values. This makes it possible to exploit the fast convergence of the logarithmic cost function in the proposed learning method. Due to this fast convergence, training time is decreased by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, the proposed method provides a solution to the time requirement problem of the generalized classifier neural network and may also improve classification accuracy.
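
    The abstract does not reproduce the paper's cost function. As an illustration of why a logarithmic cost can converge faster than squared error, the standard cross-entropy loss (an assumption here, not necessarily the paper's exact formulation) cancels the sigmoid's vanishing derivative in its gradient:

```python
def squared_error_grad(y, t):
    # d/dz of 0.5*(y - t)^2 with y = sigmoid(z): the sigmoid derivative
    # y*(1 - y) appears, vanishing as y saturates near 0 or 1.
    return (y - t) * y * (1 - y)

def log_loss_grad(y, t):
    # d/dz of the cross-entropy -(t*log y + (1-t)*log(1-y)) with
    # y = sigmoid(z): the y*(1 - y) factor cancels, so the gradient
    # stays large even for saturated, badly wrong outputs.
    return y - t
```

    For a saturated, wrong output (y = 0.999 with target 0) the squared-error gradient is roughly a thousandth of the logarithmic one, which is the mechanism behind the reduced iteration counts.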

  19. Neural networks for segmentation, tracking, and identification

    NASA Astrophysics Data System (ADS)

    Rogers, Steven K.; Ruck, Dennis W.; Priddy, Kevin L.; Tarr, Gregory L.

    1992-09-01

    The main thrust of this paper is to encourage the use of neural networks to process raw data for subsequent classification. This article addresses neural network techniques for processing raw pixel information. For this paper the definition of neural networks includes conventional artificial neural networks such as multilayer perceptrons and also biologically inspired processing techniques. Previously, we have successfully used the biologically inspired Gabor transform to process raw pixel information and segment images. In this paper we extend those ideas to both segment and track objects in multiframe sequences. It is also desirable for the neural network processing the data to learn features for subsequent recognition. A common first step for processing raw data is to transform the data and use the transform coefficients as features for recognition. For example, handwritten English characters become linearly separable in the feature space of the low-frequency Fourier coefficients. Much of human visual perception can be modelled by assuming that the human visual system uses the low-frequency Fourier coefficients as its feature space. The optimum linear transform, with respect to reconstruction, is the Karhunen-Loeve transform (KLT). It has been shown that some neural network architectures can compute approximations to the KLT. The KLT coefficients can be used for recognition as well as for compression. We tested the use of the KLT on the problem of interfacing a nonverbal patient to a computer. The KLT uses an optimal basis set for object reconstruction. For object recognition, the KLT may not be optimal.
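
    The KLT mentioned above is the eigenbasis of the data covariance matrix. A minimal sketch for the two-dimensional case, where the eigendecomposition has a closed form (function name and test data are illustrative):

```python
import math

def klt_basis_2d(data):
    """KLT (principal component) basis for 2-D samples: eigenvalues and
    the dominant eigenvector of the 2x2 covariance matrix, closed form."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    # Covariance matrix entries [[cxx, cxy], [cxy, cyy]]
    cxx = sum((x - mx) ** 2 for x, _ in data) / n
    cyy = sum((y - my) ** 2 for _, y in data) / n
    cxy = sum((x - mx) * (y - my) for x, y in data) / n
    # Eigenvalues from trace and determinant
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc  # l1 >= l2
    # Eigenvector for the dominant eigenvalue l1
    if abs(cxy) > 1e-12:
        v = (l1 - cyy, cxy)
    else:
        v = (1.0, 0.0) if cxx >= cyy else (0.0, 1.0)
    norm = math.hypot(*v)
    return (l1, l2), (v[0] / norm, v[1] / norm)
```

    Projecting onto the leading eigenvectors gives the optimal linear reconstruction the abstract refers to; keeping only those coefficients is the compression use, feeding them to a classifier is the recognition use.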

  20. Neural-Network Object-Recognition Program

    NASA Technical Reports Server (NTRS)

    Spirkovska, L.; Reid, M. B.

    1993-01-01

    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.

  1. A neural network for visual pattern recognition

    SciTech Connect

    Fukushima, K.

    1988-03-01

    A modeling approach, which is a synthetic approach using neural network models, continues to gain importance. In the modeling approach, the authors study how to interconnect neurons to synthesize a brain model, which is a network with the same functions and abilities as the brain. The relationship between modeling neural networks and neurophysiology resembles that between theoretical physics and experimental physics. Modeling takes a synthetic approach, while neurophysiology or psychology takes an analytical approach. Modeling neural networks is useful in explaining the brain and also in engineering applications. It brings the results of neurophysiological and psychological research to engineering applications in the most direct way possible. This article discusses a neural network model thus obtained, a model with selective attention in visual pattern recognition.

  2. Lambda and the edge of chaos in recurrent neural networks.

    PubMed

    Seifter, Jared; Reggia, James A

    2015-01-01

    The idea that there is an edge of chaos, a region in the space of dynamical systems having special meaning for complex living entities, has a long history in artificial life. The significance of this region was first emphasized in cellular automata models when a single simple measure, λCA, identified it as a transitional region between order and chaos. Here we introduce a parameter λNN that is inspired by λCA but is defined for recurrent neural networks. We show through a series of systematic computational experiments that λNN generally orders the dynamical behaviors of randomly connected/weighted recurrent neural networks in the same way that λCA does for cellular automata. By extending this ordering to larger values of λNN than has typically been done with λCA and cellular automata, we find that a second edge-of-chaos region exists on the opposite side of the chaotic region. These basic results are found to hold under different assumptions about network connectivity, but vary substantially in their details. The results show that the basic concept underlying the lambda parameter can usefully be extended to other types of complex dynamical systems than just cellular automata.
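
    The paper's λNN is defined for recurrent networks and is not reproduced in the abstract; the cellular-automaton parameter λCA that inspired it (Langton's lambda) is simply the fraction of rule-table transitions that do not map to the quiescent state. A minimal sketch:

```python
def langton_lambda(rule_table, quiescent=0):
    """Langton's lambda for a CA rule table (neighborhood -> next state):
    the fraction of transitions that do NOT map to the quiescent state.
    lambda = 0 means every transition dies out (maximal order); values
    near the middle of the range are where complex behavior appears."""
    non_quiescent = sum(1 for out in rule_table.values() if out != quiescent)
    return non_quiescent / len(rule_table)

# Rule table for elementary CA rule 30: the n-th bit of 30 is the output
# for the 3-cell neighborhood whose bits encode the integer n.
rule30 = {n: (30 >> n) & 1 for n in range(8)}
```

    Rule 30, a classic chaotic elementary CA, sits at λ = 0.5 under this measure.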

  3. Artificial astrocytes improve neural network performance.

    PubMed

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of performances of NN with different number of neurons or different architectures indicate that the effects of NGN cannot be accounted for an increased number of network elements, but rather they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and established the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  4. Artificial Astrocytes Improve Neural Network Performance

    PubMed Central

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of performances of NN with different number of neurons or different architectures indicate that the effects of NGN cannot be accounted for an increased number of network elements, but rather they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and established the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  5. Multisensor neural network approach to mine detection

    NASA Astrophysics Data System (ADS)

    Iler, Amber L.; Marble, Jay A.; Rauss, Patrick J.

    2001-10-01

    A neural network is applied to data collected by the close-in detector for the Mine Hunter Killer (MHK) project with promising results. We use the ground penetrating radar (GPR) and metal detector to create three channels (two from the GPR) and train a basic, two-layer (single hidden layer), feed-forward neural network. By experimenting with the number of hidden nodes and training goals, we were able to surpass the performance of the single sensors when we fused the three channels via our neural network and applied the trained net to different data. The fused sensors exceeded the best single-sensor performance above 95 percent detection by providing a lower, though still high, false alarm rate. And though our three-channel neural net worked best, we saw an increase in performance with fewer than three channels as well.

  6. Optical neural stimulation modeling on degenerative neocortical neural networks

    NASA Astrophysics Data System (ADS)

    Zverev, M.; Fanjul-Vélez, F.; Salas-García, I.; Arce-Diego, J. L.

    2015-07-01

    Neurodegenerative diseases usually appear at advanced age. Medical advances make people live longer and as a consequence, the number of neurodegenerative diseases continuously grows. There is still no cure for these diseases, but several brain stimulation techniques have been proposed to improve patients' condition. One of them is Optical Neural Stimulation (ONS), which is based on the application of optical radiation over specific brain regions. The outer cerebral zones can be noninvasively stimulated, without the common drawbacks associated to surgical procedures. This work focuses on the analysis of ONS effects in stimulated neurons to determine their influence in neuronal activity. For this purpose a neural network model has been employed. The results show the neural network behavior when the stimulation is provided by means of different optical radiation sources and constitute a first approach to adjust the optical light source parameters to stimulate specific neocortical areas.

  7. Fuzzy logic and neural networks

    SciTech Connect

    Loos, J.R.

    1994-11-01

    Combine fuzzy logic's fuzzy sets, fuzzy operators, fuzzy inference, and fuzzy rules - like defuzzification - with neural networks and you can arrive at very unfuzzy real-time control. Fuzzy logic, cursed with a very whimsical title, simply means multivalued logic, which includes not only the conventional two-valued (true/false) crisp logic, but also the logic of three or more values. This means one can assign logic values of true, false, and somewhere in between. This is where fuzziness comes in. Multi-valued logic avoids the black-and-white, all-or-nothing assignment of true or false to an assertion. Instead, it permits the assignment of shades of gray. When assigning a value of true or false to an assertion, the numbers typically used are "1" or "0". This is the case for programmed systems. If "0" means "false" and "1" means "true," then "shades of gray" are any numbers between 0 and 1. Therefore, "nearly true" may be represented by 0.8 or 0.9, "nearly false" may be represented by 0.1 or 0.2, and "your guess is as good as mine" may be represented by 0.5. The flexibility available to one is limitless. One can associate any meaning, such as "nearly true", with any value of any granularity, such as 0.9999. 2 figs.
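
    The "shades of gray" arithmetic described above is usually implemented with Zadeh's min/max operators (a common convention; the article itself does not specify which operators are used):

```python
def fuzzy_and(a, b):
    """Conjunction of two truth degrees in [0, 1] via the min t-norm."""
    return min(a, b)

def fuzzy_or(a, b):
    """Disjunction via the max t-conorm."""
    return max(a, b)

def fuzzy_not(a):
    """Negation: complement with respect to 1."""
    return 1.0 - a
```

    On the crisp values 0 and 1 these reduce exactly to ordinary Boolean AND, OR, and NOT, which is why multivalued logic subsumes two-valued logic.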

  8. Weight discretization paradigm for optical neural networks

    NASA Astrophysics Data System (ADS)

    Fiesler, Emile; Choudry, Amar; Caulfield, H. John

    1990-08-01

    Neural networks are a primary candidate architecture for optical computing. One of the major problems in using neural networks for optical computers is that the information holders, the interconnection strengths (or weights), are normally real-valued (continuous), whereas optics (light) is only capable of representing a few distinguishable intensity levels (discrete). In this paper a weight discretization paradigm is presented for back(ward error) propagation neural networks which can work with a very limited number of discretization levels. The number of interconnections in a (fully connected) neural network grows quadratically with the number of neurons of the network. Optics can handle a large number of interconnections because light beams do not interfere with each other. A vast number of light beams can therefore be used per unit of area. However, the number of different values one can represent in a light beam is very limited. A flexible, portable (machine-independent) neural network software package which is capable of weight discretization is presented. The development of the software and some experiments have been done on personal computers. The major part of the testing, which requires a lot of computation, has been done using a CRAY X-MP/24 supercomputer.
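
    The discretization idea can be illustrated by uniform quantization of weights onto a handful of representable levels (a generic sketch, not the paper's specific paradigm; the range and level count are illustrative parameters):

```python
def discretize(weights, levels, w_min=-1.0, w_max=1.0):
    """Snap each continuous weight to the nearest of `levels` evenly
    spaced values in [w_min, w_max], e.g. the few intensity levels an
    optical interconnect can actually represent."""
    step = (w_max - w_min) / (levels - 1)
    out = []
    for w in weights:
        w = min(max(w, w_min), w_max)  # clip to the representable range
        out.append(w_min + round((w - w_min) / step) * step)
    return out
```

    With 5 levels on [-1, 1] the representable weights are just -1, -0.5, 0, 0.5, 1; training must then tolerate the rounding error this introduces.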

  9. Artificial Neural Networks and Instructional Technology.

    ERIC Educational Resources Information Center

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  10. Higher-Order Neural Networks Recognize Patterns

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Spirkovska, Lilly; Ochoa, Ellen

    1996-01-01

    Networks of higher order have enhanced capabilities to distinguish between different two-dimensional patterns and to recognize those patterns. Also enhanced capabilities to "learn" patterns to be recognized: "trained" with far fewer examples and, therefore, in less time than necessary to train comparable first-order neural networks.

  11. Neural-Network Modeling Of Arc Welding

    NASA Technical Reports Server (NTRS)

    Anderson, Kristinn; Barnett, Robert J.; Springfield, James F.; Cook, George E.; Strauss, Alvin M.; Bjorgvinsson, Jon B.

    1994-01-01

    Artificial neural networks considered for use in monitoring and controlling gas/tungsten arc-welding processes. Relatively simple network, using 4 welding equipment parameters as inputs, estimates 2 critical weld-bead parameters within 5 percent. Advantage is computational efficiency.

  12. Orthogonal Patterns In A Binary Neural Network

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1991-01-01

    Report presents some recent developments in theory of binary neural networks. Subject matter relevant to associate (content-addressable) memories and to recognition of patterns - both of considerable importance in advancement of robotics and artificial intelligence. When probed by any pattern, network converges to one of stored patterns.

  13. Electronic device aspects of neural network memories

    NASA Technical Reports Server (NTRS)

    Lambe, J.; Moopenn, A.; Thakoor, A. P.

    1985-01-01

    The basic issues related to the electronic implementation of the neural network model (NNM) for content addressable memories are examined. A brief introduction to the principles of the NNM is followed by an analysis of the information storage of the neural network in the form of a binary connection matrix and the recall capability of such matrix memories based on a hardware simulation study. In addition, materials and device architecture issues involved in the future realization of such networks in VLSI-compatible ultrahigh-density memories are considered. A possible space application of such devices would be in the area of large-scale information storage without mechanical devices.

  14. Improving neural network performance on SIMD architectures

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Ilin, Dmitry; Nikolaev, Dmitry

    2015-12-01

    Neural network calculations for image recognition problems can be very time consuming. In this paper we propose three methods of increasing neural network performance on SIMD architectures. The usage of SIMD extensions is a way to speed up neural network processing available for a number of modern CPUs. In our experiments, we use ARM NEON as the SIMD architecture example. The first method deals with the half-float data type for matrix computations. The second method describes a fixed-point data type for the same purpose. The third method considers a vectorized activation function implementation. For each method we set up a series of experiments for convolutional and fully connected networks designed for the image recognition task.
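
    The fixed-point method can be sketched in scalar form: weights and activations are scaled to integers, integer products accumulate extra fractional bits, and a single rescaling recovers the result (a generic Q-format illustration, not the paper's ARM NEON code):

```python
def to_fixed(x, frac_bits=8):
    """Quantize a float to a signed fixed-point integer with
    frac_bits fractional bits (Q-format)."""
    return int(round(x * (1 << frac_bits)))

def fixed_dot(a, b, frac_bits=8):
    """Dot product of two fixed-point vectors. Each product of two
    Q-format numbers carries 2*frac_bits fractional bits, so a single
    rescaling at the end converts the accumulator back to a float."""
    acc = sum(x * y for x, y in zip(a, b))
    return acc / float(1 << (2 * frac_bits))
```

    The inner loop is then pure integer multiply-accumulate, which is exactly the operation SIMD integer lanes execute several-per-cycle.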

  15. Applying neural networks in autonomous systems

    NASA Astrophysics Data System (ADS)

    Thornbrugh, Allison L.; Layne, J. D.; Wilson, James M., III

    1992-03-01

    Autonomous and teleautonomous operations have been defined in a variety of ways by different groups involved with remote robotic operations. For example, Conway describes architectures for producing intelligent actions in teleautonomous systems. Applying neural nets in such systems is similar to applying them in general. However, for autonomy, learning or learned behavior may become a significant system driver. Thus, artificial neural networks are being evaluated as components in fully autonomous and teleautonomous systems. Feedforward networks may be trained to perform adaptive signal processing, pattern recognition, data fusion, and function approximation -- as in control subsystems. Certain components of particular autonomous systems become more amenable to implementation using a neural net due to a match between the net's attributes and desired attributes of the system component. Criteria have been developed for distinguishing such applications and then implementing them. The success of hardware implementation is a crucial part of this application evaluation process. Three basic applications of neural nets -- autoassociation, classification, and function approximation -- are used to exemplify this process and to highlight procedures that are followed during the requirements, design, and implementation phases. This paper assumes some familiarity with basic neural network terminology and concentrates upon the use of different neural network types while citing references that cover the underlying mathematics and related research.

  16. Cellular automata modelling of biomolecular networks dynamics.

    PubMed

    Bonchev, D; Thomas, S; Apte, A; Kier, L B

    2010-01-01

    The modelling of biological systems dynamics is traditionally performed by ordinary differential equations (ODEs). When dealing with intracellular networks of genes, proteins and metabolites, however, this approach is hindered by network complexity and the lack of experimental kinetic parameters. This opened the field for other modelling techniques, such as cellular automata (CA) and agent-based modelling (ABM). This article reviews this emerging field of studies on network dynamics in molecular biology. The basics of the CA technique are discussed along with an extensive list of related software and websites. The application of CA to networks of biochemical reactions is exemplified in detail by the case studies of the mitogen-activated protein kinase (MAPK) signalling pathway, and the FAS-ligand (FASL)-induced and Bcl-2-related apoptosis. The potential of the CA method to model basic pathway patterns, to identify ways to control pathway dynamics and to help in generating strategies to fight cancer is demonstrated. A different line of CA applications presented includes the search for the best-performing network motifs, and an analysis of their importance for effective intracellular signalling and pathway cross-talk. PMID:20373215
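    As a flavor of the CA technique reviewed here, the following toy sketch (not from the article; states and rules are assumptions) treats a 1-D lattice as an excitable medium, a crude analogue of a signal travelling along a pathway. Each cell is resting, active, or refractory; an active cell excites resting neighbors, then recovers:

```python
# States: 0 = resting (.), 1 = active (*), 2 = refractory (o).
def step(cells):
    new = []
    for i, s in enumerate(cells):
        if s == 1:
            new.append(2)                      # active -> refractory
        elif s == 2:
            new.append(0)                      # refractory -> resting
        else:
            left = cells[i - 1] if i > 0 else 0
            right = cells[i + 1] if i < len(cells) - 1 else 0
            new.append(1 if 1 in (left, right) else 0)
    return new

cells = [0] * 11
cells[0] = 1                                   # stimulus at one end
for t in range(12):
    print("".join(".*o"[s] for s in cells))    # watch the wave travel right
    cells = step(cells)
```

    The refractory state prevents the wave from re-exciting cells behind it, so a single stimulus produces one travelling pulse -- the kind of qualitative pathway behavior the review obtains without any kinetic parameters.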

  17. Neural network optimization, components, and design selection

    NASA Astrophysics Data System (ADS)

    Weller, Scott W.

    1990-07-01

    Neural Networks are part of a revived technology which has received a lot of hype in recent years. As is apt to happen in any hyped technology, jargon and predictions make its assimilation and application difficult. Nevertheless, Neural Networks have found use in a number of areas, working on non-trivial and noncontrived problems. For example, one net has been trained to "read", translating English text into phoneme sequences. Other applications of Neural Networks include data base manipulation and the solving of routing and classification types of optimization problems. Neural Networks are constructed from neurons, which in electronics or software attempt to model but are not constrained by the real thing, i.e., neurons in our gray matter. Neurons are simple processing units connected to many other neurons over pathways which modify the incoming signals. A single synthetic neuron typically sums its weighted inputs, runs this sum through a non-linear function, and produces an output. In the brain, neurons are connected in a complex topology: in hardware/software the topology is typically much simpler, with neurons lying side by side, forming layers of neurons which connect to the layer of neurons which receive their outputs. This simplistic model is much easier to construct than the real thing, and yet can solve real problems. The information in a network, or its "memory", is completely contained in the weights on the connections from one neuron to another. Establishing these weights is called "training" the network. Some networks are trained by design -- once constructed, no further learning takes place. Other types of networks require iterative training once wired up, but are not trainable once taught. Still other types of networks can continue to learn after initial construction. The main benefit of using Neural Networks is their ability to work with conflicting or incomplete ("fuzzy") data sets. This ability and its usefulness will become evident in the following
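    The neuron model described in this abstract (weighted sum through a nonlinearity, neurons lying side by side in layers) can be written out in a few lines; the weights and values below are illustrative, not from the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    """One synthetic neuron: weighted sum, then a squashing nonlinearity."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def layer(inputs, weight_rows, biases):
    """A layer is just neurons side by side sharing the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

x = [1.0, 0.0]
hidden = layer(x, [[0.5, -0.5], [-1.0, 1.0]], [0.0, 0.5])
out = neuron(hidden, [1.0, -1.0], 0.0)
print(hidden, out)
```

    All of the network's "memory" lives in the weight lists; training, in whatever form, only ever changes those numbers.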

  18. Predicting lithologic parameters using artificial neural networks

    SciTech Connect

    Link, C.A.; Wideman, C.J.; Hanneman, D.L.

    1995-06-01

    Artificial neural networks (ANNs) are becoming increasingly popular as a method for parameter classification and as a tool for recognizing complex relationships in a variety of data types. The power of ANNs lies in their ability to "learn" from a set of training data and then "generalize" to new data sets. In addition, ANNs are able to incorporate data over a large range of scales and are robust in the presence of noise. A back propagation artificial neural network has proved to be a useful tool for predicting sequence boundaries from well logs in a Cenozoic basin. The network was trained using the following log set: neutron porosity, bulk density, pef, and interpreted paleosol horizons from a well in the Deer Lodge Valley, southwestern Montana. After successful training, this network was applied to the same set of well logs from a nearby well minus the interpreted paleosol horizons. The trained neural network was able to produce reasonable predictions for paleosol sequence boundaries in the test well based on the previous training. In an ongoing oil reservoir characterization project, a back propagation neural network is being used to produce estimates of porosity and permeability for subsequent input into a reservoir simulator. A combination of core, well log, geological, and 3-D seismic data serves as input to a back propagation network which outputs estimates of the spatial distribution of porosity and permeability away from the well.

  19. Using neural networks to describe tracer correlations

    NASA Astrophysics Data System (ADS)

    Lary, D. J.; Müller, M. D.; Mussa, H. Y.

    2003-11-01

    Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 until the present. The neural network Fortran code used is available for download.

  20. Using neural networks to describe tracer correlations

    NASA Astrophysics Data System (ADS)

    Lary, D. J.; Müller, M. D.; Mussa, H. Y.

    2004-01-01

    Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and methane volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 until the present. The neural network Fortran code used is available for download.

  1. Using Neural Networks to Describe Tracer Correlations

    NASA Technical Reports Server (NTRS)

    Lary, D. J.; Mueller, M. D.; Mussa, H. Y.

    2003-01-01

    Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 until the present. The neural network Fortran code used is available for download.
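    The Quickprop learning rule named in these abstracts (Fahlman's method) can be illustrated on a toy one-parameter error surface; this stand-in problem is an assumption, not the tracer-correlation network itself. Quickprop treats the error surface as locally quadratic and jumps to the parabola's minimum using a secant estimate of the curvature: dw(t) = dw(t-1) * S(t) / (S(t-1) - S(t)), where S is the current error gradient:

```python
def grad(w):
    # gradient of the stand-in error surface E(w) = (w - 3)^2
    return 2.0 * (w - 3.0)

w = 0.0
dw = 0.5                         # bootstrap step before the secant rule applies
prev_s = grad(w)
w += dw
for _ in range(20):
    s = grad(w)
    if prev_s == s:
        break                    # gradient unchanged: nothing left to do
    dw = dw * s / (prev_s - s)   # Quickprop secant jump
    prev_s = s
    w += dw
print(w)                         # converges to the minimum at w = 3
```

    Because the stand-in surface really is quadratic, the secant jump lands on the minimum in a single update; on real error surfaces the rule is applied per weight and merely accelerates convergence.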

  2. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor data to fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor data to fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Secondly, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.

  3. Neural network training with global optimization techniques.

    PubMed

    Yamazaki, Akio; Ludermir, Teresa B

    2003-04-01

    This paper presents an approach of using Simulated Annealing and Tabu Search for the simultaneous optimization of neural network architectures and weights. The problem considered is odor recognition in an artificial nose. Both methods have produced networks with high classification performance and low complexity. Generalization has been improved by using the backpropagation algorithm for fine tuning. The combination of simple and traditional search methods has been shown to be very suitable for generating compact and efficient networks.
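    A minimal sketch of simulated annealing over a weight vector, in the spirit of this abstract; the toy data, the tiny linear unit, and the cooling schedule are invented placeholders, not the odor-recognition setup. A random perturbation is accepted if it lowers the error, or with Boltzmann probability exp(-delta/T) otherwise:

```python
import math, random

random.seed(0)

data = [([0.0, 0.0], 0.0), ([1.0, 1.0], 1.0)]    # toy training pairs

def error(w):
    # squared error of a tiny linear unit with weights [w1, w2, bias]
    return sum((w[0]*x1 + w[1]*x2 + w[2] - y) ** 2 for (x1, x2), y in data)

w = [random.uniform(-1, 1) for _ in range(3)]
best, best_err = list(w), error(w)
t = 1.0
while t > 1e-3:
    cand = [wi + random.gauss(0, 0.2) for wi in w]
    delta = error(cand) - error(w)
    if delta < 0 or random.random() < math.exp(-delta / t):
        w = cand                                  # accept the move
        if error(w) < best_err:
            best, best_err = list(w), error(w)    # remember the best state
    t *= 0.99                                     # geometric cooling
print(best_err)
```

    Early on, the high temperature lets the search accept uphill moves and escape poor regions; as t shrinks the acceptance rule becomes effectively greedy, which is what makes annealing usable for weight (and, with an encoding, architecture) search.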

  4. Estimates on compressed neural networks regression.

    PubMed

    Zhang, Yongquan; Li, Youmei; Sun, Jianyong; Ji, Jiabing

    2015-03-01

    When the neural element number n of neural networks is larger than the sample size m, the overfitting problem arises since there are more parameters than actual data (more variables than constraints). In order to overcome the overfitting problem, we propose to reduce the number of neural elements by using a compressed projection A which does not need to satisfy the condition of the Restricted Isometry Property (RIP). By applying probability inequalities and approximation properties of the feedforward neural networks (FNNs), we prove that solving the FNNs regression learning algorithm in the compressed domain instead of the original domain reduces the sample error at the price of an increased (but controlled) approximation error, where the covering number theory is used to estimate the excess error, and an upper bound of the excess error is given.

  5. Flexible body control using neural networks

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1992-01-01

    Progress is reported on the control of the Control Structures Interaction suitcase demonstrator (a flexible structure) using neural networks and fuzzy logic. It is concluded that while control by neural nets alone (i.e., allowing the net to design a controller with no human intervention) has yielded less than optimal results, the neural net trained to emulate the existing fuzzy logic controller does produce acceptable system responses for the initial conditions examined. Also, a neural net was found to be very successful in performing the emulation step necessary for the anticipatory fuzzy controller for the CSI suitcase demonstrator. The fuzzy neural hybrid, which exhibits good robustness and noise rejection properties, shows promise as a controller for practical flexible systems, and should be further evaluated.

  6. Higher-order artificial neural networks

    SciTech Connect

    Bengtsson, M.

    1990-12-01

    The report investigates the storage capacity of an artificial neural network where the state of each neuron depends on quadratic correlations of all other neurons, i.e. a third order network. This is in contrast to a standard Hopfield network where the state of each single neuron depends on the state of every other neuron, without any correlations. The storage capacity of a third order network is larger than that of standard Hopfield by one order of N. However, the number of connections is also larger by an order of N. It is shown that the storage capacity per connection is identical for standard Hopfield and for this third order network.
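    For contrast with the third-order model, the standard (second-order) Hopfield network it is compared against can be sketched in a few lines: Hebbian storage of +/-1 patterns and asynchronous sign-threshold recall. The pattern and sizes below are illustrative only:

```python
def train(patterns):
    """Hebbian weight matrix: w[i][j] accumulates pairwise correlations."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, sweeps=5):
    """Asynchronous updates: each neuron takes the sign of its local field."""
    n = len(state)
    s = list(state)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, -1, 1, -1, 1, -1]
w = train([stored])
noisy = list(stored)
noisy[0] = -noisy[0]                            # corrupt one bit
print(recall(w, noisy) == stored)
```

    In the third-order network discussed in the report, the local field h would instead sum triple products w[i][j][k] * s[j] * s[k], which is where both the extra capacity and the extra order-N connection count come from.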

  7. Neural networks for sign language translation

    NASA Astrophysics Data System (ADS)

    Wilson, Beth J.; Anspach, Gretel

    1993-09-01

    A neural network is used to extract relevant features of sign language from video images of a person communicating in American Sign Language or Signed English. The key features are hand motion, hand location with respect to the body, and handshape. A modular hybrid design is under way to apply various techniques, including neural networks, in the development of a translation system that will facilitate communication between deaf and hearing people. One of the neural networks described here is used to classify video images of handshapes into their linguistic counterpart in American Sign Language. The video image is preprocessed to yield Fourier descriptors that encode the shape of the hand silhouette. These descriptors are then used as inputs to a neural network that classifies their shapes. The network is trained with various examples from different signers and is tested with new images from new signers. The results have shown that for coarse handshape classes, the network is invariant to the type of camera used to film the various signers and to the segmentation technique.

  8. Integrated semiconductor optical sensors for cellular and neural imaging

    NASA Astrophysics Data System (ADS)

    Levi, Ofer; Lee, Thomas T.; Lee, Meredith M.; Smith, Stephen J.; Harris, James S.

    2007-04-01

    We review integrated optical sensors for functional brain imaging, localized index-of-refraction sensing as part of a lab-on-a-chip, and in vivo continuous monitoring of tumor and cancer stem cells. We present semiconductor-based sensors and imaging systems for these applications. Measured intrinsic optical signals and tissue optics simulations indicate the need for high dynamic range and low dark-current neural sensors. Simulated and measured reflectance spectra from our guided resonance filter demonstrate the capability for index-of-refraction sensing on cellular scales, compatible with integrated biosensors. Finally, we characterized a thermally evaporated emission filter that can be used to improve sensitivity for in vivo fluorescence sensing.

  9. Recurrent Neural Network for Computing Outer Inverse.

    PubMed

    Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin

    2016-05-01

    Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
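    The general idea behind such matrix-valued dynamic equations can be sketched as a gradient-like ODE integrated by Euler steps; this is an assumed simplification for the nonsingular case, not the paper's exact model. The dynamics dX/dt = -gamma * A^T (A X - I), started from the zero initial state, has X = A^{-1} as its equilibrium:

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

A = [[2.0, 1.0], [0.0, 4.0]]
I = [[1.0, 0.0], [0.0, 1.0]]
X = [[0.0, 0.0], [0.0, 0.0]]                    # zero initial state
gamma, dt = 1.0, 0.02                           # illustrative gain and step

for _ in range(5000):
    R = matmul(A, X)                            # A X
    E = [[R[i][j] - I[i][j] for j in range(2)] for i in range(2)]
    G = matmul(transpose(A), E)                 # A^T (A X - I)
    X = [[X[i][j] - gamma * dt * G[i][j] for j in range(2)]
         for i in range(2)]

print(X)   # approaches inv(A) = [[0.5, -0.125], [0.0, 0.25]]
```

    Stability of the Euler iteration requires gamma * dt below 2 over the largest eigenvalue of A^T A, which is the kind of spectrum condition the abstract alludes to for the first approach.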

  10. Multitask neural network for vision machine systems

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1991-02-01

    A multi-task dynamic neural network that can be programmed for storing, processing, and encoding spatio-temporal visual information is presented in this paper. This dynamic neural network, called the PN-network, is comprised of numerous densely interconnected neural subpopulations which reside in one of two coupled sublayers, P or N. The subpopulations in the P-sublayer transmit an excitatory or positive influence onto all interconnected units, whereas the subpopulations in the N-sublayer transmit an inhibitory or negative influence. The dynamical activity generated by each subpopulation is given by a nonlinear first-order system. By varying the coupling strength between these different subpopulations it is possible to generate three distinct modes of dynamical behavior useful for performing vision-related tasks. It is postulated that the PN-network can function as a basic programmable processor for novel vision machine systems.

  11. Classification of radar clutter using neural networks.

    PubMed

    Haykin, S; Deng, C

    1991-01-01

    A classifier that incorporates both preprocessing and postprocessing procedures as well as a multilayer feedforward network (based on the back-propagation algorithm) in its design to distinguish between several major classes of radar returns including weather, birds, and aircraft is described. The classifier achieves an average classification accuracy of 89% on generalization for data collected during a single scan of the radar antenna. The procedures of feature selection for neural network training, the classifier design considerations, the learning algorithm development, the implementation, and the experimental results of the neural clutter classifier, which is simulated on a Warp systolic computer, are discussed. A comparative evaluation of the multilayer neural network with a traditional Bayes classifier is presented.

  12. Automatic identification of species with neural networks.

    PubMed

    Hernández-Serna, Andrés; Jiménez-Segura, Luz Fernanda

    2014-01-01

    A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% of true positive fish identifications, 92.87% of plants and 93.25% of butterflies. Our results highlight how the neural networks are complementary to species identification.

  13. Alpha spectral analysis via artificial neural networks

    SciTech Connect

    Kangas, L.J.; Hashem, S.; Keller, P.E.; Kouzes, R.T.; Troyer, G.L.

    1994-10-01

    An artificial neural network system that assigns quality factors to alpha particle energy spectra is discussed. The alpha energy spectra are used to detect plutonium contamination in the work environment. The quality factors represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with a quality factor by an expert and used in training the artificial neural network expert system. The investigation shows that the expert knowledge of alpha spectra quality factors can be transferred to an ANN system.

  14. Ferroelectric Memory Capacitors For Neural Networks

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita; Moopenn, Alexander W.; Stadler, Henry L.

    1991-01-01

    Thin-film ferroelectric capacitors proposed as nonvolatile analog memory devices. Intended primarily for use as synaptic connections in electronic neural networks. Connection strengths (synaptic weights) stored as nonlinear remanent polarizations of ferroelectric films. Ferroelectric memory and interrogation capacitors combined into memory devices in vertical or lateral configurations. Photoconductive layer modulated by light provides variable resistance to alter bias signal applied to memory capacitor. Features include nondestructive readout, simplicity, and resistance to ionizing radiation. Interrogated without destroying stored analog data. Also amenable to very-large-scale integration. Allows use of ac coupling, eliminating errors caused by dc offsets in amplifier circuits of neural networks.

  15. Automatic identification of species with neural networks

    PubMed Central

    Hernández-Serna, Andrés; Jiménez-Segura, Luz Fernanda

    2014-01-01

    A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% of true positive fish identifications, 92.87% of plants and 93.25% of butterflies. Our results highlight how the neural networks are complementary to species identification. PMID:25392749

  16. Optically excited synapse for neural networks.

    PubMed

    Boyd, G D

    1987-07-15

    What can optics with its promise of parallelism do for neural networks which require matrix multipliers? An all optical approach requires optical logic devices which are still in their infancy. An alternative is to retain electronic logic while optically addressing the synapse matrix. This paper considers several versions of an optically addressed neural network compatible with VLSI that could be fabricated with the synapse connection unspecified. This optical matrix multiplier circuit is compared to an all electronic matrix multiplier. For the optical version a synapse consisting of back-to-back photodiodes is found to have a suitable i-v characteristic for optical matrix multiplication (a linear region) plus a clipping or nonlinear region as required for neural networks. Four photodiodes per synapse are required. The strength of the synapse connection is controlled by the optical power and is thus an adjustable parameter. The synapse network can be programmed in various ways such as a shadow mask of metal, imaged mask (static), or light valve or an acoustooptic scanned laser beam or array of beams (dynamic). A milliwatt from LEDs or lasers is adequate power. The neuron has a linear transfer function and is either a summing amplifier, in which case the synapse signal is current, or an integrator, in which case the synapse signal is charge, the choice of which depends on the programming mode. Optical addressing and settling times of microseconds are anticipated. Electronic neural networks using single-value resistor synapses or single-bit programmable synapses have been demonstrated in the high-gain region of discrete single-value feedback. As an alternative to these networks and the above proposed optical synapses, an electronic analog-voltage vector matrix multiplier is considered using MOSFETS as the variable conductance in CMOS VLSI. It is concluded that a shadow mask addressed (static) optical neural network is promising. PMID:20489950

  17. Autonomous robot behavior based on neural networks

    NASA Astrophysics Data System (ADS)

    Grolinger, Katarina; Jerbic, Bojan; Vranjes, Bozo

    1997-04-01

    The purpose of an autonomous robot is to solve various tasks while adapting its behavior to a variable environment; it is expected to navigate much like a human would, including handling uncertain and unexpected obstacles. To achieve this, the robot has to be able to find solutions to unknown situations, to learn from experience (that is, to retain action procedures together with the corresponding knowledge of the workspace structure), and to recognize its working environment. The planning of intelligent robot behavior presented in this paper implements reinforcement learning based on strategic and random attempts for finding solutions, and a neural network approach for memorizing and recognizing the workspace structure (the structural assignment problem). Some of the well-known neural networks based on unsupervised learning are considered with regard to the structural assignment problem. The adaptive fuzzy shadowed neural network is developed. It has an additional shadowed hidden layer, a specific learning rule, and an initialization phase. The developed neural network combines advantages of networks based on Adaptive Resonance Theory and, using the shadowed hidden layer, provides the ability to recognize lightly translated or rotated obstacles in any direction.

  18. Porosity Log Prediction Using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Dwi Saputro, Oki; Lazuardi Maulana, Zulfikar; Dzar Eljabbar Latief, Fourier

    2016-08-01

    Well logging is important in oil and gas exploration. Many physical parameters of a reservoir are derived from well logging measurements. Geophysicists often use well logging to obtain reservoir properties such as porosity, water saturation, and permeability. Most of the time, the measurement of these reservoir properties is considered expensive. One method to substitute for the measurement is to conduct a prediction using an artificial neural network. In this paper, an artificial neural network is used to predict porosity log data from other log data. Three wells from the ‘yy’ field are used to conduct the prediction experiment. The log data are sonic, gamma ray, and porosity logs. One of the three wells is used as training data for the artificial neural network, which employs the Levenberg-Marquardt backpropagation algorithm. Through several trials, we find that the most optimal training input is sonic log data and gamma ray log data with a hidden layer of 10 nodes. The prediction result in well 1 has a correlation of 0.92 and a mean squared error of 5.67 x 10-4. The trained network was then applied to the other well data. The results show that the correlation in well 2 and well 3 is 0.872 and 0.9077, respectively. The mean squared error in well 2 and well 3 is 11 x 10-4 and 9.539 x 10-4. From these results we conclude that the sonic log and gamma ray log can be a good combination for predicting porosity with a neural network.

  19. Optimal flux patterns in cellular metabolic networks

    NASA Astrophysics Data System (ADS)

    Almaas, Eivind

    2007-06-01

    The availability of whole-cell-level metabolic networks of high quality has made it possible to develop a predictive understanding of bacterial metabolism. Using the optimization framework of flux balance analysis, I investigate the metabolic response and activity patterns to variations in the availability of nutrient and chemical factors such as oxygen and ammonia by simulating 30 000 random cellular environments. The distribution of reaction fluxes is heavy tailed for the bacteria H. pylori and E. coli, and the eukaryote S. cerevisiae. While the majority of flux balance investigations has relied on implementations of the simplex method, it is necessary to use interior-point optimization algorithms to adequately characterize the full range of activity patterns on metabolic networks. The interior-point activity pattern is bimodal for E. coli and S. cerevisiae, suggesting that most metabolic reactions are either in frequent use or are rarely active. The trimodal activity pattern of H. pylori indicates that a group of its metabolic reactions (20%) are active in approximately half of the simulated environments. Constructing the high-flux backbone of the network for every environment, there is a clear trend that the more frequently a reaction is active, the more likely it is a part of the backbone. Finally, I briefly discuss the predicted activity patterns of the central carbon metabolic pathways for the sample of random environments.

  20. Optimal flux patterns in cellular metabolic networks

    SciTech Connect

    Almaas, E

    2007-01-20

    The availability of whole-cell level metabolic networks of high quality has made it possible to develop a predictive understanding of bacterial metabolism. Using the optimization framework of flux balance analysis, I investigate metabolic response and activity patterns to variations in the availability of nutrient and chemical factors such as oxygen and ammonia by simulating 30,000 random cellular environments. The distribution of reaction fluxes is heavy-tailed for the bacteria H. pylori and E. coli, and the eukaryote S. cerevisiae. While the majority of flux balance investigations have relied on implementations of the simplex method, it is necessary to use interior-point optimization algorithms to adequately characterize the full range of activity patterns on metabolic networks. The interior-point activity pattern is bimodal for E. coli and S. cerevisiae, suggesting that most metabolic reactions are either in frequent use or are rarely active. The trimodal activity pattern of H. pylori indicates that a group of its metabolic reactions (20%) are active in approximately half of the simulated environments. Constructing the high-flux backbone of the network for every environment, there is a clear trend that the more frequently a reaction is active, the more likely it is a part of the backbone. Finally, I briefly discuss the predicted activity patterns of the central-carbon metabolic pathways for the sample of random environments.
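    A toy flux-balance computation (an assumed example, far smaller than the genome-scale models these abstracts discuss) shows the structure of the optimization: maximize an objective flux subject to the steady-state condition S v = 0 and capacity bounds. For the three-reaction chain nutrient -(v1)-> A -(v2)-> B -(v3)-> biomass, steady state forces all fluxes equal, so a simple scan of that one-dimensional feasible set stands in for the simplex or interior-point solvers used in practice:

```python
S = [[1, -1, 0],        # metabolite A: made by v1, consumed by v2
     [0, 1, -1]]        # metabolite B: made by v2, consumed by v3
uptake_max = 10.0       # capacity bound on the nutrient uptake flux v1

def steady(v, tol=1e-9):
    """Steady-state (flux balance) condition S v = 0."""
    return all(abs(sum(S[m][r] * v[r] for r in range(3))) < tol
               for m in range(2))

best = None
for k in range(21):                 # scan v1 = v2 = v3 in steps of 0.5
    v = [k * 0.5] * 3
    if steady(v) and v[0] <= uptake_max:
        if best is None or v[2] > best[2]:
            best = v                # keep the flux vector maximizing biomass
print(best)
```

    The biomass flux maxes out exactly at the uptake bound, the generic FBA outcome: the optimum sits on a constraint of the feasible polytope, which is why the choice of solver (simplex corners versus interior points) shapes the activity patterns reported above.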

  1. Experimental fault characterization of a neural network

    NASA Technical Reports Server (NTRS)

    Tan, Chang-Huong

    1990-01-01

    The effects of a variety of faults on a neural network are quantified via simulation. The neural network consists of a single-layered clustering network and a three-layered classification network. The percentage of vectors mistagged by the clustering network, the percentage of vectors misclassified by the classification network, the time taken for the network to stabilize, and the output values are all measured. The results show that both transient and permanent faults have a significant impact on the performance of the measured network. The corresponding mistag and misclassification percentages are typically within 5 to 10 percent of each other. The average mistag percentage and the average misclassification percentage are both about 25 percent. After relearning, the percentage of misclassifications is reduced to 9 percent. In addition, transient faults are found to cause the network to be increasingly unstable as the duration of a transient is increased. The impact of link faults is relatively insignificant in comparison with node faults (1 versus 19 percent misclassified after relearning). There is a linear increase in the mistag and misclassification percentages with decreasing hardware redundancy. In addition, the mistag and misclassification percentages linearly decrease with increasing network size.

  2. A Wireless Communications Laboratory on Cellular Network Planning

    ERIC Educational Resources Information Center

    Dawy, Z.; Husseini, A.; Yaacoub, E.; Al-Kanj, L.

    2010-01-01

    The field of radio network planning and optimization (RNPO) is central to wireless cellular network design, deployment, and enhancement. Wireless cellular operators invest huge sums of capital in deploying, launching, and maintaining their networks in order to ensure competitive performance and high user satisfaction. This work presents a lab…

  3. A neural network with modular hierarchical learning

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F. (Inventor); Toomarian, Nikzad (Inventor)

    1994-01-01

    This invention provides a new hierarchical approach for supervised neural learning of time dependent trajectories. The modular hierarchical methodology leads to architectures which are more structured than fully interconnected networks. The networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamic effects. The advantages include the sparsity of units and connections and the modular organization. A further advantage is that learning is much more circumscribed than in fully interconnected systems. The present invention is embodied by a neural network including a plurality of neural modules each having a pre-established performance capability wherein each neural module has an output outputting present results of the performance capability and an input for changing the present results of the performance capability. For pattern recognition applications, the performance capability may be an oscillation capability producing a repeating wave pattern as the present results. In the preferred embodiment, each of the plurality of neural modules includes a pre-established capability portion and a performance adjustment portion connected to control the pre-established capability portion.

  4. Development of programmable artificial neural networks

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J.

    1993-01-01

    Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks (ANNs) are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time-consuming and unpredictable process. A general method is being developed to mate the adaptability of the ANN with the speed and precision of the digital computer. This method was successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.

  5. Artificial neural networks for classifying olfactory signals.

    PubMed

    Linder, R; Pöppl, S J

    2000-01-01

    For practical applications, artificial neural networks have to meet several requirements: mainly, they should learn quickly, classify accurately, and behave robustly. Programs should be user-friendly and should not need the presence of an expert for fine-tuning diverse learning parameters. The present paper demonstrates an approach using an oversized network topology, adaptive propagation (APROP), a modified error function, and the averaging of the outputs of four networks, described here for the first time. As an example, signals from different semiconductor gas sensors of an electronic nose were classified. The electronic nose smelt different types of edible oil with extremely different a priori probabilities. The fully-specified neural network classifier fulfilled the above-mentioned demands. The new approach will be helpful not only for classifying olfactory signals automatically but also in many other fields in medicine, e.g. in data mining from medical databases.

  6. An introduction to neural networks: A tutorial

    SciTech Connect

    Walker, J.L.; Hill, E.V.K.

    1994-12-31

    Neural networks are a powerful set of mathematical techniques used for solving linear and nonlinear classification and prediction (function approximation) problems. Inspired by studies of the brain, these series and parallel combinations of simple functional units called artificial neurons have the ability to learn or be trained to solve very complex problems. Fundamental aspects of artificial neurons are discussed, including their activation functions, their combination into multilayer feedforward networks with hidden layers, and the use of bias neurons to reduce training time. The back propagation (of errors) paradigm for supervised training of feedforward networks is explained. Then, the architecture and mathematics of a Kohonen self organizing map for unsupervised learning are discussed. Two example problems are given. The first is for the application of a back propagation neural network to learn the correct response to an input vector using supervised training. The second is a classification problem using a self organizing map and unsupervised training.
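The back propagation paradigm the tutorial explains can be sketched end to end in a few lines. The 2-2-1 topology, the logical-OR training set, and the learning rate below are illustrative choices, not taken from the tutorial:

```python
import math
import random

random.seed(1)

def sig(z):
    """Sigmoid activation function."""
    return 1.0 / (1.0 + math.exp(-z))

# Toy training set: logical OR (linearly separable, so convergence is reliable).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# 2-2-1 feedforward network; the last weight in each row is the bias.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    y = sig(W2[0] * h[0] + W2[1] * h[1] + W2[2])
    return h, y

def epoch(lr=0.5):
    """One pass of on-line back propagation; returns the mean squared error."""
    err = 0.0
    for x, t in data:
        h, y = forward(x)
        err += (y - t) ** 2
        dy = (y - t) * y * (1 - y)                               # output delta
        dh = [dy * W2[i] * h[i] * (1 - h[i]) for i in range(2)]  # hidden deltas
        for i in range(2):
            W2[i] -= lr * dy * h[i]
            W1[i][0] -= lr * dh[i] * x[0]
            W1[i][1] -= lr * dh[i] * x[1]
            W1[i][2] -= lr * dh[i]                               # hidden bias
        W2[2] -= lr * dy                                         # output bias
    return err / len(data)

loss_start = epoch()
for _ in range(500):
    loss_end = epoch()
```

Each epoch performs the forward pass, propagates the output error back through the hidden layer, and moves every weight down the error gradient, so the mean squared error shrinks over successive epochs.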

  7. Auto-associative nanoelectronic neural network

    SciTech Connect

    Nogueira, C. P. S. M.; Guimarães, J. G.

    2014-05-15

    In this paper, an auto-associative neural network using single-electron tunneling (SET) devices is proposed and simulated at low temperature. The nanoelectronic auto-associative network is able to converge to a stable state, previously stored during training. The recognition of the pattern involves decreasing the energy of the input state until it achieves a point of local minimum energy, which corresponds to one of the stored patterns.
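The energy-decreasing recall described above is the classical auto-associative (Hopfield-style) dynamic; a sketch in conventional code rather than single-electron devices, with a hypothetical 8-bit stored pattern:

```python
# One stored pattern p (entries ±1); Hebbian weights W = p pT with zero diagonal.
p = [1, -1, 1, 1, -1, 1, -1, 1]
N = len(p)
W = [[(p[i] * p[j] if i != j else 0) for j in range(N)] for i in range(N)]

def recall(state, steps=5):
    """Synchronous sign updates that lower the network energy to a fixed point."""
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(N)) >= 0 else -1
                 for i in range(N)]
    return state

# Corrupt the stored pattern by flipping one bit, then recall it.
noisy = list(p)
noisy[3] = -noisy[3]
recovered = recall(noisy)
```

Starting from the corrupted input, the updates decrease the energy until the state settles into the stored pattern, the local energy minimum mentioned in the abstract.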

  8. Are artificial neural networks black boxes?

    PubMed

    Benitez, J M; Castro, J L; Requena, I

    1997-01-01

    Artificial neural networks are efficient computing models which have shown their strengths in solving hard problems in artificial intelligence. They have also been shown to be universal approximators. Notwithstanding, one of the major criticisms is their being black boxes, since no satisfactory explanation of their behavior has been offered. In this paper, we provide such an interpretation of neural networks so that they will no longer be seen as black boxes. This is stated after establishing the equality between a certain class of neural nets and fuzzy rule-based systems. The interpretation is built with fuzzy rules using a new fuzzy logic operator, which is defined after introducing the concept of f-duality. In addition, this interpretation offers an automated knowledge acquisition procedure.

  9. Negative transfer problem in neural networks

    NASA Astrophysics Data System (ADS)

    Abunawass, Adel M.

    1992-07-01

    Harlow, 1949, observed that when human subjects were trained to perform simple discrimination tasks over a sequence of successive training sessions (trials), their performance improved as a function of the successive sessions. Harlow called this phenomenon `learning-to-learn.' The subjects acquired knowledge and improved their ability to learn in future training sessions. It seems that previous training sessions contribute positively to the current one. Abunawass & Maki, 1989, observed that when a neural network (using the back-propagation model) is trained over successive sessions, the performance and learning ability of the network degrade as a function of the training sessions. In some cases this leads to a complete paralysis of the network. Abunawass & Maki called this phenomenon the `negative transfer' problem, since previous training sessions contribute negatively to the current one. The effect of the negative transfer problem is in clear contradiction to that reported by Harlow on human subjects. Since the ability to model human cognition and learning is one of the most important goals (and claims) of neural networks, the negative transfer problem represents a clear limitation to this ability. This paper describes a new neural network sequential learning model known as Adaptive Memory Consolidation. In this model the network uses its past learning experience to enhance its future learning ability. Adaptive Memory Consolidation has led to the elimination and reversal of the effect of the negative transfer problem, thus producing a `positive transfer' effect similar to Harlow's learning-to-learn phenomenon.

  10. Dynamic process modeling with recurrent neural networks

    SciTech Connect

    You, Yong; Nikolaou, M. . Dept. of Chemical Engineering)

    1993-10-01

    Mathematical models play an important role in control system synthesis. However, due to the inherent nonlinearity, complexity and uncertainty of chemical processes, it is usually difficult to obtain an accurate model for a chemical engineering system. A method of nonlinear static and dynamic process modeling via recurrent neural networks (RNNs) is studied. An RNN model is a set of coupled nonlinear ordinary differential equations in continuous time domain with nonlinear dynamic node characteristics as well as both feedforward and feedback connections. For such networks, each physical input to a system corresponds to exactly one input to the network. The system's dynamics are captured by the internal structure of the network. The structure of RNN models may be more natural and attractive than that of feedforward neural network models, but computation time for training is longer. Simulation results show that RNNs can learn both steady-state relationships and process dynamics of continuous and batch, single-input/single-output and multi-input/multi-output systems in a simple and direct manner. Training of RNNs shows only small degradation in the presence of noise in the training data. Thus, RNNs constitute a feasible alternative to layered feedforward back-propagation neural networks in steady-state and dynamic process modeling and model-based control.
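An RNN model of the kind described is a set of coupled nonlinear ODEs; a minimal sketch that Euler-integrates a two-unit continuous-time network to steady state (the weights, input, and step size are illustrative assumptions, not the paper's):

```python
import math

# Two-unit continuous-time RNN: dx/dt = -x + W·tanh(x) + u, with fixed input u.
W = [[0.0, 0.5], [-0.4, 0.0]]
u = [1.0, 0.5]

def simulate(steps=20000, dt=0.005):
    """Euler-integrate the network from rest and return the final state."""
    x = [0.0, 0.0]
    for _ in range(steps):
        r = [math.tanh(v) for v in x]
        dx = [-x[i] + sum(W[i][j] * r[j] for j in range(2)) + u[i]
              for i in range(2)]
        x = [x[i] + dt * dx[i] for i in range(2)]
    return x

x_final = simulate()
```

For a process model the input u would play the role of the plant input and the settled state the learned static gain; training, which adjusts W against measured data, is omitted here.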

  11. Psychometric Measurement Models and Artificial Neural Networks

    ERIC Educational Resources Information Center

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. Despite this interest, the…

  12. Localizing Tortoise Nests by Neural Networks.

    PubMed

    Barbuti, Roberto; Chessa, Stefano; Micheli, Alessio; Pucci, Rita

    2016-01-01

    The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging, and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.
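An input delay neural network consumes a sliding window of recent samples rather than a single reading; a sketch of that delay-line preprocessing, with a hypothetical one-axis stream and window length (the actual ARS uses multi-axis accelerometer data and its own segment sizes):

```python
def delay_windows(samples, delay=4):
    """Turn a sample stream into overlapping vectors of `delay` readings,
    the input format an input delay neural network (IDNN) consumes."""
    return [samples[i:i + delay] for i in range(len(samples) - delay + 1)]

# Hypothetical one-axis accelerometer stream (arbitrary units).
stream = [0.1, 0.3, 0.2, 0.8, 0.9, 0.7, 0.2]
windows = delay_windows(stream, delay=4)
```

Classifying each window independently and filtering the per-window outputs over time is what lets the system accumulate evidence across a long activity such as digging.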

  13. Active Sampling in Evolving Neural Networks.

    ERIC Educational Resources Information Center

    Parisi, Domenico

    1997-01-01

    Comments on Raftopoulos article (PS 528 649) on facilitative effect of cognitive limitation in development and connectionist models. Argues that the use of neural networks within an "Artificial Life" perspective can more effectively contribute to the study of the role of cognitive limitations in development and their genetic basis than can using…

  14. Neural network application to comprehensive engine diagnostics

    NASA Technical Reports Server (NTRS)

    Marko, Kenneth A.

    1994-01-01

    We have previously reported on the use of neural networks for detection and identification of faults in complex microprocessor controlled powertrain systems. The data analyzed in those studies consisted of the full spectrum of signals passing between the engine and the real-time microprocessor controller. The specific task of the classification system was to classify system operation as nominal or abnormal and to identify the fault present. The primary concern in earlier work was the identification of faults in sensors or actuators in the powertrain system as it was exercised over its full operating range. The use of data from a variety of sources, each contributing some potentially useful information to the classification task, is commonly referred to as sensor fusion and typifies the type of problems successfully addressed using neural networks. In this work we explore the application of neural networks to a different diagnostic problem, the diagnosis of faults in newly manufactured engines, and the utility of neural networks for process control.

  15. Nonlinear Time Series Analysis via Neural Networks

    NASA Astrophysics Data System (ADS)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with a time series analysis based on neural networks in order to achieve effective forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)] pattern recognition. Our goal is to find and recognize important patterns which repeatedly appear in the market history and to adapt our trading system's behaviour based on them.

  17. Localizing Tortoise Nests by Neural Networks

    PubMed Central

    2016-01-01

    The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging, and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition. PMID:26985660

  18. Multidimensional neural growing networks and computer intelligence

    SciTech Connect

    Yashchenko, V.A.

    1995-03-01

    This paper examines information-computation processes in time and in space and some aspects of computer intelligence using multidimensional matrix neural growing networks. In particular, issues of object-oriented "thinking" of computers are considered.

  19. Brain tumor grading based on Neural Networks and Convolutional Neural Networks.

    PubMed

    Yuehao Pan; Weimin Huang; Zhiping Lin; Wanzheng Zhu; Jiayin Zhou; Wong, Jocelyn; Zhongxiang Ding

    2015-08-01

    This paper studies brain tumor grading using multiphase MRI images and compares the results with various configurations of deep learning structure and baseline Neural Networks. The MRI images are fed directly into the learning machine, with some combination operations between multiphase MRIs. Compared to other research, which involves additional effort to design and choose feature sets, the approach used in this paper leverages the learning capability of the deep learning machine. We present the grading performance on the testing data measured by the sensitivity and specificity. The results show a maximum improvement of 18% on grading performance of Convolutional Neural Networks based on sensitivity and specificity compared to Neural Networks. We also visualize the kernels trained in different layers and display some self-learned features obtained from Convolutional Neural Networks. PMID:26736358
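Sensitivity and specificity, the measures used to report grading performance, can be computed from a binary confusion matrix as follows (the labels below are hypothetical, not the paper's data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    from binary labels, 1 = high grade, 0 = low grade."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical grading results on ten test cases.
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
preds = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]
sens, spec = sensitivity_specificity(truth, preds)
```

Reporting both measures, rather than raw accuracy, matters when the two tumor grades are unevenly represented in the test set.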

  1. A novel method combining cellular neural networks and the coupled nonlinear oscillators' paradigm involving a related bifurcation analysis for robust image contrast enhancement in dynamically changing difficult visual environments

    NASA Astrophysics Data System (ADS)

    Chamberlain Chedjou, Jean; Kyamakya, Kyandoghere

    2010-10-01

    It is well known that a machine vision-based analysis of a dynamic scene, for example in the context of advanced driver assistance systems (ADAS), does require real-time processing capabilities. Therefore, the system used must be capable of performing both robust and ultrafast analyses. Machine vision in ADAS must fulfil the above requirements when dealing with a dynamically changing visual context (e.g. driving in darkness or in a foggy environment). Among the various challenges related to the analysis of a dynamic scene, this paper focuses on contrast enhancement, which is a well-known basic operation to improve the visual quality of an image (dynamic or static) suffering from poor illumination. The key objective is to develop a systematic and fundamental concept for image contrast enhancement that should be robust despite a dynamic environment and that should fulfil the real-time constraints by ensuring an ultrafast analysis. It is demonstrated that the new approach developed in this paper is capable of fulfilling the expected requirements. The proposed approach combines the good features of the 'coupled oscillators'-based signal processing paradigm with the good features of the 'cellular neural network (CNN)'-based one. The first paradigm in this combination is the 'master system' and consists of a set of coupled nonlinear ordinary differential equations (ODEs) that are (a) the so-called 'van der Pol oscillator' and (b) the so-called 'Duffing oscillator'. It is then implemented or realized on top of a 'slave system' platform consisting of a CNN-processors platform. An offline bifurcation analysis is used to find out, a priori, the windows of parameter settings in which the coupled oscillator system exhibits the best and most appropriate behaviours of interest for an optimal resulting image processing quality. In the frame of the extensive bifurcation analysis carried out, analytical formulae have been derived, which are capable of determining the various…
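The 'master system' is built from coupled van der Pol and Duffing oscillators; as an isolated sketch, the van der Pol equation can be integrated numerically and its limit-cycle amplitude checked (parameters are illustrative; the paper couples it to a Duffing oscillator and maps the dynamics onto CNN processors):

```python
def van_der_pol(mu=1.0, dt=0.001, steps=60000):
    """Integrate x'' - mu (1 - x^2) x' + x = 0 with 4th-order Runge-Kutta
    and return the trajectory of x after the transient has decayed."""
    def f(x, y):                      # y = x'
        return y, mu * (1 - x * x) * y - x
    x, y, xs = 0.5, 0.0, []
    for n in range(steps):
        k1 = f(x, y)
        k2 = f(x + dt / 2 * k1[0], y + dt / 2 * k1[1])
        k3 = f(x + dt / 2 * k2[0], y + dt / 2 * k2[1])
        k4 = f(x + dt * k3[0], y + dt * k3[1])
        x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        if n > steps // 2:            # record only after transients decay
            xs.append(x)
    return xs

amplitude = max(abs(v) for v in van_der_pol())
```

For mu = 1 the trajectory settles onto a limit cycle of amplitude close to 2 regardless of the initial condition, the kind of robust, parameter-selectable behaviour the offline bifurcation analysis searches for.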

  2. Stability analysis of Markovian jumping stochastic Cohen-Grossberg neural networks with discrete and distributed time-varying delays

    NASA Astrophysics Data System (ADS)

    Syed Ali, M.

    2014-06-01

    In this paper, the global asymptotic stability problem of Markovian jumping stochastic Cohen-Grossberg neural networks with discrete and distributed time-varying delays (MJSCGNNs) is considered. A novel LMI-based stability criterion is obtained by constructing a new Lyapunov functional to guarantee the asymptotic stability of MJSCGNNs. Our results can be easily verified and they are also less restrictive than previously known criteria and can be applied to Cohen-Grossberg neural networks, recurrent neural networks, and cellular neural networks. Finally, the proposed stability conditions are demonstrated with numerical examples.

  3. Intrinsic adaptation in autonomous recurrent neural networks.

    PubMed

    Marković, Dimitrije; Gros, Claudius

    2012-02-01

    A massively recurrent neural network responds to input stimuli on one side and, on the other, is autonomously active in the absence of sensory inputs. Stimuli and information processing depend crucially on the quality of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flows, which are quite insensitive to external stimuli, interceded by chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics. PMID:22091667

  4. Unsupervised orthogonalization neural network for image compression

    NASA Astrophysics Data System (ADS)

    Liu, Lurng-Kuo; Ligomenides, Panos A.

    1992-11-01

    In this paper, we present an unsupervised orthogonalization neural network which, based on principal component (PC) analysis, acts as an orthonormal feature detector and decorrelation network. As in PC analysis, this network extracts the most heavily information-loaded features contained in the set of input training patterns. The network self-organizes its weight vectors so that they converge to a set of orthonormal weight vectors that span the eigenspace of the correlation matrix of the input patterns. Therefore, the network is applicable to practical image transmission problems for exploiting the natural redundancy that exists in most images and for preserving the quality of the compressed-decompressed image. We have applied the proposed neural model to the problem of image compression for visual communications. Simulation results have shown that the proposed neural model provides a high compression ratio and yields excellent perceptual visual quality of the reconstructed images with a small mean square error. Generalization performance and convergence speed are also investigated.
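A classical way to let a network self-organize toward a principal-component weight vector is Oja's rule, sketched below for the first component only (the paper's network extracts a full orthonormal set; the data and learning rate here are illustrative assumptions):

```python
import math
import random

random.seed(42)

# Synthetic 2-D data whose variance is concentrated along the first axis.
data = [(random.gauss(0, 1.0), random.gauss(0, 0.1)) for _ in range(2000)]

# Oja's rule: w += lr * y * (x - y * w), with output y = w·x.  The weight
# vector converges to a unit-norm principal eigenvector of the input
# correlation matrix.
w = [0.5, 0.5]
lr = 0.01
for _ in range(20):
    for x in data:
        y = w[0] * x[0] + w[1] * x[1]
        w = [w[0] + lr * y * (x[0] - y * w[0]),
             w[1] + lr * y * (x[1] - y * w[1])]

norm = math.hypot(w[0], w[1])
```

w ends up a near-unit vector along the direction of maximum variance; stacking several such units with deflation (e.g. Sanger's rule) yields the orthonormal set of weight vectors the abstract describes.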

  5. Back propagation neural networks for facial verification

    SciTech Connect

    Garnett, A.E.; Solheim, I.; Payne, T.; Castain, R.H.

    1992-10-01

    We conducted a test to determine the aptitude of neural networks to recognize human faces. The pictures we collected of 511 subjects captured both profiles and many natural expressions. Some of the subjects were wearing glasses, sunglasses, or hats in some of the pictures. The images were compressed by a factor of 100 and converted into image vectors of 1400 pixels. The image vectors were fed into a back propagation neural network with one hidden layer and one output node. The networks were trained to recognize one target person and to reject all other persons. Neural networks for 37 target subjects were trained with 8 different training sets that consisted of different subsets of the data. The networks were then tested on the rest of the data, which consisted of 7000 or more unseen pictures. Results indicate that a false acceptance rate of less than 1 percent and a false rejection rate of 2 percent can be obtained when certain restrictions are followed.

  6. Classifying multispectral data by neural networks

    NASA Technical Reports Server (NTRS)

    Telfer, Brian A.; Szu, Harold H.; Kiang, Richard K.

    1993-01-01

    Several energy functions for synthesizing neural networks are tested on 2-D synthetic data and on Landsat-4 Thematic Mapper data. These new energy functions, designed specifically for minimizing misclassification error, in some cases yield significant improvements in classification accuracy over the standard least mean squares energy function. In addition to operating on networks with one output unit per class, a new energy function is tested for binary encoded outputs, which result in smaller network sizes. The Thematic Mapper data (four bands were used) is classified on a single pixel basis, to provide a starting benchmark against which further improvements will be measured. Improvements are underway to make use of both subpixel and superpixel (i.e. contextual or neighborhood) information in the processing. For single pixel classification, the best neural network result is 78.7 percent, compared with 71.7 percent for a classical nearest neighbor classifier. The 78.7 percent result also improves on several earlier neural network results on this data.
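The classical nearest neighbor classifier used as the comparison baseline can be sketched directly (the 4-band pixel vectors and land-cover labels below are hypothetical):

```python
def nearest_neighbor(train, query):
    """Return the label of the training vector nearest to `query`
    (squared Euclidean distance over the spectral bands)."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best_vec, best_label = min(train, key=lambda item: d2(item[0], query))
    return best_label

# Hypothetical 4-band Thematic Mapper pixel vectors with land-cover labels.
train = [((10, 40, 30, 20), "water"),
         ((80, 90, 85, 70), "urban"),
         ((30, 60, 90, 40), "forest")]
label = nearest_neighbor(train, (28, 58, 88, 42))
```

Like the neural classifiers in the abstract, this baseline operates on a single pixel at a time and uses no contextual neighborhood information.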

  7. Color control of printers by neural networks

    NASA Astrophysics Data System (ADS)

    Tominaga, Shoji

    1998-07-01

    A method is proposed for solving the mapping problem from the 3D color space to the 4D CMYK space of printer ink signals by means of a neural network. The CIE-L*a*b* color system is used as the device-independent color space. The color reproduction problem is considered as the problem of controlling an unknown static system with four inputs and three outputs. A controller determines the CMYK signals necessary to produce the desired L*a*b* values with a given printer. Our solution method for this control problem is based on a two-phase procedure which eliminates the need for undercolor removal (UCR) and gray component replacement (GCR). The first phase determines a neural network as a model of the given printer, and the second phase determines the combined neural network system by combining the printer model and the controller in such a way that it represents an identity mapping in the L*a*b* color space. Then the network of the controller part realizes the mapping from the L*a*b* space to the CMYK space. Practical algorithms are presented in the form of multilayer feedforward networks. The feasibility of the proposed method is shown in experiments using a dye sublimation printer and an ink jet printer.

  8. A Topological Perspective of Neural Network Structure

    NASA Astrophysics Data System (ADS)

    Sizemore, Ann; Giusti, Chad; Cieslak, Matthew; Grafton, Scott; Bassett, Danielle

    The wiring patterns of white matter tracts between brain regions inform functional capabilities of the neural network. Indeed, densely connected and cyclically arranged cognitive systems may communicate and thus perform distinctly. However, previously employed graph theoretical statistics are local in nature and thus insensitive to such global structure. Here we present an investigation of the structural neural network in eight healthy individuals using persistent homology. An extension of homology to weighted networks, persistent homology records both circuits and cliques (all-to-all connected subgraphs) through a repetitive thresholding process, thus perceiving structural motifs. We report structural features found across patients and discuss brain regions responsible for these patterns, finally considering the implications of such motifs in relation to cognitive function.
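The repetitive thresholding at the heart of persistent homology can be sketched as a filtration of a weighted graph. Here only connected components are tracked, with a hypothetical 4-region connectome (full persistent homology also records the circuits and cliques that appear, such as the cycle closed by the weakest edge below):

```python
def components_at(threshold, n, weighted_edges):
    """Number of connected components of the graph that keeps edges with
    weight >= threshold: one thresholding step of the filtration."""
    parent = list(range(n))
    def find(a):                      # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j, w in weighted_edges:
        if w >= threshold:
            parent[find(i)] = find(j)
    return len({find(v) for v in range(n)})

# Hypothetical weighted connectome on 4 brain regions.
edges = [(0, 1, 0.9), (1, 2, 0.6), (2, 3, 0.8), (3, 0, 0.3)]
profile = [components_at(t, 4, edges) for t in (1.0, 0.85, 0.7, 0.5, 0.2)]
```

Sweeping the threshold from strong to weak edges, components are born and merge; persistent homology summarizes how long each such feature (and each loop) survives across the sweep, which is the global structure local graph statistics miss.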

  9. a Heterosynaptic Learning Rule for Neural Networks

    NASA Astrophysics Data System (ADS)

    Emmert-Streib, Frank

    In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points at which a synapse is modified. Moreover, the learning rule does not only affect the synapse between the pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also affects further remote synapses of the pre- and postsynaptic neuron. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful in training neural networks by learning parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well even in the presence of noise. Importantly, the mean learning time increases only polynomially with the number of patterns to be learned, indicating efficient learning.

  10. Controlling neural network responsiveness: tradeoffs and constraints.

    PubMed

    Keren, Hanna; Marom, Shimon

    2014-01-01

    In recent years much effort has been invested in means to control neural population responses at the whole-brain level, within the context of developing advanced medical applications. The tradeoffs and constraints involved, however, remain elusive due to obvious complications entailed by studying whole-brain dynamics. Here, we present effective control of response features (probability and latency) of cortical networks in vitro over many hours, and offer this approach as an experimental toy for studying controllability of neural networks in the wider context. Exercising this approach, we show that enforcement of stable high activity rates by means of closed-loop control may enhance alteration of underlying global input-output relations and activity-dependent dispersion of neuronal pair-wise correlations across the network. PMID:24808860

  11. Neural networks: Application to medical imaging

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.

    1994-01-01

    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  12. Supervised learning in multilayer spiking neural networks.

    PubMed

    Sporea, Ioana; Grüning, André

    2013-02-01

    We introduce a supervised learning algorithm for multilayer spiking neural networks. The algorithm overcomes a limitation of existing learning algorithms: it can be applied to neurons firing multiple spikes in artificial neural networks with hidden layers. It can also, in principle, be used with any linearizable neuron model and allows different coding schemes of spike train patterns. The algorithm is applied successfully to classic linearly nonseparable benchmarks such as the XOR problem and the Iris data set, as well as to more complex classification and mapping problems. The algorithm has been successfully tested in the presence of noise, requires smaller networks than reservoir computing, and results in faster convergence than existing algorithms for similar tasks such as SpikeProp.

  13. Fuzzy logic and neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Lea, Robert N.; Savely, Robert T.

    1992-01-01

    Applications of fuzzy logic technologies in NASA projects are reviewed to examine their advantages in the development of neural networks for aerospace and commercial expert systems and control. Examples of fuzzy-logic applications include a 6-DOF spacecraft controller, collision-avoidance systems, and reinforcement-learning techniques. The commercial applications examined include a fuzzy autofocusing system, an air conditioning system, and an automobile transmission application. The practical use of fuzzy logic is set in the theoretical context of artificial neural systems (ANSs) to give the background for an overview of ANS research programs at NASA. The research and application programs include the Network Execution and Training Simulator and faster training algorithms such as the Difference Optimized Training Scheme. The networks are well suited for pattern-recognition applications such as predicting sunspots, controlling posture maintenance, and conducting adaptive diagnoses.

  14. Combinative neural network and its applications.

    PubMed

    Chen, Yaqiu; Hu, Shangxu; Chen, Dezhao

    2003-07-01

    A new approach named combinative neural network (CN), which uses partial least squares (PLS) analysis to modify the hidden layer in multi-layered feed forward (MLFF) neural networks (NN), was proposed in this paper. The significant contributions of PLS in the proposed CN are to reorganize the outputs of the hidden nodes so that correlation among variables is circumvented, to make the network best capture the non-linear relationship between the input and output data of the NN, and at the same time to eliminate the risk of over-fitting. The performance of the proposed approach was demonstrated through two examples: a well-defined nonlinear approximation problem, and a practical nonlinear pattern classification problem with an unknown relationship between the input and output data. The results were compared with those of conventional MLFF NNs. Good performance and time-saving implementation make the proposed method an attractive approach for non-linear mapping and classification. PMID:12927103

  15. Computationally Efficient Neural Network Intrusion Security Awareness

    SciTech Connect

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly based systems. Several test cases executed on the ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced from 37 seconds to less than 1 second.

  16. Neural network construction via back-propagation

    SciTech Connect

    Burwick, T.T.

    1994-06-01

    A method is presented that combines back-propagation with multi-layer neural network construction. Back-propagation is used not only to adjust the weights but also the signal functions. Going from one network to an equivalent one that has additional linear units, the non-linearity of these units and thus their effective presence is then introduced via back-propagation (weight-splitting). The back-propagated error causes the network to include new units in order to minimize the error function. We also show how this formalism allows the network to escape local minima.

  17. Multiscale Modeling of Cortical Neural Networks

    NASA Astrophysics Data System (ADS)

    Torben-Nielsen, Benjamin; Stiefel, Klaus M.

    2009-09-01

    In this study, we describe efforts at modeling the electrophysiological dynamics of cortical networks in a multi-scale manner. Specifically, we describe the implementation of a network model composed of simple single-compartmental neuron models, in which a single complex multi-compartmental model of a pyramidal neuron is embedded. The network is capable of generating Δ (2 Hz, observed during deep sleep states) and γ (40 Hz, observed during wakefulness) oscillations, which are then imposed onto the multi-compartmental model, thus providing realistic, dynamic boundary conditions. We furthermore discuss the challenges and opportunities involved in multi-scale modeling of neural function.

  18. The importance of artificial neural networks in biomedicine

    SciTech Connect

    Burke, H.B.

    1995-12-31

    The future explanatory power in biomedicine will be at the molecular-genetic level of analysis (rather than the epidemiologic-demographic or anatomic-cellular levels). This is the level of complex systems. Complex systems are characterized by nonlinearity and complex interactions. It is difficult for traditional statistical methods to capture complex systems because traditional methods attempt to find the model that best fits the statistician's understanding of the phenomenon; complex systems are difficult to understand and therefore difficult to fit with a simple model. Artificial neural networks are nonparametric regression models. They can capture any phenomena, to any degree of accuracy (depending on the adequacy of the data and the power of the predictors), without prior knowledge of the phenomena. Further, artificial neural networks can be represented, not only as formulae, but also as graphical models. Graphical models can increase analytic power and flexibility. Artificial neural networks are a powerful method for capturing complex phenomena, but their use requires a paradigm shift, from exploratory analysis of the data to exploratory analysis of the model.

  19. Do neural networks offer something for you?

    SciTech Connect

    Ramchandran, S.; Rhinehart, R.R.

    1995-11-01

    The concept of neural network computation was inspired by the hope to artificially reproduce some of the flexibility and power of the human brain. Human beings can recognize different patterns and voices even though these signals do not have a simple phenomenological understanding. Scientists have developed artificial neural networks (ANNs) for modeling processes that do not have a simple phenomenological explanation, such as voice recognition. Consequently, ANN jargon can be confusing to process and control engineers. In simple terms, ANNs take a nonlinear regression modeling approach. Like any regression curve-fitting approach, a least-squares optimization can generate model parameters. One advantage of ANNs is that they require neither a priori understanding of the process behavior nor phenomenological understanding of the process. ANNs use data describing the input/output relationship in a process to "learn" about the underlying process behavior. As a result, ANNs have a wide range of applicability. Furthermore, ANNs are computationally efficient and can replace models that are computationally intensive, which can make real-time online model-based applications practicable. A neural network is a dense mesh of nodes and connections. The basic processing elements of a network are called neurons. Neural networks are organized in layers, and typically consist of at least three layers: an input layer, one or more hidden layers, and an output layer. The input and output layers serve as interfaces that perform appropriate scaling between real-world and network data. Hidden layers are so termed because their neurons are hidden from the real-world data. Connections are the means for information flow. Each connection has an associated adjustable weight, w_i. The weight can be regarded as a measure of the importance of the signals between the two neurons.
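
    The layered structure just described can be made concrete in a few lines. This is a generic forward pass through one hidden layer, not tied to any particular application; the network size and the weights are placeholders.

```python
import math

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass through a net with one hidden layer: each hidden neuron
    sums its weighted inputs (the adjustable weights w_i), applies a sigmoid,
    and the output neuron sums the weighted hidden activations."""
    sigmoid = lambda s: 1.0 / (1.0 + math.exp(-s))
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# a 2-2-1 network with placeholder weights
y = forward([0.0, 0.0], w_hidden=[[1.0, -1.0], [0.5, 0.5]],
            b_hidden=[0.0, 0.0], w_out=[1.0, 1.0], b_out=0.0)
```

    Training (the least-squares optimization mentioned above) would adjust w_hidden, b_hidden, w_out, and b_out to fit input/output data; only the forward computation is shown here.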

  20. Adaptive Neural Networks for Automatic Negotiation

    SciTech Connect

    Sakas, D. P.; Vlachos, D. S.; Simos, T. E.

    2007-12-26

    The use of fuzzy logic and fuzzy neural networks has been found effective for modelling the uncertain relations between the parameters of a negotiation procedure. The problem with these configurations is that they are static; that is, any new knowledge from theory or experiment leads to the construction of an entirely new model. To overcome this difficulty, in this work we apply an adaptive neural topology to model the negotiation process. Finally, a simple simulation is carried out in order to test the new method.

  1. Cancer classification based on gene expression using neural networks.

    PubMed

    Hu, H P; Niu, Z J; Bai, Y P; Tan, X H

    2015-12-21

    Based on gene expression, we have classified 53 colon cancer patients with UICC II into two groups: relapse and no relapse. Samples were taken from each patient, and gene information was extracted. From the 53 samples examined, 500 genes were selected as informative through analyses by S-Kohonen, BP, and SVM neural networks. The classification accuracy obtained by the S-Kohonen neural network reaches 91%, more accurate than classification by the BP and SVM neural networks. The results show that the S-Kohonen neural network is more plausible for classification and has a certain feasibility and validity as compared with the BP and SVM neural networks.

  2. Pruning Neural Networks with Distribution Estimation Algorithms

    SciTech Connect

    Cantu-Paz, E

    2003-01-15

    This paper describes the application of four evolutionary algorithms to the pruning of neural networks used in classification problems. Besides a simple genetic algorithm (GA), the paper considers three distribution estimation algorithms (DEAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine whether the DEAs present advantages over the simple GA in terms of accuracy or speed on this problem. The experiments used a feed forward neural network trained with standard back propagation and public-domain and artificial data sets. The pruned networks had accuracy better than or equal to that of the original fully-connected networks; only in a few cases did pruning result in less accurate networks. We found few differences in the accuracy of the networks pruned by the four EAs, but important differences in execution time. The results suggest that a simple GA with a small population might be the best algorithm for pruning networks on the data sets we tested.

  3. Computational capabilities of recurrent NARX neural networks.

    PubMed

    Siegelmann, H T; Horne, B G; Giles, C L

    1997-01-01

    Recently, fully connected recurrent neural networks have been proven to be computationally rich, at least as powerful as Turing machines. This work focuses on another network which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have a limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by y(t)=Psi(u(t-n(u)), ..., u(t-1), u(t), y(t-n(y)), ..., y(t-1)) where u(t) and y(t) represent the input and output of the network at time t, n(u) and n(y) are the input and output orders, and the function Psi is the mapping performed by a Multilayer Perceptron. We constructively prove that NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks and thus Turing machines. We conclude that in theory one can use NARX models rather than conventional recurrent networks, without any computational loss, even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent and what restrictions on feedback limit computational power. PMID:18255858
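
    The defining recursion can be written out directly. The sketch below substitutes a fixed tanh unit for the multilayer perceptron Psi (a real NARX network learns Psi); the tap weights are arbitrary placeholders, and inputs before the start of the sequence are zero-padded.

```python
import math

def narx_run(u, n_u, n_y, psi):
    """Simulate y(t) = psi(u(t-n_u), ..., u(t), y(t-n_y), ..., y(t-1)),
    with zero padding for time indices before the start of the sequence."""
    y = []
    for t in range(len(u)):
        u_taps = [(u[t - k] if t - k >= 0 else 0.0) for k in range(n_u, -1, -1)]
        y_taps = [(y[t - k] if t - k >= 0 else 0.0) for k in range(n_y, 0, -1)]
        y.append(psi(u_taps, y_taps))
    return y

# stand-in for the MLP Psi: a fixed linear map squashed by tanh (weights made up)
psi = lambda u_taps, y_taps: math.tanh(0.5 * sum(u_taps) + 0.25 * sum(y_taps))
out = narx_run([1.0, 0.0, 0.0, 0.0], n_u=1, n_y=2, psi=psi)
```

    The feedback taps y(t-n_y), ..., y(t-1) are the network's only recurrence, coming solely from the output as the abstract emphasizes.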

  4. A Projection Neural Network for Constrained Quadratic Minimax Optimization.

    PubMed

    Liu, Qingshan; Wang, Jun

    2015-11-01

    This paper presents a projection neural network described by a dynamic system for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for global convergence of the proposed neural network. Compared with some of the existing neural networks for quadratic minimax optimization, the proposed neural network is capable of solving more general constrained quadratic minimax optimization problems, and its design does not include any parameter. Moreover, the neural network has lower model complexity: its number of state variables is equal to the dimension of the optimization problem. The simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.

  5. Diagnostic ECG classification based on neural networks.

    PubMed

    Bortolan, G; Willems, J L

    1993-01-01

    This study illustrates the use of the neural network approach in the problem of diagnostic classification of resting 12-lead electrocardiograms. A large electrocardiographic library (the CORDA database established at the University of Leuven, Belgium) has been utilized in this study, whose classification is validated by electrocardiographic-independent clinical data. In particular, a subset of 3,253 electrocardiographic signals with single diseases has been selected. Seven diagnostic classes have been considered: normal, left, right, and biventricular hypertrophy, and anterior, inferior, and combined myocardial infarction. The basic architecture used is a feed-forward neural network with the backpropagation algorithm for the training phase. Sensitivity, specificity, total accuracy, and partial accuracy are the indices used for testing and comparing the results with classical methodologies. In order to validate this approach, the accuracy of two statistical models (linear discriminant analysis and logistic discriminant analysis) tuned on the same dataset has been taken as the reference point. Several nets have been trained, either adjusting some components of the architecture of the networks, considering subsets and clusters of the original learning set, or combining different neural networks. The results have confirmed the potentiality and good performance of the connectionist approach when compared with classical methodologies.

  6. Applications of neural networks in training science.

    PubMed

    Pfeiffer, Mark; Hohmann, Andreas

    2012-04-01

    Training science views itself as an integrated and applied science, developing practical measures founded on scientific method. Therefore, it demands consideration of a wide spectrum of approaches and methods. Especially in the field of competitive sports, research questions are usually located in complex environments, so that mainly field studies are drawn upon to obtain broad external validity. Here, the interrelations between different variables or variable sets are mostly of a nonlinear character. In these cases, methods like neural networks, e.g., the pattern recognizing methods of Self-Organizing Kohonen Feature Maps or similar instruments to identify interactions, might be successfully applied to analyze data. Following on from a classification of data analysis methods in training-science research, the aim of the contribution is to give examples of varied sports in which network approaches can be effectively used in training science. First, two examples are given in which neural networks are employed for pattern recognition. While one investigation deals with the detection of sporting talent in swimming, the other is located in game sports research, identifying tactical patterns in team handball. The third and last example shows how an artificial neural network can be used to predict competitive performance in swimming.
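
    As an illustration of the Kohonen feature-map idea mentioned above, here is a minimal one-dimensional self-organizing map applied to made-up performance scores; the data, map size, neighborhood, and learning schedule are all hypothetical simplifications of a real SOM.

```python
def som_train(data, n_units, epochs=20, lr=0.5):
    """Tiny 1-D self-organizing map: find the best-matching unit (BMU) for
    each sample and pull it, plus its immediate neighbors, toward the sample.
    The learning rate decays linearly over the epochs."""
    units = [i / (n_units - 1) for i in range(n_units)]   # initial codebook
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(units[i] - x))
            for i in (bmu - 1, bmu, bmu + 1):              # neighborhood of 1
                if 0 <= i < n_units:
                    units[i] += rate * (x - units[i])
    return units

# two clusters of 1-D 'performance scores' (invented data)
units = som_train([0.1, 0.12, 0.9, 0.88], n_units=4)
```

    After training, the codebook units settle near the two clusters, which is the pattern-recognition behavior exploited in the talent-detection and tactics studies.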

  7. Functional expansion representations of artificial neural networks

    NASA Technical Reports Server (NTRS)

    Gray, W. Steven

    1992-01-01

    In the past few years, significant interest has developed in using artificial neural networks to model and control nonlinear dynamical systems. While there exist many proposed schemes for accomplishing this and a wealth of supporting empirical results, most approaches to date tend to be ad hoc in nature and rely mainly on heuristic justifications. The purpose of this project was to further develop some analytical tools for representing nonlinear discrete-time input-output systems, which when applied to neural networks would give insight into architecture selection, pruning strategies, and learning algorithms. A long-term goal is to determine in what sense, if any, a neural network can be used as a universal approximator for nonlinear input-output maps with memory (i.e., realized by a dynamical system). This property is well known for the case of static or memoryless input-output maps. The general architecture under consideration in this project was a single-input, single-output recurrent feedforward network.

  8. Convolutional Neural Network Based dem Super Resolution

    NASA Astrophysics Data System (ADS)

    Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang

    2016-06-01

    DEM super resolution, proposed in our previous publication, improves the resolution of a DEM on the basis of learning examples. There, a nonlocal algorithm was introduced to deal with the problem, and many experiments showed that the strategy is feasible. In that work, the learning examples were defined as parts of the original DEM together with their corresponding high-resolution measurements, since this choice avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this strategy, the learning examples should be diverse and easy to obtain; yet this may cause problems of incompatibility and a lack of robustness. To overcome this, we investigate a convolutional neural network based method. The input of the convolutional neural network is a low resolution DEM and the output is expected to be its high resolution counterpart. A three-layer model is adopted: the first layer detects features from the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. Given this structure, learning DEMs are used to train the network; specifically, the network is optimized by minimizing the error between its output and the expected high resolution DEM. In practical applications, a test DEM is input to the convolutional neural network and a super resolution result is obtained. Many experiments show that the CNN based method obtains better reconstructions than many classic interpolation methods.
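
    The three-stage design just outlined can be sketched in a few lines of plain convolution code. The kernels below are fixed placeholders (a learned network would fit them to the training DEMs by minimizing the reconstruction error), and the tiny 6x6 ramp stands in for a real DEM tile.

```python
def conv2d(image, kernel):
    """'Valid' 2-D convolution of a list-of-lists image with a rectangular kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(kernel[i][j] * image[r + i][c + j]
                 for i in range(kh) for j in range(kw))
             for c in range(out_w)] for r in range(out_h)]

relu = lambda img: [[max(0.0, v) for v in row] for row in img]

# three stages as in the described design (kernels illustrative, not learned):
# 1) feature detection, 2) 1x1 compression, 3) reconstruction of the new DEM
dem = [[float(r + c) for c in range(6)] for r in range(6)]
feat = relu(conv2d(dem, [[0, 1, 0], [1, -4, 1], [0, 1, 0]]))   # Laplacian-like
comp = relu(conv2d(feat, [[0.5]]))                              # 1x1 mixing
sr   = conv2d(comp, [[0.25, 0.25], [0.25, 0.25]])               # local smoothing
```

    On this perfectly planar ramp the Laplacian-like feature layer responds with zeros everywhere, so the final stage also outputs zeros; real terrain would produce non-trivial feature maps at each stage.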

  9. Correcting wave predictions with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Makarynskyy, O.; Makarynska, D.

    2003-04-01

    The predictions of wind waves with different lead times are necessary in a large scope of coastal and open ocean activities. Numerical wave models, which usually provide this information, are based on deterministic equations that do not entirely account for the complexity and uncertainty of the wave generation and dissipation processes. An attempt to improve short-term forecasts of wave parameters based on artificial neural networks is reported. In recent years, artificial neural networks have been used in a number of coastal engineering applications due to their ability to approximate nonlinear mathematical behavior without a priori knowledge of the interrelations among the elements within a system. Common multilayer feed-forward networks, with nonlinear transfer functions in the hidden layers, were developed and employed to forecast the wave characteristics over one-hour intervals from one up to 24 hours ahead, and to correct these predictions. Three non-overlapping data sets of wave characteristics, all from a buoy moored roughly 60 miles west of the Aran Islands, off the west coast of Ireland, were used to train and validate the neural nets involved. The networks were trained with the error back-propagation algorithm. Time series plots and scatterplots of the wave characteristics, as well as tables with statistics, show an improvement of the results achieved due to the correction procedure employed.

  10. Neural network for tsunami and runup forecast

    NASA Astrophysics Data System (ADS)

    Namekar, Shailesh; Yamazaki, Yoshiki; Cheung, Kwok Fai

    2009-04-01

    This paper examines the use of a neural network to model nonlinear tsunami processes for forecasting of coastal waveforms and runup. The three-layer network utilizes a radial basis function in the hidden, middle layer for nonlinear transformation of input waveforms near the tsunami source. Events based on the 2006 Kuril Islands tsunami demonstrate the implementation and capability of the network. Division of the Kamchatka-Kuril subduction zone into a number of subfaults facilitates development of a representative tsunami dataset using a nonlinear long-wave model. The computed waveforms near the tsunami source serve as the input, and the far-field waveforms and runup provide the target output for training of the network through a back-propagation algorithm. The trained network reproduces the resonance of tsunami waves and the topography-dominated runup patterns at Hawaii's coastlines from input water-level data off the Aleutian Islands.
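
    The radial-basis hidden layer described above amounts to a short computation: each hidden unit responds to the distance between the input vector and its center, and the output layer takes a weighted sum. The centers, width, and output weights below are invented placeholders, not values from the trained forecast network.

```python
import math

def rbf_forward(x, centers, sigma, out_weights):
    """Three-layer net with a radial-basis hidden layer: hidden unit k computes
    exp(-||x - c_k||^2 / (2*sigma^2)); the output is a weighted sum of these."""
    hidden = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                       / (2 * sigma ** 2))
              for c in centers]
    return sum(w * h for w, h in zip(out_weights, hidden))

# input: water-level samples near the source (all numbers illustrative)
y = rbf_forward([0.2, 0.1], centers=[[0.2, 0.1], [1.0, 1.0]],
                sigma=0.5, out_weights=[1.0, 2.0])
```

    Training by back-propagation, as in the paper, would fit the centers, widths, and output weights so that near-source waveforms map to far-field waveforms and runup.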

  11. Character Recognition Using Genetically Trained Neural Networks

    SciTech Connect

    Diniz, C.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-10-01

    Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three-layer feed-forward architecture. To facilitate the input of characters and symbols, a graphical user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases when genetically training the net.
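
    A compressed sketch of the GA-trains-the-weights idea, far smaller than the 8x8-bitmap, five-output design described above: the 2x2 patterns, the fitness function, and the GA operators (elitism plus Gaussian mutation of the best few chromosomes) are our simplifications.

```python
import math
import random

def net(ws, x):
    """4-2-1 feed-forward net; ws is a flat 13-weight chromosome."""
    sig = lambda s: 1 / (1 + math.exp(-s))
    h = [sig(sum(ws[4 * j + i] * x[i] for i in range(4)) + ws[8 + j])
         for j in range(2)]
    return sig(ws[10] * h[0] + ws[11] * h[1] + ws[12])

def ga_train(patterns, gens=60, pop_size=20, seed=2):
    """Genetic training: fitness is negative squared error over the patterns;
    the best chromosome survives unchanged (elitism), the rest of the
    population are mutated copies of the top five."""
    rng = random.Random(seed)
    err = lambda ws: sum((net(ws, x) - t) ** 2 for x, t in patterns)
    pop = [[rng.uniform(-1, 1) for _ in range(13)] for _ in range(pop_size)]
    history = []
    for _ in range(gens):
        pop.sort(key=err)
        history.append(err(pop[0]))
        pop = [pop[0]] + [[w + rng.gauss(0, 0.3) for w in rng.choice(pop[:5])]
                          for _ in range(pop_size - 1)]
    pop.sort(key=err)
    return pop[0], history

# two 2x2 'bitmaps': a vertical bar (class 1) vs. a horizontal bar (class 0)
patterns = [([1, 0, 1, 0], 1.0), ([1, 1, 0, 0], 0.0)]
best, history = ga_train(patterns)
```

    Because the elite chromosome is carried over each generation, the best error in history can never increase, mirroring the monotone improvement a GA provides during the training period.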

  12. On lateral competition in dynamic neural networks

    SciTech Connect

    Bellyustin, N.S.

    1995-02-01

    Artificial neural networks connected homogeneously, which use retinal image processing methods, are considered. We point out that there are probably two different types of lateral inhibition of each neural element by its neighbors: one due to negative connection coefficients between elements, and one due to the decreasing response of a neuron to a too-high input signal. The first case is characterized by stable dynamics, governed by a Lyapunov function, while in the second case stability is absent and two-dimensional dynamic chaos occurs if the time step in the integration of the model equations is large enough. The continuous neural medium approximation is used for analytical estimation in both cases. The result is a partition of the parameter space into domains with qualitatively different dynamic modes. Computer simulations confirm the estimates and show that joining two-dimensional chaos with symmetries provided by the initial and boundary conditions may produce patterns which are genuine pieces of art.

  13. Neural networks as a control methodology

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1990-01-01

    While conventional computers must be programmed in a logical fashion by a person who thoroughly understands the task to be performed, the motivation behind neural networks is to develop machines which can train themselves to perform tasks, using available information about desired system behavior and learning from experience. There are three goals of this fellowship program: (1) to evaluate various neural net methods and generate computer software to implement those deemed most promising on a personal computer equipped with Matlab; (2) to evaluate methods currently in the professional literature for system control using neural nets to choose those most applicable to control of flexible structures; and (3) to apply the control strategies chosen in (2) to a computer simulation of a test article, the Control Structures Interaction Suitcase Demonstrator, which is a portable system consisting of a small flexible beam driven by a torque motor and mounted on springs tuned to the first flexible mode of the beam. Results of each are discussed.

  14. Artificial neural networks for decision support in clinical medicine.

    PubMed

    Forsström, J J; Dalton, K J

    1995-10-01

    Connectionist models such as neural networks are alternatives to linear, parametric statistical methods. Neural networks are computer-based pattern recognition methods with loose similarities to the nervous system. Individual variables of the network, usually called 'neurones', can receive inhibitory and excitatory inputs from other neurones. The networks can define relationships among input data that are not apparent when using other approaches, and they can use these relationships to improve accuracy. Thus, neural nets have substantial power to recognize patterns even in complex datasets. Neural network methodology has outperformed classical statistical methods in cases where the input variables are interrelated. Because clinical measurements usually derive from multiple interrelated systems, it is evident that neural networks might be more accurate than classical methods in multivariate analysis of clinical data. This paper reviews the use of neural networks in medical decision support. A short introduction to the basics of neural networks is given, and some practical issues in applying the networks are highlighted. The current use of neural networks in image analysis, signal processing and laboratory medicine is reviewed. It is concluded that neural networks have an important role in image analysis and in signal processing. However, further studies are needed to determine the value of neural networks in the analysis of laboratory data.

  15. Analysis of Stochastic Response of Neural Networks with Stochastic Input

    1996-10-10

    Software permits the user to extend the capability of a neural network to include probabilistic characteristics of input parameters. The user inputs the topology and weights associated with the neural network, along with the distributional characteristics of the input parameters. The network response is provided via a cumulative distribution function of the network response variable.
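
    The record describes propagating input-parameter distributions through a fixed network to obtain a distribution over the response. A minimal Monte Carlo sketch of that idea, assuming a tanh single-hidden-layer network and Gaussian inputs (all function names and the network form are illustrative, not the software's actual method):

```python
import math
import random

def mlp(x, w_hidden, w_out):
    """Forward pass of a tiny fixed-weight one-hidden-layer network (tanh units)."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w, x))) for w in w_hidden]
    return sum(wo * h for wo, h in zip(w_out, hidden))

def response_cdf(w_hidden, w_out, input_dists, n=5000, seed=0):
    """Sample inputs from their (mean, sigma) Gaussian distributions, push them
    through the network, and return the sorted responses, which form the
    support of the empirical CDF."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        x = [rng.gauss(mu, sigma) for mu, sigma in input_dists]
        samples.append(mlp(x, w_hidden, w_out))
    samples.sort()
    return samples

def cdf_at(samples, y):
    """Empirical P(response <= y)."""
    return sum(1 for s in samples if s <= y) / len(samples)
```

    Given the sorted samples, `cdf_at` can be queried at any threshold to read off the probability that the network response stays below it.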

  16. Neural networks for aerosol particles characterization

    NASA Astrophysics Data System (ADS)

    Berdnik, V. V.; Loiko, V. A.

    2016-11-01

    Multilayer perceptron neural networks with one, two and three inputs are built to retrieve the parameters of a spherical, homogeneous, nonabsorbing particle. The refractive index ranges from 1.3 to 1.7; the particle radius ranges from 0.251 μm to 56.234 μm. The logarithms of the scattered radiation intensity are used as input signals. The problem of selecting the most informative scattering angles is elucidated. It is shown that polychromatic illumination significantly increases the retrieval accuracy. In the absence of measurement errors, the relative error of radius retrieval by the neural network with three inputs is 0.54%, and the relative error of refractive index retrieval is 0.84%. The effect of measurement errors on the retrieval results is simulated.

  17. Application of neural networks in space construction

    NASA Technical Reports Server (NTRS)

    Thilenius, Stephen C.; Barnes, Frank

    1990-01-01

    When trying to decide which tasks should be done by robots and which should be done by humans with respect to space construction, there has been one decisive barrier which ultimately divides the tasks: can a computer do the job? Von Neumann type computers have great difficulty with problems that the human brain seems to do instantaneously and with little effort. Some of these problems are pattern recognition, speech recognition, content addressable memories, and command interpretation. In an attempt to simulate these talents of the human brain, much research is currently being done into the operation and construction of artificial neural networks. The efficiency of the interface between man and machine, robots in particular, can therefore be greatly improved with the use of neural networks. For example, wouldn't it be easier to command a robot to 'fetch an object' rather than having to remotely control the entire operation?

  18. Convex quadratic optimization on artificial neural networks

    SciTech Connect

    Adler, I.; Verma, S.

    1994-12-31

    We present continuous-valued Hopfield recurrent neural networks on which we map convex quadratic optimization problems. We consider two different convex quadratic programs, each of which is mapped to a different neural network. Activation functions are shown to play a key role in the mapping under each model. The class of activation functions which can be used in this mapping is characterized in terms of the properties needed. It is shown that the first derivatives of penalty as well as barrier functions belong to this class. The trajectories of dynamics under the first model are shown to be closely related to affine-scaling trajectories of interior-point methods. On the other hand, the trajectories of dynamics under the second model correspond to projected steepest descent pathways.

  19. Adaptive computation algorithm for RBF neural network.

    PubMed

    Han, Hong-Gui; Qiao, Jun-Fei

    2012-02-01

    A novel learning algorithm is proposed for nonlinear modelling and identification using radial basis function neural networks. The proposed method simplifies neural network training through the use of an adaptive computation algorithm (ACA), whose convergence is analyzed by the Lyapunov criterion. The proposed algorithm offers two important advantages. First, model performance can be significantly improved through the ACA, and the modelling error is uniformly ultimately bounded. Second, the ACA can reduce computational cost and accelerate training. The proposed method is then employed to model a classical nonlinear system with a limit cycle and to identify a nonlinear dynamic system. Computational complexity analysis and simulation results demonstrate the effectiveness of the proposed algorithm.
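
    The ACA itself is not detailed in this record; as context, the radial basis function network being trained can be sketched as follows, with only the linear output weights fit by a simple LMS rule (the centers, widths, and all names are illustrative stand-ins, not the paper's algorithm):

```python
import math

def rbf_forward(x, centers, widths, weights):
    """Output of an RBF network: a weighted sum of Gaussian basis responses.
    Returns the output and the basis activations."""
    phis = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * s ** 2))
            for c, s in zip(centers, widths)]
    return sum(w * p for w, p in zip(weights, phis)), phis

def train_output_weights(data, centers, widths, lr=0.1, epochs=500):
    """Fit only the linear output weights by per-sample gradient descent (LMS).
    Adapting the centers and widths, the job of the paper's ACA, is omitted."""
    weights = [0.0] * len(centers)
    for _ in range(epochs):
        for x, y in data:
            yhat, phis = rbf_forward(x, centers, widths, weights)
            err = y - yhat
            weights = [w + lr * err * p for w, p in zip(weights, phis)]
    return weights
```

    Because the output layer is linear in the weights, this stage of RBF training is a simple least-squares problem, which is what makes RBF networks attractive for fast identification.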

  20. Short term energy forecasting with neural networks

    SciTech Connect

    McMenamin, J.S.; Monforte, F.A.

    1998-01-01

    Artificial neural networks are beginning to be used by electric utilities to forecast hourly system loads on a day-ahead basis. This paper discusses the neural network specification in terms of conventional econometric language, providing parallel concepts for terms such as training, learning, and nodes in the hidden layer. It is shown that these models are flexible nonlinear equations that can be estimated using nonlinear least squares. It is argued that these models are especially well suited to hourly load forecasting, reflecting the presence of important nonlinearities and variable interactions. The paper proceeds to show how conventional statistics, such as the BIC and MAPE statistics can be used to select the number of nodes in the hidden layer. It is concluded that these models provide a powerful, robust and sensible approach to hourly load forecasting that will provide modest improvements in forecast accuracy relative to well-specified regression models.
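
    The model-selection statistics the paper relies on (MAPE and BIC) are standard and easy to compute. A sketch, with the BIC written in its least-squares form (the exact variant the authors use is not specified in this record):

```python
import math

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def bic(actual, forecast, n_params):
    """BIC for a least-squares fit: n*ln(SSE/n) + k*ln(n).
    Lower is better, so each extra hidden node must reduce the fit error
    enough to offset the k*ln(n) complexity penalty."""
    n = len(actual)
    sse = sum((a - f) ** 2 for a, f in zip(actual, forecast))
    return n * math.log(sse / n) + n_params * math.log(n)
```

    In the procedure the paper describes, candidate networks with different hidden-layer sizes would each be estimated by nonlinear least squares and the node count with the lowest BIC retained.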

  1. Automatic breast density classification using neural network

    NASA Astrophysics Data System (ADS)

    Arefan, D.; Talebpour, A.; Ahmadinejhad, N.; Kamali Asl, A.

    2015-12-01

    According to studies, the risk of breast cancer is directly associated with breast density, and much research has been done on automatic diagnosis of breast density from mammography. In the current study, artifacts in mammograms are removed using image processing techniques; the pectoral muscle is detected with high accuracy by locating points on its edge and estimating the edge with regression techniques; and the breast tissue is then extracted fully automatically. To classify mammography images into three categories (fatty, glandular, dense), a feature based on the difference in gray levels between hard and soft tissue is used in addition to statistical features, with a neural network classifier having a hidden layer. The image database used in this research is the mini-MIAS database, and the maximum classification accuracy of the system is reported as 97.66% with 8 neurons in the hidden layer.

  2. Multiresolution dynamic predictor based on neural networks

    NASA Astrophysics Data System (ADS)

    Tsui, Fu-Chiang; Li, Ching-Chung; Sun, Mingui; Sclabassi, Robert J.

    1996-03-01

    We present a multiresolution dynamic predictor (MDP) based on neural networks for multi-step prediction of a time series. The MDP utilizes the discrete biorthogonal wavelet transform to compute wavelet coefficients at several scale levels and recurrent neural networks (RNNs) to form a set of dynamic nonlinear models for prediction of the time series. By employing RNNs in wavelet coefficient space, the MDP is capable of predicting a time series for both the long-term (with coarse resolution) and short-term (with fine resolution). Experimental results have demonstrated the effectiveness of the MDP for multi-step prediction of intracranial pressure (ICP) recorded from head-trauma patients. This approach has applicability to quasi-stationary signals and is suitable for on-line computation.
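
    The structure described (decompose the series into wavelet-coefficient streams, predict each stream at its own scale, then reconstruct) can be sketched with a one-level Haar transform. The paper itself uses biorthogonal wavelets and a recurrent network per scale; a trivial last-value hold stands in for the RNNs here, purely to show the pipeline:

```python
def haar_step(signal):
    """One level of the Haar transform: pairwise averages (approximation)
    and half-differences (detail)."""
    approx = [(signal[2*i] + signal[2*i+1]) / 2 for i in range(len(signal) // 2)]
    detail = [(signal[2*i] - signal[2*i+1]) / 2 for i in range(len(signal) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_step."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def predict_next_pair(signal):
    """Predict the next two samples: decompose, extend the coarse and fine
    coefficient streams separately (last-value hold standing in for the
    per-scale RNNs), then reconstruct and keep the new samples."""
    approx, detail = haar_step(signal)
    approx.append(approx[-1])   # coarse-scale forecast
    detail.append(detail[-1])   # fine-scale forecast
    return haar_inverse(approx, detail)[len(signal):]
```

    The point of the decomposition is that each predictor only has to model its own resolution: the coarse stream carries the long-term trend, the detail stream the short-term fluctuation.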

  3. Toward modeling a dynamic biological neural network.

    PubMed

    Ross, M D; Dayhoff, J E; Mugler, D H

    1990-01-01

    Mammalian macular endorgans are linear bioaccelerometers located in the vestibular membranous labyrinth of the inner ear. In this paper, the organization of the endorgan is interpreted on physical and engineering principles. This is a necessary prerequisite to mathematical and symbolic modeling of information processing by the macular neural network. Mathematical notations that describe the functioning system were used to produce a novel, symbolic model. The model is six-tiered and is constructed to mimic the neural system. Initial simulations show that the network functions best when some of the detecting elements (type I hair cells) are excitatory and others (type II hair cells) are weakly inhibitory. The simulations also illustrate the importance of disinhibition of receptors located in the third tier in shaping nerve discharge patterns at the sixth tier in the model system. PMID:11538873

  4. Evaluating neural networks and artificial intelligence systems

    NASA Astrophysics Data System (ADS)

    Alberts, David S.

    1994-02-01

    Systems have no intrinsic value in and of themselves, but rather derive value from the contributions they make to the missions, decisions, and tasks they are intended to support. The estimation of the cost-effectiveness of systems is a prerequisite for rational planning, budgeting, and investment documents. Neural network and expert system applications, although similar in their incorporation of a significant amount of decision-making capability, differ from each other in ways that affect the manner in which they can be evaluated. Both these types of systems are, by definition, evolutionary systems, which also impacts their evaluation. This paper discusses key aspects of neural network and expert system applications and their impact on the evaluation process. A practical approach or methodology for evaluating a certain class of expert systems that are particularly difficult to measure using traditional evaluation approaches is presented.

  5. Representing Shape Primitives In Neural Networks

    NASA Astrophysics Data System (ADS)

    Pawlicki, Ted

    1988-08-01

    Parallel distributed, connectionist, neural networks present powerful computational metaphors for diverse applications ranging from machine perception to artificial intelligence [1-3,6]. Historically, such systems have been appealing for their ability to perform self-organization and learning [7, 8, 11]. However, while simple systems of this type can perform interesting tasks, they perform little better than existing template matchers in some real world applications [9,10]. Defining a more complex structure made from simple units can enhance the performance of these models [4, 5], but the added complexity raises representational issues. This paper reports on attempts to encode information and features which have classically been useful in shape analysis into a neural network system.

  6. Adaptive Filtering Using Recurrent Neural Networks

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.

  7. Applying neural networks to optimize instrumentation performance

    SciTech Connect

    Start, S.E.; Peters, G.G.

    1995-06-01

    Well calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions, and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature-sensitive level probe used in Experimental Breeder Reactor II (EBR-II) that was difficult to calibrate.

  8. Neural network architectures to analyze OPAD data

    NASA Technical Reports Server (NTRS)

    Whitaker, Kevin W.

    1992-01-01

    A prototype Optical Plume Anomaly Detection (OPAD) system is now installed on the space shuttle main engine (SSME) Technology Test Bed (TTB) at MSFC. The OPAD system requirements dictate the need for fast, efficient data processing techniques. To address this need of the OPAD system, a study was conducted into how artificial neural networks could be used to assist in the analysis of plume spectral data.

  9. Artificial neural network cardiopulmonary modeling and diagnosis

    DOEpatents

    Kangas, L.J.; Keller, P.E.

    1997-10-28

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis. 12 figs.

  10. Neural-Network Processor Would Allocate Resources

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P.; Moopenn, Alexander W.

    1990-01-01

    Global optimization problems solved quickly. Neural-network processor optimizes allocation of M resources among N expenditures according to cost of pairing each resource with each expenditure and subject to limit on number of resources feeding into each expenditure and/or limit on number of expenditures to which each resource allocated. One cell performs several analog and digital functions. Potential applications include assignment of jobs, scheduling, dispatching, and planning of military maneuvers.

  11. Learning in Neural Networks: VLSI Implementation Strategies

    NASA Technical Reports Server (NTRS)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  12. Neural Network Solves "Traveling-Salesman" Problem

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.

    1990-01-01

    Experimental electronic neural network solves "traveling-salesman" problem. Plans round trip of minimum distance among N cities, visiting every city once and only once (without backtracking). This problem is paradigm of many problems of global optimization (e.g., routing or allocation of resources) occurring in industry, business, and government. Applied to large number of cities (or resources), circuits of this kind expected to solve problem faster and more cheaply.
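
    The record gives no equations, but circuits of this kind settle into states that minimize a global energy; a sketch of the textbook Hopfield-Tank energy for the tour problem, with a city-by-stop binary encoding and a constraint penalty (the encoding and penalty weight are the standard formulation, not details from this record):

```python
def tour_energy(v, dist, penalty=10.0):
    """Hopfield-style energy of a candidate solution v, where v[x][i] = 1
    if city x is visited at stop i. The constraint terms penalize any row
    or column not summing to one (each city visited exactly once, each
    stop holding exactly one city); the data term is the tour length."""
    n = len(v)
    row_pen = sum((sum(v[x]) - 1) ** 2 for x in range(n))
    col_pen = sum((sum(v[x][i] for x in range(n)) - 1) ** 2 for i in range(n))
    length = sum(dist[x][y] * v[x][i] * v[y][(i + 1) % n]
                 for x in range(n) for y in range(n) for i in range(n))
    return penalty * (row_pen + col_pen) + length
```

    A valid tour makes both penalty terms vanish, so its energy is just the tour length; the analog circuit's dynamics drive the neuron states downhill on this surface.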

  13. Artificial neural network cardiopulmonary modeling and diagnosis

    DOEpatents

    Kangas, Lars J.; Keller, Paul E.

    1997-01-01

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.

  14. Neural network with dynamically adaptable neurons

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    This invention is an adaptive neuron for use in neural network processors. The adaptive neuron participates in the supervised learning phase of operation on a co-equal basis with the synapse matrix elements by adaptively changing its gain in a similar manner to the change of weights in the synapse elements. In this manner, training time is decreased by as much as three orders of magnitude.

  15. Nonvolatile Array Of Synapses For Neural Network

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Elements of array programmed with help of ultraviolet light. A 32 x 32 very-large-scale integrated-circuit array of electronic synapses serves as building-block chip for analog neural-network computer. Synaptic weights stored in nonvolatile manner. Makes information content of array invulnerable to loss of power, and, by eliminating need for circuitry to refresh volatile synaptic memory, makes architecture simpler and more compact.

  16. Analog hardware for learning neural networks

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P. (Inventor)

    1991-01-01

    This is a recurrent or feedforward analog neural network processor having a multi-level neuron array and a synaptic matrix for storing weighted analog values of synaptic connection strengths which is characterized by temporarily changing one connection strength at a time to determine its effect on system output relative to the desired target. That connection strength is then adjusted based on the effect, whereby the processor is taught the correct response to training examples connection by connection.
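
    The patent's connection-by-connection scheme (perturb one strength, observe the effect on the output error, then adjust) is the hardware-friendly weight-perturbation method. A software sketch of the same loop, with a toy linear "network" and squared-error loss standing in for the analog processor (all names and the example model are illustrative):

```python
def perturbation_train(weights, forward, loss, inputs, targets,
                       delta=1e-4, lr=0.1, epochs=200):
    """Weight-perturbation learning: nudge one weight at a time, measure the
    resulting change in output loss, and adjust that weight against the
    finite-difference gradient estimate; no analytic backpropagation needed."""
    w = list(weights)
    for _ in range(epochs):
        for j in range(len(w)):
            base = loss(forward(w, inputs), targets)
            w[j] += delta
            bumped = loss(forward(w, inputs), targets)
            w[j] -= delta                     # restore the connection
            grad = (bumped - base) / delta    # measured effect of the nudge
            w[j] -= lr * grad
    return w

# toy stand-in for the analog network: y = w0*x0 + w1*x1
def forward(w, xs):
    return [w[0] * x[0] + w[1] * x[1] for x in xs]

def loss(ys, ts):
    return sum((y - t) ** 2 for y, t in zip(ys, ts)) / len(ys)
```

    The appeal in hardware is that only forward evaluations of the physical circuit are needed, which is why the patent can teach the processor "connection by connection" without computing gradients analytically.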

  17. Neural network error correction for solving coupled ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
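
    The idea of learning the integrator's one-step error and adding it back as a correction can be shown in miniature. Here an Euler step stands in for Runge-Kutta and a linear least-squares fit stands in for the neural network; the problem dy/dt = -y and all names are illustrative:

```python
import math

def euler_step(y, h, f):
    """One coarse integrator step (Euler, standing in for Runge-Kutta)."""
    return y + h * f(y)

def learn_correction(f, exact, h, train_points):
    """Fit err(y) ~ a*y + b to the one-step error of the coarse integrator
    on training states; a linear model stands in for the neural network."""
    errs = [exact(y, h) - euler_step(y, h, f) for y in train_points]
    n = len(train_points)
    my = sum(train_points) / n
    me = sum(errs) / n
    a = (sum((y - my) * (e - me) for y, e in zip(train_points, errs))
         / sum((y - my) ** 2 for y in train_points))
    b = me - a * my
    return lambda y: a * y + b

def corrected_step(y, h, f, corr):
    """Coarse step plus the learned error correction."""
    return euler_step(y, h, f) + corr(y)

# example problem: dy/dt = -y, whose exact one-step map is y -> y*exp(-h)
f = lambda y: -y
exact = lambda y, h: y * math.exp(-h)
```

    For this linear test problem the one-step error is itself linear in y, so the fitted correction recovers the exact map; a neural network plays the same role when the error surface is nonlinear.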

  18. Analog compound orthogonal neural network control of robotic manipulators

    NASA Astrophysics Data System (ADS)

    Jun, Ye

    2005-12-01

    An analog compound orthogonal neural network, based on digital compound orthogonal neural networks, is presented, and its control performance is investigated on a robot control problem. The analog network is a Chebyshev neural network with a high learning speed that operates on-line, and its control algorithm does not depend on a model of the controlled plant. The analog neural network is used as the feedforward controller, and a PD controller provides feedback in the robot control system. Excellent system response, tracking accuracy, and robustness were verified through a simulation of a robotic manipulator with friction and nonlinear disturbances, and the trajectory tracking results were satisfactory. This analog neural controller provides a novel approach for the control of uncertain or unknown systems.

  19. A space-time neural network

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Shelton, Robert O.

    1991-01-01

    Introduced here is a novel technique which adds the dimension of time to the well known back propagation neural network algorithm. Cited here are several reasons why the inclusion of automated spatial and temporal associations are crucial to effective systems modeling. An overview of other works which also model spatiotemporal dynamics is furnished. A detailed description is given of the processes necessary to implement the space-time network algorithm. Several demonstrations that illustrate the capabilities and performance of this new architecture are given.

  20. Deep learning in neural networks: an overview.

    PubMed

    Schmidhuber, Jürgen

    2015-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks. PMID:25462637

  2. File access prediction using neural networks.

    PubMed

    Patra, Prashanta Kumar; Sahu, Muktikanta; Mohapatra, Subasish; Samantray, Ronak Kumar

    2010-06-01

    One of the most vexing issues in the design of a high-speed computer is the wide gap in access times between memory and disk. To address this problem, static file access predictors have been used. In this paper, we propose dynamic file access predictors using neural networks that significantly improve the accuracy, success-per-reference, and effective-success-rate-per-reference with proper tuning. In particular, we verified that incorrect predictions are reduced from 53.11% to 43.63% by the proposed neural network prediction method with a standard configuration, compared with the recent popularity (RP) method. With manual tuning for each trace, we are able to improve the misprediction rate and effective-success-rate-per-reference beyond the standard configuration. Simulations on distributed file system (DFS) traces reveal that an exact-fit radial basis function (RBF) network gives better predictions in high-end systems, whereas a multilayer perceptron (MLP) trained with Levenberg-Marquardt (LM) backpropagation performs best on systems with good computational capability. Probabilistic and competitive predictors are most suitable for workstations with limited resources, and the former is more efficient than the latter for servers with many system calls. Finally, we conclude that the MLP with the LM backpropagation algorithm has a better file prediction success rate than simple perceptron, last-successor, stable-successor, and best-k-out-of-m predictors.

  3. The next generation of neural network chips

    SciTech Connect

    Beiu, V.

    1997-08-01

    There have been many national and international neural network research initiatives: USA (DARPA, NIBS), Canada (IRIS), Japan (HFSP) and Europe (BRAIN, GALATEA, NERVES, ELENA, NERVES 2) -- just to mention a few. Recent developments in the fields of neural networks, cognitive science, bioengineering and electrical engineering have made it possible to understand more about the functioning of large ensembles of identical processing elements. There are more research papers than ever proposing solutions, and hardware implementations are by no means an exception. Two fields (computing and neuroscience) are interacting in ways nobody could have imagined just several years ago, and -- with the advent of new technologies -- researchers are focusing on trying to copy the Brain. Such an exciting confluence may quite shortly lead to revolutionary new computers, and it is the aim of this invited session to bring to light some of the challenging research aspects dealing with the hardware realizability of future intelligent chips. Present-day (conventional) technology is (still) mostly digital and thus occupies wider areas and consumes much more power than the solutions envisaged. The innovative algorithmic and architectural ideas should represent important breakthroughs, paving the way towards making neural network chips available to industry at competitive prices, in relatively small packages, and consuming a fraction of the power required by equivalent digital solutions.

  4. CALIBRATION OF ONLINE ANALYZERS USING NEURAL NETWORKS

    SciTech Connect

    Rajive Ganguli; Daniel E. Walsh; Shaohai Yu

    2003-12-05

    Neural networks were used to calibrate an online ash analyzer at the Usibelli Coal Mine, Healy, Alaska, by relating the Americium and Cesium counts to the ash content. A total of 104 samples were collected from the mine, 47 from screened coal and the rest from unscreened coal. Each sample corresponded to 20 seconds of coal on the running conveyor belt. Neural network modeling used the quick stop training procedure; the samples were therefore split into training, calibration and prediction subsets. Special techniques, using genetic algorithms, were developed to split the samples representatively into the three subsets. Two separate approaches were tried: in one, the screened and unscreened coal were modeled separately; in the other, a single model was developed for the entire dataset. No advantage was seen from modeling the two subsets separately. The neural network method performed very well on average but not individually, i.e., though each individual prediction was unreliable, the average of a few predictions was close to the true average. Thus, the method demonstrated that the analyzers were accurate at 2-3 minute intervals (averages of 6-9 samples), but not at 20 seconds (single predictions).

  5. Analysis of complex systems using neural networks

    SciTech Connect

    Uhrig, R.E. (Dept. of Nuclear Engineering; Oak Ridge National Lab., TN)

    1992-01-01

    The application of neural networks, alone or in conjunction with other advanced technologies (expert systems, fuzzy logic, and/or genetic algorithms), to some of the problems of complex engineering systems has the potential to enhance the safety, reliability, and operability of these systems. Typically, the measured variables from the systems are analog variables that must be sampled and normalized to expected peak values before they are introduced into neural networks. Often data must be processed to put it into a form more acceptable to the neural network (e.g., a fast Fourier transformation of the time-series data to produce a spectral plot of the data). Specific applications described include: (1) Diagnostics: State of the Plant (2) Hybrid System for Transient Identification, (3) Sensor Validation, (4) Plant-Wide Monitoring, (5) Monitoring of Performance and Efficiency, and (6) Analysis of Vibrations. Although specific examples described deal with nuclear power plants or their subsystems, the techniques described can be applied to a wide variety of complex engineering systems.

  7. Functional model of biological neural networks

    PubMed Central

    2010-01-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieval, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks. PMID:22132040

  8. A new approach to artificial neural networks.

    PubMed

    Baptista Filho, B D; Cabral, E L; Soares, A J

    1998-01-01

    A novel approach to artificial neural networks is presented. The philosophy of this approach is based on two aspects: the design of task-specific networks, and a new neuron model with multiple synapses. The synapses' connective strengths are modified through selective and cumulative processes conducted by axo-axonic connections from a feedforward circuit. This new concept was applied to the position control of a planar two-link manipulator, showing excellent learning capability and generalization when compared with a conventional feedforward network. The example in the present paper shows only a network developed from a neuronal reflexive circuit with some useful artifices; it is not intended to cover all the possibilities devised.

  9. Microscopic instability in recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Yamanaka, Yuzuru; Amari, Shun-ichi; Shinomoto, Shigeru

    2015-03-01

    In a manner similar to the molecular chaos that underlies the stable thermodynamics of gases, a neuronal system may exhibit microscopic instability in individual neuronal dynamics while a macroscopic order of the entire population possibly remains stable. In this study, we analyze the microscopic stability of a network of neurons whose macroscopic activity obeys stable dynamics, expressing a monostable, bistable, or periodic state. We reveal that the network exhibits a variety of dynamical states in which microscopic instability resides within a given stable macroscopic dynamics. The presence of such a variety of dynamical states in this simple random network implies even more abundant microscopic fluctuations in real neural networks, which consist of more complex and hierarchically structured interactions.

  10. Random interactions in higher order neural networks

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre; Venkatesh, Santosh S.

    1993-01-01

    Recurrent networks of polynomial threshold elements with random symmetric interactions are studied. Precise asymptotic estimates are derived for the expected number of fixed points as a function of the margin of stability. In particular, it is shown that there is a critical range of margins of stability (depending on the degree of polynomial interaction) such that the expected number of fixed points with margins below the critical range grows exponentially with the number of nodes in the network, while the expected number of fixed points with margins above the critical range decreases exponentially with the number of nodes in the network. The random energy model is also briefly examined and links with higher order neural networks and higher order spin glass models made explicit.

  11. Desynchronization in diluted neural networks

    SciTech Connect

    Zillmer, Ruediger; Livi, Roberto; Politi, Antonio; Torcini, Alessandro

    2006-09-15

    The dynamical behavior of a weakly diluted fully inhibitory network of pulse-coupled spiking neurons is investigated. Upon increasing the coupling strength, a transition from regular to stochasticlike regime is observed. In the weak-coupling phase, a periodic dynamics is rapidly approached, with all neurons firing with the same rate and mutually phase locked. The strong-coupling phase is characterized by an irregular pattern, even though the maximum Lyapunov exponent is negative. The paradox is solved by drawing an analogy with the phenomenon of 'stable chaos', i.e., by observing that the stochasticlike behavior is 'limited' to an exponentially long (with the system size) transient. Remarkably, the transient dynamics turns out to be stationary.

  12. A global competitive neural network.

    PubMed

    Taylor, J G; Alavi, F N

    1995-01-01

    A study is presented of a set of coupled nets proposed to function as a global competitive network. One net, of hidden nodes, is composed solely of inhibitory neurons; it is excitatorily driven and feeds back in a disinhibitory manner to an input net, which itself feeds excitatorily to a (cortical) output net. The manner in which the input and hidden inhibitory nets function so as to enhance outputs as compared with inputs, and the further enhancements when the cortical net is added, are explored both mathematically and by simulation. This is extended to learning on cortical afferent and lateral connections. A global wave structure, arising on the inhibitory net in a similar manner to pattern formation in a negative Laplacian net, is seen to be important to all of these activities. Simulations are performed only in one dimension, although the global nature of the activity is expected to extend to higher dimensions. Possible implications are briefly discussed.

  13. Reducing neural network training time with parallel processing

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1995-01-01

    Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer: to determine the number of nodes on the hidden layer of the smaller neural networks; to choose the initial training weights; and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.

  14. Optimizing Cellular Networks Enabled with Renewable Energy via Strategic Learning.

    PubMed

    Sohn, Insoo; Liu, Huaping; Ansari, Nirwan

    2015-01-01

    An important issue in the cellular industry is the rising energy cost and carbon footprint due to the rapid expansion of the cellular infrastructure. Greening cellular networks has thus attracted attention. Among the promising green cellular network techniques, the renewable energy-powered cellular network has drawn increasing attention as a critical element towards reducing carbon emissions due to massive energy consumption in the base stations deployed in cellular networks. Game theory is a branch of mathematics that is used to evaluate and optimize systems with multiple players with conflicting objectives and has been successfully used to solve various problems in cellular networks. In this paper, we model the green energy utilization and power consumption optimization problem of a green cellular network as a pilot power selection strategic game and propose a novel distributed algorithm based on a strategic learning method. The simulation results indicate that the proposed algorithm achieves correlated equilibrium of the pilot power selection game, resulting in optimum green energy utilization and power consumption reduction. PMID:26167934

  15. Optimizing Cellular Networks Enabled with Renewable Energy via Strategic Learning

    PubMed Central

    Sohn, Insoo; Liu, Huaping; Ansari, Nirwan

    2015-01-01

    An important issue in the cellular industry is the rising energy cost and carbon footprint due to the rapid expansion of the cellular infrastructure. Greening cellular networks has thus attracted attention. Among the promising green cellular network techniques, the renewable energy-powered cellular network has drawn increasing attention as a critical element towards reducing carbon emissions due to massive energy consumption in the base stations deployed in cellular networks. Game theory is a branch of mathematics that is used to evaluate and optimize systems with multiple players with conflicting objectives and has been successfully used to solve various problems in cellular networks. In this paper, we model the green energy utilization and power consumption optimization problem of a green cellular network as a pilot power selection strategic game and propose a novel distributed algorithm based on a strategic learning method. The simulation results indicate that the proposed algorithm achieves correlated equilibrium of the pilot power selection game, resulting in optimum green energy utilization and power consumption reduction. PMID:26167934

  16. Automated brain segmentation using neural networks

    NASA Astrophysics Data System (ADS)

    Powell, Stephanie; Magnotta, Vincent; Johnson, Hans; Andreasen, Nancy

    2006-03-01

    Automated methods to delineate brain structures of interest are required to analyze large amounts of imaging data such as that being collected in several ongoing multi-center studies. We have previously reported on using artificial neural networks (ANN) to define subcortical brain structures such as the thalamus (0.825), caudate (0.745), and putamen (0.755). One of the inputs into the ANN is the a priori probability of a structure existing at a given location. In this previous work, the a priori probability information was generated in Talairach space using a piecewise linear registration. In this work we have increased the dimensionality of this registration using Thirion's demons registration algorithm. The input vector consisted of the a priori probability, spherical coordinates, and an iris of surrounding signal intensity values. The output of the neural network determined whether the voxel was defined as one of the N regions used for training. Training was performed using a standard backpropagation algorithm. The ANN was trained on a set of 15 images for 750,000,000 iterations. The resulting ANN weights were then applied to 6 test images not part of the training set. The relative overlap calculated for each structure was 0.875 for the thalamus, 0.845 for the caudate, and 0.814 for the putamen. With the modifications to the neural net algorithm and the use of multi-dimensional registration, we found substantial improvement in the automated segmentation method. The resulting segmented structures are as reliable as manual raters, and the output of the neural network can be used without additional rater intervention.
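
The "relative overlap" figures quoted above are not defined in the record; a common definition for segmentation agreement is intersection over union of two binary masks. A minimal sketch, assuming that definition and using hypothetical toy masks:

```python
import numpy as np

def relative_overlap(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union of two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(np.logical_and(a, b).sum() / union)

# Toy 1-D "segmentations": automated result vs. a manual rater
auto = np.array([0, 1, 1, 1, 0, 0])
manual = np.array([0, 1, 1, 0, 0, 0])
print(relative_overlap(auto, manual))  # 2 overlapping voxels / 3 in the union
```

A value of 1.0 means the automated and manual segmentations agree voxel for voxel.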

  17. A solution method of unit commitment by artificial neural networks

    SciTech Connect

    Yokoyama, R.

    1992-08-01

    This paper explores the possibility of applying the Hopfield neural network to combinatorial optimization problems in power systems, in particular to unit commitment. The large number of inequality constraints included in unit commitment is handled by dedicated neural networks. As an exact mapping of the problem onto the neural network is impossible with the state of the art, the authors have developed a two-step solution method: first, the generators to start up at each period are determined by the network, and then their outputs are adjusted by a conventional algorithm. The proposed neural network could solve a unit commitment of 30 units over 24 periods, and the results obtained are very encouraging.

  18. A neural network short-term forecast of significant thunderstorms

    SciTech Connect

    McCann, D.W.

    1992-09-01

    Neural networks, artificial-intelligence tools that excel in pattern recognition, are reviewed, and a 3-7-h significant thunderstorm forecast developed with this technique is discussed. Two neural networks learned to forecast significant thunderstorms from fields of surface-based lifted index and surface moisture convergence. These networks are sensitive to the patterns that skilled forecasters recognize as occurring prior to strong thunderstorms. The two neural networks are combined operationally at the National Severe Storms Forecast Center into a single hourly product that enhances pattern-recognition skills. Examples of neural network products are shown, and their potential impact on significant thunderstorm forecasting is demonstrated. 22 refs.

  19. Detection of Wildfires with Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Umphlett, B.; Leeman, J.; Morrissey, M. L.

    2011-12-01

    Currently fire detection for the National Oceanic and Atmospheric Administration (NOAA) using satellite data is accomplished with algorithms and error checking by human analysts. Artificial neural networks (ANNs) have been shown to be more accurate than algorithms or statistical methods for applications dealing with multiple datasets of complex observed data in the natural sciences. ANNs also deal well with multiple data sources that are not all equally reliable or equally informative to the problem. An ANN was tested to evaluate its accuracy in detecting wildfires utilizing polar orbiter numerical data from the Advanced Very High Resolution Radiometer (AVHRR). Datasets containing locations of known fires were gathered from NOAA's polar orbiting satellites via the Comprehensive Large Array-data Stewardship System (CLASS). The data was then calibrated and navigation-corrected using the Environment for Visualizing Images (ENVI). Fires were located with the aid of shapefiles generated via ArcGIS. Afterwards, several smaller ten pixel by ten pixel datasets were created for each fire (using the ENVI-corrected data). Several datasets were created for each fire in order to vary fire position and avoid training the ANN to look only at fires in the center of an image. Datasets containing no fires were also created. A basic pattern recognition neural network was established with the MATLAB neural network toolbox. The datasets were then randomly separated into categories used to train, validate, and test the ANN. To prevent overfitting of the data, the mean squared error (MSE) of the network was monitored and training was stopped when the MSE began to rise. Networks were tested using each channel of the AVHRR data independently, channels 3a and 3b combined, and all six channels. The number of hidden neurons for each input set was also varied from 5 to 350 in steps of 5 neurons. Each configuration was run 10 times, totaling about 4,200 individual network evaluations. 
Thirty
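
The early-stopping rule described in this abstract (halt training when the validation MSE begins to rise) can be sketched as follows; `train_step` and `val_mse` are hypothetical callbacks, and the toy validation curve is illustrative only:

```python
def train_with_early_stopping(train_step, val_mse, max_epochs=100, patience=1):
    """Run train_step() each epoch; stop once validation MSE has risen for
    `patience` consecutive epochs (the monitoring rule in the abstract)."""
    best = float("inf")
    bad_epochs = 0
    history = []
    for _ in range(max_epochs):
        train_step()
        mse = val_mse()
        history.append(mse)
        if mse < best:
            best, bad_epochs = mse, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # MSE began to rise: stop training
    return history

# Toy validation curve: falls for four epochs, then begins to rise
curve = iter([0.9, 0.5, 0.3, 0.2, 0.25, 0.4, 0.6])
hist = train_with_early_stopping(lambda: None, lambda: next(curve))
print(len(hist))  # → 5: training halts one epoch after the minimum
```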

  20. Adaptive neural network for image enhancement

    NASA Astrophysics Data System (ADS)

    Perl, Dan; Marsland, T. A.

    1992-09-01

    ANNIE is a neural network that removes noise and sharpens edges in digital images. For noise removal, ANNIE makes a weighted average of the values of the pixels over a certain neighborhood. For edge sharpening, ANNIE detects edges and applies a correction around them. Although averaging is a simple operation and needs only a two-layer neural network, detecting edges is more complex and demands several hidden layers. Based on Marr's theory of natural vision, the edge detection method uses zero-crossings in the image filtered by the ∇²G operator (where ∇² is the Laplacian operator and G stands for a two-dimensional Gaussian distribution), and uses two channels with different spatial frequencies. Edge detectors are tuned for vertical and horizontal orientations. Lateral inhibition implemented through one-step recursion achieves both edge relaxation and correlation of the two channels. Training by means of the quickprop algorithm determines the shapes of the weighted averaging filter and the edge correction filters, and the rules for edge relaxation and channel interaction. ANNIE uses pairs of pictures as training patterns: one picture is a reference for the output of the network and the same picture deteriorated by noise and/or blur is the input of the network.

  1. Neural Network Model of Memory Retrieval

    PubMed Central

    Recanatesi, Stefano; Katkov, Mikhail; Romani, Sandro; Tsodyks, Misha

    2015-01-01

    Human memory can store a large amount of information. Nevertheless, recalling it is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model for memory retrieval based on a Hopfield neural network, where transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding to (1) single memory representations and (2) intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predicts the distribution of time intervals required to recall new memory items observed in experiments. It shows that items represented by larger numbers of neurons are statistically easier to recall, and it reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval that is broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013). PMID:26732491
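
A minimal sketch of the Hopfield-style storage and retrieval that underlies such models (a generic Hebbian Hopfield network; the paper's oscillating inhibition and transition dynamics are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hebbian storage of binary (+/-1) patterns in a Hopfield network
n_neurons, n_patterns = 64, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_neurons))
W = (patterns.T @ patterns) / n_neurons
np.fill_diagonal(W, 0.0)  # no self-connections

def retrieve(state, steps=10):
    """Synchronous sign updates; at low load the state falls into a memory."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties deterministically
    return state

# Cue the network with a corrupted copy of pattern 0 (8 flipped bits)
cue = patterns[0].astype(float)
flip = rng.choice(n_neurons, size=8, replace=False)
cue[flip] *= -1
overlap = (retrieve(cue) == patterns[0]).mean()
print(overlap)
```

With only 3 patterns in 64 neurons the load is far below the classical capacity, so the corrupted cue is pulled back to (or very near) the stored memory.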

  2. Neural Network Model of Memory Retrieval.

    PubMed

    Recanatesi, Stefano; Katkov, Mikhail; Romani, Sandro; Tsodyks, Misha

    2015-01-01

    Human memory can store a large amount of information. Nevertheless, recalling it is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model for memory retrieval based on a Hopfield neural network, where transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding to (1) single memory representations and (2) intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predicts the distribution of time intervals required to recall new memory items observed in experiments. It shows that items represented by larger numbers of neurons are statistically easier to recall, and it reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval that is broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013). PMID:26732491

  3. Sparse coding for layered neural networks

    NASA Astrophysics Data System (ADS)

    Katayama, Katsuki; Sakata, Yasuo; Horiguchi, Tsuyoshi

    2002-07-01

    We investigate the storage capacity of two types of fully connected layered neural networks with sparse coding when binary patterns are embedded into the networks by a Hebbian learning rule. One of them is a layered network in which the transfer function of even layers is different from that of odd layers. The other is a layered network with intra-layer connections, in which the inter-layer transfer function is different from the intra-layer one, and inter-layer neurons and intra-layer neurons are updated alternately. We derive recursion relations for order parameters by means of the signal-to-noise ratio method, and then apply the self-control threshold method proposed by Dominguez and Bollé to both layered networks with monotonic transfer functions. We find that the critical value αC of the storage capacity is about 0.11|a ln a|⁻¹ (a ≪ 1) for both layered networks, where a is the neuronal activity. It turns out that the basin of attraction is larger for both layered networks when the self-control threshold method is applied.
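
The quoted capacity estimate αC ≈ 0.11|a ln a|⁻¹ is easy to evaluate numerically; it shows how sparser coding (smaller activity a) sharply increases the number of storable patterns per neuron:

```python
import math

def critical_capacity(a: float) -> float:
    """Sparse-coding capacity estimate alpha_C ~ 0.11 / |a ln a|, valid a << 1."""
    return 0.11 / abs(a * math.log(a))

for a in (0.1, 0.01, 0.001):
    print(f"activity a={a:g}: alpha_C ~ {critical_capacity(a):.1f}")
```

For a = 0.1 this gives roughly 0.48, rising to about 16 patterns per neuron at a = 0.001.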

  4. Facial expression recognition using constructive neural networks

    NASA Astrophysics Data System (ADS)

    Ma, Liying; Khorasani, Khashayar

    2001-08-01

    The computer-based recognition of facial expressions has been an active area of research for quite a long time. The ultimate goal is to realize intelligent and transparent communications between human beings and machines. Neural network (NN) based recognition methods have been found to be particularly promising, since an NN is capable of implementing the mapping from the feature space of face images to the facial expression space. However, finding a proper network size has always been a frustrating and time-consuming experience for NN developers. In this paper, we propose to use constructive one-hidden-layer feedforward neural networks (OHL-FNNs) to overcome this problem. The constructive OHL-FNN obtains in a systematic way a proper network size as required by the complexity of the problem being considered. Furthermore, the computational cost involved in network training can be considerably reduced when compared to standard backpropagation (BP) based FNNs. In our proposed technique, the 2-dimensional discrete cosine transform (2-D DCT) is applied over the entire difference face image to extract relevant features for recognition. The lower-frequency 2-D DCT coefficients obtained are then used to train a constructive OHL-FNN. An input-side pruning technique previously proposed by the authors is also incorporated into the constructive learning process to reduce the network size without sacrificing the performance of the resulting network. The proposed technique is applied to a database consisting of images of 60 men, each having 5 facial expression images (neutral, smile, anger, sadness, and surprise). Images of 40 men are used for network training, and the remaining images are used for generalization and
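
The low-frequency 2-D DCT feature extraction described above can be sketched with an orthonormal DCT-II built directly in NumPy; the image size, the `keep` parameter, and the random stand-in for a difference face image are illustrative, not the paper's:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)
    return C

def dct2_features(image: np.ndarray, keep: int = 8) -> np.ndarray:
    """2-D DCT of an image; keep the top-left (lowest-frequency) keep x keep
    coefficients, flattened into a feature vector for the classifier."""
    h, w = image.shape
    coeffs = dct_matrix(h) @ image @ dct_matrix(w).T
    return coeffs[:keep, :keep].ravel()

rng = np.random.default_rng(1)
diff_image = rng.normal(size=(32, 32))  # stand-in for a difference face image
features = dct2_features(diff_image, keep=8)
print(features.shape)  # (64,)
```

Keeping only the top-left block discards high-frequency detail, which is the usual rationale for DCT-based dimensionality reduction.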

  5. Advances in Artificial Neural Networks - Methodological Development and Application

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  6. Solving quadratic programming problems by delayed projection neural network.

    PubMed

    Yang, Yongqing; Cao, Jinde

    2006-11-01

    In this letter, the delayed projection neural network for solving convex quadratic programming problems is proposed. The neural network is proved to be globally exponentially stable and can converge to an optimal solution of the optimization problem. Three examples show the effectiveness of the proposed network.
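
A discrete-time analogue of a projection neural network for a box-constrained convex QP, sketched as projected gradient iteration (this omits the delay the paper analyzes; the problem data below is a toy example):

```python
import numpy as np

# Convex QP: minimize 0.5 x^T Q x + c^T x  subject to  lo <= x <= hi
Q = np.array([[2.0, 0.0], [0.0, 4.0]])
c = np.array([-2.0, -8.0])
lo, hi = np.zeros(2), np.ones(2)

def project(x):
    return np.clip(x, lo, hi)  # projection onto the box constraint set

x = np.zeros(2)
step = 0.1
for _ in range(500):  # discrete-time analogue of the network dynamics
    x = project(x - step * (Q @ x + c))

print(x)  # the unconstrained minimum (1, 2) is clipped by the box to (1, 1)
```

The network's stable equilibrium corresponds to the fixed point of this projection map, which is the constrained optimum.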

  7. Handwritten character recognition based on hybrid neural networks

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Sun, Guangmin; Zhang, Xinming

    2001-09-01

    A hybrid neural network system for the recognition of handwritten character using SOFM,BP and Fuzzy network is presented. The horizontal and vertical project of preprocessed character and 4_directional edge project are used as feature vectors. In order to improve the recognition effect, the GAT algorithm is applied. Through the hybrid neural network system, the recognition rate is improved visibly.

  8. Dynamic artificial neural networks with affective systems.

    PubMed

    Schuman, Catherine D; Birdwell, J Douglas

    2013-01-01

    Artificial neural networks (ANNs) are processors that are trained to perform particular tasks. We couple a computational ANN with a simulated affective system in order to explore the interaction between the two. In particular, we design a simple affective system that adjusts the threshold values in the neurons of our ANN. The aim of this paper is to demonstrate that this simple affective system can control the firing rate of the ensemble of neurons in the ANN, as well as to explore the coupling between the affective system and the processes of long term potentiation (LTP) and long term depression (LTD), and the effect of the parameters of the affective system on its performance. We apply our networks with affective systems to a simple pole balancing example and briefly discuss the effect of affective systems on network performance.
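
The idea of an affective system steering the ensemble firing rate by adjusting neuron thresholds can be sketched as a simple homeostatic rule; every name and parameter here is illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "affective system": nudge a shared firing threshold so that the
# ensemble firing rate of 100 neurons tracks a target rate.
target_rate, threshold, gain = 0.2, 0.5, 0.05
rates = []
for _ in range(200):
    drive = rng.random(100)                   # random input drive per neuron
    rate = (drive > threshold).mean()         # fraction of neurons firing
    threshold += gain * (rate - target_rate)  # too active -> raise threshold
    rates.append(rate)

print(np.mean(rates[-50:]))  # settles near the target rate
```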

  9. Applying neural networks to ultrasonographic texture recognition

    NASA Astrophysics Data System (ADS)

    Gallant, Jean-Francois; Meunier, Jean; Stampfler, Robert; Cloutier, Jocelyn

    1993-09-01

    A neural network was trained to classify ultrasound image samples of normal, adenomatous (benign tumor) and carcinomatous (malignant tumor) thyroid gland tissue. The samples themselves, as well as their Fourier spectrum, miscellaneous cooccurrence matrices and 'generalized' cooccurrence matrices, were successively submitted to the network, to determine if it could be trained to identify discriminating features of the texture of the image, and if not, which feature extractor would give the best results. Results indicate that the network could indeed extract some distinctive features from the textures, since it could accomplish a partial classification when trained with the samples themselves. But a significant improvement both in learning speed and performance was observed when it was trained with the generalized cooccurrence matrices of the samples.

  10. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks.

    PubMed

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combines Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and comparing it with models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, empirical research is performed to test the predictive effects on SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values of the stock market indices.
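
The forward pass of an Elman network (hidden state fed back as "context" input at the next time step) can be sketched as below; the stochastic time effective function of the paper is omitted, and the weights and toy series are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Elman network: the hidden state is copied back as context for the next step
n_in, n_hidden, n_out = 1, 8, 1
W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))
W_ctx = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.5, size=(n_out, n_hidden))

def forward(series):
    """One pass over a time series, producing one prediction per step."""
    context = np.zeros(n_hidden)
    outputs = []
    for x_t in series:
        context = np.tanh(W_in @ np.atleast_1d(x_t) + W_ctx @ context)
        outputs.append(W_out @ context)
    return np.array(outputs).ravel()

prices = np.sin(np.linspace(0, 4 * np.pi, 50))  # toy stand-in "index" series
preds = forward(prices)
print(preds.shape)  # (50,)
```

Training would adjust the three weight matrices by backpropagation (through time, or Elman's truncated variant); only the recurrent forward structure is shown here.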

  11. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    PubMed Central

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combines Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and comparing it with models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, empirical research is performed to test the predictive effects on SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values of the stock market indices. PMID:27293423

  12. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks.

    PubMed

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combines Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and comparing it with models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, empirical research is performed to test the predictive effects on SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values of the stock market indices. PMID:27293423

  13. Survey on Neural Networks Used for Medical Image Processing

    PubMed Central

    Shi, Zhenghao; He, Lifeng; Suzuki, Kenji; Nakamura, Tsuyoshi; Itoh, Hidenori

    2010-01-01

    This paper aims to present a review of neural networks used in medical image processing. We classify neural networks by their processing goals and the nature of the medical images. The main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of neural network application to medical image processing and an outlook for future research are also discussed. In this survey, we try to answer the following two important questions: (1) What are the major applications of neural networks in medical image processing now and in the near future? (2) What are the major strengths and weaknesses of applying neural networks to medical image processing tasks? We believe this will be very helpful to researchers who are involved in medical image processing with neural network techniques. PMID:26740861

  14. Building blocks for electronic spiking neural networks.

    PubMed

    van Schaik, A

    2001-01-01

    We present an electronic circuit modelling the spike generation process in the biological neuron. This simple circuit is capable of simulating the spiking behaviour of several different types of biological neurons. At the same time, the circuit is small so that many neurons can be implemented on a single silicon chip. This is important, as neural computation obtains its power not from a single neuron, but from the interaction between a large number of neurons. Circuits that model these interactions are also presented in this paper. They include the circuits for excitatory, inhibitory and shunting inhibitory synapses, a circuit which models the regeneration of spikes on the axon, and a circuit which models the reduction of input strength with the distance of the synapse to the cell body on the dendrite of the cell. Together these building blocks allow the implementation of electronic spiking neural networks.
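
A common software analogue of such a spike-generation circuit is the leaky integrate-and-fire model: the membrane potential integrates input, leaks toward rest, and resets after crossing threshold. The parameters below are illustrative, not taken from the paper's circuit:

```python
import numpy as np

def lif_spikes(current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: integrate the input current and emit a
    spike (recording its time step) whenever the potential hits threshold."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(current):
        v += dt / tau * (-v + i_t)  # leaky integration (Euler step)
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

# Constant supra-threshold drive produces a regular spike train
spike_times = lif_spikes(np.full(1000, 1.5))
print(len(spike_times))
```

With a constant drive the inter-spike interval is fixed, mirroring the regular spiking behaviour such hardware neurons exhibit under steady input.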

  15. Classification of behavior using unsupervised temporal neural networks

    SciTech Connect

    Adair, K.L.; Argo, P.

    1998-03-01

    Adding recurrent connections to unsupervised neural networks used for clustering creates a temporal neural network which clusters a sequence of inputs as they appear over time. The model presented combines the Jordan architecture with the unsupervised learning technique Adaptive Resonance Theory, Fuzzy ART. The combination yields a neural network capable of quickly clustering sequential pattern sequences as the sequences are generated. The applicability of the architecture is illustrated through a facility monitoring problem.

  16. One pass learning for generalized classifier neural network.

    PubMed

    Ozyildirim, Buse Melis; Avci, Mutlu

    2016-01-01

    The generalized classifier neural network, introduced as a kind of radial basis function neural network, uses a gradient-descent-optimized smoothing parameter value to provide efficient classification. However, this optimization consumes quite a long time, which is a drawback. In this work, one-pass learning for the generalized classifier neural network is proposed to overcome this disadvantage. The proposed method utilizes the standard deviation of each class to calculate the corresponding smoothing parameter. Since different datasets may have different standard deviations and data distributions, the proposed method handles these differences by defining two functions for smoothing parameter calculation; thresholding is applied to determine which function will be used. One of these functions is defined for datasets having a wide range of values: it provides balanced smoothing parameters for these datasets through a logarithmic function and by shifting the operation range to the lower boundary. The other function calculates the smoothing parameter value for classes having a standard deviation smaller than the threshold value. The proposed method is tested on 14 datasets, and the performance of the one-pass learning generalized classifier neural network is compared with that of the probabilistic neural network, the radial basis function neural network, extreme learning machines, and the standard and logarithmic learning generalized classifier neural networks in the MATLAB environment. The one-pass learning generalized classifier neural network provides more than a thousand times faster classification than the standard and logarithmic generalized classifier neural networks. Due to its classification accuracy and speed, the one-pass generalized classifier neural network can be considered an efficient alternative to the probabilistic neural network. Test results show that the proposed method overcomes the computational drawback of the generalized classifier neural network and may increase classification performance.
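
The core idea of deriving a per-class smoothing parameter from the class standard deviation can be sketched in a Parzen/PNN-style classifier. The paper's two-function thresholding scheme is omitted, and the toy data and class layout are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two toy 1-D classes with different spreads
class_data = {
    0: rng.normal(loc=0.0, scale=0.5, size=100),
    1: rng.normal(loc=3.0, scale=1.0, size=100),
}

# Smoothing parameter per class taken directly from its standard deviation
# (one pass over the data; no gradient-descent optimization needed)
sigma = {c: x.std() for c, x in class_data.items()}

def classify(x: float) -> int:
    """Mean Gaussian-kernel score per class; the largest score wins."""
    scores = {
        c: np.exp(-((x - pts) ** 2) / (2 * sigma[c] ** 2)).mean()
        for c, pts in class_data.items()
    }
    return max(scores, key=scores.get)

print(classify(0.2), classify(2.9))
```

Because each class's kernel width reflects its own spread, a widely scattered class does not get over-smoothed by a single global parameter.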

  17. Geophysical phenomena classification by artificial neural networks

    NASA Technical Reports Server (NTRS)

    Gough, M. P.; Bruckner, J. R.

    1995-01-01

    Space science information systems involve accessing vast data bases. There is a need for an automatic process by which properties of the whole data set can be assimilated and presented to the user. Where data are in the form of spectrograms, phenomena can be detected by pattern recognition techniques. Presented are the first results obtained by applying unsupervised Artificial Neural Networks (ANN's) to the classification of magnetospheric wave spectra. The networks used here were a simple unsupervised Hamming network run on a PC and a more sophisticated CALM network run on a Sparc workstation. The ANN's were compared in their geophysical data recognition performance. CALM networks offer such qualities as fast learning, superiority in generalizing, the ability to continuously adapt to changes in the pattern set, and the possibility to modularize the network to allow the inter-relation between phenomena and data sets. This work is the first step toward an information system interface being developed at Sussex, the Whole Information System Expert (WISE). Phenomena in the data are automatically identified and provided to the user in the form of a data occurrence morphology, the Whole Information System Data Occurrence Morphology (WISDOM), along with relationships to other parameters and phenomena.
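A Hamming network of the kind mentioned above classifies a binary pattern by finding the stored exemplar at minimal Hamming distance, which for ±1 coding is equivalent to maximal correlation. A minimal sketch follows; the exemplar patterns are invented, and the iterative MAXNET winner-take-all stage is replaced by a plain argmax.

```python
def hamming_classify(probe, exemplars):
    """Score each stored exemplar by its correlation with the probe
    (for ±1 patterns this is equivalent to minimal Hamming distance);
    the MAXNET winner-take-all stage is replaced by a plain argmax."""
    scores = [sum(p * e for p, e in zip(probe, ex)) for ex in exemplars]
    return max(range(len(exemplars)), key=lambda i: scores[i])

# two invented exemplar "spectra" coded as ±1 patterns
exemplars = [[1, 1, 1, -1, -1, -1],
             [-1, -1, -1, 1, 1, 1]]
noisy = [1, 1, -1, -1, -1, -1]        # exemplar 0 with one bit flipped
winner = hamming_classify(noisy, exemplars)
```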

  18. Geophysical phenomena classification by artificial neural networks

    SciTech Connect

    Gough, M.P.; Bruckner, J.R.

    1995-01-01

    Space science information systems involve accessing vast data bases. There is a need for an automatic process by which properties of the whole data set can be assimilated and presented to the user. Where data are in the form of spectrograms, phenomena can be detected by pattern recognition techniques. Presented are the first results obtained by applying unsupervised Artificial Neural Networks (ANN's) to the classification of magnetospheric wave spectra. The networks used here were a simple unsupervised Hamming network run on a PC and a more sophisticated CALM network run on a Sparc workstation. The ANN's were compared in their geophysical data recognition performance. CALM networks offer such qualities as fast learning, superiority in generalizing, the ability to continuously adapt to changes in the pattern set, and the possibility to modularize the network to allow the inter-relation between phenomena and data sets. This work is the first step toward an information system interface being developed at Sussex, the Whole Information System Expert (WISE). Phenomena in the data are automatically identified and provided to the user in the form of a data occurrence morphology, the Whole Information System Data Occurrence Morphology (WISDOM), along with relationships to other parameters and phenomena.

  19. Neural networks for LED color control

    NASA Astrophysics Data System (ADS)

    Ashdown, Ian E.

    2004-01-01

    The design and implementation of an architectural dimming control for multicolor LED-based lighting fixtures is complicated by the need to maintain a consistent color balance under a wide variety of operating conditions. Factors to consider include nonlinear relationships between luminous flux intensity and drive current, junction temperature dependencies, LED manufacturing tolerances and binning parameters, device aging characteristics, variations in color sensor spectral responsivities, and the approximations introduced by linear color space models. In this paper we formulate this problem as a nonlinear multidimensional function, where maintaining a consistent color balance is equivalent to determining the hyperplane representing constant chromaticity. To be useful for an architectural dimming control design, this determination must be made in real time as the lighting fixture intensity is adjusted. Further, the LED drive current must be continuously adjusted in response to color sensor inputs to maintain constant chromaticity for a given intensity setting. Neural networks are known to be universal approximators capable of representing any continuously differentiable bounded function. We therefore use a radial basis function neural network to represent the multidimensional function and provide the feedback signals needed to maintain constant chromaticity. The network can be trained on the factory floor using individual device measurements such as spectral radiant intensity and color sensor characteristics. This provides a flexible solution that is mostly independent of LED manufacturing tolerances and binning parameters.
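A radial basis function network of the sort described evaluates a weighted sum of Gaussian bumps centered on training points. The sketch below is generic; the centers, widths, and weights are illustrative stand-ins, not values from the LED control system.

```python
import math

def rbf_predict(x, centers, widths, weights):
    """RBF network output: a weighted sum of Gaussian basis functions,
    one per center. All parameter values here are illustrative."""
    phi = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                    / (2.0 * s * s))
           for c, s in zip(centers, widths)]
    return sum(w * p for w, p in zip(weights, phi))

centers = [[0.0], [1.0]]
widths = [0.3, 0.3]
weights = [2.0, -1.0]
y_at_first_center = rbf_predict([0.0], centers, widths, weights)
```

At a basis center the local bump dominates and the output is close to that bump's weight; the weights would in practice be fitted to per-device calibration measurements.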

  20. Introduction to spiking neural networks: Information processing, learning and applications.

    PubMed

    Ponulak, Filip; Kasinski, Andrzej

    2011-01-01

    The concept that neural information is encoded in the firing rate of neurons has been the dominant paradigm in neurobiology for many years. This paradigm has also been adopted by the theory of artificial neural networks. Recent physiological experiments demonstrate, however, that in many parts of the nervous system, neural code is founded on the timing of individual action potentials. This finding has given rise to the emergence of a new class of neural models, called spiking neural networks. In this paper we summarize basic properties of spiking neurons and spiking networks. Our focus is, specifically, on models of spike-based information coding, synaptic plasticity and learning. We also survey real-life applications of spiking models. The paper is meant to be an introduction to spiking neural networks for scientists from various disciplines interested in spike-based neural processing.
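The timing-based coding discussed above is usually modeled with spiking units such as the leaky integrate-and-fire neuron; a rough sketch follows, with time constant, threshold, and input values chosen purely for illustration.

```python
def lif_spike_times(input_current, dt=1.0, tau=10.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    toward zero and integrates the input; crossing threshold emits a
    spike and resets the potential. Euler step of dv/dt = -v/tau + I."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)
        if v >= v_thresh:
            spikes.append(t)
            v = 0.0
    return spikes

spikes = lif_spike_times([0.2] * 50)   # constant drive for 50 time steps
```

Under constant drive the neuron fires at regular intervals; information can then be carried by the precise spike times rather than only by the average rate.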

  1. Using Neural Networks to Describe Complex Phase Transformation Behavior

    SciTech Connect

    Vitek, J.M.; David, S.A.

    1999-05-24

    Final microstructures can often be the end result of a complex sequence of phase transformations. Fundamental analyses may be used to model various stages of the overall behavior but they are often impractical or cumbersome when considering multicomponent systems covering a wide range of compositions. Neural network analysis may be a useful alternative method of identifying and describing phase transformation behavior. A neural network model for ferrite prediction in stainless steel welds is described. It is shown that the neural network analysis provides valuable information that accounts for alloying element interactions. It is suggested that neural network analysis may be extremely useful for analysis when more fundamental approaches are unavailable or overly burdensome.

  2. Neural network approach for differential diagnosis of interstitial lung diseases

    NASA Astrophysics Data System (ADS)

    Asada, Naoki; Doi, Kunio; MacMahon, Heber; Montner, Steven M.; Giger, Maryellen L.; Abe, Chihiro; Wu, Chris Y.

    1990-07-01

    A neural network approach was applied for the differential diagnosis of interstitial lung diseases. The neural network was designed for distinguishing between 9 types of interstitial lung diseases based on 20 items of clinical and radiographic information. A database for training and testing the neural network was created with 10 hypothetical cases for each of the 9 diseases. The performance of the neural network was evaluated by ROC analysis. The optimal parameters for the current neural network were determined by selecting those yielding the highest ROC curves. In this case the neural network consisted of one hidden layer including 6 units and was trained with 200 learning iterations. When the decision performances of the neural network, chest radiologists, and senior radiology residents were compared, the neural network indicated high performance, comparable to that of chest radiologists and superior to that of senior radiology residents. Our preliminary results suggested strongly that the neural network approach had potential utility in the computer-aided differential diagnosis of interstitial lung diseases.

  3. Neural network models: Insights and prescriptions from practical applications

    SciTech Connect

    Samad, T.

    1995-12-31

    Neural networks are no longer just a research topic; numerous applications are now testament to their practical utility. In the course of developing these applications, researchers and practitioners have been faced with a variety of issues. This paper briefly discusses several of these, noting in particular the rich connections between neural networks and other, more conventional technologies. A more comprehensive version of this paper is under preparation that will include illustrations on real examples. Neural networks are being applied in several different ways. Our focus here is on neural networks as modeling technology. However, much of the discussion is also relevant to other types of applications such as classification, control, and optimization.

  4. Neural network and its application to CT imaging

    SciTech Connect

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W.

    1997-02-01

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  5. Application of artificial neural networks to composite ply micromechanics

    NASA Technical Reports Server (NTRS)

    Brown, D. A.; Murthy, P. L. N.; Berke, L.

    1991-01-01

    Artificial neural networks can provide improved computational efficiency relative to existing methods when an algorithmic description of functional relationships is either totally unavailable or is complex in nature. For complex calculations, significant reductions in elapsed computation time are possible. The primary goal is to demonstrate the applicability of artificial neural networks to composite material characterization. As a test case, a neural network was trained to accurately predict composite hygral, thermal, and mechanical properties when provided with basic information concerning the environment, constituent materials, and component ratios used in the creation of the composite. A brief introduction on neural networks is provided along with a description of the project itself.

  6. Neural networks and their application to nuclear power plant diagnosis

    SciTech Connect

    Reifman, J.

    1997-10-01

    The authors present a survey of artificial neural network-based computer systems that have been proposed over the last decade for the detection and identification of component faults in thermal-hydraulic systems of nuclear power plants. The capabilities and advantages of applying neural networks as decision support systems for nuclear power plant operators and their inherent characteristics are discussed along with their limitations and drawbacks. The types of neural network structures used and their applications are described and the issues of process diagnosis and neural network-based diagnostic systems are identified. A total of thirty-four publications are reviewed.

  7. Neural network for solving convex quadratic bilevel programming problems.

    PubMed

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie

    2014-03-01

    In this paper, using the idea of successive approximation, we propose a neural network to solve convex quadratic bilevel programming problems (CQBPPs), which is modeled by a nonautonomous differential inclusion. Different from the existing neural network for CQBPP, the model has the least number of state variables and simple structure. Based on the theory of nonsmooth analysis, differential inclusions and Lyapunov-like method, the limit equilibrium points sequence of the proposed neural networks can approximately converge to an optimal solution of CQBPP under certain conditions. Finally, simulation results on two numerical examples and the portfolio selection problem show the effectiveness and performance of the proposed neural network.
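Dynamics-based neural optimization of this kind drives a state vector along a flow whose equilibria are the problem's optima. The sketch below is a much simpler single-level analogue, the gradient flow dx/dt = -(Qx + c) for an unconstrained convex quadratic, not the paper's nonautonomous differential inclusion for bilevel problems.

```python
def qp_neural_dynamics(Q, c, steps=2000, dt=0.01):
    """Gradient-flow 'network' for min 1/2 x'Qx + c'x: integrate
    dx/dt = -(Qx + c) with Euler steps; when Q is positive definite the
    unique equilibrium of the flow is the minimizer."""
    n = len(c)
    x = [0.0] * n
    for _ in range(steps):
        grad = [sum(Q[i][j] * x[j] for j in range(n)) + c[i]
                for i in range(n)]
        x = [xi - dt * g for xi, g in zip(x, grad)]
    return x

# minimize x1^2 + x2^2 - 2*x1 - 4*x2; the minimizer is (1, 2)
x_star = qp_neural_dynamics([[2.0, 0.0], [0.0, 2.0]], [-2.0, -4.0])
```

The bilevel setting replaces this smooth gradient with a set-valued right-hand side (a differential inclusion) so that the lower-level optimality conditions are enforced along the trajectory.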

  8. Bacterial colony counting by Convolutional Neural Networks.

    PubMed

    Ferrari, Alessandro; Lombardi, Stefano; Signoroni, Alberto

    2015-01-01

    Counting bacterial colonies on microbiological culture plates is a time-consuming, error-prone, yet fundamental task in microbiology. Computer vision based approaches can increase the efficiency and the reliability of the process, but accurate counting is challenging due to the high degree of variability of agglomerated colonies. In this paper, we propose a solution which adopts Convolutional Neural Networks (CNN) for counting the number of colonies contained in confluent agglomerates, which scored an overall accuracy of 92.8% on a large challenging dataset. The proposed CNN-based technique for estimating the cardinality of colony aggregates outperforms traditional image processing approaches, making it a promising approach for many related applications.

  9. Experiments in finding neural network weights

    SciTech Connect

    Thomas, T.R.; Brewster, T.L.

    1990-04-01

    This report compares the speed with which back-propagation, conjugate gradient, and Quickprop algorithms find the neural network weights that solve: a small-scale nonlinear classification problem and a mapping from a vector-valued time series to a one-dimensional signal. The problem of tuning each algorithm for these problems (i.e., selecting a good set of control parameters) is also discussed. The most efficient algorithm was found to be an enhanced form of back-propagation using large momentum and learning rates, with weight updates after each pattern presentation and a small constant added to the error derivatives. 3 refs., 6 tabs.
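The best-performing variant above combines three ingredients: per-pattern updates, large momentum and learning rates, and a small constant added to the error derivative so learning does not stall on the sigmoid's flat spots. A sketch for a single sigmoid unit follows; the constants are illustrative, not the report's tuned values.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(w, x, target, velocity, lr=0.5, momentum=0.9, eps=0.1):
    """One per-pattern update for a single sigmoid unit: momentum on the
    weight change, and a small constant eps added to the sigmoid
    derivative so the update never vanishes on the flat spots."""
    y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    delta = (target - y) * (y * (1.0 - y) + eps)
    velocity = [momentum * v + lr * delta * xi
                for v, xi in zip(velocity, x)]
    return [wi + vi for wi, vi in zip(w, velocity)], velocity

w, vel = [0.0, 0.0], [0.0, 0.0]
for _ in range(100):                     # drive the unit toward target 1
    w, vel = train_step(w, [1.0, 1.0], 1.0, vel)
```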

  10. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute based on the H-transform is performed in order to assess the reliability of the NNCTC.

  11. Demonstrations of neural network computations involving students.

    PubMed

    May, Christopher J

    2010-01-01

    David Marr famously proposed three levels of analysis (implementational, algorithmic, and computational) for understanding information processing systems such as the brain. While two of these levels are commonly taught in neuroscience courses (the implementational level through neurophysiology and the computational level through systems/cognitive neuroscience), the algorithmic level is typically neglected. This leaves an explanatory gap in students' understanding of how, for example, the flow of sodium ions enables cognition. Neural networks bridge these two levels by demonstrating how collections of interacting neuron-like units can give rise to more overtly cognitive phenomena. The demonstrations in this paper are intended to facilitate instructors' introduction and exploration of how neurons "process information."
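A classroom demonstration at Marr's algorithmic level can be as small as a single neuron-like unit learning logical AND with the perceptron rule; the sketch below is an invented example in that spirit, not one of the paper's demonstrations.

```python
def perceptron_demo(data, epochs=10, lr=1.0):
    """Single neuron-like unit trained with the perceptron rule: for each
    pattern, nudge the weights by (target - output) * input."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = perceptron_demo(and_data)
```

In a lecture, students can play the role of the weights and update them by hand, which makes the bridge from "flow of ions" to "computation of a logical function" concrete.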

  12. Convolution neural networks for ship type recognition

    NASA Astrophysics Data System (ADS)

    Rainey, Katie; Reeder, John D.; Corelli, Alexander G.

    2016-05-01

    Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to understand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.

  13. Artificial Neural Network applied to lightning flashes

    NASA Astrophysics Data System (ADS)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with the OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: brightness and shape algorithms. These algorithms detect both the shape and brightness of the event, removing irrelevant events like birds, as well as detecting the relevant event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images, and calculates its number of discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images containing 57 lightning events (one image can contain more than one lightning event). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were disregarded, and the events' numbers of discharges were correctly computed. The neural network used in this project achieved a

  14. Resource constrained design of artificial neural networks using comparator neural network

    NASA Technical Reports Server (NTRS)

    Wah, Benjamin W.; Karnik, Tanay S.

    1992-01-01

    We present a systematic design method executed under resource constraints for automating the design of artificial neural networks using the back error propagation algorithm. Our system aims at finding the best possible configuration for solving the given application with proper tradeoff between the training time and the network complexity. The design of such a system is hampered by three related problems. First, there are infinitely many possible network configurations, each may take an exceedingly long time to train; hence, it is impossible to enumerate and train all of them to completion within fixed time, space, and resource constraints. Second, expert knowledge on predicting good network configurations is heuristic in nature and is application dependent, rendering it difficult to characterize fully in the design process. A learning procedure that refines this knowledge based on examples on training neural networks for various applications is, therefore, essential. Third, the objective of the network to be designed is ill-defined, as it is based on a subjective tradeoff between the training time and the network cost. A design process that proposes alternate configurations under different cost-performance tradeoff is important. We have developed a Design System which schedules the available time, divided into quanta, for testing alternative network configurations. Its goal is to select/generate and test alternative network configurations in each quantum, and find the best network when time is expended. Since time is limited, a dynamic schedule that determines the network configuration to be tested in each quantum is developed. The schedule is based on relative comparison of predicted training times of alternative network configurations using comparator network paradigm. The comparator network has been trained to compare training times for a large variety of traces of TSSE-versus-time collected during back-propagation learning of various applications.

  15. Predicate calculus for an architecture of multiple neural networks

    NASA Astrophysics Data System (ADS)

    Consoli, Robert H.

    1990-08-01

    Future projects with neural networks will require multiple individual network components. Current efforts along these lines are ad hoc. This paper relates the neural network to a classical device and derives a multi-part architecture from that model. Further, it provides a Predicate Calculus variant for describing the location and nature of the trainings and suggests Resolution Refutation as a method for determining the performance of the system as well as the location of needed trainings for specific proofs. 2. THE NEURAL NETWORK AND A CLASSICAL DEVICE. Recently investigators have been making reports about architectures of multiple neural networks. These efforts are appearing at an early stage in neural network investigations; they are characterized by architectures suggested directly by the problem space. Touretzky and Hinton suggest an architecture for processing logical statements; the design of this architecture arises from the syntax of a restricted class of logical expressions and exhibits syntactic limitations. In similar fashion, multiple neural networks arise out of a control problem, from the sequence learning problem, and from the domain of machine learning. But a general theory of multiple neural devices is missing. More general attempts to relate single or multiple neural networks to classical computing devices are not common, although an attempt is made to relate single neural devices to a Turing machine, and Sun et al. develop a multiple neural architecture that performs pattern classification.

  16. Phase diagram of spiking neural networks

    PubMed Central

    Seyed-allaei, Hamed

    2015-01-01

    In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, and that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments, observations, and trial and error, but here I take a different perspective, inspired by evolution: I systematically simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable. I stimulate networks with pulses and then measure their dynamic range, the dominant frequency of population activities, the total duration of activities, the maximum population rate, and the occurrence time of the maximum rate. The results are organized in a phase diagram. This phase diagram gives an insight into the space of parameters (excitatory to inhibitory ratio, sparseness of connections, and synaptic weights) and can be used to decide the parameters of a model. The phase diagrams show that networks configured according to the common values have a good dynamic range in response to an impulse, that their dynamic range is robust with respect to synaptic weights, and that for some synaptic weights they oscillate in α or β frequencies, independent of external stimuli. PMID:25788885
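Generating a network with the "common values" named in the abstract (2% connection probability, 20% inhibitory neurons) is straightforward; the sketch below builds such a random weight matrix. The synaptic weight magnitudes and the fixed seed are assumptions, added only for reproducibility.

```python
import random

def make_connectivity(n, p_connect=0.02, inhibitory_fraction=0.2,
                      w_exc=1.0, w_inh=-4.0, seed=0):
    """Random weight matrix with the abstract's common values: each
    directed pair connected with probability p_connect, and the first
    20% of neurons inhibitory. Weight magnitudes and seed are
    illustrative assumptions."""
    rng = random.Random(seed)
    n_inh = int(n * inhibitory_fraction)
    is_inh = [i < n_inh for i in range(n)]
    w = [[0.0] * n for _ in range(n)]
    for pre in range(n):
        for post in range(n):
            if pre != post and rng.random() < p_connect:
                w[post][pre] = w_inh if is_inh[pre] else w_exc
    return w, is_inh

w, is_inh = make_connectivity(200)
```

Sweeping `p_connect`, `inhibitory_fraction`, and the weight magnitudes over a grid of such matrices is exactly the kind of parameter-space exploration the phase diagrams summarize.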

  17. Neural network identifications of spectral signatures

    SciTech Connect

    Gisler, G.; Borel, C.

    1996-02-01

    We have investigated the application of neural nets to the determination of fundamental leaf canopy parameters from synthetic spectra. We describe some preliminary runs in which we separately determine leaf chemistry, leaf structure, leaf area index (LAI), and soil characteristics, and then we perform a simultaneous determination of all these parameters in a single neural network run with synthetic six-band Landsat data. We find that neural nets offer considerable promise in the determination of fundamental parameters of agricultural and environmental interest from broad-band multispectral data. The determination of the quantities of interest is frequently performed with accuracies of 5% or better, though as expected, the accuracy of determination in any one parameter depends to some extent on the value of other parameters, most importantly the leaf area index. Soil characterization, for example, is best done at low LAI, while leaf chemistry is most reliably done at high LAI. We believe that these techniques, particularly when implemented in fast parallel hardware and mounted directly on remote sensing platforms, will be useful for various agricultural and environmental applications.

  18. Neural Networks for Signal Processing and Control

    NASA Astrophysics Data System (ADS)

    Hesselroth, Ted Daniel

    Neural networks are developed for controlling a robot-arm and camera system and for processing images. The networks are based upon computational schemes that may be found in the brain. In the first network, a neural map algorithm is employed to control a five-joint pneumatic robot arm and gripper through feedback from two video cameras. The pneumatically driven robot arm employed shares essential mechanical characteristics with skeletal muscle systems. To control the position of the arm, 200 neurons formed a network representing the three-dimensional workspace embedded in a four-dimensional system of coordinates from the two cameras, and learned a set of pressures corresponding to the end effector positions, as well as a set of Jacobian matrices for interpolating between these positions. Because of the properties of the rubber-tube actuators of the arm, the position as a function of supplied pressure is nonlinear, nonseparable, and exhibits hysteresis. Nevertheless, through the neural network learning algorithm the position could be controlled to an accuracy of about one pixel (~3 mm) after two hundred learning steps. Applications of repeated corrections in each step via the Jacobian matrices leads to a very robust control algorithm since the Jacobians learned by the network have to satisfy the weak requirement that they yield a reduction of the distance between gripper and target. The second network is proposed as a model for the mammalian vision system in which backward connections from the primary visual cortex (V1) to the lateral geniculate nucleus play a key role. The application of hebbian learning to the forward and backward connections causes the formation of receptive fields which are sensitive to edges, bars, and spatial frequencies of preferred orientations. The receptive fields are learned in such a way as to maximize the rate of transfer of information from the LGN to V1. 
Orientational preferences are organized into a feature map in the primary visual cortex.

  19. Spiking modular neural networks: A neural network modeling approach for hydrological processes

    NASA Astrophysics Data System (ADS)

    Parasuraman, Kamban; Elshorbagy, Amin; Carey, Sean K.

    2006-05-01

    Artificial Neural Networks (ANNs) have been widely used for modeling hydrological processes that are embedded with high nonlinearity in both spatial and temporal scales. The input-output functional relationship does not remain the same over the entire modeling domain, varying at different spatial and temporal scales. In this study, a novel neural network model called the spiking modular neural networks (SMNNs) is proposed. An SMNN consists of an input layer, a spiking layer, and an associator neural network layer. The modular nature of the SMNN helps in finding domain-dependent relationships. The performance of the model is evaluated using two distinct case studies. The first case study is that of streamflow modeling, and the second case study involves modeling of eddy covariance-measured evapotranspiration. Two variants of SMNNs were analyzed in this study. The first variant employs a competitive layer as the spiking layer, and the second variant employs a self-organizing map as the spiking layer. The performance of SMNNs is compared to that of a regular feed forward neural network (FFNN) model. Results from the study demonstrate that SMNNs performed better than FFNNs for both the case studies. Results from partitioning analysis reveal that, compared to FFNNs, SMNNs are effective in capturing the dynamics of high flows. In modeling evapotranspiration, it is found that net radiation and ground temperature alone can be used to model the evaporation flux effectively. The SMNNs are shown to be effective in discretizing the complex mapping space into simpler domains that can be learned with relative ease.
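The modular routing idea can be sketched as a competitive layer that assigns each input to the module with the nearest center, which then produces the output. The centers and the linear per-module associators below are illustrative stand-ins for the SMNN's competitive/self-organizing spiking layer and its associator networks.

```python
def nearest_module(x, centers):
    """Competitive layer: the module whose center is closest to the
    input wins the pattern."""
    dists = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in centers]
    return min(range(len(centers)), key=lambda k: dists[k])

def smnn_predict(x, centers, modules):
    """Route the input to the winning module's associator (here a
    simple linear model per module, purely for illustration)."""
    w, b = modules[nearest_module(x, centers)]
    return sum(wi * xi for wi, xi in zip(w, x)) + b

centers = [[0.0], [10.0]]                  # e.g. low-flow vs high-flow regimes
modules = [([2.0], 0.0), ([0.5], 5.0)]     # per-regime linear associators
y_low = smnn_predict([1.0], centers, modules)
y_high = smnn_predict([9.0], centers, modules)
```

Each module only ever sees inputs from its own region of the mapping space, which is how the modular structure simplifies a relationship that varies across the domain.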

  20. Orthogonal patterns in binary neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1988-01-01

    A binary neural network that stores only mutually orthogonal patterns is shown to converge, when probed by any pattern, to a pattern in the memory space, i.e., the space spanned by the stored patterns. The latter are shown to be the only members of the memory space under a certain coding condition, which allows maximum storage of M = (2N)^(1/2) patterns, where N is the number of neurons. The stored patterns are shown to have basins of attraction of radius N/(2M), within which errors are corrected with probability 1 in a single update cycle. When the probe falls outside these regions, the error correction capability can still be increased to 1 by repeatedly running the network with the same probe.
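The storage and retrieval scheme can be sketched with the usual outer-product rule. With mutually orthogonal ±1 patterns, each stored pattern is an exact fixed point, and a probe within Hamming distance N/(2M) of a stored pattern is corrected in a single synchronous update, as the abstract states. The 4-neuron patterns below are an invented example.

```python
def store(patterns):
    """Hebbian outer-product weights, W = (1/N) * sum_p p p^T."""
    n = len(patterns[0])
    return [[sum(p[i] * p[j] for p in patterns) / n for j in range(n)]
            for i in range(n)]

def update(w, state):
    """One synchronous update: state -> sign(W state)."""
    n = len(state)
    return [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

p1, p2 = [1, 1, 1, 1], [1, -1, 1, -1]   # mutually orthogonal; M = 2 <= (2N)^(1/2)
w = store([p1, p2])
recalled = update(w, [-1, 1, 1, 1])     # p1 with one bit flipped
```

Here N = 4 and M = 2, so the basin radius N/(2M) = 1: the one-bit-corrupted probe is restored to p1 in a single update cycle.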

  1. Neural networks for fault location in substations

    SciTech Connect

    Alves da Silva, A.P.; Silveira, P.M. da; Lambert-Torres, G.; Insfran, A.H.F.

    1996-01-01

    Faults producing load disconnections or emergency situations have to be located as soon as possible to start the electric network reconfiguration, restoring normal energy supply. This paper proposes the use of artificial neural networks (ANNs), of the associative memory type, to solve the fault location problem. The main idea is to store measurement sets representing the normal behavior of the protection system, considering the basic substation topology only, into associated memories. Afterwards, these memories are employed on-line for fault location using the protection system equipment status. The associative memories work correctly even in case of malfunction of the protection system and different pre-fault configurations. Although the ANNs are trained with single contingencies only, their generalization capability allows a good performance for multiple contingencies. The resultant fault location system is in operation at the 500 kV gas-insulated substation of the Itaipu system.

  2. A convolutional neural network neutrino event classifier

    DOE PAGES

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  3. A convolutional neural network neutrino event classifier

    NASA Astrophysics Data System (ADS)

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  4. Multisensory integration substantiates distributed and overlapping neural networks.

    PubMed

    Pasqualotto, Achille

    2016-01-01

    The hypothesis that highly overlapping networks underlie brain functions (neural reuse) is decisively supported by three decades of multisensory research. Multisensory areas process information from more than one sensory modality and therefore represent the best examples of neural reuse. Recent evidence of multisensory processing in primary visual cortices further indicates that neural reuse is a basic feature of the brain. PMID:27562234

  5. Energy coding in neural network with inhibitory neurons.

    PubMed

    Wang, Ziyin; Wang, Rubin; Fang, Ruiyan

    2015-04-01

    This paper assessed and compared the effects of inhibitory neurons on the neural energy distribution and the network activity, against the network in the absence of inhibitory neurons, to understand the nature of neural energy distribution and neural energy coding. Under stimulation, synchronous oscillation differs significantly between neural networks with and without inhibitory neurons, and this difference can be quantitatively evaluated by the characteristic energy distribution. In addition, if the network parameters are gradually adjusted, the difference in synchronous oscillation of the neural activity can be quantitatively described by the change in the energy distribution. Compared with the traditional method of correlation coefficient analysis, quantitative indicators based on the characteristics of neural energy distribution are more effective in reflecting the dynamic features of neural network activity. Meanwhile, this coding method, which takes a global perspective on neural activity, effectively avoids the current defects of, and the enormous difficulties encountered by, neural encoding and decoding theory. Our studies have shown that neural energy coding is a new coding theory with high efficiency and great potential.

  6. The Electrophysiological MEMS Device with Micro Channel Array for Cellular Network Analysis

    NASA Astrophysics Data System (ADS)

    Tonomura, Wataru; Kurashima, Toshiaki; Takayama, Yuzo; Moriguchi, Hiroyuki; Jimbo, Yasuhiko; Konishi, Satoshi

    This paper describes a new type of MCA (Micro Channel Array) for simultaneous multipoint measurement of cellular networks. The presented MCA, which employs the measurement principles of the patch-clamp technique, is designed for advanced neural network analysis of the kind previously studied by the co-authors using a 64ch MEA (Micro Electrode Array) system. First of all, sucking and clamping cells through the channels of the developed MCA is expected to improve electrophysiological signal detection. Electrophysiological sensing electrodes, integrated around the individual channels of the MCA using MEMS (Micro Electro Mechanical Systems) technologies, are electrically isolated for simultaneous multipoint measurement. In this study, we tested the developed MCA using non-cultured rat cerebral cortical slices and hippocampal neurons. We could measure the spontaneous action potentials of the slice simultaneously at multiple points and culture the neurons on the developed MCA. Herein, we describe the experimental results together with the design and fabrication of the electrophysiological MEMS device with MCA for cellular network analysis.

  7. Adaptive neural network motion control of manipulators with experimental evaluations.

    PubMed

    Puga-Guzmán, S; Moreno-Valenzuela, J; Santibáñez, V

    2014-01-01

    A nonlinear proportional-derivative controller plus adaptive neuronal network compensation is proposed. With the aim of estimating the desired torque, a two-layer neural network is used. Then, adaptation laws for the neural network weights are derived. Asymptotic convergence of the position and velocity tracking errors is proven, while the neural network weights are shown to be uniformly bounded. The proposed scheme has been experimentally validated in real time. These experimental evaluations were carried out in two different mechanical systems: a horizontal two degrees-of-freedom robot and a vertical one degree-of-freedom arm which is affected by the gravitational force. In each of the two experimental set-ups, the proposed scheme was implemented without and with adaptive neural network compensation. Experimental results confirmed the tracking accuracy of the proposed adaptive neural network-based controller. PMID:24574910

  8. Adaptive Neural Network Motion Control of Manipulators with Experimental Evaluations

    PubMed Central

    Puga-Guzmán, S.; Moreno-Valenzuela, J.; Santibáñez, V.

    2014-01-01

    A nonlinear proportional-derivative controller plus adaptive neuronal network compensation is proposed. With the aim of estimating the desired torque, a two-layer neural network is used. Then, adaptation laws for the neural network weights are derived. Asymptotic convergence of the position and velocity tracking errors is proven, while the neural network weights are shown to be uniformly bounded. The proposed scheme has been experimentally validated in real time. These experimental evaluations were carried out in two different mechanical systems: a horizontal two degrees-of-freedom robot and a vertical one degree-of-freedom arm which is affected by the gravitational force. In each of the two experimental set-ups, the proposed scheme was implemented without and with adaptive neural network compensation. Experimental results confirmed the tracking accuracy of the proposed adaptive neural network-based controller. PMID:24574910

  9. Adaptive neural network motion control of manipulators with experimental evaluations.

    PubMed

    Puga-Guzmán, S; Moreno-Valenzuela, J; Santibáñez, V

    2014-01-01

    A nonlinear proportional-derivative controller plus adaptive neuronal network compensation is proposed. With the aim of estimating the desired torque, a two-layer neural network is used. Then, adaptation laws for the neural network weights are derived. Asymptotic convergence of the position and velocity tracking errors is proven, while the neural network weights are shown to be uniformly bounded. The proposed scheme has been experimentally validated in real time. These experimental evaluations were carried out in two different mechanical systems: a horizontal two degrees-of-freedom robot and a vertical one degree-of-freedom arm which is affected by the gravitational force. In each of the two experimental set-ups, the proposed scheme was implemented without and with adaptive neural network compensation. Experimental results confirmed the tracking accuracy of the proposed adaptive neural network-based controller.

  10. Neural network analysis for hazardous waste characterization

    SciTech Connect

    Misra, M.; Pratt, L.Y.; Farris, C.

    1995-12-31

    This paper is a summary of our work in developing a system for interpreting electromagnetic (EM) and magnetic sensor information from the dig face characterization experimental cell at INEL to determine the depth and nature of buried objects. This project contained three primary components: (1) development and evaluation of several geophysical interpolation schemes for correcting missing or noisy data, (2) development and evaluation of several wavelet compression schemes for removing redundancies from the data, and (3) construction of two neural networks that used the results of steps (1) and (2) to determine the depth and nature of buried objects. This work is a proof-of-concept study that demonstrates the feasibility of this approach. The resulting system was able to determine the nature of buried objects correctly 87% of the time and was able to locate a buried object to within an average error of 0.8 feet. These statistics were gathered based on a large test set and so can be considered reliable. Considering the limited nature of this study, these results strongly indicate the feasibility of this approach, and the importance of appropriate preprocessing of neural network input data.

  11. Damage identification with probabilistic neural networks

    SciTech Connect

    Klenke, S.E.; Paez, T.L.

    1995-12-01

    This paper investigates the use of artificial neural networks (ANNs) to identify damage in mechanical systems. Two probabilistic neural networks (PNNs) are developed and used to judge whether or not damage has occurred in a specific mechanical system, based on experimental measurements. The first PNN is a classical type that casts Bayesian decision analysis into an ANN framework, it uses exemplars measured from the undamaged and damaged system to establish whether system response measurements of unknown origin come from the former class (undamaged) or the latter class (damaged). The second PNN establishes the character of the undamaged system in terms of a kernel density estimator of measures of system response; when presented with system response measures of unknown origin, it makes a probabilistic judgment whether or not the data come from the undamaged population. The physical system used to carry out the experiments is an aerospace system component, and the environment used to excite the system is a stationary random vibration. The results of damage identification experiments are presented along with conclusions rating the effectiveness of the approaches.
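
    The kernel-density idea behind the second PNN can be sketched as follows. The data here are synthetic 2-D response features (hypothetical, not the paper's aerospace measurements), equal class priors are assumed, and a Gaussian Parzen window estimates each class density.

```python
import numpy as np

def pnn_classify(x, exemplars, labels, sigma=0.5):
    """Probabilistic neural network: a Gaussian Parzen-window density is
    estimated per class from the exemplars; the class with the higher
    estimated density at x wins (equal priors assumed)."""
    scores = {}
    for c in np.unique(labels):
        d2 = np.sum((exemplars[labels == c] - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

# Hypothetical 2-D response features: class 0 = "undamaged", class 1 = "damaged".
rng = np.random.default_rng(0)
undamaged = rng.normal(0.0, 0.3, size=(30, 2))
damaged = rng.normal(1.5, 0.3, size=(30, 2))
X = np.vstack([undamaged, damaged])
y = np.array([0] * 30 + [1] * 30)

print(pnn_classify(np.array([0.1, -0.1]), X, y))  # → 0 (near the undamaged cluster)
print(pnn_classify(np.array([1.4, 1.6]), X, y))   # → 1 (near the damaged cluster)
```

Because every exemplar becomes a kernel center, a PNN needs no iterative training; the cost is that classification time grows with the size of the exemplar set.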

  12. Ordinal neural networks without iterative tuning.

    PubMed

    Fernández-Navarro, Francisco; Riccardi, Annalisa; Carloni, Sante

    2014-11-01

    Ordinal regression (OR) is an important branch of supervised learning, in between multiclass classification and regression. In this paper, the traditional classification scheme of neural networks is adapted to learn ordinal ranks. The proposed model imposes monotonicity constraints on the weights connecting the hidden layer with the output layer. To do so, the weights are transcribed using padding variables. This reformulation leads to the so-called inequality constrained least squares (ICLS) problem. Its numerical solution can be obtained by several iterative methods, for example, trust region or line search algorithms. In this proposal, the optimum is determined analytically according to the closed-form solution of the ICLS problem estimated from the Karush-Kuhn-Tucker conditions. Furthermore, following the guidelines of the extreme learning machine framework, the weights connecting the input and the hidden layers are randomly generated, so the final model estimates all its parameters without iterative tuning. The proposed model achieves competitive performance compared with state-of-the-art neural network methods for OR. PMID:25330430
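
    The extreme-learning-machine part of this recipe, random hidden weights plus a closed-form output solve, can be sketched on a toy regression task. This sketch keeps the unconstrained least-squares case; the paper additionally imposes the monotonicity (ICLS) constraints via the KKT conditions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: the target is a smooth function of a 1-D input.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0])

# ELM step 1: input-to-hidden weights are random and never tuned.
H_UNITS = 50
W_in = rng.normal(size=(1, H_UNITS))
b_in = rng.normal(size=H_UNITS)
H = np.tanh(X @ W_in + b_in)               # hidden-layer activations

# ELM step 2: hidden-to-output weights come from one closed-form
# least-squares solve -- no iterative training at all.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ beta
print(float(np.max(np.abs(pred - y))))      # small fit error
```

Replacing the `lstsq` call with a constrained solver over the same hidden activations would give the ordinal variant the abstract describes.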

  13. Boundary Depth Information Using Hopfield Neural Network

    NASA Astrophysics Data System (ADS)

    Xu, Sheng; Wang, Ruisheng

    2016-06-01

    Depth information is widely used for representation, reconstruction and modeling of 3D scenes. Generally, two kinds of methods can obtain depth information. One is to use the distance cues from a depth camera, but the results heavily depend on the device, and the accuracy degrades greatly as the distance to the object increases. The other uses binocular cues from matching; stereo matching methods have become increasingly mature and convenient for collecting the depth information of different scenes. In the objective function, the data term ensures that the difference between matched pixels is small, and the smoothness term smooths neighbors with different disparities. Nonetheless, the smoothness term blurs the boundary depth information of the object, which becomes the bottleneck of stereo matching. This paper proposes a novel energy function for the boundary to keep the discontinuities and uses the Hopfield neural network to solve the optimization. We first extract the regions of interest, which are the boundary pixels in the original images. Then, we develop the boundary energy function to calculate the matching cost. Finally, we solve the optimization globally by the Hopfield neural network. The Middlebury stereo benchmark is used to test the proposed method, and results show that our boundary depth information is more accurate than other state-of-the-art methods and can be used to optimize the results of other stereo matching methods.

  14. A neural network model of harmonic detection

    NASA Astrophysics Data System (ADS)

    Lewis, Clifford F.

    2003-04-01

    Harmonic detection theories postulate that a virtual pitch is perceived when a sufficient number of harmonics is present. The harmonics need not be consecutive, but higher harmonics contribute less than lower harmonics [J. Raatgever and F. A. Bilsen, in Auditory Physiology and Perception, edited by Y. Cazals, K. Horner, and L. Demany (Pergamon, Oxford, 1992), pp. 215-222; M. K. McBeath and J. F. Wayand, Abstracts of the Psychonom. Soc. 3, 55 (1998)]. A neural network model is presented that has the potential to simulate this operation. Harmonics are first passed through a bank of rounded exponential filters with lateral inhibition. The results are used as inputs for an autoassociator neural network. The model is trained using harmonic data for symphonic musical instruments, in order to test whether it can self-organize by learning associations between co-occurring harmonics. It is shown that the trained model can complete the pattern for missing-fundamental sounds. The performance of the model in harmonic detection is compared with experimental results for humans.

  15. Altered Synchronizations among Neural Networks in Geriatric Depression.

    PubMed

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G; Steffens, David C

    2015-01-01

    Although major depression has been considered a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Given the known disconnection theory, geriatric depression could be a useful model for studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data were collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks, as well as the correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of between-network analyses in examining neural models for geriatric depression. PMID:26180795

  16. Altered Synchronizations among Neural Networks in Geriatric Depression.

    PubMed

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G; Steffens, David C

    2015-01-01

    Although major depression has been considered a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Given the known disconnection theory, geriatric depression could be a useful model for studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data were collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks, as well as the correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of between-network analyses in examining neural models for geriatric depression.

  17. Learning and coding in biological neural networks

    NASA Astrophysics Data System (ADS)

    Fiete, Ila Rani

    How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebrafinch song. Simulation and

  18. Extracting insight from noisy cellular networks.

    PubMed

    Landry, Christian R; Levy, Emmanuel D; Abd Rabbo, Diala; Tarassov, Kirill; Michnick, Stephen W

    2013-11-21

    Network biologists attempt to extract meaningful relationships among genes or their products from very noisy data. We argue that what we categorize as noisy data may sometimes reflect noisy biology and therefore may shield a hidden meaning about how networks evolve and how matter is organized in the cell. We present practical solutions, based on existing evolutionary and biophysical concepts, through which our understanding of cell biology can be enormously enriched. PMID:24267884

  19. Extracting insight from noisy cellular networks.

    PubMed

    Landry, Christian R; Levy, Emmanuel D; Abd Rabbo, Diala; Tarassov, Kirill; Michnick, Stephen W

    2013-11-21

    Network biologists attempt to extract meaningful relationships among genes or their products from very noisy data. We argue that what we categorize as noisy data may sometimes reflect noisy biology and therefore may shield a hidden meaning about how networks evolve and how matter is organized in the cell. We present practical solutions, based on existing evolutionary and biophysical concepts, through which our understanding of cell biology can be enormously enriched.

  20. Performance comparison of neural networks for undersea mine detection

    NASA Astrophysics Data System (ADS)

    Toborg, Scott T.; Lussier, Matthew; Rowe, David

    1994-03-01

    This paper describes the design of an undersea mine detection system and compares the performance of various neural network models for classification of features extracted from side-scan sonar images. Techniques for region of interest and statistical feature extraction are described. Subsequent feature analysis verifies the need for neural network processing. Several different neural and conventional pattern classifiers are compared including: k-Nearest Neighbors, Backprop, Quickprop, and LVQ. Results using the Naval Image Database from Coastal Systems Station (Panama City, FL) indicate neural networks have consistently superior performance over conventional classifiers. Concepts for further performance improvements are also discussed including: alternative image preprocessing and classifier fusion.

  1. Syntactic neural network for character recognition

    NASA Astrophysics Data System (ADS)

    Jaravine, Viktor A.

    1992-08-01

    This article presents a synergism of syntactic 2-D parsing of images and multilayered, feed-forward network techniques. This approach makes it possible to build a written text reading system with absolute recognition rate for unambiguous text strings. The Syntactic Neural Network (SNN) is created during the image parsing process by capturing the higher order statistical structure in the ensemble of input image examples. Acquired knowledge is stored in the form of a hierarchical image elements dictionary and a syntactic network. The number of hidden layers and neuron units is not fixed and is determined by the structural complexity of the teaching set. A proposed syntactic neuron differs from a conventional numerical neuron by its symbolic input/output and usage of the dictionary for determining the output. This approach guarantees exact recognition of an image that is a combinatorial variation of the images from the training set. The system is taught to generalize and to make stochastic parsing of distorted and shifted patterns. This generalization enables the system to perform continuous incremental optimization of its work. New image data learned by the SNN doesn't interfere with previously stored knowledge, thus leading to unlimited storage capacity of the network.

  2. Artificial neural networks in predicting current in electric arc furnaces

    NASA Astrophysics Data System (ADS)

    Panoiu, M.; Panoiu, C.; Iordan, A.; Ghiormez, L.

    2014-03-01

    The paper presents a study of the possibility of using artificial neural networks for the prediction of the current and the voltage of Electric Arc Furnaces. Multi-layer perceptron and radial basis function artificial neural networks implemented in Matlab were used. The study is based on measured data from an Electric Arc Furnace in an industrial plant in Romania.

  3. Neural network approach for solving the maximal common subgraph problem.

    PubMed

    Shoukry, A; Aboutabl, M

    1996-01-01

    A new formulation of the maximal common subgraph problem (MCSP), that is implemented using a two-stage Hopfield neural network, is given. Relative merits of this proposed formulation, with respect to current neural network-based solutions as well as classical sequential-search-based solutions, are discussed.

  4. The use of neural networks for approximation of nuclear data

    SciTech Connect

    Korovin, Yu. A.; Maksimushkina, A. V.

    2015-12-15

    The article discusses the possibility of using neural networks for approximation or reconstruction of data such as the reaction cross sections. The quality of the approximation using fitting criteria is also evaluated. The activity of materials under irradiation is calculated from data obtained using neural networks.

  5. Multiple image sensor data fusion through artificial neural networks

    Technology Transfer Automated Retrieval System (TEKTRAN)

    With multisensor data fusion technology, the data from multiple sensors are fused in order to make a more accurate estimation of the environment through measurement, processing and analysis. Artificial neural networks are the computational models that mimic biological neural networks. With high per...

  6. Improved Adjoint-Operator Learning For A Neural Network

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1995-01-01

    Improved method of adjoint-operator learning reduces amount of computation and associated computational memory needed to make electronic neural network learn temporally varying pattern (e.g., to recognize moving object in image) in real time. Method extension of method described in "Adjoint-Operator Learning for a Neural Network" (NPO-18352).

  7. Using Neural Networks to Predict MBA Student Success

    ERIC Educational Resources Information Center

    Naik, Bijayananda; Ragothaman, Srinivasan

    2004-01-01

    Predicting MBA student performance for admission decisions is crucial for educational institutions. This paper evaluates the ability of three different models--neural networks, logit, and probit to predict MBA student performance in graduate programs. The neural network technique was used to classify applicants into successful and marginal student…

  8. A new small-world network created by Cellular Automata

    NASA Astrophysics Data System (ADS)

    Ruan, Yuhong; Li, Anwei

    2016-08-01

    In this paper, we generate small-world networks with a Cellular Automaton, starting from one-dimensional regular networks. Besides the common properties of small-world networks, a small average shortest path length and a large clustering coefficient, the small-world networks generated in this way have other properties: (i) whether the edges cut from the regular network are reconnected can be controlled, and (ii) the number of edges of the small-world network model equals the number of edges of the original regular network. In other words, the average degree of the small-world network model equals the average degree of the original regular network.
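
    The edge-count-preserving property can be illustrated with a simple rewiring sketch (a Watts-Strogatz-style stand-in for the paper's Cellular Automaton rule): every edge that is cut from a ring lattice is immediately reconnected elsewhere, so the total edge count, and hence the average degree, never changes.

```python
import random

def ring_lattice(n, k):
    """One-dimensional regular network: each node links to its k nearest
    neighbours on each side, giving degree 2k."""
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            edges.add(frozenset((i, (i + j) % n)))
    return edges

def rewire(edges, n, p, seed=42):
    """Cut each edge with probability p, but always reconnect one endpoint
    to a fresh random node, preserving the total number of edges."""
    rng = random.Random(seed)
    result = set(edges)
    for e in list(edges):
        if rng.random() < p:
            u, _ = tuple(e)
            result.discard(e)
            while True:
                w = rng.randrange(n)
                new = frozenset((u, w))
                if w != u and new not in result:
                    result.add(new)
                    break
    return result

regular = ring_lattice(100, 3)            # 100 nodes, degree 6, 300 edges
small_world = rewire(regular, 100, p=0.1)
print(len(regular), len(small_world))     # → 300 300
```

The shortcut edges introduced by rewiring are what shrink the average shortest path while most of the local clustering of the lattice survives.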

  9. Optimal Prediction by Cellular Signaling Networks

    NASA Astrophysics Data System (ADS)

    Becker, Nils B.; Mugler, Andrew; ten Wolde, Pieter Rein

    2015-12-01

    Living cells can enhance their fitness by anticipating environmental change. We study how accurately linear signaling networks in cells can predict future signals. We find that maximal predictive power results from a combination of input-noise suppression, linear extrapolation, and selective readout of correlated past signal values. Single-layer networks generate exponential response kernels, which suffice to predict Markovian signals optimally. Multilayer networks allow oscillatory kernels that can optimally predict non-Markovian signals. At low noise, these kernels exploit the signal derivative for extrapolation, while at high noise, they capitalize on signal values in the past that are strongly correlated with the future signal. We show how the common motifs of negative feedback and incoherent feed-forward can implement these optimal response functions. Simulations reveal that E. coli can reliably predict concentration changes for chemotaxis, and that the integration time of its response kernel arises from a trade-off between rapid response and noise suppression.
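
    The claim that exponential kernels suffice for Markovian signals can be checked numerically in a stripped-down setting (an Ornstein-Uhlenbeck input and a linear predictor; the simulation parameters are illustrative, not from the paper): for a Markovian signal, the best linear forecast of the future needs only the current value, scaled by the signal's own correlation decay.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ornstein-Uhlenbeck input: a Markovian signal with correlation time tau_c
# and unit stationary variance (to leading order in dt).
dt, tau_c, T = 0.01, 1.0, 200_000
a, b = 1 - dt / tau_c, np.sqrt(2 * dt / tau_c)
xi = rng.normal(size=T)
s = np.zeros(T)
for t in range(1, T):
    s[t] = a * s[t - 1] + b * xi[t]

# Optimal linear prediction of s(t + tau) from s(t): scale by the
# autocorrelation decay exp(-tau / tau_c).
tau = 0.5
lag = int(tau / dt)
prediction = np.exp(-tau / tau_c) * s[:-lag]
mse_pred = np.mean((s[lag:] - prediction) ** 2)
mse_last = np.mean((s[lag:] - s[:-lag]) ** 2)   # naive "last value" forecast
print(mse_pred < mse_last)                      # → True
```

Extending the predictor to weighted sums of past values (the multilayer, oscillatory kernels of the abstract) only pays off once the signal is non-Markovian.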

  10. Thermoelastic steam turbine rotor control based on neural network

    NASA Astrophysics Data System (ADS)

    Rzadkowski, Romuald; Dominiczak, Krzysztof; Radulski, Wojciech; Szczepanik, R.

    2015-12-01

    Considered here are Nonlinear Auto-Regressive neural networks with eXogenous inputs (NARX) as a mathematical model of a steam turbine rotor for controlling steam turbine stress on-line. In order to obtain neural networks that locate critical stress and temperature points in the steam turbine during transient states, an FE rotor model was built. This model was used to train the neural networks on the basis of steam turbine transient operating data. The training included nonlinearity related to steam turbine expansion, heat exchange and rotor material properties during transients. Such neural networks are algorithms that can be implemented on PLC controllers, which allows the application of neural networks to control steam turbine stress in industrial power plants.

  11. Artificial neural networks: theoretical background and pharmaceutical applications: a review.

    PubMed

    Wesolowski, Marek; Suchacz, Bogdan

    2012-01-01

    In recent times, there has been a growing interest in artificial neural networks, which are a rough simulation of the information processing ability of the human brain, as modern and vastly sophisticated computational techniques. This interest has also been reflected in the pharmaceutical sciences. This paper presents a review of articles on the subject of the application of neural networks as effective tools assisting the solution of various problems in science and the pharmaceutical industry, especially those characterized by multivariate and nonlinear dependencies. After a short description of theoretical background and practical basics concerning the computations performed by means of neural networks, the most important pharmaceutical applications of neural networks, with suitable references, are demonstrated. The huge role played by neural networks in pharmaceutical analysis, pharmaceutical technology, and searching for the relationships between the chemical structure and the properties of newly synthesized compounds as candidates for drugs is discussed.

  12. Identification of power system load dynamics using artificial neural networks

    SciTech Connect

    Bostanci, M.; Koplowitz, J.; Taylor, C.W. |

    1997-11-01

    Power system loads are important for planning and operation of an electric power system. Load characteristics can significantly influence the results of synchronous stability and voltage stability studies. This paper presents a methodology for identification of power system load dynamics using neural networks. Input-output data of a power system dynamic load is used to design a neural network model which comprises delayed inputs and feedback connections. The developed neural network model can predict the future power system dynamic load behavior for arbitrary inputs. In particular, a third-order induction motor load neural network model is developed to verify the methodology. Neural network simulation results are illustrated and compared with the induction motor load response.
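
    The regressor structure of such a model, delayed inputs plus fed-back delayed outputs, can be sketched on a toy first-order "load" (hypothetical dynamics, not the paper's induction-motor model). Here the model is linear-in-parameters, so it can be fitted in closed form; the cited work trains a nonlinear neural map over the same kind of regressor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated dynamic load: the output depends on its own past value and a
# delayed input (illustrative first-order dynamics).
T = 500
u = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + 0.2 * u[t - 1]

# Regressor with one delayed output (feedback connection) and one delayed
# input; least squares recovers the true coefficients exactly here.
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(np.round(theta, 3))                # → [0.8 0.2]
```

Once fitted, the model predicts future load behavior for arbitrary inputs by feeding its own predictions back into the delayed-output slot.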

  13. A novel neural network for nonlinear convex programming.

    PubMed

    Gao, Xing-Bao

    2004-05-01

    In this paper, we present a neural network for solving the nonlinear convex programming problem in real time by means of the projection method. The main idea is to convert the convex programming problem into a variational inequality problem. A dynamical system and a convex energy function are then constructed for the resulting variational inequality problem. It is shown that the proposed neural network is stable in the sense of Lyapunov and can converge to an exact optimal solution of the original problem. Compared with existing neural networks for solving the nonlinear convex programming problem, the proposed neural network requires no Lipschitz condition and no adjustable parameters, and its structure is simple. The validity and transient behavior of the proposed neural network are demonstrated by simulation results.
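
    A minimal sketch of the projection-method idea: the network state follows dx/dt = P_Omega(x - alpha * grad f(x)) - x, where P_Omega projects onto the feasible set. The quadratic objective, box constraints, and step sizes below are illustrative assumptions, not the paper's examples.

```python
# Projection-type neural network for convex programming, integrated by Euler
# steps of dx/dt = P_Omega(x - alpha*grad f(x)) - x (box feasible set here).

def grad(x):
    # f(x) = (x0 - 2)^2 + (x1 + 1)^2, a smooth convex objective
    return [2 * (x[0] - 2), 2 * (x[1] + 1)]

def project(x, lo=0.0, hi=1.0):
    # projection onto the box [lo, hi]^2 is componentwise clipping
    return [min(max(xi, lo), hi) for xi in x]

x = [0.5, 0.5]
alpha, dt = 0.5, 0.1
for _ in range(500):                      # Euler integration of the dynamics
    g = grad(x)
    target = project([x[i] - alpha * g[i] for i in range(2)])
    x = [x[i] + dt * (target[i] - x[i]) for i in range(2)]

print(x)  # approaches the constrained optimum (1, 0)
```

    The equilibrium of these dynamics satisfies x = P_Omega(x - alpha*grad f(x)), which is exactly the variational-inequality characterization of the constrained optimum.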

  14. Neural network for constrained nonsmooth optimization using Tikhonov regularization.

    PubMed

    Qin, Sitian; Fan, Dejun; Wu, Guangxi; Zhao, Lijun

    2015-03-01

    This paper presents a one-layer neural network to solve nonsmooth convex optimization problems based on the Tikhonov regularization method. First, it is shown that the optimal solution of the original problem can be approximated by the optimal solution of a strongly convex optimization problem. Then, it is proved that for any initial point, the state of the proposed neural network enters the equality-feasible region in finite time and is globally convergent to the unique optimal solution of the related strongly convex optimization problem. Compared with existing neural networks, the proposed neural network has lower model complexity and does not need penalty parameters. Finally, some numerical examples and an application are given to illustrate the effectiveness and improvement of the proposed neural network.
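
    The core Tikhonov idea can be sketched in one dimension: a nonsmooth convex objective f(x) = |x - 3| is replaced by the strongly convex f(x) + (eps/2)x^2, and a subgradient flow of the regularized objective is integrated. The objective and constants are illustrative assumptions; the paper's network additionally handles equality constraints.

```python
# Subgradient flow of the Tikhonov-regularized objective |x - 3| + (eps/2)x^2.
# As eps shrinks, the regularized minimizer approaches the true minimizer 3.

def subgrad(x):
    # a subgradient of |x - 3|; any value in [-1, 1] is valid at x = 3
    return 1.0 if x > 3 else (-1.0 if x < 3 else 0.0)

def solve(eps, x=0.0, dt=0.01, steps=20000):
    for _ in range(steps):
        x -= dt * (subgrad(x) + eps * x)   # flow of the regularized objective
    return x

for eps in (1.0, 0.1, 0.01):
    print(eps, solve(eps))
```

    For eps = 1.0 the regularized minimizer sits at x = 1 (where -1 + eps*x = 0); once eps drops below 1/3 the minimizer reaches 3, illustrating the approximation result the abstract states.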

  15. Combined optical tweezers and laser dissector for controlled ablation of functional connections in neural networks

    NASA Astrophysics Data System (ADS)

    Difato, Francesco; Dal Maschio, Marco; Marconi, Emanuele; Ronzitti, Giuseppe; Maccione, Alessandro; Fellin, Tommasso; Berdondini, Luca; Chieregatti, Evelina; Benfenati, Fabio; Blau, Axel

    2011-05-01

    Regeneration of functional connectivity within a neural network after different degrees of lesion is of utmost clinical importance. To test pharmacological approaches aimed at recovering from a total or partial damage of neuronal connections within a circuit, it is necessary to develop a precise method for controlled ablation of neuronal processes. We combined a UV laser microdissector to ablate neural processes in vitro at single neuron and neural network level with infrared holographic optical tweezers to carry out force spectroscopy measurements. Simultaneous force spectroscopy, down to the sub-pico-Newton range, was performed during laser dissection to quantify the tension release in a partially ablated neurite. Therefore, we could control and measure the damage inflicted to an individual neuronal process. To characterize the effect of the inflicted injury on network level, changes in activity of neural subpopulations were monitored with subcellular resolution and overall network activity with high temporal resolution by concurrent calcium imaging and microelectrode array recording. Neuronal connections have been sequentially ablated and the correlated changes in network activity traced and mapped. With this unique combination of electrophysiological and optical tools, neural activity can be studied and quantified in response to controlled injury at the subcellular, cellular, and network level.

  16. Quantum neural networks: Current status and prospects for development

    NASA Astrophysics Data System (ADS)

    Altaisky, M. V.; Kaputkina, N. E.; Krylov, V. A.

    2014-11-01

    The idea of quantum artificial neural networks, first formulated in [34], unites the artificial neural network concept with the quantum computation paradigm. Quantum artificial neural networks were first systematically considered in the PhD thesis by T. Menneer (1998). Based on the works of Menneer and Narayanan [42, 43], Kouda, Matsui, and Nishimura [35, 36], Altaisky [2, 68], Zhou [67], and others, quantum-inspired learning algorithms for neural networks were developed, and are now used in various training programs and computer games [29, 30]. The first practically realizable scaled hardware-implemented model of the quantum artificial neural network is obtained by D-Wave Systems, Inc. [33]. It is a quantum Hopfield network implemented on the basis of superconducting quantum interference devices (SQUIDs). In this work we analyze possibilities and underlying principles of an alternative way to implement quantum neural networks on the basis of quantum dots. A possibility of using quantum neural network algorithms in automated control systems, associative memory devices, and in modeling biological and social networks is examined.

  17. Integration of Optical Manipulation and Electrophysiological Tools to Modulate and Record Activity in Neural Networks

    NASA Astrophysics Data System (ADS)

    Difato, F.; Schibalsky, L.; Benfenati, F.; Blau, A.

    2011-07-01

    We present an optical system that combines IR (1064 nm) holographic optical tweezers with a sub-nanosecond-pulsed UV (355 nm) laser microdissector for the optical manipulation of single neurons and entire networks both on transparent and non-transparent substrates in vitro. The phase-modulated laser beam can illuminate the sample concurrently or independently from above or below assuring compatibility with different types of microelectrode array and patch-clamp electrophysiology. By combining electrophysiological and optical tools, neural activity in response to localized stimuli or injury can be studied and quantified at sub-cellular, cellular, and network level.

  18. Network, cellular, and molecular mechanisms underlying long-term memory formation.

    PubMed

    Carasatorre, Mariana; Ramírez-Amaya, Víctor

    2013-01-01

    The neural network stores information through activity-dependent synaptic plasticity that occurs in populations of neurons. Persistent forms of synaptic plasticity may account for long-term memory storage, and the most salient forms are changes in the structure of synapses. Theory proposes that encoding should use a sparse code, and evidence suggests that this can be achieved through offline reactivation or by sparse initial recruitment of the network units. This idea implies that in some cases the neurons that underwent structural synaptic plasticity might be a subpopulation of those originally recruited; however, it is not yet clear whether all the neurons recruited during acquisition are the ones that underwent persistent forms of synaptic plasticity and are responsible for memory retrieval. To determine which neural units underlie long-term memory storage, we need to characterize the persistent forms of synaptic plasticity occurring in these neural ensembles, and the best hints so far are the molecular signals underlying structural modifications of the synapses. Structural synaptic plasticity can be achieved by the activity of various signal transduction pathways, including NMDA-CaMKII and ACh-MAPK. These pathways converge on the Rho family of GTPases and the consequent ERK 1/2 activation, which regulates multiple cellular functions such as protein translation, protein trafficking, and gene transcription. The most detailed explanation may come from models that allow us to determine the contribution of each piece of this fascinating puzzle that is the neuron and the neural network. PMID:22976275

  20. Personal communication in traditional cellular networks

    NASA Astrophysics Data System (ADS)

    Neuer, Ellwood I.

    1996-01-01

    The purpose of this paper is to describe the flow of calls through the mobile network as it applies to the operation of Basic and Enhanced Services. Included in the discussion are the overall network layout, the physical connections between the network entities, and the signaling protocols which allow the entities to be integrated. The specific functionality of the applications and services is not detailed, as the specific implementation varies from vendor to vendor and from service provider to service provider. The Enhanced Services Platform is installed in a service provider's network in order to offer mobile subscribers services and applications which would otherwise not be available. The service provider's objectives are to increase revenue per subscriber, increase subscriber loyalty and decrease churn, and build competitive advantages through differentiation. The services provided on the Enhanced Services Platform can be viewed as either Basic or Enhanced. For the purpose of this paper, Basic Services refers to Numeric Paging, Call Answering, and Voice Messaging, while Enhanced Services refers to FAX Messaging, One Number Service, Voice Dialing and other Voice Recognition applications, Information Services including FAX on Demand, and Automated Call Routing.

  1. Application of artificial neural networks in nonlinear analysis of trusses

    NASA Technical Reports Server (NTRS)

    Alam, J.; Berke, L.

    1991-01-01

    A method is developed to incorporate a neural network model, based upon the backpropagation algorithm, for material response into nonlinear elastic truss analysis using the initial stiffness method. Different network configurations are developed to assess the accuracy of neural network modeling of nonlinear material response. In addition, a scheme based upon linear interpolation of material data is also implemented for comparison purposes. It is found that the neural network approach can yield very accurate results if used with care. For the type of problems under consideration, it offers a viable alternative to other material modeling methods.
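
    A hedged sketch of the material-modeling step: a small backpropagation network fitted to a synthetic nonlinear stress-strain curve stands in for the material-response model the abstract describes. The architecture, training data, and learning rate are illustrative assumptions, not the paper's configuration.

```python
# One-hidden-layer backpropagation network fitted to a saturating
# stress-strain law (both quantities normalised to [0, 1]).
import math, random

random.seed(0)
H = 8                                  # hidden units
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    return sum(w2[i] * h[i] for i in range(H)) + b2, h

# Synthetic material law: stress saturates with strain
data = [(e / 20.0, math.tanh(3.0 * e / 20.0)) for e in range(21)]

def mse():
    return sum((forward(x)[0] - y) ** 2 for x, y in data) / len(data)

lr, loss0 = 0.05, mse()
for _ in range(5000):                  # plain stochastic gradient descent
    x, y = random.choice(data)
    out, h = forward(x)
    err = out - y
    for i in range(H):
        grad_h = err * w2[i] * (1 - h[i] ** 2)   # backpropagated error
        w2[i] -= lr * err * h[i]
        w1[i] -= lr * grad_h * x
        b1[i] -= lr * grad_h
    b2 -= lr * err

print(loss0, mse())  # training reduces the fitting error
```

    In the analysis loop, such a fitted network would be queried in place of a tabulated material curve, which is where the comparison with linear interpolation in the abstract comes in.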

  2. Neural network classification of sweet potato embryos

    NASA Astrophysics Data System (ADS)

    Molto, Enrique; Harrell, Roy C.

    1993-05-01

    Somatic embryogenesis is a process that allows for the in vitro propagation of thousands of plants in sub-liter size vessels and has been successfully applied to many significant species. The heterogeneity of maturity and quality of embryos produced with this technique requires sorting to obtain a uniform product. An automated harvester is being developed at the University of Florida to sort embryos in vitro at different stages of maturation in a suspension culture. The system utilizes machine vision to characterize embryo morphology and a fluidic based separation device to isolate embryos associated with a pre-defined, targeted morphology. Two different backpropagation neural networks (BNN) were used to classify embryos based on information extracted from the vision system. One network utilized geometric features such as embryo area, length, and symmetry as inputs. The alternative network utilized polar coordinates of an embryo's perimeter with respect to its centroid as inputs. The performances of both techniques were compared with each other and with an embryo classification method based on linear discriminant analysis (LDA). Similar results were obtained with all three techniques. Classification efficiency was improved by reducing the dimension of the feature vector through a forward stepwise analysis by LDA. In order to enhance the purity of the sample selected as harvestable, a reject-to-classify option was introduced in the model and analyzed. The best classifier performances (76% overall correct classifications, 75% harvestable objects properly classified, homogeneity improvement ratio 1.5) were obtained using 8 features in a BNN.

  3. Antagonistic neural networks underlying differentiated leadership roles

    PubMed Central

    Boyatzis, Richard E.; Rochford, Kylie; Jack, Anthony I.

    2014-01-01

    The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task-oriented and socio-emotional-oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks – the task-positive network (TPN) and the default mode network (DMN). Neural activity in TPN tends to inhibit activity in the DMN, and vice versa. The TPN is important for problem solving, focusing of attention, making decisions, and control of action. The DMN plays a central role in emotional self-awareness, social cognition, and ethical decision making. It is also strongly linked to creativity and openness to new ideas. Because activation of the TPN tends to suppress activity in the DMN, an over-emphasis on task-oriented leadership may prove deleterious to social and emotional aspects of leadership. Similarly, an overemphasis on the DMN would result in difficulty focusing attention, making decisions, and solving known problems. In this paper, we will review major streams of theory and research on leadership roles in the context of recent findings from neuroscience and psychology. We conclude by suggesting that emerging research challenges the assumption that role differentiation is both natural and necessary, in particular when openness to new ideas, people, emotions, and ethical concerns are important to success. PMID:24624074

  4. Neural networks for identifying drunk persons using thermal infrared imagery.

    PubMed

    Koukiou, Georgia; Anastassopoulos, Vassilis

    2015-07-01

    Neural networks were tested on infrared images of faces for discriminating intoxicated persons. The images were acquired during controlled alcohol consumption by forty-one persons. Two different experimental approaches were thoroughly investigated. In the first one, each face was examined, location by location, using each time a different neural network, in order to find out those regions that can be used for discriminating a drunk from a sober person. It was found that it was mainly the face forehead that changed thermal behaviour with alcohol consumption. In the second procedure, a single neural structure was trained on the whole face. The discrimination performance of this neural structure was tested on the same face, as well as on unknown faces. The neural networks presented high discrimination performance even on unknown persons, when trained on the forehead of the sober and the drunk person, respectively. Small neural structures presented better generalisation performance.

  5. Deep Neural Networks with Multistate Activation Functions

    PubMed Central

    Cai, Chenghao; Xu, Yanyan; Ke, Dengfeng; Su, Kaile

    2015-01-01

    We propose multistate activation functions (MSAFs) for deep neural networks (DNNs). These MSAFs are new kinds of activation functions which are capable of representing more than two states, including the N-order MSAFs and the symmetrical MSAF. DNNs with these MSAFs can be trained via conventional Stochastic Gradient Descent (SGD) as well as mean-normalised SGD. We also discuss how these MSAFs perform when used to resolve classification problems. Experimental results on the TIMIT corpus reveal that, on speech recognition tasks, DNNs with MSAFs perform better than the conventional DNNs, getting a relative improvement of 5.60% on phoneme error rates. Further experiments also reveal that mean-normalised SGD facilitates the training processes of DNNs with MSAFs, especially when being with large training sets. The models can also be directly trained without pretraining when the training set is sufficiently large, which results in a considerable relative improvement of 5.82% on word error rates. PMID:26448739
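
    One common way to realise an activation with N + 1 plateau states, in the spirit of the N-order MSAFs described above, is a sum of shifted logistic sigmoids. The exact functional form used in the paper may differ; this construction and its constants are illustrative assumptions.

```python
# N-order multistate activation built from shifted logistic sigmoids:
# the result has plateaus near 0, 1, ..., order.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def msaf(x, order=3, spacing=4.0):
    # each shifted sigmoid contributes one unit step, spaced along the input axis
    return sum(sigmoid(x - k * spacing) for k in range(order))

print(msaf(-20), msaf(2), msaf(20))  # bottom plateau, middle state, top plateau
```

    Because each term is differentiable, such an activation drops straight into SGD training, consistent with the abstract's use of conventional and mean-normalised SGD.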

  6. Adaptive pattern recognition and neural networks

    SciTech Connect

    Pao, Yohhan.

    1989-01-01

    The application of neural-network computers to pattern-recognition tasks is discussed in an introduction for advanced students. Chapters are devoted to the nature of the pattern-recognition task, the Bayesian approach to the estimation of class membership, the fuzzy-set approach, patterns with nonnumeric feature values, learning discriminants and the generalized perceptron, recognition and recall on the basis of partial cues, associative memories, self-organizing nets, the functional-link net, fuzzy logic in the linking of symbolic and subsymbolic processing, and adaptive pattern recognition and its applications. Also included are C-language programs for (1) a generalized delta-rule net for supervised learning and (2) unsupervised learning based on the discovery of clustered structure. 183 refs.

  7. Artificial neural network for multifunctional areas.

    PubMed

    Riccioli, Francesco; El Asmar, Toufic; El Asmar, Jean-Pierre; Fagarazzi, Claudio; Casini, Leonardo

    2016-01-01

    The issues related to appropriate planning of the territory are particularly pronounced in highly inhabited (urban) areas, where in addition to protecting the environment it is important to consider anthropogenic (urban) development in the context of sustainable growth. This work aims at mathematically simulating changes in land use by implementing an artificial neural network (ANN) model. More specifically, it analyzes how the increase of urban areas will develop and whether this development will impact areas with particular socioeconomic and environmental value, defined as multifunctional areas. The simulation is applied to the Chianti Area, located in the province of Florence, Italy. Chianti is an area with a unique landscape, and its territorial planning requires careful examination of the territory in which it is inserted. PMID:26718948

  8. Neural network training as a dissipative process.

    PubMed

    Gori, Marco; Maggini, Marco; Rossi, Alessandro

    2016-09-01

    This paper analyzes the practical issues and reports some results on a theory in which learning is modeled as a continuous temporal process driven by laws describing the interactions of intelligent agents with their own environment. The classic regularization framework is paired with the idea of temporal manifolds by introducing the principle of least cognitive action, which is inspired by the related principle of mechanics. The introduction of the counterparts of the kinetic and potential energy leads to an interpretation of learning as a dissipative process. As an example, we apply the theory to supervised learning in neural networks and show that the corresponding Euler-Lagrange differential equations can be connected to the classic gradient descent algorithm on the supervised pairs. We give preliminary experiments to confirm the soundness of the theory. PMID:27389569
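
    The "learning as a dissipative process" reading can be illustrated with discretised gradient flow on a potential (the loss): each step dissipates the energy term, much as friction dissipates mechanical energy. The quadratic potential and step size below are illustrative assumptions, not the paper's least-cognitive-action formulation.

```python
# Euler steps of dw/dt = -grad V(w): the potential ("energy") decreases
# monotonically along the trajectory, i.e. the dynamics are dissipative.

def potential(w):
    return (w[0] - 1.0) ** 2 + 2.0 * (w[1] + 0.5) ** 2

def grad(w):
    return [2.0 * (w[0] - 1.0), 4.0 * (w[1] + 0.5)]

w, dt = [3.0, 2.0], 0.05
energies = [potential(w)]
for _ in range(100):
    g = grad(w)
    w = [w[i] - dt * g[i] for i in range(2)]
    energies.append(potential(w))

print(energies[0], energies[-1])  # energy only decreases along the trajectory
```

    The connection the abstract draws is that the Euler-Lagrange equations of the learning action reduce, in this dissipative limit, to exactly this kind of gradient descent on the supervised pairs.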

  9. Demonstrations of Neural Network Computations Involving Students

    PubMed Central

    May, Christopher J.

    2010-01-01

    David Marr famously proposed three levels of analysis (implementational, algorithmic, and computational) for understanding information processing systems such as the brain. While two of these levels are commonly taught in neuroscience courses (the implementational level through neurophysiology and the computational level through systems/cognitive neuroscience), the algorithmic level is typically neglected. This leaves an explanatory gap in students’ understanding of how, for example, the flow of sodium ions enables cognition. Neural networks bridge these two levels by demonstrating how collections of interacting neuron-like units can give rise to more overtly cognitive phenomena. The demonstrations in this paper are intended to facilitate instructors’ introduction and exploration of how neurons “process information.” PMID:23493501

  10. Delayed switching applied to memristor neural networks

    SciTech Connect

    Wang, Frank Z.; Yang Xiao; Lim Guan; Helian Na; Wu Sining; Guo Yike; Rashid, Md Mamunur

    2012-04-01

    Magnetic flux and electric charge are linked in a memristor. We reported recently that a memristor has a peculiar effect in which the switching takes place with a time delay because a memristor possesses a certain inertia. This effect was named the "delayed switching effect." In this work, we elaborate on the importance of delayed switching in a brain-like computer using memristor neural networks. The effect is used to control the switching of a memristor synapse between two neurons that fire together (the Hebbian rule). A theoretical formula is found, and the design is verified by a simulation. We have also built an experimental setup consisting of electronic memristive synapses and electronic neurons.
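
    A minimal sketch of the delayed switching effect: a flux-controlled memristive switch changes state only after the integrated voltage (the flux) crosses a threshold, so the switch fires after the applied pulse begins, not with it. The threshold and waveform values are illustrative assumptions, not the paper's device parameters.

```python
# Flux-threshold model of delayed switching: the memristor integrates the
# applied voltage, and the state flips only once the flux reaches a threshold.

dt, flux, threshold = 0.001, 0.0, 0.05
state, switch_time = 0, None
for step in range(1000):
    t = step * dt
    v = 1.0 if 0.1 <= t < 0.5 else 0.0   # voltage pulse starting at t = 0.1 s
    flux += v * dt                        # flux is the time integral of voltage
    if state == 0 and flux >= threshold:
        state, switch_time = 1, t         # switching lags the pulse onset

print(switch_time)  # later than the pulse onset at t = 0.1 s
```

    This lag is what lets coincident pre- and post-synaptic pulses (whose fluxes add) cross the threshold while an isolated pulse does not, which is how the abstract ties the effect to the Hebbian rule.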

  11. Galaxies, human eyes, and artificial neural networks.

    PubMed

    Lahav, O; Naim, A; Buta, R J; Corwin, H G; de Vaucouleurs, G; Dressler, A; Huchra, J P; van den Bergh, S; Raychaudhury, S; Sodré, L; Storrie-Lombardi, M C

    1995-02-10

    The quantitative morphological classification of galaxies is important for understanding the origin of type frequency and correlations with environment. However, galaxy morphological classification is still mainly done visually by dedicated individuals, in the spirit of Hubble's original scheme and its modifications. The rapid increase in data on galaxy images at low and high redshift calls for a re-examination of the classification schemes and for automatic methods. We show results from a systematic comparison of the dispersion among human experts classifying a uniformly selected sample of more than 800 digitized galaxy images, which six of the authors classified independently. The human classifications are compared with each other and with an automatic classification by an artificial neural network, which replicates the classification by a human expert to the same degree of agreement as that between two human experts. PMID:17813914

  12. From the neuron doctrine to neural networks.

    PubMed

    Yuste, Rafael

    2015-08-01

    For over a century, the neuron doctrine, which states that the neuron is the structural and functional unit of the nervous system, has provided a conceptual foundation for neuroscience. This viewpoint reflects its origins in a time when the use of single-neuron anatomical and physiological techniques was prominent. However, newer multineuronal recording methods have revealed that ensembles of neurons, rather than individual cells, can form physiological units and generate emergent functional properties and states. As a new paradigm for neuroscience, neural network models have the potential to incorporate knowledge acquired with single-neuron approaches to help us understand how emergent functional states generate behaviour, cognition and mental disease. PMID:26152865

  13. Neural network based short term load forecasting

    SciTech Connect

    Lu, C.N.; Wu, H.T. (Dept. of Electrical Engineering); Vemuri, S. (Controls and Composition Div.)

    1993-02-01

    The artificial neural network (ANN) technique for short term load forecasting (STLF) has been proposed by several authors and has gained much attention recently. In order to evaluate ANN as a viable technique for STLF, one has to evaluate the performance of the ANN methodology against the practical considerations of STLF problems. This paper makes an attempt to address these issues. The paper presents the results of a study to investigate whether the ANN model is system dependent and/or case dependent. Data from two utilities were used in modeling and forecasting. In addition, the effectiveness of a next-24-hour ANN model in predicting the 24-hour load profile at one time was compared with the traditional next-one-hour ANN model.
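
    The two model shapes compared in the abstract differ in how training samples are cut from the load history: a next-one-hour model has 24 separate one-step targets per day, while a next-24-hour model emits the whole daily profile at once. The synthetic sinusoidal load below is an illustrative assumption.

```python
# Contrast the sample layout of a next-one-hour predictor with a
# next-24-hour (full daily profile) predictor on one week of hourly load.
import math

hours = 24 * 7
load = [100 + 20 * math.sin(2 * math.pi * h / 24) for h in range(hours)]

def one_hour_samples(load):
    # input: previous 24 hours; target: the single next hour
    return [(load[t - 24:t], load[t]) for t in range(24, len(load))]

def day_ahead_samples(load):
    # input: previous 24 hours; target: the full next 24-hour profile
    return [(load[t - 24:t], load[t:t + 24])
            for t in range(24, len(load) - 23, 24)]

print(len(one_hour_samples(load)), len(day_ahead_samples(load)))
```

    The day-ahead layout yields far fewer, wider samples, which is one of the practical trade-offs behind the comparison the paper reports.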

  14. Neural networks in support of manned space

    NASA Technical Reports Server (NTRS)

    Werbos, Paul J.

    1989-01-01

    Many lobbyists in Washington have argued that artificial intelligence (AI) is an alternative to manned space activity. In actuality, this is the opposite of the truth, especially as regards artificial neural networks (ANNs), the form of AI with the greatest hope of mimicking human abilities in learning, interfacing with sensors and actuators, flexibility, and balanced judgement. ANNs, their relation to expert systems (the more traditional form of AI), and the limitations of both technologies are briefly reviewed. A few highlights of recent work on ANNs, including an NSF-sponsored workshop on ANNs for control applications, are given. Current thinking on ANNs for use in certain key areas (the National Aerospace Plane, teleoperation, the control of large structures, fault diagnostics, and docking) which may be crucial to the long term future of man in space is discussed.

  15. Hopf bifurcation stability in Hopfield neural networks.

    PubMed

    Marichal, R L; González, E J; Marichal, G N

    2012-12-01

    In this paper we consider a simple discrete Hopfield neural network model and analyze local stability using the associated characteristic model. In order to study the dynamic behavior of the quasi-periodic orbit, the Hopf bifurcation must be determined. For the case of two neurons, we find one necessary condition that yields the Hopf bifurcation. In addition, we determine the stability and direction of the Hopf bifurcation by applying normal form theory and the center manifold theorem. An example is given and a numerical simulation is performed to illustrate the results. We analyze the influence of bias weights on the stability of the quasi-periodic orbit and study the phase-locking phenomena (Arnold tongues) that arise for a particular weight configuration.
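
    A numerical sketch of the two-neuron discrete dynamics: iterating x_{t+1} = tanh(W x_t) with W a scaled rotation, the eigenvalues of the Jacobian at the origin cross the unit circle as the scale s passes 1 (the discrete Hopf, or Neimark-Sacker, bifurcation). The weight values below are illustrative assumptions, not the paper's example.

```python
# Two-neuron discrete map x_{t+1} = tanh(W x_t) with W = s * rotation(theta).
# For s > 1 the origin is an unstable focus and the orbit settles onto a
# bounded closed orbit instead of the fixed point.
import math

s, theta = 1.2, 1.0                       # just past the bifurcation at s = 1
W = [[s * math.cos(theta), -s * math.sin(theta)],
     [s * math.sin(theta),  s * math.cos(theta)]]

x = [0.01, 0.0]                           # small perturbation from the origin
orbit = []
for _ in range(2000):
    x = [math.tanh(W[0][0] * x[0] + W[0][1] * x[1]),
         math.tanh(W[1][0] * x[0] + W[1][1] * x[1])]
    orbit.append(tuple(x))

radii = [math.hypot(*p) for p in orbit[-500:]]
print(min(radii), max(radii))  # the orbit leaves the origin but stays bounded
```

    Sweeping s through 1 (and adding bias weights) in this simulation is a simple way to observe the stability change and phase-locking behaviour the abstract analyzes.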

  16. Evolving neural networks through augmenting topologies.

    PubMed

    Stanley, Kenneth O; Miikkulainen, Risto

    2002-01-01

    An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task. We claim that the increased efficiency is due to (1) employing a principled method of crossover of different topologies, (2) protecting structural innovation using speciation, and (3) incrementally growing from minimal structure. We test this claim through a series of ablation studies that demonstrate that each component is necessary to the system as a whole and to each other. What results is significantly faster learning. NEAT is also an important contribution to GAs because it shows how it is possible for evolution to both optimize and complexify solutions simultaneously, offering the possibility of evolving increasingly complex solutions over generations, and strengthening the analogy with biological evolution. PMID:12180173
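
    A sketch of the historical-marking idea behind NEAT's principled crossover: connection genes carry global innovation numbers, so two genomes can be aligned gene-by-gene into matching, disjoint, and excess genes. The gene contents below are illustrative assumptions; real NEAT genes also carry weights and enable bits.

```python
# Align two NEAT genomes (dicts keyed by innovation number) for crossover.
# Genes shared by both parents are "matching"; non-shared genes inside the
# smaller parent's innovation range are "disjoint", beyond it "excess".

def align(parent_a, parent_b):
    a_keys, b_keys = set(parent_a), set(parent_b)
    matching = a_keys & b_keys
    cutoff = min(max(a_keys), max(b_keys))
    disjoint = {k for k in a_keys ^ b_keys if k <= cutoff}
    excess = {k for k in a_keys ^ b_keys if k > cutoff}
    return matching, disjoint, excess

a = {1: "in0->out", 2: "in1->out", 4: "in0->hid", 5: "hid->out"}
b = {1: "in0->out", 2: "in1->out", 3: "in1->hid", 6: "hid->out", 7: "in0->hid"}

matching, disjoint, excess = align(a, b)
print(sorted(matching), sorted(disjoint), sorted(excess))
```

    Disjoint and excess gene counts also feed NEAT's compatibility distance, which drives the speciation that protects structural innovation.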

  17. Neural network modeling of visual recognition

    NASA Astrophysics Data System (ADS)

    Braham, Rafik

    1992-07-01

    The recognition of visual patterns is one of the main application areas of neural networks. Several models have been designed based on the current understanding of visual information processing in the brains of cats and monkeys. Examples of such models are described in the works of Fukushima, Grossberg, von der Malsburg, and others. But because the visual system is very complex and visual information processing consists of several stages, the technical models attempt to reproduce one or a few aspects. The author has been mostly interested in modeling some of the anatomical features of visual areas and understanding their functional significance. In this paper, principles used in popular models are analyzed. Then the structure and design rationale of a vision model is described. In this description, the principles of the model rather than its implementation details are underscored.

  18. Microturbine control based on fuzzy neural network

    NASA Astrophysics Data System (ADS)

    Yan, Shijie; Bian, Chunyuan; Wang, Zhiqiang

    2006-11-01

    The microturbine generator (MTG) is a clean, efficient, low-cost, and reliable energy supply system. Judging from its external characteristics, the MTG is a multi-variable, time-varying, coupled system, so it is difficult to identify on-line, and the conventional control laws adopted previously cannot achieve desirable results. A novel fuzzy neural network (FNN) control algorithm is proposed in combination with conventional PID control. IF-THEN tuning rules were implemented with a first-order Sugeno fuzzy model with seven fuzzy rules, and the membership function was given as a continuous Gaussian function. Sample data were used to train the FNN. By continually adjusting the shapes of the membership functions and the weights, auto-tuning of the fuzzy rules can be achieved. The FNN algorithm has been applied to a 100 kW microturbine control and power converter system. Simulation and experimental results show that the algorithm works well.
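
    A minimal sketch of the first-order Sugeno inference mentioned above: Gaussian membership functions weight linear rule consequents, and the output is the normalised weighted sum. The rule count and all parameters are illustrative assumptions, not the paper's seven-rule controller.

```python
# First-order Sugeno fuzzy inference with Gaussian membership functions:
# each rule i fires with weight g(x; c_i) and contributes y_i = a_i*x + b_i.
import math

def gaussian(x, c, sigma):
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

# rules: (membership centre, consequent a, consequent b)
rules = [(-1.0, 0.5, 0.0), (0.0, 1.0, 0.2), (1.0, 2.0, -0.1)]

def sugeno(x, sigma=0.7):
    weights = [gaussian(x, c, sigma) for c, _, _ in rules]
    outputs = [a * x + b for _, a, b in rules]
    return sum(w * y for w, y in zip(weights, outputs)) / sum(weights)

print(sugeno(0.0))  # a blend of the three rule outputs at x = 0
```

    In the FNN setting, training adjusts the Gaussian centres and widths together with the consequent coefficients, which is the membership-shape and weight tuning the abstract describes.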

  19. Artificial neural network circuits with Josephson devices

    SciTech Connect

    Harada, Y.; Goto, E.

    1991-03-01

    This article describes a new approach to Josephson devices for computer applications. With an artificial neural network scheme, Josephson devices are expected to enable a new paradigm for future computer systems. Here the authors discuss a circuit configuration for a neuron with Josephson devices. The authors propose a combination of a variable bias source and Josephson devices for a synapse circuit. The bias source signal is steered by the Josephson device input signal and becomes the synapse output signal. These output signals are summed at a specific resistor or inductor to produce the weighted sum of the Josephson device input signals. According to the error signal, the bias source value is corrected; this corresponds to the learning procedure.

  1. Financial time series prediction using spiking neural networks.

    PubMed

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments.
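    Of the evaluation metrics named above, Maximum Drawdown is straightforward to compute from an equity curve. A minimal sketch (the curve values below are illustrative, not data from the paper):

```python
def max_drawdown(equity):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    peak = equity[0]
    worst = 0.0
    for value in equity:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

curve = [100, 120, 90, 110, 80, 130]  # hypothetical cumulative-return curve
print(max_drawdown(curve))  # drop from the peak of 120 to the trough of 80: 1/3
```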

  2. Financial Time Series Prediction Using Spiking Neural Networks

    PubMed Central

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two “traditional”, rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments. PMID:25170618

  3. Modeling Aircraft Wing Loads from Flight Data Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Allen, Michael J.; Dibley, Ryan P.

    2003-01-01

    Neural networks were used to model wing bending-moment loads, torsion loads, and control surface hinge-moments of the Active Aeroelastic Wing (AAW) aircraft. Accurate loads models are required for the development of control laws designed to increase roll performance through wing twist while not exceeding load limits. Inputs to the model include aircraft rates, accelerations, and control surface positions. Neural networks were chosen to model aircraft loads because they can account for uncharacterized nonlinear effects while retaining the capability to generalize. The accuracy of the neural network models was improved by first developing linear loads models to use as starting points for network training. Neural networks were then trained with flight data for rolls, loaded reversals, wind-up-turns, and individual control surface doublets for load excitation. Generalization was improved by using gain weighting and early stopping. Results are presented for neural network loads models of four wing loads and four control surface hinge moments at Mach 0.90 and an altitude of 15,000 ft. An average model prediction error reduction of 18.6 percent was calculated for the neural network models when compared to the linear models. This paper documents the input data conditioning, input parameter selection, structure, training, and validation of the neural network models.

  4. Identification of THz absorption spectra of chemicals using neural networks

    NASA Astrophysics Data System (ADS)

    Shen, Jingling; Jia, Yan; Liang, Meiyan; Chen, Sijia

    2007-09-01

    Absorption spectra in the range 0.2 to 2.6 THz of chemicals such as illicit drugs and antibiotics, obtained with the terahertz time-domain spectroscopy technique, were successfully identified by artificial neural networks. Back-propagation (BP) and self-organizing feature map (SOM) networks were investigated for identification and classification, respectively. Three-layer BP neural networks were employed to identify the absorption spectra of nine illicit drugs and six antibiotics: the spectra of the chemicals were used to train a BP network, and absorption spectra measured at different times were then identified by the trained network, achieving an average identification rate of 76%. SOM networks, which sort input vectors by their similarity, were used to cluster 60 absorption spectra from six illicit drugs; networks trained on a 20×20 grid and a 16×16 grid both gave satisfactory clustering results. These results indicate that it is feasible to apply BP and SOM neural network models to THz spectrum identification.
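    A self-organizing feature map of the kind described sorts input vectors by similarity: each input is matched to its closest grid node, and that node and its neighbours are pulled toward the input. A minimal sketch with toy two-dimensional data and a tiny grid (not the 20×20 network or THz spectra of the study):

```python
import math, random

def train_som(data, grid=4, dim=2, epochs=200, lr0=0.5, radius0=2.0, seed=0):
    """Train a grid x grid self-organizing feature map on `data`
    (a list of dim-dimensional vectors)."""
    rng = random.Random(seed)
    weights = {(i, j): [rng.random() for _ in range(dim)]
               for i in range(grid) for j in range(grid)}
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1 - frac)          # learning rate decays over training
        radius = radius0 * (1 - frac)  # neighbourhood radius shrinks
        x = rng.choice(data)
        # best-matching unit: grid node whose weight vector is closest to x
        bmu = min(weights,
                  key=lambda n: sum((w - v) ** 2 for w, v in zip(weights[n], x)))
        for node, w in weights.items():
            d = math.dist(node, bmu)
            if d <= radius:            # pull the BMU and its neighbours toward x
                h = math.exp(-d * d / (2 * max(radius, 1e-9) ** 2))
                weights[node] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return weights
```

    After training on two well-separated clusters, inputs from different clusters map to different grid nodes, which is the clustering behaviour the abstract reports.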

  5. Inferential estimation of polymer quality using bootstrap aggregated neural networks.

    PubMed

    Zhang, J

    1999-07-01

    Inferential estimation of polymer quality in a batch polymerisation reactor using bootstrap aggregated neural networks is studied in this paper. Number average molecular weight and weight average molecular weight are estimated from the on-line measurements of reactor temperature, jacket inlet and outlet temperatures, coolant flow rate through the jacket, monomer conversion, and the initial batch conditions. Bootstrap aggregated neural networks are used to enhance the accuracy and robustness of neural network models built from a limited amount of training data. The training data set is re-sampled using bootstrap re-sampling with replacement to form several sets of training data. For each set of training data, a neural network model is developed. The individual neural networks are then combined together to form a bootstrap aggregated neural network. Determination of appropriate weights for combining individual networks using principal component regression is proposed in this paper. Confidence bounds for neural network predictions can also be obtained using the bootstrapping technique. The techniques have been successfully applied to the simulation of a batch methyl methacrylate polymerisation reactor.
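    The bootstrap aggregation scheme itself is simple to sketch. Here a plain least-squares line stands in for each individual neural network (an assumption made for brevity; the paper trains a network per resampled set), and the spread of the individual predictions plays the role of the confidence bound:

```python
import random

def fit_line(xs, ys):
    """Least-squares slope/intercept for one bootstrap replicate."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return slope, my - slope * mx

def bagged_predict(xs, ys, x_new, n_models=50, seed=1):
    """Bootstrap aggregation: resample the training data with replacement,
    fit one model per replicate, average their predictions; the spread of
    the individual predictions gives a rough confidence band."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        a, b = fit_line([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(a * x_new + b)
    mean = sum(preds) / len(preds)
    spread = (sum((p - mean) ** 2 for p in preds) / len(preds)) ** 0.5
    return mean, spread

xs = list(range(10))
ys = [2 * x + 1 for x in xs]      # exactly linear toy data
mean, spread = bagged_predict(xs, ys, 20)
```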

  6. Fault detection and diagnosis using neural network approaches

    NASA Technical Reports Server (NTRS)

    Kramer, Mark A.

    1992-01-01

    Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used, the first based on training networks using data representing both normal and abnormal modes of process behavior, and the second based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, to produce real-time estimation of missing or failed sensors based on the correlations codified in the neural network.

  7. Forecasting Zakat collection using artificial neural network

    NASA Astrophysics Data System (ADS)

    Sy Ahmad Ubaidillah, Sh. Hafizah; Sallehuddin, Roselina

    2013-04-01

    'Zakat', "that which purifies" or "alms", is the giving of a fixed portion of one's wealth to charity, generally to the poor and needy. It is one of the five pillars of Islam, and must be paid by all practicing Muslims who have the financial means ('nisab'). 'Nisab' is the minimum level that determines whether 'zakat' is payable on one's assets. Today, in most Muslim countries, 'zakat' is collected through a decentralized and voluntary system. Under this voluntary system, 'zakat' committees are established and tasked with the collection and distribution of 'zakat' funds. 'Zakat' promotes a more equitable redistribution of wealth and fosters a sense of solidarity amongst members of the 'Ummah'. The Malaysian government has established a 'zakat' center in every state to facilitate the management of 'zakat'. Each center needs a good 'zakat' management system to execute its functions effectively, especially in the collection and distribution of 'zakat'; therefore, a good forecasting model is needed. The purpose of this study is to develop a forecasting model for Pusat Zakat Pahang (PZP) to predict the total collection from 'zakat' on assets more precisely. In this study, two Artificial Neural Network (ANN) models using two different learning algorithms are developed: Back Propagation (BP) and Levenberg-Marquardt (LM). Both models are developed and compared in terms of accuracy. The best model is determined based on the lowest mean square error and the highest correlation values. Based on the results obtained, the BP neural network is recommended as the model to forecast the collection from 'zakat' on assets for PZP.

  8. Increasing cellular coverage within integrated terrestrial/satellite mobile networks

    NASA Technical Reports Server (NTRS)

    Castro, Jonathan P.

    1995-01-01

    When applying the hierarchical cellular concept, the satellite acts as a giant umbrella cell covering a region containing several terrestrial cells. If a mobile terminal traversing the region reaches the boundary of regular terrestrial cellular service, a network transition occurs and the satellite system continues the mobile coverage. To adequately assess the service boundaries of a mobile satellite system and a cellular network within an integrated environment, this paper provides an optimized scheme to predict when a network transition may be necessary. Under the assumption of classified propagation phenomena and lognormal shadowing, the study applies an analytical approach to estimate the location of a mobile terminal based on reception of the signal strength emitted by a base station.

  9. Analog neural network-based helicopter gearbox health monitoring system.

    PubMed

    Monsen, P T; Dzwonczyk, M; Manolakos, E S

    1995-12-01

    The development of a reliable helicopter gearbox health monitoring system (HMS) has been the subject of considerable research over the past 15 years. The deployment of such a system could lead to a significant saving in lives and vehicles as well as dramatically reduce the cost of helicopter maintenance. Recent research results indicate that a neural network-based system could provide a viable solution to the problem. This paper presents two neural network-based realizations of an HMS system. A hybrid (digital/analog) neural system is proposed as an extremely accurate off-line monitoring tool used to reduce helicopter gearbox maintenance costs. In addition, an all analog neural network is proposed as a real-time helicopter gearbox fault monitor that can exploit the ability of an analog neural network to directly compute the discrete Fourier transform (DFT) as a sum of weighted samples. Hardware performance results are obtained using the Integrated Neural Computing Architecture (INCA/1) analog neural network platform that was designed and developed at The Charles Stark Draper Laboratory. The results indicate that it is possible to achieve a 100% fault detection rate with 0% false alarm rate by performing a DFT directly on the first layer of INCA/1 followed by a small-size two-layer feed-forward neural network and a simple post-processing majority voting stage.
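    The key observation above, that each DFT bin is just a weighted sum of the input samples and therefore maps naturally onto a layer of fixed synaptic weights, can be sketched directly (an illustrative software analogue, not the INCA/1 analog hardware):

```python
import cmath

def dft_weights(n):
    """Fixed 'synaptic' weights: row k holds exp(-2*pi*i*k*t/n) for t = 0..n-1."""
    return [[cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)]
            for k in range(n)]

def dft(samples):
    """Each frequency bin is a weighted sum of the time samples -- the
    operation the abstract maps onto the first (fixed-weight) network layer."""
    n = len(samples)
    return [sum(w * x for w, x in zip(row, samples)) for row in dft_weights(n)]

# A pure tone lands in a single bin of an 8-point transform:
sig = [cmath.exp(2j * cmath.pi * 3 * t / 8).real for t in range(8)]
spectrum = dft(sig)
```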

  10. Analysis of torsional oscillations using an artificial neural network

    SciTech Connect

    Hsu, Y.Y.; Jeng, L.H.

    1992-12-01

    In this paper, a novel approach using an artificial neural network (ANN) is proposed for the analysis of torsional oscillations in a power system. In the developed network, system variables such as generator loadings and the capacitor compensation ratio, which have major impacts on the damping characteristics of torsional oscillation modes, are employed as the inputs. The outputs of the neural net provide the desired eigenvalues for the torsional modes. Once the connection weights of the neural network have been learned using a set of training data derived off-line, the network can be applied to torsional analysis in real-time situations. To demonstrate the effectiveness of the proposed neural net, torsional analysis is performed on the IEEE First Benchmark Model. It is concluded from the test results that accurate assessment of the torsional mode eigenvalues can be achieved by the neural network in a very efficient manner. Therefore, the proposed neural network approach can serve as a valuable tool for system operators in conducting SSR analysis in operational planning.

  11. [How can an otolaryngologist benefit from artificial neural networks?].

    PubMed

    Szaleniec, Joanna; Składzień, Jacek; Tadeusiewicz, Ryszard; Oleś, Krzysztof; Konior, Marcin; Przeklasa, Robert

    2012-01-01

    Artificial neural networks are informatic systems that have unique computational capabilities. The principle of their functioning is based on the rules of data processing in the brain. This article discusses the most important features of the artificial neural networks with reference to their applications in otolaryngology. The cited studies concern the fields of rhinology, audiology, phoniatrics, vestibulology, oncology, sleep apnea and salivary gland diseases. The authors also refer to their own experience with predictive neural models designed in the Department of Otolaryngology of the Jagiellonian University Medical College in Krakow. The applications of artificial neural networks in clinical diagnosis, automated signal interpretation and outcome prediction are presented. Moreover, the article explains how the artificial neural networks work and how the otolaryngologists can use them in their clinical practice and research.

  12. Predicting neural network firing pattern from phase resetting curve

    NASA Astrophysics Data System (ADS)

    Oprisan, Sorinel; Oprisan, Ana

    2007-04-01

    Autonomous neural networks called central pattern generators (CPG) are composed of endogenously bursting neurons and produce rhythmic activities, such as flying, swimming, walking, chewing, etc. Simplified CPGs for quadrupedal locomotion and swimming are modeled by a ring of neural oscillators such that the output of one oscillator constitutes the input for the subsequent neural oscillator. The phase response curve (PRC) theory discards the detailed conductance-based description of the component neurons of a network and reduces them to ``black boxes'' characterized by a transfer function, which tabulates the transient change in the intrinsic period of a neural oscillator subject to external stimuli. Based on open-loop PRC, we were able to successfully predict the phase-locked period and relative phase between neurons in a half-center network. We derived existence and stability criteria for heterogeneous ring neural networks that are in good agreement with experimental data.
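    The open-loop PRC prediction described above reduces to iterating a one-dimensional phase map until it settles at a phase-locked fixed point. A toy sketch with an invented sinusoidal PRC (the shape and the period mismatch are assumptions, not the measured curves of the study):

```python
import math

def prc(phi):
    """Toy phase-resetting curve (hypothetical shape, not measured data):
    fractional change in cycle length for a stimulus arriving at phase phi."""
    return 0.2 * math.sin(2 * math.pi * phi)

def phase_map(phi, ts_over_t0=1.05):
    """One iteration of the open-loop map: advance by the stimulus interval
    (in units of the intrinsic period), apply the resetting, wrap into [0, 1)."""
    return (phi + ts_over_t0 - prc(phi)) % 1.0

phi = 0.3
for _ in range(200):          # iterate to the phase-locked fixed point
    phi = phase_map(phi)
# at the locked phase, prc(phi) exactly offsets the 5% period mismatch
```

    Stability of the locked state follows from the slope of the map at the fixed point, which is the kind of existence/stability criterion the abstract refers to.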

  13. Controlling natural convection in a closed thermosyphon using neural networks

    NASA Astrophysics Data System (ADS)

    Cammarata, L.; Fichera, A.; Pagano, A.

    The aim of this paper is to present a neural network-based approach to identification and control of a rectangular natural circulation loop. The first part of the paper defines a NARMAX model for predicting the oscillating behavior observed experimentally in the fluid temperature. The model has been generalized and implemented by means of a multilayer perceptron neural network trained to simulate the system's experimental dynamics. In the second part of the paper, the NARMAX model is used to simulate the plant during the training of a second neural network that aims to suppress the undesired oscillating behavior of the system. To define the neural controller, a cascade of several pairs of neural networks representing both the system and the controller is used, the number of pairs coinciding with the number of steps over which the control action is exerted.

  14. Tracking and vertex finding with drift chambers and neural networks

    SciTech Connect

    Lindsey, C.

    1991-09-01

    Finding tracks, track vertices and event vertices with neural networks from drift chamber signals is discussed. Simulated feed-forward neural networks have been trained with back-propagation to give track parameters using Monte Carlo simulated tracks in one case and actual experimental data in another. Effects on network performance of limited weight resolution, noise and drift chamber resolution are given. Possible implementations in hardware are discussed. 7 refs., 10 figs.

  15. Neural network based speech synthesizer: A preliminary report

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Mcintire, Gary

    1987-01-01

    A neural net based speech synthesis project is discussed. The novelty is that the reproduced speech was extracted from actual voice recordings. In essence, the neural network learns the timing, pitch fluctuations, connectivity between individual sounds, and speaking habits unique to that individual person. The parallel distributed processing network used for this project is the generalized backward propagation network which has been modified to also learn sequences of actions or states given in a particular plan.

  16. Artificial neural network simulation of battery performance

    SciTech Connect

    O'Gorman, C.C.; Ingersoll, D.; Jungst, R.G.; Paez, T.L.

    1998-12-31

    Although they appear deceptively simple, batteries embody a complex set of interacting physical and chemical processes. While the discrete engineering characteristics of a battery, such as the physical dimensions of the individual components, are relatively straightforward to define explicitly, the myriad chemical and physical processes, including their interactions, are much more difficult to represent accurately. Within this category are the diffusive and solubility characteristics of individual species; the reaction kinetics and mechanisms of primary chemical species as well as intermediates; and the growth and morphology characteristics of reaction products as influenced by environmental and operational use profiles. For this reason, development of analytical models that can consistently predict the performance of a battery has been only partially successful, even though significant resources have been applied to this problem. As an alternative approach, the authors have begun development of a non-phenomenological model for battery systems based on artificial neural networks. Both recurrent and non-recurrent forms of these networks have been successfully used to develop accurate representations of battery behavior. The connectionist normalized linear spline (CNLS) network has been implemented with a self-organizing layer to model a battery system with the generalized radial basis function net. Concurrently, efforts are under way to use the feedforward back-propagation network to map the "state" of a battery system. Because of the complexity of battery systems, accurate representation of the input and output parameters has proven to be very important. This paper describes these initial feasibility studies as well as the current models and makes comparisons between predicted and actual performance.

  17. Design of neural networks for classification of remotely sensed imagery

    NASA Technical Reports Server (NTRS)

    Chettri, Samir R.; Cromp, Robert F.; Birmingham, Mark

    1992-01-01

    Classification accuracies of a backpropagation neural network are discussed and compared with a maximum likelihood classifier (MLC) with multivariate normal class models. We have found that, because of its nonparametric nature, the neural network outperforms the MLC in this area. In addition, we discuss techniques for constructing optimal neural nets on parallel hardware like the MasPar MP-1 currently at GSFC. Other important discussions are centered around training and classification times of the two methods, and sensitivity to the training data. Finally, we discuss future work in the area of classification and neural nets.

  18. Comparison of Gompertz and neural network models of broiler growth.

    PubMed

    Roush, W B; Dozier, W A; Branton, S L

    2006-04-01

    Neural networks offer an alternative to regression analysis for biological growth modeling. Very little research has been conducted to model animal growth using artificial neural networks. Twenty-five male chicks (Ross x Ross 308) were raised in an environmental chamber. Body weights were determined daily and feed and water were provided ad libitum. The birds were fed a starter diet (23% CP and 3,200 kcal of ME/kg) from 0 to 21 d, and a grower diet (20% CP and 3,200 kcal of ME/ kg) from 22 to 70 d. Dead and female birds were not included in the study. Average BW of 18 birds were used as the data points for the growth curve to be modeled. Training data consisted of alternate-day weights starting with the first day. Validation data consisted of BW at all other age periods. Comparison was made between the modeling by the Gompertz nonlinear regression equation and neural network modeling. Neural network models were developed with the Neuroshell Predictor. Accuracy of the models was determined by mean square error (MSE), mean absolute deviation (MAD), mean absolute percentage error (MAPE), and bias. The Gompertz equation was fit for the data. Forecasting error measurements were based on the difference between the model and the observed values. For the training data, the lowest MSE, MAD, MAPE, and bias were noted for the neural-developed neural network. For the validation data, the lowest MSE and MAD were noted with the genetic algorithm-developed neural network. Lowest bias was for the neural-developed network. As measured by bias, the Gompertz equation underestimated the values whereas the neural- and genetic-developed neural networks produced little or no overestimation of the observed BW responses. Past studies have attempted to interpret the biological significance of the estimates of the parameters of an equation. However, it may be more practical to ignore the relevance of parameter estimates and focus on the ability to predict responses.
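    As a concrete reference point, the Gompertz curve and the four accuracy measures named above are easy to state in code. A sketch (the parameter values are illustrative, not the fitted broiler values):

```python
import math

def gompertz(t, a, b, c):
    """Gompertz growth curve: asymptotic weight a, displacement b, rate c."""
    return a * math.exp(-b * math.exp(-c * t))

def forecast_errors(observed, predicted):
    """The four accuracy measures used in the comparison."""
    n = len(observed)
    err = [o - p for o, p in zip(observed, predicted)]
    mse = sum(e * e for e in err) / n
    mad = sum(abs(e) for e in err) / n
    mape = 100 * sum(abs(e) / o for e, o in zip(err, observed)) / n
    bias = sum(err) / n  # positive bias: the model underestimates on average
    return mse, mad, mape, bias
```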

  19. Discriminating lysosomal membrane protein types using dynamic neural network.

    PubMed

    Tripathi, Vijay; Gupta, Dwijendra Kumar

    2014-01-01

    This work presents a dynamic artificial neural network methodology which classifies proteins into their classes from their sequences alone: the lysosomal membrane protein classes and various other membrane protein classes. A neural network-based lysosomal-associated membrane protein type prediction system is proposed. Different protein sequence representations are fused to extract the features of a protein sequence, comprising seven feature sets: amino acid (AA) composition, sequence length, hydrophobic group, electronic group, sum of hydrophobicity, R-group, and dipeptide composition. To reduce the dimensionality of the large feature vector, we applied principal component analysis. The probabilistic neural network, generalized regression neural network, and Elman recurrent neural network (RNN) are used as classifiers and compared with the layer recurrent network (LRN), a dynamic network. Dynamic networks have memory: their output depends not only on the current input but also on previous outputs. The LRN achieves the highest accuracy among the artificial neural networks tested, with an overall jackknife cross-validation accuracy of 93.2% on the data set. These results suggest that the method can be effectively applied to discriminate lysosomal-associated membrane proteins from other membrane proteins (Type-I, outer membrane proteins, GPI-anchored) and globular proteins, and that this protein sequence representation reflects the core features of membrane proteins better than the classical AA composition.
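    The dimensionality-reduction step can be sketched with a plain eigendecomposition PCA (random stand-in features are used here; the seven sequence-derived feature sets themselves are not reproduced):

```python
import numpy as np

def pca_reduce(features, k):
    """Project feature vectors onto their top-k principal components."""
    x = features - features.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # k highest-variance directions
    return x @ top

rng = np.random.default_rng(0)
raw = rng.normal(size=(50, 7))   # 50 hypothetical 7-dimensional feature vectors
reduced = pca_reduce(raw, 3)
print(reduced.shape)  # (50, 3)
```

    The reduced vectors would then be fed to whichever classifier is under comparison.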

  20. Short-Term Load Forecasting using Dynamic Neural Networks

    NASA Astrophysics Data System (ADS)

    Chogumaira, Evans N.; Hiyama, Takashi

    This paper presents short-term electricity load forecasting using dynamic neural networks (DNN). The proposed approach includes an assessment of the DNN's stability to ascertain continued reliability. A comparative study between three different neural network architectures, the feedforward, Elman and radial basis neural networks, is performed. The performance and stability of each DNN are evaluated using actual hourly load data. Stability of each of the three networks is determined through eigenvalue analysis. The neural network weights are dynamically adapted to meet the performance and stability requirements. A new approach for adapting radial basis function (RBF) neural network weights is also proposed. Evaluation of the networks is done in terms of forecasting error, stability and the effort required to train a particular network. The results show that the DNN based on the radial basis architecture performs much better than the rest. Eigenvalue analysis also shows that the radial basis DNN is more stable, making it very reliable as the input varies.

  1. Modeling integrated cellular machinery using hybrid Petri-Boolean networks.

    PubMed

    Berestovsky, Natalie; Zhou, Wanding; Nagrath, Deepak; Nakhleh, Luay

    2013-01-01

    The behavior and phenotypic changes of cells are governed by a cellular circuitry that represents a set of biochemical reactions. Based on biological functions, this circuitry is divided into three types of networks, each encoding a major biological process: signal transduction, transcription regulation, and metabolism. This division has generally enabled taming the computational complexity of dealing with the entire system, allowed the use of modeling techniques specific to each component, and achieved separation of the different time scales at which reactions in each of the three networks occur. Nonetheless, with this division comes a loss of information and of the power needed to elucidate certain cellular phenomena. Within the cell, these three types of networks work in tandem, and each produces signals and/or substances that are used by the others to process information and operate normally. Therefore, computational techniques for modeling integrated cellular machinery are needed. In this work, we propose an integrated hybrid model (IHM) that combines Petri nets and Boolean networks to model integrated cellular networks. Coupled with a stochastic simulation mechanism, the model simulates the dynamics of the integrated network and can be perturbed to generate testable hypotheses. Our model is qualitative, is mostly built upon knowledge from the literature, and requires fine-tuning of very few parameters. We validated our model on two systems: the transcriptional regulation of glucose metabolism in human cells, and cellular osmoregulation in S. cerevisiae. The model produced results that are in very good agreement with experimental data, and produces valid hypotheses. The abstract nature of our model and the ease of its construction make it a very good candidate for modeling integrated networks from qualitative data. The results it produces can guide the practitioner to zoom into components and interconnections and investigate them more closely.

  2. Cost estimation of timber bridges using neural networks

    SciTech Connect

    Creese, R.C.; Li, L.

    1995-05-01

    Neural network models, or more simply "neural nets," have great potential application in speech and image recognition. They also have great potential for cost estimating. Neural networks are particularly effective for complex estimation where the relationship between the output and the input cannot be expressed by simple mathematical relationships. A neural network method was applied to the cost estimation of timber bridges to illustrate the technique. The results of the neural network method were evaluated by the coefficient of determination (the R-squared value) for the key input variables. A comparison of the neural network results and standard linear regression results was performed on the timber bridge data. A step-by-step validation is presented to make the application of neural networks to this estimation process easy to understand. The input is propagated from the input layer through each layer until an output is generated. The output is compared with the desired output and the error is computed for each node in the output layer. The error is then transmitted backward (thus the phrase "back propagation") from the output layer to the intermediate layers and then to the input layer. Based upon the errors, the weights are adjusted and the procedure is repeated. The number of training cycles is 15,000 to 50,000 for simple networks, but this usually takes only a few minutes on a personal computer. 7 refs., 4 figs., 11 tabs.
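    The forward/backward cycle described above can be sketched in a few lines. Here a toy single-output network learns the OR truth table, which stands in for the bridge-cost data (an assumption for illustration only; the paper's inputs and network size differ):

```python
import math, random

def train(samples, hidden=4, epochs=3000, lr=0.5, seed=0):
    """Minimal one-hidden-layer network trained by back-propagation, mirroring
    the cycle in the abstract: forward pass, output error, error propagated
    backward layer by layer, weights adjusted, repeat."""
    rng = random.Random(seed)
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    # hidden layer sees (x1, x2, bias); output layer sees hidden units + bias
    w1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(hidden + 1)]

    def forward(x):
        xs = list(x) + [1.0]                                   # bias input
        h = [sig(sum(w * v for w, v in zip(ws, xs))) for ws in w1] + [1.0]
        out = sig(sum(w * v for w, v in zip(w2, h)))
        return xs, h, out

    for _ in range(epochs):
        for x, target in samples:
            xs, h, out = forward(x)                            # forward pass
            d_out = (out - target) * out * (1 - out)           # output error
            for i in range(hidden):                            # back-propagate
                d_h = d_out * w2[i] * h[i] * (1 - h[i])
                for j in range(3):
                    w1[i][j] -= lr * d_h * xs[j]
            for i in range(hidden + 1):                        # adjust weights
                w2[i] -= lr * d_out * h[i]
    return lambda x: forward(x)[2]

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]    # OR truth table
predict = train(data)
```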

  3. Learning evasive maneuvers using evolutionary algorithms and neural networks

    NASA Astrophysics Data System (ADS)

    Kang, Moung Hung

    In this research, evolutionary algorithms and recurrent neural networks are combined to evolve control knowledge to help pilots avoid being struck by a missile, based on a two-dimensional air combat simulation model. The recurrent neural network represents the pilot's control knowledge, and evolutionary algorithms (Genetic Algorithms, Evolution Strategies, and Evolutionary Programming) are used to optimize the weights and/or topology of the recurrent neural network. The simulation model of the two-dimensional evasive maneuver problem is used for evaluating the performance of the evolved recurrent neural networks. Five typical air combat conditions were selected for this evaluation. Analysis of variance (ANOVA) tests and response graphs were used to analyze the results. Overall, there was little difference in the performance of the three evolutionary algorithms used to evolve the control knowledge. However, the number of generations each algorithm required to obtain the best performance differed significantly: ES converged the fastest, followed by EP and then GA. The recurrent neural networks evolved by the evolutionary algorithms outperformed the traditional recommendation for evasive maneuvers, the maximum gravitational turn, in each air combat condition. Furthermore, the recommended actions of the recurrent neural networks are reasonable and can be used for pilot training.

  4. Financial volatility trading using recurrent neural networks.

    PubMed

    Tino, P; Schittenkopf, C; Dorffner, G

    2001-01-01

    We simulate daily trading of straddles on financial indexes. The straddles are traded based on predictions of daily volatility differences in the indexes. The main predictive models studied are recurrent neural nets (RNN). Such applications have often been studied in isolation. However, due to the special character of daily financial time-series, it is difficult to make full use of RNN representational power. Recurrent networks either tend to overfit noisy data, or behave like finite-memory sources with shallow memory; they hardly beat classical fixed-order Markov models. To overcome data nonstationarity, we use a special technique that combines sophisticated models fitted on a larger data set with a fixed set of simple-minded symbolic predictors using only recent inputs. Finally, we compare our predictors with the GARCH family of econometric models designed to capture time-dependent volatility structure in financial returns. GARCH models have been used to trade volatility. Experimental results show that while GARCH models cannot generate any significantly positive profit, by careful use of recurrent networks or Markov models the market makers can generate a statistically significant excess profit; but then there is no reason to prefer RNN over much simpler and more straightforward Markov models. We argue that any report containing RNN results on financial tasks should be accompanied by results achieved by simple finite-memory sources combined with simple techniques to fight nonstationarity in the data.
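    For reference, the GARCH(1,1) benchmark mentioned above updates the conditional variance recursively as sigma²_t = omega + alpha·r²_{t-1} + beta·sigma²_{t-1}. The parameter values below are illustrative, not fitted to any index.

```python
def garch_path(returns, omega=0.1, alpha=0.1, beta=0.8):
    # seed with the unconditional variance omega / (1 - alpha - beta)
    var = omega / (1 - alpha - beta)
    path = [var]
    for r in returns[:-1]:
        # a large squared return raises the next day's conditional variance
        var = omega + alpha * r * r + beta * var
        path.append(var)
    return path

vols = garch_path([0.0, 2.0, -1.0, 0.5])
```

With these parameters the seed variance is 1.0 and the shock at t=1 lifts the variance to 1.22 the next day.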

  5. EEG source localization: a neural network approach.

    PubMed

    Sclabassi, R J; Sonmez, M; Sun, M

    2001-07-01

    Functional activity in the brain is associated with the generation of currents and resultant voltages, which may be observed on the scalp as the electroencephalogram. The current sources may be modeled as dipoles. The properties of the current dipole sources may be studied by solving either the forward or the inverse problem. The forward problem utilizes a volume conductor model for the head, in which the potentials on the conductor surface are computed based on an assumed current dipole at an arbitrary location, orientation, and strength. In the inverse problem, on the other hand, a current dipole, or a group of dipoles, is identified based on the observed EEG. Both the forward and inverse problems are typically solved by numerical procedures, such as a boundary element method and an optimization algorithm. These approaches are highly time-consuming and unsuitable for the rapid evaluation of brain function. In this paper we present a different approach to these problems based on machine learning. We solve both problems using artificial neural networks which are trained off-line using back-propagation techniques to learn the complex source-potential relationships of head volume conduction. Once trained, these networks are able to generalize their knowledge to localize functional activity within the brain in a computationally efficient manner.
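    As a minimal illustration of the forward problem, the potential of a current dipole in an infinite homogeneous conductor has the closed form V = (p·r)/(4πσ|r|³). A realistic head model replaces this with a boundary-element computation; the dipole moment, observation point, and conductivity below are illustrative.

```python
import math

def dipole_potential(p, r, sigma=0.33):
    # p: dipole moment vector (A*m), r: vector from dipole to observation
    # point (m), sigma: conductivity (S/m)
    dot = sum(pi * ri for pi, ri in zip(p, r))
    dist = math.sqrt(sum(ri * ri for ri in r))
    return dot / (4 * math.pi * sigma * dist ** 3)

# a 10 nA*m dipole observed 5 cm away along its axis
v = dipole_potential(p=(0.0, 0.0, 1e-8), r=(0.0, 0.0, 0.05))
```

A forward-trained network learns exactly this kind of source-to-potential map, only for a far more complex conductor geometry.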

  6. Bayesian Recurrent Neural Network for Language Modeling.

    PubMed

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) specifies the probability of a word sequence and provides the basis for word prediction in a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in continuous space. However, training an RNN-LM is an ill-posed problem because of the large number of parameters arising from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and applies it to continuous speech recognition. We aim to penalize overly complex RNN-LMs by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter through maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
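    The regularized cross-entropy objective amounts to the usual cross-entropy plus a Gaussian-prior (L2) penalty on the weights. In this hypothetical sketch, `lam` stands in for the hyperparameter that the paper estimates from the marginal likelihood.

```python
import math

def regularized_cross_entropy(probs, target_idx, weights, lam=0.01):
    # cross-entropy of the predicted distribution against the target word
    ce = -math.log(probs[target_idx])
    # Gaussian prior on the weights => quadratic (L2) penalty, MAP estimate
    penalty = 0.5 * lam * sum(w * w for w in weights)
    return ce + penalty

loss = regularized_cross_entropy([0.2, 0.7, 0.1], 1, [0.5, -1.0, 2.0], lam=0.1)
```

Larger `lam` shrinks the weights harder, trading fit for a sparser, better-conditioned model.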

  7. DEM interpolation based on artificial neural networks

    NASA Astrophysics Data System (ADS)

    Jiao, Limin; Liu, Yaolin

    2005-10-01

    This paper proposes a systematic scheme for Digital Elevation Model (DEM) interpolation based on Artificial Neural Networks (ANNs). We employ a BP network to fit the terrain surface, and then detect and eliminate samples with gross errors. A Self-Organizing Feature Map (SOFM) is used to cluster the elevation samples, dividing the study area into more homogeneous tiles, and a BP model is then employed to interpolate the DEM within each cluster. Because error samples are eliminated and clusters are built, the interpolation results improve. The case study indicates that the ANN interpolation scheme is feasible, and a comparison with polynomial and spline interpolation shows that the ANN achieves more accurate results. ANN interpolation does not require the interpolation function to be determined beforehand, so human influence is lessened and the process is more automatic and intelligent. At the end of the paper, we propose the idea of constructing an ANN surface model, which can be used in multi-scale DEM visualization, DEM generalization, etc.
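    The gross-error screening step can be sketched with a 1-D profile: fit a simple trend (standing in for the BP-fitted surface) and flag samples whose residual exceeds a multiple of the residual standard deviation. The profile, the linear trend, and the k = 2 cutoff are illustrative assumptions.

```python
def flag_gross_errors(samples, k=2.0):
    # least-squares line z = a*x + b as a stand-in for the fitted surface
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    mz = sum(z for _, z in samples) / n
    a = (sum((x - mx) * (z - mz) for x, z in samples)
         / sum((x - mx) ** 2 for x, _ in samples))
    b = mz - a * mx
    residuals = [z - (a * x + b) for x, z in samples]
    std = (sum(r * r for r in residuals) / n) ** 0.5
    # flag samples whose residual exceeds k standard deviations
    return [i for i, r in enumerate(residuals) if abs(r) > k * std]

# elevation profile rising 1 m per step, with one blunder at index 5
profile = [(x, float(x)) for x in range(10)]
profile[5] = (5, 50.0)
bad = flag_gross_errors(profile)
```

Only the blunder at index 5 is flagged; the remaining samples would then be clustered and interpolated.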

  8. Designed Proteins To Modulate Cellular Networks

    PubMed Central

    Cortajarena, Aitziber L.; Liu, Tina Y.; Hochstrasser, Mark; Regan, Lynne

    2012-01-01

    A major challenge of protein design is to create useful new proteins that interact specifically with biological targets in living cells. Such binding modules have many potential applications, including the targeted perturbation of protein networks. As a general approach to create such modules, we designed a library with approximately 10^9 different binding specificities based on a small three-tetratricopeptide-repeat (TPR) motif framework. We employed a novel strategy, based on split-GFP reassembly, to screen the library for modules with the desired binding specificity. Using this approach, we identified modules that bind tightly and specifically to Dss1, a small human protein that interacts with the tumor suppressor protein BRCA2. We showed that these modules also bind the yeast homologue of Dss1, Sem1. Furthermore, we demonstrated that these modules inhibit Sem1 activity in yeast. This strategy will be generally applicable to make novel genetically encoded tools for systems/synthetic biology applications. PMID:20020775

  9. Neural population densities shape network correlations

    NASA Astrophysics Data System (ADS)

    Lefebvre, Jérémie; Perkins, Theodore J.

    2012-02-01

    The way sensory microcircuits manage cellular response correlations is a crucial question in understanding how such systems integrate external stimuli and encode information. Most sensory systems exhibit heterogeneities in terms of population sizes and features, which all impact their dynamics. This work addresses how correlations between the dynamics of neural ensembles depend on the relative size or density of excitatory and inhibitory populations. To do so, we study an apparently symmetric system of coupled stochastic differential equations that model the evolution of the populations’ activities. Excitatory and inhibitory populations are connected by reciprocal recurrent connections, and both receive different stimuli exhibiting a certain level of correlation with each other. A stability analysis is performed, which reveals an intrinsic asymmetry in the distribution of the fixed points with respect to the threshold of the nonlinearities. Based on this, we show how the cross correlation between the population responses depends on the density of the inhibitory population, and that a specific ratio between both population sizes leads to a state of zero correlation. We show that this so-called asynchronous state subsists, despite the presence of stimulus correlation, and most importantly, that it occurs only in asymmetrical systems where one population outnumbers the other. Using linear approximations, we derive analytical expressions for the root of the cross-correlation function and study how the asynchronous state is impacted by the model's parameters. This work suggests a possible explanation for why inhibitory cells outnumber excitatory cells in the visual system.

  10. Short term load forecasting using fuzzy neural networks

    SciTech Connect

    Bakirtzis, A.G.; Theocharis, J.B.; Kiartzis, S.J.; Satsios, K.J.

    1995-08-01

    This paper presents the development of a fuzzy system for short term load forecasting. The fuzzy system has the network structure and the training procedure of a neural network and is called Fuzzy Neural Network (FNN). A FNN initially creates a rule base from existing historical load data. The parameters of the rule base are then tuned through a training process, so that the output of the FNN adequately matches the available historical load data. Once trained, the FNN can be used to forecast future loads. Test results show that the FNN can forecast future loads with an accuracy comparable to that of neural networks, while its training is much faster than that of neural networks.

  11. Digital associative memory neural network with optical learning capability

    NASA Astrophysics Data System (ADS)

    Watanabe, Minoru; Ohtsubo, Junji

    1994-12-01

    A digital associative memory neural network system with optical learning and recalling capabilities is proposed, using liquid crystal television spatial light modulators and an Optic RAM detector. In spite of the drawback of limited memory capacity compared with optical analog associative memory neural networks, the proposed optical digital neural network has the advantage of all-optical learning and recalling capabilities; thus an all-optical network system is easily realized. Some experimental results of learning and recalling for character recognition are presented. This new optical architecture offers a compact system and fast learning and recalling properties. Based on the results, a practical system for the implementation of a faster optical digital associative memory neural network with ferroelectric liquid crystal SLMs is also proposed.

  12. Genetic Algorithm Based Neural Networks for Nonlinear Optimization

    1994-09-28

    This software develops a novel approach to nonlinear optimization using genetic algorithm based neural networks. To the best of our knowledge, this approach represents the first attempt at applying both neural network and genetic algorithm techniques to solve a nonlinear optimization problem. The approach constructs a neural network structure and an appropriately shaped energy surface whose minima correspond to optimal solutions of the problem. A genetic algorithm is employed to perform a parallel and powerful search of the energy surface.

  13. Robust neural network with applications to credit portfolio data analysis

    PubMed Central

    Feng, Yijia; Li, Runze; Sudjianto, Agus; Zhang, Yiyun

    2011-01-01

    In this article, we study nonparametric conditional quantile estimation via a neural network structure. We propose an estimation method that combines quantile regression and neural networks (robust neural network, RNN). It provides good smoothing performance in the presence of outliers and can be used to construct prediction bands. A Majorization-Minimization (MM) algorithm is developed for optimization. A Monte Carlo simulation study is conducted to assess the performance of the RNN. Comparison with other nonparametric regression methods (e.g., local linear regression and regression splines) in a real data application demonstrates the advantage of the newly proposed procedure. PMID:21687821
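    Quantile regression minimizes the check ("pinball") loss rather than squared error, which is what gives the RNN its robustness to outliers. Below is a minimal sketch of that loss; tau = 0.5 recovers median (absolute-error) regression.

```python
def pinball_loss(y_true, y_pred, tau):
    # residuals above the prediction are weighted by tau,
    # residuals below it by (1 - tau)
    total = 0.0
    for y, f in zip(y_true, y_pred):
        r = y - f
        total += tau * r if r >= 0 else (tau - 1) * r
    return total / len(y_true)

loss = pinball_loss([1.0, 2.0, 3.0], [2.0, 2.0, 2.0], tau=0.5)
```

Because the loss grows linearly (not quadratically) in the residual, a single outlier cannot dominate the fit.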

  14. Neural network tool for rapid recovery of plasma topology

    SciTech Connect

    Tribaldos, V.; van Milligen, B.P.

    1997-01-01

    A general method for the rapid recovery of plasma topology, based on a neural network fit of the normalized magnetic flux, is presented and applied to coordinate inversions. The neural network provides a flexible and compact basis for representing the plasma topology and allows the evaluation of spatial derivatives by analytic methods (as opposed to finite-difference methods), making it faster than other techniques. We present examples of this technique for both two-dimensional plasmas (tokamak D-shaped, X-point) and stellarators. © 1997 American Institute of Physics.

  15. Application of artificial neural networks (ANNs) in wine technology.

    PubMed

    Baykal, Halil; Yildirim, Hatice Kalkan

    2013-01-01

    In recent years, neural networks have emerged as a powerful method for numerous practical applications in a wide variety of disciplines. In practical terms, neural networks are nonlinear statistical data modeling tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data. In food technology, artificial neural networks (ANNs) are useful for food safety and quality analyses, and for predicting the chemical, functional, and sensory properties of various food products during processing and distribution. In wine technology, ANNs have been used for classification and for predicting wine process conditions. This review discusses basic ANN technology and its possible applications in wine technology.

  16. Caption detection from video sequence based on fuzzy neural networks

    NASA Astrophysics Data System (ADS)

    Gao, Xinbo; Xin, Hong; Li, Jie

    2001-09-01

    Captions graphically superimposed on video frames can provide important indexing information. The automatic detection and recognition of video captions can be of great help in querying topics of interest in a digital news library. To detect captions in video sequences, we present algorithms based on fuzzy clustering neural networks. Since neural networks have learning and self-organizing capabilities and a parallel computing mechanism, neural-network-based techniques have become more efficient and popular tools for multimedia processing as digital image and video databases grow rapidly. Experimental results show that our caption detection scheme is effective and robust.

  17. Multitarget data association using an optical neural network.

    PubMed

    Yee, M; Casasent, D

    1992-02-10

    A neural network solution to the data association problem in multitarget tracking is presented. It uses position and velocity measurements of the targets over two consecutive time frames. A quadratic neural energy function, which is suitable for an optical processing implementation, results. Simulation results using realistic target trajectories with target measurement noise, including platform movement or jitter, are presented. The results show that the network performs well when track data are corrupted by significant noise. Several possible optical neural network architectures to implement this algorithm are discussed, including a new all-optical matrix-vector multiplication approach. The matrix structure is employed to allow binary-ternary spatial light modulators to be used.

  18. Prediction of Aerodynamic Coefficients using Neural Networks for Sparse Data

    NASA Technical Reports Server (NTRS)

    Rajkumar, T.; Bardina, Jorge; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Basic aerodynamic coefficients are modeled as functions of angle of attack and sideslip with vehicle lateral symmetry and compressibility effects. Most of the aerodynamic parameters can be well fitted using polynomial functions. In this paper a fast, reliable way of predicting aerodynamic coefficients is produced using a neural network. The training data for the neural network are derived from wind tunnel tests and numerical simulations. The coefficients of lift, drag, and pitching moment are expressed as functions of alpha (angle of attack) and Mach number. The results produced from preliminary neural network analysis are very good.
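    A typical polynomial form of the kind referred to above is the parabolic drag polar, CD = CD0 + k·CL². The constants below are illustrative textbook-style values, not taken from the wind tunnel data in the record.

```python
def drag_coefficient(cl, cd0=0.02, k=0.05):
    # parabolic drag polar: zero-lift drag plus induced-drag term
    return cd0 + k * cl * cl

cd = drag_coefficient(0.8)   # 0.02 + 0.05 * 0.64 = 0.052
```

A neural network replaces this fixed polynomial with a learned surface over angle of attack and Mach number, which is useful exactly where sparse data make polynomial fitting unreliable.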

  19. Real-Time Adaptive Color Segmentation by Neural Networks

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2004-01-01

    Artificial neural networks that would utilize the cascade error projection (CEP) algorithm have been proposed as a means of autonomous, real-time, adaptive color segmentation of images that change with time. In the original intended application, such a neural network would be used to analyze digitized color video images of terrain on a remote planet as viewed from an uninhabited spacecraft approaching the planet. During descent toward the surface of the planet, information on the segmentation of the images into differently colored areas would be updated adaptively in real time to capture changes in contrast, brightness, and resolution, all in an effort to identify a safe and scientifically productive landing site and provide control feedback to steer the spacecraft toward that site. Potential terrestrial applications include monitoring images of crops to detect insect invasions and monitoring of buildings and other facilities to detect intruders. The CEP algorithm is reliable and is well suited to implementation in very-large-scale integrated (VLSI) circuitry. It was chosen over other neural-network learning algorithms because it is better suited to real-time learning: it provides a self-evolving neural-network structure, requires fewer iterations to converge, and is more tolerant to low resolution (that is, fewer bits) in the quantization of neural-network synaptic weights. Consequently, a CEP neural network learns relatively quickly, and the circuitry needed to implement it is relatively simple. Like other neural networks, a CEP neural network includes an input layer, hidden units, and output units (see figure). As in other neural networks, a CEP network is presented with a succession of input training patterns, giving rise to a set of outputs that are compared with the desired outputs. Also as in other neural networks, the synaptic weights are updated iteratively in an effort to bring the outputs closer to target values. A distinctive feature of the CEP neural

  20. Beneficial role of noise in artificial neural networks

    SciTech Connect

    Monterola, Christopher; Saloma, Caesar; Zapotocky, Martin

    2008-06-18

    We demonstrate enhancement of neural networks' efficacy to recognize frequency-encoded signals and/or to categorize spatial patterns of neural activity as a result of noise addition. For temporal information recovery, noise directly added to the receiving neurons allows instantaneous improvement of the signal-to-noise ratio [Monterola and Saloma, Phys. Rev. Lett. 2002]. For spatial patterns, however, recurrence is necessary to extend and homogenize the operating range of a feed-forward neural network [Monterola and Zapotocky, Phys. Rev. E 2005]. Finally, using the size of the basin of attraction of the network's learned patterns (dynamical fixed points), a procedure for estimating the optimal noise is demonstrated.
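    The temporal noise benefit follows the stochastic-resonance mechanism: a subthreshold input never fires a threshold neuron on its own, but added noise carries it over threshold on a fraction of trials. The threshold, signal level, and noise magnitude below are illustrative, not taken from the cited papers.

```python
import random

random.seed(42)
THRESHOLD = 1.0
signal = 0.8                       # subthreshold stimulus

def firing_rate(noise_sigma, trials=1000):
    hits = 0
    for _ in range(trials):
        # the neuron fires when signal plus noise crosses threshold
        if signal + random.gauss(0, noise_sigma) >= THRESHOLD:
            hits += 1
    return hits / trials

silent = firing_rate(0.0)          # no noise: the neuron never fires
noisy = firing_rate(0.5)           # moderate noise: frequent detections
```

Past some optimum, further noise degrades detection again, which is why the abstract's optimal-noise estimate matters.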

  1. Pseudogradient Training For A Class Of Neural Networks

    NASA Technical Reports Server (NTRS)

    Zeng, Zheng; Goodman, Rodney M.; Smyth, Padhraic J.

    1995-01-01

    Second-order recurrent neural networks of a special type are modified to enhance stability in the face of inputs beyond the range of those on which they were trained. Second-order recurrent neural networks contain product feedback units and can be trained, by use of example inputs and outputs, to act as finite-state automata. The particular second-order recurrent neural networks in question learn grammars, in the sense that they are trained to generate binary responses to input training sequences of ones and zeros, each sequence being marked "legal" or "illegal" according to the grammar to be learned.

  2. Autonomous Navigation Apparatus With Neural Network for a Mobile Vehicle

    NASA Technical Reports Server (NTRS)

    Quraishi, Naveed (Inventor)

    1996-01-01

    An autonomous navigation system for a mobile vehicle arranged to move within an environment includes a plurality of sensors arranged on the vehicle and at least one neural network including an input layer coupled to the sensors, a hidden layer coupled to the input layer, and an output layer coupled to the hidden layer. The neural network produces output signals representing respective positions of the vehicle, such as the X coordinate, the Y coordinate, and the angular orientation of the vehicle. A plurality of patch locations within the environment are used to train the neural networks to produce the correct outputs in response to the distances sensed.

  3. Noise-enhanced categorization in a recurrently reconnected neural network

    SciTech Connect

    Monterola, Christopher; Zapotocky, Martin

    2005-03-01

    We investigate the interplay of recurrence and noise in neural networks trained to categorize spatial patterns of neural activity. We develop the following procedure to demonstrate how, in the presence of noise, the introduction of recurrence permits to significantly extend and homogenize the operating range of a feed-forward neural network. We first train a two-level perceptron in the absence of noise. Following training, we identify the input and output units of the feed-forward network, and thus convert it into a two-layer recurrent network. We show that the performance of the reconnected network has features reminiscent of nondynamic stochastic resonance: the addition of noise enables the network to correctly categorize stimuli of subthreshold strength, with optimal noise magnitude significantly exceeding the stimulus strength. We characterize the dynamics leading to this effect and contrast it to the behavior of a more simple associative memory network in which noise-mediated categorization fails.

  4. Further results in multiset processing with neural networks.

    PubMed

    McGregor, Simon

    2008-08-01

    This paper presents new experimental results on the variadic neural network (VNN) [McGregor, S. (2007). Neural network processing for multiset data. In Proceedings: Vol. 4668. Artificial neural networks - ICANN 2007, 17th international conference (pp. 460-470). Springer]. The inputs to a variadic network are an arbitrary-length list of n-tuples of real numbers, where n is fixed, and the function computed by the network is unaffected by permutation of the inputs. This paper describes improvements in the training algorithm for the variadic perceptron, based on a constructive cascade topology, and performance of the improved networks on geometric problems inspired by vector graphics. Further development may allow practical application of these or similar networks to vector graphics processing and statistical analysis.
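    The defining property of a variadic network, invariance of the output under permutation of the input tuples, can be obtained by pooling per-tuple features with a symmetric function. The quadratic feature map below is a hypothetical stand-in for the learned per-tuple network.

```python
def variadic_output(tuples):
    # per-tuple feature (stand-in for a small shared network), then
    # symmetric pooling: a sum is unaffected by input order
    return sum(x * x + 2 * y for x, y in tuples)

a = variadic_output([(1.0, 2.0), (3.0, 0.5), (-1.0, 4.0)])
b = variadic_output([(-1.0, 4.0), (1.0, 2.0), (3.0, 0.5)])
```

Any reordering of the list of 2-tuples yields the same output, and the function accepts lists of arbitrary length, matching the variadic-input setting.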

  5. Efficiently modeling neural networks on massively parallel computers

    SciTech Connect

    Farber, R.M.

    1992-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper will describe the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors. We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can be extended to arbitrarily large networks by merging the memory space of separate processors with fast adjacent-processor communications. This paper will consider the simulation of only feed-forward neural networks, although this method is extendible to recurrent networks.

  7. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor interprocessor communications. This paper will consider the simulation of only feed-forward neural networks, although this method is extendable to recurrent networks.
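    The O(log(number of processors)) global summation noted above is a pairwise tree reduction, which halves the number of active processors at each step. This sketch models only the communication pattern, not the CM-2 implementation.

```python
def tree_sum(values):
    # pairwise reduction: each step, adjacent pairs combine in parallel
    vals, steps = list(values), 0
    while len(vals) > 1:
        nxt = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:          # odd element carries over unchanged
            nxt.append(vals[-1])
        vals = nxt
        steps += 1                 # one parallel communication round
    return vals[0], steps

total, steps = tree_sum(range(64))   # 64 "processors" need log2(64) = 6 rounds
```

For p processors the number of rounds is ceil(log2(p)), which is the sub-linear growth the abstract refers to.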

  8. Devices and circuits for nanoelectronic implementation of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Turel, Ozgur

    Biological neural networks perform complicated information processing tasks at speeds better than conventional computers based on conventional algorithms. This has inspired researchers to look into the way these networks function, and to propose artificial networks that mimic their behavior. Unfortunately, most artificial neural networks, either software or hardware, do not provide either the speed or the complexity of a human brain. Nanoelectronics, with the high density and low power dissipation that it provides, may be used in developing more efficient artificial neural networks. This work consists of two major contributions in this direction. First is the proposal of the CMOL concept, hybrid CMOS-molecular hardware [1-8]. CMOL may circumvent most of the problems posed by molecular devices, such as low yield, yet provide high active device density, ~10^12/cm^2. The second contribution is CrossNets, artificial neural networks that are based on CMOL. We showed that CrossNets, with their fault tolerance and exceptional speed (~4 to 6 orders of magnitude faster than biological neural networks), can perform any task any artificial neural network can perform. Moreover, there is hope that if their integration scale is increased to that of the human cerebral cortex (~10^10 neurons and ~10^14 synapses), they may be capable of performing more advanced tasks.

  9. Noise-enhanced convolutional neural networks.

    PubMed

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives.

  10. Combining neural network models for automated diagnostic systems.

    PubMed

    Ubeyli, Elif Derya

    2006-12-01

    This paper illustrates the use of combined neural network (CNN) models to guide model selection for diagnosis of internal carotid arterial (ICA) disorders. The ICA Doppler signals were decomposed into time-frequency representations using the discrete wavelet transform, and statistical features were calculated to depict their distribution. The first-level networks were implemented for the diagnosis of ICA disorders using the statistical features as inputs. To improve diagnostic accuracy, the second-level network was trained using the outputs of the first-level networks as input data. The CNN models achieved accuracy rates higher than those of the stand-alone neural network models. PMID:17233161
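
    The two-level scheme can be sketched with hand-rolled logistic units; everything here (the synthetic data, the single-feature "experts", the training settings) is a hypothetical stand-in for the paper's networks and Doppler-derived features. The point is the wiring: the second-level unit is trained on first-level outputs, not on raw features.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(w, x):
    """Logistic unit; w[0] is the bias, w[1:] the input weights."""
    return sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))

def train(data, n_in, lr=0.5, epochs=200, rng=random):
    """Plain stochastic gradient descent on cross-entropy."""
    w = [rng.uniform(-0.1, 0.1) for _ in range(n_in + 1)]
    for _ in range(epochs):
        for x, y in data:
            g = predict(w, x) - y          # d(loss)/d(logit)
            w[0] -= lr * g
            for i in range(n_in):
                w[i + 1] -= lr * g * x[i]
    return w

random.seed(1)
# Synthetic "diagnosis" data: label 1 iff both features are high.
raw = [(random.random(), random.random()) for _ in range(200)]
data = [((a, b), 1 if a > 0.5 and b > 0.5 else 0) for a, b in raw]

# First level: two expert units, each seeing only one feature.
w1 = train([((x[0],), y) for x, y in data], 1)
w2 = train([((x[1],), y) for x, y in data], 1)

# Second level: trained on the first-level outputs, not raw features.
stacked = [((predict(w1, (x[0],)), predict(w2, (x[1],))), y) for x, y in data]
w_top = train(stacked, 2)

acc = sum((predict(w_top, s) > 0.5) == (y == 1) for s, y in stacked) / len(stacked)
print(round(acc, 2))
```

    Neither expert can solve the AND-like task alone, but the second-level unit combining their outputs recovers most of it, which is the combination effect the abstract reports.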

  11. Axial level-dependent molecular and cellular mechanisms underlying the genesis of the embryonic neural plate.

    PubMed

    Kondoh, Hisato; Takada, Shinji; Takemoto, Tatsuya

    2016-06-01

    The transcription factor gene Sox2, centrally involved in neural primordial regulation, is activated by many enhancers. During the early stages of embryonic development, Sox2 is regulated by the enhancers N2 and N1 in the anterior neural plate (ANP) and posterior neural plate (PNP), respectively. This differential use of the enhancers reflects distinct regulatory mechanisms underlying the genesis of the ANP and PNP. The ANP develops directly from the epiblast, triggered by nodal signal inhibition, via the combined action of the TFs SOX2, OTX2, POU3F1, and ZIC2, which promotes ANP development and inhibits other cell lineages. In contrast, the PNP is derived from neuromesodermal bipotential axial stem cells that develop into the neural plate when Sox2 is activated by the N1 enhancer, whereas they develop into the paraxial mesoderm when the N1 enhancer is repressed by the action of TBX6. The axial stem cells are maintained by the activity of WNT3a and T (Brachyury). However, at axial levels anterior to the 8th somite (cervical levels), the development of both the neural plate and somites proceeds in the absence of WNT3a, T, or TBX6. These observations indicate that distinct molecular and cellular mechanisms determine neural plate genesis depending on the axial level, and contradict the classical concept of "neural induction," which assumes a pan-neural plate mechanism. PMID:27279156

  13. Neural model of gene regulatory network: a survey on supportive meta-heuristics.

    PubMed

    Biswas, Surama; Acharyya, Sriyankar

    2016-06-01

    A gene regulatory network (GRN) arises from the regulatory interactions between different genes, mediated by their encoded proteins, in a cellular context. Given its immense importance in disease detection and drug discovery, the GRN has been modelled through various mathematical and computational schemes reported in survey articles. Neural and neuro-fuzzy models have been a focus of attention in bioinformatics, and the predominant use of meta-heuristic algorithms in training neural models has proved effective. Considering these facts, this paper surveys neural modelling schemes for GRNs and the efficacy of meta-heuristic algorithms for parameter learning (i.e. weighting connections) within the model. The survey covers two structure-related approaches to inferring a GRN, the global-structure approach and the substructure approach. It also describes two neural modelling schemes: artificial neural network/recurrent neural network based modelling and neuro-fuzzy modelling. The meta-heuristic algorithms applied so far to learn the structure and parameters of neurally modelled GRNs are reviewed here. PMID:27048512
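
    As an illustration of meta-heuristic parameter learning for a neurally modelled GRN, the sketch below fits the weights of a toy recurrent model, x_i(t+1) = sigmoid(Σ_j w_ij x_j(t) + b_i), to a synthetic expression series with simulated annealing. The model form, cooling schedule, and data are assumptions for illustration, not taken from the survey.

```python
import math
import random

def step(w, b, x):
    """One step of a recurrent neural GRN model:
    x_i(t+1) = sigmoid(sum_j w[i][j] * x_j(t) + b[i])."""
    n = len(x)
    return [1.0 / (1.0 + math.exp(-(sum(w[i][j] * x[j] for j in range(n)) + b[i])))
            for i in range(n)]

def cost(w, b, series):
    """Sum of squared one-step prediction errors over the series."""
    return sum((p - o) ** 2
               for t in range(len(series) - 1)
               for p, o in zip(step(w, b, series[t]), series[t + 1]))

def anneal(series, n, iters=3000, rng=random):
    """Simulated annealing over the model's weights and biases."""
    w = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    b = [rng.uniform(-1, 1) for _ in range(n)]
    cur = best = start = cost(w, b, series)
    for k in range(iters):
        temp = max(1.0 - k / iters, 1e-3)
        if rng.random() < 0.8:            # perturb one weight...
            i, j = rng.randrange(n), rng.randrange(n)
            old = w[i][j]
            w[i][j] += rng.gauss(0.0, 0.3)
            c = cost(w, b, series)
            if c < cur or rng.random() < math.exp((cur - c) / temp):
                cur = c                   # accept (Metropolis rule)
            else:
                w[i][j] = old             # reject and revert
        else:                             # ...or one bias
            i = rng.randrange(n)
            old = b[i]
            b[i] += rng.gauss(0.0, 0.3)
            c = cost(w, b, series)
            if c < cur or rng.random() < math.exp((cur - c) / temp):
                cur = c
            else:
                b[i] = old
        best = min(best, cur)
    return w, b, start, best

# Generate a short expression time series from known parameters, then
# recover a model that reproduces it (a stand-in for real expression data).
random.seed(2)
w_true = [[2.0, -1.5], [1.0, 0.5]]
b_true = [0.1, -0.2]
series = [[0.2, 0.8]]
for _ in range(20):
    series.append(step(w_true, b_true, series[-1]))

w_fit, b_fit, c_start, c_end = anneal(series, 2)
print(c_end <= c_start)  # the best-so-so cost never exceeds the starting cost
```

    Any of the surveyed meta-heuristics (genetic algorithms, particle swarm, etc.) could replace the `anneal` loop; only the proposal/acceptance step changes.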

  15. Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo

    PubMed Central

    Packer, Adam M.; Russell, Lloyd E.; Dalgleish, Henry W.P.; Häusser, Michael

    2016-01-01

    We describe an all-optical strategy for simultaneously manipulating and recording the activity of multiple neurons with cellular resolution in vivo. Concurrent two-photon optogenetic activation and calcium imaging is enabled by coexpression of a red-shifted opsin and a genetically encoded calcium indicator. A spatial light modulator allows tens of user-selected neurons to be targeted for spatiotemporally precise optogenetic activation, while simultaneous fast calcium imaging provides high-resolution network-wide readout of the manipulation with negligible optical crosstalk. Proof-of-principle experiments in mouse barrel cortex demonstrate interrogation of the same neuronal population during different behavioral states, and targeting of neuronal ensembles based on their functional signature. This approach extends the optogenetic toolkit beyond the specificity obtained with genetic or viral approaches, enabling high-throughput, flexible and long-term optical interrogation of functionally defined neural circuits with single-cell and single-spike resolution in the mammalian brain in vivo. PMID:25532138

  16. Neural Network Based Intelligent Sootblowing System

    SciTech Connect

    Mark Rhode

    2005-04-01

    Particulate matter is also a by-product of coal combustion. Modern utility boilers are usually fitted with electrostatic precipitators to aid in the collection of particulate matter. Although extremely efficient, these devices are sensitive to rapid changes in inlet mass concentration as well as total mass loading. Traditionally, utility boilers are equipped with devices known as sootblowers, which use steam, water, or air to dislodge deposits and clean the surfaces within the boiler, and which are operated according to established rules or operator judgment. Poor sootblowing regimes can influence particulate mass loading to the electrostatic precipitators. The project applied a neural network intelligent sootblowing system in conjunction with state-of-the-art controls and instruments to optimize the operation of a utility boiler and systematically control boiler slagging/fouling. This optimization targeted a 30% reduction in NOx, a 2% improvement in efficiency, and a 5% reduction in opacity. The neural network system proved to be a non-invasive system that can readily be adapted to virtually any utility boiler. Specific conclusions from this neural network application are listed below. These conclusions should be used in conjunction with the specific details provided in the technical discussions of this report to develop a thorough understanding of the process.

  17. Millimeter-Wave Evolution for 5G Cellular Networks

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Kei; Tran, Gia Khanh; Shimodaira, Hidekazu; Nanba, Shinobu; Sakurai, Toshiaki; Takinami, Koji; Siaud, Isabelle; Strinati, Emilio Calvanese; Capone, Antonio; Karls, Ingolf; Arefi, Reza; Haustein, Thomas

    Triggered by the explosion of mobile traffic, the 5G (5th Generation) cellular network must evolve to increase the system rate to 1000 times that of current systems within 10 years. Motivated by this common problem, several studies have sought to integrate mm-wave access into current cellular networks as multi-band heterogeneous networks to exploit the ultra-wideband aspect of the mm-wave band. The authors of this paper have proposed a comprehensive architecture for cellular networks with mm-wave access, in which mm-wave small-cell base stations and a conventional macro base station are connected to a Centralized-RAN (C-RAN) to operate the system effectively: user-plane/control-plane splitting enables power-efficient seamless handover as well as centralized resource control, including dynamic cell structuring to match the limited coverage of mm-wave access to high-traffic user locations. In this paper, to prove the effectiveness of the proposed 5G cellular networks with mm-wave access, system-level simulation is conducted by introducing an expected future traffic model, a measurement-based mm-wave propagation model, and a centralized cell association algorithm that exploits the C-RAN architecture. The numerical results show the effectiveness of the proposed network in realizing a system rate 1000 times higher than the current network within 10 years, which is not achieved by small cells using the commonly considered 3.5 GHz band. Furthermore, the paper also gives the latest status of mm-wave devices and regulations to show the feasibility of using mm-wave in 5G systems.
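
    A centralized, load-aware cell association can be sketched as follows. This greedy rule, which discounts each cell's rate by its current load so that traffic spreads over the mm-wave small cells instead of piling onto the macro cell, is a hypothetical stand-in for the paper's C-RAN association algorithm, and the rate matrix is invented.

```python
def associate(rates):
    """Greedy centralized association: rates[u][b] is the achievable
    rate of user u at base station b (e.g. derived from measured SINR).
    Each user picks the cell maximizing its effective throughput when
    the cell's airtime is shared equally among its associated users."""
    load = [0] * len(rates[0])          # users currently on each cell
    assign = []
    for user_rates in rates:
        best = max(range(len(user_rates)),
                   key=lambda b: user_rates[b] / (load[b] + 1))
        assign.append(best)
        load[best] += 1
    return assign

# Toy example: cell 0 is a macro cell (modest rate everywhere),
# cells 1-2 are mm-wave small cells (high rate for nearby users only).
rates = [
    [100, 2000,   1],   # user near small cell 1
    [100, 1800,   1],   # user near small cell 1
    [100,    1, 900],   # user near small cell 2
    [100,    1,   1],   # user covered only by the macro cell
]
print(associate(rates))  # → [1, 1, 2, 0]
```

    Because the controller sees all loads at once, the user with no mm-wave coverage stays on the macro cell while the others offload, which is the coverage-matching behavior the dynamic cell structuring aims for.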

  18. Neural and Cellular Mechanisms of Fear and Extinction Memory Formation

    PubMed Central

    Orsini, Caitlin A.; Maren, Stephen

    2012-01-01

    Over the course of natural history, countless animal species have evolved adaptive behavioral systems to cope with dangerous situations and promote survival. Emotional memories are central to these defense systems because they are rapidly acquired and prepare organisms for future threat. Unfortunately, the persistence and intrusion of memories of fearful experiences are quite common and can lead to pathogenic conditions, such as anxiety and phobias. Over the course of the last thirty years, neuroscientists and psychologists alike have attempted to understand the mechanisms by which the brain encodes and maintains these aversive memories. Of equal interest, though, is the neurobiology of extinction memory formation as this may shape current therapeutic techniques. Here we review the extant literature on the neurobiology of fear and extinction memory formation, with a strong focus on the cellular and molecular mechanisms underlying these processes. PMID:22230704

  19. A TLD dose algorithm using artificial neural networks

    SciTech Connect

    Moscovitch, M.; Rotunda, J.E.; Tawil, R.A.; Rathbone, B.A.

    1995-12-31

    An artificial neural network was designed and used to develop a dose algorithm for a multi-element thermoluminescence dosimeter (TLD). The neural network architecture is based on the concept of the functional link network (FLN). A neural network is an information-processing method inspired by the biological nervous system. A dose algorithm based on neural networks is fundamentally different from conventional algorithms, as it has the capability to learn from its own experience. The neural network algorithm is shown the expected dose values (output) associated with given responses of a multi-element dosimeter (input) many times. The algorithm, trained in this way, eventually becomes capable of producing its own solution to similar (but not exactly the same) dose calculation problems. For personal dosimetry, the output consists of the desired dose components: deep dose, shallow dose, and eye dose. The input consists of the TL data obtained from the readout of a multi-element dosimeter. The neural network approach was applied to the Harshaw Type 8825 TLD and was shown to significantly improve the performance of this dosimeter, well within the U.S. accreditation requirements for personnel dosimeters.
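
    The functional-link idea, expanding the inputs with fixed nonlinear terms and then training only a linear output layer, can be sketched as below. The element-to-dose mapping, the choice of pairwise-product expansion terms, and the training settings are all hypothetical illustrations, not the Harshaw 8825 algorithm.

```python
import random

def expand(x):
    """Functional-link expansion: augment raw element responses with
    pairwise products (plus a bias term) so that a single linear
    layer can capture element-ratio effects."""
    out = [1.0] + list(x)
    for i in range(len(x)):
        for j in range(i, len(x)):
            out.append(x[i] * x[j])
    return out

def train_fln(data, n_out, lr=0.05, epochs=500, rng=random):
    """Stochastic gradient descent on squared error; only the linear
    output weights are learned, as in an FLN."""
    n_feat = len(expand(data[0][0]))
    w = [[rng.uniform(-0.1, 0.1) for _ in range(n_feat)] for _ in range(n_out)]
    for _ in range(epochs):
        for x, y in data:
            f = expand(x)
            for k in range(n_out):
                g = sum(wi * fi for wi, fi in zip(w[k], f)) - y[k]
                for i in range(n_feat):
                    w[k][i] -= lr * g * f[i]
    return w

def predict_fln(w, x):
    f = expand(x)
    return [sum(wi * fi for wi, fi in zip(row, f)) for row in w]

# Hypothetical calibration set: 4 element readings -> (deep, shallow, eye)
# dose, generated from an invented ground-truth mapping.
random.seed(3)
def true_dose(x):
    return [0.6 * x[0] + 0.4 * x[1],
            0.8 * x[2] + 0.2 * x[0] * x[3],
            0.5 * x[3] + 0.5 * x[1]]

data = []
for _ in range(100):
    x = [random.random() for _ in range(4)]
    data.append((x, true_dose(x)))

w = train_fln(data, 3)
x_test = [0.5, 0.5, 0.5, 0.5]
pred = predict_fln(w, x_test)
err = max(abs(p - t) for p, t in zip(pred, true_dose(x_test)))
print([round(p, 2) for p in pred])
```

    In practice the training pairs would come from irradiations at known dose levels, and performance would be judged against the accreditation dose categories mentioned in the abstract.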

  20. Neural progenitors organize in small-world networks to promote cell proliferation

    PubMed Central

    Malmersjö, Seth; Rebellato, Paola; Smedler, Erik; Planert, Henrike; Kanatani, Shigeaki; Liste, Isabel; Nanou, Evanthia; Sunner, Hampus; Abdelhady, Shaimaa; Zhang, Songbai; Andäng, Michael; El Manira, Abdeljabbar; Silberberg, Gilad; Arenas, Ernest; Uhlén, Per

    2013-01-01

    Coherent network activity among assemblies of interconnected cells is essential for diverse functions in the adult brain. However, cellular networks that exist before the formation of chemical synapses are poorly understood. Here, embryonic stem cell-derived neural progenitors were found to form networks exhibiting synchronous calcium ion (Ca2+) activity that stimulated cell proliferation. Immature neural cells established circuits that propagated electrical signals between neighboring cells, thereby activating voltage-gated Ca2+ channels that triggered Ca2+ oscillations. These network circuits were dependent on gap junctions, because blocking them prevented electrotonic transmission both in vitro and in vivo. Inhibiting connexin 43 gap junctions abolished network activity, suppressed proliferation, and affected embryonic cortical layer formation. Cross-correlation analysis revealed highly correlated Ca2+ activities in small-world networks that followed a scale-free topology. Graph theory predicts that such network designs are effective for biological systems. Taken together, these results demonstrate that immature cells in the developing brain organize in small-world networks that critically regulate neural progenitor proliferation. PMID:23576737
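
    The analysis pipeline described (pairwise Ca2+ cross-correlations, thresholded into a network, then graph metrics) can be sketched on synthetic traces. The traces, the correlation threshold, and the two-clusters-plus-hub layout are assumptions for illustration, not the paper's data or parameters.

```python
import math
import random
from collections import deque

def pearson(a, b):
    """Pearson correlation between two equal-length traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def adjacency(traces, thr=0.5):
    """Cells become connected nodes when their traces correlate above thr."""
    n = len(traces)
    return [[1 if i != j and pearson(traces[i], traces[j]) > thr else 0
             for j in range(n)] for i in range(n)]

def clustering(adj):
    """Mean local clustering coefficient."""
    n = len(adj)
    cs = []
    for i in range(n):
        nb = [j for j in range(n) if adj[i][j]]
        k = len(nb)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(adj[u][v] for ui, u in enumerate(nb) for v in nb[ui + 1:])
        cs.append(2.0 * links / (k * (k - 1)))
    return sum(cs) / n

def mean_path_length(adj):
    """Average shortest-path length over connected pairs (BFS)."""
    n = len(adj)
    total = pairs = 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if adj[u][v] and v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for v, d in dist.items():
            if v != s:
                total += d
                pairs += 1
    return total / pairs if pairs else float("inf")

# Synthetic Ca2+ traces: two correlated clusters sharing one hub cell,
# plus independent noise (a stand-in for calcium-imaging data).
random.seed(4)
def trace(drive, noise=0.2):
    return [d + random.gauss(0.0, noise) for d in drive]

d1 = [math.sin(0.3 * t) for t in range(200)]
d2 = [math.cos(0.2 * t) for t in range(200)]
hub = [0.5 * (a + b) for a, b in zip(d1, d2)]
traces = [trace(d1) for _ in range(4)] + [trace(d2) for _ in range(4)] + [trace(hub, 0.05)]

adj = adjacency(traces)
print(round(clustering(adj), 2), round(mean_path_length(adj), 2))
```

    High clustering with short path lengths relative to a size-matched random graph is the small-world signature the abstract refers to; on real data one would also check the degree distribution for the reported scale-free tail.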