Science.gov

Sample records for cellular neural network

  1. Cellular neuron and large wireless neural network

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Forrester, Thomas; Ambrose, Barry; Kazantzidis, Matheos; Lin, Freddie

    2006-05-01

    A new approach to neural networks is proposed, based on wireless interconnects (synapses) and cellular neurons, both software and hardware, with a capacity of 10^10 neurons, almost fully connected. The core of the system is a Spatio-Temporal-Variant (STV) kernel and a cellular axon with synaptic plasticity variable in time and space. The novel large neural network hardware is based on two established wireless technologies: RF-cellular and IR-wireless.

  2. Spatial Dynamics of Multilayer Cellular Neural Networks

    NASA Astrophysics Data System (ADS)

    Wu, Shi-Liang; Hsu, Cheng-Hsiung

    2017-06-01

    The purpose of this work is to study the spatial dynamics of one-dimensional multilayer cellular neural networks. We first establish the existence of rightward and leftward spreading speeds of the model. Then we show that the spreading speeds coincide with the minimum wave speeds of the traveling wave fronts in the right and left directions. Moreover, we obtain the asymptotic behavior of the traveling wave fronts when the wave speeds are positive and greater than the spreading speeds. According to the asymptotic behavior and using various kinds of comparison theorems, some front-like entire solutions are constructed by combining the rightward and leftward traveling wave fronts with different speeds and a spatially homogeneous solution of the model. Finally, various qualitative features of such entire solutions are investigated.

  3. Widrow-cellular neural network and optoelectronic implementation

    NASA Astrophysics Data System (ADS)

    Bal, Abdullah

    A new type of optoelectronic cellular neural network has been developed by adding the capability of coefficient adjustment to the cellular neural network (CNN) using a Widrow-based perceptron learning algorithm. The new supervised cellular neural network is called the Widrow-CNN. Unlike the unsupervised CNN, the proposed learning algorithm allows the Widrow-CNN to be used easily for various image processing applications. In addition, the image processing and feature extraction capability of the CNN has been improved using a basic joint transform correlation architecture. This hardware implementation provides high-speed processing compared to digital implementations. The optoelectronic Widrow-CNN has been tested on classic CNN feature extraction problems, and it yields good results even for hard feature extraction problems such as diagonal line detection and vertical line determination.
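
    The following is a minimal numerical sketch of the idea described above, assuming the standard CNN cell dynamics with 3x3 feedback (A) and control (B) templates and a Widrow-Hoff (LMS) style correction of the control-template coefficients; the template values, learning rate, target image, and single-cell update are illustrative assumptions, not taken from the paper.

      import numpy as np
      from scipy.signal import convolve2d

      def cnn_output(x):
          # Standard piecewise-linear CNN output: f(x) = 0.5 * (|x + 1| - |x - 1|)
          return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

      def cnn_step(x, u, A, B, bias, dt=0.1):
          # One Euler step of the standard CNN state equation
          #   dx/dt = -x + A * f(x) + B * u + bias   (A, B are 3x3 templates)
          y = cnn_output(x)
          dx = -x + convolve2d(y, A, mode="same") + convolve2d(u, B, mode="same") + bias
          return x + dt * dx

      def widrow_update(B, u_patch, error, lr=0.01):
          # Widrow-Hoff (LMS) style correction of the control template B from one
          # 3x3 input patch and the output error at the corresponding centre cell.
          return B + lr * error * u_patch

      # Toy usage: adapt B so the settled network output approaches a target image.
      rng = np.random.default_rng(0)
      u = rng.uniform(-1.0, 1.0, (16, 16))        # input image
      target = np.sign(u)                          # illustrative target output
      A = np.zeros((3, 3)); A[1, 1] = 2.0          # simple self-feedback template
      B = rng.normal(0.0, 0.1, (3, 3))             # control template to be learned
      bias, x = 0.0, np.zeros_like(u)
      for _ in range(200):                         # let the network settle
          x = cnn_step(x, u, A, B, bias)
      y = cnn_output(x)
      i, j = 8, 8                                  # adapt on one centre cell (toy example)
      B = widrow_update(B, u[i - 1:i + 2, j - 1:j + 2], target[i, j] - y[i, j])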

  4. Digital implementation of shunting-inhibitory cellular neural network

    NASA Astrophysics Data System (ADS)

    Hammadou, Tarik; Bouzerdoum, Abdesselam; Bermak, Amine

    2000-05-01

    Shunting inhibition is a model of early visual processing that can provide contrast and edge enhancement as well as dynamic range compression. An architecture of a digital Shunting Inhibitory Cellular Neural Network for real-time image processing is presented. The proposed architecture is intended for use in a complete vision system for edge detection and image enhancement. The hardware architecture is modeled and simulated in VHDL, and simulation results show the functional validity of the proposed architecture.
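
    Below is a minimal sketch of the shunting-inhibition dynamics such a network digitizes, assuming the commonly used cell model dx/dt = I - a*x - x * sum_k w_k * f(x_k) over a 3x3 neighbourhood; the kernel, decay constant, activation, and test image are illustrative, not the paper's VHDL design.

      import numpy as np
      from scipy.signal import convolve2d

      def sicnn_step(x, I_ext, a=1.0, dt=0.05):
          # One Euler step of a shunting inhibitory cellular network of the common
          # form dx/dt = I - a*x - x * sum_k w_k * f(x_k), where the sum runs over
          # a 3x3 neighbourhood (illustrative weights below).
          W = np.array([[0.1, 0.1, 0.1],
                        [0.1, 0.0, 0.1],
                        [0.1, 0.1, 0.1]])          # inhibitory coupling kernel
          f = np.maximum(x, 0.0)                    # rectifying activation (assumption)
          inhibition = convolve2d(f, W, mode="same", boundary="symm")
          dx = I_ext - a * x - x * inhibition
          return x + dt * dx

      # Toy usage: contrast/edge enhancement of a small grey-scale ramp.
      img = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
      x = np.zeros_like(img)
      for _ in range(400):
          x = sicnn_step(x, img)
      # x now holds a contrast-enhanced version of the input (shunting effect).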

  5. A competitive layer model for cellular neural networks.

    PubMed

    Zhou, Wei; Zurada, Jacek M

    2012-09-01

    This paper discusses a Competitive Layer Model (CLM) for a class of recurrent Cellular Neural Networks (CNNs), covering both continuous-time and discrete-time types. The objective of the CLM is to partition a set of input features into salient groups. The complete convergence of such networks in the continuous-time case is discussed first. We give a necessary condition, and a necessary and sufficient condition, under which the CLM property exists in our networks. We also discuss the properties of such networks in the discrete-time case and propose a novel CLM iteration method. This method shows similar performance and storage allocation but faster convergence compared with the previous CLM iteration method (Wersing, Steil, & Ritter, 2001a). Especially for a large-scale network with many features and layers, it can significantly reduce the computing time. Examples and simulation results are used to illustrate the developed theory, the comparison between the two CLM iteration methods, and the application to image segmentation.

  6. A learning algorithm for oscillatory cellular neural networks.

    PubMed

    Ho, C Y.; Kurokawa, H

    1999-07-01

    We present a cellular-type oscillatory neural network for the temporal segregation of stationary input patterns. The model comprises an array of locally connected neural oscillators with connections limited to a 4-connected neighborhood. The architecture is reminiscent of the well-known cellular neural network, which relies on local connections for feature extraction. By means of a novel learning rule and an initialization scheme, global synchronization can be accomplished without incurring any erroneous synchrony among uncorrelated objects. Each oscillator comprises two mutually coupled neurons, and the neurons share a piecewise-linear activation function. The dynamics of traditional oscillatory models are simplified by using only one plastic synapse, and the overall complexity of a hardware implementation is reduced. Based on the connectedness of image segments, it is shown that global synchronization and desynchronization can be achieved by means of locally connected synapses, which opens up a tremendous application potential for the proposed architecture. Furthermore, by using special grouping synapses, it is demonstrated that temporal segregation of overlapping gray-level and color segments can also be achieved. Finally, simulation results show that the proposed learning rule circumvents the problem of component mismatches and hence facilitates large-scale integration.

  7. Separation of Bouguer anomaly map using cellular neural network

    NASA Astrophysics Data System (ADS)

    Albora, A. Muhittin; Ucan, Osman N.; Ozmen, Atilla; Ozkan, Tulay

    2001-02-01

    In this paper, a modern image-processing technique, the Cellular Neural Network (CNN), has first been applied to Bouguer anomaly maps of synthetic examples and then to data from the Sivas-Divrigi Akdag region. CNN is an analog parallel computing paradigm defined in space and characterized by the locality of connections between processing neurons. The behaviour of the CNN is defined by two template matrices and a template vector. We have optimised the weight coefficients of these templates using the Recurrent Perceptron Learning Algorithm (RPLA). After testing CNN performance on synthetic examples, the CNN approach has been applied to the Bouguer anomaly of the Sivas-Divrigi Akdag region, and the results match drilling logs carried out by Mineral Research and Exploration (MTA).

  8. Cellular neural network analysis for two-dimensional bioheat transfer equation.

    PubMed

    Niu, J H; Wang, H Z; Zhang, H X; Yan, J Y; Zhu, Y S

    2001-09-01

    The cellular neural network (CNN) method is applied to solve the Pennes bioheat transfer equation, and its feasibility is demonstrated. Numerical solutions were obtained with a cellular neural network for two-dimensional steady-state temperature fields produced by focused and unfocused ultrasound heat sources. Transient-state temperature fields were also studied and compared with experimental results obtained elsewhere. The cellular neural network's key features of asynchronous parallel processing, continuous-time dynamics and local interaction enable real-time temperature field estimation for clinical hyperthermia.
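
    As a point of reference for the equation being solved, the following is a minimal sketch of a Jacobi relaxation of the steady-state two-dimensional Pennes bioheat equation on a uniform grid (the locally coupled update a CNN cell array would carry out in parallel); the tissue parameters and heat source are illustrative, not the paper's values.

      import numpy as np

      def pennes_steady_state(Q, k=0.5, wb_cb=2700.0, Ta=37.0, h=1e-3, n_iter=5000):
          # Jacobi relaxation of the steady-state 2-D Pennes bioheat equation
          #   k * laplacian(T) - wb_cb * (T - Ta) + Q = 0
          # on a uniform grid with spacing h and fixed boundaries at Ta.
          # Parameter values are illustrative, not taken from the paper.
          T = np.full_like(Q, Ta)
          for _ in range(n_iter):
              nbr_sum = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                         np.roll(T, 1, 1) + np.roll(T, -1, 1))
              T_new = (k * nbr_sum / h**2 + wb_cb * Ta + Q) / (4 * k / h**2 + wb_cb)
              T_new[0, :] = T_new[-1, :] = T_new[:, 0] = T_new[:, -1] = Ta
              T = T_new
          return T

      # Toy usage: a focused-ultrasound-like heat source in the centre of the domain.
      Q = np.zeros((64, 64))
      Q[28:36, 28:36] = 5.0e4          # W/m^3, illustrative
      T = pennes_steady_state(Q)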

  9. Almost automorphic solutions for shunting inhibitory cellular neural networks with time-varying delays.

    PubMed

    Xu, Changjin; Liao, Maoxin

    2015-01-01

    This paper is concerned with the shunting inhibitory cellular neural networks with time-varying delays. Under some suitable conditions, we establish some criteria on the existence and global exponential stability of the almost automorphic solutions of the networks. Numerical simulations are given to support the theoretical findings.

  10. Effects of cellular homeostatic intrinsic plasticity on dynamical and computational properties of biological recurrent neural networks.

    PubMed

    Naudé, Jérémie; Cessac, Bruno; Berry, Hugues; Delord, Bruno

    2013-09-18

    Homeostatic intrinsic plasticity (HIP) is a ubiquitous cellular mechanism regulating neuronal activity, cardinal for the proper functioning of nervous systems. In invertebrates, HIP is critical for orchestrating stereotyped activity patterns. The functional impact of HIP remains more obscure in vertebrate networks, where higher order cognitive processes rely on complex neural dynamics. The hypothesis has emerged that HIP might control the complexity of activity dynamics in recurrent networks, with important computational consequences. However, conflicting results about the causal relationships between cellular HIP, network dynamics, and computational performance have arisen from machine-learning studies. Here, we assess how cellular HIP effects translate into collective dynamics and computational properties in biological recurrent networks. We develop a realistic multiscale model including a generic HIP rule regulating the neuronal threshold with actual molecular signaling pathways kinetics, Dale's principle, sparse connectivity, synaptic balance, and Hebbian synaptic plasticity (SP). Dynamic mean-field analysis and simulations unravel that HIP sets a working point at which inputs are transduced by large derivative ranges of the transfer function. This cellular mechanism ensures increased network dynamics complexity, robust balance with SP at the edge of chaos, and improved input separability. Although critically dependent upon balanced excitatory and inhibitory drives, these effects display striking robustness to changes in network architecture, learning rates, and input features. Thus, the mechanism we unveil might represent a ubiquitous cellular basis for complex dynamics in neural networks. Understanding this robustness is an important challenge to unraveling principles underlying self-organization around criticality in biological recurrent neural networks.
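
    A minimal sketch of a generic homeostatic-intrinsic-plasticity rule of the kind described above (slow adjustment of each neuron's threshold toward a target activity) in a small random recurrent rate network; the transfer function, time constants, target rate, and connectivity statistics are assumptions, not the paper's calibrated molecular-kinetics model.

      import numpy as np

      rng = np.random.default_rng(1)
      N = 200
      J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # random recurrent weights
      theta = np.zeros(N)                              # per-neuron thresholds (HIP target)
      r = np.zeros(N)                                  # firing rates
      target_rate = 0.1
      tau_r, tau_hip = 10.0, 1000.0                    # HIP much slower than rate dynamics
      dt = 1.0

      def transfer(x):
          return 1.0 / (1.0 + np.exp(-x))              # sigmoidal transfer function

      for t in range(20000):
          inp = J @ r + 0.5 * rng.normal(size=N)       # recurrent drive plus noise
          r += dt / tau_r * (-r + transfer(inp - theta))
          # Homeostatic intrinsic plasticity: raise the threshold of neurons firing
          # above the target rate, lower it for those firing below.
          theta += dt / tau_hip * (r - target_rate)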

  11. Cellular Neural Network for Real Time Image Processing

    SciTech Connect

    Vagliasindi, G.; Arena, P.; Fortuna, L.; Mazzitelli, G.; Murari, A.

    2008-03-12

    Since their introduction in 1988, Cellular Nonlinear Networks (CNNs) have found a key role as image processing instruments. Thanks to their structure, they are able to process individual pixels in parallel, providing fast image processing capabilities that have been applied to a wide range of fields, among which is nuclear fusion. In recent years, indeed, visible and infrared video cameras have become more and more important in tokamak fusion experiments, with the twofold aim of understanding the physics and monitoring the safety of the operation. Examining the output of these cameras in real time can provide significant information for plasma control and the safety of the machines. The potential of CNNs can be exploited to this aim. To demonstrate the feasibility of the approach, CNN image processing has been applied to several tasks both at the Frascati Tokamak Upgrade (FTU) and the Joint European Torus (JET).

  12. Cellular Neural Network for Real Time Image Processing

    NASA Astrophysics Data System (ADS)

    Vagliasindi, G.; Arena, P.; Fortuna, L.; Mazzitelli, G.; Murari, A.

    2008-03-01

    Since their introduction in 1988, Cellular Nonlinear Networks (CNNs) have found a key role as image processing instruments. Thanks to their structure, they are able to process individual pixels in parallel, providing fast image processing capabilities that have been applied to a wide range of fields, among which is nuclear fusion. In recent years, indeed, visible and infrared video cameras have become more and more important in tokamak fusion experiments, with the twofold aim of understanding the physics and monitoring the safety of the operation. Examining the output of these cameras in real time can provide significant information for plasma control and the safety of the machines. The potential of CNNs can be exploited to this aim. To demonstrate the feasibility of the approach, CNN image processing has been applied to several tasks both at the Frascati Tokamak Upgrade (FTU) and the Joint European Torus (JET).

  13. Almost Periodic Dynamics for Memristor-Based Shunting Inhibitory Cellular Neural Networks with Leakage Delays

    PubMed Central

    Lu, Lin

    2016-01-01

    We investigate a class of memristor-based shunting inhibitory cellular neural networks with leakage delays. By applying a new Lyapunov function method, we prove that the neural network has a unique almost periodic solution, which is globally exponentially stable. Moreover, the theoretical findings of this paper on the almost periodic solution are applied to prove the existence and stability of the periodic solution for memristor-based shunting inhibitory cellular neural networks with leakage delays and periodic coefficients. An example is given to illustrate the effectiveness of the theoretical results. The results obtained in this paper are completely new and complement the previously known studies of Wu (2011) and Chen and Cao (2002). PMID:27840634

  14. Hardware Implementation of a Desktop Supercomputer for High Performance Image Processing. Color Image Processing Using Cellular Neural Networks

    DTIC Science & Technology

    1994-11-01

    This report addresses the functional behavior of Cellular Neural Networks (CNN). The impact of variable convergence times on the proper operation of... The report discusses the new fault model, presents the algorithmic procedures, and shows simulated testing results. Keywords: Cellular Neural Networks, Testing.

  15. Convolutional neural networks for automated annotation of cellular cryo-electron tomograms.

    PubMed

    Chen, Muyuan; Dai, Wei; Sun, Stella Y; Jonasch, Darius; He, Cynthia Y; Schmid, Michael F; Chiu, Wah; Ludtke, Steven J

    2017-10-01

    Cellular electron cryotomography offers researchers the ability to observe macromolecules frozen in action in situ, but a primary challenge with this technique is identifying molecular components within the crowded cellular environment. We introduce a method that uses neural networks to dramatically reduce the time and human effort required for subcellular annotation and feature extraction. Subsequent subtomogram classification and averaging yield in situ structures of molecular components of interest. The method is available in the EMAN2.2 software package.

  16. A Memristive Multilayer Cellular Neural Network With Applications to Image Processing.

    PubMed

    Hu, Xiaofang; Feng, Gang; Duan, Shukai; Liu, Lu

    2016-05-13

    The memristor has been extensively studied in electrical engineering and biological sciences as a means to compactly implement the synaptic function in neural networks. The cellular neural network (CNN) is one of the most implementable artificial neural network models and is capable of massively parallel analog processing. In this paper, a novel memristive multilayer CNN (Mm-CNN) model is presented along with its performance analysis and applications. In this new CNN design, the memristor crossbar circuit acts as the synapse, which realizes one signed synaptic weight with a pair of memristors and performs the synaptic weighting compactly and linearly. Moreover, the complex weighted summation is executed efficiently with a proper design of the Mm-CNN cell circuits. The proposed Mm-CNN has several merits, such as compactness, nonvolatility, versatility, and programmability of synaptic weights. Its performance in several image processing applications is illustrated through simulations.
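
    The following is a minimal sketch of the weighting scheme described above, assuming one signed synaptic weight is realized as the difference of two memristor conductances (w proportional to G+ - G-); the conductance range, scaling, and template values are illustrative, not the paper's circuit parameters.

      import numpy as np

      G_MIN, G_MAX = 1e-6, 1e-4      # conductance range of one memristor (illustrative)

      def weight_to_conductance_pair(w, w_max=1.0):
          # Map a signed weight in [-w_max, w_max] onto a (G_plus, G_minus) pair
          # so that w is proportional to G_plus - G_minus.
          g_span = G_MAX - G_MIN
          g_diff = (w / w_max) * g_span
          g_plus = G_MIN + max(g_diff, 0.0)
          g_minus = G_MIN + max(-g_diff, 0.0)
          return g_plus, g_minus

      def synaptic_summation(voltages, weights, scale=1.0 / (G_MAX - G_MIN)):
          # Weighted summation realized as the difference of two crossbar current sums.
          i_plus = i_minus = 0.0
          for v, w in zip(voltages, weights):
              gp, gm = weight_to_conductance_pair(w)
              i_plus += v * gp
              i_minus += v * gm
          return scale * (i_plus - i_minus)

      # Toy usage: a 3x3 control template applied to one neighbourhood of inputs.
      template = [0.0, 1.0, 0.0, 1.0, 2.0, 1.0, 0.0, 1.0, 0.0]
      inputs = list(np.random.default_rng(2).uniform(-1, 1, 9))
      print(synaptic_summation(inputs, [t / 2.0 for t in template]))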

  17. Complete stability of cellular neural networks with unbounded time-varying delays.

    PubMed

    Wang, Lili; Chen, Tianping

    2012-12-01

    In this paper, we are concerned with delayed cellular neural networks (DCNNs) in the case that the time-varying delays are unbounded. Under some conditions, it is shown that the DCNNs can exhibit 3^n equilibrium points. Then, we track the dynamics of u(t) (t > 0) in two cases with respect to the different types of subset regions in which u(0) is located. We conclude that every solution trajectory u(t) converges to one of the equilibrium points despite the time-varying delays, that is, the delayed cellular neural networks are completely stable. The method is novel and the results obtained extend existing ones. In addition, two illustrative examples are presented to verify the effectiveness of our results.

  18. Exponential stability of delayed and impulsive cellular neural networks with partially Lipschitz continuous activation functions.

    PubMed

    Song, Xueli; Xin, Xing; Huang, Wenpo

    2012-05-01

    The paper discusses the exponential stability of distributed delayed and impulsive cellular neural networks with partially Lipschitz continuous activation functions. By the relative nonlinear measure method, some novel criteria are obtained for the uniqueness and exponential stability of the equilibrium point. Our method abandons the usual assumptions of global Lipschitz continuity, boundedness and monotonicity of the activation functions. Our results generalize and improve some existing ones. Finally, two examples and their simulations are presented to illustrate the correctness of our analysis.

  19. Cellular Neural Network Models of Growth and Immune of Effector Cells Response to Cancer

    NASA Astrophysics Data System (ADS)

    Su, Yongmei; Min, Lequan

    Four reaction-diffusion cellular neural network (R-D CNN) models are set up based on the differential equation models for the growth of effector cells and cancer cells and on the model of the immune response to cancer proposed by Allison et al. The CNN models have different reaction-diffusion coefficients and coupling parameters. The R-D CNN models may provide possible quantitative interpretations and are in good agreement with the in vitro experimental data reported by Allison et al.

  20. Cellular neural network implementation using a phase-only joint transform correlator

    NASA Astrophysics Data System (ADS)

    Zhang, Shuqun; Karim, Mohammad A.

    1999-04-01

    A phase-only joint transform correlator (JTC) is used to realize cellular neural networks (CNNs). The operation of summing cross-correlations of bipolar data in CNNs can be realized in parallel by phase-encoding bipolar data. Compared to other optical systems for implementing CNNs, the proposed method offers the advantages of easier implementation and robustness in terms of system alignment, and requires neither electronic precalculation nor data rearrangement. Simulation results of the proposed optical CNNs for edge detection are provided.

  1. Realization of couplings in a polynomial type mixed-mode cellular neural network.

    PubMed

    Laiho, Mika; Paasio, Ari; Kananen, Asko; Halonen, Kari

    2003-12-01

    In this paper, the realization of couplings between cells in a polynomial-type mixed-mode cellular neural network (CNN) is analyzed. The choice of the multiplier is discussed and two multiplier types are analyzed. Also, two circuits for generating the second- and third-order polynomial terms of the cell output are described. The accuracy of the multipliers and polynomial circuits in the presence of device mismatch is analyzed.

  2. Multilayer cellular neural network and fuzzy C-mean classifiers: comparison and performance analysis

    NASA Astrophysics Data System (ADS)

    Trujillo San-Martin, Maite; Hlebarov, Vejen; Sadki, Mustapha

    2004-11-01

    Neural networks and fuzzy systems are considered two of the most important artificial intelligence algorithms, providing classification capabilities obtained through different learning schemes that capture knowledge and process it according to particular rule-based algorithms. These methods are especially suited to exploit the tolerance for uncertainty and vagueness in cognitive reasoning. By applying these methods with some relevant knowledge-based rules extracted using different data analysis tools, it is possible to obtain a robust classification performance for a wide range of applications. This paper focuses on non-destructive testing quality control systems, in particular the classification of metallic structures according to corrosion time using a novel cellular neural network architecture, which is explained in detail. Additionally, we compare these results with the ones obtained using the fuzzy C-means clustering algorithm and analyse both classifiers according to their classification capabilities.
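
    For the comparison baseline mentioned above, the following is a minimal sketch of the standard fuzzy C-means algorithm (alternating membership and centroid updates with fuzzifier m); the synthetic two-dimensional features stand in for the corrosion-time feature vectors and are purely illustrative.

      import numpy as np

      def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
          # Standard fuzzy C-means: alternate membership and centroid updates.
          rng = np.random.default_rng(seed)
          n = X.shape[0]
          U = rng.random((n, c))
          U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships, rows sum to 1
          for _ in range(n_iter):
              Um = U ** m
              centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
              # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
              U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
          return U, centers

      # Toy usage with synthetic 2-D feature vectors (stand-ins for corrosion features).
      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(loc, 0.3, (50, 2)) for loc in ((0, 0), (3, 0), (0, 3))])
      U, centers = fuzzy_c_means(X)
      labels = U.argmax(axis=1)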

  3. Cellular pulse-coupled neural network with adaptive weights for image segmentation and its VLSI implementation

    NASA Astrophysics Data System (ADS)

    Schreiter, Juerg; Ramacher, Ulrich; Heittmann, Arne; Matolin, Daniel; Schuffny, Rene

    2004-05-01

    We present a cellular pulse-coupled neural network with adaptive weights and its analog VLSI implementation. The neural network operates on a scalar image feature, such as grey scale or the output of a spatial filter. It detects segments and marks them with synchronous pulses of the corresponding neurons. The network consists of integrate-and-fire neurons, which are coupled to their nearest neighbors via adaptive synaptic weights. Adaptation follows either one of two empirical rules. Both rules lead to spike grouping in wave-like patterns. This synchronous activity binds groups of neurons and labels the corresponding image segments. Applications of the network also include feature-preserving noise removal, image smoothing, and detection of bright and dark spots. The adaptation rules are insensitive to parameter deviations, mismatch and non-ideal approximation of the functions involved, which makes an analog VLSI implementation feasible. Simulations showed no significant differences in the synchronization properties between networks using the ideal adaptation rules and networks incorporating implementation effects such as randomly distributed parameters and roughly implemented adaptation functions. A prototype is currently being designed and fabricated using an Infineon 130 nm technology. It comprises a 128 × 128 neuron array, analog image memory, and an address event representation pulse output.
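
    A minimal software sketch of a locally coupled integrate-and-fire lattice of the kind described above, with an illustrative adaptation rule that strengthens couplings between neighbours carrying similar feature values; the thresholds, time constants, and the adaptation rule itself are assumptions, not the chip's empirical rules.

      import numpy as np

      def run_pulse_coupled_lattice(feature, steps=2000, dt=0.1, thresh=1.0,
                                    tau=5.0, eta=0.05):
          # Integrate-and-fire neurons on a grid, each driven by one scalar image
          # feature and coupled to its 4-neighbours through adaptive weights.
          # Weights relax toward a similarity-based target (illustrative rule).
          H, W = feature.shape
          v = np.random.default_rng(0).uniform(0, thresh, (H, W))   # membrane potentials
          w = np.full((4, H, W), 0.1)       # one weight per (direction, cell)
          spikes_log = []
          shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)]
          for _ in range(steps):
              spikes = v >= thresh
              v[spikes] = 0.0
              spikes_log.append(spikes.copy())
              coupling = np.zeros((H, W))
              for k, (dy, dx) in enumerate(shifts):
                  nbr_spikes = np.roll(spikes, (dy, dx), axis=(0, 1))
                  coupling += w[k] * nbr_spikes
                  # Adaptation: pull the weight toward a feature-similarity target.
                  nbr_feat = np.roll(feature, (dy, dx), axis=(0, 1))
                  target = np.exp(-np.abs(feature - nbr_feat) / 0.1)
                  w[k] += eta * (target - w[k]) * nbr_spikes
              v += dt * (-v / tau + feature) + coupling
          return np.array(spikes_log)

      # Toy usage: two flat segments of different grey level should end up firing
      # in separate synchronous groups after adaptation.
      img = np.zeros((16, 16)); img[:, 8:] = 0.5
      img += 0.3                     # baseline drive so both segments fire
      spikes = run_pulse_coupled_lattice(img)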

  4. Functional recognition imaging using artificial neural networks: applications to rapid cellular identification via broadband electromechanical response

    NASA Astrophysics Data System (ADS)

    Nikiforov, M. P.; Reukov, V. V.; Thompson, G. L.; Vertegel, A. A.; Guo, S.; Kalinin, S. V.; Jesse, S.

    2009-10-01

    Functional recognition imaging in scanning probe microscopy (SPM) using artificial neural network identification is demonstrated. This approach utilizes statistical analysis of complex SPM responses at a single spatial location to identify the target behavior, which is reminiscent of associative thinking in the human brain, obviating the need for analytical models. We demonstrate, as an example of recognition imaging, rapid identification of cellular organisms using the difference in electromechanical activity over a broad frequency range. Single-pixel identification of model Micrococcus lysodeikticus and Pseudomonas fluorescens bacteria is achieved, demonstrating the viability of the method.

  5. A new method for the re-implementation of threshold logic functions with cellular neural networks.

    PubMed

    Bénédic, Y; Wira, P; Mercklé, J

    2008-08-01

    A new strategy is presented for the implementation of threshold logic functions with binary-output Cellular Neural Networks (CNNs). The objective is to optimize the CNN's weights to develop a robust implementation. Hence, the concept of a generative set is introduced as a convenient representation of any linearly separable Boolean function. Our analysis of threshold logic functions leads to a complete algorithm that automatically provides an optimized generative set. New weights are deduced, and a more robust CNN template realizing the same function can thus be implemented. The strategy is illustrated by a detailed example.

  6. Moving object segmentation algorithm based on cellular neural networks in the H.264 compressed domain

    NASA Astrophysics Data System (ADS)

    Feng, Jie; Chen, Yaowu; Tian, Xiang

    2009-07-01

    A cellular neural network (CNN)-based moving object segmentation algorithm in the H.264 compressed domain is proposed. This algorithm mainly utilizes motion vectors directly extracted from H.264 bitstreams. To improve the robustness of the motion vector information, the intra modes in I-frames are first used for smooth and non-smooth region classification, and the residual coefficient energy of P-frames is used to update the classification results. Then, an adaptive motion vector filter is applied according to the inter partition modes. Finally, several CNN models are applied to implement moving object segmentation based on the motion vector fields. Experimental results are presented to verify the efficiency and the robustness of this algorithm.

  7. Segmentation algorithm via Cellular Neural/Nonlinear Network: implementation on Bio-inspired hardware platform

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Vecchio, Pietro; Grassi, Giuseppe

    2011-12-01

    The Bio-inspired (Bi-i) Cellular Vision System is a computing platform consisting of sensing, array sensing-processing, and digital signal processing. The platform is based on the Cellular Neural/Nonlinear Network (CNN) paradigm. This article presents the implementation of a novel CNN-based segmentation algorithm on the Bi-i system. Each part of the algorithm, along with the corresponding implementation on the hardware platform, is carefully described throughout the article. The experimental results, carried out for the Foreman and Car-phone video sequences, highlight the feasibility of the approach, which provides a frame rate of about 26 frames/s. Comparisons with existing CNN-based methods show that the conceived approach is more accurate, thus representing a good trade-off between real-time requirements and accuracy.

  8. Condition monitoring of 3G cellular networks through competitive neural models.

    PubMed

    Barreto, Guilherme A; Mota, João C M; Souza, Luis G M; Frota, Rewbenio A; Aguayo, Leonardo

    2005-09-01

    We develop an unsupervised approach to condition monitoring of cellular networks using competitive neural algorithms. Training is carried out with state vectors representing the normal functioning of a simulated CDMA2000 network. Once training is completed, global and local normality profiles (NPs) are built from the distribution of quantization errors of the training state vectors and their components, respectively. The global NP is used to evaluate the overall condition of the cellular system. If abnormal behavior is detected, local NPs are used in a component-wise fashion to find abnormal state variables. Anomaly detection tests are performed via percentile-based confidence intervals computed over the global and local NPs. We compared the performance of four competitive algorithms [winner-take-all (WTA), frequency-sensitive competitive learning (FSCL), self-organizing map (SOM), and neural-gas algorithm (NGA)] and the results suggest that the joint use of global and local NPs is more efficient and more robust than current single-threshold methods.
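
    The following is a minimal sketch of the monitoring scheme described above: train a competitive model on normal state vectors, build a normality profile from their quantization errors, and flag new states whose error exceeds a chosen percentile. A plain winner-take-all codebook stands in for the SOM/FSCL/NGA variants, and the state-vector dimensions and threshold percentile are illustrative.

      import numpy as np

      def train_wta_codebook(X, n_units=16, epochs=50, lr=0.1, seed=0):
          # Plain winner-take-all competitive learning on normal-state vectors.
          rng = np.random.default_rng(seed)
          W = X[rng.choice(len(X), n_units, replace=False)].copy()
          for _ in range(epochs):
              for x in X[rng.permutation(len(X))]:
                  k = np.argmin(np.linalg.norm(W - x, axis=1))   # winning prototype
                  W[k] += lr * (x - W[k])
          return W

      def quantization_errors(W, X):
          return np.min(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)

      # Global normality profile: percentile-based threshold on training errors.
      rng = np.random.default_rng(1)
      X_normal = rng.normal(0.0, 1.0, (500, 6))       # stand-in for KPI state vectors
      W = train_wta_codebook(X_normal)
      threshold = np.percentile(quantization_errors(W, X_normal), 99)

      x_new = rng.normal(0.0, 1.0, 6) + np.array([0, 0, 4.0, 0, 0, 0])  # one abnormal variable
      is_abnormal = quantization_errors(W, x_new[None, :])[0] > threshold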

  9. Adding learning to cellular genetic algorithms for training recurrent neural networks.

    PubMed

    Ku, K W; Mak, M W; Siu, W C

    1999-01-01

    This paper proposes a hybrid optimization algorithm which combines the efforts of local search (individual learning) and cellular genetic algorithms (GA's) for training recurrent neural networks (RNN's). Each weight of an RNN is encoded as a floating point number, and a concatenation of the numbers forms a chromosome. Reproduction takes place locally in a square grid with each grid point representing a chromosome. Two approaches, Lamarckian and Baldwinian mechanisms, for combining cellular GA's and learning have been compared. Different hill-climbing algorithms are incorporated into the cellular GA's as learning methods. These include the real-time recurrent learning (RTRL) and its simplified versions, and the delta rule. The RTRL algorithm has been successively simplified by freezing some of the weights to form simplified versions. The delta rule, which is the simplest form of learning, has been implemented by considering the RNN's as feedforward networks during learning. The hybrid algorithms are used to train the RNN's to solve a long-term dependency problem. The results show that Baldwinian learning is inefficient in assisting the cellular GA. It is conjectured that the more difficult it is for genetic operations to produce the genotypic changes that match the phenotypic changes due to learning, the poorer is the convergence of Baldwinian learning. Most of the combinations using the Lamarckian mechanism show an improvement in reducing the number of generations required for an optimum network; however, only a few can reduce the actual time taken. Embedding the delta rule in the cellular GA's has been found to be the fastest method. It is also concluded that learning should not be too extensive if the hybrid algorithm is to benefit from learning.

  10. Application of neural networks to channel assignment for cellular CDMA networks with multiple services and mobile base stations

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    1996-03-01

    The application of artificial neural networks to the channel assignment problem for cellular code-division multiple access (CDMA) telecommunications systems is considered. CDMA takes advantage of voice activity and spatial isolation because its capacity is only interference limited, unlike time-division multiple access (TDMA) and frequency-division multiple access (FDMA) where capacities are bandwidth limited. Any reduction in interference in CDMA translates linearly into increased capacity. FDMA and TDMA use a frequency reuse pattern as a method to increase capacity, while CDMA reuses the same frequency for all cells and gains a reuse efficiency by means of orthogonal codes. The latter method can improve system capacity by factors of four to six over digital TDMA or FDMA. Cellular carriers are planning to provide multiple communication services using CDMA in the next generation cellular system infrastructure. The approach of this study is the use of neural network methods for automatic and local network control, based on traffic behavior in specific cell sites and demand history. The goal is to address certain problems associated with the management of mobile and personal communication services in a cellular radio communications environment. In planning a cellular radio network, the operator assigns channels to the radio cells so that the probability of the processed carrier-to-interference ratio, C/I, exceeding a predefined value is sufficiently low. The RF propagation, determined from the topography and infrastructure in the operating area, is used in conjunction with the densities of expected communications traffic to formulate interference constraints. These constraints state which radio cells may use the same code (channel) or adjacent channels at a time. The traffic loading and the number of service grades can also be used to calculate the number of required channels (codes) for each cell. The general assignment problem is the task of assigning the required number

  11. Smart-Pixel Array Processors Based on Optimal Cellular Neural Networks for Space Sensor Applications

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Sheu, Bing J.; Venus, Holger; Sandau, Rainer

    1997-01-01

    A smart-pixel cellular neural network (CNN) with hardware annealing capability, digitally programmable synaptic weights, and multisensor parallel interface has been under development for advanced space sensor applications. The smart-pixel CNN architecture is a programmable multi-dimensional array of optoelectronic neurons which are locally connected with their local neurons and associated active-pixel sensors. Integration of the neuroprocessor in each processor node of a scalable multiprocessor system offers orders-of-magnitude computing performance enhancements for on-board real-time intelligent multisensor processing and control tasks of advanced small satellites. The smart-pixel CNN operation theory, architecture, design and implementation, and system applications are investigated in detail. The VLSI (Very Large Scale Integration) implementation feasibility was illustrated by a prototype smart-pixel 5x5 neuroprocessor array chip of active dimensions 1380 micron x 746 micron in a 2-micron CMOS technology.

  12. Modeling brain electrical activity in epilepsy by reaction-diffusion cellular neural networks

    NASA Astrophysics Data System (ADS)

    Gollas, F.; Tetzlaff, R.

    2005-06-01

    Reaction-Diffusion systems can be applied to describe a broad class of nonlinear phenomena, in particular in biological systems and in the propagation of nonlinear waves in excitable media. Especially, pattern formation and chaotic behavior are observed in Reaction-Diffusion systems and can be analyzed. Due to their structure, multi-layer Cellular Neural Networks (CNN) are capable of representing Reaction-Diffusion systems effectively. In this contribution, Reaction-Diffusion CNN are considered for modeling the dynamics of brain activity in epilepsy. Thereby, the parameters of the Reaction-Diffusion systems are determined in a supervised optimization process, and brain electrical activity from invasive multi-electrode EEG recordings is analyzed with the aim of detecting precursors of impending epileptic seizures. A detailed discussion of the first results and of the potential of the proposed approach is given.

  13. Parallelism on the Intel 860 Hypercube:. Ising Magnets, Hydrodynamical Cellular Automata and Neural Networks

    NASA Astrophysics Data System (ADS)

    Kohring, G. A.; Stauffer, D.

    Geometric parallelization was tested on the Intel Hypercube with 32 MIMD processors of i860 type, each with 16 Mbytes of distributed memory. We applied it to Ising models in two and three dimensions as well as to neural networks and two-dimensional hydrodynamic cellular automata. For system sizes suited to this machine, up to 60960*60960 and 1410*1410*1408 Ising spins, we found nearly one hundred percent parallel efficiency in spite of the needed inter-processor communications. For small systems, the observed deviations from full efficiency were compared with the scaling concepts of Heermann and Burkitt and of Jakobs and Gerling. For Ising models, we determined the Glauber kinetic exponent z≃2.18 in two dimensions and confirmed the stretched exponential relaxation of the magnetization towards the spontaneous magnetization below Tc. For three dimensions we found z≃2.09 and simple exponential relaxation.
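
    As a single-processor reference for the update rule used in such simulations, the following is a minimal sketch of two-dimensional Glauber (heat-bath) dynamics; the parallel version distributes strips of the lattice across processors and exchanges boundary rows, which is not shown here. Lattice size and temperature are small illustrative values.

      import numpy as np

      def glauber_sweep(spins, beta, rng):
          # One Monte Carlo sweep of 2-D Glauber dynamics: each chosen spin is set
          # to +1 with probability 1/(1 + exp(-2*beta*h)), h = sum of its neighbours.
          L = spins.shape[0]
          for _ in range(L * L):
              i, j = rng.integers(0, L, 2)
              h = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                   spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
              p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
              spins[i, j] = 1 if rng.random() < p_up else -1
          return spins

      rng = np.random.default_rng(0)
      L, beta = 64, 0.5          # beta > beta_c ~ 0.4407: ordered phase
      spins = rng.choice([-1, 1], size=(L, L))
      for sweep in range(200):
          glauber_sweep(spins, beta, rng)
      magnetization = spins.mean()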

  14. Extended LaSalle's Invariance Principle for Full-Range Cellular Neural Networks

    NASA Astrophysics Data System (ADS)

    Di Marco, Mauro; Forti, Mauro; Grazzini, Massimo; Pancioni, Luca

    2009-12-01

    In several relevant applications to the solution of signal processing tasks in real time, a cellular neural network (CNN) is required to be convergent, that is, each solution should tend toward some equilibrium point. The paper develops a Lyapunov method, which is based on a generalized version of LaSalle's invariance principle, for studying convergence and stability of the differential inclusions modeling the dynamics of the full-range (FR) model of CNNs. The applicability of the method is demonstrated by obtaining a rigorous proof of convergence for symmetric FR-CNNs. The proof, which is a direct consequence of the fact that a symmetric FR-CNN admits a strict Lyapunov function, is much more simple than the corresponding proof of convergence for symmetric standard CNNs.

  15. Global detection of live virtual machine migration based on cellular neural networks.

    PubMed

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through an analysis of the detection process, the parameter relationships of the CNN are mapped to an optimization problem, which is solved by an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and it is amenable to parallelism and analog very large scale integration (VLSI) implementation, allowing the VM migration detection to be performed better.
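
    A minimal sketch of the global-best particle swarm optimization loop that such parameter searches rely on (the paper's variant adds a bubble-sort-based improvement, which is not reproduced here); the toy objective and 19-element parameter vector stand in for the CNN template parameters and the detection-quality measure.

      import numpy as np

      def pso(objective, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5,
              bounds=(-1.0, 1.0), seed=0):
          # Standard global-best particle swarm optimization (minimization).
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n_particles, dim))      # positions (e.g. CNN templates)
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
          gbest = pbest[np.argmin(pbest_f)].copy()
          for _ in range(n_iter):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([objective(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              gbest = pbest[np.argmin(pbest_f)].copy()
          return gbest, pbest_f.min()

      # Toy usage: recover a known 19-element parameter vector (two 3x3 templates plus bias).
      target = np.random.default_rng(1).uniform(-1, 1, 19)
      best, best_f = pso(lambda p: np.sum((p - target) ** 2), dim=19)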

  16. Modeling of tropospheric ozone concentrations using genetically trained multi-level cellular neural networks

    NASA Astrophysics Data System (ADS)

    Ozcan, H. Kurtulus; Bilgili, Erdem; Sahin, Ulku; Ucan, O. Nuri; Bayat, Cuma

    2007-09-01

    Tropospheric ozone concentrations, which are an important air pollutant, are modeled by the use of an artificial intelligence structure. Data obtained from air pollution measurement stations in the city of Istanbul are utilized in constituting the model. A supervised algorithm for the evaluation of ozone concentration using a genetically trained multi-level cellular neural network (ML-CNN) is introduced, developed, and applied to real data. A genetic algorithm is used in the optimization of CNN templates. The model results and the actual measurement results are compared and statistically evaluated. It is observed that seasonal changes in ozone concentrations are reflected effectively by the concentrations estimated by the multilevel-CNN model structure, with a correlation value of 0.57 ascertained between actual and model results. It is shown that the multilevel-CNN modeling technique is as satisfactory as other modeling techniques in associating the data in a complex medium in air pollution applications.

  18. Computerized detection of pulmonary nodules using cellular neural networks in CT images

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangwei; McLennan, Geoffrey; Hoffman, Eric A.; Sonka, Milan

    2004-05-01

    The purpose of this study is to develop a computer-aided diagnosis (CAD) system to detect small-sized (from 2mm to 10mm) non-pleural pulmonary nodules in high resolution helical CT scans. A new 3D automated scheme using cellular neural networks is presented. Different from most previous methods, this scheme employed the local shape property to perform voxel classification. The shape index feature successfully captured the local shape difference between nodules and non-nodules, especially vessels. A 3D discrete-time cellular neural network (DTCNN) was constructed to give a reliable voxel classification by collecting information in a neighborhood. To tailor it for lung nodule detection, this DTCNN was trained using genetic algorithms (GAs) to derive the shape index variation pattern of nodules. 19 clinical thoracic CT cases involving a total of 4838 sectional images were used in this work, with 2 scans forming the training set, and the remaining 17 cases being the testing set. The evaluation was composed of two stages. During the first stage, a pulmonologist and our CAD system independently detected nodules in the testing set. Then, the suspected nodule areas located by the computer were reviewed by the pulmonologist to confirm nodules missed by the human in the first review. There were 32 true nodules detected by the computer but missed by the pulmonologist in the first review, in which 30 non-juxtapleural nodules were found. Considering the nodules detected by the pulmonologist during the first and second reviews as the truth, 52 of 62 non-pleural nodules were detected by the CAD system (sensitivity being 83.9%), with the number of false positives being 3.47 per case.
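
    The voxel feature at the heart of the scheme is the shape index computed from local principal curvatures. Below is a minimal two-dimensional analogue (the paper works in 3-D) computed from the eigenvalues of a Gaussian-smoothed intensity Hessian, using one common sign convention; the smoothing scale and test image are illustrative.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def shape_index_2d(image, sigma=2.0, eps=1e-12):
          # 2-D analogue of the shape-index feature:
          #   SI = (2/pi) * arctan((k1 + k2) / (k1 - k2)),  k1 >= k2,
          # where k1, k2 are eigenvalues of the smoothed intensity Hessian.
          # With this sign convention, bright blob-like structures map toward -1
          # and bright ridge-like (vessel-like) structures toward -0.5.
          Ixx = gaussian_filter(image, sigma, order=(0, 2))
          Iyy = gaussian_filter(image, sigma, order=(2, 0))
          Ixy = gaussian_filter(image, sigma, order=(1, 1))
          mean = (Ixx + Iyy) / 2.0
          root = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
          k1, k2 = mean + root, mean - root          # k1 >= k2 everywhere
          return (2.0 / np.pi) * np.arctan((k1 + k2) / (k1 - k2 + eps))

      # Toy usage: a Gaussian blob (nodule-like) next to a straight ridge (vessel-like).
      y, x = np.mgrid[0:64, 0:64].astype(float)
      blob = np.exp(-((x - 20) ** 2 + (y - 32) ** 2) / 20.0)
      ridge = np.exp(-((x - 45) ** 2) / 8.0)
      si = shape_index_2d(blob + ridge)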

  19. Neural Networks

    DTIC Science & Technology

    1990-01-01

    Report documentation form fields (not recoverable). Keywords: Neural Networks, Optical Architectures, Nonlinear Optics, Adaptation. The introduction begins: "Neural networks are a type of distributed processing system [1

  20. Existence and global exponential stability of periodic solution of a cellular neural networks difference equation with delays and impulses.

    PubMed

    Yang, Xinsong; Cui, Xiangzhao; Long, Yao

    2009-09-01

    A class of cellular neural network difference equations with delays and impulses is considered. Sufficient conditions for the existence and global exponential stability of the periodic solution are obtained by using the contraction mapping theorem and inequality techniques. The results of this paper are completely new. A numerical example and its simulations are offered to show the effectiveness of our new results.

  1. Identification of the direction of the neural network activation with a cellular resolution by fast two-photon imaging

    NASA Astrophysics Data System (ADS)

    Liu, Xiuli; Quan, Tingwei; Zeng, Shaoqun; Lv, Xiaohua

    2011-08-01

    Spatiotemporal activity patterns in local neural networks are fundamental to understanding how information is processed and stored in brain microcircuits. Currently, imaging techniques are able to map the directional activation of macronetworks across brain areas; however, these strategies still fail to resolve the activation direction of fine microcircuits with cellular spatial resolution. Here, we show the capability to identify the activation direction of a multicell network with cellular resolution and millisecond precision by using fast two-photon microscopy and cross-correlation procedures. As an example, we characterized a directional neuronal network in an epileptic brain slice, resolving the different initiation delays among multiple neurons at a millisecond scale.
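
    A minimal sketch of the cross-correlation step used to order activation across cells: the lag at which the cross-correlation of two fluorescence traces peaks is taken as their relative initiation delay. The sampling rate, transient shape, and delay below are synthetic illustrations, not the paper's data.

      import numpy as np

      def activation_lag(trace_a, trace_b, dt_ms):
          # Relative activation delay of trace_b with respect to trace_a, taken as
          # the lag (in ms) at which their full cross-correlation peaks.
          a = (trace_a - trace_a.mean()) / trace_a.std()
          b = (trace_b - trace_b.mean()) / trace_b.std()
          corr = np.correlate(a, b, mode="full")
          lags = np.arange(-len(a) + 1, len(a))
          return lags[np.argmax(corr)] * dt_ms

      # Toy usage: two synthetic calcium-like transients, the second delayed by 12 ms.
      dt_ms = 2.0                              # 500 Hz frame rate (illustrative)
      t = np.arange(0, 1000, dt_ms)
      kernel = lambda t0: np.clip(t - t0, 0, None) * np.exp(-(t - t0) / 150.0)
      cell_a, cell_b = kernel(300.0), kernel(312.0)
      print(activation_lag(cell_a, cell_b, dt_ms))  # about -12 ms: cell_b activates later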

  2. An Asynchronous Recurrent Network of Cellular Automaton-Based Neurons and Its Reproduction of Spiking Neural Network Activities.

    PubMed

    Matsubara, Takashi; Torikai, Hiroyuki

    2016-04-01

    Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, the traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such an ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic the input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array implementations confirm that the presented network requires fewer computational resources.

  3. Memristor-based cellular nonlinear/neural network: design, analysis, and applications.

    PubMed

    Duan, Shukai; Hu, Xiaofang; Dong, Zhekang; Wang, Lidan; Mazumder, Pinaki

    2015-06-01

    Cellular nonlinear/neural network (CNN) has been recognized as a powerful massively parallel architecture capable of solving complex engineering problems by performing trillions of analog operations per second. The memristor was theoretically predicted in the late seventies, but it garnered nascent research interest due to the recent much-acclaimed discovery of nanocrossbar memories by engineers at the Hewlett-Packard Laboratory. The memristor is expected to be co-integrated with nanoscale CMOS technology to revolutionize conventional von Neumann as well as neuromorphic computing. In this paper, a compact CNN model based on memristors is presented along with its performance analysis and applications. In the new CNN design, the memristor bridge circuit acts as the synaptic circuit element and substitutes the complex multiplication circuit used in traditional CNN architectures. In addition, the negative differential resistance and nonlinear current-voltage characteristics of the memristor have been leveraged to replace the linear resistor in conventional CNNs. The proposed CNN design has several merits, for example, high density, nonvolatility, and programmability of synaptic weights. The proposed memristor-based CNN design operations for implementing several image processing functions are illustrated through simulation and contrasted with conventional CNNs. Monte-Carlo simulation has been used to demonstrate the behavior of the proposed CNN due to the variations in memristor synaptic weights.

  4. Early warning of illegal development for protected areas by integrating cellular automata with neural networks.

    PubMed

    Li, Xia; Lao, Chunhua; Liu, Yilun; Liu, Xiaoping; Chen, Yimin; Li, Shaoying; Ai, Bing; He, Zijian

    2013-11-30

    Ecological security has become a major issue under fast urbanization in China. As the first two cities in this country to do so, Shenzhen and Dongguan issued the ordinance of the Eco-designated Line of Control (ELC) to "wire" ecologically important areas for strict protection in 2005 and 2009, respectively. Early warning systems (EWS) are a useful tool for assisting the implementation of the ELC. In this study, a multi-model approach is proposed for the early warning of illegal development by integrating cellular automata (CA) and artificial neural networks (ANN). The objective is to prevent the ecological risks or catastrophes caused by such development at an early stage. The integrated model is calibrated by using empirical information from both remote sensing and handheld GPS (global positioning systems). The MAR indicator, which is the ratio of missing alarms to all warnings, is proposed for better assessment of the model performance. It is found that fast urban development has caused significant threats to natural-area protection in the study area. The integration of CA, ANN and GPS provides a powerful tool for describing and predicting illegal development, which occurs in highly non-linear and fragmented forms. The comparison shows that this multi-model approach has much better performance than the single-model approach for early warning. Compared with the single models of CA and ANN, the integrated multi-model improves the value of MAR by 65.48% and 5.17%, respectively. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. A universal concept based on cellular neural networks for ultrafast and flexible solving of differential equations.

    PubMed

    Chedjou, Jean Chamberlain; Kyamakya, Kyandoghere

    2015-04-01

    This paper develops and validates a comprehensive and universally applicable computational concept for solving nonlinear differential equations (NDEs) through a neurocomputing concept based on cellular neural networks (CNNs). High precision, stability, convergence, and the lowest-possible memory requirements are ensured by the CNN processor architecture. A significant challenge solved in this paper is that all these computing features are ensured in all system states (regular or chaotic ones) and in all bifurcation conditions that may be experienced by NDEs. One particular quintessence of this paper is to develop and demonstrate a solver concept that shows and ensures that CNN processors (realized either in hardware or in software) are universal solvers of NDE models. The solving logic or algorithm of given NDEs (possible examples are: Duffing, Mathieu, Van der Pol, Jerk, Chua, Rössler, Lorenz, Burgers, and the transport equations) through a CNN processor system is provided by a set of templates that are computed by our comprehensive template calculation technique, which we call nonlinear adaptive optimization. This paper is therefore a significant contribution and represents a cutting-edge real-time computational engineering approach, especially considering the various scientific and engineering applications of this ultrafast, energy-and-memory-efficient, and high-precision NDE solver concept. For illustration purposes, three NDE models are solved for demonstration, and the related CNN templates are derived and used: the periodically excited Duffing equation, the Mathieu equation, and the transport equation.
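
    For the first of the named examples, the following is a minimal sketch that integrates the periodically excited Duffing equation with a plain fourth-order Runge-Kutta scheme, producing the kind of reference trajectory a CNN-based solver would be templated to reproduce; the coefficient values and initial condition are illustrative.

      import numpy as np

      def duffing_rhs(t, y, delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5, omega=1.2):
          # Periodically excited Duffing oscillator:
          #   x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t)
          x, v = y
          return np.array([v, gamma * np.cos(omega * t) - delta * v - alpha * x - beta * x ** 3])

      def rk4(f, y0, t0, t1, dt):
          # Classical fourth-order Runge-Kutta integration.
          ts = np.arange(t0, t1, dt)
          ys = np.empty((len(ts), len(y0)))
          y = np.array(y0, dtype=float)
          for i, t in enumerate(ts):
              ys[i] = y
              k1 = f(t, y)
              k2 = f(t + dt / 2, y + dt / 2 * k1)
              k3 = f(t + dt / 2, y + dt / 2 * k2)
              k4 = f(t + dt, y + dt * k3)
              y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
          return ts, ys

      ts, ys = rk4(duffing_rhs, y0=(0.1, 0.0), t0=0.0, t1=200.0, dt=0.01)
      x_of_t = ys[:, 0]          # reference solution a CNN solver would be expected to match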

  6. Cellular neural networks, the Navier-Stokes equation, and microarray image reconstruction.

    PubMed

    Zineddin, Bachar; Wang, Zidong; Liu, Xiaohui

    2011-11-01

    Although the last decade has witnessed a great deal of improvement in microarray technology, major developments are still needed in all the main stages of this technology, including image processing. Some hardware implementations of microarray image processing have been proposed in the literature and proved to be promising alternatives to the currently available software systems. However, the main drawback of those approaches is that they do not address the quantification of the gene spot in a realistic way, that is, without any assumption about the image surface. Our aim in this paper is to present a new image-reconstruction algorithm using a cellular neural network that solves the Navier-Stokes equation. This algorithm offers a robust method for estimating the background signal within the gene-spot region. The MATCNN toolbox for Matlab is used to test the proposed method. Quantitative comparisons, in terms of objective criteria, are carried out between our approach and some other available methods. It is shown that the proposed algorithm gives highly accurate and realistic measurements in a fully automated manner within a remarkably efficient time.

  7. New second-order difference algorithm for image segmentation based on cellular neural networks (CNNs)

    NASA Astrophysics Data System (ADS)

    Meng, Shukai; Mo, Yu L.

    2001-09-01

    Image segmentation is one of the most important operations in many image analysis problems; it is the process that subdivides an image into its constituents and extracts the parts of interest. In this paper, we present a new second-order difference gray-scale image segmentation algorithm based on cellular neural networks. A 3x3 CNN cloning template is applied, which provides smoothing and deals well with the conflict between noise resistance and the edge detection of complex shapes. We use a second-order difference operator to calculate the coefficients of the control template, which are not constant but rather depend on the input gray-scale values. The network is similar to the Contour Extraction CNN in construction, but differs in its algorithm. Experimental results show that the second-order difference CNN has a good edge detection capability; it is better than the Contour Extraction CNN at detecting detail and more effective than the Laplacian of Gaussian (LoG) algorithm.
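
    For comparison with the second-order difference idea, the following is a minimal sketch of fixed-kernel Laplacian edge detection (the constant-template analogue of the input-dependent CNN coefficients described above); the kernel and threshold are illustrative.

      import numpy as np
      from scipy.signal import convolve2d

      # Fixed 3x3 second-order difference (Laplacian) kernel; the CNN described above
      # makes the corresponding coefficients depend on the local grey values instead.
      LAPLACIAN = np.array([[0,  1, 0],
                            [1, -4, 1],
                            [0,  1, 0]], dtype=float)

      def laplacian_edges(image, threshold=0.2):
          # Edges are pixels where the magnitude of the second-order difference
          # response exceeds a threshold (illustrative, not the paper's template).
          response = convolve2d(image, LAPLACIAN, mode="same", boundary="symm")
          return np.abs(response) > threshold

      # Toy usage: a bright square on a dark background.
      img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
      edges = laplacian_edges(img)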

  8. Image-based quantitative analysis of gold immunochromatographic strip via cellular neural network approach.

    PubMed

    Zeng, Nianyin; Wang, Zidong; Zineddin, Bachar; Li, Yurong; Du, Min; Xiao, Liang; Liu, Xiaohui; Young, Terry

    2014-05-01

    Gold immunochromatographic strip assay provides a rapid, simple, single-copy and on-site way to detect the presence or absence of the target analyte. This paper aims to develop a method for accurately segmenting the test line and control line of the gold immunochromatographic strip (GICS) image for quantitatively determining the trace concentrations in the specimen, which can lead to more functional information than the traditional qualitative or semi-quantitative strip assay. The Canny operator as well as the mathematical morphology method is used to detect and extract the GICS reading-window. Then, the test line and control line of the GICS reading-window are segmented by the cellular neural network (CNN) algorithm, where the template parameters of the CNN are designed by the switching particle swarm optimization (SPSO) algorithm to improve the performance of the CNN. It is shown that the SPSO-based CNN offers a robust method for accurately segmenting the test and control lines, and therefore serves as a novel image methodology for the interpretation of GICS. Furthermore, a quantitative comparison is carried out among four algorithms in terms of the peak signal-to-noise ratio. It is concluded that the proposed CNN algorithm gives higher accuracy and that the CNN is capable of parallelism and analog very-large-scale integration implementation within a remarkably efficient time.

  9. Residual Separation of Magnetic Fields Using a Cellular Neural Network Approach

    NASA Astrophysics Data System (ADS)

    Albora, A. M.; Özmen, A.; Uçan, O. N.

    In this paper, a Cellular Neural Network (CNN) has been applied to a magnetic regional/residual anomaly separation problem. CNN is an analog parallel computing paradigm defined in space and characterized by the locality of connections between processing neurons. The behavior of the CNN is defined by the template matrices A, B and the template vector I. We have optimized the weight coefficients of these templates using the Recurrent Perceptron Learning Algorithm (RPLA). The advantages of CNN as a real-time stochastic method are that it introduces little distortion to the shape of the original image and that it is not affected significantly by factors such as the overlap of the power spectra of residual fields. The proposed method is tested using synthetic examples, and the average depth of the buried objects has been estimated by power spectrum analysis. Next, the CNN approach is applied to magnetic data over the Golalan chromite mine in Elazig, which lies in eastern Turkey. This area hosts some of the largest and richest chromite masses in the world. We compared the performance of CNN to classical derivative approaches.

  10. Neural Networks

    SciTech Connect

    Smith, Patrick I.

    2003-09-23

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. Typically, many techniques are used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results or output. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. The word artificial in that definition refers to computer programs that carry out calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing

  11. Intrinsic Cellular Properties and Connectivity Density Determine Variable Clustering Patterns in Randomly Connected Inhibitory Neural Networks

    PubMed Central

    Rich, Scott; Booth, Victoria; Zochowski, Michal

    2016-01-01

    The plethora of inhibitory interneurons in the hippocampus and cortex play a pivotal role in generating rhythmic activity by clustering and synchronizing cell firing. Results of our simulations demonstrate that both the intrinsic cellular properties of neurons and the degree of network connectivity affect the characteristics of clustered dynamics exhibited in randomly connected, heterogeneous inhibitory networks. We quantify intrinsic cellular properties by the neuron's current-frequency relation (IF curve) and Phase Response Curve (PRC), a measure of how perturbations given at various phases of a neuron's firing cycle affect subsequent spike timing. We analyze network bursting properties of networks of neurons with Type I or Type II properties in both excitability and PRC profile; Type I PRCs strictly show phase advances and IF curves that exhibit frequencies arbitrarily close to zero at firing threshold while Type II PRCs display both phase advances and delays and IF curves that have a non-zero frequency at threshold. Type II neurons whose properties arise with or without an M-type adaptation current are considered. We analyze network dynamics under different levels of cellular heterogeneity and as intrinsic cellular firing frequency and the time scale of decay of synaptic inhibition are varied. Many of the dynamics exhibited by these networks diverge from the predictions of the interneuron network gamma (ING) mechanism, as well as from results in all-to-all connected networks. Our results show that randomly connected networks of Type I neurons synchronize into a single cluster of active neurons while networks of Type II neurons organize into two mutually exclusive clusters segregated by the cells' intrinsic firing frequencies. Networks of Type II neurons containing the adaptation current behave similarly to networks of either Type I or Type II neurons depending on network parameters; however, the adaptation current creates differences in the cluster dynamics
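
    As a toy illustration of the Type I excitability described above (firing frequency approaching zero at threshold), the sketch below numerically estimates the f-I curve of a quadratic integrate-and-fire neuron. The model, parameters, and reset/spike thresholds are generic assumptions, not the specific neuron models simulated in the paper.

```python
import numpy as np

def qif_firing_rate(I, a=1.0, v_reset=-5.0, v_spike=5.0, dt=1e-3, t_max=50.0):
    """Estimate the firing rate of a quadratic integrate-and-fire neuron
    dv/dt = a*v**2 + I. Type I behaviour: the rate tends to zero as I -> 0+."""
    v, spikes, t = v_reset, 0, 0.0
    while t < t_max:
        v += dt * (a * v * v + I)
        if v >= v_spike:
            v = v_reset
            spikes += 1
        t += dt
    return spikes / t_max

# Frequencies get arbitrarily close to zero near threshold (I = 0).
rates = {I: qif_firing_rate(I) for I in (0.01, 0.1, 0.5, 1.0)}
```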

  12. Intrinsic Cellular Properties and Connectivity Density Determine Variable Clustering Patterns in Randomly Connected Inhibitory Neural Networks.

    PubMed

    Rich, Scott; Booth, Victoria; Zochowski, Michal

    2016-01-01

    The plethora of inhibitory interneurons in the hippocampus and cortex play a pivotal role in generating rhythmic activity by clustering and synchronizing cell firing. Results of our simulations demonstrate that both the intrinsic cellular properties of neurons and the degree of network connectivity affect the characteristics of clustered dynamics exhibited in randomly connected, heterogeneous inhibitory networks. We quantify intrinsic cellular properties by the neuron's current-frequency relation (IF curve) and Phase Response Curve (PRC), a measure of how perturbations given at various phases of a neuron's firing cycle affect subsequent spike timing. We analyze network bursting properties of networks of neurons with Type I or Type II properties in both excitability and PRC profile; Type I PRCs strictly show phase advances and IF curves that exhibit frequencies arbitrarily close to zero at firing threshold while Type II PRCs display both phase advances and delays and IF curves that have a non-zero frequency at threshold. Type II neurons whose properties arise with or without an M-type adaptation current are considered. We analyze network dynamics under different levels of cellular heterogeneity and as intrinsic cellular firing frequency and the time scale of decay of synaptic inhibition are varied. Many of the dynamics exhibited by these networks diverge from the predictions of the interneuron network gamma (ING) mechanism, as well as from results in all-to-all connected networks. Our results show that randomly connected networks of Type I neurons synchronize into a single cluster of active neurons while networks of Type II neurons organize into two mutually exclusive clusters segregated by the cells' intrinsic firing frequencies. Networks of Type II neurons containing the adaptation current behave similarly to networks of either Type I or Type II neurons depending on network parameters; however, the adaptation current creates differences in the cluster dynamics

  13. Implementation of a cellular neural network-based segmentation algorithm on the bio-inspired vision system

    NASA Astrophysics Data System (ADS)

    Karabiber, Fethullah; Grassi, Giuseppe; Vecchio, Pietro; Arik, Sabri; Yalcin, M. Erhan

    2011-01-01

    Based on the cellular neural network (CNN) paradigm, the bio-inspired (bi-i) cellular vision system is a computing platform consisting of state-of-the-art sensing, cellular sensing-processing and digital signal processing. This paper presents the implementation of a novel CNN-based segmentation algorithm on the bi-i system. The experimental results, obtained for different benchmark video sequences, highlight the feasibility of the approach, which provides a frame rate of about 26 frames/s. Comparisons with existing CNN-based methods show that, even though these methods are from two to six times faster than the proposed one, the conceived approach is more accurate and, consequently, represents a satisfying trade-off between real-time requirements and accuracy.

  14. Discrimination of liver cancer in cellular level based on backscatter micro-spectrum with PCA algorithm and BP neural network

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Wang, Cheng; Cai, Gan; Dong, Xiaona

    2016-10-01

    The incidence and mortality rates of primary liver cancer are very high, and its postoperative metastasis and recurrence have become important factors in the prognosis of patients. Circulating tumor cells (CTC), as a new tumor marker, play important roles in early diagnosis and individualized treatment. This paper presents an effective method to distinguish liver cancer based on the cellular scattering spectrum, a non-fluorescence technique based on the fiber confocal microscopic spectrometer. Principal component analysis (PCA) combined with a back propagation (BP) neural network was utilized to establish an automatic recognition model for the backscatter spectra of liver cancer cells versus blood cells. PCA was applied to reduce the dimension of the scattering spectral data obtained by the fiber confocal microscopic spectrometer. After dimensionality reduction by PCA, a neural network pattern recognition model with 2 input layer nodes, 11 hidden layer nodes and 3 output nodes was established. We trained the network with 66 samples and also tested it. Results showed that the recognition rate for the three types of cells is more than 90% and the relative standard deviation is only 2.36%. The experimental results showed that the fiber confocal microscopic spectrometer combined with the PCA and BP neural network algorithms can automatically identify liver cancer cells among blood cells. This will provide a better tool for investigating the metastasis of liver cancers in vivo, the biological and metabolic characteristics of liver cancers, and drug transportation. Additionally, it has clear reference value for practical application.
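
    A sketch of the PCA-plus-BP pipeline described above, using scikit-learn as a stand-in: spectra are reduced to 2 principal components and fed to a network with 11 hidden nodes and 3 output classes, as in the abstract. The synthetic spectra, spectrum length, and training settings are placeholders, not the authors' data or code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Placeholder spectra: 66 training samples, 3 cell classes (as stated in the abstract).
rng = np.random.default_rng(0)
spectra = rng.normal(size=(66, 512))     # fake backscatter spectra
labels = rng.integers(0, 3, size=66)     # fake class labels (3 cell types)

# PCA to 2 components feeding a BP network with 11 hidden nodes and 3 outputs.
model = make_pipeline(
    PCA(n_components=2),
    MLPClassifier(hidden_layer_sizes=(11,), max_iter=2000, random_state=0),
)
model.fit(spectra, labels)
training_accuracy = model.score(spectra, labels)
```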

  15. Neural Networks

    NASA Astrophysics Data System (ADS)

    Schwindling, Jerome

    2010-04-01

    This course presents an overview of the concepts of neural networks and their application in the framework of high energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the context of neurobiology, introducing the multi-layer perceptron, learning, and the use of such networks as data classifiers. The concept is then presented in a second part in more mathematical detail, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools for event classification, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  16. Delay-dependent global exponential robust stability for delayed cellular neural networks with time-varying delay.

    PubMed

    Liu, Pin-Lin

    2013-11-01

    This paper investigates a class of delayed cellular neural networks (DCNN) with time-varying delay. Based on the Lyapunov-Krasovskii functional and the integral inequality approach (IIA), a uniformly asymptotic stability criterion in terms of only one simple linear matrix inequality (LMI) is addressed, which guarantees stability for such time-varying delay systems. This LMI can be easily solved by convex optimization techniques. Unlike previous methods, the upper bound of the delay derivative is taken into consideration, even if it is larger than or equal to 1. It is proven that the results obtained are less conservative than existing ones. Four numerical examples illustrate the efficacy of the proposed methods.
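
    Stability conditions of this kind are checked by testing LMI feasibility with a convex solver. The sketch below shows the general pattern in CVXPY for the much simpler delay-free Lyapunov inequality A'P + PA < 0 with P > 0; the paper's delay-dependent LMI involves additional decision variables and delay terms that are not reproduced here, and the system matrix is a placeholder.

```python
import numpy as np
import cvxpy as cp

A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])              # placeholder system matrix (not from the paper)
n = A.shape[0]
eps = 1e-6

# Lyapunov LMI: find symmetric P with P > 0 and A'P + PA < 0.
P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
feasible = problem.status == cp.OPTIMAL   # feasibility certifies stability here
```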

  17. Delay-dependent exponential passivity of uncertain cellular neural networks with discrete and distributed time-varying delays.

    PubMed

    Du, Yuanhua; Zhong, Shouming; Xu, Jia; Zhou, Nan

    2015-05-01

    This paper is concerned with the delay-dependent exponential passivity analysis issue for uncertain cellular neural networks with discrete and distributed time-varying delays. By decomposing the delay interval into multiple equidistant subintervals and multiple nonuniform subintervals, suitable augmented Lyapunov-Krasovskii functionals are constructed on these intervals. A set of novel sufficient conditions is obtained to guarantee the exponential passivity of the considered system. Finally, two numerical examples are provided to demonstrate the effectiveness of the proposed results.

  18. A novel spatter detection algorithm based on typical cellular neural network operations for laser beam welding processes

    NASA Astrophysics Data System (ADS)

    Nicolosi, L.; Abt, F.; Blug, A.; Heider, A.; Tetzlaff, R.; Höfler, H.

    2012-01-01

    Real-time monitoring of laser beam welding (LBW) has increasingly gained importance in several manufacturing processes ranging from automobile production to precision mechanics. For the latter field, a novel algorithm for the real-time detection of spatters was implemented in a camera based on cellular neural networks. The camera can be connected to the optics of commercially available laser machines, leading to real-time monitoring of LBW processes at rates up to 15 kHz. Such high monitoring rates allow the integration of other image evaluation tasks, such as the detection of the full penetration hole, for real-time control of process parameters.

  19. Neural Network Function Classifier

    DTIC Science & Technology

    2003-02-07

    neural network sets. Each of the neural networks in a particular set is trained to recognize a particular data set type. The best function representation of the data set is determined from the neural network output. The system comprises sets of trained neural networks having neural networks trained to identify different types of data. The number of neural networks within each neural network set will depend on the number of function types that are represented. The system further comprises

  20. An asymmetric image cryptosystem based on the adaptive synchronization of an uncertain unified chaotic system and a cellular neural network

    NASA Astrophysics Data System (ADS)

    Cheng, Chao-Jung; Cheng, Chi-Bin

    2013-10-01

    Chaotic dynamics provide a fast and simple means to create an excellent image cryptosystem, because of their extreme sensitivity to initial conditions and system parameters, their pseudorandomness, and their non-periodicity. However, most chaos-based image encryption schemes are symmetric cryptographic techniques, which have been proven to be more vulnerable than an asymmetric cryptosystem. This paper develops an asymmetric image cryptosystem based on the adaptive synchronization of two different chaotic systems, namely a unified chaotic system and a cellular neural network. An adaptive controller with parameter update laws is formulated, using the Lyapunov stability theory, to asymptotically synchronize the two chaotic systems. The synchronization controller is embedded in the image cryptosystem and generates a pair of asymmetric keys for image encryption and decryption. Using numerical simulations, three sets of experiments are conducted to evaluate the feasibility and reliability of the proposed chaos-based image cryptosystem.
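
    For intuition only, the sketch below synchronizes two Lorenz systems with simple full-state linear feedback so that the response trajectory converges to the drive trajectory; a shared chaotic trajectory of this kind is the raw material that chaos-based key schemes build on. The paper's adaptive controller with parameter update laws, and its unified-chaotic-system/CNN pair, are not reproduced; the coupling gain, step size, and initial conditions are arbitrary choices.

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def synchronize(steps=20000, dt=1e-3, k=10.0):
    """Drive-response coupling: the response receives the correction k*(drive - response)."""
    drive = np.array([1.0, 1.0, 1.0])
    response = np.array([-5.0, 3.0, 20.0])
    for _ in range(steps):
        drive = drive + dt * lorenz(drive)
        response = response + dt * (lorenz(response) + k * (drive - response))
    return np.linalg.norm(drive - response)    # synchronization error shrinks toward zero

sync_error = synchronize()
```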

  1. A cardiac electrical activity model based on a cellular automata system in comparison with neural network model.

    PubMed

    Khan, Muhammad Sadiq Ali; Yousuf, Sidrah

    2016-03-01

    Cardiac electrical activity is distributed through the three dimensions of cardiac tissue (myocardium) and evolves over time. Indicators of heart disease can occur randomly at any time of day, so heart rate, conduction and each electrical event of the cardiac cycle should be monitored non-invasively to assess regular ("action potential") and irregular ("arrhythmia") rhythms. Many heart diseases can be examined through automata models such as cellular automata. This paper addresses the different states of cardiac rhythm using cellular automata, compares the approach with a neural network model, and provides a fast and effective simulation of the contraction of the cardiac muscles of the atria resulting from the genesis of an electrical spark or wave. The formulated model, named "States of Automaton Proposed Model for CEA (Cardiac Electrical Activity)" and built with cellular automata methodology, captures the three conduction states of cardiac tissue: (i) resting (relaxed and excitable), (ii) ARP (absolutely refractory phase, i.e., excited but not able to excite neighboring cells), and (iii) RRP (relatively refractory phase, i.e., excited and able to excite neighboring cells). The results indicate efficient modeling of the action potential during the pumping of blood in the cardiac cycle with a small computational burden.
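
    A toy cellular automaton using the three conduction states listed above (resting, ARP, RRP) is sketched below; following the definitions in the abstract, only RRP cells excite resting neighbors. The lattice size, neighborhood, and one-step state timing are illustrative assumptions rather than the paper's calibrated model.

```python
import numpy as np

RESTING, ARP, RRP = 0, 1, 2   # excitable / absolutely refractory / relatively refractory

def step(grid):
    """One update of a toy 3-state cardiac CA: RRP cells excite resting neighbors
    (per the abstract's definitions), ARP cells cannot; each excited state lasts one step."""
    can_excite = (grid == RRP).astype(int)
    nbrs = (np.roll(can_excite, 1, 0) + np.roll(can_excite, -1, 0) +
            np.roll(can_excite, 1, 1) + np.roll(can_excite, -1, 1))
    new = grid.copy()
    new[(grid == RESTING) & (nbrs > 0)] = ARP   # excitation wavefront
    new[grid == ARP] = RRP                      # absolutely -> relatively refractory
    new[grid == RRP] = RESTING                  # relatively refractory -> resting
    return new

grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = RRP                              # initial electrical spark
for _ in range(40):
    grid = step(grid)
```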

  2. Neural Network Studies

    DTIC Science & Technology

    1993-07-01

    basic useful theorems and general rules which apply to neural networks (in ’Overview of Neural Network Theory’), studies of training time as the...The Neural Network, Bayes-Gaussian, and k-Nearest Neighbor Classifiers’), an analysis of fuzzy logic and its relationship to neural networks (in ’Fuzzy

  3. Electronic Neural Networks

    NASA Technical Reports Server (NTRS)

    Thakoor, Anil

    1990-01-01

    Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.

  4. A 181 GOPS AKAZE Accelerator Employing Discrete-Time Cellular Neural Networks for Real-Time Feature Extraction

    PubMed Central

    Jiang, Guangli; Liu, Leibo; Zhu, Wenping; Yin, Shouyi; Wei, Shaojun

    2015-01-01

    This paper proposes a real-time feature extraction VLSI architecture for high-resolution images based on the accelerated KAZE algorithm. Firstly, a new system architecture is proposed. It increases the system throughput, provides flexibility in image resolution, and offers trade-offs between speed and scaling robustness. The architecture consists of a two-dimensional pipeline array that fully utilizes computational similarities in octaves. Secondly, a substructure (block-serial discrete-time cellular neural network) that can realize a nonlinear filter is proposed. This structure decreases the memory demand through the removal of data dependency. Thirdly, a hardware-friendly descriptor is introduced in order to overcome the hardware design bottleneck through the polar sample pattern; a simplified method to realize rotation invariance is also presented. Finally, the proposed architecture is designed in TSMC 65 nm CMOS technology. The experimental results show a performance of 127 fps in full HD resolution at 200 MHz frequency. The peak performance reaches 181 GOPS and the throughput is double the speed of other state-of-the-art architectures. PMID:26404305

  5. A 181 GOPS AKAZE Accelerator Employing Discrete-Time Cellular Neural Networks for Real-Time Feature Extraction.

    PubMed

    Jiang, Guangli; Liu, Leibo; Zhu, Wenping; Yin, Shouyi; Wei, Shaojun

    2015-09-04

    This paper proposes a real-time feature extraction VLSI architecture for high-resolution images based on the accelerated KAZE algorithm. Firstly, a new system architecture is proposed. It increases the system throughput, provides flexibility in image resolution, and offers trade-offs between speed and scaling robustness. The architecture consists of a two-dimensional pipeline array that fully utilizes computational similarities in octaves. Secondly, a substructure (block-serial discrete-time cellular neural network) that can realize a nonlinear filter is proposed. This structure decreases the memory demand through the removal of data dependency. Thirdly, a hardware-friendly descriptor is introduced in order to overcome the hardware design bottleneck through the polar sample pattern; a simplified method to realize rotation invariance is also presented. Finally, the proposed architecture is designed in TSMC 65 nm CMOS technology. The experimental results show a performance of 127 fps in full HD resolution at 200 MHz frequency. The peak performance reaches 181 GOPS and the throughput is double the speed of other state-of-the-art architectures.

  6. Hierarchical random cellular neural networks for system-level brain-like signal processing.

    PubMed

    Kozma, Robert; Puljic, Marko

    2013-09-01

    Sensory information processing and cognition in brains are modeled using dynamic systems theory. The brain's dynamic state is described by a trajectory evolving in a high-dimensional state space. We introduce a hierarchy of random cellular automata as the mathematical tools to describe the spatio-temporal dynamics of the cortex. The corresponding brain model is called neuropercolation which has distinct advantages compared to traditional models using differential equations, especially in describing spatio-temporal discontinuities in the form of phase transitions. Phase transitions demarcate singularities in brain operations at critical conditions, which are viewed as hallmarks of higher cognition and awareness experience. The introduced Monte-Carlo simulations obtained by parallel computing point to the importance of computer implementations using very large-scale integration (VLSI) and analog platforms.
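
    As a generic, greatly simplified illustration of the random cellular automata underlying neuropercolation-style models, the sketch below runs a noisy majority-vote update on a lattice; the update rule, noise level, and lattice size are assumptions, and the hierarchy and coupling of the actual model are not reproduced.

```python
import numpy as np

def noisy_majority_step(state, rng, noise=0.1):
    """Each cell adopts the majority of its von Neumann neighborhood (including
    itself), then flips independently with probability `noise`."""
    votes = (state +
             np.roll(state, 1, 0) + np.roll(state, -1, 0) +
             np.roll(state, 1, 1) + np.roll(state, -1, 1))
    new = (votes >= 3).astype(int)               # majority of the 5 votes
    flips = rng.random(state.shape) < noise
    return np.where(flips, 1 - new, new)

rng = np.random.default_rng(1)
state = rng.integers(0, 2, size=(64, 64))
for _ in range(100):
    state = noisy_majority_step(state, rng, noise=0.13)
activation_density = state.mean()                # order-parameter-like quantity
```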

  7. Generalized classifier neural network.

    PubMed

    Ozyildirim, Buse Melis; Avci, Mutlu

    2013-03-01

    In this work, a new radial basis function based classification neural network, named the generalized classifier neural network, is proposed. The proposed generalized classifier neural network has five layers, unlike other radial basis function based neural networks such as the generalized regression neural network and the probabilistic neural network: the input, pattern, summation, normalization and output layers. In addition to the topological difference, the proposed neural network features gradient descent based optimization of the smoothing parameter and a diverge effect term added to the calculation. The diverge effect term is an improvement on the summation layer calculation that supplies additional separation ability and flexibility. The performance of the generalized classifier neural network is compared with that of the probabilistic neural network, the multilayer perceptron algorithm and the radial basis function neural network on 9 different data sets, and with that of the generalized regression neural network on 3 different data sets that include only two classes, in the MATLAB environment. Better classification performance, up to 89%, is observed. The improved classification performance proves the effectiveness of the proposed neural network.

  8. Existence and stability of pseudo almost periodic solutions for shunting inhibitory cellular neural networks with neutral type delays and time-varying leakage delays.

    PubMed

    Xu, Changjin; Zhang, Qiming; Wu, Yusen

    2014-01-01

    In this paper, shunting inhibitory cellular neural networks (SICNNs) with neutral type delays and time-varying leakage delays are investigated. By applying the Lyapunov functional method and differential inequality techniques, a set of sufficient conditions is obtained for the existence and exponential stability of pseudo almost periodic solutions of the model. An example is given to support the theoretical findings. Our results improve and generalize those of the previous studies.

  9. Nonlinear Neural Network Oscillator.

    DTIC Science & Technology

    A nonlinear oscillator (10) includes a neural network (12) having at least one output (12a) for outputting a one dimensional vector. The neural ... neural network and the input of the input layer for modifying a magnitude and/or a polarity of the one dimensional output vector prior to the sample of...first or a second direction. Connection weights of the neural network are trained on a deterministic sequence of data from a chaotic source or may be a

  10. Modeling land use and land cover changes in a vulnerable coastal region using artificial neural networks and cellular automata.

    PubMed

    Qiang, Yi; Lam, Nina S N

    2015-03-01

    As one of the most vulnerable coasts in the continental USA, the Lower Mississippi River Basin (LMRB) region has endured numerous hazards over the past decades. The sustainability of this region has drawn great attention from the international, national, and local communities, which want to understand how the region as a system develops under the intense interplay between natural and human factors. A major problem in this deltaic region is significant land loss over the years due to a combination of natural and human factors. The main scientific and management questions are what factors contribute to the land use land cover (LULC) changes in this region, whether we can model the changes, and what the LULC would look like in the future given the current factors. This study analyzed the LULC changes of the region between 1996 and 2006 by utilizing an artificial neural network (ANN) to derive the LULC change rules from 15 human and natural variables. The rules were then used to simulate future scenarios in a cellular automata model. A stochastic element was added to the model to represent factors that were not included in the current model. The analysis was conducted for two sub-regions in the study area for comparison. The results show that the derived ANN models could simulate the LULC changes with a high degree of accuracy (above 92% on average). A total loss of 263 km2 in wetlands from 2006 to 2016 was projected, whereas the trend of forest loss will cease. These scenarios provide useful information to decision makers for better planning and management of the region.
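
    A rough sketch of the ANN-plus-cellular-automata coupling described above: a small network learns a transition probability from per-cell driver variables, and a grid update applies it together with a stochastic element. The synthetic drivers, network size, and mixing weights are assumptions, not the study's calibrated model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Placeholder training data: per-cell driver variables and an observed 1996->2006 change flag.
drivers_1996 = rng.normal(size=(5000, 15))                       # 15 human/natural variables
changed = (drivers_1996[:, 0] + 0.5 * rng.normal(size=5000) > 1.0).astype(int)

ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
ann.fit(drivers_1996, changed)                                    # ANN-derived transition rules

# Cellular-automaton style projection to 2016 with a stochastic element mixed in.
grid_drivers = rng.normal(size=(200, 200, 15))
p_change = ann.predict_proba(grid_drivers.reshape(-1, 15))[:, 1].reshape(200, 200)
stochastic = rng.random((200, 200))
land_loss_2016 = (0.9 * p_change + 0.1 * stochastic) > 0.5        # projected change map
```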

  11. Neural Network Hurricane Tracker

    DTIC Science & Technology

    1998-05-27

    data about the hurricane and supplying the data to a trained neural network for yielding a predicted path for the hurricane. The system further includes...a device for displaying the predicted path of the hurricane. A method for using and training the neural network in the system is described. In the...method, the neural network is trained using information about hurricanes in a specific geographical area maintained in a database. The training involves

  12. Exploring neural network technology

    SciTech Connect

    Naser, J.; Maulbetsch, J.

    1992-12-01

    EPRI is funding several projects to explore neural network technology, a form of artificial intelligence that some believe may mimic the way the human brain processes information. This research seeks to provide a better understanding of fundamental neural network characteristics and to identify promising utility industry applications. Results to date indicate that the unique attributes of neural networks could lead to improved monitoring, diagnostic, and control capabilities for a variety of complex utility operations. 2 figs.

  13. Studies in Neural Networks

    DTIC Science & Technology

    1991-01-01

    Contract No. N00014-87-K-0377, Final Report, "Studies in Neural Networks." ...have been very useful, both in understanding the dynamics of neural networks and in engineering networks to perform particular tasks. We have noted...understanding more complex network computation. Interest in applying ideas from biological neural networks to real problems of engineering raises the issues of

  14. Global Exponential Stability of Almost Periodic Solution for Neutral-Type Cohen-Grossberg Shunting Inhibitory Cellular Neural Networks with Distributed Delays and Impulses

    PubMed Central

    Xu, Lijun; Jiang, Qi; Gu, Guodong

    2016-01-01

    A kind of neutral-type Cohen-Grossberg shunting inhibitory cellular neural networks with distributed delays and impulses is considered. Firstly, by using the theory of impulsive differential equations and the contracting mapping principle, the existence and uniqueness of the almost periodic solution for the above system are obtained. Secondly, by constructing a suitable Lyapunov functional, the global exponential stability of the unique almost periodic solution is also investigated. The work in this paper improves and extends some results in recent years. As an application, an example and numerical simulations are presented to demonstrate the feasibility and effectiveness of the main results. PMID:27190502

  15. Existence of periodic solutions for the discrete-time counterpart of a neutral-type cellular neural network with time-varying delays and impulses

    NASA Astrophysics Data System (ADS)

    Akça, Haydar; Al-Zahrani, Eadah; Covachev, Valéry; Covacheva, Zlatinka

    2017-07-01

    From the mathematical point of view, a cellular neural network (CNN) can be characterized by an array of identical nonlinear dynamical systems called cells (neurons) that are locally interconnected. Using the semi-discretization method, in the present talk we construct a discrete-time counterpart of a neutral-type CNN with time-varying delays and impulses. Sufficient conditions for the existence of periodic solutions of the discrete-time system thus obtained are found by using the continuation theorem of coincidence degree theory.

  16. Existence and global exponential stability of almost periodic solution for cellular neural networks with variable coefficients and time-varying delays.

    PubMed

    Jiang, Haijun; Zhang, Long; Teng, Zhidong

    2005-11-01

    In this paper, we study cellular neural networks with almost periodic variable coefficients and time-varying delays. By using the existence theorem of almost periodic solution for general functional differential equations, introducing many real parameters and applying the Lyapunov functional method and the technique of Young inequality, we obtain some sufficient conditions to ensure the existence, uniqueness, and global exponential stability of almost periodic solution. The results obtained in this paper are new, useful, and extend and improve the existing ones in previous literature.

  17. Global Exponential Stability of Almost Periodic Solution for Neutral-Type Cohen-Grossberg Shunting Inhibitory Cellular Neural Networks with Distributed Delays and Impulses.

    PubMed

    Xu, Lijun; Jiang, Qi; Gu, Guodong

    2016-01-01

    A kind of neutral-type Cohen-Grossberg shunting inhibitory cellular neural networks with distributed delays and impulses is considered. Firstly, by using the theory of impulsive differential equations and the contracting mapping principle, the existence and uniqueness of the almost periodic solution for the above system are obtained. Secondly, by constructing a suitable Lyapunov functional, the global exponential stability of the unique almost periodic solution is also investigated. The work in this paper improves and extends some results in recent years. As an application, an example and numerical simulations are presented to demonstrate the feasibility and effectiveness of the main results.

  18. Probabilistic Analysis of Neural Networks

    DTIC Science & Technology

    1990-11-26

    provide an understanding of the basic mechanisms of learning and recognition in neural networks. The main areas of progress were analysis of neural network models, study of network connectivity, and investigation of computer network theory.

  19. Neural networks for aircraft control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  20. Critical Branching Neural Networks

    ERIC Educational Resources Information Center

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  1. Critical Branching Neural Networks

    ERIC Educational Resources Information Center

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  2. Neural Networks: A Primer

    DTIC Science & Technology

    1991-05-01

    capture underlying relationships directly from observed behavior is one of the primary capabilities of neural networks. Back Propagation Approximation...model complex behavior patterns. Particularly in areas traditionally addressed by regression and other functional based techniques, neural networks...to be determined directly from the observed behavior of a system or sample of individuals. This ability should prove important in personnel analysis and

  3. Programming neural networks

    SciTech Connect

    Anderson, J.A.; Markman, A.B.; Viscuso, S.R.; Wisniewski, E.J.

    1988-09-01

    Neural networks "compute," though not in the way that traditional computers do. One must accept their weaknesses to use their strengths. The authors present several applications of a particular non-linear network (the BSB model) to illustrate some of the peculiarities inherent in this architecture.

  4. Neural networks in seismic discrimination

    SciTech Connect

    Dowla, F.U.

    1995-01-01

    Neural networks are powerful and elegant computational tools that can be used in the analysis of geophysical signals. At Lawrence Livermore National Laboratory, we have developed neural networks to solve problems in seismic discrimination, event classification, and seismic and hydrodynamic yield estimation. Other researchers have used neural networks for seismic phase identification. We are currently developing neural networks to estimate depths of seismic events using regional seismograms. In this paper different types of network architecture and representation techniques are discussed. We address the important problem of designing neural networks with good generalization capabilities. Examples of neural networks for treaty verification applications are also described.

  5. Stochastic asymptotical synchronization of chaotic Markovian jumping fuzzy cellular neural networks with mixed delays and the Wiener process based on sampled-data control

    NASA Astrophysics Data System (ADS)

    Kalpana, M.; Balasubramaniam, P.

    2013-07-01

    We investigate the stochastic asymptotical synchronization of chaotic Markovian jumping fuzzy cellular neural networks (MJFCNNs) with discrete, unbounded distributed delays, and the Wiener process based on sampled-data control using the linear matrix inequality (LMI) approach. The Lyapunov-Krasovskii functional combined with the input delay approach as well as the free-weighting matrix approach is employed to derive several sufficient criteria in terms of LMIs to ensure that the delayed MJFCNNs with the Wiener process are stochastically asymptotically synchronous. Restrictions (e.g., that the time derivative is smaller than one) are removed to obtain the proposed sampled-data controller. Finally, a numerical example is provided to demonstrate the reliability of the derived results.

  6. Tomography using neural networks

    NASA Astrophysics Data System (ADS)

    Demeter, G.

    1997-03-01

    We have utilized neural networks for fast evaluation of tomographic data on the MT-1M tokamak. The networks have proven useful in providing the parameters of a nonlinear fit to experimental data, producing results in a fraction of the time required for performing the nonlinear fit. Time required for training the networks makes the method worth applying only if a substantial amount of data are to be evaluated.

  7. The Adaptive Kernel Neural Network

    DTIC Science & Technology

    1989-10-01

    A neural network architecture for clustering and classification is described. The Adaptive Kernel Neural Network (AKNN) is a density estimation...classification layer. The AKNN retains the inherent parallelism common in neural network models. Its relationship to the kernel estimator allows the network to

  8. Hyperbolic Hopfield neural networks.

    PubMed

    Kobayashi, M

    2013-02-01

    In recent years, several neural networks using Clifford algebra have been studied. Clifford algebra is also called geometric algebra. Complex-valued Hopfield neural networks (CHNNs) are the most popular neural networks using Clifford algebra. The aim of this brief is to construct hyperbolic HNNs (HHNNs) as an analog of CHNNs. Hyperbolic algebra is a Clifford algebra based on Lorentzian geometry. In this brief, a hyperbolic neuron is defined in a manner analogous to a phasor neuron, which is a typical complex-valued neuron model. HHNNs share common concepts with CHNNs, such as the angle and energy. However, HHNNs and CHNNs are different in several aspects. The states of hyperbolic neurons do not form a circle, and, therefore, the start and end states are not identical. In the quantized version, unlike complex-valued neurons, hyperbolic neurons have an infinite number of states.

  9. Nested neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1988-01-01

    Nested neural networks, consisting of small interconnected subnetworks, allow for the storage and retrieval of neural state patterns of different sizes. The subnetworks are naturally categorized by layers corresponding to spatial frequencies in the pattern field. The storage capacity and the error correction capability of the subnetworks generally increase with the degree of connectivity between layers (the nesting degree). Storage of only a few subpatterns in each subnetwork results in a vast storage capacity of patterns and subpatterns in the nested network, maintaining high stability and error correction capability.

  10. Processing the Bouguer anomaly map of Biga and the surrounding area by the cellular neural network: application to the southwestern Marmara region

    NASA Astrophysics Data System (ADS)

    Aydogan, D.

    2007-04-01

    An image processing technique called the cellular neural network (CNN) approach is used in this study to locate geological features giving rise to gravity anomalies such as faults or the boundary of two geologic zones. CNN is a stochastic image processing technique based on template optimization using the neighborhood relationships of cells. These cells can be characterized by a functional block diagram that is typical of neural network theory. The functionality of CNN is described in its entirety by a number of small matrices (A, B and I) called the cloning template. CNN can also be considered to be a nonlinear convolution of these matrices. This template describes the strength of the nearest neighbor interconnections in the network. The recurrent perceptron learning algorithm (RPLA) is used in the optimization of the cloning template. The CNN and standard Canny algorithms were first tested on two sets of synthetic gravity data with the aim of checking the reliability of the proposed approach. The CNN method was compared with classical derivative techniques by applying the cross-correlation method (CC) to the same anomaly map, as this latter approach can detect some features that are difficult to identify on the Bouguer anomaly maps. This approach was then applied to the Bouguer anomaly map of Biga and its surrounding area, in Turkey. Structural features in the area between Bandirma, Biga, Yenice and Gonen in the southwest Marmara region are investigated by applying the CNN and CC to the Bouguer anomaly map. Faults identified by these algorithms are generally in accordance with previously mapped surface faults. These examples show that the geologic boundaries can be detected from Bouguer anomaly maps using the cloning template approach. A visual evaluation of the outputs of the CNN and CC approaches is carried out, and the results are compared with each other. This approach provides quantitative solutions based on just a few assumptions, which makes the method more

  11. Neural networks in psychiatry.

    PubMed

    Hulshoff Pol, Hilleke; Bullmore, Edward

    2013-01-01

    Over the past three decades numerous imaging studies have revealed structural and functional brain abnormalities in patients with neuropsychiatric diseases. These structural and functional brain changes are frequently found in multiple, discrete brain areas and may include frontal, temporal, parietal and occipital cortices as well as subcortical brain areas. However, while the structural and functional brain changes in patients are found in anatomically separated areas, these are connected through (long distance) fibers, together forming networks. Thus, instead of representing separate (patho)-physiological entities, these local changes in the brains of patients with psychiatric disorders may in fact represent different parts of the same 'elephant', i.e., the (altered) brain network. Recent developments in quantitative analysis of complex networks, based largely on graph theory, have revealed that the brain's structure and functions have features of complex networks. Here we briefly introduce several recent developments in neural network studies relevant for psychiatry, including from the 2013 special issue on Neural Networks in Psychiatry in European Neuropsychopharmacology. We conclude that new insights will be revealed from the neural network approaches to brain imaging in psychiatry that hold the potential to find causes for psychiatric disorders and (preventive) treatments in the future.

  12. Neural networks counting chimes.

    PubMed Central

    Amit, D J

    1988-01-01

    It is shown that the ideas that led to neural networks capable of recalling associatively and asynchronously temporal sequences of patterns can be extended to produce a neural network that automatically counts the cardinal number in a sequence of identical external stimuli. The network is explicitly constructed, analyzed, and simulated. Such a network may account for the cognitive effect of the automatic counting of chimes to tell the hour. A more general implication is that different electrophysiological responses to identical stimuli, at certain stages of cortical processing, do not necessarily imply synaptic modification, a la Hebb. Such differences may arise from the fact that consecutive identical inputs find the network in different stages of an active temporal sequence of cognitive states. These types of networks are then situated within a program for the study of cognition, which assigns the detection of meaning as the primary role of attractor neural networks rather than computation, in contrast to the parallel distributed processing attitude to the connectionist project. This interpretation is free of homunculus, as well as from the criticism raised against the cognitive model of symbol manipulation. Computation is then identified as the syntax of temporal sequences of quasi-attractors. PMID:3353371

  13. Evolving Neural Network Pattern Classifiers

    DTIC Science & Technology

    1994-05-01

    This work investigates the application of evolutionary programming for automatically configuring neural network architectures for pattern...evaluating a multitude of neural network model hypotheses. The evolutionary programming search is augmented with the Solis & Wets random optimization

  14. Mathematical Theory of Neural Networks

    DTIC Science & Technology

    1994-08-31

    This report provides a summary of the grant work by the principal investigators in the area of neural networks . The topics covered deal with...properties) for nets; and the use of neural networks for the control of nonlinear systems.

  15. Neural Networks and Micromechanics

    NASA Astrophysics Data System (ADS)

    Kussul, Ernst; Baidyk, Tatiana; Wunsch, Donald C.

    The title of the book, "Neural Networks and Micromechanics," seems artificial. However, the scientific and technological developments in recent decades demonstrate a very close connection between the two different areas of neural networks and micromechanics. The purpose of this book is to demonstrate this connection. Some artificial intelligence (AI) methods, including neural networks, could be used to improve automation system performance in manufacturing processes. However, the implementation of these AI methods within industry is rather slow because of the high cost of conducting experiments using conventional manufacturing and AI systems. To lower the cost, we have developed special micromechanical equipment that is similar to conventional mechanical equipment but of much smaller size and therefore of lower cost. This equipment could be used to evaluate different AI methods in an easy and inexpensive way. The proved methods could be transferred to industry through appropriate scaling. In this book, we describe the prototypes of low cost microequipment for manufacturing processes and the implementation of some AI methods to increase precision, such as computer vision systems based on neural networks for microdevice assembly and genetic algorithms for microequipment characterization and the increase of microequipment precision.

  16. Improved Autoassociative Neural Networks

    NASA Technical Reports Server (NTRS)

    Hand, Charles

    2003-01-01

    Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application- specific integrated circuit would be needed. A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
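
    As background for the comparison drawn above, here is a minimal bipolar autoassociative (Hopfield-type) network: Hebbian storage plus synchronous recall in which each neuron's next state is the sign of the inner product of the current pattern with that neuron's weight row. This is a generic sketch of the conventional network the description starts from, not the proposed nexus hardware.

```python
import numpy as np

def store(patterns):
    """Hebbian weights for bipolar (+1/-1) patterns; no self-connections."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Synchronous recall: each neuron takes the sign of the inner product of the
    current binary vector with its weight row."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1,  1, 1,  1, -1, -1, -1, -1]])
W = store(patterns)
noisy = patterns[0].copy()
noisy[0] = -noisy[0]                      # corrupt one bit
recovered = recall(W, noisy)              # converges back to the stored pattern
```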

  17. Generalized Adaptive Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  18. Neural Network Communications Signal Processing

    DTIC Science & Technology

    1994-08-01

    This final technical report describes the research and development results of the Neural Network Communications Signal Processing (NNCSP) Program...The objectives of the NNCSP program are to: (1) develop and implement a neural network and communications signal processing simulation system for the...purpose of exploring the applicability of neural network technology to communications signal processing; (2) demonstrate several configurations of the

  19. Neural Networks for Speech Application.

    DTIC Science & Technology

    1987-11-01

    This is a general introduction to the reemerging technology called neural networks, and how these networks may provide an important alternative to...traditional forms of computing in speech applications. Neural networks, sometimes called Artificial Neural Systems (ANS), have shown promise for solving

  20. Generalized Adaptive Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  1. Chaotic Neural Networks and Beyond

    NASA Astrophysics Data System (ADS)

    Aihara, Kazuyuki; Yamada, Taiji; Oku, Makito

    2013-01-01

    A chaotic neuron model, which is closely related to deterministic chaos observed experimentally with squid giant axons, is explained and used to construct a chaotic neural network model. Further, such a chaotic neural network is extended to different chaotic models such as a large-scale memory relation network, a locally connected network, a vector-valued network, and a quaternionic-valued neuron.

  2. Neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.

    1991-01-01

    A whole new arena of computer technologies is now beginning to form. Still in its infancy, neural network technology is a biologically inspired methodology which draws on nature's own cognitive processes. The Software Technology Branch has provided a software tool, Neural Execution and Training System (NETS), to industry, government, and academia to facilitate and expedite the use of this technology. NETS is written in the C programming language and can be executed on a variety of machines. Once a network has been debugged, NETS can produce a C source code which implements the network. This code can then be incorporated into other software systems. Described here are various software projects currently under development with NETS and the anticipated future enhancements to NETS and the technology.

  3. Interval probabilistic neural network.

    PubMed

    Kowalski, Piotr A; Kulczycki, Piotr

    2017-01-01

    Automated classification systems have allowed for the rapid development of exploratory data analysis. Such systems make the analysis results increasingly independent of human intervention, especially when inaccurate information is under consideration. The aim of this paper is to present a novel neural network approach for use in classifying interval information. The presented methodology is a generalization of the probabilistic neural network for interval data processing. The simple structure of this neural classification algorithm makes it applicable for research purposes. The procedure is based on the Bayes approach, ensuring minimal potential losses arising from classification errors. In this article, the topological structure of the network and the learning process are described in detail. Of note, the correctness of the procedure proposed here has been verified by way of numerical tests. These tests include examples of both synthetic data and benchmark instances. The results of numerical verification, carried out for different shapes of data sets, as well as a comparative analysis with other methods of similar conditioning, have validated both the concept presented here and its positive features.
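
    Since the proposed method generalizes the probabilistic neural network to interval data, a minimal sketch of the classical point-valued PNN (a Parzen-window Bayes classifier with Gaussian pattern units) is given below for orientation; the smoothing parameter and synthetic data are assumptions, and the interval extension itself is not reproduced.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Classical probabilistic neural network: sum Gaussian kernels per class
    (pattern and summation layers) and pick the class with the largest density."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        Xc = train_X[train_y == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores.append(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
    return classes[int(np.argmax(scores))]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
label = pnn_classify(np.array([2.5, 2.5]), X, y)
```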

  4. Rule generation from neural networks

    SciTech Connect

    Fu, L.

    1994-08-01

    The neural network approach has proven useful for the development of artificial intelligence systems. However, a disadvantage with this approach is that the knowledge embedded in the neural network is opaque. In this paper, we show how to interpret neural network knowledge in symbolic form. We lay down required definitions for this treatment, formulate the interpretation algorithm, and formally verify its soundness. The main result is a formalized relationship between a neural network and a rule-based system. In addition, it has been demonstrated that the neural network generates rules of better performance than the decision tree approach in noisy conditions. 7 refs.

  5. Cellular and synaptic network defects in autism.

    PubMed

    Peça, João; Feng, Guoping

    2012-10-01

    Many candidate genes are now thought to confer susceptibility to autism spectrum disorders (ASDs). Here we review four interrelated complexes, each composed of multiple families of genes that functionally coalesce on common cellular pathways. We illustrate a common thread in the organization of glutamatergic synapses and suggest a link between genes involved in Tuberous Sclerosis Complex, Fragile X syndrome, Angelman syndrome and several synaptic ASD candidate genes. When viewed in this context, progress in deciphering the molecular architecture of cellular protein-protein interactions together with the unraveling of synaptic dysfunction in neural networks may prove pivotal to advancing our understanding of ASDs. Copyright © 2012. Published by Elsevier Ltd.

  6. Neural networks for triggering

    SciTech Connect

    Denby, B.; Campbell, M.; Bedeschi, F.; Chriss, N.; Bowers, C.; Nesti, F.

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.

  7. Structured Pyramidal Neural Networks.

    PubMed

    Soares, Alessandra M; Fernandes, Bruno J T; Bastos-Filho, Carmelo J A

    2017-02-09

    The Pyramidal Neural Networks (PNN) are an example of a successful recently proposed model inspired by the human visual system and deep learning theory. PNNs are applied to computer vision and based on the concept of receptive fields. This paper proposes a variation of PNN, named here the Structured Pyramidal Neural Network (SPNN). SPNN has self-adaptive variable receptive fields, while the original PNNs rely on the same size for the fields of all neurons, which limits the model since it is not possible to put more computing resources in a particular region of the image. Another limitation of the original approach is the need to define values for a reasonable number of parameters, which can make PNNs difficult to apply in contexts in which the user does not have experience. SPNN, on the other hand, has fewer parameters. Its structure is determined using a novel method with Delaunay Triangulation and k-means clustering. SPNN achieved better results than PNNs and similar performance when compared to the Convolutional Neural Network (CNN) and the Support Vector Machine (SVM), but with lower memory usage and processing time.
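
    The structure determination step combines Delaunay triangulation with k-means clustering; the sketch below shows those two building blocks on placeholder 2-D points using SciPy and scikit-learn. How the resulting triangles map to receptive fields is the paper's contribution and is not reproduced; the cluster count and data are assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
points = rng.random((200, 2))                  # placeholder keypoint locations

clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit(points)
tri = Delaunay(clusters.cluster_centers_)      # triangulate the cluster centers

# tri.simplices lists triangles over the centers; assigning one receptive field per
# triangle or per cluster is an illustrative assumption, not the paper's exact rule.
triangles = tri.simplices
```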

  8. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node "brain" with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  9. Program Helps Simulate Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.
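
    A minimal NumPy sketch of back-propagation with the generalized delta rule (the learning method named above); layer sizes, data, and learning rate are hypothetical, and this is not the NNETS implementation itself.

      # Minimal back-propagation (generalized delta rule) for one hidden layer.
      # Hypothetical sizes and data; illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.random((100, 4))                                # inputs
      T = (X.sum(axis=1, keepdims=True) > 2).astype(float)    # targets

      W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
      W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
      eta = 0.5

      for epoch in range(2000):
          h = sigmoid(X @ W1 + b1)                            # forward pass
          y = sigmoid(h @ W2 + b2)
          delta2 = (y - T) * y * (1 - y)                      # output-layer delta
          delta1 = (delta2 @ W2.T) * h * (1 - h)              # hidden-layer delta
          W2 -= eta * h.T @ delta2 / len(X); b2 -= eta * delta2.mean(axis=0)
          W1 -= eta * X.T @ delta1 / len(X); b1 -= eta * delta1.mean(axis=0)

      print("final mean squared error:", float(((y - T) ** 2).mean()))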

  10. Space-Time Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Shelton, Robert O.

    1992-01-01

    Concept of space-time neural network affords distributed temporal memory enabling such network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace synaptic-connection weights of conventional back-error-propagation neural network.
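
    The idea that digital filters replace the synaptic-connection weights can be sketched as a synapse that applies a short FIR filter to the history of its input before the neuron's nonlinearity; the taps and test signal below are hypothetical.

      # Sketch: a synapse whose "weight" is a short FIR filter over the input history,
      # giving the connection a distributed temporal memory. Illustrative only.
      import numpy as np

      def fir_synapse(signal, taps):
          """Causal weighted sum of the current and past input samples."""
          return np.convolve(signal, taps, mode="full")[: len(signal)]

      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 200)
      x = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)

      taps = np.array([0.5, 0.3, 0.15, 0.05])          # a 4-tap filter in place of one scalar weight
      neuron_output = np.tanh(fir_synapse(x, taps))    # static nonlinearity after the temporal filter
      print(neuron_output[:5].round(3))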

  11. Stimulated Photorefractive Optical Neural Networks

    DTIC Science & Technology

    1992-12-15

    This final report describes research in optical neural networks performed under DARPA sponsorship at Hughes Aircraft Company during the period 1989...in photorefractive crystals. This approach reduces crosstalk and improves the utilization of the optical input device. Successfully implemented neural networks include the Perceptron, Bidirectional Associative Memory, and multi-layer backpropagation networks. Up to 104 neurons, 2x10(7) weights, and

  12. Trimaran Resistance Artificial Neural Network

    DTIC Science & Technology

    2011-01-01

    11th International Conference on Fast Sea Transportation, FAST 2011, Honolulu, Hawaii, USA, September 2011. Trimaran Resistance Artificial Neural Network, Richard... ...Artificial Neural Network and is restricted to the center and side-hull configurations tested. The value in the parametric model is that it is able to

  13. Optical Neural Network Classifier Architectures

    DTIC Science & Technology

    1998-04-01

    We present an adaptive opto-electronic neural network hardware architecture capable of exploiting parallel optics to realize real-time processing and...function neural network based on a previously demonstrated binary-input version. The greyscale-input capability broadens the range of applications for...a reduced feature set of multiwavelet images to improve training times and discrimination capability of the neural network. The design uses a joint

  14. Analysis of Simple Neural Networks

    DTIC Science & Technology

    1988-12-20

    Analysis of Simple Neural Networks. Chedsada Chinrungrueng. Master's Report under the supervision of Prof. Carlo H. Sequin, Department of...

  15. Neural Networks For Robot Control

    DTIC Science & Technology

    2001-04-17

    following: (a) Application of artificial neural networks (multi-layer perceptrons, MLPs) for 2D planar robot arm by using the dynamic backpropagation...methods for the adjustment of parameters; and optimization of the architecture; (b) Application of artificial neural networks in controlling closed...studies in controlling dynamic robot arms by using neural networks in real-time process; (2) Research of optimal architectures used in closed-loop systems in order to compare with adaptive and robust control.

  16. Accelerating Learning By Neural Networks

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1992-01-01

    Electronic neural networks made to learn faster by use of terminal teacher forcing. Method of supervised learning involves addition of teacher forcing functions to excitations fed as inputs to output neurons. Initially, teacher forcing functions are strong enough to force outputs to desired values; subsequently, these functions decay with time. When learning successfully completed, terminal teacher forcing vanishes, and dynamics or neural network become equivalent to those of conventional neural network. Simulated neural network with terminal teacher forcing learned to produce close approximation of circular trajectory in 400 iterations.
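
    The mechanism can be sketched as an extra drive on each output neuron, proportional to the output error, whose strength decays over training; the decay schedule and numbers below are hypothetical, not those of the cited network.

      # Sketch of terminal teacher forcing: the output neuron's excitation receives an
      # extra term proportional to (desired - actual) whose strength decays toward zero
      # as training proceeds. All quantities here are hypothetical.
      import numpy as np

      def output_excitation(network_drive, actual, desired, forcing_strength):
          """Excitation fed to an output neuron: its normal drive plus the teacher-forcing term."""
          return network_drive + forcing_strength * (desired - actual)

      for epoch in range(0, 401, 100):
          lam = 2.0 * np.exp(-epoch / 100.0)   # forcing strength decays with training time
          z = output_excitation(network_drive=0.3, actual=0.1, desired=0.8, forcing_strength=lam)
          print(f"epoch {epoch:3d}: forcing strength {lam:.3f}, output excitation {z:.3f}")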

  18. [Artificial neural networks in Neurosciences].

    PubMed

    Porras Chavarino, Carmen; Salinas Martínez de Lecea, José María

    2011-11-01

    This article shows that artificial neural networks are used for confirming the relationships between physiological and cognitive changes. Specifically, we explore the influence of a decrease in neurotransmitters on the behaviour of old people in recognition tasks. This artificial neural network recognizes learned patterns. When we change the activation threshold in some units, the artificial neural network simulates the experimental results of old people in recognition tasks. The main contributions of this paper, however, are the design of an artificial neural network whose operation is inspired by the nervous system, the way the inputs are coded, and the process of orthogonalization of patterns.

  19. Neural Networks, Reliability and Data Analysis

    DTIC Science & Technology

    1993-01-01

    Neural network technology has been surveyed with the intent of determining the feasibility and impact neural networks may have in the area of...automated reliability tools. Data analysis capabilities of neural networks appear to be very applicable to reliability science due to similar mathematical...tendencies in data.... Neural networks, Reliability, Data analysis, Automated reliability tools, Automated intelligent information processing, Statistical neural network.

  20. Three dimensional living neural networks

    NASA Astrophysics Data System (ADS)

    Linnenberger, Anna; McLeod, Robert R.; Basta, Tamara; Stowell, Michael H. B.

    2015-08-01

    We investigate holographic optical tweezing combined with step-and-repeat maskless projection micro-stereolithography for fine control of 3D positioning of living cells within a 3D microstructured hydrogel grid. Samples were fabricated using three different cell lines: PC12, NT2/D1 and iPSC. PC12 cells are a rat cell line capable of differentiation into neuron-like cells. NT2/D1 cells are a human cell line that exhibits biochemical and developmental properties similar to those of an early embryo; when exposed to retinoic acid, the cells differentiate into human neurons useful for studies of human neurological disease. Finally, induced pluripotent stem cells (iPSC) were utilized with the goal of future studies of neural networks fabricated from human iPSC derived neurons. Cells are positioned in the monomer solution with holographic optical tweezers at 1064 nm and then are encapsulated by photopolymerization of polyethylene glycol (PEG) hydrogels formed by thiol-ene photo-click chemistry via projection of a 512x512 spatial light modulator (SLM) illuminated at 405 nm. Fabricated samples are incubated in differentiation media such that cells cease to divide and begin to form axons or axon-like structures. By controlling the position of the cells within the encapsulating hydrogel structure, the formation of the neural circuits is controlled. The samples fabricated with this system are a useful model for future studies of neural circuit formation, neurological disease, cellular communication, plasticity, and repair mechanisms.

  1. MSAT and cellular hybrid networking

    NASA Technical Reports Server (NTRS)

    Baranowsky, Patrick W., II

    1993-01-01

    Westinghouse Electric Corporation is developing both the Communications Ground Segment and the Series 1000 Mobile Phone for American Mobile Satellite Corporation's (AMSC's) Mobile Satellite (MSAT) system. The success of the voice services portion of this system depends, to some extent, upon the interoperability of the cellular network and the satellite communication circuit switched communication channels. This paper will describe the set of user-selectable cellular interoperable modes (cellular first/satellite second, etc.) provided by the Mobile Phone and describe how they are implemented with the ground segment. Topics including roaming registration and cellular-to-satellite 'seamless' call handoff will be discussed, along with the relevant Interim Standard IS-41 Revision B Cellular Radiotelecommunications Intersystem Operations and IOS-553 Mobile Station - Land Station Compatibility Specification.

  2. Interacting neural networks

    NASA Astrophysics Data System (ADS)

    Metzler, R.; Kinzel, W.; Kanter, I.

    2000-08-01

    Several scenarios of interacting neural networks which are trained either in an identical or in a competitive way are solved analytically. In the case of identical training each perceptron receives the output of its neighbor. The symmetry of the stationary state as well as the sensitivity to the training algorithm used are investigated. Two competitive perceptrons trained on mutually exclusive learning aims and a perceptron which is trained on the opposite of its own output are examined analytically. An ensemble of competitive perceptrons is used as the decision-making algorithm in a model of a closed market (the El Farol Bar problem, or the Minority Game, in which a set of agents who have to make a binary decision is considered); each network is trained on the history of minority decisions. This ensemble of perceptrons relaxes to a stationary state whose performance can be better than random.

  3. Nodule detection in a lung region that's segmented with using genetic cellular neural networks and 3D template matching with fuzzy rule based thresholding.

    PubMed

    Ozekes, Serhat; Osman, Onur; Ucan, Osman N

    2008-01-01

    The purpose of this study was to develop a new method for automated lung nodule detection in serial section CT images with using the characteristics of the 3D appearance of the nodules that distinguish themselves from the vessels. Lung nodules were detected in four steps. First, to reduce the number of region of interests (ROIs) and the computation time, the lung regions of the CTs were segmented using Genetic Cellular Neural Networks (G-CNN). Then, for each lung region, ROIs were specified with using the 8 directional search; +1 or -1 values were assigned to each voxel. The 3D ROI image was obtained by combining all the 2-Dimensional (2D) ROI images. A 3D template was created to find the nodule-like structures on the 3D ROI image. Convolution of the 3D ROI image with the proposed template strengthens the shapes that are similar to those of the template and it weakens the other ones. Finally, fuzzy rule based thresholding was applied and the ROI's were found. To test the system's efficiency, we used 16 cases with a total of 425 slices, which were taken from the Lung Image Database Consortium (LIDC) dataset. The computer aided diagnosis (CAD) system achieved 100% sensitivity with 13.375 FPs per case when the nodule thickness was greater than or equal to 5.625 mm. Our results indicate that the detection performance of our algorithm is satisfactory, and this may well improve the performance of computer-aided detection of lung nodules.
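
    The template-matching step can be sketched as a convolution of the binary ROI volume with a small spherical kernel followed by a threshold; the kernel radius and the plain cutoff below are hypothetical stand-ins, and the paper's fuzzy rule based thresholding is not reproduced.

      # Sketch of 3D template matching: convolve a binary ROI volume with a spherical
      # template and threshold the response. Template size and threshold are hypothetical;
      # a plain cutoff stands in for the fuzzy rule based thresholding of the paper.
      import numpy as np
      from scipy.ndimage import convolve

      rng = np.random.default_rng(0)
      roi = (rng.random((40, 40, 40)) > 0.95).astype(float)   # stand-in ROI volume
      roi[18:23, 18:23, 18:23] = 1.0                          # a nodule-like blob

      r = 3
      zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
      template = (zz**2 + yy**2 + xx**2 <= r**2).astype(float)
      template /= template.sum()                              # normalised spherical template

      response = convolve(roi, template, mode="constant")     # strengthens nodule-like shapes
      candidates = np.argwhere(response > 0.6)                # crude threshold on the response
      print("candidate voxels:", len(candidates))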

  4. Nodule Detection in a Lung Region that's Segmented with Using Genetic Cellular Neural Networks and 3D Template Matching with Fuzzy Rule Based Thresholding

    PubMed Central

    Osman, Onur; Ucan, Osman N.

    2008-01-01

    Objective The purpose of this study was to develop a new method for automated lung nodule detection in serial section CT images with using the characteristics of the 3D appearance of the nodules that distinguish themselves from the vessels. Materials and Methods Lung nodules were detected in four steps. First, to reduce the number of region of interests (ROIs) and the computation time, the lung regions of the CTs were segmented using Genetic Cellular Neural Networks (G-CNN). Then, for each lung region, ROIs were specified with using the 8 directional search; +1 or -1 values were assigned to each voxel. The 3D ROI image was obtained by combining all the 2-Dimensional (2D) ROI images. A 3D template was created to find the nodule-like structures on the 3D ROI image. Convolution of the 3D ROI image with the proposed template strengthens the shapes that are similar to those of the template and it weakens the other ones. Finally, fuzzy rule based thresholding was applied and the ROI's were found. To test the system's efficiency, we used 16 cases with a total of 425 slices, which were taken from the Lung Image Database Consortium (LIDC) dataset. Results The computer aided diagnosis (CAD) system achieved 100% sensitivity with 13.375 FPs per case when the nodule thickness was greater than or equal to 5.625 mm. Conclusion Our results indicate that the detection performance of our algorithm is satisfactory, and this may well improve the performance of computer-aided detection of lung nodules. PMID:18253070

  5. Vehicle Study with Neural Networks

    NASA Astrophysics Data System (ADS)

    Ruan, Xiaogang; Dai, Lizhen

    Biological organisms are characterized by phototaxis and negative phototaxis. Can a machine be endowed with such a characteristic? This is the question we study in this paper, and a method of realizing a vehicle's phototaxis and negative phototaxis through a neural network is presented. A randomly generated network is used as the main computational unit. Only the weights of the output units of this network are changed during training. It is shown that this simple type of biologically realistic neural network is able to simulate robot controllers like those incorporated in Braitenberg vehicles. Two experiments are presented illustrating the stage-like learning that emerges with this neural network.

  6. Dynamic interactions in neural networks

    SciTech Connect

    Arbib, M.A.; Amari, S.

    1989-01-01

    The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.

  7. Neural Networks for the Beginner.

    ERIC Educational Resources Information Center

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  8. Neural network applications in telecommunications

    NASA Technical Reports Server (NTRS)

    Alspector, Joshua

    1994-01-01

    Neural network capabilities include automatic and organized handling of complex information, quick adaptation to continuously changing environments, nonlinear modeling, and parallel implementation. This viewgraph presentation presents Bellcore work on applications, learning chip computational function, learning system block diagram, neural network equalization, broadband access control, calling-card fraud detection, software reliability prediction, and conclusions.

  9. Technology Assessment of Neural Networks

    DTIC Science & Technology

    1989-02-13

    Unlike a Von Neumann type of computer which needs to be programmed to carry out an information-processing function, neural networks are promised as...trainable through a series of trials to learn how to process information. An assessment of the current, near-term (1995), and long-term (2010) trends in Neural Networks is given.

  10. Phase Detection Using Neural Networks.

    DTIC Science & Technology

    1997-03-10

    A likelihood of detecting a reflected signal characterized by phase discontinuities and background noise is enhanced by utilizing neural networks to...identify coherency intervals. The received signal is processed into a predetermined format such as a digital time series. Neural networks perform

  11. Hybrid Neural Network for Pattern Recognition.

    DTIC Science & Technology

    1997-02-03

    two one-layer neural networks and the second stage comprises a feedforward two-layer neural network. A method for recognizing patterns is also...topological representations of the input patterns using the first and second neural networks. The method further comprises providing a third neural network for...classifying and recognizing the inputted patterns and training the third neural network with a back-propagation algorithm so that the third neural network recognizes at least one pattern of interest.

  12. Improved conditions for global exponential stability of recurrent neural networks with time-varying delays.

    PubMed

    Zeng, Zhigang; Wang, Jun

    2006-05-01

    This paper presents new theoretical results on global exponential stability of recurrent neural networks with bounded activation functions and time-varying delays. The stability conditions depend on external inputs, connection weights, and time delays of recurrent neural networks. Using these results, the global exponential stability of recurrent neural networks can be derived, and the estimated location of the equilibrium point can be obtained. As typical representatives, the Hopfield neural network (HNN) and the cellular neural network (CNN) are examined in detail.
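
    For reference, stability results of this kind are usually stated for a delayed additive network; the system below is the generic form from the literature (not necessarily the exact statement of this paper), with $d_i > 0$ the self-feedback rates, $a_{ij}$ and $b_{ij}$ the instantaneous and delayed connection weights, $f_j$ the bounded activation functions, $\tau_{ij}(t)$ the time-varying delays, and $u_i$ the external inputs:

      \dot{x}_i(t) = -d_i\, x_i(t) + \sum_{j=1}^{n} a_{ij}\, f_j(x_j(t))
                     + \sum_{j=1}^{n} b_{ij}\, f_j(x_j(t - \tau_{ij}(t))) + u_i,
      \qquad i = 1, \dots, n.

    The Hopfield and cellular neural network cases mentioned above correspond to particular choices of the activation functions $f_j$.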

  13. Neural Network Development Tool (NETS)

    NASA Technical Reports Server (NTRS)

    Baffes, Paul T.

    1990-01-01

    Artificial neural networks formed from hundreds or thousands of simulated neurons, connected in manner similar to that in human brain. Such network models learning behavior. Using NETS involves translating problem to be solved into input/output pairs, designing network configuration, and training network. Written in C.

  14. Neural networks in astronomy.

    PubMed

    Tagliaferri, Roberto; Longo, Giuseppe; Milano, Leopoldo; Acernese, Fausto; Barone, Fabrizio; Ciaramella, Angelo; De Rosa, Rosario; Donalek, Ciro; Eleuteri, Antonio; Raiconi, Giancarlo; Sessa, Salvatore; Staiano, Antonino; Volpicelli, Alfredo

    2003-01-01

    In the last decade, the use of neural networks (NN) and of other soft computing methods has begun to spread also in the astronomical community which, due to the required accuracy of the measurements, is usually reluctant to use automatic tools to perform even the most common tasks of data reduction and data mining. The federation of heterogeneous large astronomical databases which is foreseen in the framework of the astrophysical virtual observatory and national virtual observatory projects, is, however, posing unprecedented data mining and visualization problems which will find a rather natural and user friendly answer in artificial intelligence tools based on NNs, fuzzy sets or genetic algorithms. This review is aimed to both astronomers (who often have little knowledge of the methodological background) and computer scientists (who often know little about potentially interesting applications), and therefore will be structured as follows: after giving a short introduction to the subject, we shall summarize the methodological background and focus our attention on some of the most interesting fields of application, namely: object extraction and classification, time series analysis, noise identification, and data mining. Most of the original work described in the paper has been performed in the framework of the AstroNeural collaboration (Napoli-Salerno).

  15. Exponential synchronization of a class of neural networks with time-varying delays.

    PubMed

    Cheng, Chao-Jung; Liao, Teh-Lu; Yan, Jun-Juh; Hwang, Chi-Chuan

    2006-02-01

    This paper aims to present a synchronization scheme for a class of delayed neural networks, which covers the Hopfield neural networks and cellular neural networks with time-varying delays. A feedback control gain matrix is derived to achieve the exponential synchronization of the drive-response structure of neural networks by using the Lyapunov stability theory, and its exponential synchronization condition can be verified if a certain Hamiltonian matrix has no eigenvalues on the imaginary axis. This condition can avoid solving an algebraic Riccati equation. Both the cellular neural networks and Hopfield neural networks with time-varying delays are given as examples for illustration.

  16. Neural networks for calibration tomography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur

    1993-01-01

    Artificial neural networks are suitable for performing pattern-to-pattern calibrations. These calibrations are potentially useful for facilities operations in aeronautics, the control of optical alignment, and the like. Computed tomography is compared with neural net calibration tomography for estimating density from its x-ray transform. X-ray transforms are measured, for example, in diffuse-illumination, holographic interferometry of fluids. Computed tomography and neural net calibration tomography are shown to have comparable performance for a 10 degree viewing cone and 29 interferograms within that cone. The system of tomography discussed is proposed as a relevant test of neural networks and other parallel processors intended for using flow visualization data.

  17. New Markov-Shannon Entropy models to assess connectivity quality in complex networks: from molecular to cellular pathway, Parasite-Host, Neural, Industry, and Legal-Social networks.

    PubMed

    Riera-Fernández, Pablo; Munteanu, Cristian R; Escobar, Manuel; Prado-Prado, Francisco; Martín-Romalde, Raquel; Pereira, David; Villalba, Karen; Duardo-Sánchez, Aliuska; González-Díaz, Humberto

    2012-01-21

    Graph and Complex Network theory is expanding its application to different levels of matter organization such as molecular, biological, technological, and social networks. A network is a set of items, usually called nodes, with connections between them, which are called links or edges. There are many different experimental and/or theoretical methods to assign node-node links depending on the type of network we want to construct. Unfortunately, the use of a method for experimental reevaluation of the entire network is very expensive in terms of time and resources; thus the development of cheaper theoretical methods is of major importance. In addition, different methods to link nodes in the same type of network are not totally accurate in such a way that they do not always coincide. In this sense, the development of computational methods useful to evaluate connectivity quality in complex networks (a posteriori of network assemble) is a goal of major interest. In this work, we report for the first time a new method to calculate numerical quality scores S(L(ij)) for network links L(ij) (connectivity) based on the Markov-Shannon Entropy indices of order k-th (θ(k)) for network nodes. The algorithm may be summarized as follows: (i) first, the θ(k)(j) values are calculated for all j-th nodes in a complex network already constructed; (ii) A Linear Discriminant Analysis (LDA) is used to seek a linear equation that discriminates connected or linked (L(ij)=1) pairs of nodes experimentally confirmed from non-linked ones (L(ij)=0); (iii) the new model is validated with external series of pairs of nodes; (iv) the equation obtained is used to re-evaluate the connectivity quality of the network, connecting/disconnecting nodes based on the quality scores calculated with the new connectivity function. This method was used to study different types of large networks. The linear models obtained produced the following results in terms of overall accuracy for network reconstruction
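
    Steps (ii) through (iv) amount to fitting a linear discriminant over node-pair descriptors and reusing its score as the connectivity-quality function S(L(ij)); the sketch below is schematic, with simple graph statistics standing in for the Markov-Shannon entropy indices θ(k).

      # Schematic of steps (ii)-(iv): fit an LDA that separates linked from non-linked
      # node pairs and use its decision score as a connectivity-quality function.
      # Degree, clustering and noise stand in for the entropy indices theta_k(j).
      import numpy as np
      import networkx as nx
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      G = nx.erdos_renyi_graph(60, 0.08, seed=0)               # an already constructed network
      theta = {v: np.array([G.degree(v), nx.clustering(G, v), rng.random()]) for v in G}

      def pair_features(i, j):
          return np.concatenate([theta[i] + theta[j], np.abs(theta[i] - theta[j])])

      linked = [pair_features(i, j) for i, j in G.edges]       # confirmed links
      unlinked = [pair_features(i, j) for i, j in nx.non_edges(G)]
      X = np.vstack(linked + unlinked)
      y = np.array([1] * len(linked) + [0] * len(unlinked))

      lda = LinearDiscriminantAnalysis().fit(X, y)
      scores = lda.decision_function(X)                        # quality scores for every pair
      print("mean score, linked vs non-linked:",
            round(float(scores[y == 1].mean()), 2), round(float(scores[y == 0].mean()), 2))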

  18. Modular, Hierarchical Learning By Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    Modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks more structured than neural networks in which all neurons fully interconnected. These networks utilize general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, sparsity of modular units and connections, and fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning streamlined by imitating some aspects of biological neural networks.

  20. Neural Networks for Readability Analysis.

    ERIC Educational Resources Information Center

    McEneaney, John E.

    This paper describes and reports on the performance of six related artificial neural networks that have been developed for the purpose of readability analysis. Two networks employ counts of linguistic variables that simulate a traditional regression-based approach to readability. The remaining networks determine readability from "visual…

  1. A Complexity Theory of Neural Networks

    DTIC Science & Technology

    1990-04-14

    Significant results have been obtained on the computational complexity of analog neural networks and distributed voting. The computing power and...learning algorithms for limited precision analog neural networks have been investigated. Lower bounds for constant depth, polynomial size analog neural networks, and a limited version of discrete neural networks have been obtained. The work on distributed voting has important applications for distributed

  2. Collective Computation of Neural Network

    DTIC Science & Technology

    1990-03-15

    Computational neuroscience is a new branch of neuroscience originating from current research on the theory of computer...scientists working in artificial intelligence engineering and neuroscience. The paper introduces the collective computational properties of model neural...vision research. On this basis, the authors analyzed the significance of the Hopfield model. Key phrases: Computational Neuroscience, Neural Network, Model

  3. Artificial Neural Network Analysis System

    DTIC Science & Technology

    2007-11-02

    Target detection, multi-target tracking and threat identification of ICBM and its warheads by sensor fusion and data fusion of sensors in a fuzzy neural network system based on the compound eye of a fly.

  4. The holographic neural network: Performance comparison with other neural networks

    NASA Astrophysics Data System (ADS)

    Klepko, Robert

    1991-10-01

    The artificial neural network shows promise for use in recognition of high resolution radar images of ships. The holographic neural network (HNN) promises a very large data storage capacity and excellent generalization capability, both of which can be achieved with only a few learning trials, unlike most neural networks which require on the order of thousands of learning trials. The HNN is specially designed for pattern association storage, and mathematically realizes the storage and retrieval mechanisms of holograms. The pattern recognition capability of the HNN was studied, and its performance was compared with five other commonly used neural networks: the Adaline, Hamming, bidirectional associative memory, recirculation, and back propagation networks. The patterns used for testing represented artificial high resolution radar images of ships, and appear as a two dimensional topology of peaks with various amplitudes. The performance comparisons showed that the HNN does not perform as well as the other neural networks when using the same test data. However, modification of the data to make it appear more Gaussian distributed, improved the performance of the network. The HNN performs best if the data is completely Gaussian distributed.

  5. Digital Neural Networks for New Media

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Lambert; Malki, Suleyman

    Neural Networks perform computationally intensive tasks offering smart solutions for many new media applications. A number of analog and mixed digital/analog implementations have been proposed to smooth the algorithmic gap. But gradually, the digital implementation has become feasible, and the dedicated neural processor is on the horizon. A notable example is the Cellular Neural Network (CNN). The analog direction has matured for low-power, smart vision sensors; the digital direction is gradually being shaped into an IP-core for algorithm acceleration, especially for use in FPGA-based high-performance systems. The chapter discusses the next step towards a flexible and scalable multi-core engine using Application-Specific Integrated Processors (ASIP). This topographic engine can serve many new media tasks, as illustrated by novel applications in Homeland Security. We conclude with a view on the CNN kaleidoscope for the year 2020.

  6. Interval neural networks

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Traditional neural networks like multi-layered perceptrons (MLP) use example patterns, i.e., pairs of real-valued observation vectors (x, y), to approximate a function f(x) = y. To determine the parameters of the approximation, a special version of the gradient descent method called back-propagation is widely used. In many situations, observations of the input and output variables are not precise; instead, we usually have intervals of possible values. The imprecision could be due to the limited accuracy of the measuring instrument or could reflect genuine uncertainty in the observed variables. In such situations, input and output data consist of mixed data types: intervals and precise numbers. Function approximation in interval domains is considered in this paper. We discuss a modification of the classical back-propagation learning algorithm to interval domains. Results are presented with simple examples demonstrating a few properties of nonlinear interval mapping, such as noise resistance and finding a set of solutions to the function approximation problem.
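
    Propagating an interval through a single layer requires splitting the weight matrix by sign so that the lower and upper bounds remain exact under the affine map and a monotone activation; the sketch below shows only that interval forward pass, with hypothetical sizes, not the paper's interval back-propagation.

      # Interval-valued forward pass through one layer: split weights by sign so the
      # bounds propagate exactly; tanh is monotone, so the bounds are preserved.
      # Hypothetical sizes; the interval back-propagation itself is not reproduced.
      import numpy as np

      def interval_layer(lo, hi, W, b):
          Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
          out_lo = lo @ Wp + hi @ Wn + b
          out_hi = hi @ Wp + lo @ Wn + b
          return np.tanh(out_lo), np.tanh(out_hi)

      rng = np.random.default_rng(0)
      W = rng.normal(size=(3, 4)); b = rng.normal(size=4)

      x_lo = np.array([0.9, -0.1, 0.4])          # imprecise observation: interval inputs
      x_hi = np.array([1.1,  0.1, 0.6])
      y_lo, y_hi = interval_layer(x_lo, x_hi, W, b)
      print("output interval lower:", y_lo.round(3))
      print("output interval upper:", y_hi.round(3))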

  7. Neural-Network-Development Program

    NASA Technical Reports Server (NTRS)

    Phillips, Todd A.

    1993-01-01

    NETS, software tool for development and evaluation of neural networks, provides simulation of neural-network algorithms plus computing environment for development of such algorithms. Uses back-propagation learning method for all of networks it creates. Enables user to customize patterns of connections between layers of network. Also provides features for saving, during learning process, values of weights, providing more-precise control over learning process. Written in ANSI standard C language. Machine-independent version (MSC-21588) includes only code for command-line-interface version of NETS 3.0.

  8. VLSI implementation of neural networks.

    PubMed

    Wilamowski, B M; Binfet, J; Kaynak, M O

    2000-06-01

    Currently, fuzzy controllers are the most popular choice for hardware implementation of complex control surfaces because they are easy to design. Neural controllers are more complex and hard to train, but provide an outstanding control surface with much less error than that of a fuzzy controller. There are also some problems that have to be solved before the networks can be implemented on VLSI chips. First, an approximation function needs to be developed because CMOS neural networks have an activation function different than any function used in neural network software. Next, this function has to be used to train the network. Finally, the last problem for VLSI designers is the quantization effect caused by discrete values of the channel length (L) and width (W) of MOS transistor geometries. Two neural networks were designed in 1.5 microm technology. Using adequate approximation functions solved the problem of activation function. With this approach, trained networks were characterized by very small errors. Unfortunately, when the weights were quantized, errors were increased by an order of magnitude. However, even though the errors were enlarged, the results obtained from neural network hardware implementations were superior to the results obtained with fuzzy system approach.

  9. Cellular recurrent deep network for image registration

    NASA Astrophysics Data System (ADS)

    Alam, M.; Vidyaratne, L.; Iftekharuddin, Khan M.

    2015-09-01

    Image registration using Artificial Neural Network (ANN) remains a challenging learning task. Registration can be posed as a two-step problem: parameter estimation and actual alignment/transformation using the estimated parameters. To date ANN based image registration techniques only perform the parameter estimation, while affine equations are used to perform the actual transformation. In this paper, we propose a novel deep ANN based image rigid registration that combines parameter estimation and transformation as a simultaneous learning task. Our previous work shows that a complex universal approximator known as Cellular Simultaneous Recurrent Network (CSRN) can successfully approximate affine transformations with known transformation parameters. This study introduces a deep ANN that combines a feed forward network with a CSRN to perform full rigid registration. Layer wise training is used to pre-train feed forward network for parameter estimation and followed by a CSRN for image transformation respectively. The deep network is then fine-tuned to perform the final registration task. Our result shows that the proposed deep ANN architecture achieves comparable registration accuracy to that of image affine transformation using CSRN with known parameters. We also demonstrate the efficacy of our novel deep architecture by a performance comparison with a deep clustered MLP.

  10. Antenna analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern
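
    The pattern-samples-in, element-excitations-out mapping can be sketched as multi-output regression on a generic 20-element uniform linear array; the training data below are random excitations, not the Woodward-Lawson or Dolph-Chebyshev patterns of the study.

      # Sketch: train a network to map sampled array patterns back to element excitations.
      # Generic half-wavelength-spaced linear array and random excitations; illustrative only.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      n_el, n_samp = 20, 41
      theta = np.linspace(0, np.pi, n_samp)
      phase = np.outer(np.cos(theta), np.arange(n_el)) * np.pi   # steering phases, d = lambda/2

      def pattern(excitations):
          return np.abs(np.exp(1j * phase) @ excitations) / n_el  # normalised array factor

      E = rng.normal(size=(300, n_el)) + 1j * rng.normal(size=(300, n_el))
      X = np.array([pattern(e) for e in E])                      # inputs: 41 pattern samples
      Y = np.hstack([E.real, E.imag])                            # outputs: 20 real + 20 imaginary parts

      net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0).fit(X, Y)
      pred = net.predict(X[:1])[0]
      recovered = pred[:n_el] + 1j * pred[n_el:]
      print("pattern reconstruction error:", round(float(np.abs(pattern(recovered) - X[0]).max()), 3))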

  11. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    PubMed

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, it is naturally makes one ponder how to generalize the first-order Hopfield neural networks to the fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method: fractional calculus to implement FHNN. First, we implement fractor in the form of an analog circuit. Second, we implement FHNN by utilizing fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.

  12. Preliminary Analysis of the efficacy of Artificial neural Network (ANN) and Cellular Automaton (CA) based Land Use Models in Urban Land-Use Planning

    NASA Astrophysics Data System (ADS)

    Harun, R.

    2013-05-01

    This research provides an opportunity for collaboration between urban planners and modellers by providing clear theoretical foundations for the two most widely used urban land use models, and by assessing the effectiveness of applying the models in an urban planning context. Understanding urban land cover change is an essential element of sustainable urban development, as it affects ecological functioning in urban ecosystems. Rapid urbanization, driven by the growing inclination of people to settle in urban areas, has increased the complexity of predicting the shape and size to which cities will grow. The dynamic changes in the spatial pattern of urban landscapes have exposed policy makers and environmental scientists to great challenges. But geographic science has grown in parallel with the advancements in computer science. Models and tools have been developed to support urban planning by analyzing the causes and consequences of land use changes and projecting the future. Of all the different types of land use models available today, researchers have found that the most frequently used are Cellular Automaton (CA) and Artificial Neural Network (ANN) models. But studies have demonstrated that the existing land use models have not been able to meet the needs of planners and policy makers. There are two primary causes identified behind this. First, there is inadequate understanding of the fundamental theories and application of the models in the urban planning context, i.e., there is a gap in communication between modellers and urban planners. Second, in the process of simplifying the complex urban system, the existing models exclude many key drivers that guide urban spatial patterns. Thus the models end up being effective in assessing the impacts of certain land use policies, but cannot contribute to new policy formulation. This paper is an attempt to increase the knowledge base of planners on the most frequently used land use models and also assess the

  13. Stimulated photorefractive optical neural networks

    NASA Astrophysics Data System (ADS)

    Owechko, Y.; Dunning, G.; Nordin, G.; Soffer, B. H.

    1992-12-01

    This final report describes research in optical neural networks performed under DARPA sponsorship at Hughes Aircraft Company during the period 1989-1992. The objective of demonstrating a programmable optical computer for flexible implementation of multi-layer neural network models was successfully achieved. The advantages of optics for neural network implementations include large storage capacity, high connectivity, and massive parallelism which result in high computation rates. The optical neurocomputer developed on this program is based on a new type of holography, cascaded grating holography (CGH), in which the neural network weights are distributed among angularly- and spatially-multiplexed gratings generated by stimulated processes in photorefractive crystals. This approach reduces crosstalk and improves the utilization of the optical input device. Successfully implemented neural networks include the Perceptron, Bidirectional Associative Memory, and multi-layer backpropagation networks. Up to 104 neurons, 2x10(7) weights, and processing rates of 2x10(7) connection updates per second were achieved. Packaging concepts for future versions of the neurocomputer were also studied.

  14. Optical disk based neural network

    NASA Astrophysics Data System (ADS)

    Lu, Taiwei; Choi, Kyusun; Wu, Shudong; Xu, Xin; Yu, Francis T. S.

    1989-11-01

    An optical disk (OD)-based optical neural network architecture for high-speed and large-capacity associative processing is proposed. The information storage by the OD is described, and an optical neural network using an OD for large-capacity storage of interconnection weight matrices (IWMs) is shown and discussed. The ways that optical interconnections are established between the IWM and the input pattern is shown, as is the way that the loop is closed. The operation of the OD in the network is examined.

  15. Multiprocessor Neural Network in Healthcare.

    PubMed

    Godó, Zoltán Attila; Kiss, Gábor; Kocsis, Dénes

    2015-01-01

    A possible way of creating a multiprocessor artificial neural network is by the use of microcontrollers. The RISC processors' high performance and the large number of I/O ports mean they are greatly suitable for creating such a system. During our research, we wanted to see if it is possible to efficiently create interaction between the artificial neural network and the natural nervous system. To achieve as much analogy to the living nervous system as possible, we created a frequency-modulated analog connection between the units. Our system is connected to the living nervous system through 128 microelectrodes. Two-way communication is provided through A/D transformation, which is even capable of testing psychopharmacons. The microcontroller-based analog artificial neural network can play a great role in medical signal processing, such as ECG, EEG, etc.

  16. Micromechanics of cellularized biopolymer networks

    PubMed Central

    Jones, Christopher A. R.; Cibula, Matthew; Feng, Jingchen; Krnacik, Emma A.; McIntyre, David H.; Levine, Herbert; Sun, Bo

    2015-01-01

    Collagen gels are widely used in experiments on cell mechanics because they mimic the extracellular matrix in physiological conditions. Collagen gels are often characterized by their bulk rheology; however, variations in the collagen fiber microstructure and cell adhesion forces cause the mechanical properties to be inhomogeneous at the cellular scale. We study the mechanics of type I collagen on the scale of tens to hundreds of microns by using holographic optical tweezers to apply pN forces to microparticles embedded in the collagen fiber network. We find that in response to optical forces, particle displacements are inhomogeneous, anisotropic, and asymmetric. Gels prepared at 21 °C and 37 °C show qualitative difference in their micromechanical characteristics. We also demonstrate that contracting cells remodel the micromechanics of their surrounding extracellular matrix in a strain- and distance-dependent manner. To further understand the micromechanics of cellularized extracellular matrix, we have constructed a computational model which reproduces the main experiment findings. PMID:26324923

  17. Neural network computer simulation of medical aerosols.

    PubMed

    Richardson, C J; Barlow, D J

    1996-06-01

    Preliminary investigations have been conducted to assess the potential for using artificial neural networks to simulate aerosol behaviour, with a view to employing this type of methodology in the evaluation and design of pulmonary drug-delivery systems. Details are presented of the general purpose software developed for these tasks; it implements a feed-forward back-propagation algorithm with weight decay and connection pruning, the user having complete run-time control of the network architecture and mode of training. A series of exploratory investigations is then reported in which different network structures and training strategies are assessed in terms of their ability to simulate known patterns of fluid flow in simple model systems. The first of these involves simulations of cellular automata-generated data for fluid flow through a partially obstructed two-dimensional pipe. The artificial neural networks are shown to be highly successful in simulating the behaviour of this simple linear system, but with important provisos relating to the information content of the training data and the criteria used to judge when the network is properly trained. A second set of investigations is then reported in which similar networks are used to simulate patterns of fluid flow through aerosol generation devices, using training data furnished through rigorous computational fluid dynamics modelling. These more complex three-dimensional systems are modelled with equal success. It is concluded that carefully tailored, well trained networks could provide valuable tools not just for predicting but also for analysing the spatial dynamics of pharmaceutical aerosols.
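
    The weight-decay-and-pruning training strategy mentioned above can be sketched as an L2 penalty on the weights followed by removal of near-zero connections; the penalty strength, pruning threshold, and data below are hypothetical, and a single linear unit stands in for the full network.

      # Sketch of weight decay and connection pruning on a single linear unit.
      # Penalty strength, pruning threshold, and data are hypothetical.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 10))
      w_true = np.array([2.0, -1.5, 0, 0, 0, 0, 0, 0, 0.5, 0])
      y = X @ w_true + 0.05 * rng.normal(size=200)

      w = np.zeros(10)
      eta, decay = 0.01, 0.05
      for step in range(2000):
          grad = X.T @ (X @ w - y) / len(X) + decay * w   # squared-error gradient + weight decay
          w -= eta * grad

      pruned = np.where(np.abs(w) < 0.1, 0.0, w)          # prune near-zero connections
      print("learned weights:", w.round(2))
      print("after pruning:  ", pruned.round(2))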

  18. Plant Growth Models Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Bubenheim, David

    1997-01-01

    In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.

  19. Signal Approximation with a Wavelet Neural Network

    DTIC Science & Technology

    1992-12-01

    specialized electronic devices like the Intel Electronically Trainable Analog Neural Network (ETANN) chip. The WNN representation allows the...accurately approximated with a WNN trained with irregularly sampled data. Signal approximation, Wavelet neural network.

  20. A Neural Network Based Speech Recognition System

    DTIC Science & Technology

    1990-02-01

    encoder and identifies individual words. This use of neural networks offers two advantages over conventional algorithmic detectors: the detection...environment. Keywords: Artificial intelligence; Neural networks; Back propagation; Speech recognition.

  1. Plant Growth Models Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Bubenheim, David

    1997-01-01

    In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.

  2. Neural Networks for Flight Control

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1996-01-01

    Neural networks are being developed at NASA Ames Research Center to permit real-time adaptive control of time varying nonlinear systems, enhance the fault-tolerance of mission hardware, and permit online system reconfiguration. In general, the problem of controlling time varying nonlinear systems with unknown structures has not been solved. Adaptive neural control techniques show considerable promise and are being applied to technical challenges including automated docking of spacecraft, dynamic balancing of the space station centrifuge, online reconfiguration of damaged aircraft, and reducing cost of new air and spacecraft designs. Our experiences have shown that neural network algorithms solved certain problems that conventional control methods have been unable to effectively address. These include damage mitigation in nonlinear reconfiguration flight control, early performance estimation of new aircraft designs, compensation for damaged planetary mission hardware by using redundant manipulator capability, and space sensor platform stabilization. This presentation explored these developments in the context of neural network control theory. The discussion began with an overview of why neural control has proven attractive for NASA application domains. The more important issues in control system development were then discussed with references to significant technical advances in the literature. Examples of how these methods have been applied were given, followed by projections of emerging application needs and directions.

  3. Neural networks and applications tutorial

    NASA Astrophysics Data System (ADS)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances of computers and the availability of custom integrated circuits, permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention in artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures, that where presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.
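
    The adaptive artificial neurons of lecture 2 start from the classical perceptron learning rule, w <- w + eta (t - y) x; a minimal sketch on synthetic separable data with an arbitrary learning rate follows.

      # Minimal perceptron learning rule on synthetic, linearly separable data.
      # Labels and learning rate are hypothetical; illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 2))
      t = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

      w = np.zeros(2); b = 0.0; eta = 0.1
      for epoch in range(20):
          for x_i, t_i in zip(X, t):
              pred = int(x_i @ w + b > 0)
              w += eta * (t_i - pred) * x_i        # perceptron update
              b += eta * (t_i - pred)

      acc = ((X @ w + b > 0).astype(int) == t).mean()
      print("training accuracy:", acc)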

  4. Fault Tolerance of Neural Networks

    DTIC Science & Technology

    1994-07-01

    ...an attempt to develop fault tolerant neural networks. Given a well-trained network, we first eliminate ... both approaches, and this resulted in very slight improvements over the addition/deletion procedure. (Fisher's Iris data, average case.)

  5. Analysis and Design of Neural Networks

    DTIC Science & Technology

    1992-01-01

    The training problem for feedforward neural networks is nonlinear parameter estimation that can be solved by a variety of optimization techniques...Much of the literature of neural networks has focused on variants of gradient descent. The training of neural networks using such techniques is known to...be a slow process with more sophisticated techniques not always performing significantly better. It is shown that feedforward neural networks can

  6. Radar System Classification Using Neural Networks

    DTIC Science & Technology

    1991-12-01

    This study investigated methods of improving the accuracy of neural networks in the classification of large numbers of classes. A literature search...revealed that neural networks have been successful in the radar classification problem, and that many complex problems have been solved using systems...of multiple neural networks . The experiments conducted were based on 32 classes of radar system data. The neural networks were modelled using a program

  7. Artificial neural networks in medicine

    SciTech Connect

    Keller, P.E.

    1994-07-01

    This Technology Brief provides an overview of artificial neural networks (ANN). A definition and explanation of an ANN is given and situations in which an ANN is used are described. ANN applications to medicine specifically are then explored and the areas in which it is currently being used are discussed. Included are medical diagnostic aides, biochemical analysis, medical image analysis and drug development.

  8. How Neural Networks Learn from Experience.

    ERIC Educational Resources Information Center

    Hinton, Geoffrey E.

    1992-01-01

    Discusses computational studies of learning in artificial neural networks and findings that may provide insights into the learning abilities of the human brain. Describes efforts to test theories about brain information processing, using artificial neural networks. Vignettes include information concerning how a neural network represents…

  10. Model Of Neural Network With Creative Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Barhen, Jacob

    1993-01-01

    Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitable designed neural network; believed to be related to spontaneity and creativity of biological neural networks.

  11. Semantic Interpretation of An Artificial Neural Network

    DTIC Science & Technology

    1995-12-01

    success for stock market analysis/prediction is artificial neural networks. However, knowledge embedded in the neural network is not easily translated...interpret neural network knowledge. The first, called Knowledge Math, extends the use of connection weights, generating rules for general (i.e. non-binary

  12. Model Of Neural Network With Creative Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Barhen, Jacob

    1993-01-01

    Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitable designed neural network; believed to be related to spontaneity and creativity of biological neural networks.

  13. Are artificial neural networks white boxes?

    PubMed

    Kolman, Eyal; Margaliot, Michael

    2005-07-01

    In this paper, we introduce a novel Mamdani-type fuzzy model, referred to as the all-permutations fuzzy rule base (APFRB), and show that it is mathematically equivalent to a standard feedforward neural network. We describe several applications of this equivalence between a neural network and our fuzzy rule base (FRB), including knowledge extraction from and knowledge insertion into neural networks.

  14. Neural networks for atmospheric retrievals

    NASA Technical Reports Server (NTRS)

    Motteler, Howard E.; Gualtieri, J. A.; Strow, L. Larrabee; Mcmillin, Larry

    1993-01-01

    We use neural networks to perform retrievals of temperature and water fractions from simulated clear air radiances for the Atmospheric Infrared Sounder (AIRS). Neural networks allow us to make effective use of the large AIRS channel set, and give good performance with noisy input. We retrieve surface temperature, air temperature at 64 distinct pressure levels, and water fractions at 50 distinct pressure levels. Using 728 temperature and surface sensitive channels, the RMS error for temperature retrievals with 0.2K input noise is 1.2K. Using 586 water and temperature sensitive channels, the mean error with 0.2K input noise is 16 percent. Our implementation of backpropagation training for neural networks on the 16,000-processor MasPar MP-1 runs at a rate of 90 million weight updates per second, and allows us to train large networks in a reasonable amount of time. Once trained, the network can be used to perform retrievals quickly on a workstation of moderate power.

  15. Neural network explanation using inversion.

    PubMed

    Saad, Emad W; Wunsch, Donald C

    2007-01-01

    An important drawback of many artificial neural networks (ANN) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1996). A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms which attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm which relies on network inversion; i.e., calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm that extracts rules in the form of hyperplanes. It is able to generate rules with arbitrarily desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method that extracts hyperplane rules from continuous or binary attribute neural networks. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented. An information theoretic treatment of rule extraction is presented. HYPINV is applied to example synthetic problems, to a real aerospace problem, and compared with similar algorithms using benchmark problems.
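
    As a companion to the abstract above, the sketch below illustrates the generic inversion step that HYPINV builds on: holding a trained network's weights fixed and adjusting the input by gradient descent until the output reaches a target value. The two-layer network, its weights, and the target are illustrative placeholders, not the networks or rule-extraction machinery of the paper.

```python
# Minimal sketch of network inversion by gradient descent on the input,
# in the spirit of the inversion step used by algorithms such as HYPINV.
# The two-layer network below is a stand-in for a trained model; its
# weights (W1, b1, W2, b2) are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)   # hidden layer
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)   # output layer

def forward(x):
    h = np.tanh(W1 @ x + b1)
    y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))            # sigmoid output
    return y, h

def invert(y_target, steps=1000, lr=0.5):
    """Find an input whose network output is close to y_target."""
    x = rng.normal(size=2)                               # random starting input
    for _ in range(steps):
        y, h = forward(x)
        dy = (y - y_target) * y * (1.0 - y)              # dE/d(pre-sigmoid output)
        dh = (W2.T @ dy) * (1.0 - h ** 2)                # back through tanh layer
        dx = W1.T @ dh                                   # gradient w.r.t. the input
        x -= lr * dx
    return x

x_star = invert(np.array([0.9]))
print("recovered input:", x_star, "network output:", forward(x_star)[0])
```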

  16. Discontinuities in recurrent neural networks.

    PubMed

    Gavaldá, R; Siegelmann, H T

    1999-04-01

    This article studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of finite number of neurons; each neuron computes a polynomial net function and a sigmoid-like continuous activation function. We introduce arithmetic networks as ARNN augmented with a few simple discontinuous (e.g., threshold or zero test) neurons. We argue that even with weights restricted to polynomial time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. We identify many types of neural networks that are at least as powerful as arithmetic nets, some of which are not in fact discontinuous, but they boost other arithmetic operations in the net function (e.g., neurons that can use divisions and polynomial net functions inside sigmoid-like continuous activation functions). These arithmetic networks are equivalent to the Blum-Shub-Smale model, when the latter is restricted to a bounded number of registers. With respect to implementation on digital computers, we show that arithmetic networks with rational weights can be simulated with exponential precision, but even with polynomial-time computable real weights, arithmetic networks are not subject to any fixed precision bounds. This is in contrast with the ARNN that are known to demand precision that is linear in the computation time. When nontrivial periodic functions (e.g., fractional part, sine, tangent) are added to arithmetic networks, the resulting networks are computationally equivalent to a massively parallel machine. Thus, these highly discontinuous networks can solve the presumably intractable class of PSPACE-complete problems in polynomial time.

  17. Training Neural Networks with Weight Constraints

    DTIC Science & Technology

    1993-03-01

    Hardware implementation of artificial neural networks imposes a variety of constraints. Finite weight magnitudes exist in both digital and analog...optimizing a network with weight constraints. Comparisons are made to the backpropagation training algorithm for networks with both unconstrained and hard-limited weight magnitudes. Keywords: neural networks, analog, digital, stochastic.

  18. Effects of Nerve Injury and Segmental Regeneration on the Cellular Correlates of Neural Morphallaxis

    PubMed Central

    Martinez, Veronica G.; Manson, Josiah M.B.; Zoran, Mark J.

    2009-01-01

    Functional recovery of neural networks after injury requires a series of signaling events similar to the embryonic processes that governed initial network construction. Neural morphallaxis, a form of nervous system regeneration, involves reorganization of adult neural connectivity patterns. Neural morphallaxis in the worm, Lumbriculus variegatus, occurs during asexual reproduction and segmental regeneration, as body fragments acquire new positional identities along the anterior–posterior axis. Ectopic head (EH) formation, induced by ventral nerve cord lesion, generated morphallactic plasticity including the reorganization of interneuronal sensory fields and the induction of a molecular marker of neural morphallaxis. Morphallactic changes occurred only in segments posterior to an EH. Neither EH formation, nor neural morphallaxis was observed after dorsal body lesions, indicating a role for nerve cord injury in morphallaxis induction. Furthermore, a hierarchical system of neurobehavioral control was observed, where anterior heads were dominant and an EH controlled body movements only in the absence of the anterior head. Both suppression of segmental regeneration and blockade of asexual fission, after treatment with boric acid, disrupted the maintenance of neural morphallaxis, but did not block its induction. Therefore, segmental regeneration (i.e., epimorphosis) may not be required for the induction of morphallactic remodeling of neural networks. However, on-going epimorphosis appears necessary for the long-term consolidation of cellular and molecular mechanisms underlying the morphallaxis of neural circuitry. PMID:18561185

  19. Neural tube closure: cellular, molecular and biomechanical mechanisms.

    PubMed

    Nikolopoulou, Evanthia; Galea, Gabriel L; Rolo, Ana; Greene, Nicholas D E; Copp, Andrew J

    2017-02-15

    Neural tube closure has been studied for many decades, across a range of vertebrates, as a paradigm of embryonic morphogenesis. Neurulation is of particular interest in view of the severe congenital malformations - 'neural tube defects' - that result when closure fails. The process of neural tube closure is complex and involves cellular events such as convergent extension, apical constriction and interkinetic nuclear migration, as well as precise molecular control via the non-canonical Wnt/planar cell polarity pathway, Shh/BMP signalling, and the transcription factors Grhl2/3, Pax3, Cdx2 and Zic2. More recently, biomechanical inputs into neural tube morphogenesis have also been identified. Here, we review these cellular, molecular and biomechanical mechanisms involved in neural tube closure, based on studies of various vertebrate species, focusing on the most recent advances in the field.

  20. Neural tube closure: cellular, molecular and biomechanical mechanisms

    PubMed Central

    Nikolopoulou, Evanthia; Galea, Gabriel L.; Rolo, Ana; Greene, Nicholas D. E.; Copp, Andrew J.

    2017-01-01

    Neural tube closure has been studied for many decades, across a range of vertebrates, as a paradigm of embryonic morphogenesis. Neurulation is of particular interest in view of the severe congenital malformations – ‘neural tube defects’– that result when closure fails. The process of neural tube closure is complex and involves cellular events such as convergent extension, apical constriction and interkinetic nuclear migration, as well as precise molecular control via the non-canonical Wnt/planar cell polarity pathway, Shh/BMP signalling, and the transcription factors Grhl2/3, Pax3, Cdx2 and Zic2. More recently, biomechanical inputs into neural tube morphogenesis have also been identified. Here, we review these cellular, molecular and biomechanical mechanisms involved in neural tube closure, based on studies of various vertebrate species, focusing on the most recent advances in the field. PMID:28196803

  1. Terminal attractors in neural networks

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.
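
    The finite-time convergence described above can be checked numerically. The sketch below integrates a standard illustrative terminal attractor, dx/dt = -x^(1/3), whose right-hand side violates the Lipschitz condition at x = 0, and compares it with an ordinary linear attractor; the cube-root form is a common textbook choice assumed here, not taken from the paper.

```python
# Terminal attractor vs. ordinary attractor: the cube-root system reaches the
# fixed point x = 0 in finite time (exactly t = 1.5 for x0 = 1), while the
# linear system only approaches it asymptotically. The cube-root exponent is
# an illustrative choice, not necessarily the exact form used in the paper.
import numpy as np

def time_to_reach(f, tol=1e-4, x0=1.0, dt=1e-3, t_max=20.0):
    """Euler-integrate dx/dt = f(x) and report when |x| first drops below tol."""
    x, t = x0, 0.0
    while t < t_max and abs(x) > tol:
        x += dt * f(x)
        t += dt
    return t

# Terminal attractor: Lipschitz condition violated at x = 0, finite-time convergence.
t_terminal = time_to_reach(lambda x: -np.sign(x) * abs(x) ** (1.0 / 3.0))
# Ordinary (exponential) attractor: approaches 0 only asymptotically.
t_linear = time_to_reach(lambda x: -x)

print(f"terminal attractor reaches |x| < 1e-4 at t ~ {t_terminal:.2f} (exact: t = 1.5)")
print(f"linear attractor reaches |x| < 1e-4 at t ~ {t_linear:.2f} (exact: t = ln(1e4) ~ 9.2)")
```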

  2. Terminal attractors in neural networks

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.

  3. Fiber optic Adaline neural networks

    NASA Astrophysics Data System (ADS)

    Ghosh, Anjan K.; Trepka, Jim; Paparao, Palacharla

    1993-02-01

    Optoelectronic realization of adaptive filters and equalizers using fiber optic tapped delay lines and spatial light modulators has been discussed recently. We describe the design of a single layer fiber optic Adaline neural network which can be used as a bit pattern classifier. In our realization we employ as few electronic devices as possible and use optical computation to utilize the advantages of optics in processing speed, parallelism, and interconnection. The new optical neural network described in this paper is designed for optical processing of guided lightwave signals, not electronic signals. We analyzed the convergence or learning characteristics of the optically implemented Adaline in the presence of errors in the hardware, and we studied methods for improving the convergence rate of the Adaline.
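
    For reference, the learning rule that an Adaline realizes, whether in fiber optics or in software, is the Widrow-Hoff LMS update. The sketch below trains a software Adaline as a bit-pattern classifier; the 4-bit task, learning rate, and data are illustrative assumptions, and nothing here models the optical tapped-delay-line hardware of the paper.

```python
# Minimal Adaline (Widrow-Hoff LMS) bit-pattern classifier, as a software
# reference for the learning rule the optical implementation realizes.
# The 4-bit task ("two or more bits set") and learning rate are illustrative.
import numpy as np

rng = np.random.default_rng(1)
patterns = np.array([[int(b) for b in f"{i:04b}"] for i in range(16)], dtype=float)
targets = np.where(patterns.sum(axis=1) >= 2, 1.0, -1.0)   # +1 if two or more bits set

w = rng.normal(scale=0.1, size=4)
b = 0.0
lr = 0.05

for epoch in range(200):
    for x, t in zip(patterns, targets):
        y = w @ x + b                 # linear output before thresholding
        err = t - y                   # LMS error
        w += lr * err * x             # Widrow-Hoff weight update
        b += lr * err

pred = np.sign(patterns @ w + b)
print("training accuracy:", np.mean(pred == targets))
```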

  4. Prototype neural network pattern recognition testbed

    NASA Astrophysics Data System (ADS)

    Worrell, Steven W.; Robertson, James A.; Varner, Thomas L.; Garvin, Charles G.

    1991-02-01

    Recent successes of neural networks have led to an optimistic outlook for neural network applications to image processing (IP). This paper presents a general architecture for performing comparative studies of neural processing and more conventional IP techniques, as well as hybrid pattern recognition (PR) systems. Two hybrid PR systems have been simulated, each of which incorporates both conventional and neural processing techniques.

  5. The LILARTI neural network system

    SciTech Connect

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  6. Neural Network for Visual Search Classification

    DTIC Science & Technology

    2007-11-02

    neural network used to perform visual search classification. The neural network consists of a Learning vector quantization network (LVQ) and a single layer perceptron. The objective of this neural network is to classify the various human visual search patterns into predetermined classes. The classes signify the different search strategies used by individuals to scan the same target pattern. The input search patterns are quantified with respect to an ideal search pattern, determined by the user. A supervised learning rule,
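
    A minimal software sketch of the LVQ stage described above follows: classification by nearest prototype, with the winning prototype nudged toward correctly labeled samples and away from mislabeled ones (the LVQ1 rule). The two-dimensional synthetic clusters stand in for the quantified search-pattern features of the report, which are not reproduced here.

```python
# Minimal LVQ1 sketch: classify a feature vector by its nearest prototype and
# nudge that prototype toward (correct) or away from (incorrect) the sample.
# The 2-D synthetic data stands in for the report's search-pattern features.
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal([0, 0], 0.5, (50, 2)), rng.normal([3, 3], 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

protos = np.array([[0.5, 0.5], [2.5, 2.5]], dtype=float)   # one prototype per class
proto_labels = np.array([0, 1])
lr = 0.05

for epoch in range(20):
    for i in rng.permutation(len(X)):
        j = np.argmin(np.linalg.norm(protos - X[i], axis=1))   # winning prototype
        sign = 1.0 if proto_labels[j] == y[i] else -1.0
        protos[j] += sign * lr * (X[i] - protos[j])             # LVQ1 update

pred = proto_labels[np.argmin(np.linalg.norm(X[:, None] - protos[None], axis=2), axis=1)]
print("training accuracy:", np.mean(pred == y))
```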

  7. Neural Network-Based Hyperspectral Algorithms

    DTIC Science & Technology

    2016-06-07

    Neural Network-Based Hyperspectral Algorithms. Walter F. Smith, Jr. and Juanita Sandidge, Naval Research Laboratory, Code 7340, Bldg 1105, Stennis Space...combination of in-situ and model data of water column variables (IOPs, depth, bottom type, upwelling radiance, etc.) a neural network non-linear... network (Lippman, 1987). Neural network-based algorithms have been demonstrated by the investigators for retrieval of water depth from Airborne Visible

  8. Neural network subtyping of depression.

    PubMed

    Florio, T M; Parker, G; Austin, M P; Hickie, I; Mitchell, P; Wilhelm, K

    1998-10-01

    To examine the applicability of a neural network classification strategy to examine the independent contribution of psychomotor disturbance (PMD) and endogeneity symptoms to the DSM-III-R definition of melancholia. We studied 407 depressed patients with the clinical dataset comprising 17 endogeneity symptoms and the 18-item CORE measure of behaviourally rated PMD. A multilayer perception neural network was used to fit non-linear models of varying complexity. A linear discriminant function analysis was also used to generate a model for comparison with the non-linear models. Models (linear and non-linear) using PMD items only and endogeneity symptoms only had similar rates of successful classification, while non-linear models combining both PMD and symptoms scores achieved the best classifications. Our current non-linear model was superior to a linear analysis, a finding which may have wider application to psychiatric classification. Our non-linear analysis of depressive subtypes supports the binary view that melancholic and non-melancholic depression are separate clinical disorders rather than different forms of the same entity. This study illustrates how non-linear modelling with neural networks is a potentially fruitful approach to the study of the diagnostic taxonomy of psychiatric disorders and to clinical decision-making.

  9. Dynamic Neural Networks Supporting Memory Retrieval

    PubMed Central

    St. Jacques, Peggy L.; Kragel, Philip A.; Rubin, David C.

    2011-01-01

    How do separate neural networks interact to support complex cognitive processes such as remembrance of the personal past? Autobiographical memory (AM) retrieval recruits a consistent pattern of activation that potentially comprises multiple neural networks. However, it is unclear how such large-scale neural networks interact and are modulated by properties of the memory retrieval process. In the present functional MRI (fMRI) study, we combined independent component analysis (ICA) and dynamic causal modeling (DCM) to understand the neural networks supporting AM retrieval. ICA revealed four task-related components consistent with the previous literature: 1) Medial Prefrontal Cortex (PFC) Network, associated with self-referential processes, 2) Medial Temporal Lobe (MTL) Network, associated with memory, 3) Frontoparietal Network, associated with strategic search, and 4) Cingulooperculum Network, associated with goal maintenance. DCM analysis revealed that the medial PFC network drove activation within the system, consistent with the importance of this network to AM retrieval. Additionally, memory accessibility and recollection uniquely altered connectivity between these neural networks. Recollection modulated the influence of the medial PFC on the MTL network during elaboration, suggesting that greater connectivity among subsystems of the default network supports greater re-experience. In contrast, memory accessibility modulated the influence of frontoparietal and MTL networks on the medial PFC network, suggesting that ease of retrieval involves greater fluency among the multiple networks contributing to AM. These results show the integration between neural networks supporting AM retrieval and the modulation of network connectivity by behavior. PMID:21550407

  10. Neural network modeling of emotion

    NASA Astrophysics Data System (ADS)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (not and not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  11. Constructive neural network learning algorithms

    SciTech Connect

    Parekh, R.; Yang, Jihoon; Honavar, V.

    1996-12-31

    Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad-hoc, a-priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., pocket algorithm, thermal perceptron, and barycentric correction procedure). Several constructive algorithms including tower, pyramid, tiling, upstart, and perceptron cascade have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
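
    The pocket algorithm named above can be summarized in a few lines: run ordinary perceptron corrections, but keep ("pocket") the best weight vector seen so far, which gives stable behavior even on non-separable data. The sketch below uses a best-training-accuracy ratchet and a toy AND problem; both are illustrative choices, not the exact variant or data of the paper.

```python
# Minimal pocket-algorithm sketch (a stabilized perceptron): plain perceptron
# updates run as usual, but the weights with the best training accuracy seen
# so far are kept "in the pocket" and returned. The AND data and the
# best-accuracy ratchet are illustrative choices for this sketch.
import numpy as np

def pocket(X, y, epochs=100, rng=np.random.default_rng(3)):
    """X: (n, d) inputs, y: (n,) labels in {-1, +1}. Returns pocketed weights."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # absorb bias into the weights
    w = np.zeros(Xb.shape[1])
    best_w, best_acc = w.copy(), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            if np.sign(Xb[i] @ w) != y[i]:
                w = w + y[i] * Xb[i]               # ordinary perceptron correction
                acc = np.mean(np.sign(Xb @ w) == y)
                if acc > best_acc:                  # ratchet: keep the best weights
                    best_acc, best_w = acc, w.copy()
    return best_w, best_acc

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])                       # AND function
w, acc = pocket(X, y)
print("pocketed weights:", w, "training accuracy:", acc)
```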

  12. Neural-Network Computer Transforms Coordinates

    NASA Technical Reports Server (NTRS)

    Josin, Gary M.

    1990-01-01

    Numerical simulation demonstrated ability of conceptual neural-network computer to generalize what it has "learned" from few examples. Ability to generalize achieved with even simple neural network (relatively few neurons) and after exposure of network to only few "training" examples. Ability to obtain fairly accurate mappings after only few training examples used to provide solutions to otherwise intractable mapping problems.

  13. Neural-Network Computer Transforms Coordinates

    NASA Technical Reports Server (NTRS)

    Josin, Gary M.

    1990-01-01

    Numerical simulation demonstrated ability of conceptual neural-network computer to generalize what it has "learned" from few examples. Ability to generalize achieved with even simple neural network (relatively few neurons) and after exposure of network to only few "training" examples. Ability to obtain fairly accurate mappings after only few training examples used to provide solutions to otherwise intractable mapping problems.

  14. Feature Extraction Using an Unsupervised Neural Network

    DTIC Science & Technology

    1991-05-03

    A novel unsupervised neural network for dimensionality reduction which seeks directions emphasizing distinguishing features in the data is presented. A statistical framework for the parameter estimation problem associated with this neural network is given and its connection to exploratory projection pursuit methods is established. The network is shown to minimize a loss function (projection index) over a

  15. Neural Networks in Nonlinear Aircraft Control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.

    1990-01-01

    Recent research indicates that artificial neural networks offer interesting learning or adaptive capabilities. The current research focuses on the potential for application of neural networks in a nonlinear aircraft control law. The current work has been to determine which networks are suitable for such an application and how they will fit into a nonlinear control law.

  16. Neural networks and MIMD-multiprocessors

    NASA Technical Reports Server (NTRS)

    Vanhala, Jukka; Kaski, Kimmo

    1990-01-01

    Two artificial neural network models are compared. They are the Hopfield Neural Network Model and the Sparse Distributed Memory model. Distributed algorithms for both of them are designed and implemented. The run time characteristics of the algorithms are analyzed theoretically and tested in practice. The storage capacities of the networks are compared. Implementations are done using a distributed multiprocessor system.

  17. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  18. Adaptive optimization and control using neural networks

    SciTech Connect

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  19. Neural Network Retinal Model Real Time Implementation

    DTIC Science & Technology

    1992-09-02

    addresses the specific needs of vision processing. The goal of this SBIR Phase I project has been to take a significant neural network vision...application and to map it onto dedicated hardware for real time implementation. The neural network was already demonstrated using software simulation on a...general purpose computer. During Phase 1, HNC took a neural network model of the retina and, using HNC’s Vision Processor (ViP) prototype hardware

  20. Neural Network False Alarm Filter. Volume 1.

    DTIC Science & Technology

    1994-12-01

    This effort identified, developed and demonstrated a set of approaches for applying neural network learning techniques to the development of a real... neural network models, 9 fault report causes and 12 common groups of BIT techniques was identified. From this space, 4 unique, high-potential...of their strengths and weaknesses were performed along with cost/benefit analyses. This study concluded that the best candidates for neural network insert

  1. A Neural Network Object Recognition System

    DTIC Science & Technology

    1990-07-01

    useful for exploring different neural network configurations. There are three main computation phases of a model based object recognition system...segmentation, feature extraction, and object classification. This report focuses on the object classification stage. For segmentation, a neural network based...are available with the current system. Neural network based feature extraction may be added at a later date. The classification stage consists of a

  2. A Complexity Theory of Neural Networks

    DTIC Science & Technology

    1991-08-09

    Significant progress has been made in laying the foundations of a complexity theory of neural networks. The fundamental complexity classes have been...identified and studied. The class of problems solvable by small, shallow neural networks has been found to be the same class even if (1) probabilistic...behaviour, (2) multi-valued logic, and (3) analog behaviour are allowed (subject to certain reasonable technical assumptions). Neural networks can be

  3. Oil reservoir properties estimation using neural networks

    SciTech Connect

    Toomarian, N.B.; Barhen, J.; Glover, C.W.; Aminzadeh, F.

    1997-02-01

    This paper investigates the applicability as well as the accuracy of artificial neural networks for estimating specific parameters that describe reservoir properties based on seismic data. This approach relies on JPL's adjoint operators general purpose neural network code to determine the best suited architecture. The authors believe that results presented in this work demonstrate that artificial neural networks produce surprisingly accurate estimates of the reservoir parameters.

  4. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

    A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain the operating state. The method and system involves the steps of reading into a memory training data, determining neural network weighting values until achieving target outputs close to the neural network output. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs and then providing signals characteristic of an industrial process and comparing the neural network output to the industrial process signals to evaluate the operating state of the industrial process. 33 figs.

  5. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, Richard B.; Gross, Kenneth C.; Wegerich, Stephan W.

    1998-01-01

    A method and system for performing surveillance of transient signals of an industrial device to ascertain the operating state. The method and system involves the steps of reading into a memory training data, determining neural network weighting values until achieving target outputs close to the neural network output. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs and then providing signals characteristic of an industrial process and comparing the neural network output to the industrial process signals to evaluate the operating state of the industrial process.

  6. Electronic neural networks for global optimization

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Moopenn, A. W.; Eberhardt, S.

    1990-01-01

    An electronic neural network with feedback architecture, implemented in analog custom VLSI is described. Its application to problems of global optimization for dynamic assignment is discussed. The convergence properties of the neural network hardware are compared with computer simulation results. The neural network's ability to provide optimal or near optimal solutions within only a few neuron time constants, a speed enhancement of several orders of magnitude over conventional search methods, is demonstrated. The effect of noise on the circuit dynamics and the convergence behavior of the neural network hardware is also examined.

  7. Neural network architecture for crossbar switch control

    NASA Technical Reports Server (NTRS)

    Troudet, Terry P.; Walters, Stephen M.

    1991-01-01

    A Hopfield neural network architecture for the real-time control of a crossbar switch for switching packets at maximum throughput is proposed. The network performance and processing time are derived from a numerical simulation of the transitions of the neural network. A method is proposed to optimize electronic component parameters and synaptic connections, and it is fully illustrated by the computer simulation of a VLSI implementation of 4 x 4 neural net controller. The extension to larger size crossbars is demonstrated through the simulation of an 8 x 8 crossbar switch controller, where the performance of the neural computation is discussed in relation to electronic noise and inhomogeneities of network components.
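
    The control idea sketched in the abstract can be imitated with a discrete, asynchronous Hopfield-style network: one neuron per requested input-output pair, mutual inhibition along rows and columns, and a positive bias for serving a request. The sketch below settles to a conflict-free (maximal, though not necessarily maximum-throughput) schedule; the analog VLSI dynamics, parameter optimization, and noise analysis of the paper are not modeled, and the gain values are assumptions.

```python
# Discrete, asynchronous Hopfield-style sketch of crossbar scheduling: one
# neuron per requested (input, output) pair, mutual inhibition along each row
# and column, and a positive bias for serving a request. The network settles
# to a conflict-free set of connections (a maximal matching, not necessarily
# the maximum one); the analog VLSI dynamics of the paper are not modeled.
import numpy as np

rng = np.random.default_rng(4)
n = 4
requests = (rng.random((n, n)) < 0.6).astype(int)    # request matrix R

A, B = 2.0, 1.0                                      # inhibition > excitation
V = np.zeros((n, n), dtype=int)                      # neuron states (0/1)

for _ in range(50 * n * n):                          # asynchronous random updates
    i, j = rng.integers(n), rng.integers(n)
    if not requests[i, j]:
        continue
    conflict = V[i, :].sum() + V[:, j].sum() - 2 * V[i, j]   # other active cells in row/column
    u = B - A * conflict                                       # net input to neuron (i, j)
    V[i, j] = 1 if u > 0 else 0

print("requests:\n", requests)
print("schedule:\n", V)
assert np.all(V <= requests) and np.all(V.sum(0) <= 1) and np.all(V.sum(1) <= 1)
```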

  8. Advances in neural networks research: an introduction.

    PubMed

    Kozma, Robert; Bressler, Steven; Perlovsky, Leonid; Venayagamoorthy, Ganesh Kumar

    2009-01-01

    The present Special Issue "Advances in Neural Networks Research: IJCNN2009" provides a state-of-art overview of the field of neural networks. It includes 39 papers from selected areas of the 2009 International Joint Conference on Neural Networks (IJCNN2009). IJCNN2009 took place on June 14-19, 2009 in Atlanta, Georgia, USA, and it represents an exemplary collaboration between the International Neural Networks Society and the IEEE Computational Intelligence Society. Topics in this issue include neuroscience and cognitive science, computational intelligence and machine learning, hybrid techniques, nonlinear dynamics and chaos, various soft computing technologies, intelligent signal processing and pattern recognition, bioinformatics and biomedicine, and engineering applications.

  9. Neural network architecture for crossbar switch control

    NASA Technical Reports Server (NTRS)

    Troudet, Terry P.; Walters, Stephen M.

    1991-01-01

    A Hopfield neural network architecture for the real-time control of a crossbar switch for switching packets at maximum throughput is proposed. The network performance and processing time are derived from a numerical simulation of the transitions of the neural network. A method is proposed to optimize electronic component parameters and synaptic connections, and it is fully illustrated by the computer simulation of a VLSI implementation of 4 x 4 neural net controller. The extension to larger size crossbars is demonstrated through the simulation of an 8 x 8 crossbar switch controller, where the performance of the neural computation is discussed in relation to electronic noise and inhomogeneities of network components.

  10. Aerodynamic Design Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.

    2003-01-01

    The design of aerodynamic components of aircraft, such as wings or engines, involves a process of obtaining the most optimal component shape that can deliver the desired level of component performance, subject to various constraints, e.g., total weight or cost, that the component must satisfy. Aerodynamic design can thus be formulated as an optimization problem that involves the minimization of an objective function subject to constraints. A new aerodynamic design optimization procedure based on neural networks and response surface methodology (RSM) incorporates the advantages of both traditional RSM and neural networks. The procedure uses a strategy, denoted parameter-based partitioning of the design space, to construct a sequence of response surfaces based on both neural networks and polynomial fits to traverse the design space in search of the optimal solution. Some desirable characteristics of the new design optimization procedure include the ability to handle a variety of design objectives, easily impose constraints, and incorporate design guidelines and rules of thumb. It provides an infrastructure for variable fidelity analysis and reduces the cost of computation by using less-expensive, lower fidelity simulations in the early stages of the design evolution. The initial or starting design can be far from optimal. The procedure is easy and economical to use in large-dimensional design space and can be used to perform design tradeoff studies rapidly. Designs involving multiple disciplines can also be optimized. Some practical applications of the design procedure that have demonstrated some of its capabilities include the inverse design of an optimal turbine airfoil starting from a generic shape and the redesign of transonic turbines to improve their unsteady aerodynamic characteristics.

  11. Neural networks for nuclear spectroscopy

    SciTech Connect

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T.

    1995-12-31

    In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real-time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.
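
    The whole-spectrum identification idea is easy to state in linear-algebra terms: if the unknown spectrum is a linear superposition of known reference spectra, an OLAM-style memory matrix is the pseudoinverse of the reference set, and applying it recovers the mixing coefficients. The sketch below uses synthetic Gaussian peaks as stand-ins for measured isotope spectra; it is not the trained system of the paper.

```python
# Whole-spectrum unmixing in the spirit of an optimal linear associative
# memory (OLAM): the memory matrix is the pseudoinverse of the reference
# spectra, and applying it to an unknown spectrum recovers the mixing
# coefficients. The Gaussian "photopeaks" below are synthetic stand-ins.
import numpy as np

channels = np.arange(256)
def peak(center, width=6.0):
    return np.exp(-0.5 * ((channels - center) / width) ** 2)

references = np.stack([peak(60), peak(120), peak(200)])      # 3 known isotopes
M = np.linalg.pinv(references.T)                             # OLAM-style memory matrix

true_mix = np.array([0.7, 0.0, 0.3])
unknown = true_mix @ references + 0.01 * np.random.default_rng(5).normal(size=256)

estimated = M @ unknown                                       # recovered composition
print("true composition:     ", true_mix)
print("estimated composition:", np.round(estimated, 3))
```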

  12. Neural Networks for Rapid Design and Analysis

    NASA Technical Reports Server (NTRS)

    Sparks, Dean W., Jr.; Maghami, Peiman G.

    1998-01-01

    Artificial neural networks have been employed for rapid and efficient dynamics and control analysis of flexible systems. Specifically, feedforward neural networks are designed to approximate nonlinear dynamic components over prescribed input ranges, and are used in simulations as a means to speed up the overall time response analysis process. To capture the recursive nature of dynamic components with artificial neural networks, recurrent networks, which use state feedback with the appropriate number of time delays, as inputs to the networks, are employed. Once properly trained, neural networks can give very good approximations to nonlinear dynamic components, and by their judicious use in simulations, allow the analyst the potential to speed up the analysis process considerably. To illustrate this potential speed up, an existing simulation model of a spacecraft reaction wheel system is executed, first conventionally, and then with an artificial neural network in place.
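
    The tapped-delay idea described above, feeding appropriately delayed inputs and outputs back into a static approximator, can be shown with a toy first-order component. In the sketch below a linear least-squares map stands in for the trained feedforward network, and the "true" dynamics are an assumed example, not a spacecraft reaction wheel model.

```python
# Sketch of the tapped-delay ("recurrent via state feedback") idea: a dynamic
# component y(k) = g(y(k-1), u(k), u(k-1)) is approximated by a static model
# fed with time-delayed inputs and outputs. A linear least-squares map stands
# in here for the trained feedforward network; the dynamics are illustrative.
import numpy as np

rng = np.random.default_rng(6)
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(1, 500):                              # "true" first-order component
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k] + 0.1 * u[k - 1]

# Build delayed-input feature vectors [y(k-1), u(k), u(k-1)].
features = np.column_stack([y[:-1], u[1:], u[:-1]])
targets = y[1:]

weights, *_ = np.linalg.lstsq(features, targets, rcond=None)
print("recovered dynamics coefficients:", np.round(weights, 3))   # ~ [0.8, 0.5, 0.1]
```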

  13. Neural Network Classifies Teleoperation Data

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Giancaspro, Antonio; Losito, Sergio; Pasquariello, Guido

    1994-01-01

    Prototype artificial neural network, implemented in software, identifies phases of telemanipulator tasks in real time by analyzing feedback signals from force sensors on manipulator hand. Prototype is early, subsystem-level product of continuing effort to develop automated system that assists in training and supervising human control operator: provides symbolic feedback (e.g., warnings of impending collisions or evaluations of performance) to operator in real time during successive executions of same task. Also simplifies transition between teleoperation and autonomous modes of telerobotic system.

  14. Flow Control Using Neural Networks

    DTIC Science & Technology

    2007-11-02

    Report documentation fragment: Flow Control Using Neural Networks, grant F49620-93-1-0135, Feb 93 - 31 Dec 96, author Thorwald..., Air Force Office of Scientific Research (AFOSR), Bolling AFB DC. ...signals. Figure 5 shows a time series for an actuator that performs a ramp motion in the streamwise direction over about 1% of the TS period and remains

  15. Neural Network Classifies Teleoperation Data

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Giancaspro, Antonio; Losito, Sergio; Pasquariello, Guido

    1994-01-01

    Prototype artificial neural network, implemented in software, identifies phases of telemanipulator tasks in real time by analyzing feedback signals from force sensors on manipulator hand. Prototype is early, subsystem-level product of continuing effort to develop automated system that assists in training and supervising human control operator: provides symbolic feedback (e.g., warnings of impending collisions or evaluations of performance) to operator in real time during successive executions of same task. Also simplifies transition between teleoperation and autonomous modes of telerobotic system.

  16. The Laplacian spectrum of neural networks.

    PubMed

    de Lange, Siemon C; de Reus, Marcel A; van den Heuvel, Martijn P

    2014-01-13

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these "conventional" graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks.
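
    The quantity compared across species in the study above, the eigenvalue spectrum of the normalized Laplacian, can be computed directly from an adjacency matrix. The sketch below does so for a toy ring-with-shortcuts graph; the connectome data of the paper are not used.

```python
# Computing the normalized-Laplacian eigenvalue spectrum of a small network,
# the quantity the study compares across the cat, macaque, and C. elegans
# connectomes. The ring-with-shortcuts graph below is only a toy example.
import numpy as np

rng = np.random.default_rng(7)
n = 20
A = np.zeros((n, n))
for i in range(n):                                  # ring lattice
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
for _ in range(10):                                 # a few random shortcuts
    i, j = rng.integers(n, size=2)
    if i != j:
        A[i, j] = A[j, i] = 1

deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L_norm = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt    # normalized Laplacian
spectrum = np.sort(np.linalg.eigvalsh(L_norm))      # eigenvalues lie in [0, 2]

print("smallest eigenvalues:", np.round(spectrum[:4], 3))   # first is ~0 (connected graph)
print("largest eigenvalue:  ", round(spectrum[-1], 3))
```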

  17. The Laplacian spectrum of neural networks

    PubMed Central

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286

  18. Neural Network Controlled Visual Saccades

    NASA Astrophysics Data System (ADS)

    Johnson, Jeffrey D.; Grogan, Timothy A.

    1989-03-01

    The paper to be presented will discuss research on a computer vision system controlled by a neural network capable of learning through classical (Pavlovian) conditioning. Through the use of unconditional stimuli (reward and punishment) the system will develop scan patterns of eye saccades necessary to differentiate and recognize members of an input set. By foveating only those portions of the input image that the system has found to be necessary for recognition the drawback of computational explosion as the size of the input image grows is avoided. The model incorporates many features found in animal vision systems, and is governed by understandable and modifiable behavior patterns similar to those reported by Pavlov in his classic study. These behavioral patterns are a result of a neuronal model, used in the network, explicitly designed to reproduce this behavior.

  19. Hand Gesture Recognition Using Neural Networks.

    DTIC Science & Technology

    1996-05-01

    inherent in the model. The high gesture recognition rates and quick network retraining times found in the present study suggest that a neural network approach to gesture recognition be further evaluated.

  20. Complexity, dynamic cellular network, and tumorigenesis.

    PubMed

    Waliszewski, P

    1997-01-01

    A holistic approach to tumorigenesis is proposed. The main element of the model is the existence of dynamic cellular network. This network comprises a molecular and an energetistic structure of a cell connected through the multidirectional flow of information. The interactions within dynamic cellular network are complex, stochastic, nonlinear, and also involve quantum effects. From this non-reductionist perspective, neither tumorigenesis can be limited to the genetic aspect, nor the initial event must be of molecular nature, nor mutations and epigenetic factors are mutually exclusive, nor a link between cause and effect can be established. Due to complexity, an unstable stationary state of dynamic cellular network rather than a group of unrelated genes determines the phenotype of normal and transformed cells. This implies relativity of tumor suppressor genes and oncogenes. A bifurcation point is defined as an unstable state of dynamic cellular network leading to the other phenotype-stationary state. In particular, the bifurcation point may be determined by a change of expression of a single gene. Then, the gene is called bifurcation point gene. The unstable stationary state facilitates the chaotic dynamics. This may result in a fractal dimension of both normal and tumor tissues. The co-existence of chaotic dynamics and complexity is the essence of cellular processes and shapes differentiation, morphogenesis, and tumorigenesis. In consequence, tumorigenesis is a complex, unpredictable process driven by the interplay between self-organisation and selection.

  1. A new formulation for feedforward neural networks.

    PubMed

    Razavi, Saman; Tolson, Bryan A

    2011-10-01

    Feedforward neural network is one of the most commonly used function approximation techniques and has been applied to a wide variety of problems arising from various disciplines. However, neural networks are black-box models having multiple challenges/difficulties associated with training and generalization. This paper initially looks into the internal behavior of neural networks and develops a detailed interpretation of the neural network functional geometry. Based on this geometrical interpretation, a new set of variables describing neural networks is proposed as a more effective and geometrically interpretable alternative to the traditional set of network weights and biases. Then, this paper develops a new formulation for neural networks with respect to the newly defined variables; this reformulated neural network (ReNN) is equivalent to the common feedforward neural network but has a less complex error response surface. To demonstrate the learning ability of ReNN, in this paper, two training methods involving a derivative-based (a variation of backpropagation) and a derivative-free optimization algorithms are employed. Moreover, a new measure of regularization on the basis of the developed geometrical interpretation is proposed to evaluate and improve the generalization ability of neural networks. The value of the proposed geometrical interpretation, the ReNN approach, and the new regularization measure are demonstrated across multiple test problems. Results show that ReNN can be trained more effectively and efficiently compared to the common neural networks and the proposed regularization measure is an effective indicator of how a network would perform in terms of generalization.

  2. Problem Specific applications for Neural Networks

    DTIC Science & Technology

    1988-12-01

    List of Figures fragment: Figure 1, Neural Network Models; Figure 2, A Single-Layer Perceptron. ...the network is in use. Three of the most well-known neural networks are the single-layer perceptron, the multi-layer perceptron, and the Kohonen self...three of these networks can accept discrete (binary) or continuous inputs (5:6). Single-Layer Perceptron: The single-layer perceptron (shown in Figure 2

  3. Drift chamber tracking with neural networks

    SciTech Connect

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.

  4. Extrapolation limitations of multilayer feedforward neural networks

    NASA Technical Reports Server (NTRS)

    Haley, Pamela J.; Soloway, Donald

    1992-01-01

    The limitations of backpropagation used as a function extrapolator were investigated. Four common functions were used to investigate the network's extrapolation capability. The purpose of the experiment was to determine whether neural networks are capable of extrapolation and, if so, to determine the range for which networks can extrapolate. The authors show that neural networks cannot extrapolate and offer an explanation to support this result.
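
    The paper's conclusion is easy to reproduce on a toy problem: a one-hidden-layer tanh network fitted to sin(x) on [-pi, pi] tracks the function inside that interval but not outside it, because saturated tanh units become affine far from the data. In the sketch below the hidden layer uses fixed random weights and the output layer is fitted by least squares as a stand-in for backpropagation; the architecture and test function are assumptions, not the authors' exact setup.

```python
# A one-hidden-layer tanh network fitted to sin(x) on [-pi, pi] matches it
# closely inside that range but fails to extrapolate, since saturated tanh
# units become affine far from the data. For simplicity the hidden layer uses
# fixed random weights and the output layer is fitted by least squares
# (a stand-in for backpropagation); all settings here are illustrative.
import numpy as np

rng = np.random.default_rng(8)
H = 50
W1 = rng.normal(scale=2.0, size=(1, H))          # fixed random hidden weights
b1 = rng.uniform(-3.0, 3.0, size=H)

def hidden(x):
    return np.tanh(x[:, None] @ W1 + b1)

x_train = np.linspace(-np.pi, np.pi, 400)
w_out, *_ = np.linalg.lstsq(hidden(x_train), np.sin(x_train), rcond=None)

for x in [0.5 * np.pi, 2.0 * np.pi, 4.0 * np.pi]:      # inside vs. outside [-pi, pi]
    pred = hidden(np.array([x])) @ w_out
    print(f"x = {x:5.2f}   sin(x) = {np.sin(x):6.3f}   network = {pred[0]:6.3f}")
```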

  5. Extrapolation limitations of multilayer feedforward neural networks

    NASA Technical Reports Server (NTRS)

    Haley, Pamela J.; Soloway, Donald

    1992-01-01

    The limitations of backpropagation used as a function extrapolator were investigated. Four common functions were used to investigate the network's extrapolation capability. The purpose of the experiment was to determine whether neural networks are capable of extrapolation and, if so, to determine the range for which networks can extrapolate. The authors show that neural networks cannot extrapolate and offer an explanation to support this result.

  6. Coherence resonance in bursting neural networks.

    PubMed

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal, a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  7. Cellular computational networks--a scalable architecture for learning the dynamics of large networked systems.

    PubMed

    Luitel, Bipul; Venayagamoorthy, Ganesh Kumar

    2014-02-01

    Neural networks for implementing large networked systems such as smart electric power grids consist of multiple inputs and outputs. Many outputs lead to a greater number of parameters to be adapted. Each additional variable increases the dimensionality of the problem and hence learning becomes a challenge. Cellular computational networks (CCNs) are a class of sparsely connected dynamic recurrent networks (DRNs). By proper selection of a set of input elements for each output variable in a given application, a DRN can be modified into a CCN which significantly reduces the complexity of the neural network and allows use of simple training methods for independent learning in each cell thus making it scalable. This article demonstrates this concept of developing a CCN using dimensionality reduction in a DRN for scalability and better performance. The concept has been analytically explained and empirically verified through application. Copyright © 2013 Elsevier Ltd. All rights reserved.
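
    The decomposition described above can be illustrated with a toy system: instead of one monolithic multi-output network, each output variable gets its own small cell trained only on a selected subset of inputs, so the cells learn independently. In the sketch below, linear least-squares cells and a hand-picked neighborhood map stand in for the recurrent cells and input-selection procedure of the paper.

```python
# Sketch of the cellular-computational-network idea: each output variable
# gets its own small "cell" that sees only a hand-picked subset of inputs
# (its neighborhood), so the cells can be trained independently and the
# scheme scales. Linear least-squares cells stand in for the recurrent cells.
import numpy as np

rng = np.random.default_rng(9)
n_samples, n_inputs = 300, 6
X = rng.normal(size=(n_samples, n_inputs))

# Each output depends only on a small neighborhood of inputs (assumed known).
neighborhoods = {"y0": [0, 1], "y1": [2, 3], "y2": [4, 5]}
true_w = {"y0": [1.0, -2.0], "y1": [0.5, 0.5], "y2": [-1.0, 3.0]}
Y = {name: X[:, idx] @ np.array(true_w[name]) + 0.05 * rng.normal(size=n_samples)
     for name, idx in neighborhoods.items()}

cells = {}
for name, idx in neighborhoods.items():              # independent training per cell
    w, *_ = np.linalg.lstsq(X[:, idx], Y[name], rcond=None)
    cells[name] = (idx, w)

for name, (idx, w) in cells.items():
    print(f"{name}: inputs {idx}, learned weights {np.round(w, 2)}")
```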

  8. Creativity in design and artificial neural networks

    SciTech Connect

    Neocleous, C.C.; Esat, I.I.; Schizas, C.N.

    1996-12-31

    The creativity phase is identified as an integral part of the design phase. The characteristics of creative persons that are relevant to designing artificial neural networks manifesting aspects of creativity are identified. Based on these identifications, a general framework of artificial neural network characteristics to implement such a goal is proposed.

  9. Applications of Neural Networks in Finance.

    ERIC Educational Resources Information Center

    Crockett, Henry; Morrison, Ronald

    1994-01-01

    Discusses research with neural networks in the area of finance. Highlights include bond pricing, theoretical exposition of primary bond pricing, bond pricing regression model, and an example that created networks with corporate bonds and NeuralWare NeuralWorks Professional II software using the back-propagation technique. (LRW)

  10. Neural Network Algorithm for Particle Loading

    SciTech Connect

    J. L. V. Lewandowski

    2003-04-25

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.

  11. Neural Networks for Handwritten English Alphabet Recognition

    NASA Astrophysics Data System (ADS)

    Perwej, Yusuf; Chaturvedi, Ashish

    2011-04-01

    This paper demonstrates the use of neural networks for developing a system that can recognize hand-written English alphabets. In this system, each English alphabet is represented by binary values that are used as input to a simple feature extraction system, whose output is fed to our neural network system.

  12. Radiation Behavior of Analog Neural Network Chip

    NASA Technical Reports Server (NTRS)

    Langenbacher, H.; Zee, F.; Daud, T.; Thakoor, A.

    1996-01-01

    A neural network experiment was conducted for the Space Technology Research Vehicle (STRV-1) 1-b launched in June 1994. Identical sets of analog feed-forward neural network chips were used to study and compare the effects of space and ground radiation on the chips. Three failure mechanisms are noted.

  13. Neural network classification - A Bayesian interpretation

    NASA Technical Reports Server (NTRS)

    Wan, Eric A.

    1990-01-01

    The relationship between minimizing a mean squared error and finding the optimal Bayesian classifier is reviewed. This provides a theoretical interpretation for the process by which neural networks are used in classification. A number of confidence measures are proposed to evaluate the performance of the neural network classifier within a statistical framework.

  14. Advanced telerobotic control using neural networks

    NASA Technical Reports Server (NTRS)

    Pap, Robert M.; Atkins, Mark; Cox, Chadwick; Glover, Charles; Kissel, Ralph; Saeks, Richard

    1993-01-01

    Accurate Automation is designing and developing adaptive decentralized joint controllers using neural networks. These are being implemented in hardware for the Marshall Space Flight Center PFMA, and are also intended to be usable for the Remote Manipulator System (RMS) robot arm. The design is being realized in hardware after completion of the software simulation and is implemented using a Functional-Link neural network.

  15. Isolated Speech Recognition Using Artificial Neural Networks

    DTIC Science & Technology

    2007-11-02

    In this project, Artificial Neural Networks are used as a research tool to accomplish Automated Speech Recognition of normal speech. A small size...the first stage of this work are satisfactory, and thus the application of artificial neural networks in conjunction with cepstral analysis to isolated word recognition holds promise.

  16. Online guidance updates using neural networks

    NASA Astrophysics Data System (ADS)

    Filici, Cristian; Sánchez Peña, Ricardo S.

    2010-02-01

    The aim of this article is to present a method for the online guidance update for a launcher ascent trajectory that is based on the utilization of a neural network approximator. Generation of training patterns and selection of the input and output spaces of the neural network are presented, and implementation issues are discussed. The method is illustrated by a 2-dimensional launcher simulation.

  17. Neural network based architectures for aerospace applications

    NASA Technical Reports Server (NTRS)

    Ricart, Richard

    1987-01-01

    A brief history of the field of neural networks research is given and some simple concepts are described. In addition, some neural network based avionics research and development programs are reviewed. The need for the United States Air Force and NASA to assume a leadership role in supporting this technology is stressed.

  18. Neural Network Classification of Cerebral Embolic Signals

    DTIC Science & Technology

    2007-11-02

    application of new signal processing techniques to the analysis and classification of embolic signals. We applied a Wavelet Neural Network algorithm...to approximate the embolic signals, with the parameters of the wavelet nodes being used to train a Neural Network to classify these signals as resulting from normal flow, or from gaseous or solid emboli.

  19. Neural Network Research: A Personal Perspective,

    DTIC Science & Technology

    1988-03-01

    These vision preprocessor and ART autonomous classifier examples are just two of the many neural network architectures now being developed by...computational theories with natural realizations as real-time adaptive neural network architectures with promising properties for tackling some of the

  20. Neural Network Based Helicopter Low Airspeed Indicator

    DTIC Science & Technology

    1996-10-24

    This invention relates generally to virtual sensors and, more particularly, to a means and method utilizing a neural network for estimating...helicopter airspeed at speeds below about 50 knots using only fixed system parameters (i.e., parameters measured or determined in a reference frame fixed relative to the helicopter fuselage) as inputs to the neural network.

  1. Evolving Neural Networks for Nonlinear Control.

    DTIC Science & Technology

    1996-09-30

    An approach to creating Amorphous Recurrent Neural Networks (ARNN) using Genetic Algorithms (GA) called 2pGA has been developed and shown to be...effective in evolving neural networks for the control and stabilization of both linear and nonlinear plants, the optimal control for a nonlinear regulator

  2. Advanced telerobotic control using neural networks

    NASA Technical Reports Server (NTRS)

    Pap, Robert M.; Atkins, Mark; Cox, Chadwick; Glover, Charles; Kissel, Ralph; Saeks, Richard

    1993-01-01

    Accurate Automation is designing and developing adaptive decentralized joint controllers using neural networks. These are being implemented in hardware for the Marshall Space Flight Center PFMA, and are also intended to be usable for the Remote Manipulator System (RMS) robot arm. The design is being realized in hardware after completion of the software simulation and is implemented using a Functional-Link neural network.

  3. Neural networks applications to control and computations

    NASA Technical Reports Server (NTRS)

    Luxemburg, Leon A.

    1994-01-01

    Several interrelated problems in the area of neural network computations are described. First, an interpolation problem is considered; then a control problem is reduced to a problem of interpolation by a neural network via a Lyapunov function approach; and finally, a new learning method, faster than gradient descent, is introduced.

  4. Self-organization of neural networks

    NASA Astrophysics Data System (ADS)

    Clark, John W.; Winston, Jeffrey V.; Rafelski, Johann

    1984-05-01

    The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm (“brainwashing”) is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena.

  5. The neural network approach to parton fitting

    SciTech Connect

    Rojo, Joan; Latorre, Jose I.; Del Debbio, Luigi; Forte, Stefano; Piccione, Andrea

    2005-10-06

    We introduce the neural network approach to global fits of parton distribution functions. First we review previous work on unbiased parametrizations of deep-inelastic structure functions with faithful estimation of their uncertainties, and then we summarize the current status of neural network parton distribution fits.

  6. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Adaptive Neurons For Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In an improved mathematical model of a neural-network processor, the temperature of the neurons (in addition to the connection strengths, or synaptic weights) is varied during the supervised-learning phase of operation according to a mathematical formalism rather than a heuristic rule. There is evidence that biological neural networks also process information at the neuronal level.

  8. A Survey of Neural Network Publications.

    ERIC Educational Resources Information Center

    Vijayaraman, Bindiganavale S.; Osyk, Barbara

    This paper is a survey of publications on artificial neural networks published in business journals for the period ending July 1996. Its purpose is to identify and analyze trends in neural network research during that period. This paper shows which topics have been heavily researched, when these topics were researched, and how that research has…

  9. Introduction to Concepts in Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  11. Forecasting Jet Fuel Prices Using Artificial Neural Networks.

    DTIC Science & Technology

    1995-03-01

    Artificial neural networks provide a new approach to commodity forecasting that does not require algorithm or rule development. Neural networks have...NeuralWare, more people can take advantage of the power of artificial neural networks. This thesis provides an introduction to neural networks, and reviews

  12. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.

    PubMed

    Ly, Cheng

    2015-12-01

    Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature in neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely play significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and simpler analytic descriptions based on this dimension reduction method are also developed. The final analytic descriptions provide compact formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.
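
    For readers who want to see the two heterogeneity types side by side, the following is a minimal sketch under assumed parameters (a plain excitatory LIF network, not the paper's canonical model or its dimension-reduction method): thresholds carry the intrinsic heterogeneity, random synaptic weights carry the network heterogeneity, and the spread of the resulting firing rates is reported.

        # Minimal sketch, assumed parameters: LIF network with heterogeneous thresholds
        # (intrinsic heterogeneity) and heterogeneous synaptic weights (network
        # heterogeneity); reports the resulting firing-rate heterogeneity.
        import numpy as np

        rng = np.random.default_rng(0)
        N, T, dt = 200, 2.0, 1e-3                        # neurons, seconds, time step
        tau, v_reset = 0.02, 0.0                         # membrane time constant (s), reset
        theta = 1.0 + 0.2 * rng.standard_normal(N)       # intrinsic heterogeneity: thresholds
        W = 0.02 * rng.standard_normal((N, N)) / np.sqrt(N)   # network heterogeneity: weights
        np.fill_diagonal(W, 0.0)
        drive = 1.2                                      # common external drive

        v = rng.uniform(0.0, 1.0, N)
        spike_counts = np.zeros(N)
        for _ in range(int(T / dt)):
            spiked = v >= theta
            spike_counts += spiked
            v = np.where(spiked, v_reset, v)
            recurrent = W @ spiked.astype(float)         # instantaneous pulse coupling
            v += dt / tau * (-v + drive) + recurrent

        rates = spike_counts / T
        print(f"mean rate {rates.mean():.1f} Hz, rate std {rates.std():.1f} Hz")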

  13. Pruning artificial neural networks using neural complexity measures.

    PubMed

    Jorgensen, Thomas D; Haynes, Barry P; Norlund, Charlotte C F

    2008-10-01

    This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the neural network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size whilst retaining learnt behaviour and fitness. The technique helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network due to the reduction of the dimensionality of the network.
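
    As a point of reference for the benchmark named above, here is a minimal sketch of Magnitude Based Pruning on assumed random weight matrices (our illustration, not the paper's complexity-based method):

        # Minimal sketch: Magnitude Based Pruning, i.e. zeroing the globally
        # smallest-magnitude weights of a trained network (assumed weights here).
        import numpy as np

        def magnitude_prune(weights, fraction):
            """Zero out roughly the given fraction of weights with smallest |w|."""
            flat = np.concatenate([w.ravel() for w in weights])
            k = int(fraction * flat.size)
            if k == 0:
                return [w.copy() for w in weights]
            threshold = np.sort(np.abs(flat))[k - 1]
            # Ties at the threshold may prune slightly more than `fraction`.
            return [np.where(np.abs(w) <= threshold, 0.0, w) for w in weights]

        rng = np.random.default_rng(0)
        layers = [rng.standard_normal((8, 16)), rng.standard_normal((16, 4))]
        pruned = magnitude_prune(layers, fraction=0.5)
        kept = sum(int(np.count_nonzero(w)) for w in pruned)
        print(f"kept {kept} of {sum(w.size for w in layers)} weights")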

  14. Enhancing neural-network performance via assortativity.

    PubMed

    de Franciscis, Sebastiano; Johnson, Samuel; Torres, Joaquín J

    2011-03-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations--assortativity--on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.
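
    The assortativity measure itself is easy to compute. Below is a minimal sketch (our own helper on a random graph, not the authors' code) that evaluates the degree-degree correlation as the Pearson correlation of the degrees found at the two ends of each edge:

        # Minimal sketch: degree assortativity of an undirected network, computed as
        # the Pearson correlation of the degrees at the two ends of every edge.
        import numpy as np

        def degree_assortativity(adj):
            adj = np.asarray(adj)
            deg = adj.sum(axis=1)
            i, j = np.nonzero(np.triu(adj, 1))          # each undirected edge once
            x = np.concatenate([deg[i], deg[j]])        # count both orientations
            y = np.concatenate([deg[j], deg[i]])
            return np.corrcoef(x, y)[0, 1]

        rng = np.random.default_rng(2)
        adj = (rng.random((50, 50)) < 0.1).astype(int)  # assumed random topology
        adj = np.triu(adj, 1)
        adj = adj + adj.T                               # symmetrise, no self-loops
        print(f"assortativity r = {degree_assortativity(adj):.3f}")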

  15. Enhancing neural-network performance via assortativity

    SciTech Connect

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-03-15

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.

  16. Wavelet differential neural network observer.

    PubMed

    Chairez, Isaac

    2009-09-01

    State estimation for uncertain systems affected by external noises is an important problem in control theory. This paper deals with a state observation problem when the dynamic model of a plant contains uncertainties or is completely unknown. A differential neural network (NN) approach is applied in this uninformative situation, but with activation functions described by wavelets. A new learning law, containing an adaptive adjustment rate, is suggested to imply the stability condition for the free parameters of the observer. Nominal weights are adjusted during the preliminary training process using the least mean square (LMS) method. Lyapunov theory is used to obtain the upper bounds for the weight dynamics as well as for the mean squared estimation error. Two numerical examples illustrate this approach: first, a nonlinear electric system governed by Chua's equation, and second, the Lorenz oscillator. Both systems are assumed to be affected by external perturbations, and their parameters are unknown.

  17. Sunspot prediction using neural networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Baffes, Paul

    1990-01-01

    The earliest known systematic observation of sunspot activity was made by the Chinese in 1382, during the Ming Dynasty (1368 to 1644), when spots on the sun were noticed by looking at the sun through thick forest-fire smoke. Not until after the 18th century did sunspot levels become more than a source of wonderment and curiosity. Since 1834, reliable sunspot data have been collected by the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Naval Observatory. Recently, considerable effort has been placed on the study of the effects of sunspots on the ecosystem and the space environment. The efforts of the Artificial Intelligence Section of the Mission Planning and Analysis Division of the Johnson Space Center involving the prediction of sunspot activity using neural network technologies are described.

  18. Inferring cellular networks – a review

    PubMed Central

    Markowetz, Florian; Spang, Rainer

    2007-01-01

    In this review we give an overview of computational and statistical methods to reconstruct cellular networks. Although this area of research is vast and fast developing, we show that most currently used methods can be organized by a few key concepts. The first part of the review deals with conditional independence models including Gaussian graphical models and Bayesian networks. The second part discusses probabilistic and graph-based methods for data from experimental interventions and perturbations. PMID:17903286

  19. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2004-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate around

  20. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2004-03-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing co-funding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate around

  1. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2003-12-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NO{sub x} formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate

  2. Artificial neural networks in neurosurgery.

    PubMed

    Azimi, Parisa; Mohammadi, Hasan Reza; Benzel, Edward C; Shahzadi, Sohrab; Azhari, Shirzad; Montazeri, Ali

    2015-03-01

    Artificial neural networks (ANNs) effectively analyze non-linear data sets. The aim was to review the published articles that focused on the application of ANNs as a tool for assisting clinical decision-making in neurosurgery. A literature review of all full publications in English biomedical journals (1993-2013) was undertaken. The strategy included a combination of the key words 'artificial neural networks', 'prognostic', 'brain', 'tumor tracking', 'head', 'tumor', 'spine', 'classification' and 'back pain' in the title and abstract of the manuscripts using the PubMed search engine. The major findings are summarized, with a focus on the application of ANNs for diagnostic and prognostic purposes. Finally, the future of ANNs in neurosurgery is explored. A total of 1093 citations were identified and screened. In all, 57 citations were found to be relevant. Of these, 50 articles were eligible for inclusion in this review. The synthesis of the data showed several applications of ANNs in neurosurgery, including: (1) diagnosis and assessment of disease progression in low back pain, brain tumours and primary epilepsy; (2) enhancing clinically relevant information extraction from radiographic images, intracranial pressure processing, low back pain and real-time tumour tracking; (3) outcome prediction in epilepsy, brain metastases, lumbar spinal stenosis, lumbar disc herniation, childhood hydrocephalus, trauma mortality, and the occurrence of symptomatic cerebral vasospasm in patients with aneurysmal subarachnoid haemorrhage; (4) use in the biomechanical assessment of spinal disease. ANNs can be effectively employed for diagnosis, prognosis and outcome prediction in neurosurgery. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  3. Neural networks for damage identification

    SciTech Connect

    Paez, T.L.; Klenke, S.E.

    1997-11-01

    Efforts to optimize the design of mechanical systems for preestablished use environments and to extend the durations of use cycles establish a need for in-service health monitoring. Numerous studies have proposed measures of structural response for the identification of structural damage, but few have suggested systematic techniques to guide the decision as to whether or not damage has occurred based on real data. Such techniques are necessary because in field applications the environments in which systems operate and the measurements that characterize system behavior are random. This paper investigates the use of artificial neural networks (ANNs) to identify damage in mechanical systems. Two probabilistic neural networks (PNNs) are developed and used to judge whether or not damage has occurred in a specific mechanical system, based on experimental measurements. The first PNN is a classical type that casts Bayesian decision analysis into an ANN framework; it uses exemplars measured from the undamaged and damaged system to establish whether system response measurements of unknown origin come from the former class (undamaged) or the latter class (damaged). The second PNN establishes the character of the undamaged system in terms of a kernel density estimator of measures of system response; when presented with system response measures of unknown origin, it makes a probabilistic judgment whether or not the data come from the undamaged population. The physical system used to carry out the experiments is an aerospace system component, and the environment used to excite the system is a stationary random vibration. The results of damage identification experiments are presented along with conclusions rating the effectiveness of the approaches.
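
    The classical PNN idea described above amounts to a Parzen kernel density estimate per class followed by a Bayesian class assignment. The following is a minimal sketch on assumed synthetic 'undamaged'/'damaged' exemplars, not the experimental data or the exact kernel settings of the paper:

        # Minimal sketch: a probabilistic neural network as a per-class Parzen
        # density estimate with Bayesian assignment (assumed synthetic exemplars).
        import numpy as np

        def pnn_classify(x, exemplars_by_class, sigma=0.5, priors=None):
            classes = sorted(exemplars_by_class)
            scores = []
            for c in classes:
                E = exemplars_by_class[c]                        # (n_c, d) exemplars
                d2 = np.sum((E - x) ** 2, axis=1)
                density = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
                prior = 1.0 / len(classes) if priors is None else priors[c]
                scores.append(prior * density)
            return classes[int(np.argmax(scores))]

        rng = np.random.default_rng(3)
        data = {"undamaged": rng.normal(0.0, 1.0, (40, 2)),
                "damaged":   rng.normal(2.5, 1.0, (40, 2))}
        print(pnn_classify(np.array([2.3, 2.1]), data))          # expected: "damaged"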

  4. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2002-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NO{sub x} formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, online, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate

  5. Local Dynamics in Trained Recurrent Neural Networks

    NASA Astrophysics Data System (ADS)

    Rivkind, Alexander; Barak, Omri

    2017-06-01

    Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

  6. Nonlinear programming with feedforward neural networks.

    SciTech Connect

    Reifman, J.

    1999-06-02

    We provide a practical and effective method for solving constrained optimization problems by successively training a multilayer feedforward neural network in a coupled neural-network/objective-function representation. Nonlinear programming problems are easily mapped into this representation which has a simpler and more transparent method of solution than optimization performed with Hopfield-like networks and poses very mild requirements on the functions appearing in the problem. Simulation results are illustrated and compared with an off-the-shelf optimization tool.

  7. VLSI Cells Placement Using the Neural Networks

    SciTech Connect

    Azizi, Hacene; Zouaoui, Lamri; Mokhnache, Salah

    2008-06-12

    Artificial neural networks have been studied for several years, and their effectiveness makes it possible to expect high performance. The privileged fields of these techniques remain recognition and classification. Various optimization applications are also studied from the perspective of artificial neural networks, since they make it possible to apply distributed heuristic algorithms. In this article, a solution to the problem of placing the various cells during the realization of an integrated circuit is proposed, using the Kohonen network.
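
    As a reminder of the mechanism being reused, here is a minimal sketch of a generic Kohonen self-organizing map (assumed random inputs and schedule, not the authors' placement formulation): units compete for each input and the winner drags its grid neighbours toward it.

        # Minimal sketch: a generic Kohonen self-organizing map with assumed inputs,
        # learning-rate and neighbourhood schedules.
        import numpy as np

        rng = np.random.default_rng(4)
        grid_w, grid_h, dim, steps = 8, 8, 2, 2000
        weights = rng.random((grid_w * grid_h, dim))
        coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)
        data = rng.random((500, dim))                       # assumed input vectors

        for t in range(steps):
            x = data[rng.integers(len(data))]
            lr = 0.5 * (1.0 - t / steps)                    # decaying learning rate
            radius = 1.0 + 3.0 * (1.0 - t / steps)          # decaying neighbourhood radius
            bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))   # best-matching unit
            dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-dist2 / (2.0 * radius ** 2))        # neighbourhood function
            weights += lr * h[:, None] * (x - weights)

        print("trained map shape:", weights.reshape(grid_w, grid_h, dim).shape)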

  8. Linear programming for learning in neural networks

    NASA Astrophysics Data System (ADS)

    Raghavan, Raghu

    1991-08-01

    The authors have previously proposed a network of probabilistic cellular automata (PCAs) as part of an image recognition system designed to integrate model-based and data-driven approaches in a connectionist framework. The PCA arises from some natural requirements on the system which include incorporation of prior knowledge such as in inference rules, locality of inferences, and full parallelism. This network has been applied to recognize objects in both synthetic and real data. This approach achieves recognition through the short-, rather than the long-time behavior of the dynamics of the PCA. In this paper, some methods are developed for learning the connection strengths by solving linear inequalities: the figures of merit are tendencies or directions of movement of the dynamical system. These 'dynamical' figures of merit result in inequality constraints on the connection strengths which are solved by linear (LP) or quadratic programs (QP). An algorithm is described for processing a large number of samples to determine weights for the PCA. The work may be regarded as either pointing out another application for constrained optimization, or as pointing out the need to extend the perceptron and similar methods for learning. The extension is needed because the neural network operates on a different principle from that for which the perceptron method was devised.
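
    The learning-by-linear-inequalities idea can be illustrated on a toy problem. The sketch below is our own example (a separable classification task, not the probabilistic-cellular-automaton system of the paper): the weights w are required to satisfy y_i (w . x_i) >= 1 for every sample, and an LP minimizes the L1 norm of w by splitting it into nonnegative parts u and v.

        # Minimal sketch: learning connection strengths by solving linear inequalities
        # with an LP (toy separable data, assumed margin of 1, L1 objective).
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(5)
        X = rng.standard_normal((30, 3))
        y = np.where(X @ np.array([1.0, -2.0, 0.5]) > 0, 1.0, -1.0)   # separable labels

        d = X.shape[1]
        c = np.ones(2 * d)                                    # minimise sum(u) + sum(v)
        # Constraint y_i * x_i.(u - v) >= 1 rewritten as -y_i x_i.u + y_i x_i.v <= -1
        A_ub = np.hstack([-(y[:, None] * X), y[:, None] * X])
        b_ub = -np.ones(len(X))
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * d))
        w = res.x[:d] - res.x[d:]
        print("feasible:", res.success, "weights:", np.round(w, 3))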

  9. Coronary Artery Diagnosis Aided by Neural Network

    NASA Astrophysics Data System (ADS)

    Stefko, Kamil

    2007-01-01

    Coronary artery disease is due to atheromatous narrowing and subsequent occlusion of the coronary vessel. The application of an optimised feed-forward multi-layer back-propagation neural network (MLBP) for the detection of narrowing in coronary artery vessels is presented in this paper. The research was performed using 580 data records from traditional ECG exercise tests confirmed by coronary arteriography results. Each record of the training database included a description of the state of a patient, providing input data for the neural network. The level and slope of the ST segment of a 12-lead ECG signal recorded at rest and after effort (48 floating-point values) were the main components of the input data for the neural network. Coronary arteriography results (verifying the existence or absence of more than 50% stenosis of the particular coronary vessels) were used as the correct neural network training output pattern. More than 96% of cases were correctly recognised by the especially optimised and thoroughly verified neural network. The leave-one-out method was used for neural network verification, so all 580 data records could be used for training as well as for verification of the neural network.

  10. Acute appendicitis diagnosis using artificial neural networks.

    PubMed

    Park, Sung Yun; Kim, Sung Min

    2015-01-01

    Artificial neural networks (ANNs) are a pattern-analysis method that is being rapidly applied in the biomedical field. The aim of this research was to propose an appendicitis diagnosis system using artificial neural networks. Data from 801 patients of the university hospital in Dongguk were used to construct artificial neural networks for diagnosing appendicitis and acute appendicitis. A radial basis function neural network structure (RBF), a multilayer neural network structure (MLNN), and a probabilistic neural network structure (PNN) were used for the artificial neural network models. The Alvarado clinical scoring system was used for comparison with the ANNs. The accuracy of the RBF, PNN, MLNN, and Alvarado was 99.80%, 99.41%, 97.84%, and 72.19%, respectively. The area under the ROC (receiver operating characteristic) curve of the RBF, PNN, MLNN, and Alvarado was 0.998, 0.993, 0.985, and 0.633, respectively. The proposed models using ANNs for diagnosing appendicitis showed good performance and were significantly better than the Alvarado clinical scoring system (p < 0.001). With cooperation among facilities, the accuracy for diagnosing this serious health condition can be improved.

  11. Neural network regulation driven by autonomous neural firings

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified by neural firings. Here, we study the effect of autonomous firings on network formation. For temporally asymmetric Hebbian learning, bidirectional connections easily lose their balance and become unidirectional. Defining the difference between reciprocal connections as new variables, we can express the learning dynamics as if Ising-model spins were interacting with each other, as in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.

  12. Object detection using pulse coupled neural networks.

    PubMed

    Ranganath, H S; Kuntimad, G

    1999-01-01

    This paper describes an object detection system based on pulse coupled neural networks. The system is designed and implemented to illustrate the power, flexibility and potential the pulse coupled neural networks have in real-time image processing. In the preprocessing stage, a pulse coupled neural network suppresses noise by smoothing the input image. In the segmentation stage, a second pulse coupled neural network iteratively segments the input image. During each iteration, with the help of a control module, the segmentation network deletes regions that do not satisfy the retention criteria from further processing and produces an improved segmentation of the retained image. In the final stage, each group of connected regions that satisfies the detection criteria is identified as an instance of the object of interest.
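
    The update rules of a basic pulse coupled neural network are compact enough to show directly. The following is a minimal sketch with textbook PCNN equations, assumed constants, and a synthetic image, not the authors' full preprocessing/segmentation/detection pipeline:

        # Minimal sketch: textbook PCNN layer on a synthetic image; neurons fire in
        # waves that group pixels of similar intensity (assumed decay/coupling values).
        import numpy as np
        from scipy.signal import convolve2d

        rng = np.random.default_rng(6)
        S = np.zeros((32, 32))
        S[8:20, 10:22] = 1.0                               # bright square as the 'object'
        S += 0.1 * rng.random(S.shape)                     # mild noise

        kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
        F = np.zeros_like(S); L = np.zeros_like(S); Y = np.zeros_like(S)
        Theta = np.ones_like(S)
        aF, aL, aT = 0.1, 0.3, 0.2                         # decay constants
        vF, vL, vT = 0.1, 0.2, 5.0                         # coupling / threshold magnitudes
        beta = 0.2                                         # linking strength

        for n in range(10):
            link_in = convolve2d(Y, kernel, mode="same")
            F = np.exp(-aF) * F + vF * link_in + S         # feeding input
            L = np.exp(-aL) * L + vL * link_in             # linking input
            U = F * (1.0 + beta * L)                       # internal activity
            Y = (U > Theta).astype(float)                  # pulse output
            Theta = np.exp(-aT) * Theta + vT * Y           # dynamic threshold

        print("pixels firing at the last step:", int(Y.sum()))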

  13. A neural network prototyping package within IRAF

    NASA Technical Reports Server (NTRS)

    Bazell, D.; Bankman, I.

    1992-01-01

    We outline our plans for incorporating a Neural Network Prototyping Package into the IRAF environment. The package we are developing will allow the user to choose between different types of networks and to specify the details of the particular architecture chosen. Neural networks consist of a highly interconnected set of simple processing units. The strengths of the connections between units are determined by weights which are adaptively set as the network 'learns'. In some cases, learning can be a separate phase of the user cycle of the network while in other cases the network learns continuously. Neural networks have been found to be very useful in pattern recognition and image processing applications. They can form very general 'decision boundaries' to differentiate between objects in pattern space and they can be used for associative recall of patterns based on partial cues and for adaptive filtering. We discuss the different architectures we plan to use and give examples of what they can do.

  14. Deep Neural Networks for Identifying Cough Sounds.

    PubMed

    Amoh, Justice; Odame, Kofi

    2016-10-01

    In this paper, we consider two different approaches to using deep neural networks for cough detection. The cough detection task is cast as a visual recognition problem and as a sequence-to-sequence labeling problem. A convolutional neural network and a recurrent neural network are implemented to address these problems, respectively. We evaluate the performance of the two networks and compare them to other conventional approaches for identifying cough sounds. In addition, we also explore the effect of the network size parameters and the impact of long-term signal dependencies on cough classifier performance. Experimental results show both network architectures outperform traditional methods. Between the two, our convolutional network yields a higher specificity of 92.7%, whereas the recurrent network attains a higher sensitivity of 87.7%.

  15. Multispectral image fusion using neural networks

    NASA Technical Reports Server (NTRS)

    Kagel, J. H.; Platt, C. A.; Donaven, T. W.; Samstad, E. A.

    1990-01-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  16. Multispectral-image fusion using neural networks

    NASA Astrophysics Data System (ADS)

    Kagel, Joseph H.; Platt, C. A.; Donaven, T. W.; Samstad, Eric A.

    1990-08-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulation results and a description of the prototype system are presented.

  17. Genetic algorithm for neural networks optimization

    NASA Astrophysics Data System (ADS)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed-forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, were investigated with the neural network topology and other parameters held fixed. The early results indicate that the application of this hybrid system seems to be well suited to the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB.
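
    To make the genetic-algorithm side concrete, here is a minimal sketch on an assumed toy task (XOR with a fixed 2-2-1 network), not the Yen/US Dollar experiment or its MATLAB code:

        # Minimal sketch: a genetic algorithm evolving the weights of a fixed 2-2-1
        # feedforward network on XOR (assumed population size, mutation rate, etc.).
        import numpy as np

        rng = np.random.default_rng(9)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
        y = np.array([0, 1, 1, 0], float)
        n_w = 2 * 2 + 2 + 2 * 1 + 1                     # weights + biases of a 2-2-1 net

        def forward(w, X):
            W1 = w[:4].reshape(2, 2); b1 = w[4:6]
            W2 = w[6:8].reshape(2, 1); b2 = w[8]
            h = np.tanh(X @ W1 + b1)
            return 1.0 / (1.0 + np.exp(-(h @ W2).ravel() - b2))

        def fitness(w):
            return -np.mean((forward(w, X) - y) ** 2)   # negative MSE

        pop = rng.standard_normal((60, n_w))
        for gen in range(200):
            scores = np.array([fitness(ind) for ind in pop])
            order = np.argsort(scores)[::-1]
            parents = pop[order[:20]]                   # truncation selection
            children = []
            while len(children) < len(pop):
                a, b = parents[rng.integers(20, size=2)]
                mask = rng.random(n_w) < 0.5            # uniform crossover
                children.append(np.where(mask, a, b) + 0.1 * rng.standard_normal(n_w))
            pop = np.array(children)
            pop[0] = parents[0]                         # elitism: keep the best

        best = max(pop, key=fitness)
        print("outputs:", np.round(forward(best, X), 2))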

  18. Pricing financial derivatives with neural networks

    NASA Astrophysics Data System (ADS)

    Morelli, Marco J.; Montagna, Guido; Nicrosini, Oreste; Treccani, Michele; Farina, Marco; Amato, Paolo

    2004-07-01

    Neural network algorithms are applied to the problem of option pricing and adopted to simulate the nonlinear behavior of such financial derivatives. Two different kinds of neural networks, i.e. multi-layer perceptrons and radial basis functions, are used and their performances compared in detail. The analysis is carried out both for standard European options and American ones, including evaluation of the Greek letters, necessary for hedging purposes. Detailed numerical investigations show that, after a careful phase of training, neural networks are able to predict the value of options and Greek letters with high accuracy and competitive computational time.

  19. Attitude control of spacecraft using neural networks

    NASA Technical Reports Server (NTRS)

    Vadali, Srinivas R.; Krishnan, S.; Singh, T.

    1993-01-01

    This paper investigates the use of radial basis function neural networks for adaptive attitude control and momentum management of spacecraft. In the first part of the paper, neural networks are trained to learn from a family of open-loop optimal controls parameterized by the initial states and times-to-go. The trained network is then used for closed-loop control. In the second part of the paper, neural networks are used for direct adaptive control in the presence of unmodeled effects and parameter uncertainty. The control and learning laws are derived using the method of Lyapunov.

  20. Description of interatomic interactions with neural networks

    NASA Astrophysics Data System (ADS)

    Hajinazar, Samad; Shao, Junping; Kolmogorov, Aleksey N.

    Neural networks are a promising alternative to traditional classical potentials for describing interatomic interactions. Recent research in the field has demonstrated how arbitrary atomic environments can be represented with sets of general functions which serve as an input for the machine learning tool. We have implemented a neural network formalism in the MAISE package and developed a protocol for automated generation of accurate models for multi-component systems. Our tests illustrate the performance of neural networks and known classical potentials for a range of chemical compositions and atomic configurations. Supported by NSF Grant DMR-1410514.

  1. Neural networks in auroral data assimilation

    NASA Astrophysics Data System (ADS)

    Härter, Fabrício P.; de Campos Velho, Haroldo F.; Rempel, Erico L.; Chian, Abraham C.-L.

    2008-07-01

    Data assimilation is an essential step for improving space weather forecasting by means of a weighted combination between observational data and data from a mathematical model. In the present work data assimilation methods based on Kalman filter (KF) and artificial neural networks are applied to a three-wave model of auroral radio emissions. A novel data assimilation method is presented, whereby a multilayer perceptron neural network is trained to emulate a KF for data assimilation by using cross-validation. The results obtained render support for the use of neural networks as an assimilation technique for space weather prediction.

  2. Noise cancellation of memristive neural networks.

    PubMed

    Wen, Shiping; Zeng, Zhigang; Huang, Tingwen; Yu, Xinghuo

    2014-12-01

    This paper investigates the noise cancellation problem of memristive neural networks. Based on the reproducible gradual resistance tuning in bipolar mode, a first-order voltage-controlled memristive model is employed with asymmetric voltage thresholds. Since memristive devices are tiny enough to be densely packed in crossbar-like structures and possess the long-term memory needed by neuromorphic synapses, this paper shows how to approximate the behavior of synapses in neural networks using this memristive device. Certain templates of memristive neural networks are also established to implement the noise cancellation.

  3. Stock market index prediction using neural networks

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok

    1994-03-01

    A neural network approach to stock market index prediction is presented. Actual data from the Wall Street Journal's Dow Jones Industrial Index has been used as a benchmark in our experiments, in which Radial Basis Function based neural networks have been designed to model the index over the period from January 1988 to December 1992. Notable success has been achieved, with the proposed model producing over 90% prediction accuracy on monthly Dow Jones Industrial Index predictions. The model has also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrate that the Radial Basis Function neural network is an excellent candidate for predicting stock market indices.
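
    A Radial Basis Function predictor of this kind can be sketched in a few lines. The example below uses a synthetic series and assumed hyperparameters, not the Dow Jones data or the exact network of the paper: Gaussian bumps over lagged windows feed a linear output layer fitted by least squares.

        # Minimal sketch: RBF network for one-step time-series prediction on a
        # synthetic series (assumed lag, number of centres, and kernel width).
        import numpy as np

        rng = np.random.default_rng(7)
        t = np.arange(300)
        series = np.sin(0.07 * t) + 0.05 * rng.standard_normal(t.size)   # assumed proxy series

        lag = 6
        X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
        y = series[lag:]

        # Hidden layer: Gaussian bumps around randomly chosen training windows.
        centres = X[rng.choice(len(X), size=30, replace=False)]
        width = np.median(np.linalg.norm(X[:, None] - centres[None], axis=2))

        def rbf_features(Z):
            d2 = np.sum((Z[:, None] - centres[None]) ** 2, axis=2)
            return np.exp(-d2 / (2.0 * width ** 2))

        Phi = np.column_stack([rbf_features(X), np.ones(len(X))])        # add bias unit
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)                      # linear output layer

        last = series[-lag:][None]
        pred = np.column_stack([rbf_features(last), np.ones(1)]) @ w
        print(f"next-step prediction: {pred[0]:.3f}")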

  4. Neural networks techniques applied to reservoir engineering

    SciTech Connect

    Flores, M.; Barragan, C.

    1995-12-31

    Neural Networks are considered the greatest technological advance since the transistor. They are expected to be a common household item by the year 2000. An attempt has been made to apply Neural Networks to an important geothermal problem: predicting well production and well completion during drilling in a geothermal field. This was done in the Los Humeros geothermal field, using two common types of Neural Network models available in commercial software. Results show the learning capacity of the developed model and its precision in the predictions that were made.

  5. Neural network with formed dynamics of activity

    SciTech Connect

    Dunin-Barkovskii, V.L.; Osovets, N.B.

    1995-03-01

    The problem of developing a neural network with a given pattern of the state sequence is considered. A neural network structure and an algorithm for forming its bond matrix, which lead to an approximate but robust solution of the problem, are proposed and discussed. Limiting characteristics of the serviceability of the proposed structure are studied. Various methods of visualizing dynamic processes in a neural network are compared. Possible applications of the results for the interpretation of neurophysiological data and in neuroinformatics systems are discussed.

  6. Threshold control of chaotic neural network.

    PubMed

    He, Guoguang; Shrimali, Manish Dev; Aihara, Kazuyuki

    2008-01-01

    The chaotic neural network constructed with chaotic neurons exhibits rich dynamic behaviour with a nonperiodic associative memory. In the chaotic neural network, however, it is difficult to distinguish the stored patterns in the output patterns because of the chaotic state of the network. In order to apply the nonperiodic associative memory to information search, pattern recognition, etc., it is necessary to control chaos in the chaotic neural network. We have studied the chaotic neural network with threshold activated coupling, which provides a controlled network with associative memory dynamics. The network converges to one of its stored patterns and/or their reverse patterns, whichever has the smallest Hamming distance from the initial state of the network. The range of the threshold applied to control the neurons in the network depends on the noise level in the initial pattern and decreases with increasing noise. Chaos control in the chaotic neural network by threshold activated coupling at varying time intervals provides controlled output patterns with different temporal periods which depend upon the control parameters.

  7. Absolute exponential stability of recurrent neural networks with Lipschitz-continuous activation functions and time delays.

    PubMed

    Cao, Jinde; Wang, Jun

    2004-04-01

    This paper investigates the absolute exponential stability of a general class of delayed neural networks, which require the activation functions to be partially Lipschitz continuous and monotone nondecreasing only, but not necessarily differentiable or bounded. Three new sufficient conditions are derived to ascertain whether or not the equilibrium points of the delayed neural networks with additively diagonally stable interconnection matrices are absolutely exponentially stable, by using a delay Halanay-type inequality and a Lyapunov function. The stability criteria are also suitable for delayed optimization neural networks and delayed cellular neural networks whose activation functions are often nondifferentiable or unbounded. The results herein answer the question: if a neural network without any delay is absolutely exponentially stable, then under what additional conditions is the corresponding delayed neural network also absolutely exponentially stable?

  8. Nonequilibrium landscape theory of neural networks.

    PubMed

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-11-05

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments.

  9. Nonequilibrium landscape theory of neural networks

    PubMed Central

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attractions represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, the realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to rapid-eye movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreements with experiments. PMID:24145451

  10. Results of the neural network investigation

    NASA Astrophysics Data System (ADS)

    Uvanni, Lee A.

    1992-04-01

    Rome Laboratory has designed and implemented a neural network based automatic target recognition (ATR) system under contract F30602-89-C-0079 with Booz, Allen & Hamilton (BAH), Inc., of Arlington, Virginia. The system utilizes a combination of neural network paradigms and conventional image processing techniques in a parallel environment on the IE-2000 SUN 4 workstation at Rome Laboratory. The IE-2000 workstation was designed to assist the Air Force and Department of Defense to derive the needs for image exploitation and image exploitation support for the late 1990s - year 2000 time frame. The IE-2000 consists of a developmental testbed and an applications testbed, both with the goal of solving real-world problems on real-world facilities for image exploitation. To fully exploit the parallel nature of neural networks, 18 Inmos T800 transputers were utilized, in an attempt to provide a near-linear speed-up for each subsystem component implemented on them. The initial design contained three well-known neural network paradigms, each modified by BAH to some extent: the Selective Attention Neocognitron (SAN), the Binary Contour System/Feature Contour System (BCS/FCS), and Adaptive Resonance Theory 2 (ART-2), and one neural network designed by BAH called the Image Variance Exploitation Network (IVEN). Through rapid prototyping, the initial system evolved into a completely different final design, called the Neural Network Image Exploitation System (NNIES), where the final system consists of two basic components: the Double Variance (DV) layer and the Multiple Object Detection And Location System (MODALS). A rapid prototyping neural network CAD Tool, designed by Booz, Allen & Hamilton, was used to rapidly build and emulate the neural network paradigms. Evaluation of the completed ATR system included probability of detections and probability of false alarms among other measures.

  11. Recognition of Telugu characters using neural networks.

    PubMed

    Sukhaswami, M B; Seetharamulu, P; Pujari, A K

    1995-09-01

    The aim of the present work is to recognize printed and handwritten Telugu characters using artificial neural networks (ANNs). Earlier work on recognition of Telugu characters has been done using conventional pattern recognition techniques. We make an initial attempt here of using neural networks for recognition with the aim of improving upon earlier methods which do not perform effectively in the presence of noise and distortion in the characters. The Hopfield model of neural network working as an associative memory is chosen for recognition purposes initially. Due to limitation in the capacity of the Hopfield neural network, we propose a new scheme named here as the Multiple Neural Network Associative Memory (MNNAM). The limitation in storage capacity has been overcome by combining multiple neural networks which work in parallel. It is also demonstrated that the Hopfield network is suitable for recognizing noisy printed characters as well as handwritten characters written by different "hands" in a variety of styles. Detailed experiments have been carried out using several learning strategies and results are reported. It is shown here that satisfactory recognition is possible using the proposed strategy. A detailed preprocessing scheme of the Telugu characters from digitized documents is also described.
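
    As a reminder of the associative-memory mechanism this record starts from, the sketch below shows a plain Hopfield network used as a content-addressable memory (Hebbian storage plus sign-threshold recall). The random +/-1 patterns stand in for encoded characters; the MNNAM idea of splitting the pattern set across several such networks run in parallel is not reproduced here.

        import numpy as np

        def train_hopfield(patterns):
            """Hebbian outer-product storage of +/-1 patterns."""
            n = patterns.shape[1]
            W = np.zeros((n, n))
            for p in patterns:
                W += np.outer(p, p)
            np.fill_diagonal(W, 0)          # no self-connections
            return W / patterns.shape[0]

        def recall(W, probe, steps=10):
            """Synchronous sign-threshold updates starting from a noisy probe."""
            s = probe.copy()
            for _ in range(steps):
                s = np.where(W @ s >= 0, 1, -1)
            return s

        rng = np.random.default_rng(0)
        patterns = rng.choice([-1, 1], size=(3, 64))   # 3 stored "characters"
        W = train_hopfield(patterns)

        noisy = patterns[0].copy()
        flip = rng.choice(64, size=8, replace=False)   # corrupt 8 of 64 bits
        noisy[flip] *= -1
        print("bits recovered:", int(np.sum(recall(W, noisy) == patterns[0])), "/ 64")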

  12. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    ERIC Educational Resources Information Center

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…

  14. Neural Networks for Dynamic Flight Control

    DTIC Science & Technology

    1993-12-01

    uses the Adaline (22) model for development of the neural networks. Neural Graphics and other AFIT applications use a slightly different model. The...primary difference in the Nguyen application is that the Adaline uses the nonlinear function f(a) = tanh(a) where standard backprop uses the sigmoid
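
    The snippet contrasts an Adaline-style tanh unit with the sigmoid of standard backprop. The following is a generic illustration (not taken from the cited report) of the two activations and the derivative factors that enter the backpropagated error:

        import numpy as np

        def sigmoid(a):
            return 1.0 / (1.0 + np.exp(-a))

        def dsigmoid(a):
            s = sigmoid(a)
            return s * (1.0 - s)            # maximum slope 0.25 at a = 0

        def dtanh(a):
            return 1.0 - np.tanh(a) ** 2    # maximum slope 1.0 at a = 0

        a = np.linspace(-3, 3, 7)
        print("tanh   :", np.round(np.tanh(a), 3))
        print("sigmoid:", np.round(sigmoid(a), 3))
        print("slopes at 0:", dtanh(0.0), dsigmoid(0.0))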

  15. Radar signal categorization using a neural network

    NASA Technical Reports Server (NTRS)

    Anderson, James A.; Gately, Michael T.; Penz, P. Andrew; Collins, Dean R.

    1991-01-01

    Neural networks were used to analyze a complex simulated radar environment which contains noisy radar pulses generated by many different emitters. The neural network used is an energy minimizing network (the BSB model) which forms energy minima - attractors in the network dynamical system - based on learned input data. The system first determines how many emitters are present (the deinterleaving problem). Pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, it is possible to make tentative identifications of them based on their observed parameters. As a test of this idea, a neural network was used to form a small data base that potentially could make emitter identifications.
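
    A minimal sketch of the Brain-State-in-a-Box (BSB) style update the abstract's energy-minimizing model refers to: the state is repeatedly amplified through the weight matrix and clipped to the hypercube, so it settles into a corner (an attractor). The weights and data below are invented for illustration, not the radar parameters of the study.

        import numpy as np

        def bsb_step(W, x, alpha=0.2, gamma=1.0):
            """One BSB iteration: feedback through W, then clip to [-1, 1]."""
            return np.clip(gamma * x + alpha * (W @ x), -1.0, 1.0)

        # Store two prototype "emitter signatures" with a Hebbian outer product.
        protos = np.array([[1, -1, 1, -1, 1, -1],
                           [1, 1, -1, -1, 1, 1]], dtype=float)
        W = sum(np.outer(p, p) for p in protos) / len(protos)

        x = np.array([0.3, -0.2, 0.4, -0.1, 0.2, -0.3])   # noisy pulse features
        for _ in range(20):
            x = bsb_step(W, x)
        print("settled corner:", x)     # matches the first prototype's signs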

  16. Control of autonomous robot using neural networks

    NASA Astrophysics Data System (ADS)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify models of an autonomous robot behavior, a set of experiments was created as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot was found.

  17. Imbibition well stimulation via neural network design

    DOEpatents

    Weiss, William

    2007-08-14

    A method for stimulation of hydrocarbon production via imbibition by utilization of surfactants. The method includes use of fuzzy logic and neural network architecture constructs to determine surfactant use.

  18. Neural Network Solutions to Optical Absorption Spectra

    NASA Astrophysics Data System (ADS)

    Rosenbrock, Conrad

    2012-10-01

    Artificial neural networks have been effective in reducing computation time while achieving remarkable accuracy for a variety of difficult physics problems. Neural networks are trained iteratively by adjusting the size and shape of sums of non-linear functions by varying the function parameters to fit results for complex non-linear systems. For smaller structures, ab initio simulation methods can be used to determine absorption spectra under field perturbations. However, these methods are impractical for larger structures. Designing and training an artificial neural network with simulated data from time-dependent density functional theory may allow time-dependent perturbation effects to be calculated more efficiently. I investigate the design considerations and results of neural network implementations for calculating perturbation-coupled electron oscillations in small molecules.

  19. Temporal Coding in Realistic Neural Networks

    NASA Astrophysics Data System (ADS)

    Gerasyuta, S. M.; Ivanov, D. V.

    1995-10-01

    A modification of the realistic neural network model has been proposed. The model differs from the Hopfield model because of two characteristic contributions to the synaptic efficacies: a short-time contribution, which is determined by the chemical reactions in the synapses, and a long-time contribution corresponding to the structural changes of synaptic contacts. An approximate solution of the realistic neural network model equations is obtained. This solution allows us to calculate the postsynaptic potential as a function of the input. Using the approximate solution of the realistic neural network model equations, the behaviour of the postsynaptic potential of the realistic neural network as a function of time is described for different temporal sequences of stimuli. Different outputs are obtained for different temporal sequences of the given stimuli. These properties of temporal coding can be exploited as a recognition element capable of being selectively tuned to different inputs.

  20. A neural network for bounded linear programming

    SciTech Connect

    Culioli, J.C.; Protopopescu, V.; Britton, C.; Ericson, N. )

    1989-01-01

    The purpose of this paper is to describe a neural network implementation of an algorithm recently designed at ORNL to solve the Transportation and the Assignment Problems, and, more generally, any explicitly bounded linear program. 9 refs.

  1. A neural network architecture for data classification.

    PubMed

    Lezoray, O

    2001-02-01

    This article aims at showing an architecture of neural networks designed for the classification of data distributed among a high number of classes. A significant gain in the global classification rate can be obtained by using our architecture. The latter is based on a set of several small neural networks, each one discriminating only two classes. The specialization of each neural network simplifies their structure and improves the classification. Moreover, the learning step automatically determines the number of hidden neurons. The discussion is illustrated by tests on databases from the UCI machine learning database repository. The experimental results show that this architecture can achieve faster learning, simpler neural networks and improved performance in classification.
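
    The architecture described here, many small two-class networks whose votes are combined, is essentially a one-vs-one decomposition. The sketch below shows that general idea with scikit-learn (assumed available) on the bundled digits data; it is not the authors' exact networks and does not reproduce their automatic hidden-layer sizing.

        from sklearn.datasets import load_digits
        from sklearn.model_selection import train_test_split
        from sklearn.multiclass import OneVsOneClassifier
        from sklearn.neural_network import MLPClassifier

        X, y = load_digits(return_X_y=True)          # 10 classes -> 45 pairwise nets
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        # One small MLP per pair of classes; votes are combined for the final label.
        pairwise = OneVsOneClassifier(
            MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0))
        pairwise.fit(Xtr, ytr)
        print("held-out accuracy:", round(pairwise.score(Xte, yte), 3))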

  2. Using Neural Networks for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William

    1998-01-01

    This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.
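
    A rough sketch of the second approach: an auto-associative network trained to reproduce a redundant sensor set through a bottleneck, so a failed channel can be re-estimated from the others. The synthetic "sensors" below are invented for illustration; the turbofan simulation of the paper is not reproduced.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        t = rng.uniform(0, 1, size=(2000, 1))                 # hidden operating point
        X = np.hstack([t, 2 * t + 0.5, np.sin(3 * t)])        # 3 redundant sensors
        X += 0.01 * rng.normal(size=X.shape)                  # measurement noise

        # A 2-unit bottleneck forces a compressed (nonlinear-PCA-like) representation.
        auto = MLPRegressor(hidden_layer_sizes=(8, 2, 8), max_iter=3000, random_state=0)
        auto.fit(X, X)

        sample = X[0].copy()
        sample[2] = 0.0                       # sensor 3 "fails" and reads garbage
        # Iteratively replace the failed channel with its own reconstruction, which
        # leans on the healthy channels through the bottleneck.
        for _ in range(10):
            sample[2] = auto.predict(sample.reshape(1, -1))[0][2]
        print("true value:", round(X[0, 2], 3), "re-estimated:", round(sample[2], 3))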

  3. Blood glucose prediction using neural network

    NASA Astrophysics Data System (ADS)

    Soh, Chit Siang; Zhang, Xiqin; Chen, Jianhong; Raveendran, P.; Soh, Phey Hong; Yeo, Joon Hock

    2008-02-01

    We used a neural network for blood glucose level determination in this study. The data set used in this study was collected using a non-invasive blood glucose monitoring system with six laser diodes, each laser diode operating at a distinct near-infrared wavelength between 1500 nm and 1800 nm. The neural network is specifically used to determine the blood glucose level of one individual who participated in an oral glucose tolerance test (OGTT) session. Partial least squares regression is also used for blood glucose level determination for the purpose of comparison with the neural network model. The neural network model performs better in the prediction of blood glucose level as compared with the partial least squares model.

  4. Constructive Autoassociative Neural Network for Facial Recognition

    PubMed Central

    Fernandes, Bruno J. T.; Cavalcanti, George D. C.; Ren, Tsang I.

    2014-01-01

    Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature. PMID:25542018

  5. IR wireless cluster synapses of HYDRA very large neural networks

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Forrester, Thomas

    2008-04-01

    RF/IR wireless (virtual) synapses are critical components of HYDRA (Hyper-Distributed Robotic Autonomy) neural networks, already discussed in two earlier papers. The HYDRA network has the potential to be very large, up to 10^11 neurons and 10^18 synapses, based on already established technologies (cellular RF telephony and IR-wireless LANs). It is organized into almost fully connected IR-wireless clusters. The HYDRA neurons and synapses are very flexible, simple, and low-cost. They can be modified into a broad variety of biologically-inspired brain-like computing capabilities. In this third paper, we focus on neural hardware in general, and on IR-wireless synapses in particular. Such synapses, based on LED/LD-connections, dominate the HYDRA neural cluster.

  6. Neural network for image segmentation

    NASA Astrophysics Data System (ADS)

    Skourikhine, Alexei N.; Prasad, Lakshman; Schlei, Bernd R.

    2000-10-01

    Image analysis is an important requirement of many artificial intelligence systems. Though great effort has been devoted to inventing efficient algorithms for image analysis, there is still much work to be done. It is natural to turn to mammalian vision systems for guidance because they are the best known performers of visual tasks. The pulse-coupled neural network (PCNN) model of the cat visual cortex has proven to have interesting properties for image processing. This article describes the PCNN application to the processing of images of heterogeneous materials; specifically, PCNN is applied to image denoising and image segmentation. Our results show that PCNNs do well at segmentation if we perform image smoothing prior to segmentation. We use PCNN for both smoothing and segmentation. Combining smoothing and segmentation enables us to eliminate PCNN sensitivity to the setting of the various PCNN parameters, whose optimal selection can be difficult and can vary even for the same problem. This approach makes image processing based on PCNN more automatic in our application and also results in better segmentation.

  7. Tensor-Factorized Neural Networks.

    PubMed

    Chien, Jen-Tzung; Bao, Yi-Ting

    2017-04-17

    The growing interest in multiway data analysis and deep learning has made tensor factorization (TF) and neural networks (NN) crucial topics. Conventionally, the NN model is estimated from a set of one-way observations. Such a vectorized NN does not generalize well to learning representations from multiway observations. The classification performance using a vectorized NN is constrained, because the temporal or spatial information in neighboring ways is disregarded. More parameters are required to learn the complicated data structure. This paper presents a new tensor-factorized NN (TFNN), which tightly integrates TF and NN for multiway feature extraction and classification under a unified discriminative objective. This TFNN is seen as a generalized NN, where the affine transformation in an NN is replaced by the multilinear and multiway factorization for tensor-based NN. The multiway information is preserved through layerwise factorization. Tucker decomposition and nonlinear activation are performed in each hidden layer. The tensor-factorized error backpropagation is developed to train TFNN with limited parameter size and computation time. This TFNN can be further extended to realize the convolutional TFNN (CTFNN) by looking at small subtensors through the factorized convolution. Experiments on real-world classification tasks demonstrate that TFNN and CTFNN attain substantial improvement when compared with an NN and a convolutional NN, respectively.
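
    As a rough illustration of the core idea (the affine map of an ordinary layer replaced by a multiway, Tucker-style factorization), here is a toy forward pass with numpy; the factor sizes and data are invented, and this is not the authors' TFNN training procedure.

        import numpy as np

        rng = np.random.default_rng(0)
        I, J, K = 8, 6, 5                     # sizes of the three input ways
        P, Q, R = 4, 3, 2                     # factorized hidden sizes
        X = rng.normal(size=(I, J, K))        # one multiway (3-way) input sample

        U1 = rng.normal(size=(I, P)) * 0.1    # per-way factor matrices
        U2 = rng.normal(size=(J, Q)) * 0.1
        U3 = rng.normal(size=(K, R)) * 0.1
        W_out = rng.normal(size=(P * Q * R, 3)) * 0.1   # readout to 3 classes

        # Tucker-style projection replaces the usual flatten-then-affine layer.
        H = np.tanh(np.einsum('ijk,ip,jq,kr->pqr', X, U1, U2, U3))
        logits = H.reshape(-1) @ W_out
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        print("class probabilities:", np.round(probs, 3))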

  8. Artificial neural network and medicine.

    PubMed

    Khan, Z H; Mohapatra, S K; Khodiar, P K; Ragu Kumar, S N

    1998-07-01

    The introduction of human brain functions such as perception and cognition into the computer has been made possible by the use of Artificial Neural Network (ANN). ANN are computer models inspired by the structure and behavior of neurons. Like the brain, ANN can recognize patterns, manage data and most significantly, learn. This learning ability, not seen in other computer models simulating human intelligence, constantly improves its functional accuracy as it keeps on performing. Experience is as important for an ANN as it is for man. It is being increasingly used to supplement and even (may be) replace experts, in medicine. However, there is still scope for improvement in some areas. Its ability to classify and interpret various forms of medical data comes as a helping hand to clinical decision making in both diagnosis and treatment. Treatment planning in medicine, radiotherapy, rehabilitation, etc. is being done using ANN. Morbidity and mortality prediction by ANN in different medical situations can be very helpful for hospital management. ANN has a promising future in fundamental research, medical education and surgical robotics.

  9. Limitations of opto-electronic neural networks

    NASA Technical Reports Server (NTRS)

    Yu, Jeffrey; Johnston, Alan; Psaltis, Demetri; Brady, David

    1989-01-01

    Consideration is given to the limitations of implementing neurons, weights, and connections in neural networks for electronics and optics. It is shown that the advantages of each technology are utilized when electronically fabricated neurons are included and a combination of optics and electronics are employed for the weights and connections. The relationship between the types of neural networks being constructed and the choice of technologies to implement the weights and connections is examined.

  10. Using neural networks in software repositories

    NASA Technical Reports Server (NTRS)

    Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.

    1992-01-01

    The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.

  11. Application of artificial neural networks to gaming

    NASA Astrophysics Data System (ADS)

    Baba, Norio; Kita, Tomio; Oda, Kazuhiro

    1995-04-01

    Recently, neural network technology has been applied to various actual problems. It has succeeded in producing a large number of intelligent systems. In this article, we suggest that it could be applied to the field of gaming. In particular, we suggest that the neural network model could be used to mimic players' characters. Several computer simulation results using a computer gaming system which is a modified version of the COMMONS GAME confirm our idea.

  12. Predicting Car Production using a Neural Network

    DTIC Science & Technology

    2003-04-24

    World Almanac Education Group, 2003 [8] E. Petroutsos, Mastering Visual Basic .NET, SYBEX Inc., 2002 [9] D. E. Rumelhart, J. L. McClelland, Parallel...In this example, 100,000 cycles (epochs) were used to train it. The initial weights were randomly selected from values between 1 and -1. Visual ... basic .NET was used to program the neural network [8]. The neural network algorithm followed the steps outlined in [9]. As stated above, a 3 layer

  13. A neural network simulation package in CLIPS

    NASA Technical Reports Server (NTRS)

    Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John

    1990-01-01

    The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). A very important by-product of this research has been the emergence of an integrated technique of using rule-based systems in conjunction with neural networks to solve complex problems. The system provides a toolkit for the integrated use of the two techniques and is also extendible to accommodate other AI techniques such as semantic networks, connectionist networks, and even Petri nets. This integrated technique can be very useful in solving complex AI problems.

  14. Application of neural network in medical images

    NASA Astrophysics Data System (ADS)

    Li, Xinxin; Sethi, Ishwar K.

    2000-04-01

    In this paper, we pre-process the input data to remove noise before feeding them into the network and post-process the outputs before reporting the results. Different neural networks, such as back-propagation and radial basis function networks with different architectures, are tested, and the one with the best performance is chosen. The experiments show that the results of the neural network are similar to those given by experienced doctors and better than those of previous research, indicating that this approach is practical and beneficial to doctors compared with other existing methods.

  15. Neural networks for segmentation, tracking, and identification

    NASA Astrophysics Data System (ADS)

    Rogers, Steven K.; Ruck, Dennis W.; Priddy, Kevin L.; Tarr, Gregory L.

    1992-09-01

    The main thrust of this paper is to encourage the use of neural networks to process raw data for subsequent classification. This article addresses neural network techniques for processing raw pixel information. For this paper the definition of neural networks includes the conventional artificial neural networks such as the multilayer perceptrons and also biologically inspired processing techniques. Previously, we have successfully used the biologically inspired Gabor transform to process raw pixel information and segment images. In this paper we extend those ideas to both segment and track objects in multiframe sequences. It is also desirable for the neural network processing data to learn features for subsequent recognition. A common first step for processing raw data is to transform the data and use the transform coefficients as features for recognition. For example, handwritten English characters become linearly separable in the feature space of the low frequency Fourier coefficients. Much of human visual perception can be modelled by assuming low frequency Fourier as the feature space used by the human visual system. The optimum linear transform, with respect to reconstruction, is the Karhunen-Loeve transform (KLT). It has been shown that some neural network architectures can compute approximations to the KLT. The KLT coefficients can be used for recognition as well as for compression. We tested the use of the KLT on the problem of interfacing a nonverbal patient to a computer. The KLT uses an optimal basis set for object reconstruction. For object recognition, the KLT may not be optimal.
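
    A small sketch of the Karhunen-Loeve transform (KLT) step the abstract describes, computed as the eigen-decomposition of the data covariance so that a few coefficients serve as features; the random "image patches" below stand in for the pixel data of the application.

        import numpy as np

        rng = np.random.default_rng(0)
        patches = rng.normal(size=(500, 64))            # 500 flattened 8x8 patches
        mean = patches.mean(axis=0)
        centered = patches - mean

        cov = centered.T @ centered / (len(patches) - 1)
        eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
        basis = eigvecs[:, ::-1][:, :8]                 # top-8 KL basis vectors

        coeffs = centered @ basis                       # features for recognition
        recon = coeffs @ basis.T + mean                 # optimal linear reconstruction
        err = np.mean((recon - patches) ** 2)
        print("coefficients per patch:", coeffs.shape[1], "reconstruction MSE:", round(err, 4))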

  16. Logarithmic learning for generalized classifier neural network.

    PubMed

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network has been introduced as an efficient classifier. Unless the initial smoothing parameter value is close to the optimal one, however, the generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, a logarithmic learning approach is proposed to overcome this problem. The proposed method uses a logarithmic cost function instead of the squared error; minimization of this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets, and the performance of the logarithmic-learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function used by the generalized classifier neural network, the proposed logarithmic cost and its derivative take continuous values, which makes it possible to exploit the fast convergence of logarithmic learning. Owing to this fast convergence, training time is reduced by as much as 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, the proposed method not only addresses the time-requirement problem of the generalized classifier neural network but may also improve classification accuracy, and can be considered an efficient way of reducing its training time.
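
    To make the contrast concrete, here is a toy comparison of a squared-error cost with a logarithmic (cross-entropy-style) cost and their gradients with respect to a predicted probability. This is a generic illustration of why logarithmic costs give larger corrective gradients far from the target, not the paper's exact cost function.

        import numpy as np

        def squared_error(p, y):
            return (p - y) ** 2, 2 * (p - y)                 # cost, d(cost)/dp

        def log_cost(p, y, eps=1e-12):
            c = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
            g = (p - y) / np.clip(p * (1 - p), eps, None)    # gradient w.r.t. p
            return c, g

        y = 1.0                                # target class
        for p in (0.1, 0.5, 0.9):              # predicted probability of that class
            se, gse = squared_error(p, y)
            lc, glc = log_cost(p, y)
            print(f"p={p}:  squared cost {se:.3f} (grad {gse:+.2f}),"
                  f"  log cost {lc:.3f} (grad {glc:+.2f})")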

  17. Diabetic retinopathy screening using deep neural network.

    PubMed

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural network in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from Otago database photographed during October 2016 (485 photos), and 1200 photos from Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.

  18. Neural-Network Object-Recognition Program

    NASA Technical Reports Server (NTRS)

    Spirkovska, L.; Reid, M. B.

    1993-01-01

    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.
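
    A simplified sketch of the principle behind HONTIOR's built-in invariance: if higher-order weights are constrained to depend only on the relative geometry of pixel groups, the response cannot change when the whole pattern is shifted. The second-order, translation-only version below (features indexed by pairwise displacement) is easier to show than the full third-order, rotation- and scale-invariant case and is not the HONTIOR code itself.

        import numpy as np

        def pairwise_displacement_features(img):
            """Histogram of displacements between active pixel pairs.

            A translation shifts every active pixel by the same amount, so the
            displacement between any two pixels, and hence this feature vector,
            is unchanged (as long as the pattern stays inside the frame).
            """
            ys, xs = np.nonzero(img)
            h, w = img.shape
            feats = np.zeros((2 * h - 1, 2 * w - 1))
            for i in range(len(ys)):
                for j in range(len(ys)):
                    if i != j:
                        feats[ys[i] - ys[j] + h - 1, xs[i] - xs[j] + w - 1] += 1
            return feats.ravel()

        img = np.zeros((16, 16), dtype=int)
        img[3:6, 4] = 1                      # a small "L" shape
        img[5, 4:8] = 1
        shifted = np.roll(np.roll(img, 5, axis=0), 3, axis=1)

        f1 = pairwise_displacement_features(img)
        f2 = pairwise_displacement_features(shifted)
        print("identical features after translation:", bool(np.array_equal(f1, f2)))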

  19. Fast curve fitting using neural networks

    NASA Astrophysics Data System (ADS)

    Bishop, C. M.; Roach, C. M.

    1992-10-01

    Neural networks provide a new tool for the fast solution of repetitive nonlinear curve fitting problems. In this article we introduce the concept of a neural network, and we show how such networks can be used for fitting functional forms to experimental data. The neural network algorithm is typically much faster than conventional iterative approaches. In addition, further substantial improvements in speed can be obtained by using special purpose hardware implementations of the network, thus making the technique suitable for use in fast real-time applications. The basic concepts are illustrated using a simple example from fusion research, involving the determination of spectral line parameters from measurements of B iv impurity radiation in the COMPASS-C tokamak.

  20. A neural network for visual pattern recognition

    SciTech Connect

    Fukushima, K.

    1988-03-01

    A modeling approach, which is a synthetic approach using neural network models, continues to gain importance. In the modeling approach, the authors study how to interconnect neurons to synthesize a brain model, which is a network with the same functions and abilities as the brain. The relationship between modeling neural networks and neurophysiology resembles that between theoretical physics and experimental physics. Modeling takes a synthetic approach, while neurophysiology or psychology takes an analytical approach. Modeling neural networks is useful in explaining the brain and also in engineering applications. It brings the results of neurophysiological and psychological research to engineering applications in the most direct way possible. This article discusses a neural network model thus obtained, a model with selective attention in visual pattern recognition.

  1. Cellular computational platform and neurally inspired elements thereof

    DOEpatents

    Okandan, Murat

    2016-11-22

    A cellular computational platform is disclosed that includes a multiplicity of functionally identical, repeating computational hardware units that are interconnected electrically and optically. Each computational hardware unit includes a reprogrammable local memory and has interconnections to other such units that have reconfigurable weights. Each computational hardware unit is configured to transmit signals into the network for broadcast in a protocol-less manner to other such units in the network, and to respond to protocol-less broadcast messages that it receives from the network. Each computational hardware unit is further configured to reprogram the local memory in response to incoming electrical and/or optical signals.

  2. Artificial Astrocytes Improve Neural Network Performance

    PubMed Central

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of performances of NN with different number of neurons or different architectures indicate that the effects of NGN cannot be accounted for an increased number of network elements, but rather they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and established the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  3. Artificial astrocytes improve neural network performance.

    PubMed

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have been recently demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes in neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) to solve classification problems. We show that the degree of success of NGN is superior to NN. Analysis of performances of NN with different number of neurons or different architectures indicate that the effects of NGN cannot be accounted for an increased number of network elements, but rather they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and established the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  4. Network motifs modulate druggability of cellular targets

    PubMed Central

    Wu, Fan; Ma, Cong; Tan, Cheemeng

    2016-01-01

    Druggability refers to the capacity of a cellular target to be modulated by a small-molecule drug. To date, druggability is mainly studied by focusing on direct binding interactions between a drug and its target. However, druggability is impacted by cellular networks connected to a drug target. Here, we use computational approaches to reveal basic principles of network motifs that modulate druggability. Through quantitative analysis, we find that inhibiting self-positive feedback loop is a more robust and effective treatment strategy than inhibiting other regulations, and adding direct regulations to a drug-target generally reduces its druggability. The findings are explained through analytical solution of the motifs. Furthermore, we find that a consensus topology of highly druggable motifs consists of a negative feedback loop without any positive feedback loops, and consensus motifs with low druggability have multiple positive direct regulations and positive feedback loops. Based on the discovered principles, we predict potential genetic targets in Escherichia coli that have either high or low druggability based on their network context. Our work establishes the foundation toward identifying and predicting druggable targets based on their network topology. PMID:27824147

  5. The H1 neural network trigger project

    NASA Astrophysics Data System (ADS)

    Kiesling, C.; Denby, B.; Fent, J.; Fröchtenicht, W.; Garda, P.; Granado, B.; Grindhammer, G.; Haberer, W.; Janauschek, L.; Kobler, T.; Koblitz, B.; Nellen, G.; Prevotet, J.-C.; Schmidt, S.; Tzamariudaki, E.; Udluft, S.

    2001-08-01

    We present a short overview of neuromorphic hardware and some of the physics projects making use of such devices. As a concrete example we describe an innovative project within the H1-Experiment at the electron-proton collider HERA, instrumenting hardwired neural networks as pattern recognition machines to discriminate between wanted physics and uninteresting background at the trigger level. The decision time of the system is less than 20 microseconds, typical for a modern second level trigger. The neural trigger has been successfully running for the past four years and has turned out new physics results from H1 unobtainable so far with other triggering schemes. We describe the concepts and the technical realization of the neural network trigger system, present the most important physics results, and motivate an upgrade of the system for the future high luminosity running at HERA. The upgrade concentrates on "intelligent preprocessing" of the neural inputs which help to strongly improve the networks' discrimination power.

  6. Hardware implementation of stochastic spiking neural networks.

    PubMed

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking Neural Networks, the latest generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for its special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that considers this probabilistic nature. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
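
    A tiny illustration of the stochastic-computing idea behind such fully digital implementations: values are encoded as the "on" probability of a random bit-stream, so a single AND gate multiplies two independent values. This generic sketch stands in for, but is not, the authors' FPGA neuron design.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 100_000                           # bit-stream length (precision vs. cost)

        def to_stream(value):
            """Encode a value in [0, 1] as a Bernoulli bit-stream."""
            return rng.random(N) < value

        a, b = 0.6, 0.3
        stream_a, stream_b = to_stream(a), to_stream(b)

        product_stream = stream_a & stream_b  # one AND gate per bit = multiplication
        print("expected 0.18, stochastic estimate:", round(product_stream.mean(), 3))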

  7. Sequential state generation by model neural networks.

    PubMed Central

    Kleinfeld, D

    1986-01-01

    Sequential patterns of neural output activity form the basis of many biological processes, such as the cyclic pattern of outputs that control locomotion. I show how such sequences can be generated by a class of model neural networks that make defined sets of transitions between selected memory states. Sequence-generating networks depend upon the interplay between two sets of synaptic connections. One set acts to stabilize the network in its current memory state, while the second set, whose action is delayed in time, causes the network to make specified transitions between the memories. The dynamic properties of these networks are described in terms of motion along an energy surface. The performance of the networks, both with intact connections and with noisy or missing connections, is illustrated by numerical examples. In addition, I present a scheme for the recognition of externally generated sequences by these networks. PMID:3467316
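
    A compact sketch of the mechanism the abstract describes: one symmetric weight set stabilizes each stored pattern, while a second, time-delayed weight set pushes the state toward the next pattern in the sequence, so the network cycles through its memories. The pattern contents, delay handling, and gain below are illustrative choices, not the paper's exact model.

        import numpy as np

        rng = np.random.default_rng(2)
        n, n_patterns = 100, 4
        xi = rng.choice([-1, 1], size=(n_patterns, n))           # stored patterns

        W_fast = sum(np.outer(p, p) for p in xi) / n             # stabilizing set
        W_slow = sum(np.outer(xi[(m + 1) % n_patterns], xi[m])   # transition set
                     for m in range(n_patterns)) / n

        lam, delay = 1.5, 4                                      # delayed drive
        history = [xi[0].copy() for _ in range(delay + 1)]       # start in pattern 0

        for t in range(40):
            s = history[-1]
            s_delayed = history[-1 - delay]
            field = W_fast @ s + lam * (W_slow @ s_delayed)
            history.append(np.where(field >= 0, 1, -1))

        overlaps = [int(np.argmax(xi @ history[t] / n)) for t in range(len(history))]
        print("closest stored pattern over time:", overlaps)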

  8. Fuzzy logic and neural networks

    SciTech Connect

    Loos, J.R.

    1994-11-01

    Combine fuzzy logic's fuzzy sets, fuzzy operators, fuzzy inference, and fuzzy rules - like defuzzification - with neural networks and you can arrive at very unfuzzy real-time control. Fuzzy logic, cursed with a very whimsical title, simply means multivalued logic, which includes not only the conventional two-valued (true/false) crisp logic, but also the logic of three or more values. This means one can assign logic values of true, false, and somewhere in between. This is where fuzziness comes in. Multi-valued logic avoids the black-and-white, all-or-nothing assignment of true or false to an assertion. Instead, it permits the assignment of shades of gray. When assigning a value of true or false to an assertion, the numbers typically used are "1" or "0". This is the case for programmed systems. If "0" means "false" and "1" means "true," then "shades of gray" are any numbers between 0 and 1. Therefore, "nearly true" may be represented by 0.8 or 0.9, "nearly false" may be represented by 0.1 or 0.2, and "your guess is as good as mine" may be represented by 0.5. The flexibility available to one is limitless. One can associate any meaning, such as "nearly true", to any value of any granularity, such as 0.9999.
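
    A minimal, generic sketch of the pieces the article names: fuzzy sets as membership degrees, a couple of fuzzy rules, and centroid defuzzification down to a crisp control value. The temperature/fan-speed example and all membership functions are invented for illustration.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        temp = 25.0                                    # crisp input, degrees C
        cool = tri(temp, 10, 18, 26)                   # degree of "cool"
        warm = tri(temp, 22, 30, 38)                   # degree of "warm"

        # Rule 1: IF cool THEN fan "slow";  Rule 2: IF warm THEN fan "fast".
        speed = np.linspace(0, 100, 501)               # fan-speed universe, %
        slow = np.minimum(tri(speed, 0, 20, 50), cool)    # clipped consequents
        fast = np.minimum(tri(speed, 50, 80, 100), warm)
        aggregate = np.maximum(slow, fast)

        crisp = np.sum(speed * aggregate) / np.sum(aggregate)   # centroid defuzzification
        print(f"memberships cool={cool:.2f} warm={warm:.2f} -> fan speed {crisp:.1f}%")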

  9. Optical neural stimulation modeling on degenerative neocortical neural networks

    NASA Astrophysics Data System (ADS)

    Zverev, M.; Fanjul-Vélez, F.; Salas-García, I.; Arce-Diego, J. L.

    2015-07-01

    Neurodegenerative diseases usually appear at advanced age. Medical advances make people live longer and as a consequence, the number of neurodegenerative diseases continuously grows. There is still no cure for these diseases, but several brain stimulation techniques have been proposed to improve patients' condition. One of them is Optical Neural Stimulation (ONS), which is based on the application of optical radiation over specific brain regions. The outer cerebral zones can be noninvasively stimulated, without the common drawbacks associated to surgical procedures. This work focuses on the analysis of ONS effects in stimulated neurons to determine their influence in neuronal activity. For this purpose a neural network model has been employed. The results show the neural network behavior when the stimulation is provided by means of different optical radiation sources and constitute a first approach to adjust the optical light source parameters to stimulate specific neocortical areas.

  10. Optimal exponential synchronization of general chaotic delayed neural networks: an LMI approach.

    PubMed

    Liu, Meiqin

    2009-09-01

    This paper investigates the optimal exponential synchronization problem of general chaotic neural networks with or without time delays by virtue of Lyapunov-Krasovskii stability theory and the linear matrix inequality (LMI) technique. This general model, which is the interconnection of a linear delayed dynamic system and a bounded static nonlinear operator, covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks (CNNs), bidirectional associative memory (BAM) networks, and recurrent multilayer perceptrons (RMLPs) with or without delays. Using the drive-response concept, time-delay feedback controllers are designed to synchronize two identical chaotic neural networks as quickly as possible. The control design equations are shown to be a generalized eigenvalue problem (GEVP) which can be easily solved by various convex optimization algorithms to determine the optimal control law and the optimal exponential synchronization rate. Detailed comparisons with existing results are made and numerical simulations are carried out to demonstrate the effectiveness of the established synchronization laws.
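
    To give a concrete feel for the kind of certificate this machinery produces, here is a minimal numpy/scipy check of a Lyapunov matrix inequality for an invented stable matrix A, a stand-in for the linear part of the error dynamics; the paper's delayed GEVP formulation and controller design are not reproduced.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        # Illustrative stable linear part of the synchronization error dynamics.
        A = np.array([[-2.0, 1.0],
                      [ 0.5, -1.5]])
        Q = np.eye(2)

        # Find P solving A^T P + P A = -Q; P > 0 certifies exponential stability,
        # the same kind of matrix-inequality certificate an LMI solver searches for.
        P = solve_continuous_lyapunov(A.T, -Q)
        eigP = np.linalg.eigvalsh(P)
        eigLMI = np.linalg.eigvalsh(A.T @ P + P @ A)
        print("P eigenvalues:", np.round(eigP, 3), "(all positive -> P > 0)")
        print("A^T P + P A eigenvalues:", np.round(eigLMI, 3), "(all negative)")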

  11. Robust Large Margin Deep Neural Networks

    NASA Astrophysics Data System (ADS)

    Sokolic, Jure; Giryes, Raja; Sapiro, Guillermo; Rodrigues, Miguel R. D.

    2017-08-01

    The generalization error of deep neural networks via their classification margin is studied in this work. Our approach is based on the Jacobian matrix of a deep neural network and can be applied to networks with arbitrary non-linearities and pooling layers, and to networks with different architectures such as feed forward networks and residual networks. Our analysis leads to the conclusion that a bounded spectral norm of the network's Jacobian matrix in the neighbourhood of the training samples is crucial for a deep neural network of arbitrary depth and width to generalize well. This is a significant improvement over the current bounds in the literature, which imply that the generalization error grows with either the width or the depth of the network. Moreover, it shows that the recently proposed batch normalization and weight normalization re-parametrizations enjoy good generalization properties, and leads to a novel network regularizer based on the network's Jacobian matrix. The analysis is supported with experimental results on the MNIST, CIFAR-10, LaRED and ImageNet datasets.
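
    A small numerical sketch of the quantity this analysis centers on, the spectral norm of the network's input-output Jacobian near a training point, computed analytically for a tiny two-layer tanh network with made-up weights (a deep-learning framework would use automatic differentiation instead).

        import numpy as np

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(32, 10)) * 0.3, np.zeros(32)
        W2, b2 = rng.normal(size=(5, 32)) * 0.3, np.zeros(5)

        def forward_and_jacobian(x):
            """Two-layer tanh network and its Jacobian d(output)/d(input)."""
            pre = W1 @ x + b1
            out = W2 @ np.tanh(pre) + b2
            J = W2 @ np.diag(1.0 - np.tanh(pre) ** 2) @ W1   # chain rule
            return out, J

        x = rng.normal(size=10)                  # a "training sample"
        _, J = forward_and_jacobian(x)
        spectral_norm = np.linalg.norm(J, 2)     # largest singular value
        print("Jacobian spectral norm near this sample:", round(spectral_norm, 3))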

  12. Artificial neural network intelligent method for prediction

    NASA Astrophysics Data System (ADS)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, based on the raw data, together with the current day of the week is presented. The network developed is used for forecasting movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is the error back-propagation algorithm. The main advantage of the developed system is the self-determination of the optimal topology of the neural network, which makes it flexible and more precise. The proposed neural network system is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  13. Computational inference of neural information flow networks.

    PubMed

    Smith, V Anne; Yu, Jing; Smulders, Tom V; Hartemink, Alexander J; Jarvis, Erich D

    2006-11-24

    Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, we demonstrate that a dynamic Bayesian network (DBN) inference algorithm we originally developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The inferred networks we recover from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. To our knowledge, this study represents the first biologically validated demonstration of an algorithm to successfully infer neural information flow networks.

  14. On sparsely connected optimal neural networks

    SciTech Connect

    Beiu, V.; Draghici, S.

    1997-10-01

    This paper uses two different approaches to show that VLSI- and size-optimal discrete neural networks are obtained for small fan-in values. These have applications to hardware implementations of neural networks, but also reveal an intrinsic limitation of digital VLSI technology: its inability to cope with highly connected structures. The first approach is based on implementing F_{n,m} functions. The authors show that this class of functions can be implemented in VLSI-optimal (i.e., minimizing AT^2) neural networks of small constant fan-ins. In order to estimate the area (A) and the delay (T) of such networks, the following cost functions will be used: (i) the connectivity and the number of bits for representing the weights and thresholds, for good estimates of the area; and (ii) the fan-ins and the length of the wires, for good approximations of the delay. The second approach is based on implementing Boolean functions for which the classical Shannon's decomposition can be used. Such a solution has already been used to prove bounds on the size of fan-in 2 neural networks. They will generalize the result presented there to arbitrary fan-in, and prove that the size is minimized by small fan-in values. Finally, a size-optimal neural network of small constant fan-ins will be suggested for F_{n,m} functions.

  15. Neural networks as perpetual information generators

    NASA Astrophysics Data System (ADS)

    Englisch, Harald; Xiao, Yegao; Yao, Kailun

    1991-07-01

    The information gain in a neural network cannot be larger than the bit capacity of the synapses. It is shown that the equation derived by Engel et al. [Phys. Rev. A 42, 4998 (1990)] for the strongly diluted network with persistent stimuli contradicts this condition. Furthermore, for any time step the correct equation is derived by taking the correlation between random variables into account.

  16. Higher-Order Neural Networks Recognize Patterns

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Spirkovska, Lilly; Ochoa, Ellen

    1996-01-01

    Networks of higher order have enhanced capabilities to distinguish between different two-dimensional patterns and to recognize those patterns. Also enhanced capabilities to "learn" patterns to be recognized: "trained" with far fewer examples and, therefore, in less time than necessary to train comparable first-order neural networks.

  17. Orthogonal Patterns In A Binary Neural Network

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1991-01-01

    Report presents some recent developments in the theory of binary neural networks. Subject matter is relevant to associative (content-addressable) memories and to recognition of patterns - both of considerable importance in the advancement of robotics and artificial intelligence. When probed by any pattern, the network converges to one of the stored patterns.

  18. An Evolutionary Approach to Designing Neural Networks

    DTIC Science & Technology

    1991-10-01

    Feature-Map Networks; 4.5 Evolution of Learning: A Population Genetics Approach. ... principles of biological evolution and population genetics provide the basis for such behavior. The processes of variation and selection, operating at ... better understanding of the relationship among neural network theory, evolutionary and population genetics, and some aspects of dynamical systems

  19. Artificial Neural Networks and Instructional Technology.

    ERIC Educational Resources Information Center

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  1. Neural-Network Modeling Of Arc Welding

    NASA Technical Reports Server (NTRS)

    Anderson, Kristinn; Barnett, Robert J.; Springfield, James F.; Cook, George E.; Strauss, Alvin M.; Bjorgvinsson, Jon B.

    1994-01-01

    Artificial neural networks considered for use in monitoring and controlling gas tungsten arc-welding processes. Relatively simple network, using 4 welding equipment parameters as inputs, estimates 2 critical weld-bead parameters within 5 percent. Advantage is computational efficiency.

  2. Some neural networks compute, others don't.

    PubMed

    Piccinini, Gualtiero

    2008-01-01

    I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend the following theses. (1) Many neural networks compute--they perform computations. (2) Some neural networks compute in a classical way. Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks compute in a non-classical way. (4) Yet other neural networks do not perform computations. Brains may well fall into this last class.

  3. Disruption forecasting at JET using neural networks

    NASA Astrophysics Data System (ADS)

    Cannas, B.; Fanni, A.; Marongiu, E.; Sonato, P.

    2004-01-01

    Neural networks are trained to evaluate the risk of plasma disruptions in a tokamak experiment using several diagnostic signals as inputs. A saliency analysis confirms the goodness of the chosen inputs, all of which contribute to the network performance. The tests that were carried out refer to data collected from successfully terminated and disruption-terminated pulses performed during two years of JET tokamak experiments. Results show the possibility of developing a neural network predictor that intervenes well in advance in order to avoid plasma disruption or mitigate its effects.

  4. Electronic device aspects of neural network memories

    NASA Technical Reports Server (NTRS)

    Lambe, J.; Moopenn, A.; Thakoor, A. P.

    1985-01-01

    The basic issues related to the electronic implementation of the neural network model (NNM) for content addressable memories are examined. A brief introduction to the principles of the NNM is followed by an analysis of the information storage of the neural network in the form of a binary connection matrix and the recall capability of such matrix memories based on a hardware simulation study. In addition, materials and device architecture issues involved in the future realization of such networks in VLSI-compatible ultrahigh-density memories are considered. A possible space application of such devices would be in the area of large-scale information storage without mechanical devices.

  6. Improving neural network performance on SIMD architectures

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Ilin, Dmitry; Nikolaev, Dmitry

    2015-12-01

    Neural network calculations for image recognition problems can be very time consuming. In this paper we propose three methods of increasing neural network performance on SIMD architectures. The use of SIMD extensions is a way to speed up neural network processing that is available on a number of modern CPUs. In our experiments, we use ARM NEON as an example SIMD architecture. The first method uses the half-float data type for matrix computations. The second method uses a fixed-point data type for the same purpose. The third method considers a vectorized implementation of the activation functions. For each method we set up a series of experiments on convolutional and fully connected networks designed for image recognition tasks.
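
    A minimal sketch of the fixed-point and half-float ideas behind the first two methods, written with plain NumPy rather than ARM NEON intrinsics. The 8-bit quantization scheme and per-tensor scales are illustrative assumptions; a real SIMD kernel would perform the integer arithmetic with vector instructions.

      import numpy as np

      # Fixed-point (int8-style) and half-float matrix products compared against
      # a float32 reference; scales and sizes are illustrative.
      def quantize(x, bits=8):
          scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
          return np.round(x / scale).astype(np.int32), scale

      rng = np.random.default_rng(1)
      W = rng.normal(size=(64, 128)).astype(np.float32)   # layer weights
      a = rng.normal(size=128).astype(np.float32)         # input activations

      qW, sW = quantize(W)
      qa, sa = quantize(a)
      y_fixed = (qW @ qa) * (sW * sa)      # integer matmul, single float rescale
      y_half = (W.astype(np.float16) @ a.astype(np.float16)).astype(np.float32)
      y_ref = W @ a

      print("fixed-point max abs error:", np.max(np.abs(y_fixed - y_ref)))
      print("half-float  max abs error:", np.max(np.abs(y_half - y_ref)))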

  7. A quantum-implementable neural network model

    NASA Astrophysics Data System (ADS)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions and can be efficiently implemented on a qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results on Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN to obtain a good result than in the classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  8. Artificial neural networks for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Daniell, Cindy E.; Kemsley, David; Lincoln, William P.; Tackett, Walter A.; Baraghimian, Gregory A.

    1992-12-01

    The Self Adaptive Hierarchical Target Identification and Recognition Neural Network (SAHTIRN(TM)) is a unique and powerful combination of state-of-the-art neural network models for automatic target recognition applications. It is a combination of three models: (1) an early vision segmentor based on the Canny edge detector, (2) a hierarchical feature extraction and pattern recognition system based on a modified Neocognitron architecture, and (3) a pattern classifier based on the back-propagation network. Hughes has extensively tested SAHTIRN(TM) with several ground vehicular targets using terrain board modeled IR imagery under a current neural network program sponsored by the Defense Advanced Research Projects Agency. In addition, extensive testing was conducted using several real IR and handwritten character databases. Hughes has demonstrated successful performance with 91 to 100% probability of correct classification over this wide variety of data. End-to-end system results from these experiments are provided and interim results from each stage of the SAHTIRN(TM) system are discussed.

  9. Multiwavelet neural network and its approximation properties.

    PubMed

    Jiao, L; Pan, J; Fang, Y

    2001-01-01

    A model of multiwavelet-based neural networks is proposed. Its universal and L(2) approximation properties, together with its consistency, are proved, and the convergence rates associated with these properties are estimated. The structure of this network is similar to that of the wavelet network, except that the orthonormal scaling functions are replaced by orthonormal multiscaling functions. The theoretical analyses show that the multiwavelet network converges more rapidly than the wavelet network, especially for smooth functions. To compare the two networks, experiments are carried out with the Lemarie-Meyer wavelet network, the Daubechies2 wavelet network, and the GHM multiwavelet network, and the results support the theoretical analysis well. In addition, the results also illustrate that at jump discontinuities, the approximation performance of the two networks is about the same.

  10. Towards understanding transcriptional networks in cellular reprogramming.

    PubMed

    Firas, Jaber; Polo, Jose M

    2017-10-01

    Most of the knowledge we have on the molecular mechanisms of transcription factor mediated reprogramming comes from studies conducted in induced pluripotency. Recently however, a few studies investigated the mechanisms of cellular reprogramming in direct and indirect transdifferentiation, which allows us to explore whether shared parallel mechanisms can be drawn. Moreover, there are currently several computational tools that have been developed to predict and enhance the reprogramming process by reconstructing the transcriptional networks of reprogramming cells. These new tools have the potential to greatly benefit the field of reprogramming, providing us with new approaches that can transform our understanding of the initiation, progression and successful completion of cellular fate transition.

  11. Heterogeneous Force Chains in Cellularized Biopolymer Network

    NASA Astrophysics Data System (ADS)

    Liang, Long; Jones, Christopher Allen Rucksack; Sun, Bo; Jiao, Yang

    Biopolymer networks play an important role in coordinating and regulating collective cellular dynamics via a number of signaling pathways. Here, we investigate the mechanical response of a model biopolymer network due to the active contraction of embedded cells. Specifically, a graph (bond-node) model derived from confocal microscopy data is used to represent the network microstructure, and cell contraction is modeled by applying correlated displacements at specific nodes, representing the focal adhesion sites. A force-based stochastic relaxation method is employed to obtain a force-balanced network under cell contraction. We find that the majority of the forces are carried by a small number of heterogeneous force chains emerging from the contracting cells. The force chains consist of fiber segments that either possess a high degree of alignment before cell contraction or are aligned due to the reorientation induced by cell contraction. Large fluctuations of the forces along different force chains are observed. Importantly, the decay of the forces along the force chains is significantly slower than the decay of radially averaged forces in the system, suggesting that the fibrous nature of the biopolymer network structure could support long-range mechanical signaling between cells.

  12. Multidisciplinary Studies of Integrated Neural Network Systems

    DTIC Science & Technology

    1994-03-01

    Fragmentary report text (table-of-contents and abstract excerpts); recoverable topics include cellular mechanisms of intensity processing in nucleus angularis, intelligent robotic control systems constructed with a hierarchical and modular organization using antagonistic actuation mechanisms, and progress in determining the acoustic and neural bases of the head saccade.

  13. Applications of Neural Networks to Adaptive Control

    DTIC Science & Technology

    1989-12-01

    Garbled OCR of thesis front matter from the Naval Postgraduate School, Monterey, California ("Applications of Neural Networks to Adaptive Control"); the only recoverable content is the figure captions "Network Dynamic Stability for q(t)" and "Network Dynamic Stability for e(t)".

  14. Neural network technologies for image classification

    NASA Astrophysics Data System (ADS)

    Korikov, A. M.; Tungusova, A. V.

    2015-11-01

    We analyze the classes of problems that objectively require neural network technologies, i.e., problems of representation and resolution in the neural network logical basis. Among these problems, image recognition takes an important place, in particular the classification of multi-dimensional data based on textural characteristics. Such problems occur in aerospace and seismic monitoring, materials science, medicine, and other fields. We review different approaches to texture description: statistical, structural, and spectral. We develop a neural network technology for the practical problem of classifying cloud images in satellite snapshots from the MODIS spectroradiometer. The cloud texture is described by the statistical characteristics of the GLCM (Gray-Level Co-Occurrence Matrix) method. From the range of neural network models that might be applied to image classification, we chose the probabilistic neural network (PNN) model and developed an implementation that classifies the main cloud types and subtypes. We also experimentally chose the optimal architecture and parameters for the PNN model used for this classification.
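
    The sketch below computes a small GLCM and three common texture features (contrast, energy, homogeneity) with plain NumPy, as a hedged illustration of the texture-description step; the single offset, the 8 gray levels and the random test image are assumptions, and a full pipeline would feed such features into the PNN classifier described above.

      import numpy as np

      # Gray-level co-occurrence matrix and three texture features, NumPy only.
      def glcm(img, levels=8, dx=1, dy=0):
          q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
          P = np.zeros((levels, levels))
          h, w = q.shape
          for i in range(h - dy):
              for j in range(w - dx):
                  P[q[i, j], q[i + dy, j + dx]] += 1     # count co-occurring gray levels
          return P / P.sum()

      def glcm_features(P):
          i, j = np.indices(P.shape)
          return {"contrast": np.sum(P * (i - j) ** 2),
                  "energy": np.sum(P ** 2),
                  "homogeneity": np.sum(P / (1.0 + np.abs(i - j)))}

      img = np.random.default_rng(2).integers(0, 256, size=(64, 64))   # stand-in texture patch
      print(glcm_features(glcm(img)))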

  15. Using Neural Networks to Describe Tracer Correlations

    NASA Technical Reports Server (NTRS)

    Lary, D. J.; Mueller, M. D.; Mussa, H. Y.

    2003-01-01

    Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and a family of correlation curves would normally be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 to the present. The neural network Fortran code used is available for download.
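
    A minimal sketch of the setup described above: one hidden layer with eight nodes mapping (latitude, pressure, time of year, CH4 v.m.r.) to N2O. The training data below are synthetic and the scikit-learn solver stands in for the Quickprop learning used in the study, so this is illustrative only.

      import numpy as np
      from sklearn.neural_network import MLPRegressor   # assumed available

      # Synthetic (latitude, pressure, day of year, CH4) -> N2O mapping learned by a
      # one-hidden-layer (8 node) network; the data and ranges are made up.
      rng = np.random.default_rng(3)
      n = 2000
      lat = rng.uniform(-90, 90, n)
      logp = rng.uniform(0, 3, n)                 # log10 pressure in hPa (assumed)
      doy = rng.uniform(0, 365, n)
      ch4 = rng.uniform(0.2, 1.8, n)              # ppmv (assumed)
      n2o = 320 * (ch4 / 1.8) ** 1.3 + 2 * np.sin(2 * np.pi * doy / 365) * np.cos(np.radians(lat))

      X = np.column_stack([lat / 90, logp / 3, doy / 365, ch4 / 1.8])
      model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
      model.fit(X, n2o)
      print("correlation coefficient:", np.corrcoef(model.predict(X), n2o)[0, 1])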

  16. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor data to fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor data to fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Secondly, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.

  17. A neural network approach to cloud classification

    NASA Technical Reports Server (NTRS)

    Lee, Jonathan; Weger, Ronald C.; Sengupta, Sailes K.; Welch, Ronald M.

    1990-01-01

    It is shown that, using high-spatial-resolution data, very high cloud classification accuracies can be obtained with a neural network approach. A texture-based neural network classifier using only single-channel visible Landsat MSS imagery achieves an overall cloud identification accuracy of 93 percent. Cirrus can be distinguished from boundary layer cloudiness with an accuracy of 96 percent, without the use of an infrared channel. Stratocumulus is retrieved with an accuracy of 92 percent, and cumulus at 90 percent. The use of the neural network does not improve cirrus classification accuracy; rather, its main effect is an improved separation between stratocumulus and cumulus cloudiness. While most cloud classification algorithms rely on linear parametric schemes, the present study is based on a nonlinear, nonparametric four-layer neural network approach. A three-layer neural network architecture, the nonparametric K-nearest neighbor approach, and the linear stepwise discriminant analysis procedure are compared. A notable finding is that significantly higher accuracies are attained with the nonparametric approaches using only 20 percent of the database as training data, compared to 67 percent of the database in the linear approach.

  18. Representations in neural network based empirical potentials

    NASA Astrophysics Data System (ADS)

    Cubuk, Ekin D.; Malone, Brad D.; Onat, Berk; Waterland, Amos; Kaxiras, Efthimios

    2017-07-01

    Many structural and mechanical properties of crystals, glasses, and biological macromolecules can be modeled from the local interactions between atoms. These interactions ultimately derive from the quantum nature of electrons, which can be prohibitively expensive to simulate. Machine learning has the potential to revolutionize materials modeling due to its ability to efficiently approximate complex functions. For example, neural networks can be trained to reproduce results of density functional theory calculations at a much lower cost. However, how neural networks reach their predictions is not well understood, which has led to them being used as a "black box" tool. This lack of understanding is not desirable especially for applications of neural networks in scientific inquiry. We argue that machine learning models trained on physical systems can be used as more than just approximations since they had to "learn" physical concepts in order to reproduce the labels they were trained on. We use dimensionality reduction techniques to study in detail the representation of silicon atoms at different stages in a neural network, which provides insight into how a neural network learns to model atomic interactions.

  19. Estimates on compressed neural networks regression.

    PubMed

    Zhang, Yongquan; Li, Youmei; Sun, Jianyong; Ji, Jiabing

    2015-03-01

    When the number of neural elements n of a neural network is larger than the sample size m, the overfitting problem arises since there are more parameters than actual data (more variables than constraints). In order to overcome the overfitting problem, we propose to reduce the number of neural elements by using a compressed projection A, which does not need to satisfy the Restricted Isometry Property (RIP). By applying probability inequalities and approximation properties of feedforward neural networks (FNNs), we prove that solving the FNN regression learning algorithm in the compressed domain instead of the original domain reduces the sample error at the price of an increased (but controlled) approximation error, where covering number theory is used to estimate the excess error, and an upper bound on the excess error is given.
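
    The sketch below illustrates the compressed-domain idea under stated assumptions: n random sigmoidal features with n larger than the sample size m are projected by a random matrix A (no RIP condition imposed) down to d dimensions, and the regression is solved there by least squares. The feature construction and the dimensions are illustrative, not the paper's construction.

      import numpy as np

      # n random sigmoidal features (n > m) projected to d dimensions by a random
      # matrix A, then fitted by least squares in the compressed domain.
      rng = np.random.default_rng(4)
      m, n, d = 100, 500, 30                        # samples, neural elements, compressed size

      x = rng.uniform(-1, 1, size=(m, 1))
      y = np.sin(3 * x[:, 0]) + 0.05 * rng.normal(size=m)

      W = rng.normal(size=(1, n)); b = rng.normal(size=n)
      Phi = np.tanh(x @ W + b)                      # m x n hidden-layer design matrix
      A = rng.normal(size=(n, d)) / np.sqrt(d)      # compressed projection (no RIP required)

      coef, *_ = np.linalg.lstsq(Phi @ A, y, rcond=None)
      pred = Phi @ A @ coef
      print("training RMSE (compressed fit):", np.sqrt(np.mean((pred - y) ** 2)))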

  20. Community structure of complex networks based on continuous neural network

    NASA Astrophysics Data System (ADS)

    Dai, Ting-ting; Shan, Chang-ji; Dong, Yan-shou

    2017-09-01

    As a relatively new subject, research on complex networks has attracted the attention of researchers from different disciplines. Community structure is one of the key structures of complex networks, so analyzing the community structure of complex networks accurately is a very important task. In this paper, we study the problem of extracting the community structure of complex networks and propose a continuous neural network (CNN) algorithm. It is proved that, for any given initial value, the continuous neural network algorithm converges to the eigenvector associated with the maximum eigenvalue of the network modularity matrix. Therefore, the signs of the components of the converged state can be used to partition the network into two communities.
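
    A discrete analogue of the stated result, sketched below: power iteration drives a vector toward the leading eigenvector of the modularity matrix B, and the signs of its components split the network into two communities. The two-community toy graph is an assumption, and for such strongly modular graphs the leading (positive) eigenvalue dominates; the paper's continuous neural network reaches the same eigenvector through an ODE rather than this discrete iteration.

      import numpy as np

      # Planted two-community toy graph, modularity matrix B, power iteration, and a
      # sign-based split of the converged vector.
      rng = np.random.default_rng(5)
      n = 20
      A = np.zeros((n, n), dtype=int)
      for i in range(n):
          for j in range(i + 1, n):
              p = 0.8 if (i < n // 2) == (j < n // 2) else 0.05
              A[i, j] = A[j, i] = rng.random() < p

      k = A.sum(axis=1)
      B = A - np.outer(k, k) / k.sum()              # modularity matrix (k.sum() = 2 * edges)

      v = rng.normal(size=n)
      for _ in range(200):                          # power iteration toward the leading eigenvector
          v = B @ v
          v /= np.linalg.norm(v)

      print("community labels:", (v > 0).astype(int))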

  1. Cellular-resolution connectomics: challenges of dense neural circuit reconstruction.

    PubMed

    Helmstaedter, Moritz

    2013-06-01

    Neuronal networks are high-dimensional graphs that are packed into three-dimensional nervous tissue at extremely high density. Comprehensively mapping these networks is therefore a major challenge. Although recent developments in volume electron microscopy imaging have made data acquisition feasible for circuits comprising a few hundreds to a few thousands of neurons, data analysis is massively lagging behind. The aim of this perspective is to summarize and quantify the challenges for data analysis in cellular-resolution connectomics and describe current solutions involving online crowd-sourcing and machine-learning approaches.

  2. Flexible body control using neural networks

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1992-01-01

    Progress is reported on the control of the Control Structures Interaction suitcase demonstrator (a flexible structure) using neural networks and fuzzy logic. It is concluded that while control by neural nets alone (i.e., allowing the net to design a controller with no human intervention) has yielded less than optimal results, the neural net trained to emulate the existing fuzzy logic controller does produce acceptable system responses for the initial conditions examined. Also, a neural net was found to be very successful in performing the emulation step necessary for the anticipatory fuzzy controller for the CSI suitcase demonstrator. The fuzzy neural hybrid, which exhibits good robustness and noise rejection properties, shows promise as a controller for practical flexible systems, and should be further evaluated.

  3. Wireless synapses in bio-inspired neural networks

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Forrester, Thomas; Degrood, Kevin

    2009-05-01

    Wireless (virtual) synapses represent a novel approach to bio-inspired neural networks that follow the infrastructure of the biological brain, except that biological (physical) synapses are replaced by virtual ones based on cellular telephony modeling. Such synapses are of two types: intracluster synapses are based on IR wireless ones, while intercluster synapses are based on RF wireless ones. Such synapses have three unique features, atypical of conventional artificial ones: very high parallelism (close to that of the human brain), very high reconfigurability (easy to kill and to create), and very high plasticity (easy to modify or upgrade). In this paper we analyze the general concept of wireless synapses with special emphasis on RF wireless synapses. Also, biological mammalian (vertebrate) neural models are discussed for comparison, and a novel neural lensing effect is discussed in detail.

  4. Implementing Signature Neural Networks with Spiking Neurons

    PubMed Central

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm—i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data—to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the

  5. Implementing Signature Neural Networks with Spiking Neurons.

    PubMed

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm-i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data-to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  6. Training Deep Spiking Neural Networks Using Backpropagation

    PubMed Central

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations. PMID:27877107
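
    A minimal sketch of the surrogate-gradient idea: spikes are produced by a hard threshold in the forward pass, while the backward pass substitutes a smooth pseudo-derivative for the discontinuity. The toy objective (matching a target spike count), the fast-sigmoid surrogate, the detached reset path and the normalized update are simplifying assumptions, not the paper's exact formulation.

      import numpy as np

      # One integrate-and-fire unit trained to emit a target number of spikes; the
      # hard threshold is differentiated with a fast-sigmoid surrogate.
      rng = np.random.default_rng(6)
      T, n_in = 50, 20
      theta, lr = 1.0, 0.02
      x = (rng.random((T, n_in)) < 0.3).astype(float)    # fixed Poisson-like input spikes
      w = rng.normal(0.0, 0.1, n_in)
      target_count = 10.0

      def surrogate_grad(v):                             # pseudo-derivative of the spike nonlinearity
          return 1.0 / (1.0 + 10.0 * abs(v - theta)) ** 2

      for step in range(400):
          V, dVdw = 0.0, np.zeros(n_in)
          count, grad = 0.0, np.zeros(n_in)
          for t in range(T):
              V += w @ x[t]
              dVdw += x[t]                               # dV/dw with the reset path detached
              s = float(V >= theta)                      # non-differentiable spike
              grad += surrogate_grad(V) * dVdw           # accumulate d(count)/dw via the surrogate
              count += s
              V -= s * theta                             # reset by subtraction
          # squared-error loss on the spike count; normalized step for stability
          w -= lr * np.sign(count - target_count) * grad / (np.linalg.norm(grad) + 1e-9)

      print("final spike count:", count, "target:", target_count)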

  7. Foreign currency rate forecasting using neural networks

    NASA Astrophysics Data System (ADS)

    Pandya, Abhijit S.; Kondo, Tadashi; Talati, Amit; Jayadevappa, Suryaprasad

    2000-03-01

    Neural networks are increasingly being used as a forecasting tool in many forecasting problems. This paper discusses the application of neural networks in predicting daily foreign exchange rates between the USD, GBP, and DEM. We approach the problem from a time-series analysis framework, where future exchange rates are forecast solely from past exchange rates. This relies on the belief that past and future prices are closely related and interdependent. We present the results of training a neural network with historical USD-GBP data. The methodology used is explained, as well as the training process. We discuss the selection of inputs to the network and present a comparison of using the actual exchange rates versus the exchange rate differences as inputs; price and rate differences are the preferred way of training neural networks in financial applications. Results of both approaches are presented together for comparison. We show that the network is able to learn the trends in the exchange rate movements correctly, and we present the results of the prediction over several periods of time.
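
    The sketch below contrasts the two input formats discussed above, raw rates versus rate differences, on a synthetic random-walk series; a linear autoregressive least-squares fit stands in for the neural network, and the series, lag count and train/test split are illustrative assumptions.

      import numpy as np

      # One-step-ahead prediction of a synthetic exchange-rate series using either raw
      # rates or rate differences as lagged inputs (linear least squares as the model).
      rng = np.random.default_rng(7)
      T, lags = 1500, 5
      rate = 1.5 + np.cumsum(0.002 * rng.normal(size=T))   # random-walk "USD-GBP" series

      def lagged(series, lags):
          X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
          return X, series[lags:]

      def fit_predict(X, y, split):
          coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
          return X[split:] @ coef

      Xa, ya = lagged(rate, lags)                    # (a) raw rates in, next rate out
      sa = int(0.8 * len(ya))
      rmse_a = np.sqrt(np.mean((fit_predict(Xa, ya, sa) - ya[sa:]) ** 2))

      diff = np.diff(rate)                           # (b) rate differences in, next difference out
      Xb, yb = lagged(diff, lags)
      sb = int(0.8 * len(yb))
      pred_rate = rate[lags:-1][sb:] + fit_predict(Xb, yb, sb)   # convert back to a rate
      rmse_b = np.sqrt(np.mean((pred_rate - rate[lags + 1:][sb:]) ** 2))

      print("test RMSE on rates, raw-rate inputs:  ", rmse_a)
      print("test RMSE on rates, difference inputs:", rmse_b)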

  8. Training Deep Spiking Neural Networks Using Backpropagation.

    PubMed

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.

  9. Kannada character recognition system using neural network

    NASA Astrophysics Data System (ADS)

    Kumar, Suresh D. S.; Kamalapuram, Srinivasa K.; Kumar, Ajay B. R.

    2013-03-01

    Handwriting recognition has been one of the active and challenging research areas in the field of pattern recognition. It has numerous applications, including reading aids for the blind, bank cheque processing, and conversion of handwritten documents into structured text form. There is not a sufficient body of work on Indian-language character recognition, especially for the Kannada script, one of the 15 major scripts in India. In this paper an attempt is made to recognize handwritten Kannada characters using feedforward neural networks. A handwritten Kannada character is resized to 20x30 pixels. The resized character is used for training the neural network. Once the training process is completed, the same character is given as input to the neural network with different numbers of neurons in the hidden layer, and the recognition accuracy rates for different Kannada characters are calculated and compared. The results show that the proposed system yields good recognition accuracy rates, comparable to those of other handwritten character recognition systems.
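
    A small sketch of the preprocessing step mentioned above: resizing a binarized character image to 20x30 pixels (nearest neighbour, NumPy only) and flattening it into the 600-element vector that would be fed to the feedforward network. The random input image is a stand-in for a scanned Kannada character.

      import numpy as np

      # Nearest-neighbour resize of a binarized character image to 20x30 pixels,
      # flattened into the 600-element input vector.
      def resize_nearest(img, out_h=30, out_w=20):
          h, w = img.shape
          rows = (np.arange(out_h) * h / out_h).astype(int)
          cols = (np.arange(out_w) * w / out_w).astype(int)
          return img[np.ix_(rows, cols)]

      char = (np.random.default_rng(8).random((120, 90)) > 0.7).astype(float)  # stand-in scan
      x = resize_nearest(char).ravel()               # 600-dimensional network input
      print(x.shape)                                 # (600,)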

  10. Parallel analog neural networks for tree searching

    NASA Astrophysics Data System (ADS)

    Saylor, Janet; Stork, David G.

    1986-08-01

    We have modeled parallel analog neural networks designed such that their evolution toward final states is equivalent to finding optimal (or nearly optimal) paths through decision trees. This work extends that done on the Traveling Salesman Problem (TSP)[1] and sheds light on the conditions under which analog neural networks can and cannot find solutions to discrete optimization problems. Neural networks show considerable specificity in finding optimal solutions for tree searches; in the cases when a final state does represent a syntactically correct path, that path will be the best path 70-90% of the time—even for trees with up to two thousand nodes. However, it appears that except for trivial networks lacking the ability to ``think globally,'' there exists no general network architecture that can strictly ensure convergence to a state that represents a single, continuous, unambiguous path. In fact, we find that for roughly 15% of trees with six generations, 40% of trees with eight generations, and 70% of trees with ten generations, networks evolve to ``broken paths,'' i.e., combinations of the beginning of one and the end of another path through a tree. Tree searches illustrate neural dynamics well because tree structures make the effects of competition and positive feedback apparent. We have found that 1) convergence times for networks with up to 2000 neurons are very rapid, depend on the gain of neurons and magnitude of neural connections but not on the number of generations or branching factor of a tree, 2) all neurons along a ``winning'' path turn on exponentially with the same exponent, and 3) the general computational mechanism of these networks appears to be the pruning of a tree from the outer branches inward, as chain reactions of neurons being quenched tend to propagate along possible paths.

  11. Neural network approaches for noisy language modeling.

    PubMed

    Li, Jun; Ouazzane, Karim; Kazemian, Hassan B; Afzal, Muhammad Sajid

    2013-11-01

    Text entry from people is not only grammatical and distinct, but also noisy. For example, a user's typing stream contains all the information about the user's interaction with a computer using a QWERTY keyboard, which may include the user's typing mistakes as well as specific vocabulary, typing habit, and typing performance. In particular, these features are obvious in disabled users' typing streams. This paper proposes a new concept called noisy language modeling by further developing information theory and applies neural networks to one of its specific applications, the typing stream. This paper experimentally uses a neural network approach to analyze disabled users' typing streams both in general and specific ways to identify their typing behaviors and subsequently, to make typing predictions and typing corrections. In this paper, a focused time-delay neural network (FTDNN) language model, a time gap model, a prediction model based on time gap, and a probabilistic neural network model (PNN) are developed. A 38% first hitting rate (HR) and a 53% first three HR in symbol prediction are obtained based on the analysis of a user's typing history through the FTDNN language modeling, while the modeling results using the time gap prediction model and the PNN model demonstrate that the correction rates lie predominantly between 65% and 90% with the current testing samples, and that 70% of all test scores are above the basic correction rates, respectively. The modeling process demonstrates that a neural network is a suitable and robust language modeling tool to analyze the noisy language stream. The research also paves the way for practical application development in areas such as informational analysis, text prediction, and error correction by providing a theoretical basis of neural network approaches for noisy language modeling.

  12. Cotton genotypes selection through artificial neural networks.

    PubMed

    Júnior, E G Silva; Cardoso, D B O; Reis, M C; Nascimento, A F O; Bortolin, D I; Martins, M R; Sousa, L B

    2017-09-27

    Breeding programs currently use statistical analysis to assist in the identification of superior genotypes at various stages of a cultivar's development. In contrast to these analyses, the computational intelligence approach has been little explored in the genetic improvement of cotton. Thus, this study was carried out with the objective of presenting the use of artificial neural networks as auxiliary tools in cotton breeding for improved fiber quality. To demonstrate the applicability of this approach, the research was carried out using evaluation data of 40 genotypes. In order to classify the genotypes for fiber quality, the artificial neural networks were trained with replicate data of 20 cotton genotypes evaluated in the 2013/14 and 2014/15 harvests, regarding fiber length, uniformity of length, fiber strength, micronaire index, elongation, short fiber index, maturity index, reflectance degree, and fiber quality index. This quality index was estimated as a weighted average of the score (1 to 5) assigned to each HVI characteristic evaluated, according to industry standards. The artificial neural networks presented a high capacity for correct classification of the 20 selected genotypes based on the fiber quality index, such that using fiber length together with the short fiber index, fiber maturity, and micronaire index gave better results than using fiber length alone or the previous associations. It was also observed that submitting mean data of new genotypes to neural networks trained with replicate data provides better classification of the genotypes. These results indicate that artificial neural networks have great potential for use at the different stages of a cotton genetic improvement program aimed at improving the fiber quality of future cultivars.

  13. A neural network model of attention-modulated neurodynamics.

    PubMed

    Gu, Yuqiao; Liljenström, Hans

    2007-12-01

    Visual attention appears to modulate cortical neurodynamics and synchronization through various cholinergic mechanisms. In order to study these mechanisms, we have developed a neural network model of visual cortex area V4, based on psychophysical, anatomical and physiological data. With this model, we want to link selective visual information processing to neural circuits within V4, bottom-up sensory input pathways, top-down attention input pathways, and to cholinergic modulation from the prefrontal lobe. We investigate cellular and network mechanisms underlying some recent analytical results from visual attention experimental data. Our model can reproduce the experimental findings that attention to a stimulus causes increased gamma-frequency synchronization in the superficial layers. Computer simulations and STA power analysis also demonstrate different effects of the different cholinergic attention modulation action mechanisms.

  14. A neural network model of attention-modulated neurodynamics

    PubMed Central

    Gu, Yuqiao

    2007-01-01

    Visual attention appears to modulate cortical neurodynamics and synchronization through various cholinergic mechanisms. In order to study these mechanisms, we have developed a neural network model of visual cortex area V4, based on psychophysical, anatomical and physiological data. With this model, we want to link selective visual information processing to neural circuits within V4, bottom-up sensory input pathways, top-down attention input pathways, and to cholinergic modulation from the prefrontal lobe. We investigate cellular and network mechanisms underlying some recent analytical results from visual attention experimental data. Our model can reproduce the experimental findings that attention to a stimulus causes increased gamma-frequency synchronization in the superficial layers. Computer simulations and STA power analysis also demonstrate different effects of the different cholinergic attention modulation action mechanisms. PMID:19003498

  15. Livermore Big Artificial Neural Network Toolkit

    SciTech Connect

    Essen, Brian Van; Jacobs, Sam; Kim, Hyojin; Dryden, Nikoli; Moon, Tim

    2016-07-01

    LBANN is a toolkit that is designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training. Specifically, it is optimized for low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library that is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  16. Neural Network Prototyping Package Within IRAF

    NASA Technical Reports Server (NTRS)

    Bazell, David

    1997-01-01

    The purpose of this contract was to develop a neural network package within the IRAF environment to allow users to easily understand and use different neural network algorithms for the analysis of astronomical data. The package was developed for use within IRAF to allow portability to different computing environments and to provide a familiar and easy-to-use interface to the routines. In addition to developing the software and supporting documentation, we planned to use the system for the analysis of several sample problems to prove its viability and usefulness.

  17. Implementation aspects of Graph Neural Networks

    NASA Astrophysics Data System (ADS)

    Barcz, A.; Szymański, Z.; Jankowski, S.

    2013-10-01

    This article summarises the results of implementation of a Graph Neural Network classifier. The Graph Neural Network model is a connectionist model, capable of processing various types of structured data, including non-positional and cyclic graphs. In order to operate correctly, the GNN model must implement a transition function being a contraction map, which is assured by imposing a penalty on model weights. This article presents research results concerning the impact of the penalty parameter on the model training process and the practical decisions that were made during the GNN implementation process.

  18. Signal dispersion within a hippocampal neural network

    NASA Technical Reports Server (NTRS)

    Horowitz, J. M.; Mates, J. W. B.

    1975-01-01

    A model network is described, representing two neural populations coupled so that one population is inhibited by activity it excites in the other. Parameters and operations within the model represent EPSPs, IPSPs, neural thresholds, conduction delays, background activity and spatial and temporal dispersion of signals passing from one population to the other. Simulations of single-shock and pulse-train driving of the network are presented for various parameter values. Neuronal events from 100 to 300 msec following stimulation are given special consideration in model calculations.

  19. Automatic identification of species with neural networks.

    PubMed

    Hernández-Serna, Andrés; Jiménez-Segura, Luz Fernanda

    2014-01-01

    A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% of true positive fish identifications, 92.87% of plants and 93.25% of butterflies. Our results highlight how the neural networks are complementary to species identification.

  20. Simulation of photosynthetic production using neural network

    NASA Astrophysics Data System (ADS)

    Kmet, Tibor; Kmetova, Maria

    2013-10-01

    This paper deals with neural network based optimal control synthesis for solving optimal control problems with control and state constraints and discrete time delay. The optimal control problem is transcribed into a nonlinear programming problem, which is implemented with an adaptive critic neural network. This approach is applicable to a wide class of nonlinear systems. The proposed simulation method is illustrated by the optimal control problem of photosynthetic production described by discrete time delay differential equations. Results show that the adaptive critic based systematic approach holds promise for obtaining the optimal control with control and state constraints.

  1. Automatic identification of species with neural networks

    PubMed Central

    Jiménez-Segura, Luz Fernanda

    2014-01-01

    A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% of true positive fish identifications, 92.87% of plants and 93.25% of butterflies. Our results highlight how the neural networks are complementary to species identification. PMID:25392749

  2. Intelligent neural network classifier for automatic testing

    NASA Astrophysics Data System (ADS)

    Bai, Baoxing; Yu, Heping

    1996-10-01

    This paper is concerned with an application of a multilayer feedforward neural network to the visual inspection of industrial images, and introduces a high-performance image processing and recognition system that can be used for real-time detection of blemishes, streaks, cracks, etc., on the inner walls of high-accuracy pipes. To take full advantage of the capabilities of the artificial neural network, such as distributed information memory, large-scale self-adapting parallel processing, and high fault tolerance, this system uses a multilayer perceptron as a regular detector to extract features of the images to be inspected and classify them.

  3. Autonomous robot behavior based on neural networks

    NASA Astrophysics Data System (ADS)

    Grolinger, Katarina; Jerbic, Bojan; Vranjes, Bozo

    1997-04-01

    The purpose of an autonomous robot is to solve various tasks while adapting its behavior to a variable environment; it is expected to navigate much like a human would, including handling uncertain and unexpected obstacles. To achieve this, the robot has to be able to find solutions to unknown situations, to learn from experience (i.e., action procedures together with the corresponding knowledge of the workspace structure), and to recognize its working environment. The planning of intelligent robot behavior presented in this paper implements reinforcement learning based on strategic and random attempts for finding solutions, and a neural network approach for memorizing and recognizing the workspace structure (the structural assignment problem). Some of the well-known neural networks based on unsupervised learning are considered with regard to the structural assignment problem. An adaptive fuzzy shadowed neural network is developed. It has an additional shadowed hidden layer, a specific learning rule, and an initialization phase. The developed neural network combines advantages of networks based on Adaptive Resonance Theory and, by using the shadowed hidden layer, provides the ability to recognize slightly translated or rotated obstacles in any direction.

  4. Porosity Log Prediction Using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Dwi Saputro, Oki; Lazuardi Maulana, Zulfikar; Dzar Eljabbar Latief, Fourier

    2016-08-01

    Well logging is important in oil and gas exploration. Many physical parameters of a reservoir are derived from well logging measurements. Geophysicists often use well logging to obtain reservoir properties such as porosity, water saturation, and permeability. The measurement of these reservoir properties is often considered expensive. One method to substitute for the measurement is to conduct a prediction using an artificial neural network. In this paper, an artificial neural network is used to predict porosity log data from other log data. Three wells from the ‘yy’ field are used for the prediction experiment. The log data are sonic, gamma ray, and porosity logs. One of the three wells is used as training data for the artificial neural network, which employs the Levenberg-Marquardt backpropagation algorithm. Through several trials, we find that the optimal training inputs are sonic log and gamma ray log data with 10 hidden neurons. The prediction result in well 1 has a correlation of 0.92 and a mean squared error of 5.67 x 10^-4. The trained network is then applied to the other wells' data. The results show that the correlations in well 2 and well 3 are 0.872 and 0.9077, respectively, and the mean squared errors are 11 x 10^-4 and 9.539 x 10^-4. From these results we conclude that the sonic log and gamma ray log can be a good combination for predicting porosity with a neural network.
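
    A hedged sketch of the setup above: a network with two inputs (sonic, gamma ray) and 10 hidden neurons is fitted to porosity by a Levenberg-Marquardt least-squares solve over all weights, with SciPy's 'lm' solver standing in for Levenberg-Marquardt backpropagation. The synthetic well-log data and value ranges are assumptions, not the ‘yy’ field data.

      import numpy as np
      from scipy.optimize import least_squares

      # 2-input, 10-hidden-neuron network fitted to synthetic porosity by a
      # Levenberg-Marquardt least-squares solve over all 41 weights.
      rng = np.random.default_rng(9)
      n = 300
      sonic = rng.uniform(55, 110, n)                 # assumed us/ft range
      gr = rng.uniform(20, 150, n)                    # assumed gamma ray (API) range
      phi = 0.5 * (sonic - 55) / 55 - 0.1 * (gr - 20) / 130 + 0.01 * rng.normal(size=n)
      X = np.column_stack([(sonic - 55) / 55, (gr - 20) / 130])

      def residuals(p):
          W1 = p[:20].reshape(2, 10); b1 = p[20:30]   # input-to-hidden weights and biases
          W2 = p[30:40]; b2 = p[40]                   # hidden-to-output weights and bias
          return np.tanh(X @ W1 + b1) @ W2 + b2 - phi

      fit = least_squares(residuals, 0.1 * rng.normal(size=41), method="lm")
      pred = residuals(fit.x) + phi
      print("correlation with synthetic porosity:", np.corrcoef(pred, phi)[0, 1])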

  5. Experimental fault characterization of a neural network

    NASA Technical Reports Server (NTRS)

    Tan, Chang-Huong

    1990-01-01

    The effects of a variety of faults on a neural network are quantified via simulation. The neural network consists of a single-layered clustering network and a three-layered classification network. The percentage of vectors mistagged by the clustering network, the percentage of vectors misclassified by the classification network, the time taken for the network to stabilize, and the output values are all measured. The results show that both transient and permanent faults have a significant impact on the performance of the measured network. The corresponding mistag and misclassification percentages are typically within 5 to 10 percent of each other. The average mistag percentage and the average misclassification percentage are both about 25 percent. After relearning, the percentage of misclassifications is reduced to 9 percent. In addition, transient faults are found to cause the network to be increasingly unstable as the duration of a transient is increased. The impact of link faults is relatively insignificant in comparison with node faults (1 versus 19 percent misclassified after relearning). There is a linear increase in the mistag and misclassification percentages with decreasing hardware redundancy. In addition, the mistag and misclassification percentages linearly decrease with increasing network size.

  6. Wireless Fractal Ultra-Dense Cellular Networks.

    PubMed

    Hao, Yixue; Chen, Min; Hu, Long; Song, Jeungeun; Volk, Mojca; Humar, Iztok

    2017-04-12

    With the ever-growing number of mobile devices, there is an explosive expansion in mobile data services. This represents a challenge for the traditional cellular network architecture in coping with the massive wireless traffic generated by mobile media applications. To meet this challenge, research is currently focused on the introduction of small cell base stations (BSs), due to their low transmit power consumption and flexibility of deployment. However, due to a complex deployment environment and the low transmit power of small cell BSs, the coverage boundary of a small cell BS will not have a traditional regular shape. Therefore, in this paper, we discuss the coverage boundary of an ultra-dense small cell network and give its main features: aeolotropy of path loss fading and a fractal coverage boundary. A simple performance analysis is given, including coverage probability and transmission rate, based on stochastic geometry theory and fractal theory. Finally, we present an application scene and discuss challenges in the ultra-dense small cell network.
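
    A Monte Carlo sketch of the kind of coverage-probability analysis mentioned above: base stations are drawn from a Poisson point process, the user attaches to the nearest one, and coverage means the SIR exceeds a threshold (noise neglected). The density, path-loss exponent and threshold are assumptions, and the paper's fractal coverage boundary is not modeled here.

      import numpy as np

      # Monte Carlo coverage probability for a nearest-BS user in a Poisson field of
      # small cells with Rayleigh fading (SIR model, noise neglected).
      rng = np.random.default_rng(10)
      lam, alpha, tau = 50e-6, 4.0, 1.0      # BS density per m^2, path-loss exponent, SIR threshold
      R, trials = 3000.0, 2000               # window radius (m), Monte Carlo runs

      covered = 0
      for _ in range(trials):
          n_bs = rng.poisson(lam * np.pi * R ** 2)
          if n_bs == 0:
              continue
          r = R * np.sqrt(rng.random(n_bs))  # BS distances, uniform in the disc around the user
          h = rng.exponential(size=n_bs)     # Rayleigh fading power gains
          p_rx = h * r ** (-alpha)           # received powers at the user
          i = np.argmin(r)                   # nearest-BS association
          if p_rx[i] > tau * (p_rx.sum() - p_rx[i]):
              covered += 1

      print("coverage probability:", covered / trials)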

  7. Optimal flux patterns in cellular metabolic networks.

    PubMed

    Almaas, Eivind

    2007-06-01

    The availability of whole-cell-level metabolic networks of high quality has made it possible to develop a predictive understanding of bacterial metabolism. Using the optimization framework of flux balance analysis, I investigate the metabolic response and activity patterns to variations in the availability of nutrient and chemical factors such as oxygen and ammonia by simulating 30,000 random cellular environments. The distribution of reaction fluxes is heavy tailed for the bacteria H. pylori and E. coli, and the eukaryote S. cerevisiae. While the majority of flux balance investigations has relied on implementations of the simplex method, it is necessary to use interior-point optimization algorithms to adequately characterize the full range of activity patterns on metabolic networks. The interior-point activity pattern is bimodal for E. coli and S. cerevisiae, suggesting that most metabolic reactions are either in frequent use or are rarely active. The trimodal activity pattern of H. pylori indicates that a group of its metabolic reactions (20%) are active in approximately half of the simulated environments. Constructing the high-flux backbone of the network for every environment, there is a clear trend that the more frequently a reaction is active, the more likely it is a part of the backbone. Finally, I briefly discuss the predicted activity patterns of the central carbon metabolic pathways for the sample of random environments.
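
    A toy flux balance analysis in the spirit of the study, sketched below: maximize a biomass flux subject to steady-state mass balance S v = 0 and capacity bounds, solved with SciPy's interior-point LP method. The three-metabolite network is an illustrative assumption, not one of the genome-scale models (H. pylori, E. coli, S. cerevisiae) analyzed in the paper.

      import numpy as np
      from scipy.optimize import linprog

      # Toy FBA: reactions R1 ->A, R2 A->B, R3 A->C, R4 B+C->biomass, R5 C->export.
      S = np.array([
          [1, -1, -1,  0,  0],    # metabolite A balance
          [0,  1,  0, -1,  0],    # metabolite B balance
          [0,  0,  1, -1, -1],    # metabolite C balance
      ])
      bounds = [(0, 10)] * 5                   # flux capacity bounds
      c = np.array([0, 0, 0, -1, 0])           # maximize v4 (biomass) -> minimize -v4

      res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs-ipm")
      print("optimal fluxes:", res.x)
      print("biomass flux:  ", -res.fun)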

  8. Optimal flux patterns in cellular metabolic networks

    SciTech Connect

    Almaas, E

    2007-01-20

    The availability of whole-cell level metabolic networks of high quality has made it possible to develop a predictive understanding of bacterial metabolism. Using the optimization framework of flux balance analysis, I investigate metabolic response and activity patterns to variations in the availability of nutrient and chemical factors such as oxygen and ammonia by simulating 30,000 random cellular environments. The distribution of reaction fluxes is heavy-tailed for the bacteria H. pylori and E. coli, and the eukaryote S. cerevisiae. While the majority of flux balance investigations have relied on implementations of the simplex method, it is necessary to use interior-point optimization algorithms to adequately characterize the full range of activity patterns on metabolic networks. The interior-point activity pattern is bimodal for E. coli and S. cerevisiae, suggesting that most metabolic reactions are either in frequent use or are rarely active. The trimodal activity pattern of H. pylori indicates that a group of its metabolic reactions (20%) are active in approximately half of the simulated environments. Constructing the high-flux backbone of the network for every environment, there is a clear trend that the more frequently a reaction is active, the more likely it is a part of the backbone. Finally, I briefly discuss the predicted activity patterns of the central-carbon metabolic pathways for the sample of random environments.

  9. Optimal flux patterns in cellular metabolic networks

    NASA Astrophysics Data System (ADS)

    Almaas, Eivind

    2007-06-01

    The availability of whole-cell-level metabolic networks of high quality has made it possible to develop a predictive understanding of bacterial metabolism. Using the optimization framework of flux balance analysis, I investigate the metabolic response and activity patterns to variations in the availability of nutrient and chemical factors such as oxygen and ammonia by simulating 30 000 random cellular environments. The distribution of reaction fluxes is heavy tailed for the bacteria H. pylori and E. coli, and the eukaryote S. cerevisiae. While the majority of flux balance investigations has relied on implementations of the simplex method, it is necessary to use interior-point optimization algorithms to adequately characterize the full range of activity patterns on metabolic networks. The interior-point activity pattern is bimodal for E. coli and S. cerevisiae, suggesting that most metabolic reactions are either in frequent use or are rarely active. The trimodal activity pattern of H. pylori indicates that a group of its metabolic reactions (20%) are active in approximately half of the simulated environments. Constructing the high-flux backbone of the network for every environment, there is a clear trend that the more frequently a reaction is active, the more likely it is a part of the backbone. Finally, I briefly discuss the predicted activity patterns of the central carbon metabolic pathways for the sample of random environments.

  10. Payload Invariant Control via Neural Networks: Development and Experimental Evaluation

    DTIC Science & Technology

    1989-12-01

    control is proposed and experimentally evaluated. An Adaptive Model-Based Neural Network Controller (AMBNNC) uses multilayer perceptron artificial neural ... networks to estimate the payload during high speed manipulator motion. The payload estimate adapts the feedforward compensator to unmodeled system

  11. Development of programmable artificial neural networks

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J.

    1993-01-01

    Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time consuming and unpredictable process. A general method is being developed to mate the adaptability of the ANN with the speed and precision of the digital computer. This method was successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.

  12. Computational chaos in massively parallel neural networks

    NASA Technical Reports Server (NTRS)

    Barhen, Jacob; Gulati, Sandeep

    1989-01-01

    A fundamental issue which directly impacts the scalability of current theoretical neural network models to massively parallel embodiments, in both software and hardware, is the inherent and unavoidable concurrent asynchronicity of emerging fine-grained computational ensembles and the possible emergence of chaotic manifestations. Previous analyses attributed dynamical instability to the topology of the interconnection matrix, to parasitic components or to propagation delays. However, researchers have observed the existence of emergent computational chaos in a concurrently asynchronous framework, independent of the network topology. Researchers present a methodology enabling the effective asynchronous operation of large-scale neural networks. Necessary and sufficient conditions guaranteeing concurrent asynchronous convergence are established in terms of contracting operators. Lyapunov exponents are computed formally to characterize the underlying nonlinear dynamics. Simulation results are presented to illustrate network convergence to the correct results, even in the presence of large delays.

  13. The labeled systems of multiple neural networks.

    PubMed

    Nemissi, M; Seridi, H; Akdag, H

    2008-08-01

    This paper proposes an implementation scheme for the K-class classification problem using systems of multiple neural networks. Usually, a multi-class problem is decomposed into simple sub-problems solved independently using similar single neural networks. Because these sub-problems are not equivalent in their complexity, we propose a system that includes reinforced networks destined to solve complicated parts of the entire problem. Our approach is inspired by principles of multi-classifier systems and labeled classification, which aims to improve the performance of networks trained by the Back-Propagation algorithm. We propose two implementation schemes based on both OAO (one-against-one) and OAA (one-against-all). The proposed models are evaluated using iris and human thigh databases.
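    For context, the two standard decompositions named above can be sketched with off-the-shelf wrappers: one-against-one (OAO) trains a network for every pair of classes, while one-against-all (OAA) trains one network per class against the rest. The snippet below is only a plain illustration on the iris data; the reinforced networks and labeled-classification scheme proposed in the paper are not reproduced.

```python
# Plain OAO / OAA decompositions around a small multilayer perceptron.
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
base = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)

oao = OneVsOneClassifier(base).fit(X, y)    # K(K-1)/2 = 3 pairwise networks
oaa = OneVsRestClassifier(base).fit(X, y)   # K = 3 one-vs-rest networks
print("OAO training accuracy:", oao.score(X, y))
print("OAA training accuracy:", oaa.score(X, y))
```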

  14. A neural network based speech recognition system

    NASA Astrophysics Data System (ADS)

    Carroll, Edward J.; Coleman, Norman P., Jr.; Reddy, G. N.

    1990-02-01

    An overview is presented of the development of a neural network based speech recognition system. The two primary tasks involved were the development of a time invariant speech encoder and a pattern recognizer or detector. The speech encoder uses amplitude normalization and a Fast Fourier Transform to eliminate amplitude and frequency shifts of acoustic clues. The detector consists of a back-propagation network which accepts data from the encoder and identifies individual words. This use of neural networks offers two advantages over conventional algorithmic detectors: the detection time is no more than a few network time constants, and its recognition speed is independent of the number of words in the vocabulary. The completed system has functioned as expected with high tolerance to input variation and with error rates comparable to a commercial system when used in a noisy environment.
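    A rough sketch of the kind of encoder described above is given below: a frame of speech is amplitude-normalized and reduced to a normalized FFT magnitude spectrum, so that overall loudness does not change the feature vector handed to the detector network. The frame length, sample rate, and test signal are invented for the example and are not taken from the report.

```python
# Amplitude-normalized FFT magnitude features for one frame of speech.
import numpy as np

def encode_frame(frame, n_fft=256):
    frame = np.asarray(frame, dtype=float)
    frame = frame / (np.max(np.abs(frame)) + 1e-12)   # amplitude normalization
    spectrum = np.abs(np.fft.rfft(frame, n=n_fft))    # magnitude spectrum
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

fs = 8000                                             # assumed sample rate
t = np.arange(0, 0.032, 1.0 / fs)                     # one 32 ms frame
loud = 2.0 * np.sin(2 * np.pi * 440 * t)
quiet = 0.1 * np.sin(2 * np.pi * 440 * t)             # same sound, much quieter
print(np.allclose(encode_frame(loud), encode_frame(quiet)))   # amplitude-invariant
```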

  15. A neural network with modular hierarchical learning

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F. (Inventor); Toomarian, Nikzad (Inventor)

    1994-01-01

    This invention provides a new hierarchical approach for supervised neural learning of time dependent trajectories. The modular hierarchical methodology leads to architectures which are more structured than fully interconnected networks. The networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamic effects. The advantages include the sparsity of units and connections and the modular organization. A further advantage is that learning is much more circumscribed than in fully interconnected systems. The present invention is embodied by a neural network including a plurality of neural modules each having a pre-established performance capability wherein each neural module has an output outputting present results of the performance capability and an input for changing the present results of the performance capability. For pattern recognition applications, the performance capability may be an oscillation capability producing a repeating wave pattern as the present results. In the preferred embodiment, each of the plurality of neural modules includes a pre-established capability portion and a performance adjustment portion connected to control the pre-established capability portion.

  16. Integrated semiconductor optical sensors for cellular and neural imaging.

    PubMed

    Levi, Ofer; Lee, Thomas T; Lee, Meredith M; Smith, Stephen J; Harris, James S

    2007-04-01

    We review integrated optical sensors for functional brain imaging, localized index-of-refraction sensing as part of a lab-on-a-chip, and in vivo continuous monitoring of tumor and cancer stem cells. We present semiconductor-based sensors and imaging systems for these applications. Measured intrinsic optical signals and tissue optics simulations indicate the need for high dynamic range and low dark-current neural sensors. Simulated and measured reflectance spectra from our guided resonance filter demonstrate the capability for index-of-refraction sensing on cellular scales, compatible with integrated biosensors. Finally, we characterized a thermally evaporated emission filter that can be used to improve sensitivity for in vivo fluorescence sensing.

  17. Knowledge learning on fuzzy expert neural networks

    NASA Astrophysics Data System (ADS)

    Fu, Hsin-Chia; Shann, J.-J.; Pao, Hsiao-Tien

    1994-03-01

    The proposed fuzzy expert network is an event-driven, acyclic neural network designed for knowledge learning on a fuzzy expert system. Initially, the network is constructed according to a primitive (rough) set of expert rules, including the input and output linguistic variables and values of the system. Each inference rule corresponds to an inference network, which contains five types of nodes: Input, Membership-Function, AND, OR, and Defuzzification Nodes. We propose a two-phase learning procedure for the inference network. The first phase is the competitive backpropagation (CBP) training phase, and the second phase is the rule-pruning phase. The CBP learning algorithm in the training phase enables the network to learn the fuzzy rules as precisely as backpropagation-type learning algorithms and yet as quickly as competitive-type learning algorithms. After the CBP training, the rule-pruning process is performed to delete redundant weight connections, yielding simpler network structures with comparable retrieval performance.
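    To make the five node types concrete, the toy sketch below wires Input, Membership-Function, AND (min), OR (max), and Defuzzification (weighted-average) nodes into two hand-written rules. The membership parameters and rule outputs are invented, and neither the competitive backpropagation training nor the rule-pruning phase of the paper is reproduced.

```python
# Minimal fuzzy inference pass through the five node types named above.
import numpy as np

def triangle(x, a, b, c):
    """Membership-Function node: triangular membership with corners a, b, c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def infer(temp, humidity):
    # Input nodes feed Membership-Function nodes for two linguistic values
    temp_high = triangle(temp, 20.0, 30.0, 40.0)
    hum_low = triangle(humidity, 0.0, 20.0, 50.0)
    # AND node (min) and OR node (max) combine the rule antecedents
    rule1 = min(temp_high, hum_low)      # IF temp is high AND humidity is low
    rule2 = max(temp_high, hum_low)      # IF temp is high OR humidity is low
    # Defuzzification node: weighted average of the rules' output singletons
    outputs = np.array([0.9, 0.5])       # invented consequent values
    strengths = np.array([rule1, rule2])
    return float((strengths * outputs).sum() / (strengths.sum() + 1e-12))

print(infer(temp=32.0, humidity=15.0))
```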

  18. A Wireless Communications Laboratory on Cellular Network Planning

    ERIC Educational Resources Information Center

    Dawy, Z.; Husseini, A.; Yaacoub, E.; Al-Kanj, L.

    2010-01-01

    The field of radio network planning and optimization (RNPO) is central for wireless cellular network design, deployment, and enhancement. Wireless cellular operators invest huge sums of capital on deploying, launching, and maintaining their networks in order to ensure competitive performance and high user satisfaction. This work presents a lab…

  19. A Wireless Communications Laboratory on Cellular Network Planning

    ERIC Educational Resources Information Center

    Dawy, Z.; Husseini, A.; Yaacoub, E.; Al-Kanj, L.

    2010-01-01

    The field of radio network planning and optimization (RNPO) is central for wireless cellular network design, deployment, and enhancement. Wireless cellular operators invest huge sums of capital on deploying, launching, and maintaining their networks in order to ensure competitive performance and high user satisfaction. This work presents a lab…

  20. Neural Networks Applied to Signal Processing

    DTIC Science & Technology

    1989-09-01

    Subject terms: Neural network, backpropagation, conjugate gradient method, Fibonacci line search, nonlinear signal... of the First Layer Gradients... Calculation of the Input Layer Gradient... Fibonacci Line Search Parameters... A conjugate gradient optimization method is presented and then applied to the neural network model. The Fibonacci line search method used in conjunction

  1. Simplified Learning Scheme For Analog Neural Network

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P.

    1991-01-01

    Synaptic connections adjusted one at a time in small increments. Simplified gradient-descent learning scheme for electronic neural-network processor less efficient than better-known back-propagation scheme, but offers two advantages: easily implemented in circuitry because data-access circuitry separated from learning circuitry; and independence of data-access circuitry makes it possible to implement feedforward as well as feedback networks, including those of multiple-attractor type. Important in such applications as recognition of patterns.

  2. Using neural networks to model chaos

    SciTech Connect

    Upadhyay, M.D.

    1996-12-31

    Two types of neural networks -- backpropagation and radial basis function -- are presented for modeling dynamical systems. They were trained to model the Henon, Ikeda and Tinkerbell dynamical systems by providing a set of points randomly chosen from orbits under the functions. After training, the networks were used to simulate the functions to determine the extent to which they could generate the chaotic attractors associated with these systems.
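    An illustrative version of that experiment is sketched below: sample points from the Henon map, fit a small feedforward regressor to the one-step map, and then iterate the trained network to see whether it reproduces the attractor. The architecture, sample counts, and the use of scikit-learn are choices made for the example, not details from the paper.

```python
# Learn the one-step Henon map from sampled orbit points, then iterate the net.
import numpy as np
from sklearn.neural_network import MLPRegressor

def henon_orbit(n, a=1.4, b=0.3, x0=0.1, y0=0.1):
    xs, ys = [x0], [y0]
    for _ in range(n - 1):
        x, y = xs[-1], ys[-1]
        xs.append(1.0 - a * x * x + y)
        ys.append(b * x)
    return np.column_stack([xs, ys])

orbit = henon_orbit(3000)
X, Y = orbit[:-1], orbit[1:]             # (x_n, y_n) -> (x_{n+1}, y_{n+1})
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                     random_state=0).fit(X, Y)

state = np.array([[0.1, 0.1]])           # iterate the trained map from a seed
trajectory = [state.ravel().copy()]
for _ in range(500):
    state = model.predict(state).reshape(1, 2)
    trajectory.append(state.ravel().copy())
print("simulated", len(trajectory), "points; last state:", trajectory[-1])
```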

  3. Auto-associative nanoelectronic neural network

    SciTech Connect

    Nogueira, C. P. S. M.; Guimarães, J. G.

    2014-05-15

    In this paper, an auto-associative neural network using single-electron tunneling (SET) devices is proposed and simulated at low temperature. The nanoelectronic auto-associative network is able to converge to a stable state, previously stored during training. The recognition of the pattern involves decreasing the energy of the input state until it achieves a point of local minimum energy, which corresponds to one of the stored patterns.

  4. Analysis of Wideband Beamformers Designed with Artificial Neural Networks

    DTIC Science & Technology

    1990-12-01

    Technical Report 0-90-1, Analysis of Wideband Beamformers Designed with Artificial Neural Networks, by Cary Cox, Instrumentation Services Division...included. A brief tutorial on beamformers and neural networks is also provided. Subject terms: artificial neural networks, feedforward...Beamformers Designed with Artificial Neural Networks". The study was conducted under the general supervision of Messrs. George P. Bonner, Chief

  5. Neural Network Noise Anomaly Recognition System and Method

    DTIC Science & Technology

    2000-10-04

    determine when an input waveform deviates from learned noise characteristics. A plurality of neural networks is preferably provided, which each receives a...plurality of samples of intervals or windows of the input waveform. Each of the neural networks produces an output based on whether an anomaly is...detected with respect to the noise, which the neural network is trained to detect. The plurality of outputs of the neural networks is preferably applied to

  6. Are artificial neural networks black boxes?

    PubMed

    Benitez, J M; Castro, J L; Requena, I

    1997-01-01

    Artificial neural networks are efficient computing models which have shown their strengths in solving hard problems in artificial intelligence. They have also been shown to be universal approximators. Notwithstanding, one of the major criticisms is their being black boxes, since no satisfactory explanation of their behavior has been offered. In this paper, we provide such an interpretation of neural networks so that they will no longer be seen as black boxes. This is stated after establishing the equality between a certain class of neural nets and fuzzy rule-based systems. This interpretation is built with fuzzy rules using a new fuzzy logic operator which is defined after introducing the concept of f-duality. In addition, this interpretation offers an automated knowledge acquisition procedure.

  7. Psychometric Measurement Models and Artificial Neural Networks

    ERIC Educational Resources Information Center

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. Despite this interest, the…

  8. Neural networks in the former Soviet Union

    SciTech Connect

    Wunsch, D.C. II.

    1993-01-01

    A brief overview is given of neural networks activities in the former Soviet Union that have potential aerospace applications. Activities at institutes in Moscow, the former Leningrad, Kiev, Taganrog, Rostov-on-Don, and Krasnoyarsk are addressed, including the most important scientists involved. 21 refs.

  9. Neural networks and dynamic complex systems

    SciTech Connect

    Fox, G.; Furmanski, Wojtek; Ho, Alex; Koller, J.; Simic, P.; Wong, Isaac

    1989-01-01

    We describe the use of neural networks for optimization and inference associated with a variety of complex systems. We show how a string formalism can be used for parallel computer decomposition, message routing and sequential optimizing compilers. We extend these ideas to a general treatment of spatial assessment and distributed artificial intelligence. 34 refs., 12 figs.

  10. Optoelectronic Integrated Circuits For Neural Networks

    NASA Technical Reports Server (NTRS)

    Psaltis, D.; Katz, J.; Kim, Jae-Hoon; Lin, S. H.; Nouhi, A.

    1990-01-01

    Many threshold devices placed on single substrate. Integrated circuits containing optoelectronic threshold elements developed for use as planar arrays of artificial neurons in research on neural-network computers. Mounted with volume holograms recorded in photorefractive crystals serving as dense arrays of variable interconnections between neurons.

  11. Multidimensional neural growing networks and computer intelligence

    SciTech Connect

    Yashchenko, V.A.

    1995-03-01

    This paper examines information-computation processes in time and in space and some aspects of computer intelligence using multidimensional matrix neural growing networks. In particular, issues of object-oriented "thinking" of computers are considered.

  12. Annual Meeting of International Neural Network Society

    DTIC Science & Technology

    1990-07-31

    Conference program excerpt: Applications Session (speakers listed include Michael Buffa, Daniel Amit, and Wilfrid Veldkamp; affiliations include the Max Planck Institut fur Biophysik-Chemie, Nestor, Inc., Hebrew University, and MIT Lincoln)...30 AM, Amit, Daniel, Hebrew University, Title To Be Announced...Poster Session, Stanbro Room, Thursday, September 8, 1988, Morning (continued): Vowel-Feature Extraction from Cochlear Vibration Using Neural Networks, Irino T

  13. Neural Network Classification of Environmental Samples

    DTIC Science & Technology

    1996-12-01

    Excerpt from the report's reference list: Biological and Artificial Neural Networks, Air Force Institute of Technology, 1990; Rosenblatt, Principles of Neurodynamics, New York, NY: Spartan...; Parallel Distributed Processing: Explorations in the Microstructure of Cognition, MIT Press, 1986; Smagt, Patrick P. van der, "Minimisation Methods

  14. Psychometric Measurement Models and Artificial Neural Networks

    ERIC Educational Resources Information Center

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. Despite this interest, the…

  15. Nonlinear Time Series Analysis via Neural Networks

    NASA Astrophysics Data System (ADS)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with a time series analysis based on neural networks in order to perform effective forex market pattern recognition [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)]. Our goal is to find and recognize important patterns which repeatedly appear in the market history in order to adapt our trading system behaviour based on them.

  16. Localizing Tortoise Nests by Neural Networks.

    PubMed

    Barbuti, Roberto; Chessa, Stefano; Micheli, Alessio; Pucci, Rita

    2016-01-01

    The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging, and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.

  17. Automatic target identification using neural networks

    NASA Astrophysics Data System (ADS)

    Abdallah, Mahmoud A.; Samu, Tayib I.; Grissom, William A.

    1995-10-01

    Neural network theories are applied to attain human-like performance in areas such as speech recognition, statistical mapping, and target recognition or identification. In target identification, one of the difficult tasks has been the extraction of features to be used to train the neural network which is subsequently used for the target's identification. The purpose of this paper is to describe the development of an automatic target identification system using features extracted from a specific class of targets. The extracted features were the graphical representations of the silhouettes of the targets. Image processing techniques and some Fast Fourier Transform (FFT) properties were implemented to extract the features. The FFT eliminates variations in the extracted features due to rotation or scaling. A Neural Network was trained with the extracted features using the Learning Vector Quantization paradigm. An identification system was set up to test the algorithm. The image processing software was interfaced with the MATLAB Neural Network Toolbox via a computer program written in C language to automate the target identification process. The system performed well as it classified the objects used to train it irrespective of rotation, scaling, and translation. This automatic target identification system had a classification success rate of about 95%.
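    One common way to obtain the rotation and scale tolerance described above is sketched below: the silhouette boundary is reduced to a centroid-distance signature, and the FFT magnitude of that signature is kept (a rotation or change of starting point becomes a circular shift, which the magnitude ignores, and dividing by the DC term removes scale). This is a generic illustration, not the authors' exact feature extractor, and the Learning Vector Quantization stage is omitted.

```python
# FFT-magnitude features of a silhouette's centroid-distance signature.
import numpy as np

def silhouette_features(boundary_xy, n_samples=64, n_coeffs=16):
    """boundary_xy: (N, 2) ordered boundary points of the silhouette."""
    pts = np.asarray(boundary_xy, dtype=float)
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)    # centroid distances
    idx = np.linspace(0, len(d), n_samples, endpoint=False)
    d = np.interp(idx, np.arange(len(d)), d, period=len(d))  # fixed-length resample
    spectrum = np.abs(np.fft.rfft(d))
    return spectrum[1:n_coeffs + 1] / (spectrum[0] + 1e-12)  # scale-normalized

# A rotated, rescaled, re-indexed copy of the same contour gives the same features.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
r = 1 + 0.3 * np.cos(3 * theta)
shape = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
rot = np.pi / 5
R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
copy = 2.5 * np.roll(shape, 50, axis=0) @ R.T
print(np.allclose(silhouette_features(shape), silhouette_features(copy)))
```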

  18. Neural network application to comprehensive engine diagnostics

    NASA Technical Reports Server (NTRS)

    Marko, Kenneth A.

    1994-01-01

    We have previously reported on the use of neural networks for detection and identification of faults in complex microprocessor controlled powertrain systems. The data analyzed in those studies consisted of the full spectrum of signals passing between the engine and the real-time microprocessor controller. The specific task of the classification system was to classify system operation as nominal or abnormal and to identify the fault present. The primary concern in earlier work was the identification of faults in sensors or actuators in the powertrain system as it was exercised over its full operating range. The use of data from a variety of sources, each contributing some potentially useful information to the classification task, is commonly referred to as sensor fusion and typifies the type of problems successfully addressed using neural networks. In this work we explore the application of neural networks to a different diagnostic problem: the diagnosis of faults in newly manufactured engines and the utility of neural networks for process control.

  19. Localizing Tortoise Nests by Neural Networks

    PubMed Central

    2016-01-01

    The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging, and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition. PMID:26985660

  20. Brain tumor grading based on Neural Networks and Convolutional Neural Networks.

    PubMed

    Yuehao Pan; Weimin Huang; Zhiping Lin; Wanzheng Zhu; Jiayin Zhou; Wong, Jocelyn; Zhongxiang Ding

    2015-08-01

    This paper studies brain tumor grading using multiphase MRI images and compares the results with various configurations of deep learning structure and baseline Neural Networks. The MRI images are fed directly into the learning machine, with some combination operations between multiphase MRIs. Compared to other studies, which involve additional effort to design and choose feature sets, the approach used in this paper leverages the learning capability of the deep learning machine. We present the grading performance on the testing data measured by the sensitivity and specificity. The results show a maximum improvement of 18% in grading performance, measured by sensitivity and specificity, for Convolutional Neural Networks compared to Neural Networks. We also visualize the kernels trained in different layers and display some self-learned features obtained from Convolutional Neural Networks.

  1. [Application of artificial neural networks in infectious diseases].

    PubMed

    Xu, Jun-fang; Zhou, Xiao-nong

    2011-02-28

    With the development of information technology, artificial neural networks have been applied to many research fields. Due to special features such as nonlinearity, self-adaptation, and parallel processing, artificial neural networks are applied in medicine and biology. This review summarizes the application of artificial neural networks to the related factors, prediction, and diagnosis of infectious diseases in recent years.

  2. Electrically Modifiable Nonvolatile SONOS Synapses for Electronic Neural Networks.

    DTIC Science & Technology

    1992-09-30

    for the electrically reprogrammable analog conductance in an artificial neural network. We have demonstrated the attractive features of this synaptic ...Electrically Modifiable Synaptic Element for VLSI Neural Network Implementation", Proceedings of the 1991 IEEE Nonvolatile Semiconductor Memory Workshop...Nonvolatile Electrically Modifiable Synaptic Element for VLSI Neural Network Implementation", 11th IEEE Nonvolatile Semiconductor Memory Workshop, 1991. 19. A

  3. Neural Network Design on the SRC-6 Reconfigurable Computer

    DTIC Science & Technology

    2006-12-01

    speeds of FPGA systems. This thesis explores the use of a Feed-forward, Multi-Layer Perceptron (MLP) Artificial Neural Network (ANN) architecture... Implementation of a Fast Artificial Neural Network Library (FANN), Graduate Project Report, Department of Computer Science, University of Copenhagen (DIKU...NAVAL POSTGRADUATE SCHOOL MONTEREY, CALIFORNIA THESIS Approved for public release; distribution is unlimited NEURAL NETWORK

  4. Hyperspectral Imagery Classification Using a Backpropagation Neural Network

    DTIC Science & Technology

    1993-12-01

    A backpropagation neural network was developed and implemented for classifying AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) hyperspectral...imagery. It is a fully interconnected linkage of three layers of neural network. Fifty input layer neurons take in signals from Bands 41 to 90 of the...moderate AVIRIS pixel resolution of 20 meters by 20 meters. Subject terms: backpropagation neural network, hyperspectral imagery

  5. Optimal input sizes for neural network de-interlacing

    NASA Astrophysics Data System (ADS)

    Choi, Hyunsoo; Seo, Guiwon; Lee, Chulhee

    2009-02-01

    Neural network de-interlacing has shown promising results among various de-interlacing methods. In this paper, we investigate the effects of input size for neural networks for various video formats when the neural networks are used for de-interlacing. In particular, we investigate optimal input sizes for CIF, VGA and HD video formats.

  6. Chaotic time series prediction using artificial neural networks

    SciTech Connect

    Bartlett, E.B.

    1991-12-31

    This paper describes the use of artificial neural networks to model the complex oscillations defined by a chaotic Verhulst animal population dynamic. A predictive artificial neural network model is developed and tested, and results of computer simulations are given. These results show that the artificial neural network model predicts the chaotic time series with various initial conditions, growth parameters, or noise.

  7. Chaotic time series prediction using artificial neural networks

    SciTech Connect

    Bartlett, E.B.

    1991-01-01

    This paper describes the use of artificial neural networks to model the complex oscillations defined by a chaotic Verhulst animal population dynamic. A predictive artificial neural network model is developed and tested, and results of computer simulations are given. These results show that the artificial neural network model predicts the chaotic time series with various initial conditions, growth parameters, or noise.

  8. Global exponential stability of recurrent neural networks with time-varying delays in the presence of strong external stimuli.

    PubMed

    Zeng, Zhigang; Wang, Jun

    2006-12-01

    This paper presents new theoretical results on the global exponential stability of recurrent neural networks with bounded activation functions and bounded time-varying delays in the presence of strong external stimuli. It is shown that the Cohen-Grossberg neural network is globally exponentially stable, if the absolute value of the input vector exceeds a criterion. As special cases, the Hopfield neural network and the cellular neural network are examined in detail. In addition, it is shown that criteria herein, if partially satisfied, can still be used in combination with existing stability conditions. Simulation results are also discussed in two illustrative examples.
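    For reference, a generic delayed recurrent network of the Cohen-Grossberg type analyzed above, together with the usual meaning of global exponential stability, can be written as follows; the notation is illustrative and may differ from the paper's.

```latex
% Cohen--Grossberg network with bounded time-varying delays and external input u_i
\begin{equation*}
  \dot{x}_i(t) = -a_i\bigl(x_i(t)\bigr)\Bigl[\, b_i\bigl(x_i(t)\bigr)
    - \sum_{j=1}^{n} c_{ij}\, f_j\bigl(x_j(t)\bigr)
    - \sum_{j=1}^{n} d_{ij}\, f_j\bigl(x_j(t - \tau_{ij}(t))\bigr)
    - u_i \,\Bigr], \qquad i = 1, \dots, n.
\end{equation*}
% The Hopfield and cellular neural network models are special cases (a_i \equiv 1,
% b_i linear, and, for the cellular network, f_j piecewise linear). Global
% exponential stability of an equilibrium x^* means there exist M \ge 1 and
% \varepsilon > 0 such that
\begin{equation*}
  \|x(t) - x^*\| \le M \, e^{-\varepsilon t} \sup_{s \in [-\tau, 0]} \|x(s) - x^*\|
  \qquad \text{for all } t \ge 0 .
\end{equation*}
```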

  9. Optical implementation of neural networks

    NASA Astrophysics Data System (ADS)

    Yu, Francis T. S.; Guo, Ruyan

    2002-12-01

    An adaptive optical neuro-computing (ONC) using inexpensive pocket size liquid crystal televisions (LCTVs) had been developed by the graduate students in the Electro-Optics Laboratory at The Pennsylvania State University. Although this neuro-computing has only 8×8=64 neurons, it can be easily extended to 16×20=320 neurons. The major advantages of this LCTV architecture as compared with other reported ONCs, are low cost and the flexibility to operate. To test the performance, several neural net models are used. These models are Interpattern Association, Hetero-association and unsupervised learning algorithms. The system design considerations and experimental demonstrations are also included.

  10. Hybrid neural networks--combining abstract and realistic neural units.

    PubMed

    Lytton, William W; Hines, Michael

    2004-01-01

    There is a trade-off in neural network simulation between simulations that embody the details of neuronal biology and those that omit these details in favor of abstractions. The former approach appeals to physiologists and pharmacologists who can directly relate their experimental manipulations to parameter changes in the model. The latter approach appeals to physicists and mathematicians who seek analytic understanding of the behavior of large numbers of coupled simple units. This simplified approach is also valuable for practical reasons: a highly simplified unit will run several orders of magnitude faster than a complex, biologically realistic unit. In order to have our cake and eat it, we have developed hybrid networks in the Neuron simulator package. These make use of Neuron's local variable timestep method to permit simplified integrate-and-fire units to move ahead quickly while realistic neurons in the same network are integrated slowly.

  11. Classifying multispectral data by neural networks

    NASA Technical Reports Server (NTRS)

    Telfer, Brian A.; Szu, Harold H.; Kiang, Richard K.

    1993-01-01

    Several energy functions for synthesizing neural networks are tested on 2-D synthetic data and on Landsat-4 Thematic Mapper data. These new energy functions, designed specifically for minimizing misclassification error, in some cases yield significant improvements in classification accuracy over the standard least mean squares energy function. In addition to operating on networks with one output unit per class, a new energy function is tested for binary encoded outputs, which result in smaller network sizes. The Thematic Mapper data (four bands were used) is classified on a single pixel basis, to provide a starting benchmark against which further improvements will be measured. Improvements are underway to make use of both subpixel and superpixel (i.e. contextual or neighborhood) information in the processing. For single pixel classification, the best neural network result is 78.7 percent, compared with 71.7 percent for a classical nearest neighbor classifier. The 78.7 percent result also improves on several earlier neural network results on this data.

  12. Back propagation neural networks for facial verification

    SciTech Connect

    Garnett, A.E.; Solheim, I.; Payne, T.; Castain, R.H.

    1992-10-01

    We conducted a test to determine the aptitude of neural networks to recognize human faces. The pictures we collected of 511 subjects captured both profiles and many natural expressions. Some of the subjects were wearing glasses, sunglasses, or hats in some of the pictures. The images were compressed by a factor of 100 and converted into image vectors of 1400 pixels. The image vectors were fed into a back propagation neural network with one hidden layer and one output node. The networks were trained to recognize one target person and to reject all other persons. Neural networks for 37 target subjects were trained with 8 different training sets that consisted of different subsets of the data. The networks were then tested on the rest of the data, which consisted of 7000 or more unseen pictures. Results indicate that a false acceptance rate of less than 1 percent can be obtained, and a false rejection rate of 2 percent can be obtained when certain restrictions are followed.
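    The verification setup described above can be sketched as a single-hidden-layer network with one output that should be high for the target person and low for everyone else, with an acceptance threshold chosen to trade false acceptances against false rejections. The snippet below uses random vectors standing in for the 1400-pixel image vectors; the data sizes, threshold, and use of scikit-learn are assumptions for the illustration.

```python
# One-output verification network: accept the probe only above a threshold.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_pixels = 1400
target_faces = rng.normal(0.5, 0.1, size=(40, n_pixels))   # stand-ins for images
other_faces = rng.normal(0.0, 0.1, size=(400, n_pixels))

X = np.vstack([target_faces, other_faces])
y = np.concatenate([np.ones(len(target_faces)), np.zeros(len(other_faces))])
net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000,
                    random_state=0).fit(X, y)

probe = rng.normal(0.5, 0.1, size=(1, n_pixels))
score = net.predict_proba(probe)[0, 1]      # single "is the target" output
print("accepted as target:", bool(score > 0.9))
```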

  13. Neural networks in windprofiler data processing

    NASA Astrophysics Data System (ADS)

    Weber, H.; Richner, H.; Kretzschmar, R.; Ruffieux, D.

    2003-04-01

    Wind profilers are basically Doppler radars yielding 3-dimensional wind profiles that are deduced from the Doppler shift caused by turbulent elements in the atmosphere. These signals can be contaminated by other airborne elements such as birds or hydrometeors. Using a feed-forward neural network with one hidden layer and one output unit, birds and hydrometeors can be successfully identified in non-averaged single spectra; these are subsequently removed in the wind computation. An infrared camera was used to identify birds in one of the beams of the wind profiler. After training the network with about 6000 contaminated data sets, it was able to identify contaminated data in a test data set with a reliability of 96 percent. The assumption was made that the neural network parameters obtained in the beam for which bird data was collected can be transferred to the other beams (at least three beams are needed for computing wind vectors). Comparing the evolution of a wind field with and without the neural network shows a significant improvement of wind data quality. Current work concentrates on training the network also for hydrometeors. It is hoped that the instrument's capability can thus be expanded to not only measure correct winds, but also observe bird migration, estimate precipitation and, by combining precipitation information with vertical velocity measurements, monitor the height of the melting layer.

  14. Back propagation neural networks for facial verification

    SciTech Connect

    Garnett, A.E.; Solheim, I.; Payne, T.; Castain, R.H.

    1992-10-01

    We conducted a test to determine the aptitude of neural networks to recognize human faces. The pictures we collected of 511 subjects captured both profiles and many natural expressions. Some of the subjects were wearing glasses, sunglasses, or hats in some of the pictures. The images were compressed by a factor of 100 and converted into image vectors of 1400 pixels. The image vectors were fed into a back propagation neural network with one hidden layer and one output node. The networks were trained to recognize one target person and to reject all other persons. Neural networks for 37 target subjects were trained with 8 different training sets that consisted of different subsets of the data. The networks were then tested on the rest of the data, which consisted of 7000 or more unseen pictures. Results indicate that a false acceptance rate of less than 1 percent can be obtained, and a false rejection rate of 2 percent can be obtained when certain restrictions are followed.

  15. Computationally Efficient Neural Network Intrusion Security Awareness

    SciTech Connect

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly based systems. Several test cases executed on ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced to less than 1 second from 37 seconds.

  16. Multiscale Modeling of Cortical Neural Networks

    NASA Astrophysics Data System (ADS)

    Torben-Nielsen, Benjamin; Stiefel, Klaus M.

    2009-09-01

    In this study, we describe efforts at modeling the electrophysiological dynamics of cortical networks in a multi-scale manner. Specifically, we describe the implementation of a network model composed of simple single-compartmental neuron models, in which a single complex multi-compartmental model of a pyramidal neuron is embedded. The network is capable of generating Δ (2 Hz, observed during deep sleep states) and γ (40 Hz, observed during wakefulness) oscillations, which are then imposed onto the multi-compartmental model, thus providing realistic, dynamic boundary conditions. We furthermore discuss the challenges and chances involved in multi-scale modeling of neural function.

  17. Intrinsic adaptation in autonomous recurrent neural networks.

    PubMed

    Marković, Dimitrije; Gros, Claudius

    2012-02-01

    A massively recurrent neural network responds on one side to input stimuli and is autonomously active, on the other side, in the absence of sensory inputs. Stimuli and information processing depend crucially on the quality of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaption considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flows, which are quite insensitive to external stimuli, interceded by chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.

  18. A Topological Perspective of Neural Network Structure

    NASA Astrophysics Data System (ADS)

    Sizemore, Ann; Giusti, Chad; Cieslak, Matthew; Grafton, Scott; Bassett, Danielle

    The wiring patterns of white matter tracts between brain regions inform functional capabilities of the neural network. Indeed, densely connected and cyclically arranged cognitive systems may communicate and thus perform distinctly. However, previously employed graph theoretical statistics are local in nature and thus insensitive to such global structure. Here we present an investigation of the structural neural network in eight healthy individuals using persistent homology. An extension of homology to weighted networks, persistent homology records both circuits and cliques (all-to-all connected subgraphs) through a repetitive thresholding process, thus perceiving structural motifs. We report structural features found across patients and discuss brain regions responsible for these patterns, finally considering the implications of such motifs in relation to cognitive function.

  19. Controlling neural network responsiveness: tradeoffs and constraints

    PubMed Central

    Keren, Hanna; Marom, Shimon

    2014-01-01

    In recent years much effort has been invested in means to control neural population responses at the whole brain level, within the context of developing advanced medical applications. The tradeoffs and constraints involved, however, remain elusive due to obvious complications entailed by studying whole brain dynamics. Here, we present effective control of response features (probability and latency) of cortical networks in vitro over many hours, and offer this approach as an experimental toy for studying controllability of neural networks in the wider context. Exercising this approach, we show that enforcement of stable high activity rates by means of closed loop control may enhance alteration of underlying global input–output relations and activity-dependent dispersion of neuronal pair-wise correlations across the network. PMID:24808860

  20. Fuzzy logic and neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Lea, Robert N.; Savely, Robert T.

    1992-01-01

    Applications of fuzzy logic technologies in NASA projects are reviewed to examine their advantages in the development of neural networks for aerospace and commercial expert systems and control. Examples of fuzzy-logic applications include a 6-DOF spacecraft controller, collision-avoidance systems, and reinforcement-learning techniques. The commercial applications examined include a fuzzy autofocusing system, an air conditioning system, and an automobile transmission application. The practical use of fuzzy logic is set in the theoretical context of artificial neural systems (ANSs) to give the background for an overview of ANS research programs at NASA. The research and application programs include the Network Execution and Training Simulator and faster training algorithms such as the Difference Optimized Training Scheme. The networks are well suited for pattern-recognition applications such as predicting sunspots, controlling posture maintenance, and conducting adaptive diagnoses.

  1. Noise in genetic and neural networks

    NASA Astrophysics Data System (ADS)

    Swain, Peter S.; Longtin, André

    2006-06-01

    Both neural and genetic networks are significantly noisy, and stochastic effects in both cases ultimately arise from molecular events. Nevertheless, a gulf exists between the two fields, with researchers in one often being unaware of similar work in the other. In this Special Issue, we focus on bridging this gap and present a collection of papers from both fields together. For each field, the networks studied range from just a single gene or neuron to endogenous networks. In this introductory article, we describe the sources of noise in both genetic and neural systems. We discuss the modeling techniques in each area and point out similarities. We hope that, by reading both sets of papers, ideas developed in one field will give insight to scientists from the other and that a common language and methodology will develop.

  2. Neural networks: Application to medical imaging

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.

    1994-01-01

    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  3. a Heterosynaptic Learning Rule for Neural Networks

    NASA Astrophysics Data System (ADS)

    Emmert-Streib, Frank

    In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points when a synapse is modified. Moreover, the learning rule does not only affect the synapse between pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also affects more remote synapses of the pre- and postsynaptic neuron. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful in training neural networks by learning parity functions including the exclusive-or (XOR) mapping in a multilayer feed-forward network. We find that our stochastic learning rule works well, even in the presence of noise. Importantly, the mean learning time increases polynomially with the number of patterns to be learned, indicating efficient learning.
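    A loose sketch in the spirit of such a rule is given below: a Hebbian coincidence term is gated by a reward signal, and only a random subset of synapses is updated at each time step. The actual heterosynaptic rule, the parity-learning experiments, and the network architecture of the paper are not reproduced; all parameters here are invented.

```python
# Stochastic, reward-modulated Hebb-like weight update on a toy task.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 4, 2
W = rng.normal(0.0, 0.1, size=(n_out, n_in))

def step(W, x, target, lr=0.05, p_update=0.5):
    y = np.tanh(W @ x)                         # postsynaptic activities
    reward = 1.0 if np.argmax(y) == target else -1.0
    hebb = np.outer(y, x)                      # pre/post coincidence term
    mask = rng.random(W.shape) < p_update      # stochastic choice of synapses
    return W + lr * reward * hebb * mask

for _ in range(200):
    x = rng.normal(size=n_in)
    W = step(W, x, target=int(x[0] > 0))       # toy task: sign of first input
print("final weights:\n", W)
```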

  4. Heterogeneous force network in 3D cellularized collagen networks.

    PubMed

    Liang, Long; Jones, Christopher; Chen, Shaohua; Sun, Bo; Jiao, Yang

    2016-10-25

    Collagen networks play an important role in coordinating and regulating collective cellular dynamics via a number of signaling pathways. Here, we investigate the transmission of forces generated by contractile cells in 3D collagen-I networks. Specifically, the graph (bond-node) representations of collagen networks with collagen concentrations of 1, 2 and 4 mg ml(-1) are derived from confocal microscopy data and used to model the network microstructure. Cell contraction is modeled by applying correlated displacements at specific nodes of the network, representing the focal adhesion sites. A nonlinear elastic model is employed to characterize the mechanical behavior of individual fiber bundles including strain hardening during stretching and buckling under compression. A force-based relaxation method is employed to obtain equilibrium network configurations under cell contraction. We find that for all collagen concentrations, the majority of the forces are carried by a small number of heterogeneous force chains emitted from the contracting cells, which is qualitatively consistent with our experimental observations. The force chains consist of fiber segments that either possess a high degree of alignment before cell contraction or are aligned due to fiber reorientation induced by cell contraction. The decay of the forces along the force chains is significantly slower than the decay of radially averaged forces in the system, suggesting that the fibrous nature of biopolymer network structure can support long-range force transmission. The force chains emerge even at very small cell contractions, and the number of force chains increases with increasing cell contraction. At large cell contractions, the fibers close to the cell surface are in the nonlinear regime, and the nonlinear region is localized in a small neighborhood of the cell. In addition, the number of force chains increases with increasing collagen concentration, due to the larger number of focal adhesion sites
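    A stripped-down version of such a force-based relaxation is sketched below: a random 2D spring network stands in for the collagen graph, a few "focal adhesion" nodes are displaced toward the cell and held fixed, and the remaining nodes relax toward mechanical equilibrium. The linear springs replace the paper's nonlinear (strain-hardening, buckling) fiber model, and the geometry and parameters are invented.

```python
# Overdamped force-based relaxation of a toy 2D fiber (spring) network.
import numpy as np

rng = np.random.default_rng(2)
n_nodes = 50
pos = rng.uniform(0, 10, size=(n_nodes, 2))

edges = set()                                  # connect each node to 3 neighbours
for i in range(n_nodes):
    for j in np.argsort(np.linalg.norm(pos - pos[i], axis=1))[1:4]:
        edges.add(tuple(sorted((i, int(j)))))
edges = sorted(edges)
rest = {e: np.linalg.norm(pos[e[0]] - pos[e[1]]) for e in edges}

adhesion = [0, 1, 2]                           # "focal adhesion" nodes
pos[adhesion] += 0.2 * (np.array([5.0, 5.0]) - pos[adhesion])   # cell contraction

k, dt = 1.0, 0.05
for _ in range(2000):
    force = np.zeros_like(pos)
    for i, j in edges:
        d = pos[j] - pos[i]
        length = np.linalg.norm(d) + 1e-12
        f = k * (length - rest[(i, j)]) * d / length   # linear spring force
        force[i] += f
        force[j] -= f
    force[adhesion] = 0.0                      # adhesion nodes held displaced
    pos += dt * force
print("max residual force on free nodes:", float(np.abs(force).max()))
```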

  5. Heterogeneous force network in 3D cellularized collagen networks

    NASA Astrophysics Data System (ADS)

    Liang, Long; Jones, Christopher; Chen, Shaohua; Sun, Bo; Jiao, Yang

    2016-12-01

    Collagen networks play an important role in coordinating and regulating collective cellular dynamics via a number of signaling pathways. Here, we investigate the transmission of forces generated by contractile cells in 3D collagen-I networks. Specifically, the graph (bond-node) representations of collagen networks with collagen concentrations of 1, 2 and 4 mg ml-1 are derived from confocal microscopy data and used to model the network microstructure. Cell contraction is modeled by applying correlated displacements at specific nodes of the network, representing the focal adhesion sites. A nonlinear elastic model is employed to characterize the mechanical behavior of individual fiber bundles including strain hardening during stretching and buckling under compression. A force-based relaxation method is employed to obtain equilibrium network configurations under cell contraction. We find that for all collagen concentrations, the majority of the forces are carried by a small number of heterogeneous force chains emitted from the contracting cells, which is qualitatively consistent with our experimental observations. The force chains consist of fiber segments that either possess a high degree of alignment before cell contraction or are aligned due to fiber reorientation induced by cell contraction. The decay of the forces along the force chains is significantly slower than the decay of radially averaged forces in the system, suggesting that the fibrous nature of biopolymer network structure can support long-range force transmission. The force chains emerge even at very small cell contractions, and the number of force chains increases with increasing cell contraction. At large cell contractions, the fibers close to the cell surface are in the nonlinear regime, and the nonlinear region is localized in a small neighborhood of the cell. In addition, the number of force chains increases with increasing collagen concentration, due to the larger number of focal adhesion sites

  6. Do neural networks offer something for you?

    SciTech Connect

    Ramchandran, S.; Rhinehart, R.R.

    1995-11-01

    The concept of neural network computation was inspired by the hope to artificially reproduce some of the flexibility and power of the human brain. Human beings can recognize different patterns and voices even though these signals do not have a simple phenomenological understanding. Scientists have developed artificial neural networks (ANNs) for modeling processes that do not have a simple phenomenological explanation, such as voice recognition. Consequently, ANN jargon can be confusing to process and control engineers. In simple terms, ANNs take a nonlinear regression modeling approach. Like any regression curve-fitting approach, a least-squares optimization can generate model parameters. One advantage of ANNs is that they require neither a priori understanding of the process behavior nor phenomenological understanding of the process. ANNs use data describing the input/output relationship in a process to "learn" about the underlying process behavior. As a result of this, ANNs have a wide range of applicability. Furthermore, ANNs are computationally efficient and can replace models that are computationally intensive. This can make real-time online model-based applications practicable. A neural network is a dense mesh of nodes and connections. The basic processing elements of a network are called neurons. Neural networks are organized in layers, and typically consist of at least three layers: an input layer, one or more hidden layers, and an output layer. The input and output layers serve as interfaces that perform appropriate scaling between real-world and network data. Hidden layers are so termed because their neurons are hidden to the real-world data. Connections are the means for information flow. Each connection has an associated adjustable weight, w_i. The weight can be regarded as a measure of the importance of the signals between the two neurons. 7 figs.
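    The layered structure and least-squares fitting described above can be made concrete with a bare-bones example: one input layer, one hidden layer, one output layer, and weights adjusted by gradient descent on the squared error. The layer sizes, learning rate, and toy target function below are arbitrary choices for the illustration.

```python
# Minimal three-layer network fitted by least-squares gradient descent.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, :1]) + 0.5 * X[:, 1:]**2           # nonlinear input/output data

W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)   # input -> hidden weights
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)   # hidden -> output weights

lr = 0.05
for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                       # hidden-layer activations
    out = h @ W2 + b2                              # linear output layer
    err = out - y                                  # least-squares residual
    # backpropagate the squared-error gradient through both layers
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
print("final mean squared error:", float((err**2).mean()))
```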

  7. Neural networks in the process industries

    SciTech Connect

    Ben, L.R.; Heavner, L.

    1996-12-01

    Neural networks, or more precisely, artificial neural networks (ANNs), are rapidly gaining in popularity. They first began to appear on the process-control scene in the early 1990s, but have been a research focus for more than 30 years. Neural networks are really empirical models that approximate the way man thinks neurons in the human brain work. Neural-net technology is not trying to produce computerized clones, but to model nature in an effort to mimic some of the brain's capabilities. Modeling, for the purposes of this article, means developing a mathematical description of physical phenomena. The physics and chemistry of industrial processes are usually quite complex and sometimes poorly understood. Our process understanding, and our imperfect ability to describe complexity in mathematical terms, limit fidelity of first-principle models. Computational requirements for executing these complex models are a further limitation. It is often not possible to execute first-principle model algorithms at the high rate required for online control. Nevertheless, rigorous first principle models are commonplace design tools. Process control is another matter. Important model inputs are often not available as process measurements, making real-time application difficult. In fact, engineers often use models to infer unavailable measurements. 5 figs.

  8. Supporting performance and configuration management of GTE cellular networks

    SciTech Connect

    Tan, Ming; Lafond, C.; Jakobson, G.; Young, G.

    1996-12-31

    GTE Laboratories, in cooperation with GTE Mobilnet, has developed and deployed PERFEX (PERFormance EXpert), an intelligent system for performance and configuration management of cellular networks. PERFEX assists cellular network performance and radio engineers in the analysis of large volumes of cellular network performance and configuration data. It helps them locate and determine the probable causes of performance problems, and provides intelligent suggestions about how to correct them. The system combines an expert cellular network performance tuning capability with a map-based graphical user interface, data visualization programs, and a set of special cellular engineering tools. PERFEX is in daily use at more than 25 GTE Mobile Switching Centers. Since the first deployment of the system in late 1993, PERFEX has become a major GTE cellular network performance optimization tool.

  9. Pruning Neural Networks with Distribution Estimation Algorithms

    SciTech Connect

    Cantu-Paz, E

    2003-01-15

    This paper describes the application of four evolutionary algorithms to the pruning of neural networks used in classification problems. Besides a simple genetic algorithm (GA), the paper considers three distribution estimation algorithms (DEAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine if the DEAs present advantages over the simple GA in terms of accuracy or speed in this problem. The experiments used a feedforward neural network trained with standard backpropagation on public-domain and artificial data sets. The pruned networks seemed to have accuracy better than or equal to that of the original fully-connected networks. Only in a few cases did pruning result in less accurate networks. We found few differences in the accuracy of the networks pruned by the four EAs, but found important differences in the execution time. The results suggest that a simple GA with a small population might be the best algorithm for pruning networks on the data sets we tested.
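    The flavour of evolutionary pruning can be sketched with a plain genetic algorithm: a binary chromosome marks which hidden units of a trained network are kept, and selection plus mutation searches for masks that preserve accuracy. The snippet below assumes scikit-learn's default ReLU hidden units and logistic output; the compact GA, extended compact GA, and Bayesian Optimization Algorithm compared in the paper are not shown.

```python
# Genetic-algorithm pruning of hidden units in a trained feedforward network.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X = (X - X.mean(axis=0)) / X.std(axis=0)               # simple standardization
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

def masked_accuracy(mask):
    """Forward pass of the trained net with pruned hidden units zeroed out."""
    h = np.maximum((X @ net.coefs_[0]) * mask + net.intercepts_[0] * mask, 0.0)
    logit = h @ net.coefs_[1] + net.intercepts_[1]
    return ((logit.ravel() > 0).astype(int) == y).mean()

def fitness(mask):
    return masked_accuracy(mask) - 0.01 * mask.mean()   # small reward for pruning

rng = np.random.default_rng(4)
pop = rng.integers(0, 2, size=(20, 16)).astype(float)
for gen in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]             # truncation selection
    children = parents[rng.integers(0, 10, size=10)].copy()
    flips = rng.random(children.shape) < 0.05           # bit-flip mutation
    children[flips] = 1.0 - children[flips]
    pop = np.vstack([parents, children])
best = max(pop, key=fitness)
print("kept hidden units:", int(best.sum()), "of 16,",
      "accuracy:", round(float(masked_accuracy(best)), 3))
```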

  10. Membership generation using multilayer neural network

    NASA Technical Reports Server (NTRS)

    Kim, Jaeseok

    1992-01-01

    There has been intensive research in neural network applications to pattern recognition problems. Particularly, the back-propagation network has attracted many researchers because of its outstanding performance in pattern recognition applications. In this section, we describe a new method to generate membership functions from training data using a multilayer neural network. The basic idea behind the approach is as follows. The output values of a sigmoid activation function of a neuron bear remarkable resemblance to membership values. Therefore, we can regard the sigmoid activation values as the membership values in fuzzy set theory. Thus, in order to generate class membership values, we first train a suitable multilayer network using a training algorithm such as the back-propagation algorithm. After the training procedure converges, the resulting network can be treated as a membership generation network, where the inputs are feature values and the outputs are membership values in the different classes. This method allows fairly complex membership functions to be generated because the network is highly nonlinear in general. Also, it is to be noted that the membership functions are generated from a classification point of view. For pattern recognition applications, this is highly desirable, although the membership values may not be indicative of the degree of typicality of a feature value in a particular class.
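    A small illustration of the idea is given below: after training a multilayer network on labeled feature vectors, its per-class outputs are read directly as membership degrees. Note that scikit-learn's multi-class output layer is a softmax rather than independent sigmoids, so this is only an approximation of the sigmoid-as-membership scheme described; the data set and network size are arbitrary.

```python
# Read a trained classifier's class outputs as fuzzy membership values.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
net = MLPClassifier(hidden_layer_sizes=(10,), activation="logistic",
                    max_iter=3000, random_state=0).fit(X, y)

sample = X[[70]]                                  # one iris feature vector
memberships = net.predict_proba(sample)[0]        # one value in [0, 1] per class
for name, mu in zip(["setosa", "versicolor", "virginica"], memberships):
    print(f"membership in {name}: {mu:.2f}")
```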

  11. Adaptive Neural Networks for Automatic Negotiation

    SciTech Connect

    Sakas, D. P.; Vlachos, D. S.; Simos, T. E.

    2007-12-26

    The use of fuzzy logic and fuzzy neural networks has been found effective for the modelling of the uncertain relations between the parameters of a negotiation procedure. The problem with these configurations is that they are static, that is, any new knowledge from theory or experiment leads to the construction of entirely new models. To overcome this difficulty, we apply, in this work, an adaptive neural topology to model the negotiation process. Finally, a simple simulation is carried out in order to test the new method.

  12. Gait Recognition Based on Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Sokolova, A.; Konushin, A.

    2017-05-01

    In this work we investigate the problem of recognizing people by their gait. For this task, we implement a deep learning approach using optical flow as the main source of motion information and combine neural feature extraction with an additional embedding of descriptors to improve the representation. In order to find the best heuristics, we compare several deep neural network architectures and learning and classification strategies. The experiments were conducted on two popular gait recognition datasets, so we investigate their advantages and disadvantages as well as the transferability of the considered methods.

  13. The importance of artificial neural networks in biomedicine

    SciTech Connect

    Burke, H.B.

    1995-12-31

    The future explanatory power in biomedicine will be at the molecular-genetic level of analysis (rather than the epidemiologic-demographic or anatomic-cellular levels). This is the level of complex systems. Complex systems are characterized by nonlinearity and complex interactions. It is difficult for traditional statistical methods to capture complex systems because traditional methods attempt to find the model that best fits the statistician's understanding of the phenomenon; complex systems are difficult to understand and therefore difficult to fit with a simple model. Artificial neural networks are nonparametric regression models. They can capture any phenomena, to any degree of accuracy (depending on the adequacy of the data and the power of the predictors), without prior knowledge of the phenomena. Further, artificial neural networks can be represented, not only as formulae, but also as graphical models. Graphical models can increase analytic power and flexibility. Artificial neural networks are a powerful method for capturing complex phenomena, but their use requires a paradigm shift, from exploratory analysis of the data to exploratory analysis of the model.

  14. Exceptional reducibility of complex-valued neural networks.

    PubMed

    Kobayashi, Masaki

    2010-07-01

    A neural network is referred to as minimal if the number of hidden neurons cannot be reduced while maintaining the input-output map. The condition under which the number of hidden neurons can be reduced is referred to as reducibility. Real-valued neural networks have only three simple types of reducibility. These can be naturally extended to complex-valued neural networks without bias terms in the hidden neurons. However, general complex-valued neural networks have another type of reducibility, referred to herein as exceptional reducibility. In this paper, this additional type of reducibility is presented, and a method by which to minimize complex-valued neural networks is proposed.

  15. Non-Intrusive Gaze Tracking Using Artificial Neural Networks

    DTIC Science & Technology

    1994-01-05

    Excerpt (OCR of report front matter): "Non-Intrusive Gaze Tracking Using Artificial Neural Networks", Shumeet Baluja and Dean Pomerleau. Portions of this paper appear in: Baluja, S. & Pomerleau, D.A., "Non-Intrusive Gaze Tracking Using Artificial Neural Networks", Advances in Neural Information... This document has been approved for public release; its distribution is unlimited. Keywords: Gaze Tracking, Artificial Neural Networks.

  16. Applications of neural networks in training science.

    PubMed

    Pfeiffer, Mark; Hohmann, Andreas

    2012-04-01

    Training science views itself as an integrated and applied science, developing practical measures founded on scientific method. Therefore, it demands consideration of a wide spectrum of approaches and methods. Especially in the field of competitive sports, research questions are usually located in complex environments, so that mainly field studies are drawn upon to obtain broad external validity. Here, the interrelations between different variables or variable sets are mostly of a nonlinear character. In these cases, methods like neural networks, e.g., the pattern-recognizing methods of self-organizing Kohonen feature maps or similar instruments for identifying interactions, might be successfully applied to analyze data. Following on from a classification of data analysis methods in training-science research, the aim of the contribution is to give examples from varied sports in which network approaches can be effectively used in training science. First, two examples are given in which neural networks are employed for pattern recognition. While one investigation deals with the detection of sporting talent in swimming, the other is located in game sports research, identifying tactical patterns in team handball. The third and last example shows how an artificial neural network can be used to predict competitive performance in swimming.
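    For readers unfamiliar with Kohonen feature maps, the following is a generic self-organizing map sketch on synthetic "performance profile" vectors; the grid size, learning rate, and neighbourhood schedule are arbitrary choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=(300, 5))                 # 300 profiles, 5 features
grid_h, grid_w, dim = 6, 6, 5
weights = rng.normal(size=(grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)   # unit positions on the grid

n_iter, lr0, sigma0 = 2000, 0.5, 3.0
for it in range(n_iter):
    x = data[rng.integers(len(data))]
    # Best-matching unit (BMU): the unit whose weight vector is closest to the sample.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(dists.argmin(), dists.shape)
    # Decaying learning rate and neighbourhood radius.
    frac = it / n_iter
    lr, sigma = lr0 * (1 - frac), sigma0 * np.exp(-3 * frac)
    # Gaussian neighbourhood around the BMU on the grid.
    grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)

# Each sample can now be assigned to its BMU, revealing clusters of similar profiles.
bmus = [np.unravel_index(np.linalg.norm(weights - s, axis=-1).argmin(),
                         (grid_h, grid_w)) for s in data[:5]]
print(bmus)
```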

  17. Diagnostic ECG classification based on neural networks.

    PubMed

    Bortolan, G; Willems, J L

    1993-01-01

    This study illustrates the use of the neural network approach in the problem of diagnostic classification of resting 12-lead electrocardiograms. A large electrocardiographic library (the CORDA database established at the University of Leuven, Belgium) has been utilized in this study, whose classification is validated by electrocardiographic-independent clinical data. In particular, a subset of 3,253 electrocardiographic signals with single diseases has been selected. Seven diagnostic classes have been considered: normal, left, right, and biventricular hypertrophy, and anterior, inferior, and combined myocardial infarction. The basic architecture used is a feed-forward neural network and the backpropagation algorithm for the training phase. Sensitivity, specificity, total accuracy, and partial accuracy are the indices used for testing and comparing the results with classical methodologies. In order to validate this approach, the accuracy of two statistical models (linear discriminant analysis and logistic discriminant analysis) tuned on the same dataset has been taken as the reference point. Several nets have been trained, either adjusting some components of the architecture of the networks, considering subsets and clusters of the original learning set, or combining different neural networks. The results have confirmed the potentiality and good performance of the connectionist approach when compared with classical methodologies.
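    As a reminder of how the evaluation indices mentioned above are computed, here is a small sketch that derives per-class sensitivity and specificity and total accuracy from a confusion matrix; the class counts below are made up for illustration.

```python
import numpy as np

def class_metrics(conf):
    """conf[i, j] = number of cases with true class i predicted as class j."""
    total = conf.sum()
    tp = np.diag(conf)
    fn = conf.sum(axis=1) - tp          # missed cases of each class
    fp = conf.sum(axis=0) - tp          # other classes predicted as this one
    tn = total - tp - fn - fp
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    total_accuracy = tp.sum() / total
    return sensitivity, specificity, total_accuracy

# Example with three of the seven diagnostic classes (made-up counts).
conf = np.array([[90,  5,  5],
                 [10, 80, 10],
                 [ 4,  6, 90]])
sens, spec, acc = class_metrics(conf)
print("sensitivity:", np.round(sens, 2))
print("specificity:", np.round(spec, 2))
print("total accuracy:", round(acc, 3))
```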

  18. Functional expansion representations of artificial neural networks

    NASA Technical Reports Server (NTRS)

    Gray, W. Steven

    1992-01-01

    In the past few years, significant interest has developed in using artificial neural networks to model and control nonlinear dynamical systems. While there exist many proposed schemes for accomplishing this and a wealth of supporting empirical results, most approaches to date tend to be ad hoc in nature and rely mainly on heuristic justifications. The purpose of this project was to further develop some analytical tools for representing nonlinear discrete-time input-output systems, which when applied to neural networks would give insight on architecture selection, pruning strategies, and learning algorithms. A long-term goal is to determine in what sense, if any, a neural network can be used as a universal approximator for nonlinear input-output maps with memory (i.e., those realized by a dynamical system). This property is well known for the case of static or memoryless input-output maps. The general architecture under consideration in this project was a single-input, single-output recurrent feedforward network.

  19. Character Recognition Using Genetically Trained Neural Networks

    SciTech Connect

    Diniz, C.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-10-01

    Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three-layer feed-forward architecture. To facilitate the input of characters and symbols, a graphic user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases when genetically training the net. And, the amount of
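    The following is only a schematic stand-in for the genetic training described above (random bitmaps and labels, arbitrary GA settings, and not the Neural Network Designer software): a GA evolves the flattened weights of a 64-input feed-forward net with five output classes.

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hidden, n_out = 64, 10, 5
X = rng.integers(0, 2, size=(50, n_in)).astype(float)   # fake 8x8 bitmaps
y = rng.integers(0, n_out, size=50)                      # fake class labels

n_w = n_in * n_hidden + n_hidden * n_out                 # flat genome length

def decode(genome):
    W1 = genome[:n_in * n_hidden].reshape(n_in, n_hidden)
    W2 = genome[n_in * n_hidden:].reshape(n_hidden, n_out)
    return W1, W2

def fitness(genome):
    W1, W2 = decode(genome)
    out = np.tanh(X @ W1) @ W2
    return (out.argmax(axis=1) == y).mean()              # recognition rate

pop = rng.normal(scale=0.5, size=(40, n_w))
for gen in range(100):
    scores = np.array([fitness(g) for g in pop])
    order = np.argsort(scores)[::-1]
    elite = pop[order[:10]]                               # keep the best 10 genomes
    children = []
    for _ in range(30):
        a, b = elite[rng.integers(10)], elite[rng.integers(10)]
        mask = rng.random(n_w) < 0.5                      # uniform crossover
        child = np.where(mask, a, b) + rng.normal(scale=0.05, size=n_w)
        children.append(child)
    pop = np.vstack([elite] + children)

print("best recognition rate:", fitness(pop[0]))
```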

  20. Toward implementation of artificial neural networks that "really work".

    PubMed Central

    Leon, M. A.; Keller, J.

    1997-01-01

    Artificial neural networks are established analytical methods in biomedical research. They have repeatedly outperformed traditional tools for pattern recognition and clinical outcome prediction while assuring continued adaptation and learning. However, successful experimental neural network systems seldom reach a production state; that is, they are not incorporated into clinical information systems. It could be speculated that neural networks simply must undergo a lengthy acceptance process before they become part of the day-to-day operations of health care systems. However, our experience trying to incorporate experimental neural networks into information systems leads us to believe that there are technical and operational barriers that greatly hinder neural network implementation. A solution to these problems may be the delineation of policies and procedures for neural network implementation and the development of a new class of neural network client/server applications that fit the needs of current clinical information systems. PMID:9357613

  1. A Projection Neural Network for Constrained Quadratic Minimax Optimization.

    PubMed

    Liu, Qingshan; Wang, Jun

    2015-11-01

    This paper presents a projection neural network described by a dynamic system for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for global convergence of the proposed neural network. Compared with some of the existing neural networks for quadratic minimax optimization, the proposed neural network is capable of solving more general constrained quadratic minimax optimization problems, and the designed neural network does not include any parameters. Moreover, the neural network has lower model complexity: the number of state variables equals the dimension of the optimization problem. The simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.
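    A hedged sketch of the general idea of projection-type dynamics for constrained quadratic minimax problems follows; it is a generic Euler-discretized projected gradient descent-ascent, not the specific network or convergence conditions proposed in the paper.

```python
# Minimize over x and maximize over y of
#   f(x, y) = 0.5 x'Qx + c'x + x'Ay - 0.5 y'Ry,  subject to box constraints on x and y.
import numpy as np

Q = np.array([[3.0, 0.5], [0.5, 2.0]])      # positive definite
R = np.array([[2.0, 0.0], [0.0, 1.0]])      # positive definite
A = np.array([[1.0, -1.0], [0.5, 2.0]])
c = np.array([1.0, -2.0])
lo, hi = -1.0, 1.0                           # box constraint [-1, 1]^2

project = lambda z: np.clip(z, lo, hi)       # projection onto the feasible box

x = np.zeros(2)
y = np.zeros(2)
step = 0.05
for _ in range(5000):
    gx = Q @ x + c + A @ y                   # gradient of f with respect to x
    gy = A.T @ x - R @ y                     # gradient of f with respect to y
    x = project(x - step * gx)               # descend in x within the box
    y = project(y + step * gy)               # ascend in y within the box

print("approximate saddle point  x*:", np.round(x, 3), " y*:", np.round(y, 3))
```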

  2. Neural network models of categorical perception.

    PubMed

    Damper, R I; Harnad, S R

    2000-05-01

    Studies of the categorical perception (CP) of sensory continua have a long and rich history in psychophysics. In 1977, Macmillan, Kaplan, and Creelman introduced the use of signal detection theory to CP studies. Anderson and colleagues simultaneously proposed the first neural model for CP, yet this line of research has been less well explored. In this paper, we assess the ability of neural-network models of CP to predict the psychophysical performance of real observers with speech sounds and artificial/novel stimuli. We show that a variety of neural mechanisms are capable of generating the characteristics of CP. Hence, CP may not be a special model of perception but an emergent property of any sufficiently powerful general learning system.

  3. Neural networks as a control methodology

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1990-01-01

    While conventional computers must be programmed in a logical fashion by a person who thoroughly understands the task to be performed, the motivation behind neural networks is to develop machines which can train themselves to perform tasks, using available information about desired system behavior and learning from experience. There are three goals of this fellowship program: (1) to evaluate various neural net methods and generate computer software to implement those deemed most promising on a personal computer equipped with Matlab; (2) to evaluate methods currently in the professional literature for system control using neural nets to choose those most applicable to control of flexible structures; and (3) to apply the control strategies chosen in (2) to a computer simulation of a test article, the Control Structures Interaction Suitcase Demonstrator, which is a portable system consisting of a small flexible beam driven by a torque motor and mounted on springs tuned to the first flexible mode of the beam. Results of each are discussed.

  4. On lateral competition in dynamic neural networks

    SciTech Connect

    Bellyustin, N.S.

    1995-02-01

    Artificial neural networks connected homogeneously, which use retinal image processing methods, are considered. We point out that there are probably two different types of lateral inhibition of each neural element by the neighboring ones: one due to the negative connection coefficients between elements, and one due to the decreasing response of a neuron to an excessively high input signal. The first case is characterized by stable dynamics, which is given by the Lyapunov function, while in the second case stability is absent and two-dimensional dynamic chaos occurs if the time step in the integration of the model equations is large enough. The continuous neural medium approximation is used for analytical estimation in both cases. The result is the partition of the parameter space into domains with qualitatively different dynamic modes. Computer simulations confirm the estimates and show that joining two-dimensional chaos with symmetries provided by the initial and boundary conditions may produce patterns which are genuine pieces of art.

  5. Speed up Neural Network Learning by GPGPU

    NASA Astrophysics Data System (ADS)

    Tsuchida, Yuta; Yoshioka, Michifumi

    Recently, graphics boards have come to outperform CPUs, driven by the development of 3D computer graphics and video processing, and they are widely used as computer entertainment has progressed. Implementing general-purpose computing on GPUs (GPGPU) has become easier with CUDA, the integrated development environment distributed by NVIDIA. A GPU has dozens to a hundred arithmetic circuits, whose allocation is controlled by CUDA. Previous research has studied implementing neural networks on GPGPU, but network learning was not addressed, because GPU performance is low for conditional processing and high for linear algebra processing. We therefore propose two methods. First, a whole network is implemented as a single thread, and several networks are trained in parallel to shorten the time needed to find the optimal weight coefficients. Second, this paper introduces parallelization within the neural network structure: the calculations of neurons in the same layer can be parallelized, and the processes that train the same network with different patterns are also independent. As a result, the second method is 20 times faster than the CPU implementation, and about 6 times faster than the first proposed method.

  6. Visual grammars and their neural networks

    NASA Astrophysics Data System (ADS)

    Mjolsness, Eric

    1992-07-01

    We exhibit a systematic way to derive neural nets for vision problems. It involves formulating a vision problem as Bayesian inference or decision on a comprehensive model of the visual domain given by a probabilistic grammar. A key feature of this grammar is the way in which it eliminates model information, such as object labels, as it produces an image; correspondence problems and other noise removal tasks result. The neural nets that arise most directly are generalized assignment networks. Also there are transformations which naturally yield improved algorithms such as correlation matching in scale space and the Frameville neural nets for high-level vision. Networks derived this way generally have objective functions with spurious local minima; such minima may commonly be avoided by dynamics that include deterministic annealing, for example recent improvements to Mean Field Theory dynamics. The grammatical method of neural net design allows domain knowledge to enter from all levels of the grammar, including `abstract' levels remote from the final image data, and may permit new kinds of learning as well.

  7. When Networks Disagree: Ensemble Methods for Hybrid Neural Networks

    DTIC Science & Technology

    1992-10-27

    takes the form of repeated on-line stochastic gradient descent of randomly initialized nets. However, unlike the combination process in parametric estimation, which usually takes the form of a simple average in parameter space, the parameters in a neural network take the form of neuronal weights which
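    A tiny illustration (my own toy example, not from the report) of why such ensembles are combined by averaging outputs rather than parameters: two networks whose hidden units are merely permuted compute the same function, yet their weight-space average does not.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=(5, 3))

W1 = rng.normal(size=(3, 4)); W2 = rng.normal(size=(4, 1))
perm = rng.permutation(4)
W1p, W2p = W1[:, perm], W2[perm, :]          # same net, hidden units reordered

f = lambda a, b: np.tanh(x @ a) @ b
out_a, out_b = f(W1, W2), f(W1p, W2p)
print("identical outputs:", np.allclose(out_a, out_b))

# Averaging outputs preserves the function; averaging weights does not.
out_avg = 0.5 * (out_a + out_b)
weight_avg = f(0.5 * (W1 + W1p), 0.5 * (W2 + W2p))
print("output average matches:   ", np.allclose(out_avg, out_a))
print("parameter average matches:", np.allclose(weight_avg, out_a))
```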

  8. Representing Shape Primitives In Neural Networks

    NASA Astrophysics Data System (ADS)

    Pawlicki, Ted

    1988-08-01

    Parallel distributed, connectionist, neural networks present powerful computational metaphors for diverse applications ranging from machine perception to artificial intelligence [1-3,6]. Historically, such systems have been appealing for their ability to perform self-organization and learning [7, 8, 11]. However, while simple systems of this type can perform interesting tasks, results from such systems are little better than those of existing template matchers in some real-world applications [9,10]. The definition of a more complex structure made from simple units can be used to enhance the performance of these models [4, 5], but the addition of extra complexity raises representational issues. This paper reports on attempts to code information and features which have classically been useful to shape analysis into a neural network system.

  9. Iris Data Classification Using Quantum Neural Networks

    NASA Astrophysics Data System (ADS)

    Sahni, Vishal; Patvardhan, C.

    2006-11-01

    Quantum computing is a novel paradigm that promises to be the future of computing. The performance of quantum algorithms has proved to be stunning. ANNs within the context of classical computation have been used for approximation and classification tasks with some success. This paper presents an idea of quantum neural networks along with the training algorithm and its convergence property. It synergizes the unique properties of quantum bits, or qubits, with the various techniques in vogue in neural networks. An example application to Fisher's Iris data set, a benchmark classification problem, is also presented. The results obtained amply demonstrate the classification capabilities of the quantum neuron and give an idea of their promising capabilities.

  10. Privacy-preserving backpropagation neural network learning.

    PubMed

    Chen, Tingting; Zhong, Sheng

    2009-10-01

    With the development of distributed computing environments, many learning problems now have to deal with distributed input data. To enhance cooperation in learning, it is important to address the privacy concern of each data holder by extending the privacy preservation notion to original learning algorithms. In this paper, we focus on preserving the privacy in an important learning model, multilayer neural networks. We present a privacy-preserving two-party distributed algorithm of backpropagation which allows a neural network to be trained without requiring either party to reveal her data to the other. We provide complete correctness and security analysis of our algorithms. The effectiveness of our algorithms is verified by experiments on various real-world data sets.

  11. Application of neural networks in space construction

    NASA Technical Reports Server (NTRS)

    Thilenius, Stephen C.; Barnes, Frank

    1990-01-01

    When trying to decide which tasks should be done by robots and which tasks should be done by humans with respect to space construction, there has been one decisive barrier which ultimately divides the tasks: can a computer do the job? Von Neumann type computers have great difficulty with problems that the human brain seems to do instantaneously and with little effort. Some of these problems are pattern recognition, speech recognition, content addressable memories, and command interpretation. In an attempt to simulate these talents of the human brain, much research is currently being done into the operation and construction of artificial neural networks. The efficiency of the interface between man and machine, robots in particular, can therefore be greatly improved with the use of neural networks. For example, wouldn't it be easier to command a robot to 'fetch an object' rather than having to control the entire operation by remote control?

  12. Hardware neural network on an SOPC platform

    NASA Astrophysics Data System (ADS)

    Liu, Yifei; Ding, Mingyue; Hu, Xia; Zhou, Yanhong

    2009-10-01

    SOPC (System on Programmable Chip) is an on-chip programmable system based on large-scale Field Programmable Gate Arrays (FPGAs). This paper presents an implementation of an SOPC system with a custom hardware neural network using the Altera FPGA chip EP2C35F672C. The embedded Nios processor was used as the test bench. The test results showed that the SOPC platform with the hardware neural network is faster than the software implementation and that the accuracy of the design meets the system requirements. The verified SOPC system can closely model real-world systems, and will have wide applications in areas such as pattern recognition, data mining and signal processing.

  13. Neural networks predict tomato maturity stage

    NASA Astrophysics Data System (ADS)

    Hahn, Federico

    1999-03-01

    Almost 40% of the total horticultural produce exported from Mexico to the USA is tomato, and quality is fundamental for maintaining the market. Many fruits packed at the green-mature stage do not mature towards a red color because they were harvested before reaching their physiological maturity. Gassing tomatoes to advance maturation has no effect on those fruits, and repacking becomes necessary at terminal markets, causing losses to the producer. Tomato spectral signatures differ at each maturity stage, and tomato size was poorly correlated with peak wavelengths. A back-propagation neural network was used to predict tomato maturity using reflectance ratios as inputs. Higher success rates were achieved in tomato maturity stage recognition with neural networks than with discriminant analysis.

  14. On analog implementations of discrete neural networks

    SciTech Connect

    Beiu, V.; Moore, K.R.

    1998-12-01

    The paper will show that in order to obtain minimum size neural networks (i.e., size-optimal) for implementing any Boolean function, the nonlinear activation function of the neurons has to be the identity function. The authors shall briefly present many results dealing with the approximation capabilities of neural networks, and detail several bounds on the size of threshold gate circuits. Based on a constructive solution for Kolmogorov's superpositions they will show that implementing Boolean functions can be done using neurons having an identity nonlinear function. It follows that size-optimal solutions can be obtained only using analog circuitry. Conclusions and several comments on the required precision end the paper.

  15. Evaluating neural networks and artificial intelligence systems

    NASA Astrophysics Data System (ADS)

    Alberts, David S.

    1994-02-01

    Systems have no intrinsic value in and of themselves, but rather derive value from the contributions they make to the missions, decisions, and tasks they are intended to support. The estimation of the cost-effectiveness of systems is a prerequisite for rational planning, budgeting, and investment documents. Neural network and expert system applications, although similar in their incorporation of a significant amount of decision-making capability, differ from each other in ways that affect the manner in which they can be evaluated. Both these types of systems are, by definition, evolutionary systems, which also impacts their evaluation. This paper discusses key aspects of neural network and expert system applications and their impact on the evaluation process. A practical approach or methodology for evaluating a certain class of expert systems that are particularly difficult to measure using traditional evaluation approaches is presented.

  16. Automatic breast density classification using neural network

    NASA Astrophysics Data System (ADS)

    Arefan, D.; Talebpour, A.; Ahmadinejhad, N.; Kamali Asl, A.

    2015-12-01

    According to studies, the risk of breast cancer is directly associated with breast density. Much research has been done on the automatic assessment of breast density using mammography. In the current study, mammogram artifacts are removed using image processing techniques; with the method presented here, points on the pectoral muscle edges are located and fitted using regression techniques, so the pectoral muscle is detected with high accuracy and the breast tissue is extracted fully automatically. To classify mammography images into three categories (fatty, glandular, dense), a feature based on the difference in gray levels between hard tissue and soft tissue is used in addition to the statistical features, together with a neural network classifier with a hidden layer. The image database used in this research is the mini-MIAS database, and the maximum accuracy of the system in classifying images has been reported as 97.66% with 8 hidden layers in the neural network.

  17. Neural Flows in Hopfield Network Approach

    NASA Astrophysics Data System (ADS)

    Ionescu, Carmen; Panaitescu, Emilian; Stoicescu, Mihai

    2013-12-01

    In most applications involving neural networks, the main problem consists in finding an optimal procedure to reduce the real neuron to simpler models which still express the biological complexity but allow highlighting the main characteristics of the system. We effectively investigate a simple reduction procedure which leads from complex models of Hodgkin-Huxley type to very convenient binary models of Hopfield type. The reduction allows us to describe the neuron interconnections in a quite large network and to obtain information concerning its symmetry and stability. Both cases, homogeneous voltage across the membrane and inhomogeneous voltage along the axon, will be tackled. A few numerical simulations of the neural flow based on the cable equation will also be presented.
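    For concreteness, here is a minimal sketch of the kind of binary Hopfield-type model the reduction targets, with standard Hebbian storage and asynchronous updates; the stored patterns are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(5)
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1,  1, 1,  1, -1, -1, -1, -1]])
n = patterns.shape[1]

# Hebbian weight matrix with zero diagonal.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

def recall(state, steps=50):
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(n)                       # asynchronous single-unit update
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = patterns[0].copy()
noisy[:2] *= -1                                   # flip two bits of the first pattern
print("recovered first pattern:", np.array_equal(recall(noisy), patterns[0]))
```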

  18. A Novel Higher Order Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Xu, Shuxiang

    2010-05-01

    In this paper a new Higher Order Neural Network (HONN) model is introduced and applied in several data mining tasks. Data mining extracts hidden patterns and valuable information from large databases. A hyperbolic tangent function is used as the neuron activation function for the new HONN model. Experiments are conducted to demonstrate the advantages and disadvantages of the new HONN model, when compared with several conventional Artificial Neural Network (ANN) models: a feedforward ANN with the sigmoid activation function; a feedforward ANN with the hyperbolic tangent activation function; and a Radial Basis Function (RBF) ANN with the Gaussian activation function. The experimental results seem to suggest that the new HONN offers higher generalization capability as well as better handling of missing data.
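    As a hedged sketch of the higher-order idea (not the paper's HONN or its training method), the example below augments the inputs with pairwise products and applies a tanh readout fitted by least squares.

```python
import numpy as np

rng = np.random.default_rng(6)

def second_order_features(X):
    """Augment inputs with all pairwise products x_i * x_j (i <= j)."""
    i, j = np.triu_indices(X.shape[1])
    return np.hstack([X, X[:, i] * X[:, j]])

# Target with an interaction term that a purely first-order model cannot capture.
X = rng.uniform(-1, 1, size=(300, 3))
y = np.tanh(1.5 * X[:, 0] * X[:, 1] - 0.5 * X[:, 2])

Phi = second_order_features(X)
w, *_ = np.linalg.lstsq(Phi, np.arctanh(np.clip(y, -0.999, 0.999)), rcond=None)
pred = np.tanh(Phi @ w)        # HONN-style output: tanh of a weighted sum of higher-order terms

print("fit RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```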

  19. Design of fiber optic adaline neural networks

    NASA Astrophysics Data System (ADS)

    Ghosh, Anjan K.; Trepka, Jim

    1997-03-01

    Based on a possible optoelectronic realization of adaptive filters and equalizers using fiber optic tapped delay lines and spatial light modulators, we describe the design of a single-layer fiber optic Adaline neural network that can be used as a bit pattern classifier. In our design, we employ as few electronic devices as possible and use optical computation to utilize the advantages of optics in processing speed, parallelism, and interconnection. The described new optical neural network design is for optical processing of guided light wave signals, not electronic signals. We analyze the convergence or learning characteristics of the optoelectronic Adaline in the presence of errors in the hardware. We show that with such an optoelectronic Adaline it is possible to detect a desired code word/token/header with good accuracy.
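    A software-only sketch of an Adaline code-word detector trained with the LMS (Widrow-Hoff) rule is shown below; the paper's design is optoelectronic, and the code word, step size, and training set here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
code_word = np.array([1, 0, 1, 1, 0, 0, 1, 0])

# Balanced training set: copies of the code word and random other words,
# bipolar-encoded ({0,1} -> {-1,+1}) with a constant bias input.
others = rng.integers(0, 2, size=(100, 8))
X = np.vstack([np.tile(code_word, (100, 1)), others])
d = np.where((X == code_word).all(axis=1), 1.0, -1.0)     # desired responses
Xb = np.hstack([2.0 * X - 1.0, np.ones((len(X), 1))])

w = np.zeros(9)
mu = 0.02                                      # LMS step size
for epoch in range(30):
    for x_i, d_i in zip(Xb, d):
        err = d_i - w @ x_i                    # error before thresholding
        w += mu * err * x_i                    # Widrow-Hoff update

adaline = lambda bits: w @ np.append(2.0 * np.array(bits) - 1.0, 1.0)
print("output for code word :", round(adaline(code_word), 2))
print("output for other word:", round(adaline([0, 1, 0, 0, 1, 1, 0, 1]), 2))
```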

  20. Neural networks for aerosol particles characterization

    NASA Astrophysics Data System (ADS)

    Berdnik, V. V.; Loiko, V. A.

    2016-11-01

    Multilayer perceptron neural networks with one, two and three inputs are built to retrieve the parameters of a spherical homogeneous nonabsorbing particle. The refractive index ranges from 1.3 to 1.7; the particle radius ranges from 0.251 μm to 56.234 μm. The logarithms of the scattered radiation intensity are used as input signals. The problem of selecting the most informative scattering angles is elucidated. It is shown that polychromatic illumination helps to significantly increase the retrieval accuracy. In the absence of measurement errors, the relative error of radius retrieval by the neural network with three inputs is 0.54%, and the relative error of the refractive index retrieval is 0.84%. The effect of measurement errors on the retrieval results is simulated.

  1. Complex Chebyshev-polynomial-based unified model (CCPBUM) neural networks

    NASA Astrophysics Data System (ADS)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1998-03-01

    In this paper, we propose a complex Chebyshev-polynomial-based unified model neural network for the approximation of complex-valued functions. Based on this approximate transformable technique, we have derived the relationship between the single-layered neural network and the multi-layered perceptron neural network. It is shown that the complex Chebyshev-polynomial-based unified model neural network can be represented as a functional link network that is based on Chebyshev polynomials. We also derive a new learning algorithm for the proposed network. It turns out that the complex Chebyshev-polynomial-based unified model neural network not only has the same universal approximation capability, but also learns faster than conventional complex feedforward/recurrent neural networks.
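    To show the underlying functional-link idea in its simplest (real-valued) form, the sketch below expands the input with Chebyshev polynomials T_0..T_n and fits a single linear layer; the paper's model is complex-valued and uses its own learning algorithm.

```python
import numpy as np

def chebyshev_expand(x, order):
    """Return [T_0(x), T_1(x), ..., T_order(x)] columnwise via the recurrence."""
    T = [np.ones_like(x), x]
    for k in range(2, order + 1):
        T.append(2 * x * T[-1] - T[-2])       # T_k = 2x T_{k-1} - T_{k-2}
    return np.column_stack(T[: order + 1])

rng = np.random.default_rng(8)
x = np.linspace(-1, 1, 200)
y = np.sin(3 * x) + 0.05 * rng.normal(size=x.shape)   # noisy target function

Phi = chebyshev_expand(x, order=7)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # single linear readout layer
pred = Phi @ w

print("max abs error vs. true function:", np.max(np.abs(pred - np.sin(3 * x))))
```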

  2. Pattern recognition, neural networks, and artificial intelligence

    NASA Astrophysics Data System (ADS)

    Bezdek, James C.

    1991-03-01

    We write about the relationship between numerical pattern recognition and neural-like computational networks. Extensive research that proposes the use of neural models for a wide variety of applications has been conducted in the past few years. Sometimes the justification for investigating the potential of neural nets (NNs) is obvious. On the other hand, current enthusiasm for this approach has also led to the use of neural models when the apparent rationale for their use is best described as a 'feeding frenzy'. In this latter instance there is at times a concomitant lack of concern about many 'side issues' connected with algorithms (e.g., complexity, convergence, stability, robustness and performance validation) that need attention before any computational model becomes part of an operational system. These issues are examined with a view towards guessing how best to integrate and exploit the promise of the neural approach with other efforts aimed at advancing the art and science of pattern recognition and its applications in fielded systems in the next decade.

  3. Compact 4-D Optical Neural Network Architecture

    DTIC Science & Technology

    1990-04-25

    Excerpt (OCR of report documentation, deduplicated): ... which might require more interconnections than can be realized using electronics (about equal to a dumb honeybee). For example, vision applications, including infrared search and track, may require more than ... populated 1000 x 1000 element planes operating at a frame rate of 1 KHz. Applications for artificial neural networks include robotic control, speech ...

  4. Analog hardware for learning neural networks

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P. (Inventor)

    1991-01-01

    This is a recurrent or feedforward analog neural network processor having a multi-level neuron array and a synaptic matrix for storing weighted analog values of synaptic connection strengths which is characterized by temporarily changing one connection strength at a time to determine its effect on system output relative to the desired target. That connection strength is then adjusted based on the effect, whereby the processor is taught the correct response to training examples connection by connection.
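    A rough software analogue of the serial weight-perturbation scheme described above (assumptions mine: a tiny tanh network, a finite-difference step, and an arbitrarily chosen learning rate): each connection is temporarily nudged, the effect on the output error is measured, and that weight is then adjusted accordingly.

```python
import numpy as np

rng = np.random.default_rng(9)
X = rng.normal(size=(40, 3))
target = 0.7 * X[:, 0] - 0.4 * X[:, 2]           # toy target mapping

W1 = rng.normal(scale=0.3, size=(3, 5))          # input-to-hidden "synapses"
w2 = rng.normal(scale=0.3, size=5)               # hidden-to-output "synapses"

def error(W1, w2):
    return np.mean((np.tanh(X @ W1) @ w2 - target) ** 2)

delta, lr = 1e-3, 0.1
for epoch in range(300):
    for W in (W1, w2):
        for idx in np.ndindex(W.shape):
            base = error(W1, w2)
            W[idx] += delta                      # temporarily perturb one connection
            grad_est = (error(W1, w2) - base) / delta
            W[idx] -= delta                      # restore the connection strength
            W[idx] -= lr * grad_est              # adjust it based on the observed effect

print("final MSE:", round(error(W1, w2), 4))
```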

  5. Neural network architectures to analyze OPAD data

    NASA Technical Reports Server (NTRS)

    Whitaker, Kevin W.

    1992-01-01

    A prototype Optical Plume Anomaly Detection (OPAD) system is now installed on the space shuttle main engine (SSME) Technology Test Bed (TTB) at MSFC. The OPAD system requirements dictate the need for fast, efficient data processing techniques. To address this need of the OPAD system, a study was conducted into how artificial neural networks could be used to assist in the analysis of plume spectral data.

  6. Artificial neural network cardiopulmonary modeling and diagnosis

    DOEpatents

    Kangas, L.J.; Keller, P.E.

    1997-10-28

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis. 12 figs.

  7. Artificial neural network cardiopulmonary modeling and diagnosis

    DOEpatents

    Kangas, Lars J.; Keller, Paul E.

    1997-01-01

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.

  8. Nonvolatile Array Of Synapses For Neural Network

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Elements of array programmed with help of ultraviolet light. A 32 x 32 very-large-scale integrated-circuit array of electronic synapses serves as building-block chip for analog neural-network computer. Synaptic weights stored in nonvolatile manner. Makes information content of array invulnerable to loss of power, and, by eliminating need for circuitry to refresh volatile synaptic memory, makes architecture simpler and more compact.

  9. Cognitively Inspired Neural Network for Situation Recognition

    DTIC Science & Technology

    2010-01-14

    Excerpt (OCR of report front matter and references): AFRL-RY-HS-TR-2010-0028, "Cognitively Inspired Neural Network for Situation Recognition," Roman Ilin and Leonid Perlovsky, AFRL/RYHE, 80 Scott Drive. Cited works include Perlovsky, L.I. and Kozma, R. (Eds.), Neurodynamics of Higher-Level Cognition and Consciousness, Springer-Verlag, Heidelberg, Germany (2007), and Perlovsky, L.I., Deming ...

  10. Learning in Neural Networks: VLSI Implementation Strategies

    NASA Technical Reports Server (NTRS)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  11. Adaptive Filtering Using Recurrent Neural Networks

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum- variance filters. In that they do not require statistical models of noise, the neural- network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.

  12. Polarized DIS Structure Functions from Neural Networks

    SciTech Connect

    Del Debbio, L.; Guffanti, A.; Piccione, A.

    2007-06-13

    We present a parametrization of polarized Deep-Inelastic-Scattering (DIS) structure functions based on Neural Networks. The parametrization provides a bias-free determination of the probability measure in the space of structure functions, which retains information on experimental errors and correlations. As an example we discuss the application of this method to the study of the structure function g_1^p(x, Q^2).

  13. Nonvolatile Array Of Synapses For Neural Network

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Elements of array programmed with help of ultraviolet light. A 32 x 32 very-large-scale integrated-circuit array of electronic synapses serves as building-block chip for analog neural-network computer. Synaptic weights stored in nonvolatile manner. Makes information content of array invulnerable to loss of power, and, by eliminating need for circuitry to refresh volatile synaptic memory, makes architecture simpler and more compact.

  14. Approximation by Ridge Functions and Neural Networks

    DTIC Science & Technology

    1997-01-01

    Excerpt (OCR, with citation markers and punctuation lost): ... univariate spaces X_n. Other authors, most notably Micchelli and Mhaskar, have also considered approximation problems of the type treated here. The work of Micchelli and Mhaskar does not give the best order of approximation; Mhaskar has given best possible results, but ... [References include a Duke Math. J. paper on recovering a function from its projections, and H. Mhaskar, "Neural networks for optimal approximation of smooth and analytic ..."]

  15. Program PSNN (Plasma Spectroscopy Neural Network)

    SciTech Connect

    Morgan, W.L.; Larsen, J.T.

    1993-08-01

    This program uses the standard "delta rule" back-propagation supervised training algorithm for multi-layer neural networks. The inputs are line intensities in arbitrary units, which are then normalized within the program. The outputs are T_e (eV), N_e (cm^-3), and a fractional ionization, which in our testing using H- and He-like spectra was N(He)/[N(H) + N(He)].

  16. Correlation Filter Synthesis Using Neural Networks.

    DTIC Science & Technology

    1993-12-01

    Excerpt (OCR of report documentation page and introduction, deduplicated): ... distortions, and this approach has clear advantages compared to searching stored filters. ... The results also indicate possible significant advantages compared to searching stored filters. The technical effort on correlation filter synthesis using neural networks was ...

  17. Neural network with dynamically adaptable neurons

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    This invention is an adaptive neuron for use in neural network processors. The adaptive neuron participates in the supervised learning phase of operation on a co-equal basis with the synapse matrix elements by adaptively changing its gain in a similar manner to the change of weights in the synapse IO elements. In this manner, training time is decreased by as much as three orders of magnitude.

  18. Development and Organization of Neural Networks.

    DTIC Science & Technology

    1988-01-01

    Excerpt (OCR of report front matter): ... the Hopfield relaxation model. "General Potential Surfaces and Neural Networks," Amir Dembo and Ofer Zeitouni, Division of Applied Mathematics ... Report, June 9, 1987. "The Hopfield Model and Beyond," Bachmann, C. M., ARO Technical Report, December 15, 1986. "A Relaxation Model for Memory with High ..." ... storage efficiency in the Hopfield model. The original model was capable of accurate storage and retrieval, with some error correction, for up to

  19. Living ordered neural networks as model systems for signal processing

    NASA Astrophysics Data System (ADS)

    Villard, C.; Amblard, P. O.; Becq, G.; Gory-Fauré, S.; Brocard, J.; Roth, S.

    2007-06-01

    Neural circuit architecture is a fundamental characteristic of the brain, and how architecture is bound to biological functions is still an open question. Some neuronal geometries seen in the retina or the cochlea are intriguing: information is processed in parallel by several entities, as in "pooling" networks, which have recently drawn the attention of signal processing scientists. These systems indeed exhibit the noise-enhanced processing effect, which is also actively discussed in the neuroscience community at the neuron scale. The aim of our project is to use in-vitro ordered neuron networks as living paradigms to test ideas coming from computational science. The different technological hurdles that have to be overcome are enumerated and the first results are presented. A neuron is a polarised cell, with an excitatory axon and a receiving dendritic tree. We present how soma confinement and axon differentiation can be induced by surface functionalization techniques. The recording of large neuron networks, ordered or not, is also detailed and biological signals are shown. The main difficulty in accessing neural noise in the case of weakly connected networks grown on microelectrode arrays is explained. This opens the door to a new detection technology suitable for sub-cellular analysis and stimulation, whose development will constitute the next step of this project.

  20. Neural network error correction for solving coupled ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
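    A small self-contained sketch of the idea, under my own toy assumptions (explicit Euler instead of Runge-Kutta, the scalar ODE dy/dt = -y^2 whose exact step is known, and a tiny network trained by plain gradient descent): the net learns the local step error and is then used as a corrector, which should track the exact solution more closely than the uncorrected integrator.

```python
import numpy as np

rng = np.random.default_rng(10)
h = 0.2

euler = lambda y: y - h * y**2                 # explicit Euler step for dy/dt = -y^2
exact = lambda y: y / (1.0 + h * y)            # exact step for the same ODE

# Training data: states y and the local error of the Euler step started at y.
y_train = rng.uniform(0.1, 2.0, size=(400, 1))
err_train = exact(y_train) - euler(y_train)

# Tiny one-hidden-layer net trained by plain batch gradient descent on MSE.
W1, b1 = rng.normal(scale=0.5, size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.1, size=(16, 1)), np.zeros(1)
lr = 0.1
for _ in range(20000):
    hid = np.tanh(y_train @ W1 + b1)
    out = hid @ W2 + b2
    d_out = 2.0 * (out - err_train) / len(y_train)
    d_hid = (d_out @ W2.T) * (1.0 - hid**2)
    W2 -= lr * hid.T @ d_out;    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * y_train.T @ d_hid; b1 -= lr * d_hid.sum(axis=0)

net = lambda y: (np.tanh(np.atleast_2d(y) @ W1 + b1) @ W2 + b2).item()

# Integrate from y0 = 1.5 for ten steps: exact, plain Euler, and Euler + learned correction.
y_true = y_plain = y_corr = 1.5
for _ in range(10):
    y_true = exact(y_true)
    y_plain = euler(y_plain)
    y_corr = euler(y_corr) + net(y_corr)
print("plain Euler error :", abs(y_plain - y_true))
print("corrected error   :", abs(y_corr - y_true))
```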