Sample records for neural network patterns

  1. Creative-Dynamics Approach To Neural Intelligence

    NASA Technical Reports Server (NTRS)

    Zak, Michail A.

    1992-01-01

    Paper discusses approach to mathematical modeling of artificial neural networks exhibiting complicated behaviors reminiscent of creativity and intelligence of biological neural networks. Neural network treated as non-Lipschitzian dynamical system - as described in "Non-Lipschitzian Dynamics For Modeling Neural Networks" (NPO-17814). System serves as tool for modeling of temporal-pattern memories and recognition of complicated spatial patterns.

  2. Method and system for pattern analysis using a coarse-coded neural network

    NASA Technical Reports Server (NTRS)

    Spirkovska, Liljana (Inventor); Reid, Max B. (Inventor)

    1994-01-01

A method and system for performing pattern analysis with a neural network by coarse-coding a pattern to be analyzed so as to form a plurality of sub-patterns collectively defined by data. Each of the sub-patterns comprises sets of pattern data. The neural network includes a plurality of fields, each field being associated with one of the sub-patterns so as to receive the sub-pattern data therefrom. Training and testing by the neural network then proceed in the usual way, with one modification: the transfer function thresholds the value obtained by summing the weighted products of each field over all sub-patterns associated with each pattern being analyzed by the system.
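The thresholded field-summing described in this record can be sketched as follows; a minimal illustration in which the 8x8 pattern, the 4x4 fields, the random weights, and the zero threshold are all hypothetical choices, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse_code(pattern, field_shape):
    """Split a 2-D pattern into non-overlapping sub-patterns (fields)."""
    fr, fc = field_shape
    rows, cols = pattern.shape
    return [pattern[r:r + fr, c:c + fc]
            for r in range(0, rows, fr)
            for c in range(0, cols, fc)]

class CoarseCodedUnit:
    """One output unit: each field has its own weight block, and the unit
    thresholds the sum of the weighted products over all fields."""
    def __init__(self, field_shape, n_fields, threshold=0.0):
        self.weights = [rng.normal(size=field_shape) for _ in range(n_fields)]
        self.threshold = threshold

    def forward(self, sub_patterns):
        total = sum(float(np.sum(w * s))
                    for w, s in zip(self.weights, sub_patterns))
        return 1 if total > self.threshold else 0

pattern = rng.integers(0, 2, size=(8, 8)).astype(float)
fields = coarse_code(pattern, (4, 4))      # four 4x4 sub-patterns
unit = CoarseCodedUnit((4, 4), n_fields=len(fields))
out = unit.forward(fields)
```

Because each field sees only its own sub-pattern, the per-field weight blocks stay small, which is the training advantage the record alludes to.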

  3. Vibrational Analysis of Engine Components Using Neural-Net Processing and Electronic Holography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

    1997-01-01

    The use of computational-model trained artificial neural networks to acquire damage specific information from electronic holograms is discussed. A neural network is trained to transform two time-average holograms into a pattern related to the bending-induced-strain distribution of the vibrating component. The bending distribution is very sensitive to component damage unlike the characteristic fringe pattern or the displacement amplitude distribution. The neural network processor is fast for real-time visualization of damage. The two-hologram limit makes the processor more robust to speckle pattern decorrelation. Undamaged and cracked cantilever plates serve as effective objects for testing the combination of electronic holography and neural-net processing. The requirements are discussed for using finite-element-model trained neural networks for field inspections of engine components. The paper specifically discusses neural-network fringe pattern analysis in the presence of the laser speckle effect and the performances of two limiting cases of the neural-net architecture.

  4. Vibrational Analysis of Engine Components Using Neural-Net Processing and Electronic Holography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

    1998-01-01

    The use of computational-model trained artificial neural networks to acquire damage specific information from electronic holograms is discussed. A neural network is trained to transform two time-average holograms into a pattern related to the bending-induced-strain distribution of the vibrating component. The bending distribution is very sensitive to component damage unlike the characteristic fringe pattern or the displacement amplitude distribution. The neural network processor is fast for real-time visualization of damage. The two-hologram limit makes the processor more robust to speckle pattern decorrelation. Undamaged and cracked cantilever plates serve as effective objects for testing the combination of electronic holography and neural-net processing. The requirements are discussed for using finite-element-model trained neural networks for field inspections of engine components. The paper specifically discusses neural-network fringe pattern analysis in the presence of the laser speckle effect and the performances of two limiting cases of the neural-net architecture.

  5. Patterns of synchrony for feed-forward and auto-regulation feed-forward neural networks.

    PubMed

    Aguiar, Manuela A D; Dias, Ana Paula S; Ferreira, Flora

    2017-01-01

We consider feed-forward and auto-regulation feed-forward neural (weighted) coupled cell networks. In feed-forward neural networks, cells are arranged in layers such that the cells of the first layer have an empty input set and the cells of each subsequent layer receive inputs only from cells of the previous layer. An auto-regulation feed-forward neural coupled cell network is a feed-forward neural network in which, additionally, some cells of the first layer have auto-regulation, that is, a self-loop. Given a network structure, a robust pattern of synchrony is a space defined in terms of equalities of cell coordinates that is flow-invariant for any coupled cell system (with additive input structure) associated with the network. In this paper, we describe the robust patterns of synchrony for feed-forward and auto-regulation feed-forward neural networks. Regarding feed-forward neural networks, we show that only cells in the same layer can synchronize. On the other hand, in the presence of auto-regulation, we prove that cells in different layers can synchronize in a robust way, and we give a characterization of the possible patterns of synchrony that can occur for auto-regulation feed-forward neural networks.
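The flow-invariance behind a robust pattern of synchrony can be illustrated numerically. This is a minimal sketch with an arbitrary smooth additive-input dynamics, not the authors' system: when the two second-layer cells start equal, they stay exactly equal, since they share the same internal dynamics and receive the same summed first-layer input.

```python
import numpy as np

def f(x, inputs_sum):
    # illustrative internal dynamics with additive input structure
    return -x + np.tanh(x) + inputs_sum

def simulate(x0, layers, dt=0.01, steps=1000):
    """Euler-integrate a feed-forward coupled cell network.
    `layers` lists the cell indices per layer; each cell in layer k > 0
    receives the summed state of all cells in layer k - 1."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        dx = np.zeros_like(x)
        for k, layer in enumerate(layers):
            inp = x[layers[k - 1]].sum() if k > 0 else 0.0
            for i in layer:
                dx[i] = f(x[i], inp)
        x = x + dt * dx
    return x

layers = [[0, 1], [2, 3]]        # two layers of two cells each
# start on the synchrony subspace x2 = x3 of the second layer
x = simulate([0.5, -0.3, 0.2, 0.2], layers)
```

The subspace `x2 = x3` is flow-invariant here for any choice of `f`, which is what makes the pattern of synchrony robust in the paper's sense.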

  6. Angle of Arrival Detection Through Artificial Neural Network Analysis of Optical Fiber Intensity Patterns

    DTIC Science & Technology

    1990-12-01

Thesis, AFIT/GE/ENG/90D-62, by Captain Scott Thomas, USAF, Air Force Institute of Technology, December 1990; approved for public release, distribution unlimited. The thesis investigates angle-of-arrival detection through artificial neural network analysis of optical fiber intensity patterns for the optical sensors of United States Air Force reconnaissance systems.

  7. Deinterlacing using modular neural network

    NASA Astrophysics Data System (ADS)

    Woo, Dong H.; Eom, Il K.; Kim, Yoo S.

    2004-05-01

Deinterlacing is the conversion process from an interlaced scan to a progressive one. While many previous algorithms based on weighted sums cause blurring in edge regions, deinterlacing using a neural network can reduce the blurring by recovering high-frequency components through a learning process, and is found to be robust to noise. In the proposed algorithm, the input image is divided into edge and smooth regions, and one neural network is assigned to each region. Through this process, each neural network learns only patterns that are similar, which makes learning more effective and estimation more accurate. But even within each region there are various patterns, such as long edges and texture within the edge region. To solve this problem, a modular neural network is proposed in which two modules are combined at the output node. One module handles the low-frequency features of a local area of the input image, and the other handles the high-frequency features. With this structure, each module can learn different patterns while compensating for the drawbacks of its counterpart, and can therefore adapt effectively to the various patterns within each region. In simulation, the proposed algorithm shows better performance than conventional deinterlacing methods and a single neural network.

  8. Improvement of the Hopfield Neural Network by MC-Adaptation Rule

    NASA Astrophysics Data System (ADS)

    Zhou, Zhen; Zhao, Hong

    2006-06-01

We show that the performance of Hopfield neural networks, especially the quality of the recall and the effective storage capacity, can be greatly improved by making use of a recently presented neural network design method, without altering the overall structure of the network. In the improved neural network, a memory pattern is recalled exactly from initial states having a given degree of similarity with the memory pattern, and thus one can avoid applying the overlap criterion used in the Hopfield neural networks.
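For contrast with the improved design, a plain Hebbian Hopfield network with synchronous recall can be sketched as follows. The network size, stored pattern, and noise level are arbitrary; the MC-adaptation rule itself is not implemented here:

```python
import numpy as np

rng = np.random.default_rng(42)

def train_hopfield(patterns):
    """Hebbian (outer-product) weights with zero diagonal."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Synchronous sign updates until a fixed point (or step limit)."""
    s = state.copy()
    for _ in range(steps):
        new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(new, s):
            break
        s = new
    return s

n = 64
memory = np.where(rng.random(n) < 0.5, 1, -1)
W = train_hopfield(memory[None, :])

# probe the network with the memory corrupted in 8 of 64 bits
probe = memory.copy()
flip = rng.choice(n, size=8, replace=False)
probe[flip] *= -1
out = recall(W, probe)
```

With a single stored pattern and this noise level the recall is exact; the paper's point is that a designed network keeps such exact recall at much higher loading than the Hebbian rule does.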

  9. Classification of 2-dimensional array patterns: assembling many small neural networks is better than using a large one.

    PubMed

    Chen, Liang; Xue, Wei; Tokuda, Naoyuki

    2010-08-01

    In many pattern classification/recognition applications of artificial neural networks, an object to be classified is represented by a fixed sized 2-dimensional array of uniform type, which corresponds to the cells of a 2-dimensional grid of the same size. A general neural network structure, called an undistricted neural network, which takes all the elements in the array as inputs could be used for problems such as these. However, a districted neural network can be used to reduce the training complexity. A districted neural network usually consists of two levels of sub-neural networks. Each of the lower level neural networks, called a regional sub-neural network, takes the elements in a region of the array as its inputs and is expected to output a temporary class label, called an individual opinion, based on the partial information of the entire array. The higher level neural network, called an assembling sub-neural network, uses the outputs (opinions) of regional sub-neural networks as inputs, and by consensus derives the label decision for the object. Each of the sub-neural networks can be trained separately and thus the training is less expensive. The regional sub-neural networks can be trained and performed in parallel and independently, therefore a high speed can be achieved. We prove theoretically in this paper, using a simple model, that a districted neural network is actually more stable than an undistricted neural network in noisy environments. We conjecture that the result is valid for all neural networks. This theory is verified by experiments involving gender classification and human face recognition. We conclude that a districted neural network is highly recommended for neural network applications in recognition or classification of 2-dimensional array patterns in highly noisy environments. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
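The two-level districted structure can be sketched with single logistic units standing in for the sub-neural networks. The toy majority-of-pixels task and the 4-pixel districts are hypothetical stand-ins for the paper's gender and face data:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SubNet:
    """A single logistic unit standing in for one sub-neural network."""
    def __init__(self, n_in):
        self.w = rng.normal(scale=0.1, size=n_in)
        self.b = 0.0

    def fit(self, X, y, lr=0.5, epochs=300):
        for _ in range(epochs):
            p = sigmoid(X @ self.w + self.b)
            g = p - y                       # cross-entropy gradient
            self.w -= lr * (X.T @ g) / len(y)
            self.b -= lr * g.mean()

    def opinion(self, X):
        return (sigmoid(X @ self.w + self.b) > 0.5).astype(float)

# toy task: class 1 iff a 4x4 binary array contains a majority of ones
X = rng.integers(0, 2, size=(200, 16)).astype(float)
y = (X.sum(axis=1) > 8).astype(float)

# lower level: one regional sub-network per 4-pixel district
regions = np.split(X, 4, axis=1)
regional = [SubNet(4) for _ in range(4)]
for net, Xr in zip(regional, regions):
    net.fit(Xr, y)

# higher level: the assembling sub-network combines the four opinions
opinions = np.column_stack([net.opinion(Xr)
                            for net, Xr in zip(regional, regions)])
assembler = SubNet(4)
assembler.fit(opinions, y)
accuracy = float((assembler.opinion(opinions) == y).mean())
```

Each regional unit trains on only four inputs, and the units are independent, so they could be trained in parallel, which is the cost argument the abstract makes.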

  10. Antenna analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. 
Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern shaping. The interesting thing about D-C synthesis is that the side lobes have the same amplitude. Five-element arrays were used. Again, 41 pattern samples were used for the input. Nine actual D-C patterns ranging from -10 dB to -30 dB side lobe levels were used to train the network. A comparison between simulated and actual D-C techniques for a pattern with -22 dB side lobe level is shown. The goal for this research was to evaluate the performance of neural network computing with antennas. Future applications will employ the backpropagation training algorithm to drastically reduce the computational complexity involved in performing EM compensation for surface errors in large space reflector antennas.
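The forward problem that the trained network inverts, from element excitations to the sampled far-field pattern, can be sketched as follows. A uniform 20-element, half-wavelength-spaced array with 41 pattern samples is used here as an illustrative stand-in for the W-L or D-C excitations:

```python
import numpy as np

def array_factor(weights, theta, d=0.5):
    """Far-field array factor of a uniform linear array with element
    spacing d in wavelengths; theta measured from the array axis."""
    n = np.arange(len(weights))
    psi = 2 * np.pi * d * np.cos(theta)     # per-element phase progression
    return np.array([np.sum(weights * np.exp(1j * n * p)) for p in psi])

weights = np.ones(20)                 # uniform 20-element excitation
theta = np.linspace(0.0, np.pi, 41)   # 41 pattern samples
af = np.abs(array_factor(weights, theta))
```

A synthesis network of the kind described above would take samples like `af` as inputs and produce the 40 real and imaginary excitation values as outputs.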

  11. Antenna analysis using neural networks

    NASA Astrophysics Data System (ADS)

    Smith, William T.

    1992-09-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary).

  12. Introduction to Neural Networks.

    DTIC Science & Technology

    1992-03-01

Describes the parallel processing of information that can greatly reduce the time required to perform operations needed in pattern recognition. Keywords: Neural network, Artificial neural network, Neural net, ANN.

  13. Higher-Order Neural Networks Recognize Patterns

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Spirkovska, Lilly; Ochoa, Ellen

    1996-01-01

    Networks of higher order have enhanced capabilities to distinguish between different two-dimensional patterns and to recognize those patterns. Also enhanced capabilities to "learn" patterns to be recognized: "trained" with far fewer examples and, therefore, in less time than necessary to train comparable first-order neural networks.

  14. Pattern classification and recognition of invertebrate functional groups using self-organizing neural networks.

    PubMed

    Zhang, WenJun

    2007-07-01

    Self-organizing neural networks can be used to mimic non-linear systems. The main objective of this study is to make pattern classification and recognition on sampling information using two self-organizing neural network models. Invertebrate functional groups sampled in the irrigated rice field were classified and recognized using one-dimensional self-organizing map and self-organizing competitive learning neural networks. Comparisons between neural network models, distance (similarity) measures, and number of neurons were conducted. The results showed that self-organizing map and self-organizing competitive learning neural network models were effective in pattern classification and recognition of sampling information. Overall the performance of one-dimensional self-organizing map neural network was better than self-organizing competitive learning neural network. The number of neurons could determine the number of classes in the classification. Different neural network models with various distance (similarity) measures yielded similar classifications. Some differences, dependent upon the specific network structure, would be found. The pattern of an unrecognized functional group was recognized with the self-organizing neural network. A relative consistent classification indicated that the following invertebrate functional groups, terrestrial blood sucker; terrestrial flyer; tourist (nonpredatory species with no known functional role other than as prey in ecosystem); gall former; collector (gather, deposit feeder); predator and parasitoid; leaf miner; idiobiont (acarine ectoparasitoid), were classified into the same group, and the following invertebrate functional groups, external plant feeder; terrestrial crawler, walker, jumper or hunter; neustonic (water surface) swimmer (semi-aquatic), were classified into another group. 
It was concluded that reliable conclusions could be drawn from comparisons of different neural network models that use different distance (similarity) measures. Results with the larger consistency will be more reliable.
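A one-dimensional self-organizing map of the kind used in this study can be sketched as follows. Two neurons and two well-separated synthetic clusters stand in for the invertebrate functional-group samples; the data, map size, and learning schedules are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def train_som(data, n_neurons, epochs=50, lr0=0.5):
    """One-dimensional self-organizing map with a shrinking neighborhood."""
    weights = rng.random((n_neurons, data.shape[1]))
    radius0 = n_neurons / 2.0
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = t / n_steps
            lr = lr0 * (1.0 - frac)
            radius = max(radius0 * (1.0 - frac), 0.5)
            # best-matching unit by Euclidean distance
            bmu = int(np.argmin(((weights - x) ** 2).sum(axis=1)))
            dist = np.abs(np.arange(n_neurons) - bmu)
            h = np.exp(-dist ** 2 / (2.0 * radius ** 2))
            weights += lr * h[:, None] * (x - weights)
            t += 1
    return weights

def best_matching(weights, x):
    """Class label = index of the best-matching neuron."""
    return int(np.argmin(((weights - x) ** 2).sum(axis=1)))

# two well-separated toy 'functional groups' in a 3-D feature space
a = rng.normal(loc=0.0, scale=0.05, size=(30, 3))
b = rng.normal(loc=1.0, scale=0.05, size=(30, 3))
som = train_som(np.vstack([a, b]), n_neurons=2)

label_a = best_matching(som, a.mean(axis=0))
label_b = best_matching(som, b.mean(axis=0))
```

As the abstract notes, the number of neurons fixes the number of classes: with two neurons the map assigns each cluster its own best-matching unit.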

  15. Neural-Net Processing of Characteristic Patterns From Electronic Holograms of Vibrating Blades

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    1999-01-01

Finite-element-model-trained artificial neural networks can be used to process efficiently the characteristic patterns or mode shapes from electronic holograms of vibrating blades. The models used for routine design may not yet be sufficiently accurate for this application. This document discusses the creation of characteristic patterns; compares model-generated and experimental characteristic patterns; and discusses the neural networks that transform the characteristic patterns into strain or damage information. The current potential to adapt electronic holography to spin rigs, wind tunnels and engines provides an incentive to have accurate finite element models for training neural networks.

  16. Neural constraints on learning.

    PubMed

    Sadtler, Patrick T; Quick, Kristin M; Golub, Matthew D; Chase, Steven M; Ryu, Stephen I; Tyler-Kabara, Elizabeth C; Yu, Byron M; Batista, Aaron P

    2014-08-28

    Learning, whether motor, sensory or cognitive, requires networks of neurons to generate new activity patterns. As some behaviours are easier to learn than others, we asked if some neural activity patterns are easier to generate than others. Here we investigate whether an existing network constrains the patterns that a subset of its neurons is capable of exhibiting, and if so, what principles define this constraint. We employed a closed-loop intracortical brain-computer interface learning paradigm in which Rhesus macaques (Macaca mulatta) controlled a computer cursor by modulating neural activity patterns in the primary motor cortex. Using the brain-computer interface paradigm, we could specify and alter how neural activity mapped to cursor velocity. At the start of each session, we observed the characteristic activity patterns of the recorded neural population. The activity of a neural population can be represented in a high-dimensional space (termed the neural space), wherein each dimension corresponds to the activity of one neuron. These characteristic activity patterns comprise a low-dimensional subspace (termed the intrinsic manifold) within the neural space. The intrinsic manifold presumably reflects constraints imposed by the underlying neural circuitry. Here we show that the animals could readily learn to proficiently control the cursor using neural activity patterns that were within the intrinsic manifold. However, animals were less able to learn to proficiently control the cursor using activity patterns that were outside of the intrinsic manifold. These results suggest that the existing structure of a network can shape learning. On a timescale of hours, it seems to be difficult to learn to generate neural activity patterns that are not consistent with the existing network structure. 
These findings offer a network-level explanation for the observation that we are more readily able to learn new skills when they are related to the skills that we already possess.

  17. Neural-Network Simulator

    NASA Technical Reports Server (NTRS)

    Mitchell, Paul H.

    1991-01-01

    F77NNS (FORTRAN 77 Neural Network Simulator) computer program simulates popular back-error-propagation neural network. Designed to take advantage of vectorization when used on computers having this capability, also used on any computer equipped with ANSI-77 FORTRAN Compiler. Problems involving matching of patterns or mathematical modeling of systems fit class of problems F77NNS designed to solve. Program has restart capability so neural network solved in stages suitable to user's resources and desires. Enables user to customize patterns of connections between layers of network. Size of neural network F77NNS applied to limited only by amount of random-access memory available to user.

  18. Orthogonal Patterns In A Binary Neural Network

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1991-01-01

    Report presents some recent developments in theory of binary neural networks. Subject matter relevant to associate (content-addressable) memories and to recognition of patterns - both of considerable importance in advancement of robotics and artificial intelligence. When probed by any pattern, network converges to one of stored patterns.

  19. Heuristic pattern correction scheme using adaptively trained generalized regression neural networks.

    PubMed

    Hoya, T; Chambers, J A

    2001-01-01

In many pattern classification problems, an intelligent neural system is required which can learn newly encountered but misclassified patterns incrementally, while keeping a good classification performance over the past patterns stored in the network. In this paper, a heuristic pattern correction scheme is proposed using adaptively trained generalized regression neural networks (GRNNs). The scheme is based upon both network growing and dual-stage shrinking mechanisms. In the network growing phase, a subset of the misclassified patterns in each incoming data set is iteratively added into the network until all the patterns in the incoming data set are classified correctly. Then, the redundancy introduced in the growing phase is removed in the dual-stage network shrinking. Both long- and short-term memory models, motivated by biological studies of the brain, are considered in the network shrinking. The learning capability of the proposed scheme is investigated through extensive simulation studies.
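The growing phase can be sketched with a minimal GRNN whose pattern layer stores training examples and predicts with a Gaussian kernel average. The toy data, the kernel width, and the round-to-class rule are illustrative, and the dual-stage shrinking is omitted:

```python
import numpy as np

class GRNN:
    """Generalized regression neural network: a kernel-weighted average
    of the targets of the patterns stored in the pattern layer."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma
        self.X = np.empty((0, 0))
        self.y = np.empty(0)

    def add(self, x, y):
        x = np.atleast_2d(x)
        self.X = x if self.X.size == 0 else np.vstack([self.X, x])
        self.y = np.append(self.y, y)

    def predict(self, x):
        d2 = ((self.X - x) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * self.sigma ** 2))
        return float((w * self.y).sum() / w.sum())

def grow(net, X, y):
    """Growing phase (sketch): keep adding misclassified patterns until
    the whole incoming set is classified correctly."""
    changed = True
    while changed:
        changed = False
        for xi, yi in zip(X, y):
            if net.X.size == 0 or round(net.predict(xi)) != yi:
                net.add(xi, yi)
                changed = True
    return net

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
net = grow(GRNN(sigma=0.3), X, y)
preds = [round(net.predict(xi)) for xi in X]
```

Only a subset of the incoming patterns ends up in the pattern layer (here two of four), which is the redundancy that the paper's shrinking stage then prunes further.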

  20. Optimization of Training Sets For Neural-Net Processing of Characteristic Patterns From Vibrating Solids

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J. (Inventor)

    2006-01-01

An artificial neural network is disclosed that processes holography-generated characteristic patterns of vibrating structures along with finite-element models. The present invention provides a folding operation for conditioning training sets so as to optimally train feed-forward neural networks to process characteristic fringe patterns. The folding operation increases the sensitivity of the feed-forward network for detecting changes in the characteristic pattern. The folding routine manipulates input pixels so that they are scaled according to their location in an intensity range rather than their position in the characteristic pattern.

  1. Patterns recognition of electric brain activity using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Musatov, V. Yu.; Pchelintseva, S. V.; Runnova, A. E.; Hramov, A. E.

    2017-04-01

We present an approach for the recognition of various cognitive processes in brain activity during the perception of ambiguous images. On the basis of the developed theoretical background and the experimental data, we propose a new classification of oscillating patterns in the human EEG using an artificial neural network approach. After learning, the artificial neural network reliably identified cube-recognition processes, for example, left- or right-oriented Necker cubes with different intensities of their edges. We construct an artificial neural network based on a perceptron architecture and demonstrate its effectiveness in pattern recognition of the experimental EEG.

  2. Automation of Some Operations of a Wind Tunnel Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Buggele, Alvin E.

    1996-01-01

    Artificial neural networks were used successfully to sequence operations in a small, recently modernized, supersonic wind tunnel at NASA-Lewis Research Center. The neural nets generated correct estimates of shadowgraph patterns, pressure sensor readings and mach numbers for conditions occurring shortly after startup and extending to fully developed flow. Artificial neural networks were trained and tested for estimating: sensor readings from shadowgraph patterns, shadowgraph patterns from shadowgraph patterns and sensor readings from sensor readings. The 3.81 by 10 in. (0.0968 by 0.254 m) tunnel was operated with its mach 2.0 nozzle, and shadowgraph was recorded near the nozzle exit. These results support the thesis that artificial neural networks can be combined with current workstation technology to automate wind tunnel operations.

  3. Optical-Correlator Neural Network Based On Neocognitron

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    Multichannel optical correlator implements shift-invariant, high-discrimination pattern-recognizing neural network based on paradigm of neocognitron. Selected as basic building block of this neural network because invariance under shifts is inherent advantage of Fourier optics included in optical correlators in general. Neocognitron is conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing achieved by iteratively feeding back output of feature correlator to input spatial light modulator and updating Fourier filters. Neural network trained by use of characteristic features extracted from target images. Multichannel implementation enables parallel processing of large number of selected features.

  4. Electronic system with memristive synapses for pattern recognition

    PubMed Central

    Park, Sangsu; Chu, Myonglae; Kim, Jongin; Noh, Jinwoo; Jeon, Moongu; Hun Lee, Byoung; Hwang, Hyunsang; Lee, Boreom; Lee, Byung-geun

    2015-01-01

    Memristive synapses, the most promising passive devices for synaptic interconnections in artificial neural networks, are the driving force behind recent research on hardware neural networks. Despite significant efforts to utilize memristive synapses, progress to date has only shown the possibility of building a neural network system that can classify simple image patterns. In this article, we report a high-density cross-point memristive synapse array with improved synaptic characteristics. The proposed PCMO-based memristive synapse exhibits the necessary gradual and symmetrical conductance changes, and has been successfully adapted to a neural network system. The system learns, and later recognizes, the human thought pattern corresponding to three vowels, i.e. /a /, /i /, and /u/, using electroencephalography signals generated while a subject imagines speaking vowels. Our successful demonstration of a neural network system for EEG pattern recognition is likely to intrigue many researchers and stimulate a new research direction. PMID:25941950

  5. Spatio-Temporal Neural Networks for Vision, Reasoning and Rapid Decision Making

    DTIC Science & Technology

    1994-08-31

Project report on spatio-temporal neural networks for vision, reasoning and rapid decision making, ONR grant N00014-93-1-1149. The recoverable fragments of the abstract concern long-term knowledge base (LTKB) facts, pattern storage not possible in common neural networks, and the application of optical chaos to temporal pattern search in a nonlinear optical system.

  6. An FPGA Implementation of a Polychronous Spiking Neural Network with Delay Adaptation.

    PubMed

    Wang, Runchun; Cohen, Gregory; Stiefel, Klaus M; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, André

    2013-01-01

    We present an FPGA implementation of a re-configurable, polychronous spiking neural network with a large capacity for spatial-temporal patterns. The proposed neural network generates delay paths de novo, so that only connections that actually appear in the training patterns will be created. This allows the proposed network to use all the axons (variables) to store information. Spike Timing Dependent Delay Plasticity is used to fine-tune and add dynamics to the network. We use a time multiplexing approach allowing us to achieve 4096 (4k) neurons and up to 1.15 million programmable delay axons on a Virtex 6 FPGA. Test results show that the proposed neural network is capable of successfully recalling more than 95% of all spikes for 96% of the stored patterns. The tests also show that the neural network is robust to noise from random input spikes.

  7. Pattern learning with deep neural networks in EMG-based speech recognition.

    PubMed

    Wand, Michael; Schultz, Tanja

    2014-01-01

    We report on classification of phones and phonetic features from facial electromyographic (EMG) data, within the context of our EMG-based Silent Speech interface. In this paper we show that a Deep Neural Network can be used to perform this classification task, yielding a significant improvement over conventional Gaussian Mixture models. Our central contribution is the visualization of patterns which are learned by the neural network. With increasing network depth, these patterns represent more and more intricate electromyographic activity.

  8. Neural Networks for the Beginner.

    ERIC Educational Resources Information Center

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  9. Neural net target-tracking system using structured laser patterns

    NASA Astrophysics Data System (ADS)

    Cho, Jae-Wan; Lee, Yong-Bum; Lee, Nam-Ho; Park, Soon-Yong; Lee, Jongmin; Choi, Gapchu; Baek, Sunghyun; Park, Dong-Sun

    1996-06-01

In this paper, we describe a robot end-effector tracking system using sensory information from recently announced structured-pattern laser diodes, which can generate images with several different types of structured pattern. A neural network approach is employed to recognize the robot end-effector under three types of motion: translation, scaling and rotation. Features for the neural network to detect the position of the end-effector are extracted from the preprocessed images. Artificial neural networks are used to store models and to match unknown input features, recognizing the position of the robot end-effector. Since a minimal number of samples is used for the different directions of the robot end-effector, a neural network with generalization capability can be utilized for unknown input features. A feedforward neural network trained with back-propagation learning is used to detect the position of the robot end-effector. Another feedforward neural network module is used to estimate the motion from a sequence of images and to control the movements of the robot end-effector. Combining the two neural networks for recognizing the robot end-effector and estimating the motion with the preprocessing stage, the whole system tracks the robot end-effector effectively.

  10. Firing patterns transition and desynchronization induced by time delay in neural networks

    NASA Astrophysics Data System (ADS)

    Huang, Shoufang; Zhang, Jiqian; Wang, Maosheng; Hu, Chin-Kun

    2018-06-01

    We used the Hindmarsh-Rose (HR) model (Hindmarsh and Rose, 1984) to study the effect of time delay on the transition of firing behaviors and on desynchronization in neural networks. As the time delay increases, neural networks exhibit a diversity of firing behaviors, including regular spiking or bursting and firing-pattern transitions (FPTs). Meanwhile, desynchronization of firing and unstable bursting with decreasing amplitude are also increasingly enhanced as the time delay grows. Furthermore, we studied the effect of coupling strength and network randomness on these phenomena. Our results imply that time delays can induce transitions and desynchronization of firing behaviors in neural networks. These findings provide new insight into the role of time delay in the firing activities of neural networks and can help to better understand firing phenomena in complex neural networks. A possible mechanism in the brain that can cause an increase in time delay is discussed.
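
The delayed-coupling setup described above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: two Hindmarsh-Rose neurons with delayed diffusive coupling, integrated with the Euler method; the coupling strength, delay, and current values are assumed textbook choices.

```python
import numpy as np

# Sketch (assumed parameters, not the paper's setup): two Hindmarsh-Rose
# neurons, each receiving the other's delayed membrane potential through
# diffusive coupling g*(x_other(t - tau) - x_self(t)).
def hr_pair(g=0.1, tau=20.0, T=2000.0, dt=0.01, I=3.0):
    n = int(T / dt)
    d = int(tau / dt)                       # delay expressed in time steps
    x = np.full((2, n), -1.6)
    y = np.zeros((2, n))
    z = np.zeros((2, n))
    x[1, 0] = -1.0                          # break the symmetry of the pair
    for t in range(n - 1):
        xd = x[::-1, max(t - d, 0)]         # delayed state of the *other* neuron
        dx = y[:, t] + 3*x[:, t]**2 - x[:, t]**3 - z[:, t] + I + g*(xd - x[:, t])
        dy = 1 - 5*x[:, t]**2 - y[:, t]
        dz = 0.006*(4*(x[:, t] + 1.6) - z[:, t])   # slow adaptation variable
        x[:, t+1] = x[:, t] + dt*dx
        y[:, t+1] = y[:, t] + dt*dy
        z[:, t+1] = z[:, t] + dt*dz
    return x
```

Sweeping `tau` in such a sketch is how one would look for the firing-pattern transitions the abstract describes.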

  11. Neural networks: Alternatives to conventional techniques for automatic docking

    NASA Technical Reports Server (NTRS)

    Vinz, Bradley L.

    1994-01-01

    Automatic docking of orbiting spacecraft is a crucial operation involving the identification of vehicle orientation as well as complex approach dynamics. The chaser spacecraft must be able to recognize the target spacecraft within a scene and achieve accurate closing maneuvers. In a video-based system, a target scene must be captured and transformed into a pattern of pixels. Successful recognition lies in the interpretation of this pattern. Due to their powerful pattern recognition capabilities, artificial neural networks offer a potential role in interpretation and automatic docking processes. Neural networks can reduce the computational time required by existing image processing and control software. In addition, neural networks are capable of recognizing and adapting to changes in their dynamic environment, enabling enhanced performance, redundancy, and fault tolerance. Most neural networks are robust to failure, capable of continued operation with a slight degradation in performance after minor failures. This paper discusses the particular automatic docking tasks neural networks can perform as viable alternatives to conventional techniques.

  12. Space-Time Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Shelton, Robert O.

    1992-01-01

    The concept of a space-time neural network affords distributed temporal memory, enabling such a network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace the synaptic-connection weights of a conventional back-error-propagation neural network.
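
The idea of replacing a scalar synaptic weight with a digital filter can be shown in a few lines. This is a hedged sketch with assumed sizes and an assumed squashing nonlinearity, not the NTRS implementation: each synapse is a small FIR filter over the input's recent history, which gives the unit the distributed temporal memory the abstract describes.

```python
import numpy as np

def fir_neuron(inputs, taps):
    """inputs: (n_in, T) signals; taps: (n_in, k) FIR coefficients per synapse.

    A one-tap filter (k = 1) reduces to a conventional weighted sum.
    """
    n_in, T = inputs.shape
    k = taps.shape[1]
    out = np.zeros(T)
    for t in range(T):
        # window of the last k samples, zero-padded at the start;
        # w[:, -1] is the current sample
        w = inputs[:, max(0, t - k + 1):t + 1]
        w = np.pad(w, ((0, 0), (k - w.shape[1], 0)))
        out[t] = np.tanh(np.sum(taps * w))   # filtered sum through a squashing function
    return out
```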

  13. Higher-order neural network software for distortion invariant object recognition

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Spirkovska, Lilly

    1991-01-01

    The state of the art in pattern recognition for applications such as automatic target recognition and industrial robotic vision relies on digital image processing. We present a higher-order neural network model and software which perform the complete feature-extraction/pattern-classification paradigm required for automatic pattern recognition. Using a third-order neural network, we demonstrate complete, 100-percent-accurate invariance to distortions of scale, position, and in-plane rotation. In a higher-order neural network, feature extraction is built into the network and does not have to be learned; only the relatively simple classification step must be learned. This is the key to achieving very rapid training. The training set is also much smaller than with standard neural network software, because the higher-order network has to be shown only one view of each object to be learned, not every possible view. The software and graphical user interface run on any Sun workstation. Results of the use of the neural software in autonomous robotic vision systems are presented; such a system could have extensive application in robotic manufacturing.
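
The geometric reason a third-order network can be invariant to translation, scale, and in-plane rotation can be illustrated directly: each triple of input points can be encoded by the interior angles of the triangle it forms, and those angles survive any similarity transform. The helper names and angle encoding below are illustrative assumptions, not the paper's software.

```python
import numpy as np

def interior_angles(p, q, r):
    """Sorted interior angles (degrees) of the triangle p-q-r: invariant to
    translation, uniform scaling, and in-plane rotation of the three points."""
    pts = [np.asarray(p, float), np.asarray(q, float), np.asarray(r, float)]
    angles = []
    for i in range(3):
        a, b, c = pts[i], pts[(i + 1) % 3], pts[(i + 2) % 3]
        u, v = b - a, c - a
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1, 1))))
    return sorted(angles)

def transform(pts, scale, theta, shift):
    """Apply a similarity transform (scale, rotate by theta, translate)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return [scale * R @ np.asarray(p, float) + shift for p in pts]
```

Because the angle code of every pixel triple is unchanged by these distortions, the classification weights attached to those codes need to be learned from only one view.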

  14. The neural network classification of false killer whale (Pseudorca crassidens) vocalizations.

    PubMed

    Murray, S O; Mercado, E; Roitblat, H L

    1998-12-01

    This study reports the use of unsupervised, self-organizing neural networks to categorize the repertoire of false killer whale vocalizations. Self-organizing networks can detect patterns in their input and partition those patterns into categories without requiring that the number or types of categories be predefined. The inputs to the networks were two-dimensional characterizations of false killer whale vocalizations, in which each vocalization was described by a sequence of short-time measurements of duty cycle and peak frequency. The first network used competitive learning, in which units in a competitive layer distribute themselves to recognize frequently presented input vectors; this network yielded classes representing typical patterns in the vocalizations. The second network was a Kohonen feature map, which organized the outputs topologically and provided a graphical organization of pattern relationships. The networks performed well as measured by (1) the average correlation between the input vectors and the weight vectors of each category, and (2) the ability of the networks to classify novel vocalizations. The techniques used in this study could easily be applied to other species and facilitate the development of objective, comprehensive repertoire models.
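
The competitive-learning step can be sketched as follows. The learning rate, epoch count, and initialization are generic assumptions rather than the study's settings: winner-take-all units drift toward frequently presented input vectors, so each unit ends up representing one category of calls.

```python
import numpy as np

def competitive_learning(X, W0, lr=0.1, epochs=50, seed=0):
    """X: (n_samples, dim) inputs; W0: initial (n_units, dim) weight vectors."""
    rng = np.random.default_rng(seed)
    W = np.array(W0, dtype=float)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            w = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # winning unit
            W[w] += lr * (x - W[w])                            # move winner toward input
    return W
```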

  15. Long-Term Memory Stabilized by Noise-Induced Rehearsal

    PubMed Central

    Wei, Yi

    2014-01-01

    Cortical networks can maintain memories for decades despite the short lifetime of synaptic strengths. Can a neural network store long-lasting memories in unstable synapses? Here, we study the effects of ongoing spike-timing-dependent plasticity (STDP) on the stability of memory patterns stored in synapses of an attractor neural network. We show that certain classes of STDP rules can stabilize all stored memory patterns despite a short lifetime of synapses. In our model, unstructured neural noise, after passing through the recurrent network connections, carries the imprint of all memory patterns in temporal correlations. STDP, combined with these correlations, leads to reinforcement of all stored patterns, even those that are never explicitly visited. Our findings may provide the functional reason for irregular spiking displayed by cortical neurons and justify models of system memory consolidation. Therefore, we propose that irregular neural activity is the feature that helps cortical networks maintain stable connections. PMID:25411507
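
The pair-based STDP window underlying such models can be written down compactly. The amplitudes and time constant below are generic textbook values, not the values used in this study: pre-before-post spike pairs potentiate the synapse, post-before-pre pairs depress it, and both effects decay with the spike-time difference.

```python
import numpy as np

def stdp_dw(dt_spike, A_plus=0.01, A_minus=0.012, tau=20.0):
    """dt_spike = t_post - t_pre (ms); returns the synaptic weight change."""
    dt_spike = np.asarray(dt_spike, dtype=float)
    return np.where(dt_spike > 0,
                    A_plus * np.exp(-dt_spike / tau),   # causal pair: potentiate
                    -A_minus * np.exp(dt_spike / tau))  # anti-causal pair: depress
```

In the model above, noise filtered through the recurrent connections produces exactly the spike-time correlations that make this rule reinforce the stored patterns.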

  16. Nested neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1988-01-01

    Nested neural networks, consisting of small interconnected subnetworks, allow for the storage and retrieval of neural state patterns of different sizes. The subnetworks are naturally categorized by layers corresponding to spatial frequencies in the pattern field. The storage capacity and the error-correction capability of the subnetworks generally increase with the degree of connectivity between layers (the nesting degree). Storing only a few subpatterns in each subnetwork yields a vast storage capacity for patterns and subpatterns in the nested network while maintaining high stability and error-correction capability.

  17. A deep convolutional neural network to analyze position averaged convergent beam electron diffraction patterns.

    PubMed

    Xu, W; LeBeau, J M

    2018-05-01

    We establish a series of deep convolutional neural networks to automatically analyze position averaged convergent beam electron diffraction patterns. The networks first calibrate the zero-order disk size, center position, and rotation without the need for pretreating the data. With the aligned data, additional networks then measure the sample thickness and tilt. The performance of the network is explored as a function of a variety of variables including thickness, tilt, and dose. A methodology to explore the response of the neural network to various pattern features is also presented. Processing patterns at a rate of  ∼ 0.1 s/pattern, the network is shown to be orders of magnitude faster than a brute force method while maintaining accuracy. The approach is thus suitable for automatically processing big, 4D STEM data. We also discuss the generality of the method to other materials/orientations as well as a hybrid approach that combines the features of the neural network with least squares fitting for even more robust analysis. The source code is available at https://github.com/subangstrom/DeepDiffraction. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Developing neuronal networks: Self-organized criticality predicts the future

    NASA Astrophysics Data System (ADS)

    Pu, Jiangbo; Gong, Hui; Li, Xiangning; Luo, Qingming

    2013-01-01

    Self-organized criticality emerging in neural activity is one of the key concepts describing the formation and function of developing neuronal networks. The relationship between critical dynamics and neural development is both theoretically and experimentally appealing. However, whereas it is well known that cortical networks exhibit a rich repertoire of activity patterns at different stages during in vitro maturation, the dynamics of activity patterns across the entire course of neural development remain unclear. Here we show that a series of metastable network states emerged in the developing and ``aging'' process of hippocampal networks cultured from dissociated rat neurons. The unidirectional sequence of state transitions was observed only in networks showing power-law scaling of distributed neuronal avalanches. Our data suggest that self-organized criticality may guide spontaneous activity into a sequential succession of homeostatically regulated transient patterns during development, which may help to predict the tendency of neural development at early ages.

  19. A New Local Bipolar Autoassociative Memory Based on External Inputs of Discrete Recurrent Neural Networks With Time Delay.

    PubMed

    Zhou, Caigen; Zeng, Xiaoqin; Luo, Chaomin; Zhang, Huaguang

    In this paper, local bipolar auto-associative memories are presented based on discrete recurrent neural networks with a class of gain-type activation function. The weight parameters of the neural networks are obtained from a set of inequalities, without a learning procedure. Global exponential stability criteria are established to ensure the accuracy of the restored patterns in the presence of time delays and external inputs. The proposed methodology effectively suppresses spurious memory patterns while achieving high memory capacity. The effectiveness, robustness, and fault-tolerance capability are validated by simulated experiments.

  20. Fractal Patterns of Neural Activity Exist within the Suprachiasmatic Nucleus and Require Extrinsic Network Interactions

    PubMed Central

    Hu, Kun; Meijer, Johanna H.; Shea, Steven A.; vanderLeest, Henk Tjebbe; Pittman-Polletta, Benjamin; Houben, Thijs; van Oosterhout, Floor; Deboer, Tom; Scheer, Frank A. J. L.

    2012-01-01

    The mammalian central circadian pacemaker (the suprachiasmatic nucleus, SCN) contains thousands of neurons that are coupled through a complex network of interactions. In addition to the established role of the SCN in generating rhythms of ∼24 hours in many physiological functions, the SCN was recently shown to be necessary for normal self-similar/fractal organization of motor activity and heart rate over a wide range of time scales—from minutes to 24 hours. To test whether the neural network within the SCN is sufficient to generate such fractal patterns, we studied multi-unit neural activity of in vivo and in vitro SCNs in rodents. In vivo SCN-neural activity exhibited fractal patterns that are virtually identical in mice and rats and are similar to those in motor activity at time scales from minutes up to 10 hours. In addition, these patterns remained unchanged when the main afferent signal to the SCN, namely light, was removed. However, the fractal patterns of SCN-neural activity are not autonomous within the SCN as these patterns completely broke down in the isolated in vitro SCN despite persistence of circadian rhythmicity. Thus, SCN-neural activity is fractal in the intact organism and these fractal patterns require network interactions between the SCN and extra-SCN nodes. Such a fractal control network could underlie the fractal regulation observed in many physiological functions that involve the SCN, including motor control and heart rate regulation. PMID:23185285
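
Fractal (scale-invariant) patterns of the kind described above are commonly quantified with detrended fluctuation analysis (DFA). The sketch below is a generic DFA implementation with assumed window sizes, not the authors' analysis pipeline: the scaling exponent alpha is the slope of the fluctuation function in log-log coordinates, with alpha near 0.5 for uncorrelated noise and larger values for long-range-correlated signals.

```python
import numpy as np

def dfa_exponent(signal, scales=(8, 16, 32, 64, 128)):
    """Estimate the DFA scaling exponent of a 1-D signal."""
    y = np.cumsum(signal - np.mean(signal))       # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        rms = []
        for i in range(n):
            seg = y[i*s:(i+1)*s]
            t = np.arange(s)
            fit = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            rms.append(np.sqrt(np.mean((seg - fit) ** 2)))
        F.append(np.mean(rms))
    # scaling exponent: slope of log F(s) versus log s
    return np.polyfit(np.log(scales), np.log(F), 1)[0]
```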

  1. Development of a computational model on the neural activity patterns of a visual working memory in a hierarchical feedforward Network

    NASA Astrophysics Data System (ADS)

    An, Soyoung; Choi, Woochul; Paik, Se-Bum

    2015-11-01

    Understanding the mechanism of information processing in the human brain remains a unique challenge because the nonlinear interactions between the neurons in the network are extremely complex and because controlling every relevant parameter during an experiment is difficult. Therefore, a simulation using simplified computational models may be an effective approach. In the present study, we developed a general model of neural networks that can simulate nonlinear activity patterns in the hierarchical structure of a neural network system. To test our model, we first examined whether our simulation could match the previously-observed nonlinear features of neural activity patterns. Next, we performed a psychophysics experiment for a simple visual working memory task to evaluate whether the model could predict the performance of human subjects. Our studies show that the model is capable of reproducing the relationship between memory load and performance and may contribute, in part, to our understanding of how the structure of neural circuits can determine the nonlinear neural activity patterns in the human brain.

  2. Neural dynamics based on the recognition of neural fingerprints

    PubMed Central

    Carrillo-Medina, José Luis; Latorre, Roberto

    2015-01-01

    Experimental evidence has revealed the existence of characteristic spiking features in different neural signals, e.g., individual neural signatures identifying the emitter or functional signatures characterizing specific tasks. These neural fingerprints may play a critical role in neural information processing, since they allow receptors to discriminate or contextualize incoming stimuli. This could be a powerful strategy for neural systems that greatly enhances the encoding and processing capacity of these networks. Nevertheless, the study of information processing based on the identification of specific neural fingerprints has attracted little attention. In this work, we study (i) the emerging collective dynamics of a network of neurons that communicate with each other by exchange of neural fingerprints and (ii) the influence of the network topology on the self-organizing properties within the network. Complex collective dynamics emerge in the network in the presence of stimuli. Predefined inputs, i.e., specific neural fingerprints, are detected and encoded into coexisting patterns of activity that propagate throughout the network with different spatial organization. The patterns evoked by a stimulus can survive after the stimulation is over, which provides memory mechanisms to the network. The results presented in this paper suggest that neural information processing based on neural fingerprints can be a plausible, flexible, and powerful strategy. PMID:25852531

  3. A neural network prototyping package within IRAF

    NASA Technical Reports Server (NTRS)

    Bazell, D.; Bankman, I.

    1992-01-01

    We outline our plans for incorporating a Neural Network Prototyping Package into the IRAF environment. The package we are developing will allow the user to choose between different types of networks and to specify the details of the particular architecture chosen. Neural networks consist of a highly interconnected set of simple processing units. The strengths of the connections between units are determined by weights, which are adaptively set as the network 'learns'. In some cases, learning is a separate phase of the network's use cycle, while in other cases the network learns continuously. Neural networks have been found to be very useful in pattern recognition and image processing applications. They can form very general 'decision boundaries' to differentiate between objects in pattern space, and they can be used for associative recall of patterns based on partial cues and for adaptive filtering. We discuss the different architectures we plan to use and give examples of what they can do.

  4. Organization of Anti-Phase Synchronization Pattern in Neural Networks: What are the Key Factors?

    PubMed Central

    Li, Dong; Zhou, Changsong

    2011-01-01

    Anti-phase oscillation has been widely observed in cortical neural networks. Elucidating the mechanism underlying the organization of the anti-phase pattern is significant for better understanding more complicated pattern formation in brain networks. In dynamical systems theory, the organization of an anti-phase oscillation pattern has usually been attributed to time delay in the coupling. This is consistent with conduction delays in real neural networks in the brain due to the finite propagation velocity of action potentials. However, other structural factors in cortical neural networks, such as modular organization (connection density) and the coupling type (excitatory or inhibitory), could also play an important role. In this work, we investigate the anti-phase oscillation pattern organized on a two-module network of either a neuronal cell model or a neural mass model, and analyze the impact of the conduction delay times, the connection densities, and the coupling types. Our results show that delay times and coupling types can play key roles in this organization. The connection densities may influence the stability of an anti-phase pattern that exists due to the other factors. Furthermore, we show that anti-phase synchronization of slow oscillations can be achieved with small delay times if there is interaction between slow and fast oscillations. These results are significant for further understanding more realistic spatiotemporal dynamics of cortico-cortical communications. PMID:22232576

  5. Pattern recognition neural-net by spatial mapping of biology visual field

    NASA Astrophysics Data System (ADS)

    Lin, Xin; Mori, Masahiko

    2000-05-01

    The method of spatial mapping found in biological visual fields is applied to artificial neural networks for pattern recognition. By a coordinate transform known as the complex-logarithm mapping, followed by a Fourier transform, input images are converted into scale-, rotation-, and shift-invariant patterns and then fed into a multilayer neural network for learning and recognition. Results of a computer simulation and an optical experimental system are described.
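
The invariance argument has two steps: the complex-log (log-polar) mapping turns rotation and scaling of the image into translations, and the Fourier magnitude spectrum is invariant to translations. The second step can be checked directly in one dimension; the log-polar resampling itself is omitted here, and the test signal is an arbitrary assumption.

```python
import numpy as np

def fourier_magnitude(x):
    """Magnitude spectrum: unchanged by any circular shift of the input."""
    return np.abs(np.fft.fft(x))

# An arbitrary signal and a circular shift of it, standing in for the
# translation that a rotated/scaled image becomes after complex-log mapping.
sig = np.sin(np.linspace(0, 8*np.pi, 128)) + 0.3*np.cos(np.linspace(0, 20*np.pi, 128))
shifted = np.roll(sig, 17)
```

A circular shift multiplies each Fourier coefficient by a unit-modulus phase factor, so the magnitudes coincide exactly, which is why the cascaded transform feeds the classifier an invariant pattern.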

  6. Flow Pattern Identification of Horizontal Two-Phase Refrigerant Flow Using Neural Networks

    DTIC Science & Technology

    2015-12-31

    AFRL-RQ-WP-TP-2016-0079. Journal article postprint, 01 October 2013 – 22 June 2015. Abdeel J… Neural networks were used to automatically identify two-phase flow patterns for refrigerant R-134a flowing in a horizontal tube. In laboratory experiments…

  7. Effect of inhibitory firing pattern on coherence resonance in random neural networks

    NASA Astrophysics Data System (ADS)

    Yu, Haitao; Zhang, Lianghao; Guo, Xinmeng; Wang, Jiang; Cao, Yibin; Liu, Jing

    2018-01-01

    The effect of inhibitory firing patterns on coherence resonance (CR) in random neuronal networks is systematically studied. Spiking and bursting are the two main types of firing pattern considered in this work. Numerical results show that, irrespective of the inhibitory firing pattern, the regularity of the network is maximized by an optimal intensity of external noise, indicating the occurrence of coherence resonance. Moreover, the firing pattern of the inhibitory neurons indeed has a significant influence on coherence resonance, but the efficacy is determined by the network properties: in networks with strong coupling strength but weak inhibition, bursting neurons largely increase the amplitude of the resonance, while in networks with strong inhibition they decrease the noise intensity that induces coherence resonance. Different temporal windows of inhibition induced by the different inhibitory neuron types may account for these observations. Network structure also plays a constructive role in coherence resonance: there exists an optimal network topology that maximizes the regularity of the neural system.

  8. Continuous monitoring of the lunar or Martian subsurface using on-board pattern recognition and neural processing of Rover geophysical data

    NASA Technical Reports Server (NTRS)

    Glass, Charles E.; Boyd, Richard V.; Sternberg, Ben K.

    1991-01-01

    The overall aim is to provide base technology for an automated vision system for on-board interpretation of geophysical data. During the first year's work, it was demonstrated that geophysical data can be treated as patterns and interpreted using single neural networks. Current research is developing an integrated vision system comprising neural networks, algorithmic preprocessing, and expert knowledge. This system is to be tested incrementally using synthetic geophysical patterns, laboratory generated geophysical patterns, and field geophysical patterns.

  9. Long-term memory stabilized by noise-induced rehearsal.

    PubMed

    Wei, Yi; Koulakov, Alexei A

    2014-11-19

    Cortical networks can maintain memories for decades despite the short lifetime of synaptic strengths. Can a neural network store long-lasting memories in unstable synapses? Here, we study the effects of ongoing spike-timing-dependent plasticity (STDP) on the stability of memory patterns stored in synapses of an attractor neural network. We show that certain classes of STDP rules can stabilize all stored memory patterns despite a short lifetime of synapses. In our model, unstructured neural noise, after passing through the recurrent network connections, carries the imprint of all memory patterns in temporal correlations. STDP, combined with these correlations, leads to reinforcement of all stored patterns, even those that are never explicitly visited. Our findings may provide the functional reason for irregular spiking displayed by cortical neurons and justify models of system memory consolidation. Therefore, we propose that irregular neural activity is the feature that helps cortical networks maintain stable connections. Copyright © 2014 the authors 0270-6474/14/3415804-12$15.00/0.

  10. Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks

    PubMed Central

    2018-01-01

    Much of the information the brain processes and stores is temporal in nature—a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds—we show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. PMID:29537963

  11. Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks.

    PubMed

    Goudar, Vishwa; Buonomano, Dean V

    2018-03-14

    Much of the information the brain processes and stores is temporal in nature-a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds-we show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. © 2018, Goudar et al.

  12. Neural network-based system for pattern recognition through a fiber optic bundle

    NASA Astrophysics Data System (ADS)

    Gamo-Aranda, Javier; Rodriguez-Horche, Paloma; Merchan-Palacios, Miguel; Rosales-Herrera, Pablo; Rodriguez, M.

    2001-04-01

    A neural-network-based system to identify images transmitted through a coherent fiber-optic bundle (CFB) is presented. Patterns are generated on a computer, displayed on a spatial light modulator, imaged onto the input face of the CFB, and recovered optically by a CCD sensor array for further processing. Input and output optical subsystems were designed and used to that end. Recognition of the transmitted patterns is performed by a powerful, widely used neural network simulator running on the control PC. A complete PC-based interface was developed to control the different tasks involved in the system. An optical analysis of the system's capabilities was carried out prior to the recognition step. Several neural network topologies were tested, and the corresponding numerical results are presented and discussed.

  13. Noise in Neural Networks: Thresholds, Hysteresis, and Neuromodulation of Signal-To-Noise

    NASA Astrophysics Data System (ADS)

    Keeler, James D.; Pichler, Elgar E.; Ross, John

    1989-03-01

    We study a neural-network model including Gaussian noise, higher-order neuronal interactions, and neuromodulation. For a first-order network, there is a threshold in the noise level (phase transition) above which the network displays only disorganized behavior and critical slowing down near the noise threshold. The network can tolerate more noise if it has higher-order feedback interactions, which also lead to hysteresis and multistability in the network dynamics. The signal-to-noise ratio can be adjusted in a biological neural network by neuromodulators such as norepinephrine. Comparisons are made to experimental results and further investigations are suggested to test the effects of hysteresis and neuromodulation in pattern recognition and learning. We propose that norepinephrine may ``quench'' the neural patterns of activity to enhance the ability to learn details.

  14. Neural coding in graphs of bidirectional associative memories.

    PubMed

    Bouchain, A David; Palm, Günther

    2012-01-24

    In recent years we have developed large neural network models for the realization of complex cognitive tasks in a neural network architecture that resembles the network of the cerebral cortex. We have used networks of several cortical modules that each contain two populations of neurons (one excitatory, one inhibitory). The excitatory populations in these so-called "cortical networks" are organized as a graph of bidirectional associative memories (BAMs), where the edges of the graph correspond to BAMs connecting two neural modules and the nodes correspond to excitatory populations with associative feedback connections (and inhibitory interneurons). The neural code in each of these modules consists essentially of the firing pattern of the excitatory population, where it is mainly the subset of active neurons that codes the contents to be represented. The overall activity can be used to distinguish different properties of the represented patterns, which must be distinguished and controlled when performing complex tasks such as language understanding with these cortical networks. The most important pattern properties or situations are: exactly fitting or matching input, incomplete information or a partially matching pattern, superposition of several patterns, conflicting information, and new information that is to be learned. We show simple simulations of these situations in one area or module and discuss how to distinguish them based on the overall internal activation of the module. This article is part of a Special Issue entitled "Neural Coding". Copyright © 2011 Elsevier B.V. All rights reserved.
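
A single BAM edge of such a graph can be sketched in a few lines: bipolar pattern pairs are stored as a sum of outer products, and recall iterates thresholded projections back and forth between the two modules until the pair stabilizes. The pattern sizes below are illustrative, and sign ties (zero activations) are not handled in this sketch.

```python
import numpy as np

def bam_train(pairs):
    """Store bipolar (+1/-1) pattern pairs as a sum of outer products."""
    return sum(np.outer(a, b) for a, b in pairs)

def bam_recall(W, a, steps=5):
    """Bidirectional recall: alternate x -> sign(W^T x) and back until stable."""
    b = np.sign(W.T @ a)
    for _ in range(steps):
        a = np.sign(W @ b)
        b = np.sign(W.T @ a)
    return a, b
```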

  15. Reconstruction of magnetic configurations in W7-X using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Böckenhoff, Daniel; Blatzheim, Marko; Hölbe, Hauke; Niemann, Holger; Pisano, Fabio; Labahn, Roger; Pedersen, Thomas Sunn; The W7-X Team

    2018-05-01

    It is demonstrated that artificial neural networks can be used to accurately and efficiently predict details of the magnetic topology at the plasma edge of the Wendelstein 7-X stellarator, based on simulated as well as measured heat load patterns onto plasma-facing components observed with infrared cameras. The connection between heat load patterns and the magnetic topology is a challenging regression problem, but one that suits artificial neural networks well. The use of a neural network makes it feasible to analyze and control the plasma exhaust in real-time, an important goal for Wendelstein 7-X, and for magnetic confinement fusion research in general.

  16. Optical computing and neural networks; Proceedings of the Meeting, National Chiao Tung Univ., Hsinchu, Taiwan, Dec. 16, 17, 1992

    NASA Technical Reports Server (NTRS)

    Hsu, Ken-Yuh (Editor); Liu, Hua-Kuang (Editor)

    1992-01-01

    The present conference discusses optical neural networks, photorefractive nonlinear optics, optical pattern recognition, digital and analog processors, and holography and its applications. Attention is given to bifurcating optical information processing, neural structures in digital halftoning, an exemplar-based optical neural net classifier for color pattern recognition, volume storage in photorefractive disks, and microlaser-based compact optical neuroprocessors. Also treated are the optical implementation of a feature-enhanced optical interpattern-associative neural network model and its optical implementation, an optical pattern binary dual-rail logic gate module, a theoretical analysis for holographic associative memories, joint transform correlators, image addition and subtraction via the Talbot effect, and optical wavelet-matched filters. (No individual items are abstracted in this volume)

  17. Optical computing and neural networks; Proceedings of the Meeting, National Chiao Tung Univ., Hsinchu, Taiwan, Dec. 16, 17, 1992

    NASA Astrophysics Data System (ADS)

    Hsu, Ken-Yuh; Liu, Hua-Kuang

The present conference discusses optical neural networks, photorefractive nonlinear optics, optical pattern recognition, digital and analog processors, and holography and its applications. Attention is given to bifurcating optical information processing, neural structures in digital halftoning, an exemplar-based optical neural net classifier for color pattern recognition, volume storage in photorefractive disks, and microlaser-based compact optical neuroprocessors. Also treated are a feature-enhanced optical interpattern-associative neural network model and its optical implementation, an optical pattern binary dual-rail logic gate module, a theoretical analysis for holographic associative memories, joint transform correlators, image addition and subtraction via the Talbot effect, and optical wavelet-matched filters. (No individual items are abstracted in this volume)

  18. Predicting neural network firing pattern from phase resetting curve

    NASA Astrophysics Data System (ADS)

    Oprisan, Sorinel; Oprisan, Ana

    2007-04-01

Autonomous neural networks called central pattern generators (CPG) are composed of endogenously bursting neurons and produce rhythmic activities, such as flying, swimming, walking, and chewing. Simplified CPGs for quadrupedal locomotion and swimming are modeled by a ring of neural oscillators such that the output of one oscillator constitutes the input for the subsequent neural oscillator. The phase response curve (PRC) theory discards the detailed conductance-based description of the component neurons of a network and reduces them to ``black boxes'' characterized by a transfer function, which tabulates the transient change in the intrinsic period of a neural oscillator subject to external stimuli. Based on the open-loop PRC, we successfully predicted the phase-locked period and relative phase between neurons in a half-center network. We derived existence and stability criteria for heterogeneous ring neural networks that are in good agreement with experimental data.
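The prediction scheme in this abstract can be illustrated with a one-dimensional stimulus-phase map; the sinusoidal PRC shape, the form of the map, and every parameter value below are invented for illustration, not taken from the authors' measured curves:

```python
import numpy as np

def prc(phase, a=0.3):
    """Toy open-loop phase response curve: fractional change in the
    oscillator's period caused by a stimulus arriving at `phase` in
    [0, 1).  The sinusoidal shape is an assumption for illustration."""
    return a * np.sin(2 * np.pi * phase)

def locked_phase(t_intrinsic, t_forcing, n_iter=200):
    """Iterate the stimulus-phase map until it settles; a stable fixed
    point corresponds to a 1:1 phase-locked state, in the spirit of
    the open-loop PRC prediction described above."""
    phi = 0.1
    for _ in range(n_iter):
        # perturbed period this cycle: T_i * (1 + PRC(phi))
        phi = (phi + (t_forcing - t_intrinsic * (1 + prc(phi))) / t_intrinsic) % 1.0
    return phi

phi_star = locked_phase(t_intrinsic=1.0, t_forcing=1.1)
print(phi_star)   # at the fixed point the PRC absorbs the 10% period mismatch
```

At the locked phase, the transient period change tabulated by the PRC exactly compensates the mismatch between the intrinsic and forcing periods.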

  19. Optimization of the kernel functions in a probabilistic neural network analyzing the local pattern distribution.

    PubMed

    Galleske, I; Castellanos, J

    2002-05-01

This article proposes a procedure for the automatic determination of the elements of the covariance matrix of the Gaussian kernel function of probabilistic neural networks. Two matrices, a rotation matrix and a matrix of variances, can be calculated by analyzing the local environment of each training pattern; their combination forms the covariance matrix of that training pattern. This automation has two advantages: first, it frees the neural network designer from specifying the complete covariance matrix, and second, it results in a network with better generalization ability than the original model. A variation of the famous two-spiral problem and real-world examples from the UCI Machine Learning Repository show not only a classification rate better than that of the original probabilistic neural network but also that this model can outperform other well-known classification techniques.
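For orientation, a minimal PNN of the kind being extended here can be sketched as follows; a single isotropic variance `sigma` stands in for the automatically determined per-pattern covariance matrices, and the data points are invented:

```python
import numpy as np

def pnn_predict(x, train_X, train_y, sigma=0.5):
    """Classify x with a probabilistic neural network: each training
    pattern contributes a Gaussian kernel (pattern layer); kernels are
    summed per class (summation layer); the largest class score wins
    (output layer).  An isotropic variance replaces the per-pattern
    covariance matrices derived in the article."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        diffs = train_X[train_y == c] - x                    # pattern layer
        k = np.exp(-np.sum(diffs**2, axis=1) / (2 * sigma**2))
        scores.append(k.sum())                               # summation layer
    return classes[int(np.argmax(scores))]                   # output layer

# Two well-separated toy clusters
X = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(pnn_predict(np.array([0.05, 0.0]), X, y))   # -> 0
print(pnn_predict(np.array([0.95, 1.0]), X, y))   # -> 1
```

The article's contribution amounts to replacing the scalar `sigma` with a full covariance matrix computed per training pattern from its local neighborhood.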

  20. Neural networks for calibration tomography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur

    1993-01-01

Artificial neural networks are suitable for performing pattern-to-pattern calibrations. These calibrations are potentially useful for facilities operations in aeronautics, the control of optical alignment, and the like. Computed tomography is compared with neural net calibration tomography for estimating density from its x-ray transform. X-ray transforms are measured, for example, in diffuse-illumination, holographic interferometry of fluids. Computed tomography and neural net calibration tomography are shown to have comparable performance for a 10 degree viewing cone and 29 interferograms within that cone. The system of tomography discussed is proposed as a relevant test of neural networks and other parallel processors intended for use with flow visualization data.

  1. Minimal perceptrons for memorizing complex patterns

    NASA Astrophysics Data System (ADS)

    Pastor, Marissa; Song, Juyong; Hoang, Danh-Tai; Jo, Junghyo

    2016-11-01

    Feedforward neural networks have been investigated to understand learning and memory, as well as applied to numerous practical problems in pattern classification. It is a rule of thumb that more complex tasks require larger networks. However, the design of optimal network architectures for specific tasks is still an unsolved fundamental problem. In this study, we consider three-layered neural networks for memorizing binary patterns. We developed a new complexity measure of binary patterns, and estimated the minimal network size for memorizing them as a function of their complexity. We formulated the minimal network size for regular, random, and complex patterns. In particular, the minimal size for complex patterns, which are neither ordered nor disordered, was predicted by measuring their Hamming distances from known ordered patterns. Our predictions agree with simulations based on the back-propagation algorithm.
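The Hamming-distance idea can be made concrete with a toy complexity proxy; the reference set of "ordered" patterns below is an illustrative assumption, not the authors' construction:

```python
import numpy as np

def hamming(a, b):
    """Number of positions at which two binary patterns differ."""
    return int(np.sum(a != b))

def complexity(pattern, ordered_refs):
    """Illustrative proxy for pattern complexity: the minimal Hamming
    distance from a set of known ordered patterns, echoing how the
    abstract predicts minimal network size for complex patterns.  The
    reference set (all-zeros, all-ones, alternating) is a toy choice."""
    return min(hamming(pattern, r) for r in ordered_refs)

n = 8
refs = [np.zeros(n, int), np.ones(n, int), np.arange(n) % 2]
p = np.array([0, 1, 0, 1, 0, 1, 1, 1])
print(complexity(p, refs))   # -> 1 (one flip away from the alternating pattern)
```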

  2. Neural coordination can be enhanced by occasional interruption of normal firing patterns: a self-optimizing spiking neural network model.

    PubMed

    Woodward, Alexander; Froese, Tom; Ikegami, Takashi

    2015-02-01

The state space of a conventional Hopfield network typically exhibits many different attractors, of which only a small subset satisfies constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimization is also operative in real brains. Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model. In addition, by using this spiking neural network to emulate a Hopfield network with Hebbian learning, we attempt to make a connection between rate-based and temporal-coding-based neural systems. Although further work is required to make this model more realistic, it already suggests that the efficacy of the self-optimizing process is independent of the simplifying assumptions of a conventional Hopfield network. We also discuss natural and cultural processes that could be responsible for occasional alteration of neural firing patterns in actual brains. Copyright © 2014 Elsevier Ltd. All rights reserved.
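The conventional Hopfield network that this model builds on stores patterns with a Hebbian rule and recalls them from corrupted cues. A minimal sketch follows; the self-optimization step itself is only indicated in a comment, and the stored patterns are invented:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for a Hopfield network of +/-1 patterns."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)          # no self-connections
    return W

def settle(W, s, steps=20):
    """Synchronous sign updates until a fixed point (or a step limit)."""
    for _ in range(steps):
        nxt = np.sign(W @ s)
        nxt[nxt == 0] = 1
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s

# The self-optimization scheme of the abstract would, in addition,
# occasionally perturb the settled state and re-apply the Hebbian rule
# to the resulting attractor, enlarging the best basins of attraction.
P = np.array([[1, -1, 1, -1, 1, -1],
              [1, 1, 1, -1, -1, -1]])
W = train_hopfield(P)
cue = P[0].astype(float)
cue[0] *= -1                        # corrupt one neuron's state
print(settle(W, cue))               # recovers the first stored pattern
```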

  3. Oscillator Neural Network Retrieving Sparsely Coded Phase Patterns

    NASA Astrophysics Data System (ADS)

    Aoyagi, Toshio; Nomura, Masaki

    1999-08-01

    Little is known theoretically about the associative memory capabilities of neural networks in which information is encoded not only in the mean firing rate but also in the timing of firings. Particularly, in the case of sparsely coded patterns, it is biologically important to consider the timings of firings and to study how such consideration influences storage capacities and quality of recalled patterns. For this purpose, we propose a simple extended model of oscillator neural networks to allow for expression of a nonfiring state. Analyzing both equilibrium states and dynamical properties in recalling processes, we find that the system possesses good associative memory.

  4. Optimization of Training Sets for Neural-Net Processing of Characteristic Patterns from Vibrating Solids

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2001-01-01

    Artificial neural networks have been used for a number of years to process holography-generated characteristic patterns of vibrating structures. This technology depends critically on the selection and the conditioning of the training sets. A scaling operation called folding is discussed for conditioning training sets optimally for training feed-forward neural networks to process characteristic fringe patterns. Folding allows feed-forward nets to be trained easily to detect damage-induced vibration-displacement-distribution changes as small as 10 nm. A specific application to aerospace of neural-net processing of characteristic patterns is presented to motivate the conditioning and optimization effort.

  5. Neural Network Computing and Natural Language Processing.

    ERIC Educational Resources Information Center

    Borchardt, Frank

    1988-01-01

    Considers the application of neural network concepts to traditional natural language processing and demonstrates that neural network computing architecture can: (1) learn from actual spoken language; (2) observe rules of pronunciation; and (3) reproduce sounds from the patterns derived by its own processes. (Author/CB)

  6. Artificial Neural Network with Regular Graph for Maximum Air Temperature Forecasting:. the Effect of Decrease in Nodes Degree on Learning

    NASA Astrophysics Data System (ADS)

    Ghaderi, A. H.; Darooneh, A. H.

The behavior of nonlinear systems can be analyzed by artificial neural networks. Air temperature change is one example of such a nonlinear system. In this work, a new neural network method is proposed for forecasting the maximum air temperature in two cities. In this method, the regular graph concept is used to construct partially connected neural networks that have regular structures. The learning results of a fully connected ANN and of networks built with the proposed method are compared; in some cases, the proposed method gives better results than the conventional ANN. After identifying the best network, the effect of the number of input patterns on the prediction is studied, and the results show that increasing the number of input patterns has a direct effect on prediction accuracy.

  7. Rotation and scale change invariant point pattern relaxation matching by the Hopfield neural network

    NASA Astrophysics Data System (ADS)

    Sang, Nong; Zhang, Tianxu

    1997-12-01

Relaxation matching is one of the most relevant methods for image matching. The original relaxation matching technique using point patterns is sensitive to rotations and scale changes. We improve the original point pattern relaxation matching technique to be invariant to rotations and scale changes. A method that makes the Hopfield neural network perform this matching process is discussed. An advantage of this is that the relaxation matching process can be performed in real time, exploiting the neural network's massively parallel capability to process information. Experimental results with large simulated images demonstrate the effectiveness and feasibility of the method for performing point pattern relaxation matching invariant to rotations and scale changes, and of performing this matching with the Hopfield neural network. In addition, we show that the method presented is tolerant of small random errors.

  8. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Influence of Blurred Ways on Pattern Recognition of a Scale-Free Hopfield Neural Network

    NASA Astrophysics Data System (ADS)

    Chang, Wen-Li

    2010-01-01

We investigate the influence of blurred ways on pattern recognition of a Barabási-Albert scale-free Hopfield neural network (SFHN) with a small amount of errors. Pattern recognition is an important function of information processing in the brain. Owing to the heterogeneous degree distribution of a scale-free network, different blurred ways have different influences on pattern recognition with the same errors. Simulation shows that in partial recognition, the larger the loading ratio (the ratio of the number of patterns to the average degree, P/⟨k⟩), the smaller the overlap of the SFHN. The influence of the directed (large) way is largest, that of the directed (small) way is smallest, and the random way lies between them. When the ratio of the number of stored patterns to the size of the network, P/N, is less than 0.1, there are three families of overlap curves corresponding to the directed (small), random, and directed (large) blurred ways of patterns, and these curves are not associated with the size of the network or the number of patterns. This phenomenon occurs only in the SFHN. These conclusions are helpful for understanding the relation between neural network structure and brain function.

  9. An Intelligent Pattern Recognition System Based on Neural Network and Wavelet Decomposition for Interpretation of Heart Sounds

    DTIC Science & Technology

    2001-10-25

The system is based on wavelet decomposition of signals and classification using a neural network. Inputs to the system are the heart sound signals acquired by a stethoscope…

  10. Neural network system for purposeful behavior based on foveal visual preprocessor

    NASA Astrophysics Data System (ADS)

    Golovan, Alexander V.; Shevtsova, Natalia A.; Klepatch, Arkadi A.

    1996-10-01

A biologically plausible model of a system with adaptive behavior in an a priori unknown environment and resistance to impairment has been developed. The system consists of input, learning, and output subsystems. The first subsystem classifies input patterns, presented as n-dimensional vectors, in accordance with some associative rule. The second, a neural network, determines the adaptive responses of the system to input patterns. Arranged neural groups coding possible input patterns and appropriate output responses are formed during learning by means of negative reinforcement. The output subsystem maps neural network activity into the system's behavior in the environment. The system has been studied by computer simulation imitating the collision-free motion of a mobile robot. After some learning period the system 'moves' along a road without collisions. It is shown that, despite impairment of some neural network elements, the system functions reliably after relearning. A foveal visual preprocessor model developed earlier has been tested as a form of visual input to the system.

  11. Compact holographic optical neural network system for real-time pattern recognition

    NASA Astrophysics Data System (ADS)

    Lu, Taiwei; Mintzer, David T.; Kostrzewski, Andrew A.; Lin, Freddie S.

    1996-08-01

One of the important characteristics of artificial neural networks is their capability for massive interconnection and parallel processing. Recently, specialized electronic neural network processors and VLSI neural chips have been introduced in the commercial market. The number of parallel channels they can handle is limited because of the limited parallel interconnections that can be implemented with 1D electronic wires. High-resolution pattern recognition problems can require a large number of neurons for parallel processing of an image. This paper describes a holographic optical neural network (HONN) that is based on high-resolution volume holographic materials and is capable of performing massive 3D parallel interconnection of tens of thousands of neurons. A HONN with more than 16,000 neurons packaged in an attaché case has been developed. Rotation-, shift-, and scale-invariant pattern recognition operations have been demonstrated with this system. System parameters such as the signal-to-noise ratio, dynamic range, and processing speed are discussed.

  12. Improved Adjoint-Operator Learning For A Neural Network

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1995-01-01

    Improved method of adjoint-operator learning reduces amount of computation and associated computational memory needed to make electronic neural network learn temporally varying pattern (e.g., to recognize moving object in image) in real time. Method extension of method described in "Adjoint-Operator Learning for a Neural Network" (NPO-18352).

  13. Unsupervised discrimination of patterns in spiking neural networks with excitatory and inhibitory synaptic plasticity.

    PubMed

    Srinivasa, Narayan; Cho, Youngkwan

    2014-01-01

    A spiking neural network model is described for learning to discriminate among spatial patterns in an unsupervised manner. The network anatomy consists of source neurons that are activated by external inputs, a reservoir that resembles a generic cortical layer with an excitatory-inhibitory (EI) network and a sink layer of neurons for readout. Synaptic plasticity in the form of STDP is imposed on all the excitatory and inhibitory synapses at all times. While long-term excitatory STDP enables sparse and efficient learning of the salient features in inputs, inhibitory STDP enables this learning to be stable by establishing a balance between excitatory and inhibitory currents at each neuron in the network. The synaptic weights between source and reservoir neurons form a basis set for the input patterns. The neural trajectories generated in the reservoir due to input stimulation and lateral connections between reservoir neurons can be readout by the sink layer neurons. This activity is used for adaptation of synapses between reservoir and sink layer neurons. A new measure called the discriminability index (DI) is introduced to compute if the network can discriminate between old patterns already presented in an initial training session. The DI is also used to compute if the network adapts to new patterns without losing its ability to discriminate among old patterns. The final outcome is that the network is able to correctly discriminate between all patterns-both old and new. This result holds as long as inhibitory synapses employ STDP to continuously enable current balance in the network. The results suggest a possible direction for future investigation into how spiking neural networks could address the stability-plasticity question despite having continuous synaptic plasticity.
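The pair-based STDP rule imposed on the synapses can be sketched as a single weight-update function; the parameter values below are illustrative, not those used in the article:

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic one, depress otherwise, with an exponential decay
    in the spike-time difference.  Amplitudes and time constant (ms)
    are illustrative placeholders."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau)    # pre before post -> LTP
    return -a_minus * np.exp(dt / tau)       # post before (or at) pre -> LTD

print(stdp_dw(10.0, 15.0))   # positive: potentiation
print(stdp_dw(15.0, 10.0))   # negative: depression
```

In the model described above, a rule of this shape runs continuously on both excitatory and inhibitory synapses; the slight excess of `a_minus` over `a_plus` is one common way to keep total weight bounded.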

  14. Classification of Company Performance using Weighted Probabilistic Neural Network

    NASA Astrophysics Data System (ADS)

    Yasin, Hasbi; Waridi Basyiruddin Arifin, Adi; Warsito, Budi

    2018-05-01

Classification of company performance can be judged by looking at a company's financial status, whether it is in a good or bad state. Classification of company performance can be achieved by several approaches, either parametric or non-parametric. The neural network is one of the non-parametric methods. One Artificial Neural Network (ANN) model is the Probabilistic Neural Network (PNN). A PNN consists of four layers: an input layer, a pattern layer, an addition layer, and an output layer. The distance function used is the Euclidean distance, and each class shares the same values as its weights. This study uses a PNN that has been modified in the weighting process between the pattern layer and the addition layer by incorporating the Mahalanobis distance. This model is called the Weighted Probabilistic Neural Network (WPNN). The results show that modeling the company's performance with the WPNN model achieves a very high accuracy, reaching 100%.
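The modification described, replacing the Euclidean kernel with a Mahalanobis-distance kernel in the pattern layer, can be sketched as follows; the sample-covariance estimate and the toy data are assumptions, since the paper's exact weighting scheme may differ:

```python
import numpy as np

def mahalanobis_kernel(x, pattern, cov):
    """Pattern-layer activation using the Mahalanobis distance: the
    squared Euclidean distance of a plain PNN is replaced by
    d^T cov^{-1} d, which rescales each direction by the data's
    spread.  `cov` here is a regularized sample covariance."""
    d = x - pattern
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d))

X = np.array([[0.0, 0.0], [0.2, 0.1], [-0.1, 0.2]])
cov = np.cov(X.T) + 1e-6 * np.eye(2)   # regularize against singularity
print(mahalanobis_kernel(np.array([0.05, 0.1]), X[0], cov))
```

The activation lies in (0, 1] for a positive-definite covariance, so it can drop into the summation layer exactly where the Euclidean kernel sat.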

  15. A neural network for recognizing movement patterns during repetitive self-paced movements of the fingers in opposition to the thumb.

    PubMed

    Van Vaerenbergh, J; Vranken, R; Briers, L; Briers, H

    2001-11-01

A data glove is a typical input device to control a virtual environment; at the same time, it measures movements of the wrist and fingers. The purpose of this investigation was to assess the ability of BrainMaker, a neural network, to recognize movement patterns during an opposition task that consisted of repetitive self-paced movements of the fingers in opposition to the thumb. The neural network contained 56 inputs, 3 hidden layers of 20 neurons, and one output. The 5th glove '95 (5DT), a commercial glove especially designed for virtual reality games, was used for finger motion capture. The training of the neural network was successful in recognizing the thumb, index finger, and ring finger movements during the repetitive self-paced movements, and the neural network performed well during testing.

  16. Development and function of human cerebral cortex neural networks from pluripotent stem cells in vitro

    PubMed Central

    Kirwan, Peter; Turner-Bridger, Benita; Peter, Manuel; Momoh, Ayiba; Arambepola, Devika; Robinson, Hugh P. C.; Livesey, Frederick J.

    2015-01-01

    A key aspect of nervous system development, including that of the cerebral cortex, is the formation of higher-order neural networks. Developing neural networks undergo several phases with distinct activity patterns in vivo, which are thought to prune and fine-tune network connectivity. We report here that human pluripotent stem cell (hPSC)-derived cerebral cortex neurons form large-scale networks that reflect those found in the developing cerebral cortex in vivo. Synchronised oscillatory networks develop in a highly stereotyped pattern over several weeks in culture. An initial phase of increasing frequency of oscillations is followed by a phase of decreasing frequency, before giving rise to non-synchronous, ordered activity patterns. hPSC-derived cortical neural networks are excitatory, driven by activation of AMPA- and NMDA-type glutamate receptors, and can undergo NMDA-receptor-mediated plasticity. Investigating single neuron connectivity within PSC-derived cultures, using rabies-based trans-synaptic tracing, we found two broad classes of neuronal connectivity: most neurons have small numbers (<10) of presynaptic inputs, whereas a small set of hub-like neurons have large numbers of synaptic connections (>40). These data demonstrate that the formation of hPSC-derived cortical networks mimics in vivo cortical network development and function, demonstrating the utility of in vitro systems for mechanistic studies of human forebrain neural network biology. PMID:26395144

  17. Development and function of human cerebral cortex neural networks from pluripotent stem cells in vitro.

    PubMed

    Kirwan, Peter; Turner-Bridger, Benita; Peter, Manuel; Momoh, Ayiba; Arambepola, Devika; Robinson, Hugh P C; Livesey, Frederick J

    2015-09-15

    A key aspect of nervous system development, including that of the cerebral cortex, is the formation of higher-order neural networks. Developing neural networks undergo several phases with distinct activity patterns in vivo, which are thought to prune and fine-tune network connectivity. We report here that human pluripotent stem cell (hPSC)-derived cerebral cortex neurons form large-scale networks that reflect those found in the developing cerebral cortex in vivo. Synchronised oscillatory networks develop in a highly stereotyped pattern over several weeks in culture. An initial phase of increasing frequency of oscillations is followed by a phase of decreasing frequency, before giving rise to non-synchronous, ordered activity patterns. hPSC-derived cortical neural networks are excitatory, driven by activation of AMPA- and NMDA-type glutamate receptors, and can undergo NMDA-receptor-mediated plasticity. Investigating single neuron connectivity within PSC-derived cultures, using rabies-based trans-synaptic tracing, we found two broad classes of neuronal connectivity: most neurons have small numbers (<10) of presynaptic inputs, whereas a small set of hub-like neurons have large numbers of synaptic connections (>40). These data demonstrate that the formation of hPSC-derived cortical networks mimics in vivo cortical network development and function, demonstrating the utility of in vitro systems for mechanistic studies of human forebrain neural network biology. © 2015. Published by The Company of Biologists Ltd.

  18. Empirical modeling for intelligent, real-time manufacture control

    NASA Technical Reports Server (NTRS)

    Xu, Xiaoshu

    1994-01-01

    Artificial neural systems (ANS), also known as neural networks, are an attempt to develop computer systems that emulate the neural reasoning behavior of biological neural systems (e.g. the human brain). As such, they are loosely based on biological neural networks. The ANS consists of a series of nodes (neurons) and weighted connections (axons) that, when presented with a specific input pattern, can associate specific output patterns. It is essentially a highly complex, nonlinear, mathematical relationship or transform. These constructs have two significant properties that have proven useful to the authors in signal processing and process modeling: noise tolerance and complex pattern recognition. Specifically, the authors have developed a new network learning algorithm that has resulted in the successful application of ANS's to high speed signal processing and to developing models of highly complex processes. Two of the applications, the Weld Bead Geometry Control System and the Welding Penetration Monitoring System, are discussed in the body of this paper.

  19. Artificial Neural Network approach to develop unique Classification and Raga identification tools for Pattern Recognition in Carnatic Music

    NASA Astrophysics Data System (ADS)

    Srimani, P. K.; Parimala, Y. G.

    2011-12-01

A unique approach has been developed to study patterns in the ragas of Carnatic classical music using artificial neural networks. Ragas in Carnatic music, which found their roots in the Vedic period, have grown on a scientific foundation over thousands of years. However, owing to the vastness and complexity of the subject, it has always been a challenge for scientists and musicologists to give an all-encompassing perspective, both qualitative and quantitative. Cognition, comprehension, and perception of ragas in Indian classical music have long been subjects of intensive research and remain highly intriguing, with many facets as yet unravelled. This paper attempts to view the melakartha ragas from a cognitive perspective using an artificial-neural-network-based approach, which has given rise to very interesting results. The 72 ragas of the melakartha system were defined through the combination of frequencies occurring in each of them. The data sets were used to train several neural networks. 100% accurate pattern recognition and classification was obtained using linear regression, TLRN, MLP, and RBF networks. The performance of the different network topologies was compared while varying various network parameters. Linear regression was found to be the best-performing network.

  20. Unsupervised discrimination of patterns in spiking neural networks with excitatory and inhibitory synaptic plasticity

    PubMed Central

    Srinivasa, Narayan; Cho, Youngkwan

    2014-01-01

    A spiking neural network model is described for learning to discriminate among spatial patterns in an unsupervised manner. The network anatomy consists of source neurons that are activated by external inputs, a reservoir that resembles a generic cortical layer with an excitatory-inhibitory (EI) network and a sink layer of neurons for readout. Synaptic plasticity in the form of STDP is imposed on all the excitatory and inhibitory synapses at all times. While long-term excitatory STDP enables sparse and efficient learning of the salient features in inputs, inhibitory STDP enables this learning to be stable by establishing a balance between excitatory and inhibitory currents at each neuron in the network. The synaptic weights between source and reservoir neurons form a basis set for the input patterns. The neural trajectories generated in the reservoir due to input stimulation and lateral connections between reservoir neurons can be readout by the sink layer neurons. This activity is used for adaptation of synapses between reservoir and sink layer neurons. A new measure called the discriminability index (DI) is introduced to compute if the network can discriminate between old patterns already presented in an initial training session. The DI is also used to compute if the network adapts to new patterns without losing its ability to discriminate among old patterns. The final outcome is that the network is able to correctly discriminate between all patterns—both old and new. This result holds as long as inhibitory synapses employ STDP to continuously enable current balance in the network. The results suggest a possible direction for future investigation into how spiking neural networks could address the stability-plasticity question despite having continuous synaptic plasticity. PMID:25566045

  1. The effect of the neural activity on topological properties of growing neural networks.

    PubMed

    Gafarov, F M; Gafarova, V R

    2016-09-01

The connectivity structure in cortical networks defines how information is transmitted and processed, and it is a source of the complex spatiotemporal patterns of the network's development; the creation and deletion of connections continue throughout the life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrated the growth of a neural network from disconnected neurons to a fully connected network. To quantify the influence of the network's activity on its topological properties, we compared it with a randomly growing network that does not depend on the network's activity. Analysis of the network's connection structure using methods from random graph theory shows that growth in neural networks results in the formation of a well-known "small-world" network.
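A "small-world" network combines short path lengths with high clustering. One half of that signature, the global clustering coefficient, can be computed directly from an adjacency matrix; the 4-neuron example is a toy, not data from the growth model:

```python
import numpy as np

def clustering_coefficient(A):
    """Global clustering coefficient from an undirected adjacency
    matrix: 3 * (number of triangles) / (number of connected triples).
    trace(A^3) counts each triangle six times (3 vertices x 2
    directions)."""
    A = np.asarray(A, float)
    triangles = np.trace(A @ A @ A) / 6.0
    deg = A.sum(axis=1)
    triples = np.sum(deg * (deg - 1)) / 2.0
    return 3.0 * triangles / triples if triples else 0.0

# A fully connected 4-neuron network: every connected triple closes
# into a triangle, so the coefficient is maximal.
A = np.ones((4, 4)) - np.eye(4)
print(clustering_coefficient(A))   # -> 1.0
```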

  2. Neurodynamics With Spatial Self-Organization

    NASA Technical Reports Server (NTRS)

    Zak, Michail A.

    1993-01-01

    Report presents theoretical study of dynamics of neural network organizing own response in both phase space and in position space. Postulates several mathematical models of dynamics including spatial derivatives representing local interconnections among neurons. Shows how neural responses propagate via these interconnections and how spatial pattern of neural responses formed in homogeneous biological neural network.

  3. Competitive STDP Learning of Overlapping Spatial Patterns.

    PubMed

    Krunglevicius, Dalius

    2015-08-01

Spike-timing-dependent plasticity (STDP) is a set of Hebbian learning rules firmly based on biological evidence. It has been demonstrated that one of the STDP learning rules is suited for learning spatiotemporal patterns. When multiple neurons are organized in a simple competitive spiking neural network, this network is capable of learning multiple distinct patterns. If patterns overlap significantly (i.e., patterns are mutually inclusive), however, competition would not preclude a trained neuron's responding to a new pattern and adjusting its synaptic weights accordingly. This letter presents a simple neural network that combines vertical inhibition and a Euclidean distance-dependent synaptic strength factor. This approach helps to solve the problem of pattern size-dependent parameter optimality and significantly reduces the probability of a neuron's forgetting an already learned pattern. For demonstration purposes, the network was trained on the first ten letters of the Braille alphabet.

  4. Human brain networks function in connectome-specific harmonic waves.

    PubMed

    Atasoy, Selen; Donnelly, Isaac; Pearson, Joel

    2016-01-21

A key characteristic of human brain activity is coherent, spatially distributed oscillations forming behaviour-dependent brain networks. However, a fundamental principle underlying these networks remains unknown. Here we report that functional networks of the human brain are predicted by harmonic patterns, ubiquitous throughout nature, steered by the anatomy of the human cerebral cortex, the human connectome. We introduce a new technique extending the Fourier basis to the human connectome. In this new frequency-specific representation of cortical activity, which we call 'connectome harmonics', oscillatory networks of the human brain at rest match harmonic wave patterns of certain frequencies. We demonstrate a neural mechanism behind the self-organization of connectome harmonics with a continuous neural field model of excitatory-inhibitory interactions on the connectome. Remarkably, the critical relation between the neural field patterns and the delicate excitation-inhibition balance fits the neurophysiological changes observed during the loss and recovery of consciousness.
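Extending the Fourier basis to a graph amounts to taking eigenvectors of the graph Laplacian built from the connectivity matrix. In this sketch a toy ring graph stands in for the human connectome, where the harmonics reduce to ordinary sinusoids around the ring:

```python
import numpy as np

# Build the adjacency matrix of an 8-node ring graph (a stand-in for
# the structural connectivity matrix of the connectome).
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

# Combinatorial graph Laplacian: degree matrix minus adjacency.
L = np.diag(A.sum(axis=1)) - A

# Its eigenvectors are the graph's "harmonics"; eigenvalues play the
# role of (squared) spatial frequencies, lowest first.
eigvals, eigvecs = np.linalg.eigh(L)
print(np.round(eigvals, 3))   # smallest eigenvalue is 0: the constant mode
```

For the real connectome the same decomposition is applied to a cortical-surface mesh augmented with long-range white-matter connections; only the graph changes, not the recipe.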

  5. Neural electrical activity and neural network growth.

    PubMed

    Gafarov, F M

    2018-05-01

    The development of the central and peripheral nervous systems depends in part on the emergence of correct functional connectivity in their input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement to build a fully functional circuit. However, a number of recent experimental results show that neuronal electrical activity also plays an important role in establishing initial interneuronal connections. Nevertheless, these processes are difficult to study experimentally, owing to the absence of a theoretical description and of quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The description incorporates a closed-loop growth model in which neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole, to explore how developing connectivity and activity patterns interact. The model developed in this work will allow new experimental techniques for studying and quantifying the influence of neuronal activity on growth processes in neural networks, and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.
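    The closed loop described here, activity shaping outgrowth and outgrowth shaping activity, can be caricatured with a homeostatic toy model; the overlap-based "activity" measure, the growth rule, and the set point below are illustrative assumptions, not the equations of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20
    pos = rng.uniform(0, 1, size=(n, 2))    # cell positions on a 2-D substrate
    radius = np.full(n, 0.05)               # neurite field radius of each cell

    def activity(radius):
        """Fraction of cell pairs whose neurite fields overlap -- a crude
        stand-in for network activity driven by connectivity."""
        d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
        connected = d < (radius[:, None] + radius[None, :])
        np.fill_diagonal(connected, False)
        return connected.mean()

    target = 0.3                            # homeostatic activity set point
    for _ in range(1000):
        a = activity(radius)
        # Closed loop: low activity promotes outgrowth, high activity retracts.
        radius = np.clip(radius + 0.003 * (target - a), 0.0, 0.5)
    ```

    Even this caricature exhibits the paper's central point: connectivity and activity co-evolve until the network settles at an activity-determined connectivity level.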

  6. A neural network with modular hierarchical learning

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F. (Inventor); Toomarian, Nikzad (Inventor)

    1994-01-01

    This invention provides a new hierarchical approach to supervised neural learning of time-dependent trajectories. The modular hierarchical methodology leads to architectures that are more structured than fully interconnected networks. The networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamic effects. The advantages include the sparsity of units and connections and the modular organization. A further advantage is that learning is much more circumscribed than in fully interconnected systems. The present invention is embodied by a neural network including a plurality of neural modules, each having a pre-established performance capability, wherein each neural module has an output outputting present results of the performance capability and an input for changing the present results of the performance capability. For pattern recognition applications, the performance capability may be an oscillation capability producing a repeating wave pattern as the present results. In the preferred embodiment, each of the plurality of neural modules includes a pre-established capability portion and a performance adjustment portion connected to control the pre-established capability portion.

  7. Ultrasonographic Diagnosis of Cirrhosis Based on Preprocessing Using Pyramid Recurrent Neural Network

    NASA Astrophysics Data System (ADS)

    Lu, Jianming; Liu, Jiang; Zhao, Xueqin; Yahagi, Takashi

    In this paper, a pyramid recurrent neural network is applied to characterize hepatic parenchymal diseases in ultrasonic B-scan texture. The cirrhotic parenchymal diseases are classified into four types according to the size of hypoechoic nodular lesions. The B-mode patterns are wavelet transformed, and the compressed data are then fed into a pyramid neural network to diagnose the type of cirrhotic disease. Compared with a three-layer neural network, the performance of the proposed pyramid recurrent neural network is improved by utilizing the lower layers effectively. The simulation results show that the proposed system is suitable for the diagnosis of cirrhosis.

  8. Computer interpretation of thallium SPECT studies based on neural network analysis

    NASA Astrophysics Data System (ADS)

    Wang, David C.; Karvelis, K. C.

    1991-06-01

    A class of artificial intelligence (AI) programs known as neural networks is well suited to pattern recognition. A neural network is trained rather than programmed to recognize patterns. This differs from "expert system" AI programs in that it does not follow an extensive set of rules determined by the programmer, but rather bases its decisions on a gestalt interpretation of the image. The "bullseye" images from cardiac stress thallium tests performed on 50 male patients, as well as several simulated images, were used to train the network. The network was able to accurately classify all patients in the training set. The network was then tested against 50 unknown patients and was able to correctly categorize 77% of the areas of ischemia and 92% of the areas of infarction. While not yet matching the ability of a trained physician, the neural network shows great promise in this area and has potential application in other areas of medical imaging.

  9. Rotation-invariant neural pattern recognition system with application to coin recognition.

    PubMed

    Fukumi, M; Omatu, S; Takeda, F; Kosaka, T

    1992-01-01

    In pattern recognition, it is often necessary to classify transformed patterns. A neural pattern recognition system that is insensitive to rotation of the input pattern by arbitrary degrees is proposed. The system consists of a fixed invariance network with many slabs and a trainable multilayered network. The system was applied to a rotation-invariant coin recognition problem: distinguishing a 500-yen coin from a 500-won coin. The results show that the approach works well for rotation-invariant pattern recognition.

  10. Large memory capacity in chaotic artificial neural networks: a view of the anti-integrable limit.

    PubMed

    Lin, Wei; Chen, Guanrong

    2009-08-01

    In the literature, it was reported that the chaotic artificial neural network model with sinusoidal activation functions possesses a large memory capacity as well as a remarkable ability of retrieving the stored patterns, better than the conventional chaotic model with only monotonic activation functions such as sigmoidal functions. This paper, from the viewpoint of the anti-integrable limit, elucidates the mechanism inducing the superiority of the model with periodic activation functions that includes sinusoidal functions. Particularly, by virtue of the anti-integrable limit technique, this paper shows that any finite-dimensional neural network model with periodic activation functions and properly selected parameters has much more abundant chaotic dynamics that truly determine the model's memory capacity and pattern-retrieval ability. To some extent, this paper mathematically and numerically demonstrates that an appropriate choice of the activation functions and control scheme can lead to a large memory capacity and better pattern-retrieval ability of the artificial neural network models.

  11. The neural signature of emotional memories in serial crimes.

    PubMed

    Chassy, Philippe

    2017-10-01

    Neural plasticity is the process whereby semantic information and emotional responses are stored in neural networks. It is hypothesized that the neural networks built over time to encode the sexual fantasies that motivate serial killers to act should display a unique, detectable activation pattern. The pathological neural watermark hypothesis posits that such networks comprise activation of brain sites that reflect four cognitive components: autobiographical memory, sexual arousal, aggression, and control over aggression. The neural sites performing these cognitive functions have been successfully identified by previous research. The key findings are reviewed to hypothesize the typical pattern of activity that serial killers should display. Through the integration of biological findings into one framework, the neural approach proposed in this paper is in stark contrast with the many theories accounting for serial killers that offer non-medical taxonomies. The pathological neural watermark hypothesis offers a new framework to understand and detect deviant individuals. The technical and legal issues are briefly discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Searching for patterns in remote sensing image databases using neural networks

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have investigated a method, based on a successful neural network multispectral image classification system, of searching for single patterns in remote sensing databases. While defining the pattern to search for and the feature to be used for that search (spectral, spatial, temporal, etc.) is challenging, a more difficult task is selecting competing patterns to train against the desired pattern. Schemes for competing pattern selection, including random selection and human interpreted selection, are discussed in the context of an example detection of dense urban areas in Landsat Thematic Mapper imagery. When applying the search to multiple images, a simple normalization method can alleviate the problem of inconsistent image calibration. Another potential problem, that of highly compressed data, was found to have a minimal effect on the ability to detect the desired pattern. The neural network algorithm has been implemented using the PVM (Parallel Virtual Machine) library and nearly-optimal speedups have been obtained that help alleviate the long process of searching through imagery.

  13. Reduction of the dimension of neural network models in problems of pattern recognition and forecasting

    NASA Astrophysics Data System (ADS)

    Nasertdinova, A. D.; Bochkarev, V. V.

    2017-11-01

    Deep neural networks with a large number of parameters are a powerful tool for solving problems of pattern recognition, prediction and classification. Nevertheless, overfitting remains a serious problem in the use of such networks. A method for addressing overfitting is proposed in this article. The method is based on reducing the number of independent parameters of a neural network model using principal component analysis, and can be implemented using existing neural computing libraries. The algorithm was tested on recognition of handwritten symbols from the MNIST database, as well as on time series prediction (series of the average monthly sunspot number and of the Lorenz system were used). It is shown that applying principal component analysis makes it possible to reduce the number of parameters of the neural network model while maintaining good results. The average error rate for recognition of handwritten digits from the MNIST database was 1.12% (comparable to the results obtained using deep learning methods), while the number of parameters of the neural network could be reduced by a factor of up to 130.
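    The core idea, reducing independent parameters via principal component analysis, can be sketched by projecting network inputs onto the leading principal components, which shrinks the first weight layer accordingly; the toy data and the 99% variance threshold below are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Toy data: 100 samples of 64-dimensional inputs that actually lie
    # near a 5-dimensional subspace (as, e.g., handwritten digits roughly do).
    latent = rng.normal(size=(100, 5))
    basis = rng.normal(size=(5, 64))
    X = latent @ basis + 0.01 * rng.normal(size=(100, 64))

    # Principal component analysis via SVD of the centered data.
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s ** 2) / (s ** 2).sum()

    # Keep enough components for 99% of the variance.
    k = int(np.searchsorted(np.cumsum(explained), 0.99)) + 1
    Z = Xc @ Vt[:k].T      # reduced k-dimensional network inputs

    # A first layer with h hidden units now needs k*h weights instead of
    # 64*h -- the kind of parameter reduction the article exploits.
    ```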

  14. Artificial neural network classification using a minimal training set - Comparison to conventional supervised classification

    NASA Technical Reports Server (NTRS)

    Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin

    1990-01-01

    Recent research has shown artificial neural networks (ANNs) to be capable of pattern recognition and the classification of image data. This paper examines the potential application of neural network computing to satellite image processing. A second objective is to provide a preliminary comparison of ANN classification with conventional supervised classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class, in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares an ANN back-propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network provides a land-cover classification superior to that derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations of artificial neural networks in satellite image and geographic information processing.

  15. Behavioral and Physiological Neural Network Analyses: A Common Pathway toward Pattern Recognition and Prediction

    ERIC Educational Resources Information Center

    Ninness, Chris; Lauter, Judy L.; Coffee, Michael; Clary, Logan; Kelly, Elizabeth; Rumph, Marilyn; Rumph, Robin; Kyle, Betty; Ninness, Sharon K.

    2012-01-01

    Using 3 diversified datasets, we explored the pattern-recognition ability of the Self-Organizing Map (SOM) artificial neural network as applied to diversified nonlinear data distributions in the areas of behavioral and physiological research. Experiment 1 employed a dataset obtained from the UCI Machine Learning Repository. Data for this study…
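    The Self-Organizing Map used in these experiments fits a grid of prototype vectors to the data while preserving grid topology; a minimal 1-D SOM sketch (map size, learning-rate and neighbourhood schedules are illustrative choices, not those of the study):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.uniform(0, 1, size=(500, 2))        # toy 2-D observations

    m = 10                                          # 1-D map of 10 units
    W = rng.uniform(0, 1, size=(m, 2))              # prototype vectors
    grid = np.arange(m)

    for t, x in enumerate(data):
        frac = 1.0 - t / len(data)
        lr = 0.5 * frac                             # decaying learning rate
        sigma = 3.0 * frac + 0.5                    # shrinking neighbourhood width
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
        h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)              # pull BMU and neighbours toward x
    ```

    After training, each data point maps to its best-matching unit, so the map doubles as a nonlinear clustering and visualization of the data distribution.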

  16. Nonlinear Time Series Analysis via Neural Networks

    NASA Astrophysics Data System (ADS)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with time series analysis based on neural networks for effective pattern recognition in the forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)]. Our goal is to find and recognize important patterns which repeatedly appear in the market history and to adapt our trading system's behaviour based on them.

  17. Orthogonal patterns in binary neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1988-01-01

    A binary neural network that stores only mutually orthogonal patterns is shown to converge, when probed by any pattern, to a pattern in the memory space, i.e., the space spanned by the stored patterns. The latter are shown to be the only members of the memory space under a certain coding condition, which allows maximum storage of M = (2N)^(1/2) patterns, where N is the number of neurons. The stored patterns are shown to have basins of attraction of radius N/(2M), within which errors are corrected with probability 1 in a single update cycle. When the probe falls outside these regions, the error correction capability can still be increased to 1 by repeatedly running the network with the same probe.
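    The storage scheme can be illustrated with a Hebbian outer-product network holding mutually orthogonal ±1 patterns. Here we store M = 2 patterns for N = 8 neurons (the coding condition above allows up to M = (2N)^(1/2) = 4), so the basin radius N/(2M) = 2 comfortably covers a single flipped bit:

    ```python
    import numpy as np

    # Mutually orthogonal +/-1 patterns from a Sylvester (Hadamard) construction.
    H2 = np.array([[1, 1], [1, -1]])
    H8 = np.kron(np.kron(H2, H2), H2)      # 8x8 Hadamard matrix; rows are orthogonal
    patterns = H8[:2]                       # M = 2 stored patterns, N = 8 neurons
    N = patterns.shape[1]

    W = patterns.T @ patterns / N           # Hebbian outer-product weights

    def update(x):
        """One synchronous update cycle of the binary network."""
        return np.sign(W @ x)

    probe = patterns[1].copy()
    probe[5] *= -1                          # corrupt one bit (inside the basin)
    recalled = update(probe)                # a single cycle restores the pattern
    ```

    Because the stored patterns are orthogonal, each is an exact eigenvector-like fixed point of the update, which is what gives the clean single-cycle correction described in the abstract.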

  18. Real-time determination of fringe pattern frequencies: An application to pressure measurement

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Piroozan, Parham

    2007-05-01

    Retrieving information in real time from fringe patterns is a topic of great interest in scientific and engineering applications of optical methods. This paper presents a method for fringe frequency determination based on the capability of neural networks to recognize signals that are similar, but not identical, to the signals used to train the network. Sampled patterns are generated by calibration and stored in memory. Incoming patterns are analyzed by a back-propagation neural network at the speed of the recording device, a CCD camera. This method of information retrieval is used to measure pressures in a boundary-layer flow. The sensor combines optics and electronics to analyze dynamic pressure distributions and to feed information to a control system capable of preserving the stability of the flow.

  19. Using a neural network to proximity correct patterns written with a Cambridge electron beam microfabricator 10.5 lithography system

    NASA Astrophysics Data System (ADS)

    Cummings, K. D.; Frye, R. C.; Rietman, E. A.

    1990-10-01

    This letter describes the initial results of using a theoretical determination of the proximity function and an adaptively trained neural network to proximity-correct patterns written on a Cambridge electron beam lithography system. The methods described are complete and may be applied to any electron beam exposure system that can modify the dose during exposure. The patterns produced in resist show the effects of proximity correction versus noncorrected patterns.

  20. Different propagation speeds of recalled sequences in plastic spiking neural networks

    NASA Astrophysics Data System (ADS)

    Huang, Xuhui; Zheng, Zhigang; Hu, Gang; Wu, Si; Rasch, Malte J.

    2015-03-01

    Neural networks can generate spatiotemporal patterns of spike activity. Sequential activity learning and retrieval have been observed in many brain areas, and are crucial, for example, for coding episodic memory in the hippocampus or generating temporal patterns during song production in birds. In a recent study, a sequential activity pattern was directly entrained onto the neural activity of the primary visual cortex (V1) of rats and subsequently successfully recalled by a local and transient trigger. It was observed that the speed of activity propagation in coordinates of the retinotopically organized neural tissue was constant during retrieval, regardless of how the speed of the light stimulation sweeping across the visual field during training was varied. It is well known that spike-timing dependent plasticity (STDP) is a potential mechanism for embedding temporal sequences into neural network activity. How training and retrieval speeds relate to each other, and how network and learning parameters influence retrieval speeds, however, is not well described. Here we theoretically analyze sequential activity learning and retrieval in a recurrent neural network with realistic synaptic short-term dynamics and STDP. Testing multiple STDP rules, we confirm that sequence learning can be achieved by STDP. However, we found that a multiplicative nearest-neighbor (NN) weight update rule generated weight distributions and recall activities that best matched the experiments in V1. Using network simulations and mean-field analysis, we further investigated the learning mechanisms and the influence of network parameters on recall speeds. Our analysis suggests that a multiplicative STDP rule with dominant NN spike interaction might be implemented in V1, since recall speed was almost constant in an NMDA-dominant regime. Interestingly, in an AMPA-dominant regime, neural circuits might exhibit recall speeds that instead follow the change in stimulus speeds. This prediction could be tested in experiments.

  1. Exploring Neural Network Models with Hierarchical Memories and Their Use in Modeling Biological Systems

    NASA Astrophysics Data System (ADS)

    Pusuluri, Sai Teja

    Energy landscapes are often used as metaphors for phenomena in biology, social sciences and finance. Different methods have been implemented in the past for the construction of energy landscapes. Neural network models based on spin glass physics provide an excellent mathematical framework for the construction of energy landscapes. This framework uses a minimal number of parameters and constructs the landscape using data from the actual phenomena. In the past, neural network models were used to mimic the storage and retrieval of memories (patterns) in the brain. With recent advances in the field, these models are being used in machine learning, deep learning and the modeling of complex phenomena. Most of the past literature focuses on increasing the storage capacity and stability of stored patterns in the network, but does not study these models from a modeling perspective or an energy landscape perspective. This dissertation focuses on neural network models from both perspectives. I first show how the cellular interconversion phenomenon can be modeled as a transition between attractor states on an epigenetic landscape constructed using neural network models. The model allows the identification of a reaction coordinate of cellular interconversion by analyzing experimental and simulation time course data. Monte Carlo simulations of the model show that the initial phase of cellular interconversion is a Poisson process and the later phase is a deterministic process. Secondly, I explore the static features of landscapes generated using neural network models, such as the sizes of basins of attraction and the densities of metastable states. The simulation results show that the static landscape features are strongly dependent on the correlation strength and correlation structure between patterns. Different hierarchical structures of the correlations between patterns also affect the landscape features. These results show how the static landscape features can be controlled by adjusting the correlations between patterns. Finally, I explore the dynamical features of landscapes generated using neural network models, such as the stability of minima and the transition rates between minima. The results from this project show that the stability depends on the correlations between patterns. It is also found that the transition rates between minima strongly depend on the type of bias applied and the correlation between patterns. The results from this part of the dissertation can be useful for engineering an energy landscape without complete information about the associated minima of the landscape.

  2. Neural-Network-Development Program

    NASA Technical Reports Server (NTRS)

    Phillips, Todd A.

    1993-01-01

    NETS, software tool for development and evaluation of neural networks, provides simulation of neural-network algorithms plus computing environment for development of such algorithms. Uses back-propagation learning method for all of networks it creates. Enables user to customize patterns of connections between layers of network. Also provides features for saving, during learning process, values of weights, providing more-precise control over learning process. Written in ANSI standard C language. Machine-independent version (MSC-21588) includes only code for command-line-interface version of NETS 3.0.

  3. Prediction of Slot Shape and Slot Size for Improving the Performance of Microstrip Antennas Using Knowledge-Based Neural Networks.

    PubMed

    Khan, Taimoor; De, Asok

    2014-01-01

    In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural networks model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporated with prior knowledge can be used for predicting response in extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results.

  4. Prediction of Slot Shape and Slot Size for Improving the Performance of Microstrip Antennas Using Knowledge-Based Neural Networks

    PubMed Central

    De, Asok

    2014-01-01

    In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural networks model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporated with prior knowledge can be used for predicting response in extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results. PMID:27382616

  5. Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control

    USGS Publications Warehouse

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and that they can be more reliable and easier to implement in complex, multivariable plants.

  6. Development of neural network techniques for finger-vein pattern classification

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Da; Liu, Chiung-Tsiung; Tsai, Yi-Jang; Liu, Jun-Ching; Chang, Ya-Wen

    2010-02-01

    A personal identification system using finger-vein patterns and neural network techniques is proposed in the present study. In the proposed system, the finger-vein patterns are captured by a device that transmits near infrared light through the finger and records the patterns for signal analysis and classification. The biometric verification system combines feature extraction using principal component analysis with pattern classification using both a back-propagation network and an adaptive neuro-fuzzy inference system. Finger-vein features are first extracted by the principal component analysis method to reduce the computational burden and remove noise residing in the discarded dimensions. The features are then used in pattern classification and identification. To verify the effect of the proposed adaptive neuro-fuzzy inference system in pattern classification, it is compared with the back-propagation network. The experimental results indicate that the proposed system using the adaptive neuro-fuzzy inference system outperforms the back-propagation network for personal identification using finger-vein patterns.

  7. Neural network classification of clinical neurophysiological data for acute care monitoring

    NASA Technical Reports Server (NTRS)

    Sgro, Joseph

    1994-01-01

    The purpose of neurophysiological monitoring of the 'acute care' patient is to allow the accurate recognition of changing or deteriorating neurological function as close to the moment of occurrence as possible, thus permitting immediate intervention. Results confirm that: (1) neural networks are able to accurately identify electroencephalogram (EEG) patterns and evoked potential (EP) wave components, and to measure EP waveform latencies and amplitudes; (2) neural networks are able to accurately detect EP and EEG recordings that have been contaminated by noise; (3) the best performance was obtained consistently with the back-propagation network for EPs and the HONN for EEGs; (4) neural networks performed consistently better than the other methods evaluated; and (5) neural network EEG and EP analyses are readily performed on multichannel data.

  8. Multi-voxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    PubMed Central

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.; Munhall, Kevin G.; Cusack, Rhodri; Johnsrude, Ingrid S.

    2013-01-01

    The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multi-voxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was employed to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared to during passive listening. One network of regions appears to encode an ‘error signal’ irrespective of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a fronto-temporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Taken together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems. PMID:23467350

  9. A research using hybrid RBF/Elman neural networks for intrusion detection system secure model

    NASA Astrophysics Data System (ADS)

    Tong, Xiaojun; Wang, Zhu; Yu, Haining

    2009-10-01

    A hybrid RBF/Elman neural network model that can be employed for both anomaly detection and misuse detection is presented in this paper. IDSs using the hybrid neural network can detect temporally dispersed and collaborative attacks effectively because of the network's memory of past events. The RBF network is employed as a real-time pattern classifier and the Elman network is employed to restore the memory of past events. The IDSs using the hybrid neural network are evaluated against the intrusion detection evaluation data sponsored by the U.S. Defense Advanced Research Projects Agency (DARPA). Experimental results are presented as ROC curves, and show that IDSs using this hybrid neural network improve the detection rate and decrease the false positive rate effectively.

  10. Advanced obstacle avoidance for a laser based wheelchair using optimised Bayesian neural networks.

    PubMed

    Trieu, Hoang T; Nguyen, Hung T; Willey, Keith

    2008-01-01

    In this paper we present an advanced method of obstacle avoidance for a laser-based intelligent wheelchair using optimized Bayesian neural networks. Three neural networks are designed for three separate sub-tasks: passing through a doorway, corridor and wall following, and general obstacle avoidance. The accurate usable accessible space is determined by including the actual wheelchair dimensions in a real-time map used as input to each network. Data acquisition is performed separately to collect the patterns required for each specified sub-task. A Bayesian framework is used to determine the optimal neural network structure in each case. These networks are then trained under the supervision of the Bayesian rule. Experimental results showed that, compared to the VFH algorithm, our neural networks navigated a smoother path following a near-optimum trajectory.

  11. Applying Neural Networks in Optical Communication Systems: Possible Pitfalls

    NASA Astrophysics Data System (ADS)

    Eriksson, Tobias A.; Bulow, Henning; Leven, Andreas

    2017-12-01

    We investigate the risk of overestimating the performance gain when applying neural network based receivers in systems with pseudo random bit sequences or with limited memory depths, resulting in repeated short patterns. We show that with such sequences, a large artificial gain can be obtained, arising from pattern prediction rather than from prediction or compensation of the studied channel or phenomenon.
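
    The pitfall can be reproduced without any neural network at all: a plain k-gram lookup table, standing in for a receiver that memorises patterns, achieves near-perfect "gain" on a short repeated sequence but none on truly random data. All parameters below are illustrative.

```python
import random

def ngram_predictor_accuracy(bits, k=12):
    """Fit a k-gram lookup table on the first half of a bit sequence,
    then measure next-bit prediction accuracy on the second half."""
    half = len(bits) // 2
    table = {}
    for i in range(half - k):
        table[tuple(bits[i:i + k])] = bits[i + k]
    hits = total = 0
    for i in range(half, len(bits) - k):
        ctx = tuple(bits[i:i + k])
        if ctx in table:
            hits += int(table[ctx] == bits[i + k])
            total += 1
    return hits / max(total, 1)

random.seed(0)
pattern = [random.randint(0, 1) for _ in range(32)]   # short repeated pattern
repeated = pattern * 64                               # mimics a short PRBS
noise = [random.randint(0, 1) for _ in range(len(repeated))]

rep_acc = ngram_predictor_accuracy(repeated)  # high: pure pattern memorisation
rnd_acc = ngram_predictor_accuracy(noise)     # near 0.5: no artificial gain
print(rep_acc, rnd_acc)
```

    A neural network receiver with enough memory depth can implement exactly this kind of lookup, which is why repeated test sequences inflate its apparent gain.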

  12. A generalized locomotion CPG architecture based on oscillatory building blocks.

    PubMed

    Yang, Zhijun; França, Felipe M G

    2003-07-01

    Neural oscillation is one of the most extensively investigated topics of artificial neural networks. Scientific approaches to the functionalities of both natural and artificial intelligences are strongly related to mechanisms underlying oscillatory activities. This paper concerns itself with the assumption of the existence of central pattern generators (CPGs), which are the plausible neural architectures with oscillatory capabilities, and presents a discrete and generalized approach to the functionality of locomotor CPGs of legged animals. Based on scheduling by multiple edge reversal (SMER), a primitive and deterministic distributed algorithm, it is shown how oscillatory building block (OBB) modules can be created and, hence, how OBB-based networks can be formulated as asymmetric Hopfield-like neural networks for the generation of complex coordinated rhythmic patterns observed among pairs of biological motor neurons working during different gait patterns. It is also shown that the resulting Hopfield-like network possesses the property of reproducing the whole spectrum of different gaits intrinsic to the target locomotor CPGs. Although the new approach is not restricted to the understanding of the neurolocomotor system of any particular animal, hexapodal and quadrupedal gait patterns are chosen as illustrations given the wide interest expressed by the ongoing research in the area.

  13. Neural networks and traditional time series methods: a synergistic combination in state economic forecasts.

    PubMed

    Hansen, J V; Nelson, R D

    1997-01-01

    Ever since the initial planning for the 1997 Utah legislative session, neural-network forecasting techniques have provided valuable insights for analysts forecasting tax revenues. These revenue estimates are critically important since agency budgets, support for education, and improvements to infrastructure all depend on their accuracy. Underforecasting generates windfalls that concern taxpayers, whereas overforecasting produces budget shortfalls that cause inadequately funded commitments. The pattern-finding ability of neural networks gives insightful and alternative views of the seasonal and cyclical components commonly found in economic time series data. Two applications of neural networks to revenue forecasting clearly demonstrate how these models complement traditional time series techniques. In the first, preoccupation with a potential downturn in the economy distracts analysis based on traditional time series methods so that it overlooks an emerging new phenomenon in the data. In this case, neural networks identify the new pattern that then allows modification of the time series models and finally gives more accurate forecasts. In the second application, data structure found by traditional statistical tools allows analysts to provide neural networks with important information that the networks then use to create more accurate models. In summary, for the Utah revenue outlook, the insights that result from a portfolio of forecasts that includes neural networks exceed the understanding generated from strictly statistical forecasting techniques. In this case, the synergy clearly results in the whole of the portfolio of forecasts being more accurate than the sum of the individual parts.

  14. Artificial Neural Networks and Instructional Technology.

    ERIC Educational Resources Information Center

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  15. Recognition and classification of oscillatory patterns of electric brain activity using artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Pchelintseva, Svetlana V.; Runnova, Anastasia E.; Musatov, Vyacheslav Yu.; Hramov, Alexander E.

    2017-03-01

    In the paper we study the problem of recognizing the type of an observed object from the perceived pattern and the registered EEG data. As the stimulus we use the bistable Necker cube image; EEG recorded while the Necker cube is displayed characterizes the corresponding state of brain activity. The subject interprets the image either as a left-oriented or a right-oriented cube. To solve the recognition problem, we use artificial neural networks; to create the classifier we consider a multilayer perceptron. We examine the structure of the artificial neural network and determine the cube-recognition accuracy.

  16. Propagating waves can explain irregular neural dynamics.

    PubMed

    Keane, Adam; Gong, Pulin

    2015-01-28

    Cortical neurons in vivo fire quite irregularly. Previous studies about the origin of such irregular neural dynamics have given rise to two major models: a balanced excitation and inhibition model, and a model of highly synchronized synaptic inputs. To elucidate the network mechanisms underlying synchronized synaptic inputs and account for irregular neural dynamics, we investigate a spatially extended, conductance-based spiking neural network model. We show that propagating wave patterns with complex dynamics emerge from the network model. These waves sweep past neurons, to which they provide highly synchronized synaptic inputs. On the other hand, these patterns only emerge from the network with balanced excitation and inhibition; our model therefore reconciles the two major models of irregular neural dynamics. We further demonstrate that the collective dynamics of propagating wave patterns provides a mechanistic explanation for a range of irregular neural dynamics, including the variability of spike timing, slow firing rate fluctuations, and correlated membrane potential fluctuations. In addition, in our model, the distributions of synaptic conductance and membrane potential are non-Gaussian, consistent with recent experimental data obtained using whole-cell recordings. Our work therefore relates the propagating waves that have been widely observed in the brain to irregular neural dynamics. These results demonstrate that neural firing activity, although appearing highly disordered at the single-neuron level, can form dynamical coherent structures, such as propagating waves at the population level. Copyright © 2015 the authors 0270-6474/15/351591-15$15.00/0.

  17. The role of symmetry in neural networks and their Laplacian spectra.

    PubMed

    de Lange, Siemon C; van den Heuvel, Martijn P; de Reus, Marcel A

    2016-11-01

    Human and animal nervous systems constitute complexly wired networks that form the infrastructure for neural processing and integration of information. The organization of these neural networks can be analyzed using the so-called Laplacian spectrum, providing a mathematical tool to produce systems-level network fingerprints. In this article, we examine a characteristic central peak in the spectrum of neural networks, including anatomical brain network maps of the mouse, cat and macaque, as well as anatomical and functional network maps of human brain connectivity. We link the occurrence of this central peak to the level of symmetry in neural networks, an intriguing aspect of network organization resulting from network elements that exhibit similar wiring patterns. Specifically, we propose a measure to capture the global level of symmetry of a network and show that, for both empirical networks and network models, the height of the main peak in the Laplacian spectrum is strongly related to node symmetry in the underlying network. Moreover, examination of spectra of duplication-based model networks shows that neural spectra are best approximated using a trade-off between duplication and diversification. Taken together, our results facilitate a better understanding of neural network spectra and the importance of symmetry in neural networks. Copyright © 2016 Elsevier Inc. All rights reserved.
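
    The link between "twin" nodes with identical wiring patterns and a degenerate central eigenvalue can be checked on a toy graph. The star graph below is an illustrative extreme, not one of the paper's brain networks: its ten leaves all share the same neighbourhood, producing a Laplacian eigenvalue of high multiplicity.

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues of the graph Laplacian L = D - A."""
    return np.linalg.eigvalsh(np.diag(adj.sum(axis=1)) - adj)

# A hub wired to 10 leaves: all leaves exhibit identical wiring patterns.
n = 11
adj = np.zeros((n, n))
adj[0, 1:] = adj[1:, 0] = 1.0

eig = laplacian_spectrum(adj)
# The twin leaves contribute the eigenvalue 1.0 with multiplicity 9,
# i.e. a sharp peak in the spectrum; the remaining eigenvalues are 0 and 11.
peak = int(np.sum(np.isclose(eig, 1.0)))
print(peak)  # 9
```

    In empirical networks the symmetry is only partial, so the peak is less extreme, but the mechanism is the same: similar wiring patterns create near-degenerate eigenvalues.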

  18. Edge-preserving image compression for magnetic-resonance images using dynamic associative neural networks (DANN)-based neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Tat C.; Kabuka, Mansur R.

    1994-05-01

    With the tremendous growth in imaging applications and the development of filmless radiology, the need for compression techniques that can achieve high compression ratios with user-specified distortion rates becomes necessary. Boundaries and edges in the tissue structures are vital for detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge-preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN), to provide high compression ratios with user-specified distortion rates in an adaptive compression system well-suited to parallel implementations. Improvements to DANN-based training through the use of a variance classifier for controlling a bank of neural networks speed convergence and allow the use of higher compression ratios for `simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images through varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.

  19. Neural networks application to divergence-based passive ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1992-01-01

    The purpose of this report is to summarize the state of knowledge and outline the planned work in the divergence-based/neural-network approach to the problem of passive ranging derived from optical flow. Work in this and closely related areas is reviewed in order to provide the necessary background for further developments. New ideas about devising a monocular passive-ranging system are then introduced. It is shown that image-plane divergence is independent of image-plane location with respect to the focus of expansion and of camera maneuvers because it directly measures the object's expansion which, in turn, is related to the time-to-collision. Thus, a divergence-based method has the potential of providing a reliable range complementing other monocular passive-ranging methods which encounter difficulties in image areas close to the focus of expansion. Image-plane divergence can be thought of as a spatial/temporal pattern. A neural network realization was chosen for this task because neural networks have generally performed well in various other pattern recognition applications. The main goal of this work is to teach a neural network to derive the divergence from the imagery.

  20. Feature to prototype transition in neural networks

    NASA Astrophysics Data System (ADS)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology. Simons Center for Systems Biology.
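
    A minimal sketch of such a dense associative memory, assuming the rectified-polynomial energy F(x) = max(x, 0)^n of the pattern overlaps described above; the pattern sizes, corruption level, and asynchronous update schedule below are illustrative choices, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

def dam_recall(patterns, probe, n=3, sweeps=5):
    """Dense associative memory update: each spin takes the value that
    maximises sum_mu F(<pattern_mu, s>), with F(x) = max(x, 0)**n
    replacing the quadratic Hopfield term."""
    F = lambda z: np.maximum(z, 0.0) ** n
    s = probe.astype(float).copy()
    for _ in range(sweeps):
        for i in range(s.size):
            sp, sm = s.copy(), s.copy()
            sp[i], sm[i] = 1.0, -1.0
            # Choose the spin value with the lower energy (larger sum of F).
            s[i] = 1.0 if F(patterns @ sp).sum() >= F(patterns @ sm).sum() else -1.0
    return s

patterns = rng.choice([-1.0, 1.0], size=(2, 20))   # stored bipolar memories
probe = patterns[0].copy()
probe[:2] *= -1                                    # corrupt two bits
out = dam_recall(patterns, probe)
overlap = (out * patterns[0]).mean()               # 1.0 means perfect recall
print(overlap)
```

    Raising n sharpens F, which is what lets such models store many more memories than the quadratic (n = 2) Hopfield case.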

  1. Pencil-and-Paper Neural Networks: An Undergraduate Laboratory Exercise in Computational Neuroscience

    PubMed Central

    Crisp, Kevin M.; Sutter, Ellen N.; Westerberg, Jacob A.

    2015-01-01

    Although it has been more than 70 years since McCulloch and Pitts published their seminal work on artificial neural networks, such models remain primarily in the domain of computer science departments in undergraduate education. This is unfortunate, as simple network models offer undergraduate students a much-needed bridge between cellular neurobiology and processes governing thought and behavior. Here, we present a very simple laboratory exercise in which students constructed, trained and tested artificial neural networks by hand on paper. They explored a variety of concepts, including pattern recognition, pattern completion, noise elimination and stimulus ambiguity. Learning gains were evident in changes in the use of language when writing about information processing in the brain. PMID:26557791
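
    A McCulloch-Pitts unit is small enough to evaluate on paper, in the spirit of the exercise; the weights and threshold below are an illustrative choice that makes the unit fire only for the pattern (1, 0, 1).

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: fire (1) iff the weighted sum of the
    inputs reaches the threshold -- easily checked by hand."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# A unit that recognises the pattern (1, 0, 1): the negative weight on
# the middle input penalises the "wrong" bit being active.
weights, threshold = [1, -1, 1], 2
for pattern in [(1, 0, 1), (1, 1, 1), (0, 0, 1), (1, 0, 0)]:
    print(pattern, mp_neuron(pattern, weights, threshold))
```

    Students can verify each line with pencil and paper: only (1, 0, 1) reaches the weighted sum of 2, so only it produces output 1.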

  2. Signal processing method and system for noise removal and signal extraction

    DOEpatents

    Fu, Chi Yung; Petrich, Loren

    2009-04-14

    A signal processing method and system combining smooth-level wavelet pre-processing together with artificial neural networks, all in the wavelet domain, for signal denoising and extraction. Upon receiving a signal corrupted with noise, an n-level decomposition of the signal is performed using a discrete wavelet transform to produce a smooth component and a rough component for each decomposition level. The nth-level smooth component is then inputted into a corresponding neural network pre-trained to filter out noise in that component by pattern recognition in the wavelet domain. Additional rough components, beginning at the highest level, may also be retained and inputted into corresponding neural networks pre-trained to filter out noise in those components, also by pattern recognition in the wavelet domain. In any case, an inverse discrete wavelet transform is performed on the combined output from all the neural networks to recover a clean signal back in the time domain.
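
    A one-level Haar version of the patent's scheme can be sketched as follows; simple coefficient thresholding stands in for the pre-trained neural networks, and the signal and noise levels are illustrative assumptions.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar split into a smooth (average) and a rough (detail)
    component, a minimal stand-in for the n-level decomposition."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    r = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return s, r

def haar_idwt(s, r):
    """Inverse transform: perfectly reconstructs the original signal."""
    x = np.empty(2 * s.size)
    x[0::2] = (s + r) / np.sqrt(2.0)
    x[1::2] = (s - r) / np.sqrt(2.0)
    return x

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
clean = np.sin(3 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

s, r = haar_dwt(noisy)
# Thresholding the rough coefficients stands in for the trained network:
# a smooth signal has small details, so large ones are kept, noise dropped.
r_denoised = np.where(np.abs(r) > 0.5, r, 0.0)
recovered = haar_idwt(s, r_denoised)

err_noisy = np.mean((noisy - clean) ** 2)
err_recovered = np.mean((recovered - clean) ** 2)
print(err_recovered < err_noisy)  # True: denoising in the wavelet domain helps
```

    The patent's contribution is precisely to replace this crude threshold with networks trained to recognise noise patterns in each wavelet component.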

  3. Neural Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Patrick I.

    2003-09-23

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. Many techniques are used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. The word artificial in that definition refers to computer programs that perform calculations during the learning process. In short, a neural network learns from representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. 
The human brain contains billions of neural cells that are responsible for processing information [2]. Each one of these cells acts as a simple processor. When individual cells interact with one another, the complex abilities of the brain are made possible. In neural networks, the input data are processed by a propagation function that adds up the values of all the incoming data. The resulting value is then compared with a threshold: it must exceed the activation function value in order to become output. The activation function is a mathematical function that a neuron uses to produce an output based on its input value [8]. Figure 1 depicts this process. Neural networks usually have three layers: an input, a hidden, and an output layer. Together these layers create the end result of the neural network. A real-world example is a child associating the word dog with a picture. The child says dog and simultaneously looks at a picture of a dog. The input is the spoken word ''dog'', the hidden layer is the brain's processing, and the output is the category of the word dog based on the picture. This illustration describes how a neural network functions.
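
    The input → hidden → output pass described above can be written out directly; the weights below are hand-set (not learned) and chosen so the toy network computes XOR, illustrating how the propagation function and activation function combine.

```python
import math

def forward(x, W1, b1, W2, b2):
    """One pass through input -> hidden -> output: each unit sums its
    weighted inputs (the propagation function) and squashes the total
    with a sigmoid activation to produce its output."""
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    hidden = [sig(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return [sig(sum(w * h for w, h in zip(row, hidden)) + b)
            for row, b in zip(W2, b2)]

# Hand-set weights (illustrative, not learned) realising XOR:
W1, b1 = [[20, 20], [-20, -20]], [-10, 30]   # hidden units: OR and NAND
W2, b2 = [[20, 20]], [-30]                   # output unit: AND of the two
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, round(forward(x, W1, b1, W2, b2)[0]))
```

    In training, these weights would be found automatically from representative examples rather than set by hand.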

  4. The use of global image characteristics for neural network pattern recognitions

    NASA Astrophysics Data System (ADS)

    Kulyas, Maksim O.; Kulyas, Oleg L.; Loshkarev, Aleksey S.

    2017-04-01

    A recognition system is considered in which information is conveyed by images of symbols generated by a television camera. The coefficients of a two-dimensional Fourier transform, generated in a special way, serve as object descriptors. For the classification task, a single-layer neural network trained on reference images is used. Fast learning of the neural network, with a single calculation of the coefficients per neuron, is applied.
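
    One way such global Fourier descriptors can work is sketched below; the symbols, descriptor size, and nearest-reference decision rule are illustrative assumptions, not the paper's exact construction. Taking coefficient magnitudes discards phase, so the descriptor tolerates circular shifts of the symbol.

```python
import numpy as np

def descriptor(img, k=4):
    """Global image characteristics: magnitudes of the low-frequency
    2D Fourier coefficients of the symbol image."""
    return np.abs(np.fft.fft2(img))[:k, :k].ravel()

def classify(img, refs):
    """Assign the label of the reference image with the closest descriptor."""
    d = descriptor(img)
    return min(refs, key=lambda label: np.linalg.norm(d - descriptor(refs[label])))

vbar = np.zeros((8, 8)); vbar[:, 3] = 1.0   # a vertical-bar "symbol"
hbar = np.zeros((8, 8)); hbar[3, :] = 1.0   # a horizontal-bar "symbol"
refs = {"vertical": vbar, "horizontal": hbar}

shifted = np.roll(vbar, 2, axis=1)          # camera shift of the symbol
print(classify(shifted, refs))              # vertical
```

    Because |FFT| is invariant to circular shifts, the shifted bar matches its reference exactly, which is the appeal of global descriptors over raw pixels.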

  5. Threat Based Risk Assessment for Enterprise Networks

    DTIC Science & Technology

    2016-02-15

    served as the program chair of the Research in Attacks, Intrusions, and Defenses workshop; the Neural Information Processing Systems (NIPS) annual...Threat-Based Risk Assessment for Enterprise Networks Richard P. Lippmann and James F. Riordan Protecting enterprise networks requires...include aids for the hearing impaired, speech recognition, pattern classification, neural networks, and cybersecurity. He has taught three courses

  6. Information recall using relative spike timing in a spiking neural network.

    PubMed

    Sterne, Philip

    2012-08-01

    We present a neural network that is capable of completing and correcting a spiking pattern given only a partial, noisy version. It operates in continuous time and represents information using the relative timing of individual spikes. The network is capable of correcting and recalling multiple patterns simultaneously. We analyze the network's performance in terms of information recall. We explore two measures of the capacity of the network: one that values the accurate recall of individual spike times and another that values only the presence or absence of complete patterns. Both measures of information are found to scale linearly in both the number of neurons and the period of the patterns, suggesting these are natural measures of network information. We show a smooth transition from encodings that provide precise spike times to flexible encodings that can encode many scenes. This makes it plausible that many diverse tasks could be learned with such an encoding.

  7. Architecture and biological applications of artificial neural networks: a tuberculosis perspective.

    PubMed

    Darsey, Jerry A; Griffin, William O; Joginipelli, Sravanthi; Melapu, Venkata Kiran

    2015-01-01

    Advancement of science and technology has prompted researchers to develop new intelligent systems that can solve a variety of problems such as pattern recognition, prediction, and optimization. The ability of the human brain to learn in a fashion that tolerates noise and error has attracted many researchers and provided the starting point for the development of artificial neural networks: the intelligent systems. Intelligent systems can acclimatize to the environment or data and can maximize the chances of success or improve the efficiency of a search. Due to massive parallelism with large numbers of interconnected processors and their ability to learn from the data, neural networks can solve a variety of challenging computational problems. Neural networks have the ability to derive meaning from complicated and imprecise data; they are used in detecting patterns and trends that are too complex for humans or other computer systems. Solutions to the toughest problems will not be found through one narrow specialization; therefore we need to combine interdisciplinary approaches to discover the solutions to a variety of problems. Many researchers in different disciplines such as medicine, bioinformatics, molecular biology, and pharmacology have successfully applied artificial neural networks. This chapter helps the reader in understanding the basics of artificial neural networks, their applications, and methodology; it also outlines the network learning process and architecture. We present a brief outline of the application of neural networks to medical diagnosis, drug discovery, gene identification, and protein structure prediction. We conclude with a summary of the results from our study on tuberculosis data using neural networks, in diagnosing active tuberculosis, and predicting chronic vs. infiltrative forms of tuberculosis.

  8. Evaluation of Sex-Specific Movement Patterns in Judo Using Probabilistic Neural Networks.

    PubMed

    Miarka, Bianca; Sterkowicz-Przybycien, Katarzyna; Fukuda, David H

    2017-10-01

    The purpose of the present study was to create a probabilistic neural network to clarify the understanding of movement patterns in international judo competitions by gender. Analysis of 773 male and 638 female bouts was utilized to identify movements during the approach, gripping, attack (including biomechanical designations), groundwork, defense, and pause phases. Probabilistic neural network and chi-square (χ²) tests modeled and compared frequencies (p ≤ .05). Women (mean [interquartile range]: 9.9 [4; 14]) attacked more than men (7.0 [3; 10]) while attempting a greater number of arm/leg lever (women: 2.7 [1; 6]; men: 4.0 [0; 4]) and trunk/leg lever (women: 0.8 [0; 1]; men: 2.4 [0; 4]) techniques but fewer maximal length-moment arm techniques (women: 0.7 [0; 1]; men: 1.0 [0; 2]). Male athletes displayed one-handed gripping of the back and sleeve, whereas female athletes executed a greater number of groundwork techniques. An optimized probabilistic neural network model, using patterns from the gripping, attack, groundwork, and pause phases, produced an overall prediction accuracy of 76% for discrimination between men and women.
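
    The classifier family used here can be sketched compactly: a probabilistic neural network places one Gaussian kernel on each training exemplar, sums the kernels per class, and picks the class with the largest estimated density. The two-feature data below are illustrative stand-ins, not the study's movement counts.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Probabilistic neural network sketch: pattern layer = one Gaussian
    kernel per exemplar, summation layer = per-class sum, decision
    layer = argmax of the class densities."""
    scores = {}
    for label in set(train_y):
        pts = train_X[np.asarray(train_y) == label]
        d2 = ((pts - x) ** 2).sum(axis=1)
        scores[label] = np.exp(-d2 / (2.0 * sigma ** 2)).sum()
    return max(scores, key=scores.get)

# Toy stand-in for per-bout movement-frequency features (illustrative):
train_X = np.array([[9.9, 2.7], [9.0, 3.0], [7.0, 4.0], [6.5, 3.8]])
train_y = ["women", "women", "men", "men"]
print(pnn_classify(np.array([9.5, 2.8]), train_X, train_y))  # women
```

    Because the PNN is non-parametric, it needs no iterative training, only a choice of the smoothing width sigma, which is typically what is optimised.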

  9. Coronary Artery Diagnosis Aided by Neural Network

    NASA Astrophysics Data System (ADS)

    Stefko, Kamil

    2007-01-01

    Coronary artery disease is due to atheromatous narrowing and subsequent occlusion of the coronary vessel. Application of an optimised feed-forward multi-layer back-propagation neural network (MLBP) for detection of narrowing in coronary artery vessels is presented in this paper. The research was performed using 580 data records from a traditional ECG exercise test confirmed by coronary arteriography results. Each record of the training database included a description of the state of a patient, providing input data for the neural network. The level and slope of the ST segment of a 12-lead ECG signal recorded at rest and after effort (48 floating-point values) were the main component of the input data for the neural network. Coronary arteriography results (which verified the existence or absence of more than 50% stenosis of the particular coronary vessels) were used as the correct neural network training output pattern. More than 96% of cases were correctly recognised by the especially optimised and thoroughly verified neural network. The leave-one-out method was used for neural network verification, so all 580 data records could be used for training as well as for verification of the neural network.
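
    The leave-one-out protocol mentioned above is easy to state in code; a 1-nearest-neighbour rule and the tiny 1-D records below stand in for the optimised MLBP network and the 580 ECG records.

```python
def loo_accuracy(X, y, fit, predict):
    """Leave-one-out verification: train on every record except one,
    test on the held-out record, and repeat for all records, so the
    whole data set serves for both training and verification."""
    hits = 0
    for i in range(len(X)):
        model = fit(X[:i] + X[i + 1:], y[:i] + y[i + 1:])
        hits += predict(model, X[i]) == y[i]
    return hits / len(X)

# A 1-nearest-neighbour stand-in for the trained MLBP network:
fit = lambda X, y: (X, y)
def predict(model, x):
    X_tr, y_tr = model
    return min(zip(X_tr, y_tr), key=lambda p: abs(p[0][0] - x[0]))[1]

X = [[0.0], [0.1], [1.0], [1.1]]                  # illustrative 1-D "records"
y = ["healthy", "healthy", "stenosis", "stenosis"]
print(loo_accuracy(X, y, fit, predict))  # 1.0
```

    With 580 records this means training 580 models, which is expensive but gives the least biased accuracy estimate from a small clinical data set.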

  10. Application of Artificial Neural Networks to the Design of Turbomachinery Airfoils

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri

    1997-01-01

    Artificial neural networks are widely used in engineering applications, such as control, pattern recognition, plant modeling and condition monitoring, to name just a few. In this seminar we will explore the possibility of applying neural networks to aerodynamic design, in particular, the design of turbomachinery airfoils. The principal idea behind this effort is to represent the design space using a neural network (within some parameter limits), and then to employ an optimization procedure to search this space for a solution that exhibits optimal performance characteristics. Results obtained for design problems in two spatial dimensions will be presented.

  11. A mixed-signal implementation of a polychronous spiking neural network with delay adaptation

    PubMed Central

    Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan C.; van Schaik, André

    2014-01-01

    We present a mixed-signal implementation of a re-configurable polychronous spiking neural network capable of storing and recalling spatio-temporal patterns. The proposed neural network contains one neuron array and one axon array. Spike Timing Dependent Delay Plasticity is used to fine-tune delays and add dynamics to the network. In our mixed-signal implementation, the neurons and axons have been implemented as both analog and digital circuits. The system thus consists of one FPGA, containing the digital neuron array and the digital axon array, and one analog IC containing the analog neuron array and the analog axon array. The system can be easily configured to use different combinations of each. We present and discuss the experimental results of all combinations of the analog and digital axon arrays and the analog and digital neuron arrays. The test results show that the proposed neural network is capable of successfully recalling more than 85% of stored patterns using both analog and digital circuits. PMID:24672422

  12. A mixed-signal implementation of a polychronous spiking neural network with delay adaptation.

    PubMed

    Wang, Runchun M; Hamilton, Tara J; Tapson, Jonathan C; van Schaik, André

    2014-01-01

    We present a mixed-signal implementation of a re-configurable polychronous spiking neural network capable of storing and recalling spatio-temporal patterns. The proposed neural network contains one neuron array and one axon array. Spike Timing Dependent Delay Plasticity is used to fine-tune delays and add dynamics to the network. In our mixed-signal implementation, the neurons and axons have been implemented as both analog and digital circuits. The system thus consists of one FPGA, containing the digital neuron array and the digital axon array, and one analog IC containing the analog neuron array and the analog axon array. The system can be easily configured to use different combinations of each. We present and discuss the experimental results of all combinations of the analog and digital axon arrays and the analog and digital neuron arrays. The test results show that the proposed neural network is capable of successfully recalling more than 85% of stored patterns using both analog and digital circuits.

  13. A Decade of Neural Networks: Practical Applications and Prospects

    NASA Technical Reports Server (NTRS)

    Kemeny, Sabrina E.

    1994-01-01

    The Jet Propulsion Laboratory Neural Network Workshop, sponsored by NASA and DOD, brings together sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and application prospects. While the speed and computing power of microprocessors continue to grow at an ever-increasing pace, the demand to intelligently and adaptively deal with the complex, fuzzy, and often ill-defined world around us remains to a large extent unaddressed. Powerful, highly parallel computing paradigms such as neural networks promise to have a major impact in addressing these needs. Papers in the workshop proceedings highlight benefits of neural networks in real-world applications compared to conventional computing techniques. Topics include fault diagnosis, pattern recognition, and multiparameter optimization.

  14. Hetero-association for pattern translation

    NASA Astrophysics Data System (ADS)

    Yu, Francis T. S.; Lu, Thomas T.; Yang, Xiangyang

    1991-09-01

    A hetero-association neural network using an interpattern association algorithm is presented. By using simple logical rules, a hetero-association memory can be constructed based on the association between the input-output reference patterns. For optical implementation, a compact-size liquid crystal television neural network is used. Translations between English letters and Chinese characters, as well as Arabic and Chinese numerals, are demonstrated. The authors have shown that the hetero-association model can perform more effectively in comparison to the Hopfield model in retrieving large numbers of similar patterns.
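
    An outer-product (Hebbian) hetero-associative memory can be sketched as follows; the bipolar "letter" and "character" codes below are illustrative, not the paper's English-Chinese pairs, and the interpattern association algorithm itself uses additional logical rules beyond this baseline.

```python
import numpy as np

# Bipolar codes for two input "letters" and their associated output "characters".
x1 = np.array([1, 1, -1, -1, 1, -1]); y1 = np.array([1, -1, 1, -1])
x2 = np.array([-1, 1, 1, -1, -1, 1]); y2 = np.array([-1, 1, 1, 1])

# Hebbian hetero-association: sum of outer products maps each x to its y.
W = np.outer(y1, x1) + np.outer(y2, x2)

def hetero_recall(x):
    """Translate an input pattern into its associated output pattern."""
    return np.sign(W @ x)

noisy_x1 = x1.copy(); noisy_x1[0] = -noisy_x1[0]    # one corrupted bit
print(np.array_equal(hetero_recall(x1), y1))        # True
print(np.array_equal(hetero_recall(noisy_x1), y1))  # True: noise tolerated
```

    Unlike the auto-associative Hopfield model, the output here lives in a different pattern space than the input, which is what enables translation between symbol sets.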

  15. Implementing Signature Neural Networks with Spiking Neurons

    PubMed Central

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm—i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data—to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. 
As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence of inhibitory connections. These parameters also modulate the memory capabilities of the network. The dynamical modes observed in the different informational dimensions in a given moment are independent and they only depend on the parameters shaping the information processing in this dimension. In view of these results, we argue that plasticity mechanisms inside individual cells and multicoding strategies can provide additional computational properties to spiking neural networks, which could enhance their capacity and performance in a wide variety of real-world tasks. PMID:28066221

  16. Implementing Signature Neural Networks with Spiking Neurons.

    PubMed

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm (i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data) to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces.
As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence of inhibitory connections. These parameters also modulate the memory capabilities of the network. The dynamical modes observed in the different informational dimensions in a given moment are independent and they only depend on the parameters shaping the information processing in this dimension. In view of these results, we argue that plasticity mechanisms inside individual cells and multicoding strategies can provide additional computational properties to spiking neural networks, which could enhance their capacity and performance in a wide variety of real-world tasks.

  17. Optical interconnections and networks; Proceedings of the Meeting, The Hague, Netherlands, Mar. 14, 15, 1990

    NASA Technical Reports Server (NTRS)

    Bartelt, Hartmut (Editor)

    1990-01-01

    The conference presents papers on interconnections, clock distribution, neural networks, and components and materials. Particular attention is given to a comparison of optical and electrical data interconnections at the board and backplane levels, a wafer-level optical interconnection network layout, an analysis and simulation of photonic switch networks, and the integration of picosecond GaAs photoconductive devices with silicon circuits for optical clocking and interconnects. Consideration is also given to the optical implementation of neural networks, invariance in an optoelectronic implementation of neural networks, and the recording of reversible patterns in polymer lightguides.

  18. Comparison of Computational-Model and Experimental-Example Trained Neural Networks for Processing Speckled Fringe Patterns

    NASA Technical Reports Server (NTRS)

    Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.

    1998-01-01

    The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model-generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.

  19. Comparison of Computational-Model and Experimental-Example Trained Neural Networks for Processing Speckled Fringe Patterns

    NASA Technical Reports Server (NTRS)

    Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.

    1998-01-01

    The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model-generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.

  20. Application of artificial neural networks with backpropagation technique in the financial data

    NASA Astrophysics Data System (ADS)

    Jaiswal, Jitendra Kumar; Das, Raja

    2017-11-01

    The application of neural networks has proliferated across multiple disciplines in recent decades because of their powerful, parameter-controlled capabilities for pattern recognition and classification. They are also widely applied for forecasting in numerous domains. Since financial data have become readily available through the involvement of computers and computing systems in stock markets throughout the world, researchers have developed numerous techniques and algorithms to analyze data from this sector. In this paper, we apply a neural network with the backpropagation technique to find patterns in financial data and to predict stock values.

  1. Neural network approach to proximity effect corrections in electron-beam lithography

    NASA Astrophysics Data System (ADS)

    Frye, Robert C.; Cummings, Kevin D.; Rietman, Edward A.

    1990-05-01

    The proximity effect, caused by electron beam backscattering during resist exposure, is an important concern in writing submicron features. It can be compensated by appropriate local changes in the incident beam dose, but computation of the optimal correction usually requires a prohibitively long time. We present an example of such a computation on a small test pattern, which we performed by an iterative method. We then used this solution as a training set for an adaptive neural network. After training, the network computed the same correction as the iterative method, but in a much shorter time. Correcting the image with a software based neural network resulted in a decrease in the computation time by a factor of 30, and a hardware based network enhanced the computation speed by more than a factor of 1000. Both methods had an acceptably small error of 0.5% compared to the results of the iterative computation. Additionally, we verified that the neural network correctly generalized the solution of the problem to include patterns not contained in its training set.
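
The speed-up strategy described above (solve the correction once by an expensive iterative method, then train a network on that solution and use it as a fast surrogate) can be sketched with a toy model. The fixed-point equation, network size, and learning rate below are illustrative assumptions, not the authors' actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the iterative correction: given a local pattern density
# d in [0, 1], the backscatter-compensated dose is found by fixed-point
# iteration of  dose = 1 / (1 + eta * d * dose)  (a hypothetical model).
def iterative_dose(d, eta=0.7, iters=50):
    dose = np.ones_like(d)
    for _ in range(iters):
        dose = 1.0 / (1.0 + eta * d * dose)
    return dose

# Training set: the iterative solution sampled on a small test pattern.
d_train = rng.uniform(0.0, 1.0, size=(256, 1))
y_train = iterative_dose(d_train)

# One-hidden-layer network trained by plain gradient descent on MSE.
W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(d_train @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y_train
    gh = err @ W2.T * (1 - h ** 2)           # backpropagated error
    W2 -= lr * h.T @ err / len(d_train); b2 -= lr * err.mean(0)
    W1 -= lr * d_train.T @ gh / len(d_train); b1 -= lr * gh.mean(0)

# After training, the network evaluates in one pass instead of 50 iterations.
d_test = np.array([[0.2], [0.8]])
approx = np.tanh(d_test @ W1 + b1) @ W2 + b2
exact = iterative_dose(d_test)
print(np.max(np.abs(approx - exact)))  # residual of surrogate vs. iteration
```

The single forward pass replaces the whole iteration loop, which is the source of the speed-up the record reports; the hardware version simply evaluates the same trained weights in parallel.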

  2. Identification of Abnormal System Noise Temperature Patterns in Deep Space Network Antennas Using Neural Network Trained Fuzzy Logic

    NASA Technical Reports Server (NTRS)

    Lu, Thomas; Pham, Timothy; Liao, Jason

    2011-01-01

    This paper presents the development of a fuzzy logic function trained by an artificial neural network to classify the system noise temperature (SNT) of antennas in the NASA Deep Space Network (DSN). The SNT data were classified into normal, marginal, and abnormal classes. The irregular SNT patterns were further correlated with link margin and weather data. A reasonably good correlation is detected among high SNT, low link margin, and bad weather; however, we also observed some unexpected non-correlations that merit further study.

  3. CNN: a speaker recognition system using a cascaded neural network.

    PubMed

    Zaki, M; Ghalwash, A; Elkouny, A A

    1996-05-01

    The main emphasis of this paper is an approach that combines supervised and unsupervised neural network models for speaker recognition. To enhance overall recognition performance, the proposed strategy integrates the two techniques into one global model, called the cascaded model. We first present a simple conventional technique based on the distance between a test vector and a reference vector for each speaker in the population. This distance metric has the property of weighting down the components in those directions along which the intraspeaker variance is large. The reason for presenting this method is to clarify the discrepancy in performance between the conventional and neural network approaches. We then introduce an unsupervised learning technique, represented by the winner-take-all model, as a means of recognition. Based on several tests that were conducted, and to enhance this model's performance on noisy patterns, we precede it with a supervised learning model, the pattern association model, which acts as a filtration stage. This work includes the design and implementation of both conventional and neural network approaches to recognize speakers' templates, which are introduced to the system via a voice master card and preprocessed before extracting the features used in recognition. The conclusion indicates that the neural network system performs better than the conventional one, degrading gracefully on noisy patterns and achieving higher performance on noise-free patterns.
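
The variance-weighted distance metric mentioned above (components with large intraspeaker variance are weighted down) can be sketched as follows; the speaker features, dimensions, and statistics are hypothetical stand-ins for the paper's actual front end:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical feature vectors: 3 speakers, 20 utterances each, 4 features.
# Dimensions 1 and 3 are deliberately noisy (large intraspeaker variance).
features = {s: rng.normal(loc=s, scale=[0.1, 2.0, 0.1, 2.0], size=(20, 4))
            for s in range(3)}

# Reference vector per speaker and pooled per-dimension intraspeaker variance.
refs = {s: x.mean(axis=0) for s, x in features.items()}
var = np.mean([x.var(axis=0) for x in features.values()], axis=0)

def weighted_distance(test_vec, ref_vec):
    # Dividing by the variance weights down the high-variance components,
    # mirroring the metric described in the abstract.
    return np.sqrt(np.sum((test_vec - ref_vec) ** 2 / var))

def recognize(test_vec):
    return min(refs, key=lambda s: weighted_distance(test_vec, refs[s]))

probe = features[1][0] + rng.normal(0, 0.05, 4)  # noisy utterance, speaker 1
print(recognize(probe))
```

Because the noisy dimensions contribute little to the weighted distance, classification is driven by the stable dimensions, which is exactly why this metric tolerates large intraspeaker variability.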

  4. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.

    PubMed

    Zenke, Friedemann; Ganguli, Surya

    2018-06-01

    A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
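
A minimal sketch of the surrogate gradient idea behind SuperSpike: the hard spiking nonlinearity is kept in the forward pass, while its ill-defined derivative is replaced by a smooth function of the membrane voltage during learning. The fast-sigmoid form and parameter values below are illustrative:

```python
import numpy as np

beta, theta = 1.0, 1.0  # surrogate steepness and firing threshold (illustrative)

def spike(u):
    # Forward pass: a hard threshold on membrane voltage u.
    return (u >= theta).astype(float)

def surrogate_grad(u):
    # Backward pass: derivative of a fast sigmoid, peaked at the threshold
    # and nonzero in its neighborhood, standing in for the step's derivative.
    return 1.0 / (beta * np.abs(u - theta) + 1.0) ** 2

u = np.linspace(-2, 4, 7)
print(spike(u))               # hard step at the threshold
print(surrogate_grad(theta))  # surrogate is maximal exactly at threshold
```

The step function's true derivative is zero almost everywhere, so gradient descent would never update the weights; substituting the surrogate only in the backward pass is what makes multilayer training of deterministic integrate-and-fire units possible.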

  5. A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242)

    PubMed Central

    Dülger, L. Canan; Kapucu, Sadettin

    2016-01-01

    This paper presents a novel inverse kinematics solution for a robotic arm based on an artificial neural network (ANN) architecture. The motion of the robotic arm is controlled by the kinematics of the ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of feedback of the robotic arm's current joint-angle configuration, as well as the desired position and orientation, in the input pattern of the neural network, whereas the traditional ANN has only the desired position and orientation of the end effector in the input pattern. In this paper, a six-DOF Denso robotic arm with a gripper is controlled by the ANN. Comprehensive experimental results prove the applicability and efficiency of the proposed approach in robotic motion control. The inclusion of the current joint-angle configuration in the ANN significantly increased the accuracy of the ANN's estimation of the joint-angle outputs. The new controller design has advantages over existing techniques in minimizing position error in unconventional tasks and in increasing the accuracy of the ANN's estimation of the robot's joint angles. PMID:27610129

  6. A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242).

    PubMed

    Almusawi, Ahmed R J; Dülger, L Canan; Kapucu, Sadettin

    2016-01-01

    This paper presents a novel inverse kinematics solution for a robotic arm based on an artificial neural network (ANN) architecture. The motion of the robotic arm is controlled by the kinematics of the ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of feedback of the robotic arm's current joint-angle configuration, as well as the desired position and orientation, in the input pattern of the neural network, whereas the traditional ANN has only the desired position and orientation of the end effector in the input pattern. In this paper, a six-DOF Denso robotic arm with a gripper is controlled by the ANN. Comprehensive experimental results prove the applicability and efficiency of the proposed approach in robotic motion control. The inclusion of the current joint-angle configuration in the ANN significantly increased the accuracy of the ANN's estimation of the joint-angle outputs. The new controller design has advantages over existing techniques in minimizing position error in unconventional tasks and in increasing the accuracy of the ANN's estimation of the robot's joint angles.

  7. Data-driven inference of network connectivity for modeling the dynamics of neural codes in the insect antennal lobe

    PubMed Central

    Shlizerman, Eli; Riffell, Jeffrey A.; Kutz, J. Nathan

    2014-01-01

    The antennal lobe (AL), the olfactory processing center in insects, is able to process stimuli into distinct neural activity patterns, called olfactory neural codes. To model their dynamics, we perform multichannel recordings from the projection neurons in the AL driven by different odorants. We then derive a dynamic neuronal network from the electrophysiological data. The network consists of lateral-inhibitory neurons and excitatory neurons (modeled as firing-rate units) and is capable of producing unique olfactory neural codes for the tested odorants. To construct the network, we (1) design a projection, an odor space, for the neural recordings from the AL, which discriminates between the trajectories of distinct odorants; (2) characterize scent recognition, i.e., decision-making based on olfactory signals; and (3) infer the wiring of the neural circuit, i.e., the connectome of the AL. We show that the constructed model is consistent with biological observations, such as contrast enhancement and robustness to noise. The study suggests a data-driven approach to a key biological question: identifying how lateral inhibitory neurons can be wired to excitatory neurons to permit robust activity patterns. PMID:25165442
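
The firing-rate dynamics with lateral inhibition described above can be sketched in a two-unit toy model (illustrative parameters, not the fitted AL connectome from the paper):

```python
import numpy as np

# Two excitatory projection-neuron units coupled by lateral inhibition.
tau, dt = 10.0, 0.1          # time constant and Euler step (arbitrary units)
W = np.array([[0.0, -0.6],   # unit 0 is inhibited by unit 1
              [-0.6, 0.0]])  # unit 1 is inhibited by unit 0
I = np.array([1.0, 0.4])     # odor-driven input, stronger to unit 0

# Euler integration of the firing-rate equation
#   tau * dr/dt = -r + [W r + I]_+   (rectified-linear rate units)
r = np.zeros(2)
for _ in range(5000):
    r += dt / tau * (-r + np.maximum(W @ r + I, 0.0))

# Lateral inhibition sharpens the modest input difference into a near
# winner-take-all code: contrast enhancement.
print(r)
```

With these parameters the fixed point is r = [1, 0]: the weakly driven unit is fully suppressed, giving a concrete instance of the contrast enhancement the model reproduces.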

  8. Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.

    PubMed

    Hardy, N F; Buonomano, Dean V

    2018-02-01

    Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency-a measure of network interconnectedness-decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.

  9. Applications of neural networks in training science.

    PubMed

    Pfeiffer, Mark; Hohmann, Andreas

    2012-04-01

    Training science views itself as an integrated and applied science, developing practical measures founded on scientific method. Therefore, it demands consideration of a wide spectrum of approaches and methods. Especially in the field of competitive sports, research questions are usually located in complex environments, so that mainly field studies are drawn upon to obtain broad external validity. Here, the interrelations between different variables or variable sets are mostly of a nonlinear character. In these cases, methods like neural networks, e.g., the pattern-recognizing method of Self-Organizing Kohonen Feature Maps or similar instruments for identifying interactions, can be successfully applied to analyze data. Following on from a classification of data-analysis methods in training-science research, the aim of this contribution is to give examples from varied sports in which network approaches can be effectively used in training science. First, two examples are given in which neural networks are employed for pattern recognition. While one investigation deals with the detection of sporting talent in swimming, the other is located in game-sports research, identifying tactical patterns in team handball. The third and last example shows how an artificial neural network can be used to predict competitive performance in swimming. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Artificial Neural Networks for Processing Graphs with Application to Image Understanding: A Survey

    NASA Astrophysics Data System (ADS)

    Bianchini, Monica; Scarselli, Franco

    In graphical pattern recognition, each data item is represented as an arrangement of elements that encodes both the properties of each element and the relations among them. Hence, patterns are modelled as labelled graphs where, in general, labels can be attached to both nodes and edges. Artificial neural networks able to process graphs are a powerful tool for addressing a great variety of real-world problems where the information is naturally organized in entities and relationships among entities; in fact, they have been widely used in computer vision, for instance in logo recognition, in similarity retrieval, and for object detection. In this chapter, we propose a survey of neural network models able to process structured information, with a particular focus on architectures tailored to image understanding applications. Starting from the original recursive model (RNNs), we subsequently present different ways to represent images (by trees, forests of trees, multiresolution trees, directed acyclic graphs with labelled edges, and general graphs) and, correspondingly, the neural network architectures appropriate to process such structures.

  11. Terminal attractors in neural networks

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.
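
The finite-time convergence that distinguishes a terminal attractor from a regular one can be seen in a standard one-dimensional example (an illustration, not taken from the paper itself):

```latex
% One-dimensional terminal attractor:
\dot{x} = -x^{1/3}, \qquad x(0) = x_0 > 0 .
% Separating variables and integrating gives
\tfrac{3}{2}\, x^{2/3} = \tfrac{3}{2}\, x_0^{2/3} - t ,
% so the trajectory reaches the fixed point x = 0 at the finite time
t_f = \tfrac{3}{2}\, x_0^{2/3} .
```

The right-hand side has unbounded derivative at x = 0 (d(-x^{1/3})/dx = -(1/3) x^{-2/3}), so the Lipschitz condition is violated there: uniqueness of solutions breaks down, x = 0 becomes a singular solution enveloping the regular ones, and the equilibrium is reached in finite time rather than only asymptotically, as the abstract describes.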

  12. Silicon synaptic transistor for hardware-based spiking neural network and neuromorphic system

    NASA Astrophysics Data System (ADS)

    Kim, Hyungjin; Hwang, Sungmin; Park, Jungjin; Park, Byung-Gook

    2017-10-01

    Brain-inspired neuromorphic systems have attracted much attention as new computing paradigms for power-efficient computation. Here, we report a silicon synaptic transistor with two electrically independent gates to realize a hardware-based neural network system without any switching components. The spike-timing-dependent plasticity characteristics of the synaptic devices are measured and analyzed. With the help of a device model based on the measured data, the pattern recognition capability of the hardware-based spiking neural network system is demonstrated using the Modified National Institute of Standards and Technology (MNIST) handwritten digit dataset. By comparing systems with and without the inhibitory synapse part, it is confirmed that inhibitory synapses are essential for obtaining high pattern-classification capability.

  13. Silicon synaptic transistor for hardware-based spiking neural network and neuromorphic system.

    PubMed

    Kim, Hyungjin; Hwang, Sungmin; Park, Jungjin; Park, Byung-Gook

    2017-10-06

    Brain-inspired neuromorphic systems have attracted much attention as new computing paradigms for power-efficient computation. Here, we report a silicon synaptic transistor with two electrically independent gates to realize a hardware-based neural network system without any switching components. The spike-timing-dependent plasticity characteristics of the synaptic devices are measured and analyzed. With the help of a device model based on the measured data, the pattern recognition capability of the hardware-based spiking neural network system is demonstrated using the Modified National Institute of Standards and Technology (MNIST) handwritten digit dataset. By comparing systems with and without the inhibitory synapse part, it is confirmed that inhibitory synapses are essential for obtaining high pattern-classification capability.

  14. Predicate calculus for an architecture of multiple neural networks

    NASA Astrophysics Data System (ADS)

    Consoli, Robert H.

    1990-08-01

    Future projects with neural networks will require multiple individual network components. Current efforts along these lines are ad hoc. This paper relates the neural network to a classical device and derives a multi-part architecture from that model. Further, it provides a Predicate Calculus variant for describing the location and nature of the trainings, and suggests Resolution Refutation as a method for determining the performance of the system as well as the location of needed trainings for specific proofs. Recently, investigators have reported architectures of multiple neural networks [1-4]. These efforts appear at an early stage in neural network investigations; they are characterized by architectures suggested directly by the problem space. Touretzky and Hinton suggest an architecture for processing logical statements [1]; the design of this architecture arises from the syntax of a restricted class of logical expressions and exhibits syntactic limitations. In similar fashion, multiple neural networks arise out of a control problem [2], from the sequence-learning problem [3], and from the domain of machine learning [4]. But a general theory of multiple neural devices is missing. More general attempts to relate single or multiple neural networks to classical computing devices are not common, although an attempt has been made to relate single neural devices to a Turing machine, and Sun et al. develop a multiple neural architecture that performs pattern classification.

  15. Resting-state hemodynamics are spatiotemporally coupled to synchronized and symmetric neural activity in excitatory neurons.

    PubMed

    Ma, Ying; Shaik, Mohammed A; Kozberg, Mariel G; Kim, Sharon H; Portes, Jacob P; Timerman, Dmitriy; Hillman, Elizabeth M C

    2016-12-27

    Brain hemodynamics serve as a proxy for neural activity in a range of noninvasive neuroimaging techniques including functional magnetic resonance imaging (fMRI). In resting-state fMRI, hemodynamic fluctuations have been found to exhibit patterns of bilateral synchrony, with correlated regions inferred to have functional connectivity. However, the relationship between resting-state hemodynamics and underlying neural activity has not been well established, making the neural underpinnings of functional connectivity networks unclear. In this study, neural activity and hemodynamics were recorded simultaneously over the bilateral cortex of awake and anesthetized Thy1-GCaMP mice using wide-field optical mapping. Neural activity was visualized via selective expression of the calcium-sensitive fluorophore GCaMP in layer 2/3 and 5 excitatory neurons. Characteristic patterns of resting-state hemodynamics were accompanied by more rapidly changing bilateral patterns of resting-state neural activity. Spatiotemporal hemodynamics could be modeled by convolving this neural activity with hemodynamic response functions derived through both deconvolution and gamma-variate fitting. Simultaneous imaging and electrophysiology confirmed that Thy1-GCaMP signals are well-predicted by multiunit activity. Neurovascular coupling between resting-state neural activity and hemodynamics was robust and fast in awake animals, whereas coupling in urethane-anesthetized animals was slower, and in some cases included lower-frequency (<0.04 Hz) hemodynamic fluctuations that were not well-predicted by local Thy1-GCaMP recordings. These results support that resting-state hemodynamics in the awake and anesthetized brain are coupled to underlying patterns of excitatory neural activity. The patterns of bilaterally-symmetric spontaneous neural activity revealed by wide-field Thy1-GCaMP imaging may depict the neural foundation of functional connectivity networks detected in resting-state fMRI.
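
The forward model in this record, hemodynamics predicted by convolving neural activity with a hemodynamic response function (HRF), can be sketched as follows; the gamma-variate parameters and toy event train are illustrative, not fitted values from the study:

```python
import numpy as np

dt = 0.1                              # seconds per sample
t = np.arange(0, 10, dt)
tau, alpha = 0.9, 3.0                 # gamma-variate shape parameters (illustrative)
hrf = (t / tau) ** alpha * np.exp(-t / tau)
hrf /= hrf.sum()                      # normalize to a unit-area kernel

# Toy "neural activity": brief events, including a pair of near-coincident ones.
neural = np.zeros(600)
neural[[50, 200, 201, 400]] = 1.0

# Predicted hemodynamic signal: the convolution is slower and smoother than
# the underlying neural trace, and its peak lags the neural events.
hemo = np.convolve(neural, hrf)[: len(neural)]
print(hemo.argmax() > 200)  # largest response follows the doubled event
```

The gamma-variate kernel here peaks a few seconds after each event, reproducing the basic observation that hemodynamic fluctuations are a delayed, low-pass-filtered readout of the faster neural activity.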

  16. Resting-state hemodynamics are spatiotemporally coupled to synchronized and symmetric neural activity in excitatory neurons

    PubMed Central

    Ma, Ying; Shaik, Mohammed A.; Kozberg, Mariel G.; Portes, Jacob P.; Timerman, Dmitriy

    2016-01-01

    Brain hemodynamics serve as a proxy for neural activity in a range of noninvasive neuroimaging techniques including functional magnetic resonance imaging (fMRI). In resting-state fMRI, hemodynamic fluctuations have been found to exhibit patterns of bilateral synchrony, with correlated regions inferred to have functional connectivity. However, the relationship between resting-state hemodynamics and underlying neural activity has not been well established, making the neural underpinnings of functional connectivity networks unclear. In this study, neural activity and hemodynamics were recorded simultaneously over the bilateral cortex of awake and anesthetized Thy1-GCaMP mice using wide-field optical mapping. Neural activity was visualized via selective expression of the calcium-sensitive fluorophore GCaMP in layer 2/3 and 5 excitatory neurons. Characteristic patterns of resting-state hemodynamics were accompanied by more rapidly changing bilateral patterns of resting-state neural activity. Spatiotemporal hemodynamics could be modeled by convolving this neural activity with hemodynamic response functions derived through both deconvolution and gamma-variate fitting. Simultaneous imaging and electrophysiology confirmed that Thy1-GCaMP signals are well-predicted by multiunit activity. Neurovascular coupling between resting-state neural activity and hemodynamics was robust and fast in awake animals, whereas coupling in urethane-anesthetized animals was slower, and in some cases included lower-frequency (<0.04 Hz) hemodynamic fluctuations that were not well-predicted by local Thy1-GCaMP recordings. These results support that resting-state hemodynamics in the awake and anesthetized brain are coupled to underlying patterns of excitatory neural activity. The patterns of bilaterally-symmetric spontaneous neural activity revealed by wide-field Thy1-GCaMP imaging may depict the neural foundation of functional connectivity networks detected in resting-state fMRI. PMID:27974609

  17. Fuzzy Logic Module of Convolutional Neural Network for Handwritten Digits Recognition

    NASA Astrophysics Data System (ADS)

    Popko, E. A.; Weinstein, I. A.

    2016-08-01

    Optical character recognition is one of the important issues in the field of pattern recognition. This paper presents a method for recognizing handwritten digits based on the modeling of convolutional neural network. The integrated fuzzy logic module based on a structural approach was developed. Used system architecture adjusted the output of the neural network to improve quality of symbol identification. It was shown that proposed algorithm was flexible and high recognition rate of 99.23% was achieved.

  18. Membership generation using multilayer neural network

    NASA Technical Reports Server (NTRS)

    Kim, Jaeseok

    1992-01-01

    There has been intensive research in neural network applications to pattern recognition problems. Particularly, the back-propagation network has attracted many researchers because of its outstanding performance in pattern recognition applications. In this section, we describe a new method to generate membership functions from training data using a multilayer neural network. The basic idea behind the approach is as follows. The output values of a sigmoid activation function of a neuron bear remarkable resemblance to membership values. Therefore, we can regard the sigmoid activation values as the membership values in fuzzy set theory. Thus, in order to generate class membership values, we first train a suitable multilayer network using a training algorithm such as the back-propagation algorithm. After the training procedure converges, the resulting network can be treated as a membership generation network, where the inputs are feature values and the outputs are membership values in the different classes. This method allows fairly complex membership functions to be generated because the network is highly nonlinear in general. Also, it is to be noted that the membership functions are generated from a classification point of view. For pattern recognition applications, this is highly desirable, although the membership values may not be indicative of the degree of typicality of a feature value in a particular class.
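A minimal sketch of the scheme described above, assuming a two-layer sigmoid network trained by batch backpropagation (the data, architecture, and hyperparameters are illustrative, not from the record): after training, the sigmoid output activations are read directly as class membership values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_membership_net(X, Y, hidden=8, epochs=2000, lr=1.0, seed=0):
    """Two-layer sigmoid network trained with batch backpropagation on a
    squared-error loss; hyperparameters are illustrative."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, Y.shape[1]))
    for _ in range(epochs):
        H = sigmoid(X @ W1)
        O = sigmoid(H @ W2)
        dO = (O - Y) * O * (1.0 - O)          # output-layer error signal
        dH = (dO @ W2.T) * H * (1.0 - H)      # backpropagated hidden error
        W2 -= lr * H.T @ dO / len(X)
        W1 -= lr * X.T @ dH / len(X)
    return W1, W2

def membership(X, W1, W2):
    """After training, the sigmoid outputs are read as membership values."""
    return sigmoid(sigmoid(X @ W1) @ W2)

# One feature, two overlapping classes ("low" near 0, "high" near 1)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 0.2, 50), rng.normal(1.0, 0.2, 50)])
X = np.stack([x, np.ones_like(x)], axis=1)    # feature plus a bias input
Y = np.zeros((100, 2))
Y[:50, 0] = 1.0
Y[50:, 1] = 1.0
W1, W2 = train_membership_net(X, Y)
m = membership(np.array([[0.0, 1.0], [1.0, 1.0]]), W1, W2)
```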

  19. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule.

    PubMed

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin

    2015-11-01

In this paper, the generation of a multi-clustered structure in self-organized neural networks with different neuronal firing patterns, i.e., bursting or spiking, is investigated. An initially all-to-all-connected spiking or bursting neural network can self-organize into a clustered structure through symmetric spike-timing-dependent plasticity learning. However, the time consumed by this clustering procedure is much shorter for the burst-based self-organized network (BSON) than for the spike-based self-organized network (SSON). Our results show that the BSON network has more pronounced small-world properties, i.e., a higher clustering coefficient and a shorter shortest path length, than the SSON network. The larger structure entropy and activity entropy of the BSON network also demonstrate that it has higher topological complexity and dynamical diversity, which benefits information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and that the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by the network's improved performance on stochastic resonance. We therefore believe that the multi-clustered neural network that self-organizes from bursting dynamics is highly efficient in information processing.
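The symmetric STDP rule referred to above can be sketched with a kernel that depends only on the absolute spike-time difference; the Mexican-hat shape and all constants below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def symmetric_stdp(dt_ms, a_p=1.0, tau_p=10.0, a_d=0.4, tau_d=40.0):
    """Symmetric (Mexican-hat-like) STDP kernel: the weight change depends
    only on |dt|, potentiating near-coincident spikes and depressing distant
    ones. Amplitudes and time constants are illustrative assumptions."""
    dt_ms = np.abs(dt_ms)
    return a_p * np.exp(-dt_ms / tau_p) - a_d * np.exp(-dt_ms / tau_d)

# Two neurons that repeatedly fire close together strengthen their synapse,
# regardless of which one fires first
w = 0.5
for t_pre, t_post in [(10.0, 12.0), (30.0, 29.0), (50.0, 51.5)]:
    w += 0.01 * symmetric_stdp(t_post - t_pre)
```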

  20. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng

In this paper, the generation of a multi-clustered structure in self-organized neural networks with different neuronal firing patterns, i.e., bursting or spiking, is investigated. An initially all-to-all-connected spiking or bursting neural network can self-organize into a clustered structure through symmetric spike-timing-dependent plasticity learning. However, the time consumed by this clustering procedure is much shorter for the burst-based self-organized network (BSON) than for the spike-based self-organized network (SSON). Our results show that the BSON network has more pronounced small-world properties, i.e., a higher clustering coefficient and a shorter shortest path length, than the SSON network. The larger structure entropy and activity entropy of the BSON network also demonstrate that it has higher topological complexity and dynamical diversity, which benefits information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and that the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by the network's improved performance on stochastic resonance. We therefore believe that the multi-clustered neural network that self-organizes from bursting dynamics is highly efficient in information processing.

  1. Development of dielectrophoresis MEMS device for PC12 cell patterning to elucidate nerve-network generation

    NASA Astrophysics Data System (ADS)

    Nakamachi, Eiji; Koga, Hirotaka; Morita, Yusuke; Yamamoto, Koji; Sakamoto, Hidetoshi

    2018-01-01

We developed a PC12 cell trapping and patterning device by combining the dielectrophoresis (DEP) methodology with micro electro mechanical systems (MEMS) technology for time-lapse observation of morphological changes in a nerve network, to elucidate the generation mechanism of the neural network. We succeeded in generating a neural network consisting of cell bodies, axons and dendrites by using tetragonal and hexagonal cell patterning. Further, time-lapse observations were carried out to evaluate the axonal extension rate. The axon extended in the channel and reached the target cell body. We found that the shorter the distance between PC12 cells, the shorter the axonal connection time, in both tetragonal and hexagonal structures. After 48 hours of culture, a maximum network-formation success rate of 85% was achieved for the tetragonal structure with 40 μm spacing.

  2. Relationship between isoseismal area and magnitude of historical earthquakes in Greece by a hybrid fuzzy neural network method

    NASA Astrophysics Data System (ADS)

    Tselentis, G.-A.; Sokos, E.

    2012-01-01

In this paper we suggest the use of diffusion neural networks (neural networks with intrinsic fuzzy logic abilities) to assess the relationship between isoseismal area and earthquake magnitude for the region of Greece. It is of particular importance to study historical earthquakes, for which we often have macroseismic information in the form of isoseisms, but such data are statistically too incomplete to assess magnitudes from an isoseismal area or to train conventional artificial neural networks for magnitude estimation. Fuzzy relationships are developed and used to train a feed-forward neural network with a back-propagation algorithm to obtain the final relationships. Seismic intensity data from 24 earthquakes in Greece have been used. Special attention is paid to the incompleteness of, and contradictory patterns in, scanty historical earthquake records. The results show that the proposed processing model is very effective, performing better than classical artificial neural networks, since the magnitude-macroseismic intensity target function is strongly nonlinear and in most cases the macroseismic datasets are very small.

  3. Open quantum generalisation of Hopfield neural networks

    NASA Astrophysics Data System (ADS)

    Rotondo, P.; Marcuzzi, M.; Garrahan, J. P.; Lesanovsky, I.; Müller, M.

    2018-03-01

    We propose a new framework to understand how quantum effects may impact on the dynamics of neural networks. We implement the dynamics of neural networks in terms of Markovian open quantum systems, which allows us to treat thermal and quantum coherent effects on the same footing. In particular, we propose an open quantum generalisation of the Hopfield neural network, the simplest toy model of associative memory. We determine its phase diagram and show that quantum fluctuations give rise to a qualitatively new non-equilibrium phase. This novel phase is characterised by limit cycles corresponding to high-dimensional stationary manifolds that may be regarded as a generalisation of storage patterns to the quantum domain.
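For reference, the classical Hopfield model being generalised here works as follows: patterns are stored with a Hebbian rule and retrieved by iterating a sign update from a corrupted cue. A minimal classical (non-quantum) sketch, with pattern sizes chosen for illustration:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian storage: W = (1/N) * sum_mu p_mu p_mu^T with zero diagonal."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def retrieve(w, state, steps=20):
    """Deterministic synchronous updates until a fixed point is reached."""
    for _ in range(steps):
        new = np.sign(w @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, 64))   # 3 stored patterns, 64 units
w = hebbian_weights(patterns)

noisy = patterns[0].copy()
noisy[:6] *= -1                                # corrupt 6 of the 64 bits
recalled = retrieve(w, noisy)
```

At this low storage load, the network relaxes from the corrupted cue back to the stored pattern.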

  4. Recognition of Telugu characters using neural networks.

    PubMed

    Sukhaswami, M B; Seetharamulu, P; Pujari, A K

    1995-09-01

    The aim of the present work is to recognize printed and handwritten Telugu characters using artificial neural networks (ANNs). Earlier work on recognition of Telugu characters has been done using conventional pattern recognition techniques. We make an initial attempt here of using neural networks for recognition with the aim of improving upon earlier methods which do not perform effectively in the presence of noise and distortion in the characters. The Hopfield model of neural network working as an associative memory is chosen for recognition purposes initially. Due to limitation in the capacity of the Hopfield neural network, we propose a new scheme named here as the Multiple Neural Network Associative Memory (MNNAM). The limitation in storage capacity has been overcome by combining multiple neural networks which work in parallel. It is also demonstrated that the Hopfield network is suitable for recognizing noisy printed characters as well as handwritten characters written by different "hands" in a variety of styles. Detailed experiments have been carried out using several learning strategies and results are reported. It is shown here that satisfactory recognition is possible using the proposed strategy. A detailed preprocessing scheme of the Telugu characters from digitized documents is also described.

  5. Modeling fluctuations in default-mode brain network using a spiking neural network.

    PubMed

    Yamanishi, Teruya; Liu, Jian-Qin; Nishimura, Haruhiko

    2012-08-01

    Recently, numerous attempts have been made to understand the dynamic behavior of complex brain systems using neural network models. The fluctuations in blood-oxygen-level-dependent (BOLD) brain signals at less than 0.1 Hz have been observed by functional magnetic resonance imaging (fMRI) for subjects in a resting state. This phenomenon is referred to as a "default-mode brain network." In this study, we model the default-mode brain network by functionally connecting neural communities composed of spiking neurons in a complex network. Through computational simulations of the model, including transmission delays and complex connectivity, the network dynamics of the neural system and its behavior are discussed. The results show that the power spectrum of the modeled fluctuations in the neuron firing patterns is consistent with the default-mode brain network's BOLD signals when transmission delays, a characteristic property of the brain, have finite values in a given range.

  6. Weaving and neural complexity in symmetric quantum states

    NASA Astrophysics Data System (ADS)

    Susa, Cristian E.; Girolami, Davide

    2018-04-01

    We study the behaviour of two different measures of the complexity of multipartite correlation patterns, weaving and neural complexity, for symmetric quantum states. Weaving is the weighted sum of genuine multipartite correlations of any order, where the weights are proportional to the correlation order. The neural complexity, originally introduced to characterize correlation patterns in classical neural networks, is here extended to the quantum scenario. We derive closed formulas of the two quantities for GHZ states mixed with white noise.

  7. Competitive Learning Neural Network Ensemble Weighted by Predicted Performance

    ERIC Educational Resources Information Center

    Ye, Qiang

    2010-01-01

    Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…

  8. Non-Invasive Detection of CH-46 AFT Gearbox Faults Using Digital Pattern Recognition and Classification Techniques

    DTIC Science & Technology

    1999-05-05

processing and artificial neural network (ANN) technology. The detector will classify incipient faults based on real-time vibration data taken from the...provided the vibration data necessary to develop and test the feasibility of an artificial neural network for fault classification. This research

  9. Morphological self-organizing feature map neural network with applications to automatic target recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Shijun; Jing, Zhongliang; Li, Jianxun

    2005-01-01

The rotation-invariant feature of the target is obtained using the multi-direction feature extraction property of the steerable filter. Combining the morphological top-hat transform with the self-organizing feature map neural network, an adaptive topological region is selected, and shrinkage of the topological region is achieved using the erosion operation. The steerable-filter-based morphological self-organizing feature map neural network is applied to automatic target recognition of binary standard patterns and real-world infrared sequence images. Compared with the Hamming network and morphological shared-weight networks, the proposed method achieves a higher correct recognition rate, robust adaptability, quick training, and better generalization.

  10. Dynamic Neural Networks Supporting Memory Retrieval

    PubMed Central

    St. Jacques, Peggy L.; Kragel, Philip A.; Rubin, David C.

    2011-01-01

    How do separate neural networks interact to support complex cognitive processes such as remembrance of the personal past? Autobiographical memory (AM) retrieval recruits a consistent pattern of activation that potentially comprises multiple neural networks. However, it is unclear how such large-scale neural networks interact and are modulated by properties of the memory retrieval process. In the present functional MRI (fMRI) study, we combined independent component analysis (ICA) and dynamic causal modeling (DCM) to understand the neural networks supporting AM retrieval. ICA revealed four task-related components consistent with the previous literature: 1) Medial Prefrontal Cortex (PFC) Network, associated with self-referential processes, 2) Medial Temporal Lobe (MTL) Network, associated with memory, 3) Frontoparietal Network, associated with strategic search, and 4) Cingulooperculum Network, associated with goal maintenance. DCM analysis revealed that the medial PFC network drove activation within the system, consistent with the importance of this network to AM retrieval. Additionally, memory accessibility and recollection uniquely altered connectivity between these neural networks. Recollection modulated the influence of the medial PFC on the MTL network during elaboration, suggesting that greater connectivity among subsystems of the default network supports greater re-experience. In contrast, memory accessibility modulated the influence of frontoparietal and MTL networks on the medial PFC network, suggesting that ease of retrieval involves greater fluency among the multiple networks contributing to AM. These results show the integration between neural networks supporting AM retrieval and the modulation of network connectivity by behavior. PMID:21550407

  11. In-vivo determination of chewing patterns using FBG and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Pegorini, Vinicius; Zen Karam, Leandro; Rocha Pitta, Christiano S.; Ribeiro, Richardson; Simioni Assmann, Tangriani; Cardozo da Silva, Jean Carlos; Bertotti, Fábio L.; Kalinowski, Hypolito J.; Cardoso, Rafael

    2015-09-01

    This paper reports the process of pattern classification of the chewing process of ruminants. We propose a simplified signal processing scheme for optical fiber Bragg grating (FBG) sensors based on machine learning techniques. The FBG sensors measure the biomechanical forces during jaw movements and an artificial neural network is responsible for the classification of the associated chewing pattern. In this study, three patterns associated to dietary supplement, hay and ryegrass were considered. Additionally, two other important events for ingestive behavior studies were monitored, rumination and idle period. Experimental results show that the proposed approach for pattern classification has been capable of differentiating the materials involved in the chewing process with a small classification error.

  12. Modular representation of layered neural networks.

    PubMed

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.
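The core step of detecting "communities of units with similar connection patterns" can be approximated by clustering hidden units on their incoming-weight vectors. The k-means sketch below is a simplified stand-in for the paper's network-analysis method; the toy weight matrix is hypothetical.

```python
import numpy as np

def unit_communities(w_in, n_clusters=2, iters=20):
    """Group hidden units whose incoming-weight vectors are similar
    (a small k-means sketch standing in for the paper's network analysis)."""
    # Farthest-point initialization keeps the sketch deterministic
    centers = [w_in[0]]
    for _ in range(1, n_clusters):
        d = np.min([((w_in - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(w_in[d.argmax()])
    centers = np.array(centers)
    labels = np.zeros(len(w_in), dtype=int)
    for _ in range(iters):
        d = ((w_in[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = w_in[labels == k].mean(0)
    return labels

# Toy layer: units 0-2 read inputs 0-1, units 3-5 read inputs 2-3
w_in = np.array([[1.0, 1.0, 0, 0], [0.9, 1.1, 0, 0], [1.2, 0.8, 0, 0],
                 [0, 0, 1.0, 1.0], [0, 0, 1.1, 0.9], [0, 0, 0.8, 1.2]])
labels = unit_communities(w_in)
```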

  13. Cloud Classification in Polar and Desert Regions and Smoke Classification from Biomass Burning Using a Hierarchical Neural Network

    NASA Technical Reports Server (NTRS)

    Alexander, June; Corwin, Edward; Lloyd, David; Logar, Antonette; Welch, Ronald

    1996-01-01

This research focuses on a new neural network scene classification technique. The task is to identify scene elements in Advanced Very High Resolution Radiometer (AVHRR) data from three scene types: polar, desert and smoke from biomass burning in South America (smoke). The ultimate goal of this research is to design and implement a computer system which will identify the clouds present on a whole-Earth satellite view as a means of tracking global climate changes. Previous research has reported results for rule-based systems (Tovinkere et al. 1992, 1993), for standard back propagation (Watters et al. 1993) and for a hierarchical approach (Corwin et al. 1994) for polar data. This research uses a hierarchical neural network with don't care conditions and applies this technique to complex scenes. A hierarchical neural network consists of a switching network and a collection of leaf networks. The idea of the hierarchical neural network is that it is a simpler task to classify a certain pattern from a subset of patterns than it is to classify a pattern from the entire set. Therefore, the first task is to cluster the classes into groups. The switching, or decision, network performs an initial classification by selecting a leaf network. The leaf networks contain a reduced set of similar classes, and it is in the various leaf networks that the actual classification takes place. The grouping of classes in the various leaf networks is determined by applying an iterative clustering algorithm. Several clustering algorithms were investigated, but due to the size of the data sets, the exhaustive search algorithms were eliminated. A heuristic approach using a confusion matrix from a lightly trained neural network provided the basis for the clustering algorithm. Once the clusters have been identified, the hierarchical network can be trained.
The approach of using don't care nodes results from the difficulty in generating extremely complex surfaces in order to separate one class from all of the others. This approach finds pairwise separating surfaces and forms the more complex separating surface from combinations of simpler surfaces. This technique both reduces training time and improves accuracy over the previously reported results. Accuracies of 97.47%, 95.70%, and 99.05% were achieved for the polar, desert and smoke data sets.
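The switch-plus-leaves arrangement described above can be sketched with tiny stand-in classifiers (nearest-centroid here, in place of trained neural networks); the toy data and class grouping are hypothetical.

```python
import numpy as np

class Centroid:
    """Nearest-centroid classifier, a tiny stand-in for each neural network."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(0) for c in self.classes])
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.mu[None]) ** 2).sum(-1)
        return self.classes[d.argmin(1)]

class HierarchicalClassifier:
    """A switching stage routes each sample to a leaf network; each leaf
    separates only the classes in its cluster."""
    def __init__(self, groups):
        self.groups = groups                                  # leaf -> classes
        self.cls2grp = {c: g for g, cs in groups.items() for c in cs}
    def fit(self, X, y):
        yg = np.array([self.cls2grp[c] for c in y])
        self.switch = Centroid().fit(X, yg)                   # decision network
        self.leaves = {g: Centroid().fit(X[yg == g], y[yg == g])
                       for g in self.groups}
        return self
    def predict(self, X):
        g = self.switch.predict(X)
        out = np.empty(len(X), dtype=int)
        for gid, leaf in self.leaves.items():
            mask = g == gid
            if mask.any():
                out[mask] = leaf.predict(X[mask])
        return out

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(c, 0.1, size=(20, 2))
                    for c in [(0, 0), (0, 2), (5, 0), (5, 2)]])
y = np.repeat([0, 1, 2, 3], 20)
model = HierarchicalClassifier({0: [0, 1], 1: [2, 3]}).fit(X, y)
accuracy = float((model.predict(X) == y).mean())
```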

  14. Sign Language Recognition System using Neural Network for Digital Hardware Implementation

    NASA Astrophysics Data System (ADS)

    Vargas, Lorena P.; Barba, Leiner; Torres, C. O.; Mattos, L.

    2011-01-01

This work presents an image pattern recognition system that uses a neural network to identify sign language for deaf people. The system has several stored images showing the specific symbols of this language, which are employed to teach a multilayer neural network using a back-propagation algorithm. Initially, the images are processed to adapt them and to improve the discrimination performance of the network; this processing includes filtering, noise reduction and elimination algorithms, as well as edge detection. The system is evaluated using signs whose representation does not include movement.

  15. Effect of synapse dilution on the memory retrieval in structured attractor neural networks

    NASA Astrophysics Data System (ADS)

    Brunel, N.

    1993-08-01

    We investigate a simple model of structured attractor neural network (ANN). In this network a module codes for the category of the stored information, while another group of neurons codes for the remaining information. The probability distribution of stabilities of the patterns and the prototypes of the categories are calculated, for two different synaptic structures. The stability of the prototypes is shown to increase when the fraction of neurons coding for the category goes down. Then the effect of synapse destruction on the retrieval is studied in two opposite situations : first analytically in sparsely connected networks, then numerically in completely connected ones. In both cases the behaviour of the structured network and that of the usual homogeneous networks are compared. When lesions increase, two transitions are shown to appear in the behaviour of the structured network when one of the patterns is presented to the network. After the first transition the network recognizes the category of the pattern but not the individual pattern. After the second transition the network recognizes nothing. These effects are similar to syndromes caused by lesions in the central visual system, namely prosopagnosia and agnosia. In both types of networks (structured or homogeneous) the stability of the prototype is greater than the stability of individual patterns, however the first transition, for completely connected networks, occurs only when the network is structured.

  16. Real-time biomimetic Central Pattern Generators in an FPGA for hybrid experiments

    PubMed Central

    Ambroise, Matthieu; Levi, Timothée; Joucla, Sébastien; Yvert, Blaise; Saïghi, Sylvain

    2013-01-01

This investigation of the leech heartbeat neural network system led to the development of low-resource, real-time, biomimetic digital hardware for use in hybrid experiments. The leech heartbeat neural network is one of the simplest central pattern generators (CPGs). In biology, CPGs provide the rhythmic bursts of spikes that form the basis for all muscle contraction orders (heartbeat) and locomotion (walking, running, etc.). The leech neural network system was previously investigated and this CPG formalized in the Hodgkin–Huxley neural model (HH), the most complex devised to date. However, the resources required for a neural model are proportional to its complexity. In response to this issue, this article describes a biomimetic implementation of a network of 240 CPGs in an FPGA (Field Programmable Gate Array) using a simple model (Izhikevich), and proposes a new synapse model: the activity-dependent depression synapse. The network implementation architecture operates on a single computation core. This digital system works in real time, requires few resources, and exhibits the same bursting activity as the complex model. The implementation of this CPG was initially validated by comparison with a simulation of the complex model. Its activity was then matched with pharmacological data from rat spinal cord activity. This digital system opens the way for future hybrid experiments and represents an important step toward hybridization of biological tissue and artificial neural networks. This CPG network is also likely to be useful for mimicking the locomotion activity of various animals and for developing hybrid experiments for neuroprosthesis development. PMID:24319408
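The simple neuron model used in this record (Izhikevich) can produce the bursting activity that CPGs rely on with just two state variables. A minimal Euler-integration sketch, using a standard bursting/chattering parameter regime rather than the article's exact settings:

```python
import numpy as np

def izhikevich_spikes(a=0.02, b=0.2, c=-50.0, d=2.0, I=10.0, T=500.0, dt=0.25):
    """Euler simulation of an Izhikevich neuron; (c=-50, d=2) is a standard
    chattering/bursting regime, used here purely for illustration."""
    v = -65.0
    u = b * v
    spikes = []
    for i in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike: record the time, then reset
            spikes.append(i * dt)
            v = c
            u += d
    return spikes

spikes = izhikevich_spikes()          # grouped bursts of spike times (ms)
```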

  17. Neural networks and applications tutorial

    NASA Astrophysics Data System (ADS)

    Guyon, I.

    1991-09-01

The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances of computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures that were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.
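The adaptive artificial neurons of lecture 2 can be illustrated with the classic perceptron learning rule on a linearly separable problem (the toy task below is an illustration, not taken from the tutorial):

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    """Rosenblatt's perceptron rule: w += lr * (target - prediction) * input."""
    Xb = np.hstack([X, np.ones((len(X), 1))])     # append a bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for x, t in zip(Xb, y):
            pred = 1 if w @ x > 0 else 0
            w += lr * (t - pred) * x
    return w

# Linearly separable toy problem: logical OR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1])
w = perceptron_train(X, y)
preds = [1 if w @ np.append(x, 1.0) > 0 else 0 for x in X]
```

For separable data the perceptron convergence theorem guarantees this loop reaches zero errors in finitely many updates.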

  18. Smart Sensing and Recognition Based on Models of Neural Networks

    DTIC Science & Technology

    1990-11-15

AD-A230 701. University of Pennsylvania, Philadelphia, PA 19104-6390. SMART SENSING AND RECOGNITION BASED ON MODELS OF NEURAL NETWORKS. Keywords: neural networks, photonic implementations, nonlinear dynamical signal processing. Abstract (fragment): ...does not develop in isolation but in synergism with sensory organs and their feature-forming networks. This means that development of artificial pattern

  19. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    PubMed Central

    Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum in both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model for facilitating data-intensive applications. Three data-intensive scenarios are considered in the parallelization process, in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated on an experimental MapReduce computer cluster in terms of accuracy in classification and efficiency in computation. PMID:26681933
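The map/reduce split described above can be sketched without any cluster: the map phase computes a local gradient on each data shard, and the reduce phase averages them into one update. The linear model and all parameters below are illustrative stand-ins for the paper's ANN training.

```python
import numpy as np

def local_gradient(w, X, y):
    """Map phase: squared-error gradient for a linear model on one data shard."""
    err = X @ w - y
    return X.T @ err / len(X)

def parallel_step(w, shards, lr=0.1):
    """Reduce phase: average the per-shard gradients, then apply one update."""
    grads = [local_gradient(w, X, y) for X, y in shards]   # map (parallelizable)
    return w - lr * np.mean(grads, axis=0)                 # reduce

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
X_all = rng.standard_normal((300, 2))
y_all = X_all @ true_w
shards = [(X_all[i::3], y_all[i::3]) for i in range(3)]    # three "mappers"

w = np.zeros(2)
for _ in range(200):
    w = parallel_step(w, shards)
```

Because the shards are equal-sized, averaging the shard gradients equals the full-batch gradient, so the parallel loop converges to the same solution as a single machine.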

  20. Detection of pseudosinusoidal epileptic seizure segments in the neonatal EEG by cascading a rule-based algorithm with a neural network.

    PubMed

    Karayiannis, Nicolaos B; Mukherjee, Amit; Glover, John R; Ktonas, Periklis Y; Frost, James D; Hrachovy, Richard A; Mizrahi, Eli M

    2006-04-01

    This paper presents an approach to detect epileptic seizure segments in the neonatal electroencephalogram (EEG) by characterizing the spectral features of the EEG waveform using a rule-based algorithm cascaded with a neural network. A rule-based algorithm screens out short segments of pseudosinusoidal EEG patterns as epileptic based on features in the power spectrum. The output of the rule-based algorithm is used to train and compare the performance of conventional feedforward neural networks and quantum neural networks. The results indicate that the trained neural networks, cascaded with the rule-based algorithm, improved the performance of the rule-based algorithm acting by itself. The evaluation of the proposed cascaded scheme for the detection of pseudosinusoidal seizure segments reveals its potential as a building block of the automated seizure detection system under development.
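The rule-based screening stage can be sketched as a spectral test: pseudosinusoidal segments concentrate power in one narrow band, and only segments passing the rule would be forwarded to the neural network stage. The feature and threshold below are illustrative assumptions, not the published rules.

```python
import numpy as np

def dominant_band_ratio(segment):
    """Fraction of spectral power in the dominant frequency bin (excluding DC).
    Pseudosinusoidal activity concentrates power in one narrow band."""
    p = np.abs(np.fft.rfft(segment)) ** 2
    p[0] = 0.0                        # ignore the DC component
    return p.max() / p.sum()

def rule_stage(segment, threshold=0.5):
    """Stage 1: a cheap spectral rule screens candidate seizure segments."""
    return dominant_band_ratio(segment) >= threshold

fs = 32.0
t = np.arange(0, 4, 1 / fs)           # a 4-second segment
rng = np.random.default_rng(0)
sinusoidal = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(t.size)
noise = rng.standard_normal(t.size)

# Only segments passing the rule would be passed on to the neural network
candidates = [s for s in (sinusoidal, noise) if rule_stage(s)]
```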

  1. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    PubMed

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum in both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model for facilitating data-intensive applications. Three data-intensive scenarios are considered in the parallelization process, in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated on an experimental MapReduce computer cluster in terms of accuracy in classification and efficiency in computation.

  2. Real-Time Adaptive Color Segmentation by Neural Networks

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2004-01-01

    Artificial neural networks that would utilize the cascade error projection (CEP) algorithm have been proposed as means of autonomous, real-time, adaptive color segmentation of images that change with time. In the original intended application, such a neural network would be used to analyze digitized color video images of terrain on a remote planet as viewed from an uninhabited spacecraft approaching the planet. During descent toward the surface of the planet, information on the segmentation of the images into differently colored areas would be updated adaptively in real time to capture changes in contrast, brightness, and resolution, all in an effort to identify a safe and scientifically productive landing site and provide control feedback to steer the spacecraft toward that site. Potential terrestrial applications include monitoring images of crops to detect insect invasions and monitoring of buildings and other facilities to detect intruders. The CEP algorithm is reliable and is well suited to implementation in very-large-scale integrated (VLSI) circuitry. It was chosen over other neural-network learning algorithms because it is better suited to realtime learning: It provides a self-evolving neural-network structure, requires fewer iterations to converge and is more tolerant to low resolution (that is, fewer bits) in the quantization of neural-network synaptic weights. Consequently, a CEP neural network learns relatively quickly, and the circuitry needed to implement it is relatively simple. Like other neural networks, a CEP neural network includes an input layer, hidden units, and output units (see figure). As in other neural networks, a CEP network is presented with a succession of input training patterns, giving rise to a set of outputs that are compared with the desired outputs. Also as in other neural networks, the synaptic weights are updated iteratively in an effort to bring the outputs closer to target values. 
A distinctive feature of the CEP neural network and algorithm is that each update of synaptic weights takes place in conjunction with the addition of another hidden unit, which then remains in place as still other hidden units are added on subsequent iterations. For a given training pattern, each synaptic weight connecting the inputs and the previously added hidden units to the newly added hidden unit is updated by an amount proportional to the partial derivative of a quadratic error function with respect to that weight. The synaptic weight between the newly added hidden unit and each output unit is given by a more complex function that involves the errors between the outputs and their target values, the transfer functions (hyperbolic tangents) of the neural units, and the derivatives of the transfer functions.
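The constructive structure described above can be sketched in Python. This is an illustrative simplification only, not the CEP algorithm itself: here each new hidden unit's incoming weights are drawn at random and frozen, and only the output weights are trained by gradient descent, whereas CEP also trains the new unit's incoming weights and uses a closed-form output-weight expression. All names and parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class CascadeNet:
    """Constructive network in the spirit of CEP: each step adds one hidden
    unit wired to the inputs, the bias, and all previously added hidden
    units, then freezes its incoming weights."""
    def __init__(self, n_in, n_out):
        self.n_in = n_in
        self.hidden = []                                  # frozen incoming weights
        self.W_out = rng.normal(0.0, 0.1, (n_out, n_in + 1))

    def _features(self, x):
        feats = list(x) + [1.0]                           # inputs plus bias
        for w in self.hidden:                             # evaluate units in order
            feats.append(float(np.tanh(np.dot(w, feats))))
        return np.array(feats)

    def forward(self, x):
        return np.tanh(self.W_out @ self._features(x))

    def add_hidden_unit(self, X, Y, lr=0.05, epochs=200):
        # new unit sees inputs, bias, and all earlier hidden units
        w_new = rng.normal(0.0, 1.0, self.n_in + 1 + len(self.hidden))
        self.hidden.append(w_new)                         # frozen after creation
        self.W_out = np.hstack([self.W_out,
                                rng.normal(0.0, 0.1, (self.W_out.shape[0], 1))])
        for _ in range(epochs):                           # gradient descent on the
            for x, y in zip(X, Y):                        # output weights only
                f = self._features(x)
                out = np.tanh(self.W_out @ f)
                delta = (out - y) * (1.0 - out ** 2)      # quadratic-error gradient
                self.W_out -= lr * np.outer(delta, f)

X = [np.array(p, float) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
Y = [np.array([t]) for t in (-1.0, 1.0, 1.0, -1.0)]       # XOR in +/-1 coding
net = CascadeNet(2, 1)
for _ in range(3):
    net.add_hidden_unit(X, Y)                             # self-evolving structure
```

The point of the sketch is the self-evolving topology: the network grows one hidden unit per training round, each later unit receiving the earlier units' outputs as extra inputs.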

  3. Application of Artificial Neural Network to Predict the use of Runway at Juanda International Airport

    NASA Astrophysics Data System (ADS)

    Putra, J. C. P.; Safrilah

    2017-06-01

    Artificial neural network approaches are useful for solving many complicated problems in areas such as engineering, medicine, business, and manufacturing. This paper presents an application of an artificial neural network to predict runway capacity at Juanda International Airport. A multi-layer perceptron trained with backpropagation is adopted in this research to learn the pattern of runway use at Juanda International Airport. The results indicate that the network successfully recognizes the pattern of runway use in the training data, whereas the testing data indicate otherwise. It is concluded that the uniformity of the data and the network architecture are the critical factors determining the accuracy of the prediction results.

  4. A neural network approach for determining gait modifications to reduce the contact force in knee joint implant.

    PubMed

    Ardestani, Marzieh Mostafavizadeh; Chen, Zhenxian; Wang, Ling; Lian, Qin; Liu, Yaxiong; He, Jiankang; Li, Dichen; Jin, Zhongmin

    2014-10-01

    There is a growing interest in non-surgical gait rehabilitation treatments to reduce the loading in the knee joint. In particular, synergetic kinematic changes required for joint offloading should be determined individually for each subject. Previous studies of gait rehabilitation design have typically relied on a "trial-and-error" approach using multi-body dynamic (MBD) analysis. However, MBD analysis is fairly time-demanding, which prevents it from being used iteratively for each subject. This study employed an artificial neural network to develop a cost-effective computational framework for designing gait rehabilitation patterns. A feed-forward artificial neural network (FFANN) was trained on a number of experimental gait trials obtained from the literature. The trained network was then used to calculate the appropriate kinematic waveforms (output) needed to achieve desired knee joint loading patterns (input). An auxiliary neural network was also developed to update the ground reaction force and moment profiles with respect to the predicted kinematic waveforms. The feasibility and efficiency of the predicted kinematic patterns were then evaluated through MBD analysis. Results showed that FFANN-based predicted kinematics could effectively decrease the total knee joint reaction forces. Peak values of the resultant knee joint forces, with respect to the bodyweight (BW), were reduced by 20% BW and 25% BW in the midstance and the terminal stance phases. Impulse values of the knee joint loading patterns were also decreased by 17% BW*s and 24% BW*s in the corresponding phases. The FFANN-based framework suggested a cost-effective forward solution which directly calculates the kinematic variations needed to implement a given desired knee joint loading pattern. It is therefore expected that this approach provides potential advantages and further insights into knee rehabilitation designs. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  5. Network architectures and circuit function: testing alternative hypotheses in multifunctional networks.

    PubMed

    Leonard, J L

    2000-05-01

    Understanding how species-typical movement patterns are organized in the nervous system is a central question in neurobiology. The current explanations involve 'alphabet' models in which an individual neuron may participate in the circuit for several behaviors, but each behavior is specified by a specific neural circuit. However, not all of the well-studied model systems fit the 'alphabet' model. The 'equation' model provides an alternative possibility, whereby a system of parallel motor neurons, each with a unique (but overlapping) field of innervation, can account for the production of stereotyped behavior patterns by variable circuits. That is, it is possible for such patterns to arise as emergent properties of a generalized neural network in the absence of feedback, a simple version of a 'self-organizing' behavioral system. Comparison of systems of identified neurons suggests that the 'alphabet' model may account for most observations where CPGs act to organize motor patterns. Other well-known model systems, involving architectures corresponding to feed-forward neural networks with a hidden layer, may organize patterned behavior in a manner consistent with the 'equation' model. Such architectures are found in the Mauthner and reticulospinal circuits, 'escape' locomotion in cockroaches, CNS control of Aplysia gill, and may also be important in the coordination of sensory information and motor systems in insect mushroom bodies and the vertebrate hippocampus. The hidden layer of such networks may serve as an 'internal representation' of the behavioral state and/or body position of the animal, allowing the animal to fine-tune oriented, or particularly context-sensitive, movements to the prevalent conditions. Experiments designed to distinguish between the two models in cases where they make mutually exclusive predictions provide an opportunity to elucidate the neural mechanisms by which behavior is organized in vivo and in vitro. Copyright 2000 S. Karger AG, Basel

  6. A System for Video Surveillance and Monitoring CMU VSAM Final Report

    DTIC Science & Technology

    1999-11-30

    motion-based skeletonization, neural network, spatio-temporal salience patterns inside image chips, spurious motion rejection, model-based... network of sensors with respect to the model coordinate system, computation of 3D geolocation estimates, and graphical display of object hypotheses...rithms have been developed. The first uses view-dependent visual properties to train a neural network classifier to recognize four classes: single

  7. Optical implementation of a feature-based neural network with application to automatic target recognition

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Stoner, William W.

    1993-01-01

    An optical neural network based on the neocognitron paradigm is introduced. A novel aspect of the architecture design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by feeding back the output of the feature correlator iteratively to the input spatial light modulator and by updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intraclass fault tolerance and interclass discrimination is achieved. A detailed system description is provided. An experimental demonstration of a two-layer neural network for space-object discrimination is also presented.

  8. Automatic target recognition using a feature-based optical neural network

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin

    1992-01-01

    An optical neural network based upon the Neocognitron paradigm (K. Fukushima et al. 1983) is introduced. A novel aspect of the architectural design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator and updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intra-class fault tolerance and inter-class discrimination is achieved. A detailed system description is provided. Experimental demonstration of a two-layer neural network for space-object discrimination is also presented.

  9. Probabilistic and Other Neural Nets in Multi-Hole Probe Calibration and Flow Angularity Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Baskaran, Subbiah; Ramachandran, Narayanan; Noever, David

    1998-01-01

    The use of probabilistic (PNN) and multilayer feed-forward (MLFNN) neural networks is investigated for calibration of multi-hole pressure probes and the prediction of associated flow angularity patterns in test flow fields. Both types of networks are studied in detail for their calibration and prediction characteristics. The current formalism can be applied to any multi-hole probe; however, only test results for the most commonly used five-hole Cone and Prism probe types are reported in this article.

  10. SoxB1-driven transcriptional network underlies neural-specific interpretation of morphogen signals.

    PubMed

    Oosterveen, Tony; Kurdija, Sanja; Ensterö, Mats; Uhde, Christopher W; Bergsland, Maria; Sandberg, Magnus; Sandberg, Rickard; Muhr, Jonas; Ericson, Johan

    2013-04-30

    The reiterative deployment of a small cadre of morphogen signals underlies patterning and growth of most tissues during embryogenesis, but how such inductive events result in tissue-specific responses remains poorly understood. By characterizing cis-regulatory modules (CRMs) associated with genes regulated by Sonic hedgehog (Shh), retinoids, or bone morphogenetic proteins in the CNS, we provide evidence that the neural-specific interpretation of morphogen signaling reflects a direct integration of these pathways with SoxB1 proteins at the CRM level. Moreover, expression of SoxB1 proteins in the limb bud confers on mesodermal cells the potential to activate neural-specific target genes upon Shh, retinoid, or bone morphogenetic protein signaling, and the collocation of binding sites for SoxB1 and morphogen-mediatory transcription factors in CRMs faithfully predicts neural-specific gene activity. Thus, an unexpectedly simple transcriptional paradigm appears to conceptually explain the neural-specific interpretation of pleiotropic signaling during vertebrate development. Importantly, genes induced in a SoxB1-dependent manner appear to constitute repressive gene regulatory networks that are directly interlinked at the CRM level to constrain the regional expression of patterning genes. Accordingly, not only does the topology of SoxB1-driven gene regulatory networks provide a tissue-specific mode of gene activation, but it also determines the spatial expression pattern of target genes within the developing neural tube.

  11. A novel method for flow pattern identification in unstable operational conditions using gamma ray and radial basis function.

    PubMed

    Roshani, G H; Nazemi, E; Roshani, M M

    2017-05-01

    Changes of fluid properties (especially density) strongly affect the performance of radiation-based multiphase flow meters and could cause error in recognizing the flow pattern and determining the void fraction. In this work, we proposed a methodology based on a combination of multi-beam gamma-ray attenuation and dual-modality densitometry techniques using an RBF neural network in order to recognize the flow regime and determine the void fraction in gas-liquid two-phase flows independent of liquid phase changes. The proposed system consists of one 137Cs source, two transmission detectors, and one scattering detector. The registered counts in the two transmission detectors were used as the inputs of one primary Radial Basis Function (RBF) neural network for recognizing the flow regime independent of liquid phase density. Then, after flow regime identification, three RBF neural networks were utilized for determining the void fraction independent of liquid phase density. The registered counts in the scattering detector and the first transmission detector were used as the inputs of these three RBF neural networks. Using this simple methodology, all the flow patterns were correctly recognized and the void fraction was predicted independent of liquid phase density with a mean relative error (MRE) of less than 3.28%. Copyright © 2017 Elsevier Ltd. All rights reserved.
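An RBF network of the kind used above maps detector counts through Gaussian basis functions to class scores. The following is a minimal sketch under stated assumptions: the detector counts, cluster positions, and three-regime labels are entirely synthetic stand-ins, not the paper's data, and the output weights are fitted by ridge-regularized least squares rather than whatever training the authors used.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_design(X, centers, width):
    # Gaussian basis activation of every sample at every center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf(X, Y, centers, width, ridge=1e-6):
    # output weights by ridge-regularized linear least squares
    Phi = rbf_design(X, centers, width)
    return np.linalg.solve(Phi.T @ Phi + ridge * np.eye(len(centers)), Phi.T @ Y)

# synthetic stand-in for two transmission-detector counts, three flow regimes
X = np.vstack([rng.normal(m, 0.3, (40, 2))
               for m in ([0, 0], [3, 0], [0, 3])])
Y = np.zeros((120, 3))
Y[:40, 0] = Y[40:80, 1] = Y[80:, 2] = 1.0
centers = np.vstack([X[i * 40:i * 40 + 4] for i in range(3)])  # 4 centers/regime
W = train_rbf(X, Y, centers, width=1.0)
pred = rbf_design(X, centers, 1.0) @ W
accuracy = (pred.argmax(1) == Y.argmax(1)).mean()
```

On well-separated synthetic clusters like these, the one-hot regime with the largest score recovers the label for essentially every sample.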

  12. F77NNS - A FORTRAN-77 NEURAL NETWORK SIMULATOR

    NASA Technical Reports Server (NTRS)

    Mitchell, P. H.

    1994-01-01

    F77NNS (A FORTRAN-77 Neural Network Simulator) simulates the popular back error propagation neural network. F77NNS is an ANSI-77 FORTRAN program designed to take advantage of vectorization when run on machines having this capability, but it will run on any computer with an ANSI-77 FORTRAN compiler. Artificial neural networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to biological nerve cells. Problems which involve pattern matching or system modeling readily fit the class of problems which F77NNS is designed to solve. The program's formulation trains a neural network using Rumelhart's back-propagation algorithm. Typically the nodes of a network are grouped together into clumps called layers. A network will generally have an input layer through which the various environmental stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to features of the problem being solved. Other layers, which form intermediate steps between the input and output layers, are called hidden layers. The back-propagation training algorithm can require massive computational resources to implement a large network such as a network capable of learning text-to-phoneme pronunciation rules as in the famous Sejnowski experiment. The Sejnowski neural network learns to pronounce 1000 common English words. The standard input data defines the specific inputs that control the type of run to be made, and input files define the NN in terms of the layers and nodes, as well as the input/output (I/O) pairs. The program has a restart capability so that a neural network can be solved in stages suitable to the user's resources and desires. F77NNS allows the user to customize the patterns of connections between layers of a network. 
The size of the neural network to be solved is limited only by the amount of random access memory (RAM) available to the user. The program has a memory requirement of about 900K. The standard distribution medium for this package is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. F77NNS was developed in 1989.
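The layered back-propagation scheme that F77NNS implements in FORTRAN can be sketched in a few lines of Python. This is a generic illustration of Rumelhart's algorithm on a toy pattern-matching task (XOR), not a port of F77NNS; the layer sizes, learning rate, and epoch count are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(A):
    # append a constant column so each layer has a trainable bias
    return np.hstack([A, np.ones((A.shape[0], 1))])

def train(X, Y, n_hidden=4, lr=0.5, epochs=10000):
    W1 = rng.normal(0.0, 0.5, (X.shape[1] + 1, n_hidden))   # input -> hidden
    W2 = rng.normal(0.0, 0.5, (n_hidden + 1, Y.shape[1]))   # hidden -> output
    Xb = add_bias(X)
    for _ in range(epochs):
        H = sigmoid(Xb @ W1)                       # forward pass
        O = sigmoid(add_bias(H) @ W2)
        dO = (O - Y) * O * (1.0 - O)               # output-layer delta
        dH = (dO @ W2[:-1].T) * H * (1.0 - H)      # error propagated back
        W2 -= lr * add_bias(H).T @ dO
        W1 -= lr * Xb.T @ dH
    return W1, W2

# toy input/output pairs: XOR, the classic test requiring a hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([[0], [1], [1], [0]], float)
W1, W2 = train(X, Y)
out = sigmoid(add_bias(sigmoid(add_bias(X) @ W1)) @ W2)
mse = ((out - Y) ** 2).mean()
```

The structure mirrors the simulator's layout: an input layer, one hidden layer, an output layer, and iterative weight updates from the backward-propagated error.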

  13. Weaving and neural complexity in symmetric quantum states

    DOE PAGES

    Susa, Cristian E.; Girolami, Davide

    2017-12-27

    Here, we study the behaviour of two different measures of the complexity of multipartite correlation patterns, weaving and neural complexity, for symmetric quantum states. Weaving is the weighted sum of genuine multipartite correlations of any order, where the weights are proportional to the correlation order. The neural complexity, originally introduced to characterize correlation patterns in classical neural networks, is here extended to the quantum scenario. We derive closed formulas of the two quantities for GHZ states mixed with white noise.

  14. Weaving and neural complexity in symmetric quantum states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Susa, Cristian E.; Girolami, Davide

    Here, we study the behaviour of two different measures of the complexity of multipartite correlation patterns, weaving and neural complexity, for symmetric quantum states. Weaving is the weighted sum of genuine multipartite correlations of any order, where the weights are proportional to the correlation order. The neural complexity, originally introduced to characterize correlation patterns in classical neural networks, is here extended to the quantum scenario. We derive closed formulas of the two quantities for GHZ states mixed with white noise.

  15. Comparing success levels of different neural network structures in extracting discriminative information from the response patterns of a temperature-modulated resistive gas sensor

    NASA Astrophysics Data System (ADS)

    Hosseini-Golgoo, S. M.; Bozorgi, H.; Saberkari, A.

    2015-06-01

    Performances of three neural networks, consisting of a multi-layer perceptron, a radial basis function, and a neuro-fuzzy network with local linear model tree training algorithm, in modeling and extracting discriminative features from the response patterns of a temperature-modulated resistive gas sensor are quantitatively compared. For response pattern recording, a voltage staircase containing five steps each with a 20 s plateau is applied to the micro-heater of the sensor, when 12 different target gases, each at 11 concentration levels, are present. In each test, the hidden layer neuron weights are taken as the discriminatory feature vector of the target gas. These vectors are then mapped to a 3D feature space using linear discriminant analysis. The discriminative information content of the feature vectors is determined by the calculation of the Fisher’s discriminant ratio, affording quantitative comparison among the success rates achieved by the different neural network structures. The results demonstrate a superior discrimination ratio for features extracted from local linear neuro-fuzzy and radial-basis-function networks with recognition rates of 96.27% and 90.74%, respectively.
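The Fisher discriminant ratio used above to score feature quality is the ratio of between-class scatter to within-class scatter. A minimal one-dimensional version can be written as follows; the toy feature values are invented for illustration and are unrelated to the gas-sensor data.

```python
import numpy as np

def fisher_ratio(features, labels):
    """Between-class to within-class scatter ratio of 1-D feature values."""
    features, labels = np.asarray(features, float), np.asarray(labels)
    grand = features.mean()
    between = within = 0.0
    for c in np.unique(labels):
        fc = features[labels == c]
        between += len(fc) * (fc.mean() - grand) ** 2
        within += ((fc - fc.mean()) ** 2).sum()
    return between / within

# well-separated classes score far higher than overlapping ones
tight = fisher_ratio([0.0, 0.1, 5.0, 5.1], [0, 0, 1, 1])
loose = fisher_ratio([0.0, 2.0, 1.0, 3.0], [0, 0, 1, 1])
```

A large ratio means class means are far apart relative to the spread inside each class, which is exactly what makes one feature set more discriminative than another.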

  16. Denoising by coupled partial differential equations and extracting phase by backpropagation neural networks for electronic speckle pattern interferometry.

    PubMed

    Tang, Chen; Lu, Wenjing; Chen, Song; Zhang, Zhen; Li, Botao; Wang, Wenping; Han, Lin

    2007-10-20

    We extend and refine previous work [Appl. Opt. 46, 2907 (2007)]. Combining the coupled nonlinear partial differential equations (PDEs) denoising model with the ordinary differential equations enhancement method, we propose a new denoising and enhancing model for electronic speckle pattern interferometry (ESPI) fringe patterns. Meanwhile, we propose a backpropagation neural network (BPNN) method to obtain unwrapped phase values based on a skeleton map instead of traditional interpolation. We test the introduced methods on computer-simulated speckle ESPI fringe patterns and an experimentally obtained fringe pattern, respectively. The experimental results show that the coupled nonlinear PDEs denoising model is capable of effectively removing noise, and the unwrapped phase values obtained by the BPNN method are much more accurate than those obtained by the well-known traditional interpolation. In addition, the accuracy of the BPNN method is adjustable by changing the parameters of the networks, such as the number of neurons.

  17. Sequence memory based on coherent spin-interaction neural networks.

    PubMed

    Xia, Min; Wong, W K; Wang, Zhijie

    2014-12-01

    Sequence information processing, for instance sequence memory, plays an important role in many functions of the brain. In the workings of the human brain, the steady-state period is alterable. However, in the existing sequence memory models using heteroassociations, the steady-state period cannot be changed during sequence recall. In this work, a novel neural network model for sequence memory with a controllable steady-state period based on coherent spin-interaction is proposed. In the proposed model, neurons fire collectively in a phase-coherent manner, which lets a neuron group respond differently to different patterns and also lets different neuron groups respond differently to one pattern. Simulation results demonstrating the performance of the sequence memory are presented. By introducing the new coherent spin-interaction sequence memory model, the steady-state period can be controlled by the dimension parameters and the overlap between the input pattern and the stored patterns. The sequence storage capacity is enlarged by coherent spin interaction compared with the existing sequence memory models. Furthermore, the sequence storage capacity has an exponential relationship to the dimension of the neural network.
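The baseline heteroassociative scheme that this paper contrasts with can be sketched in a few lines: a weight matrix stores pattern-to-successor associations, so each recall step jumps to the next pattern with a fixed (uncontrollable) steady-state period of one step. The pattern count and dimension below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# four random bipolar patterns of dimension 64, stored as a cycle
P = np.sign(rng.normal(size=(4, 64)))
N = P.shape[1]

# heteroassociative weights: each pattern is linked to its successor
W = sum(np.outer(P[(i + 1) % 4], P[i]) for i in range(4)) / N

# recall: present pattern 0 and iterate; every step advances the sequence
x = P[0].copy()
overlaps = []
for i in range(1, 5):
    x = np.sign(W @ x)
    overlaps.append((x == P[i % 4]).mean())   # agreement with expected pattern
```

With few stored patterns relative to the dimension, the crosstalk terms are small and each step lands almost exactly on the next stored pattern, illustrating why the period is fixed in this classical model.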

  18. Study of the Gray Scale, Polychromatic, Distortion Invariant Neural Networks Using the Ipa Model.

    NASA Astrophysics Data System (ADS)

    Uang, Chii-Maw

    Research in the optical neural network field is primarily motivated by the fact that humans recognize objects better than conventional digital computers do, and by the inherently massively parallel nature of optics. This research represents a continuous effort during the past several years in exploiting neurocomputing for pattern recognition. Based on the interpattern association (IPA) model and the Hamming net model, many new systems and applications are introduced. A gray-level discrete associative memory based on object decomposition/composition is proposed for recognizing gray-level patterns. This technique extends the processing ability from the binary mode to the gray-level mode, and thus the information capacity is increased. Two polychromatic optical neural networks using color liquid crystal television (LCTV) panels for color pattern recognition are introduced. By introducing a color encoding technique in conjunction with the interpattern associative algorithm, a color associative memory was realized. Based on the color decomposition and composition technique, a color exemplar-based Hamming net was built for color image classification. A shift-invariant neural network is presented through use of the translation-invariant property of the modulus of the Fourier transformation and the hetero-associative interpattern association (IPA) memory. To extract the main features, a quadrantal sampling method is used to sample the data, which then replace the training patterns. The concept of hetero-associative memory is used to recall distorted objects. A shift- and rotation-invariant neural network using an interpattern hetero-association (IHA) model is presented. To preserve the shift- and rotation-invariant properties, a set of binarized encoded circular harmonic expansion (CHE) functions in the Fourier domain is used as the training set. 
We use the shift and symmetric properties of the modulus of the Fourier spectrum to avoid the problem of centering the CHE functions. Almost all neural networks have both positive and negative weights, which increases the difficulty of optical implementation. A method to construct a unipolar IPA interconnection weight matrix (IWM) is discussed: by searching the redundant interconnection links, all negative links can be removed effectively.

  19. Challenges to the Use of Artificial Neural Networks for Diagnostic Classifications with Student Test Data

    ERIC Educational Resources Information Center

    Briggs, Derek C.; Circi, Ruhan

    2017-01-01

    Artificial Neural Networks (ANNs) have been proposed as a promising approach for the classification of students into different levels of a psychological attribute hierarchy. Unfortunately, because such classifications typically rely upon internally produced item response patterns that have not been externally validated, the instability of ANN…

  20. A comparative study between nonlinear regression and artificial neural network approaches for modelling wild oat (Avena fatua) field emergence

    USDA-ARS?s Scientific Manuscript database

    Non-linear regression techniques are used widely to fit weed field emergence patterns to soil microclimatic indices using S-type functions. Artificial neural networks present interesting and alternative features for such modeling purposes. In this work, a univariate hydrothermal-time based Weibull m...

  1. Analysis of structural patterns in the brain with the complex network approach

    NASA Astrophysics Data System (ADS)

    Maksimenko, Vladimir A.; Makarov, Vladimir V.; Kharchenko, Alexander A.; Pavlov, Alexey N.; Khramova, Marina V.; Koronovskii, Alexey A.; Hramov, Alexander E.

    2015-03-01

    In this paper we study mechanisms of phase synchronization in a model network of Van der Pol oscillators and in the neural network of the brain by considering macroscopic parameters of these networks. As the macroscopic characteristic of the model network we consider the summary signal produced by the oscillators. Similarly to the model simulations, we study EEG signals reflecting the macroscopic dynamics of the neural network. We show that the appearance of phase synchronization leads to an increased peak in the wavelet spectrum related to the dynamics of the synchronized oscillators. The observed correlation between the phase relations of individual elements and the macroscopic characteristics of the whole network provides a way to detect phase synchronization in neural networks in cases of normal and pathological activity.
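The link between individual phase relations and a macroscopic order parameter can be illustrated with a mean-field phase model. This is a hedged stand-in, not the authors' Van der Pol network: a Kuramoto-type model with invented frequencies and coupling values, chosen only to show that strong coupling yields a coherent macroscopic signal while zero coupling does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def order_parameter(K, N=100, dt=0.01, steps=3000):
    """Mean-field phase model; |z| measures macroscopic phase coherence."""
    omega = rng.normal(1.0, 0.1, N)            # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, N)     # initial phases
    for _ in range(steps):
        z = np.exp(1j * theta).mean()          # macroscopic "summary signal"
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return np.abs(np.exp(1j * theta).mean())

r_weak, r_strong = order_parameter(K=0.0), order_parameter(K=2.0)
```

A large final order parameter corresponds to the sharp spectral peak of the summed signal that the paper uses to detect phase synchronization; with no coupling the phases stay incoherent and the macroscopic signal stays small.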

  2. Application of a neural network for reflectance spectrum classification

    NASA Astrophysics Data System (ADS)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum anywhere from the ultraviolet to the thermal infrared regions. These methods analyze reflectance on a pixel-by-pixel basis. Inspired by the high performance that Convolutional Neural Networks (CNNs) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using bidirectional reflectance distribution function (BRDF) data, we can reformulate the 4-dimensional data into 2 dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples for improving the robustness of the neural network training. Unlike traditional classification using hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods utilizing spatial features. Although training typically has a large computational cost, simple classifiers work well when subsequently using neural-network-generated features. Currently, most popular neural networks such as VGG, GoogLeNet and AlexNet are trained on RGB spatial image data. Our approach aims to build a directional-reflectance-spectrum-based neural network to help us understand the data from another perspective. At the end of this paper, we compare several classifiers and analyze the trade-offs among neural network parameters.
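The 4-D-to-2-D reformulation can be shown with a reshape: incident directions (zenith × azimuth) collapse to rows and reflected directions to columns. The grid sizes and random BRDF values below are illustrative assumptions, not the paper's sampling.

```python
import numpy as np

# BRDF samples on a grid of incident and reflected directions: 4-D data
n_ti, n_pi, n_tr, n_pr = 4, 8, 4, 8          # zenith x azimuth, in and out
brdf = np.random.default_rng(0).random((n_ti, n_pi, n_tr, n_pr))

# collapse incident directions to rows and reflected directions to columns,
# yielding a 2-D directional-reflectance "pattern image" a CNN can ingest
image = brdf.reshape(n_ti * n_pi, n_tr * n_pr)
```

Each row of the resulting image is one incident direction's reflectance pattern over all exit directions; per-channel data would simply add a third axis, giving the image-like input a standard CNN expects.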

  3. Dimensionality of brain networks linked to life-long individual differences in self-control.

    PubMed

    Berman, Marc G; Yourganov, Grigori; Askren, Mary K; Ayduk, Ozlem; Casey, B J; Gotlib, Ian H; Kross, Ethan; McIntosh, Anthony R; Strother, Stephen; Wilson, Nicole L; Zayas, Vivian; Mischel, Walter; Shoda, Yuichi; Jonides, John

    2013-01-01

    The ability to delay gratification in childhood has been linked to positive outcomes in adolescence and adulthood. Here we examine a subsample of participants from a seminal longitudinal study of self-control throughout a subject's life span. Self-control, first studied in children at age 4 years, is now re-examined 40 years later, on a task that required control over the contents of working memory. We examine whether patterns of brain activation on this task can reliably distinguish participants with consistently low and high self-control abilities (low versus high delayers). We find that low delayers recruit significantly higher-dimensional neural networks when performing the task compared with high delayers. High delayers are also more homogeneous as a group in their neural patterns compared with low delayers. From these brain patterns, we can predict with 71% accuracy whether a participant is a high or low delayer. The present results suggest that dimensionality of neural networks is a biological predictor of self-control abilities.

  4. Ex vivo determination of chewing patterns using FBG and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Karam, L. Z.; Pegorini, V.; Pitta, C. S. R.; Assmann, T. S.; Cardoso, R.; Kalinowski, H. J.; Silva, J. C. C.

    2014-05-01

    This paper reports the experimental procedures performed in a bovine head for the determination of chewing patterns during the mastication process. Mandible movements during chewing have been simulated either by using two plasticine materials with different textures or without material. Fibre Bragg grating sensors were fixed in the jaw to monitor the biomechanical forces involved in the chewing process. The acquired signals from the sensors fed the input of an artificial neural network aiming at the classification of the measured chewing patterns for each material used in the experiment. The results obtained from the simulation of the chewing process presented different patterns for the different textures of plasticine, resulting in the determination of three chewing patterns with a classification error of 5%.

  5. Optical Calibration Process Developed for Neural-Network-Based Optical Nondestructive Evaluation Method

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2004-01-01

    A completely optical calibration process has been developed at Glenn for calibrating a neural-network-based nondestructive evaluation (NDE) method. The NDE method itself detects very small changes in the characteristic patterns or vibration mode shapes of vibrating structures as discussed in many references. The mode shapes or characteristic patterns are recorded using television or electronic holography and change when a structure experiences, for example, cracking, debonds, or variations in fastener properties. An artificial neural network can be trained to be very sensitive to changes in the mode shapes, but quantifying or calibrating that sensitivity in a consistent, meaningful, and deliverable manner has been challenging. The standard calibration approach has been difficult to implement, where the response to damage of the trained neural network is compared with the responses of vibration-measurement sensors. In particular, the vibration-measurement sensors are intrusive, insufficiently sensitive, and not numerous enough. In response to these difficulties, a completely optical alternative to the standard calibration approach was proposed and tested successfully. Specifically, the vibration mode to be monitored for structural damage was intentionally contaminated with known amounts of another mode, and the response of the trained neural network was measured as a function of the peak-to-peak amplitude of the contaminating mode. The neural network calibration technique essentially uses the vibration mode shapes of the undamaged structure as standards against which the changed mode shapes are compared. The published response of the network can be made nearly independent of the contaminating mode, if enough vibration modes are used to train the net. The sensitivity of the neural network can be adjusted for the environment in which the test is to be conducted. 
The response of a neural network trained with measured vibration patterns, for use on a vibration-isolation table in the presence of various sources of laboratory noise, is shown. The output of the neural network is called the degradable classification index. The curve was generated by a simultaneous comparison of means, and it shows a peak-to-peak sensitivity of about 100 nm. A second graph uses model-generated data from a compressor blade to show that much higher sensitivities are possible when the environment can be controlled better; the peak-to-peak sensitivity there is about 20 nm. The training procedure was modified for the second graph, and the data were subjected to an intensity-dependent transformation called folding. All the measurements for this approach to calibration were optical: the peak-to-peak amplitudes of the vibration modes were measured using heterodyne interferometry, and the modes themselves were recorded using television (electronic) holography.

  6. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    NASA Astrophysics Data System (ADS)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
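
    The reservoir computing framework mentioned above can be illustrated with a minimal sketch: a fixed random recurrent network is driven by an input signal, and only a linear readout is trained. This toy uses rate-based tanh units rather than the paper's spiking neurons, and every size and parameter below is an illustrative choice, not taken from the paper.

```python
import math
import random

random.seed(0)
N, T = 40, 200  # reservoir size and number of time steps (illustrative)

# Fixed random recurrent and input weights; only the readout is trained.
W = [[random.gauss(0, 1.0 / math.sqrt(N)) for _ in range(N)] for _ in range(N)]
w_in = [random.gauss(0, 1.0) for _ in range(N)]

# Drive the reservoir with a sine wave; the target is a phase-shifted copy.
x = [0.0] * N
states, targets = [], []
for t in range(T):
    u = math.sin(0.1 * t)
    x = [math.tanh(sum(W[i][j] * x[j] for j in range(N)) + w_in[i] * u)
         for i in range(N)]
    states.append(list(x))
    targets.append(math.sin(0.1 * t + 0.5))

# Ridge-regression readout: solve (S^T S + lam*I) w = S^T y by elimination.
lam = 1e-6
A = [[sum(s[i] * s[j] for s in states) + (lam if i == j else 0.0)
      for j in range(N)] for i in range(N)]
b = [sum(s[i] * y for s, y in zip(states, targets)) for i in range(N)]
for col in range(N):                     # Gaussian elimination with pivoting
    piv = max(range(col, N), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, N):
        f = A[r][col] / A[col][col]
        for c in range(col, N):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]
w_out = [0.0] * N
for i in range(N - 1, -1, -1):           # back-substitution
    w_out[i] = (b[i] - sum(A[i][j] * w_out[j]
                           for j in range(i + 1, N))) / A[i][i]

pred = [sum(wi * si for wi, si in zip(w_out, s)) for s in states]
mse = sum((p - y) ** 2 for p, y in zip(pred, targets)) / T
```

    Only the readout weights are fit; the recurrent weights are never trained, which is the defining trait of reservoir computing.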

  7. Complex Networks in Psychological Models

    NASA Astrophysics Data System (ADS)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing neural-network models to describe mechanisms associated with mental processes in terms of a neurocomputational substrate. These models are examples of real-world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain the development of cortical map structure and the dynamics of memory access, and to unify different mental processes within a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, in which we varied dopaminergic modulation and observed the self-organizing emergent patterns in the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through normal and neurotic behavior to creativity.

  8. Neural network pattern recognition of thermal-signature spectra for chemical defense

    NASA Astrophysics Data System (ADS)

    Carrieri, Arthur H.; Lim, Pascal I.

    1995-05-01

    We treat infrared patterns of absorption or emission by nerve and blister agent compounds (and simulants of this chemical group) as features for the training of neural networks to detect the compounds' liquid layers on the ground or their vapor plumes during evaporation by external heating. Training of the four-layer network architecture combines a backward-error-propagation algorithm with a gradient-descent paradigm. We conduct testing by feed-forwarding preprocessed spectra through the network in a scaled format consistent with the structure of the training-data-set representation. The best-performance weight matrix (spectral filter) evolved from final network training and testing with software simulation trials is electronically transferred to a set of eight artificial intelligence integrated circuits (ICs) in specific modular form (splitting of weight matrices). This form makes full use of all input-output IC nodes. This neural network computer serves an important real-time detection function when it is integrated into pre- and postprocessing data-handling units of a tactical prototype thermoluminescence sensor now under development at the Edgewood Research, Development, and Engineering Center.
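
    The training loop described above (backward error propagation plus gradient descent) can be sketched on a toy problem. The four-layer architecture below (input, two hidden layers, output) learns XOR rather than real spectral features; the layer sizes, learning rate, and retry-over-seeds loop are illustrative choices, not the paper's.

```python
import math
import random

def train_xor(seed, epochs=2000, lr=1.0):
    """Train a 2-4-4-1 network (tanh hidden layers, sigmoid output) on XOR."""
    rnd = random.Random(seed)
    sizes = [2, 4, 4, 1]
    # W[l][i][j]: weight from unit j of layer l to unit i of layer l+1
    # (index sizes[l] is the bias input).
    W = [[[rnd.gauss(0, 0.5) for _ in range(sizes[l] + 1)]
          for _ in range(sizes[l + 1])] for l in range(3)]
    data = [([0.0, 0.0], 0), ([0.0, 1.0], 1), ([1.0, 0.0], 1), ([1.0, 1.0], 0)]

    def forward(x):
        acts = [list(x)]
        for l in range(3):
            z = [sum(w * a for w, a in zip(row, acts[-1] + [1.0]))
                 for row in W[l]]
            if l < 2:
                acts.append([math.tanh(v) for v in z])
            else:  # clamped sigmoid on the output layer
                acts.append([1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, v))))
                             for v in z])
        return acts

    for _ in range(epochs):
        for x, y in data:
            acts = forward(x)
            out = acts[-1][0]
            # delta of the MSE loss at the sigmoid output
            deltas = [[(out - y) * out * (1.0 - out)]]
            for l in (2, 1):  # backpropagate through the tanh hidden layers
                deltas.insert(0, [(1.0 - acts[l][j] ** 2) *
                                  sum(W[l][i][j] * deltas[0][i]
                                      for i in range(len(deltas[0])))
                                  for j in range(len(acts[l]))])
            for l in range(3):  # plain gradient-descent weight update
                inp = acts[l] + [1.0]
                for i, d in enumerate(deltas[l]):
                    for j in range(len(inp)):
                        W[l][i][j] -= lr * d * inp[j]

    loss = sum((forward(x)[-1][0] - y) ** 2 for x, y in data) / 4.0
    preds = [round(forward(x)[-1][0]) for x, _ in data]
    return loss, preds

# Backprop on XOR can stall in a local minimum, so retry over seeds.
for seed in range(20):
    loss, preds = train_xor(seed)
    if loss < 0.05:
        break
```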

  9. Neural Network Model For Fast Learning And Retrieval

    NASA Astrophysics Data System (ADS)

    Arsenault, Henri H.; Macukow, Bohdan

    1989-05-01

    An approach to learning in a multilayer neural network is presented. The proposed network learns by creating interconnections between the input layer and the intermediate layer. In one of the new storage prescriptions proposed, interconnections are excitatory (positive) only and the weights depend on the stored patterns. In the intermediate layer each mother cell is responsible for one stored pattern. Mutually interconnected neurons in the intermediate layer perform a winner-take-all operation, taking into account correlations between stored vectors. The performance of networks using this interconnection prescription is compared with two previously proposed schemes, one using inhibitory connections at the output and one using all-or-nothing interconnections. The network can be used as a content-addressable memory or as a symbolic substitution system that yields an arbitrarily defined output for any input. The training of a model to perform Boolean logical operations is also described. Computer simulations using the network as an autoassociative content-addressable memory show the model to be efficient. Content-addressable associative memories and neural logic modules can be combined to perform logic operations on highly corrupted data.
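
    The mother-cell, winner-take-all retrieval described above can be sketched directly: each intermediate cell stores one pattern through excitatory-only weights, and recall lets the most strongly activated cell win. The 3 x 3 binary patterns below are invented for illustration, and the paper's compensation for correlations between stored vectors is omitted.

```python
# Stored binary patterns, one "mother cell" each; the excitatory-only
# weights of a mother cell are simply its stored pattern.
patterns = {
    "T": [1, 1, 1,
          0, 1, 0,
          0, 1, 0],
    "L": [1, 0, 0,
          1, 0, 0,
          1, 1, 1],
    "X": [1, 0, 1,
          0, 1, 0,
          1, 0, 1],
}

def recall(probe):
    """Winner-take-all over mother cells: highest excitatory overlap wins."""
    scores = {name: sum(w * p for w, p in zip(pat, probe))
              for name, pat in patterns.items()}
    winner = max(scores, key=scores.get)
    return winner, patterns[winner]  # content-addressable: stored pattern out

# Corrupt the "T" pattern in one pixel and recall it.
noisy_t = [1, 1, 1,
           0, 1, 0,
           0, 0, 0]   # bottom-centre pixel lost
name, restored = recall(noisy_t)
```

    Used as an autoassociative content-addressable memory, the corrupted probe still retrieves the complete stored "T".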

  10. Extending unified-theory-of-reinforcement neural networks to steady-state operant behavior.

    PubMed

    Calvin, Olivia L; McDowell, J J

    2016-06-01

    The unified theory of reinforcement has been used to develop models of behavior over the last 20 years (Donahoe et al., 1993). Previous research has focused on the theory's concordance with the respondent behavior of humans and animals. In this experiment, neural networks were developed from the theory to extend the unified theory of reinforcement to operant behavior on single-alternative variable-interval schedules. This area of operant research was selected because previously developed neural networks could be applied to it without significant alteration. Previous research with humans and animals indicates that the pattern of their steady-state behavior is hyperbolic when plotted against the obtained rate of reinforcement (Herrnstein, 1970). A genetic algorithm was used in the first part of the experiment to determine parameter values for the neural networks, because values that were used in previous research did not result in a hyperbolic pattern of behavior. After finding these parameters, hyperbolic and other similar functions were fitted to the behavior produced by the neural networks. The form of the neural network's behavior was best described by an exponentiated hyperbola (McDowell, 1986; McLean and White, 1983; Wearden, 1981), which was derived from the generalized matching law (Baum, 1974). In post-hoc analyses the addition of a baseline rate of behavior significantly improved the fit of the exponentiated hyperbola and removed systematic residuals. The form of this function was consistent with human and animal behavior, but the estimated parameter values were not. Copyright © 2016 Elsevier B.V. All rights reserved.
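
    The exponentiated hyperbola referred to above has the form B = k * R^a / (R^a + Re^a), where B is the response rate, R the obtained reinforcement rate, k the asymptote, Re the reinforcement rate at half-maximum, and a the exponent. A coarse grid-search fit on synthetic, noise-free data (not the study's data) sketches the fitting step:

```python
# Exponentiated hyperbola (generalized matching law form); illustration only.
def exp_hyperbola(R, k, Re, a):
    return k * R ** a / (R ** a + Re ** a)

# Synthetic, noise-free "behavior" generated from known parameters.
true_k, true_Re, true_a = 100.0, 50.0, 0.8
rates = [5, 10, 20, 40, 80, 160, 320]
behavior = [exp_hyperbola(R, true_k, true_Re, true_a) for R in rates]

# Coarse grid search minimizing the sum of squared errors; a real fit
# (or the paper's genetic algorithm) would search far more finely.
best, best_sse = None, float("inf")
for k in [80.0, 90.0, 100.0, 110.0]:
    for Re in [25.0, 50.0, 75.0, 100.0]:
        for a in [0.6, 0.8, 1.0, 1.2]:
            sse = sum((exp_hyperbola(R, k, Re, a) - B) ** 2
                      for R, B in zip(rates, behavior))
            if sse < best_sse:
                best, best_sse = (k, Re, a), sse
```

    With noise-free data and the true values on the grid, the search recovers the generating parameters exactly.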

  11. Hybrid information privacy system: integration of chaotic neural network and RSA coding

    NASA Astrophysics Data System (ADS)

    Hsu, Ming-Kai; Willey, Jeff; Lee, Ting N.; Szu, Harold H.

    2005-03-01

    Electronic mail is used worldwide, and much of it is easily compromised by hackers. In this paper, we propose a free, fast, and convenient hybrid privacy system to protect email communication. The privacy system is implemented by combining the RSA security algorithm with a specific chaotic-neural-network encryption process. The receiver can decrypt a received email as long as it can reproduce the specified chaotic-neural-network series, the so-called spatial-temporal keys. The chaotic typing and initial seed value of the chaotic-neural-network series, encrypted by the RSA algorithm, reproduce the spatial-temporal keys. The encrypted chaotic typing and initial seed value are hidden in a watermark mixed nonlinearly with the message media and wrapped with convolutional error-correction codes for wireless third-generation cellular phones. The message media can be an arbitrary image. Pattern noise has to be considered during transmission, since it could affect or change the spatial-temporal keys. Because any change or modification of the chaotic typing or the initial seed value of the chaotic-neural-network series is unacceptable, the RSA codec system must be robust and fault-tolerant over the wireless channel. The robustness and fault tolerance of chaotic neural networks (CNNs) were proved via a field theory of associative memory by Szu in 1997. The 1-D chaos-generating nodes from the logistic map with arbitrarily negative slope a = p/q, generating the N-shaped sigmoid, were first given by Szu in 1992. In this paper, we simulate the robustness and fault tolerance of CNNs under additive noise and pattern noise. We also implement a private version of RSA coding and the chaos encryption process on messages.
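
    The chaotic-keystream idea can be sketched with a plain logistic map. This is only an illustration of the seed sensitivity the scheme relies on, not the paper's chaotic-neural-network or RSA machinery, and all values are invented.

```python
# Keystream from the logistic map x -> r*x*(1-x); illustration only.
def logistic_keystream(seed, r, n, warmup=100):
    """Discard a warm-up transient, then quantize each state to a byte."""
    x = seed
    for _ in range(warmup):
        x = r * x * (1.0 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)
    return stream

def crypt(data, seed, r=3.99):
    """XOR with the keystream; the same call both encrypts and decrypts."""
    ks = logistic_keystream(seed, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"spatial-temporal keys"
cipher = crypt(msg, seed=0.123456789)
plain = crypt(cipher, seed=0.123456789)   # same seed: message recovered
wrong = crypt(cipher, seed=0.123456790)   # seed off by 1e-9: garbage
```

    Because the map is chaotic, a seed error of one part in a billion yields a completely different keystream after the warm-up, which is why the seed itself must be protected (in the paper, by RSA).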

  12. An Ensemble of Neural Networks for Stock Trading Decision Making

    NASA Astrophysics Data System (ADS)

    Chang, Pei-Chann; Liu, Chen-Hao; Fan, Chin-Yuan; Lin, Jun-Lin; Lai, Chih-Ming

    Detection of stock turning signals is an interesting problem arising in numerous financial and economic planning contexts. In this paper, an ensemble neural network system with intelligent piecewise linear representation (PLR) for stock turning-point detection is presented. The intelligent PLR method generates numerous stock turning signals from the historical database, and the ensemble neural network system is then trained on these patterns and retrieves similar stock-price patterns from the historical data. These turning signals represent short-term and long-term trading signals for selling or buying stocks in the market, and they are applied to forecast future turning points in the test data. Experimental results demonstrate that the hybrid system can make a significant and consistent profit compared with other approaches using stock data available in the market.
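
    A piecewise linear representation reduces a price series to candidate turning points. The sketch below uses the simplest variant, local-extremum detection on an invented series; the paper's intelligent PLR and the neural network ensemble are not reproduced.

```python
def turning_points(prices):
    """Return (index, kind) pairs for local peaks and troughs of a series."""
    signals = []
    for i in range(1, len(prices) - 1):
        if prices[i - 1] < prices[i] > prices[i + 1]:
            signals.append((i, "sell"))   # local peak: sell signal
        elif prices[i - 1] > prices[i] < prices[i + 1]:
            signals.append((i, "buy"))    # local trough: buy signal
    return signals

# Invented price series with a peak, a trough, and another peak.
series = [10, 11, 13, 12, 11, 9, 10, 12, 14, 13]
signals = turning_points(series)
```

    In the paper, signals like these become training targets for the neural network ensemble rather than being traded directly.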

  13. Analogue spin-orbit torque device for artificial-neural-network-based associative memory operation

    NASA Astrophysics Data System (ADS)

    Borders, William A.; Akima, Hisanao; Fukami, Shunsuke; Moriya, Satoshi; Kurihara, Shouta; Horio, Yoshihiko; Sato, Shigeo; Ohno, Hideo

    2017-01-01

    We demonstrate associative memory operations reminiscent of the brain using nonvolatile spintronics devices. Antiferromagnet-ferromagnet bilayer-based Hall devices, which show analogue-like spin-orbit torque switching under zero magnetic fields and behave as artificial synapses, are used. An artificial neural network is used to associate memorized patterns from their noisy versions. We develop a network consisting of a field-programmable gate array and 36 spin-orbit torque devices. An effect of learning on associative memory operations is successfully confirmed for several 3 × 3-block patterns. A discussion on the present approach for realizing spintronics-based artificial intelligence is given.

  14. Artificial neural network detects human uncertainty

    NASA Astrophysics Data System (ADS)

    Hramov, Alexander E.; Frolov, Nikita S.; Maksimenko, Vladimir A.; Makarov, Vladimir V.; Koronovskii, Alexey A.; Garcia-Prieto, Juan; Antón-Toro, Luis Fernando; Maestú, Fernando; Pisarchik, Alexander N.

    2018-03-01

    Artificial neural networks (ANNs) are known to be a powerful tool for data analysis. They are used in social science, robotics, and neurophysiology for solving tasks of classification, forecasting, pattern recognition, etc. In neuroscience, ANNs allow the recognition of specific forms of brain activity from multichannel EEG or MEG data. This makes the ANN an efficient computational core for brain-machine systems. However, despite significant achievements of artificial intelligence in the recognition and classification of well-reproducible patterns of neural activity, the use of ANNs for recognition and classification of patterns of neural activity still requires additional attention, especially in ambiguous situations. Accordingly, in this research we demonstrate the efficiency of applying an ANN to the classification of human MEG trials corresponding to the perception of bistable visual stimuli with different degrees of ambiguity. We show that, along with classifying brain states associated with multistable image interpretations, in the case of significant ambiguity the ANN can detect an uncertain state in which the observer doubts the image interpretation. Based on these results, we describe the possible application of ANNs for the detection of bistable brain activity associated with difficulties in the decision-making process.

  15. Black Holes as Brains: Neural Networks with Area Law Entropy

    NASA Astrophysics Data System (ADS)

    Dvali, Gia

    2018-04-01

    Motivated by the potential similarities between the underlying mechanisms of the enhanced memory storage capacity in black holes and in brain networks, we construct an artificial quantum neural network based on gravity-like synaptic connections and a symmetry structure that allows the network to be described in terms of the geometry of a d-dimensional space. We show that the network possesses a critical state in which gapless neurons emerge that appear to inhabit a (d-1)-dimensional surface, with their number given by the surface area. In the excitations of these neurons, the network can store and retrieve an exponentially large number of patterns within an arbitrarily narrow energy gap. The corresponding micro-state entropy of the brain network exhibits an area law. The neural network can be described in terms of a quantum field by identifying the different neurons with the different momentum modes of the field, and the synaptic connections among the neurons with the interactions among the corresponding momentum modes. Such a mapping allows a well-defined sense of geometry to be attributed to an intrinsically non-local system, such as the neural network, and, vice versa, allows the quantum field model to be represented as a neural network.

  16. Recognition of neural brain activity patterns correlated with complex motor activity

    NASA Astrophysics Data System (ADS)

    Kurkin, Semen; Musatov, Vyacheslav Yu.; Runnova, Anastasia E.; Grubov, Vadim V.; Efremova, Tatyana Yu.; Zhuravlev, Maxim O.

    2018-04-01

    In this paper, based on the apparatus of artificial neural networks, a technique for recognizing and classifying patterns corresponding to imaginary movements on electroencephalograms (EEGs) obtained from a group of untrained subjects was developed. The optimal type, topology, training algorithms, and parameters of the neural network were selected for the most accurate and fast recognition and classification of patterns in multichannel EEGs associated with imagined movements. The influence of the number and choice of analyzed channels of a multichannel EEG on the quality of recognition of imaginary movements was also studied, and optimal electrode configurations were obtained. The effect of pre-processing the EEG signals on the accuracy of recognition of imaginary movements is also analyzed.

  17. Detection of high-grade small bowel obstruction on conventional radiography with convolutional neural networks.

    PubMed

    Cheng, Phillip M; Tejura, Tapas K; Tran, Khoa N; Whang, Gilbert

    2018-05-01

    The purpose of this pilot study is to determine whether a deep convolutional neural network can be trained with limited image data to detect high-grade small bowel obstruction patterns on supine abdominal radiographs. Grayscale images from 3663 clinical supine abdominal radiographs were categorized into obstructive and non-obstructive categories independently by three abdominal radiologists, and the majority classification was used as ground truth; 74 images were found to be consistent with small bowel obstruction. Images were rescaled and randomized, with 2210 images constituting the training set (39 with small bowel obstruction) and 1453 images constituting the test set (35 with small bowel obstruction). Weight parameters for the final classification layer of the Inception v3 convolutional neural network, previously trained on the 2014 Large Scale Visual Recognition Challenge dataset, were retrained on the training set. After training, the neural network achieved an AUC of 0.84 on the test set (95% CI 0.78-0.89). At the maximum Youden index (sensitivity + specificity - 1), the sensitivity of the system for small bowel obstruction is 83.8%, with a specificity of 68.1%. The results demonstrate that transfer learning with convolutional neural networks, even with limited training data, may be used to train a detector for high-grade small bowel obstruction gas patterns on supine radiographs.
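
    The operating point quoted above maximizes the Youden index, J = sensitivity + specificity - 1, over thresholds. A sketch with invented scores (not the study's data) shows how both the AUC and the Youden-optimal threshold are computed:

```python
# Invented classifier scores; label 1 = obstruction, 0 = no obstruction.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]

pos = [s for s, l in zip(scores, labels) if l == 1]
neg = [s for s, l in zip(scores, labels) if l == 0]

# AUC as the probability a positive outscores a negative (ties count half).
pairs = [(p, n) for p in pos for n in neg]
auc = sum(1.0 if p > n else 0.5 if p == n else 0.0
          for p, n in pairs) / len(pairs)

# Youden index J = sensitivity + specificity - 1, maximized over thresholds.
def youden(t):
    sens = sum(s >= t for s in pos) / len(pos)
    spec = sum(s < t for s in neg) / len(neg)
    return sens + spec - 1.0

best_t = max(sorted(set(scores)), key=youden)
```

    On these toy scores the pairwise AUC is 11/12, and the threshold 0.4 gives the maximal J of 0.75.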

  18. The super-Turing computational power of plastic recurrent neural networks.

    PubMed

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model in which the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is left unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the same super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  19. Achieving Consistent Near-Optimal Pattern Recognition Accuracy Using Particle Swarm Optimization to Pre-Train Artificial Neural Networks

    ERIC Educational Resources Information Center

    Nikelshpur, Dmitry O.

    2014-01-01

    Similar to mammalian brains, Artificial Neural Networks (ANN) are universal approximators, capable of yielding near-optimal solutions to a wide assortment of problems. ANNs are used in many fields including medicine, internet security, engineering, retail, robotics, warfare, intelligence control, and finance. "ANNs have a tendency to get…

  20. Hysteresis, neural avalanches, and critical behavior near a first-order transition of a spiking neural network

    NASA Astrophysics Data System (ADS)

    Scarpetta, Silvia; Apicella, Ilenia; Minati, Ludovico; de Candia, Antonio

    2018-06-01

    Many experimental results, both in vivo and in vitro, support the idea that the brain cortex operates near a critical point and at the same time works as a reservoir of precise spatiotemporal patterns. However, the mechanism at the basis of these observations is still not clear. In this paper we introduce a model which combines both these features, showing that scale-free avalanches are the signature of a system posed near the spinodal line of a first-order transition, with many spatiotemporal patterns stored as dynamical metastable attractors. Specifically, we studied a network of leaky integrate-and-fire neurons whose connections are the result of the learning of multiple spatiotemporal dynamical patterns, each with a randomly chosen ordering of the neurons. We found that the network shows a first-order transition between a low-spiking-rate disordered state (down), and a high-rate state characterized by the emergence of collective activity and the replay of one of the stored patterns (up). The transition is characterized by hysteresis, or alternation of up and down states, depending on the lifetime of the metastable states. In both cases, critical features and neural avalanches are observed. Notably, critical phenomena occur at the edge of a discontinuous phase transition, as recently observed in a network of glow lamps.
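
    The leaky integrate-and-fire neuron used in the network above can be simulated in a few lines. With constant drive I, the membrane obeys dV/dt = (-V + I)/tau, and the interspike interval has the closed form tau * ln(I / (I - V_th)); all parameter values below are generic illustrations, not the paper's.

```python
import math

# Leaky integrate-and-fire: dV/dt = (-V + I)/tau, spike and reset at threshold.
tau, v_th, v_reset, I = 10.0, 1.0, 0.0, 2.0   # ms and dimensionless voltage
dt, t_end = 0.01, 100.0                        # Euler step and duration (ms)

v, spikes = v_reset, []
t = 0.0
while t < t_end:
    v += dt * (-v + I) / tau      # forward-Euler membrane update
    if v >= v_th:                 # threshold crossing: record spike, reset
        spikes.append(t)
        v = v_reset
    t += dt

isi_theory = tau * math.log(I / (I - v_th))    # about 6.93 ms here
isi_sim = spikes[1] - spikes[0]
```

    A network model like the paper's couples many such units through learned synapses; here a single neuron suffices to check the integrator against the analytic interspike interval.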

  1. Self-organizing neural network models for visual pattern recognition.

    PubMed

    Fukushima, K

    1987-01-01

    Two neural network models for visual pattern recognition are discussed. The first model, called a "neocognitron", is a hierarchical multilayered network which has only afferent synaptic connections. It can acquire the ability to recognize patterns by "learning-without-a-teacher": the repeated presentation of a set of training patterns is sufficient, and no information about the categories of the patterns is necessary. The cells of the highest stage eventually become "gnostic cells", whose response shows the final result of the pattern-recognition of the network. Pattern recognition is performed on the basis of similarity in shape between patterns, and is not affected by deformation, nor by changes in size, nor by shifts in the position of the stimulus pattern. The second model has not only afferent but also efferent synaptic connections, and is endowed with the function of selective attention. The afferent and the efferent signals interact with each other in the hierarchical network: the efferent signals, that is, the signals for selective attention, have a facilitating effect on the afferent signals, and at the same time, the afferent signals gate efferent signal flow. When a complex figure, consisting of two patterns or more, is presented to the model, it is segmented into individual patterns, and each pattern is recognized separately. Even if one of the patterns to which the model is paying selective attention is affected by noise or defects, the model can "recall" the complete pattern from which the noise has been eliminated and the defects corrected.

  2. Neural signal registration and analysis of axons grown in microchannels

    NASA Astrophysics Data System (ADS)

    Pigareva, Y.; Malishev, E.; Gladkov, A.; Kolpakov, V.; Bukatin, A.; Mukhina, I.; Kazantsev, V.; Pimashkin, A.

    2016-08-01

    Registration of neuronal bioelectrical signals remains one of the main physical tools for studying fundamental mechanisms of signal processing in the brain. Neurons generate spiking patterns that propagate through a complex map of neural network connectivity. Extracellular recording of isolated axons grown in microchannels provides amplification of the signal for detailed study of spike propagation. In this study we used neuronal hippocampal cultures grown in microfluidic devices combined with microelectrode arrays to investigate changes in electrical activity during neural network development. We found that spiking activity appears first in the microchannels 5 days in vitro after culture plating, and over the next 2-3 days it appears on the electrodes of the overall neural network. We conclude that this approach provides a convenient method to study neural signal processing and the development of functional structure at the single-cell and network levels of the neuronal culture.

  3. Neural Architecture of Selective Stopping Strategies: Distinct Brain Activity Patterns Are Associated with Attentional Capture But Not with Outright Stopping.

    PubMed

    Sebastian, Alexandra; Rössler, Kora; Wibral, Michael; Mobascher, Arian; Lieb, Klaus; Jung, Patrick; Tüscher, Oliver

    2017-10-04

    In stimulus-selective stop-signal tasks, the salient stop signal needs attentional processing before genuine response inhibition is completed. Differential prefrontal involvement in attentional capture and response inhibition has been linked to the right inferior frontal junction (IFJ) and ventrolateral prefrontal cortex (VLPFC), respectively. Recently, it has been suggested that stimulus-selective stopping may be accomplished by the following different strategies: individuals may selectively inhibit their response only upon detecting a stop signal (independent discriminate then stop strategy) or unselectively whenever detecting a stop or attentional capture signal (stop then discriminate strategy). Alternatively, the discrimination process of the critical signal (stop vs attentional capture signal) may interact with the go process (dependent discriminate then stop strategy). Those different strategies might differentially involve attention- and stopping-related processes that might be implemented by divergent neural networks. This should lead to divergent activation patterns and, if disregarded, interfere with analyses in neuroimaging studies. To clarify this crucial issue, we studied 87 human participants of both sexes during a stimulus-selective stop-signal task and performed strategy-dependent functional magnetic resonance imaging analyses. We found that, regardless of the strategy applied, outright stopping displayed indistinguishable brain activation patterns. However, during attentional capture different strategies resulted in divergent neural activation patterns with variable activation of right IFJ and bilateral VLPFC. In conclusion, the neural network involved in outright stopping is ubiquitous and independent of strategy, while different strategies impact on attention-related processes and underlying neural network usage. 
Strategic differences should therefore be taken into account particularly when studying attention-related processes in stimulus-selective stopping. SIGNIFICANCE STATEMENT Dissociating inhibition from attention has been a major challenge for the cognitive neuroscience of executive functions. Selective stopping tasks have been instrumental in addressing this question. However, recent theoretical, cognitive and behavioral research suggests that different strategies are applied in successful execution of the task. The underlying strategy-dependent neural networks might differ substantially. Here, we show evidence that, regardless of the strategy used, the neural network involved in outright stopping is ubiquitous. However, significant differences can only be found in the attention-related processes underlying those different strategies. Thus, when studying attentional processing of salient stop signals, strategic differences should be considered. In contrast, the neural networks implementing outright stopping seem less or not at all affected by strategic differences. Copyright © 2017 the authors.

  4. Imbalance aware lithography hotspot detection: a deep learning approach

    NASA Astrophysics Data System (ADS)

    Yang, Haoyu; Luo, Luyang; Su, Jing; Lin, Chenxi; Yu, Bei

    2017-03-01

    With the advancement of VLSI technology nodes, light diffraction caused lithographic hotspots have become a serious problem affecting manufacture yield. Lithography hotspot detection at the post-OPC stage is imperative to check potential circuit failures when transferring designed patterns onto silicon wafers. Although conventional lithography hotspot detection methods, such as machine learning, have gained satisfactory performance, with extreme scaling of transistor feature size and more and more complicated layout patterns, conventional methodologies may suffer from performance degradation. For example, manual or ad hoc feature extraction in a machine learning framework may lose important information when predicting potential errors in ultra-large-scale integrated circuit masks. In this paper, we present a deep convolutional neural network (CNN) targeting representative feature learning in lithography hotspot detection. We carefully analyze impact and effectiveness of different CNN hyper-parameters, through which a hotspot-detection-oriented neural network model is established. Because hotspot patterns are always minorities in VLSI mask design, the training data set is highly imbalanced. In this situation, a neural network is no longer reliable, because a trained model with high classification accuracy may still suffer from high false negative results (missing hotspots), which is fatal in hotspot detection problems. To address the imbalance problem, we further apply minority upsampling and random-mirror flipping before training the network. Experimental results show that our proposed neural network model achieves highly comparable or better performance on the ICCAD 2012 contest benchmark compared to state-of-the-art hotspot detectors based on deep or representative machine learning.
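
    The minority-upsampling and mirror-flipping steps can be sketched on toy clips (tiny 0/1 matrices standing in for layout images). Deterministic cycling replaces random sampling here so the result is reproducible; all data are invented.

```python
import itertools

def hflip(img):
    """Mirror a 2-D 0/1 clip left-right; the hotspot label is unchanged."""
    return [row[::-1] for row in img]

def balance(majority, minority):
    """Augment the minority class with flips, then upsample by cycling."""
    augmented = minority + [hflip(img) for img in minority]
    cycled = itertools.cycle(augmented)
    upsampled = [next(cycled) for _ in range(len(majority))]
    return majority, upsampled

non_hotspots = [[[0, 0], [0, 0]] for _ in range(8)]      # majority class
hotspots = [[[1, 0], [0, 1]], [[1, 1], [0, 0]]]          # minority class
maj, minr = balance(non_hotspots, hotspots)
```

    After balancing, both classes contribute equally many samples per epoch, which is the property the paper relies on to keep false negatives down.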

  5. The research of "blind" spot in the LVQ network

    NASA Astrophysics Data System (ADS)

    Guo, Zhanjie; Nan, Shupo; Wang, Xiaoli

    2017-04-01

    Nowadays, competitive neural networks are widely used in pattern recognition, classification, and other areas, and they show great advantages over traditional clustering methods. But competitive neural networks are still inadequate in many respects and need further improvement. Based on the learning vector quantization (LVQ) network proposed by Kohonen [1], this paper addresses the large training error that arises when there are "blind" spots in a network, through the introduction of threshold-value learning rules, and implements the result in Matlab.
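
    The underlying LVQ learning rule moves the winning prototype toward a training sample when their labels agree and away when they disagree. The sketch below implements plain LVQ1 on invented 2-D data; the paper's threshold-value rule for handling "blind" spots is not reproduced.

```python
def lvq1_train(samples, prototypes, lr=0.3, epochs=20):
    """samples: (x, label) pairs; prototypes: [vector, label], edited in place."""
    for _ in range(epochs):
        for x, label in samples:
            # Winner: nearest prototype by squared Euclidean distance.
            w = min(prototypes,
                    key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
            sign = 1.0 if w[1] == label else -1.0   # attract or repel
            w[0] = [a + sign * lr * (b - a) for a, b in zip(w[0], x)]
    return prototypes

def classify(x, prototypes):
    return min(prototypes,
               key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

# Two well-separated classes; prototypes start at one sample of each class.
samples = [([0.0, 0.0], "A"), ([0.2, 0.1], "A"),
           ([1.0, 1.0], "B"), ([0.9, 1.1], "B")]
protos = [[[0.0, 0.0], "A"], [[1.0, 1.0], "B"]]
lvq1_train(samples, protos)
```

    A "blind" spot arises when some prototype never wins for any sample; the threshold rule discussed in the paper is one way to pull such prototypes back into use.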

  6. Spike frequency adaptation is a possible mechanism for control of attractor preference in auto-associative neural networks

    NASA Astrophysics Data System (ADS)

    Roach, James; Sander, Leonard; Zochowski, Michal

    Auto-associative memory is the ability to retrieve a pattern from a small fraction of the pattern and is an important function of neural networks. Within this context, memories that are stored within the synaptic strengths of networks act as dynamical attractors for network firing patterns. In networks with many encoded memories, some attractors will be stronger than others. This presents the problem of how networks switch between attractors depending on the situation. We suggest that regulation of neuronal spike-frequency adaptation (SFA) provides a universal mechanism for network-wide attractor selectivity. Here we demonstrate in a Hopfield-type attractor network that neurons with minimal SFA will reliably activate in the pattern corresponding to a local attractor and that a moderate increase in SFA leads the network to converge to the strongest attractor state. Furthermore, we show that on long time scales SFA allows for temporal sequences of activation to emerge. Finally, using a model of cholinergic modulation within the cortex we argue that dynamic regulation of attractor preference by SFA could be critical for the role of acetylcholine in attention or for arousal states in general. This work was supported by: NSF Graduate Research Fellowship Program under Grant No. DGE 1256260 (JPR), NSF CMMI 1029388 (MRZ) and NSF PoLS 1058034 (MRZ & LMS).

  7. Enhanced storage capacity with errors in scale-free Hopfield neural networks: An analytical study.

    PubMed

    Kim, Do-Hyun; Park, Jinha; Kahng, Byungnam

    2017-01-01

    The Hopfield model is a pioneering neural network model with associative memory retrieval. The analytical solution of the model in the mean-field limit revealed that memories can be retrieved without any error up to a finite storage capacity of O(N), where N is the system size. Beyond the threshold, they are completely lost. Since the introduction of the Hopfield model, the theory of neural networks has been further developed toward realistic neural networks using analog neurons, spiking neurons, etc. Nevertheless, those advances are based on fully connected networks, which are inconsistent with the recent experimental discovery that the number of connections of each neuron seems to be heterogeneous, following a heavy-tailed distribution. Motivated by this observation, we consider the Hopfield model on scale-free networks and obtain a different pattern of associative memory retrieval from that obtained on the fully connected network: the storage capacity becomes tremendously enhanced but with some error in the memory retrieval, which appears as the heterogeneity of the connections is increased. Moreover, the error rates are also obtained on several real neural networks and are indeed similar to those on scale-free model networks.
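    For reference, the classical fully connected Hopfield model discussed above amounts to Hebbian storage plus sign-threshold retrieval. The toy Python below is a generic textbook sketch, not the paper's code; the scale-free variant studied in the record would simply zero the weights of absent edges:

```python
def hebbian_weights(patterns):
    """Hebbian storage: W[i][j] = (1/N) * sum over patterns of xi_i * xi_j,
    with zero self-coupling (W[i][i] = 0). Patterns are lists of +/-1."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for xi in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += xi[i] * xi[j] / n
    return W

def retrieve(W, state, max_steps=20):
    """Synchronous sign-threshold updates until a fixed point is reached."""
    for _ in range(max_steps):
        new = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
               for row in W]
        if new == state:
            break
        state = new
    return state
```

    With a single stored pattern, flipping one bit of the cue still retrieves the pattern exactly, which is the error-free regime below capacity that the abstract describes.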

  8. Local synchronization of chaotic neural networks with sampled-data and saturating actuators.

    PubMed

    Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian

    2014-12-01

    This paper investigates the problem of local synchronization of chaotic neural networks with sampled-data and actuator saturation. A new time-dependent Lyapunov functional is proposed for the synchronization error systems. The advantage of the constructed Lyapunov functional lies in the fact that it is positive definite at sampling times but not necessarily between sampling times, and makes full use of the available information about the actual sampling pattern. A local stability condition of the synchronization error systems is derived, based on which a sampled-data controller with respect to the actuator saturation is designed to ensure that the master neural networks and slave neural networks are locally asymptotically synchronous. Two optimization problems are provided to compute the desired sampled-data controller with the aim of enlarging the set of admissible initial conditions or the admissible sampling upper bound ensuring the local synchronization of the considered chaotic neural networks. A numerical example is used to demonstrate the effectiveness of the proposed design technique.

  9. 18F-FDG PET brain images as features for Alzheimer classification

    NASA Astrophysics Data System (ADS)

    Azmi, M. H.; Saripan, M. I.; Nordin, A. J.; Ahmad Saad, F. F.; Abdul Aziz, S. A.; Wan Adnan, W. A.

    2017-08-01

    2-Deoxy-2-[fluorine-18] fluoro-D-glucose (18F-FDG) Positron Emission Tomography (PET) imaging offers meaningful information for the diagnosis of various types of diseases. In Alzheimer's disease (AD), glucose hypometabolism, observed as low-intensity voxels in the PET image, may relate to the onset of the disease. Early detection of AD is essential because the resultant brain damage is irreversible. Several statistical analyses and machine learning algorithms have been proposed to investigate the rate and the pattern of the hypometabolism. This study pursues the same aim, with further investigation of several hypometabolism patterns. Some pre-processing steps were implemented to standardize the data in order to minimize the effect of resolution and anatomical differences. The features used are the mean voxel intensities within the AD pattern mask, which is derived from several z-score and FDR threshold values. The global mean voxel (GMV) and slice-based mean voxel (SbMV) intensities were observed and used as input to the neural network. Several neural network architectures were tested and compared to the nearest neighbour method. The highest accuracy equals 0.9 and was recorded at z-score ≤ -1.3 with a 1-node neural network architecture (sensitivity = 0.81 and specificity = 0.95) and at z-score ≤ -0.7 with a 10-node neural network (sensitivity = 0.83 and specificity = 0.94).
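    The feature construction described in this record, a mean voxel intensity inside a z-score-thresholded hypometabolism mask, can be sketched roughly as follows. This is a toy Python illustration on a flat list of voxels with invented control-population statistics, not the study's pipeline:

```python
def zscore_mask(voxels, ctrl_means, ctrl_stds, z_thresh=-1.3):
    """Flag voxels whose z-score relative to a control population falls at
    or below the threshold, i.e. candidate hypometabolic voxels."""
    return [(v - m) / s <= z_thresh
            for v, m, s in zip(voxels, ctrl_means, ctrl_stds)]

def mean_in_mask(voxels, mask):
    """GMV-style feature: mean intensity of the voxels inside the mask."""
    inside = [v for v, keep in zip(voxels, mask) if keep]
    return sum(inside) / len(inside) if inside else 0.0
```

    The study's SbMV variant would apply the same mean per image slice rather than globally.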

  10. Pattern Learning, Damage and Repair within Biological Neural Networks

    NASA Astrophysics Data System (ADS)

    Siu, Theodore; Fitzgerald O'Neill, Kate; Shinbrot, Troy

    2015-03-01

    Traumatic brain injury (TBI) causes damage to neural networks, potentially leading to disability or even death. Nearly one in ten of these patients die, and most of the remainder suffer from symptoms ranging from headaches and nausea to convulsions and paralysis. In vitro studies to develop treatments for TBI have limited in vivo applicability, and in vitro therapies have even proven to worsen the outcome of TBI patients. We propose that this disconnect between in vitro and in vivo outcomes may be associated with the fact that in vitro tests assess indirect measures of neuronal health but do not investigate the actual function of neuronal networks. Therefore, in this talk we examine both in vitro and in silico neuronal networks that actually perform a function: pattern identification. We allow the networks to execute genetic, Hebbian learning, and additionally we examine the effects of damage and subsequent repair within our networks. We show that the length of repaired connections affects the overall pattern-learning performance of the network, and we propose therapies that may improve function following TBI in clinical settings.

  11. The Emerging Role of Epigenetics in Stroke

    PubMed Central

    Qureshi, Irfan A.; Mehler, Mark F.

    2013-01-01

    The transplantation of exogenous stem cells and the activation of endogenous neural stem and progenitor cells (NSPCs) are promising treatments for stroke. These cells can modulate intrinsic responses to ischemic injury and may even integrate directly into damaged neural networks. However, the neuroprotective and neural regenerative effects that can be mediated by these cells are limited and may even be deleterious. Epigenetic reprogramming represents a novel strategy for enhancing the intrinsic potential of the brain to protect and repair itself by modulating pathologic neural gene expression and promoting the recapitulation of seminal neural developmental processes. In fact, recent evidence suggests that emerging epigenetic mechanisms are critical for orchestrating nearly every aspect of neural development and homeostasis, including brain patterning, neural stem cell maintenance, neurogenesis and gliogenesis, neural subtype specification, and synaptic and neural network connectivity and plasticity. In this review, we survey the therapeutic potential of exogenous stem cells and endogenous NSPCs and highlight innovative technological approaches for designing, developing, and delivering epigenetic therapies for targeted reprogramming of endogenous pools of NSPCs, neural cells at risk, and dysfunctional neural networks to rescue and restore neurologic function in the ischemic brain. PMID:21403016

  12. Reference ability neural networks and behavioral performance across the adult life span.

    PubMed

    Habeck, Christian; Eich, Teal; Razlighi, Ray; Gazes, Yunglin; Stern, Yaakov

    2018-05-15

    To better understand the impact of aging, along with other demographic and brain-health variables, on the neural networks that support different aspects of cognitive performance, we applied a brute-force search technique based on Principal Components Analysis to derive 4 corresponding spatial covariance patterns (termed Reference Ability Neural Networks, RANNs) from a large sample of participants across the age range. 255 clinically healthy, community-dwelling adults, aged 20-77, underwent fMRI while performing 12 tasks, 3 tasks for each of the following cognitive reference abilities: Episodic Memory, Reasoning, Perceptual Speed, and Vocabulary. The derived RANNs (1) showed selective activation to their specific cognitive domain and (2) correlated with behavioral performance. Quasi out-of-sample replication with Monte-Carlo 5-fold cross-validation was built into our approach, and all patterns indicated their corresponding reference ability and predicted performance in held-out data to a degree significantly greater than chance level. RANN-pattern expressions for Episodic Memory, Reasoning, and Vocabulary were associated selectively with age, while the pattern for Perceptual Speed showed no such age-related influences. For each participant we also looked at residual activity unaccounted for by the RANN pattern derived for the cognitive reference ability. Higher residual activity was associated with poorer brain-structural health and older age but, apart from Vocabulary, not with cognitive performance, indicating that older participants with worse brain-structural health might recruit alternative neural resources to maintain performance levels. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Artificial Neural Network Application in the Diagnosis of Disease Conditions with Liver Ultrasound Images

    PubMed Central

    Lele, Ramachandra Dattatraya; Joshi, Mukund; Chowdhary, Abhay

    2014-01-01

    The preliminary study presented within this paper shows a comparative study of various texture features extracted from liver ultrasonic images by employing a Multilayer Perceptron (MLP), a type of artificial neural network, to study the presence of disease conditions. An ultrasound (US) image shows echo-texture patterns, which define the organ's characteristics. Ultrasound images of liver disease conditions such as "fatty liver," "cirrhosis," and "hepatomegaly" produce distinctive echo patterns. However, various ultrasound imaging artifacts and speckle noise make these echo-texture patterns difficult to identify and often hard to distinguish visually. Here, based on the features extracted from the ultrasonic images, we employed an artificial neural network for the diagnosis of disease conditions in the liver and for finding the best classifier that distinguishes between abnormal and normal conditions of the liver. Comparison of the overall performance of all the feature classifiers concluded that the "mixed feature set" is the best feature set. It showed an excellent rate of accuracy for the training data set. The gray level run length matrix (GLRLM) feature showed better results when the network was tested against unknown data. PMID:25332717

  14. SOM neural network fault diagnosis method of polymerization kettle equipment optimized by improved PSO algorithm.

    PubMed

    Wang, Jie-sheng; Li, Shu-xia; Gao, Jie

    2014-01-01

    To meet the real-time fault diagnosis and optimized monitoring requirements of the polymerization kettle in the polyvinyl chloride (PVC) resin production process, a fault diagnosis strategy based on the self-organizing map (SOM) neural network is proposed. Firstly, a mapping between the polymerization process data and the fault pattern is established by analyzing the production technology of the polymerization kettle equipment. The particle swarm optimization (PSO) algorithm with a new dynamical adjustment method for the inertia weights is adopted to optimize the structural parameters of the SOM neural network. The fault pattern classification of the polymerization kettle equipment then realizes the nonlinear mapping from a given symptom set to the fault set. Finally, simulation experiments of fault diagnosis are conducted using industrial on-site historical data of the polymerization kettle, and the simulation results show that the proposed PSO-SOM fault diagnosis strategy is effective.
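    The PSO component can be illustrated with the standard velocity update. The record's contribution is a new dynamical inertia-weight adjustment whose details are not given here, so this sketch uses the common linearly decreasing schedule as a stand-in, and fixes the usually random coefficients r1 and r2 so the example is deterministic:

```python
def pso_velocity(v, x, pbest, gbest, t, t_max,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, r1=0.5, r2=0.5):
    """One PSO velocity update with a linearly decreasing inertia weight.
    r1 and r2 are normally drawn uniformly from [0, 1] on each call; they
    are fixed here only to keep the sketch deterministic."""
    w = w_max - (w_max - w_min) * t / t_max  # inertia decays from w_max to w_min
    return [w * v_d + c1 * r1 * (p_d - x_d) + c2 * r2 * (g_d - x_d)
            for v_d, x_d, p_d, g_d in zip(v, x, pbest, gbest)]
```

    In the paper's setting the particle position x would encode SOM structural parameters, and fitness would be diagnostic accuracy on the polymerization data.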

  15. Optimized color decomposition of localized whole slide images and convolutional neural network for intermediate prostate cancer classification

    NASA Astrophysics Data System (ADS)

    Zhou, Naiyun; Gao, Yi

    2017-03-01

    This paper presents a fully automatic approach to grade intermediate prostate malignancy with hematoxylin and eosin-stained whole slide images. Deep learning architectures such as convolutional neural networks have been utilized in the domain of histopathology for automated carcinoma detection and classification. However, little work has shown their power in discriminating intermediate Gleason patterns, due to the sporadic distribution of prostate glands on stained surgical section samples. We propose optimized hematoxylin decomposition on localized images, followed by a convolutional neural network, to classify Gleason patterns 3+4 and 4+3 without handcrafted features or gland segmentation. Crucial gland morphology and the structural relationships of nuclei are extracted twice in different color spaces by the multi-scale strategy to mimic pathologists' visual examination. Our novel classification scheme, evaluated on 169 whole slide images, yielded a 70.41% accuracy and a corresponding area under the receiver operating characteristic curve of 0.7247.

  16. Fuzzy logic and neural networks in artificial intelligence and pattern recognition

    NASA Astrophysics Data System (ADS)

    Sanchez, Elie

    1991-10-01

    With the use of fuzzy logic techniques, neural computing can be integrated into symbolic reasoning to solve complex real-world problems. In fact, artificial neural networks, expert systems, and fuzzy logic systems, in the context of approximate reasoning, share common features and techniques. A model of a Fuzzy Connectionist Expert System is introduced, in which an artificial neural network is designed to construct the knowledge base of an expert system from training examples (this model can also be used for specification of rules in fuzzy logic control). Two types of weights are associated with the synaptic connections in an AND-OR structure: primary linguistic weights, interpreted as labels of fuzzy sets, and secondary numerical weights. Cell activation is computed through min-max fuzzy equations of the weights. Learning consists in finding the (numerical) weights and the network topology. This feedforward network is described and first illustrated in a biomedical application (medical diagnosis assistance from inflammatory-syndromes/proteins profiles). Then, it is shown how this methodology can be utilized for handwritten pattern recognition (characters play the role of diagnoses): in a fuzzy neuron describing a number, for example, the linguistic weights represent fuzzy sets on cross-detecting lines and the numerical weights reflect the importance (or weakness) of connections between cross-detecting lines and characters.

  17. The neural basis of visual word form processing: a multivariate investigation.

    PubMed

    Nestor, Adrian; Behrmann, Marlene; Plaut, David C

    2013-07-01

    Current research on the neurobiological bases of reading points to the privileged role of a ventral cortical network in visual word processing. However, the properties of this network and, in particular, its selectivity for orthographic stimuli such as words and pseudowords remain topics of significant debate. Here, we approached this issue from a novel perspective by applying pattern-based analyses to functional magnetic resonance imaging data. Specifically, we examined whether, where, and how orthographic stimuli elicit distinct patterns of activation in the human cortex. First, at the category level, multivariate mapping found extensive sensitivity throughout the ventral cortex for words relative to false-font strings. Secondly, at the identity level, multi-voxel pattern classification provided direct evidence that different pseudowords are encoded by distinct neural patterns. Thirdly, a comparison of pseudoword and face identification revealed that both stimulus types exploit common neural resources within the ventral cortical network. These results provide novel evidence regarding the involvement of the left ventral cortex in orthographic stimulus processing and shed light on its selectivity and discriminability profile. In particular, our findings support the existence of sublexical orthographic representations within the left ventral cortex while arguing for the continuity of reading with other visual recognition skills.

  18. Memristive neural network for on-line learning and tracking with brain-inspired spike timing dependent plasticity.

    PubMed

    Pedretti, G; Milo, V; Ambrogio, S; Carboni, R; Bianchi, S; Calderoni, A; Ramaswamy, N; Spinelli, A S; Ielmini, D

    2017-07-13

    Brain-inspired computation can revolutionize information technology by introducing machines capable of recognizing patterns (images, speech, video) and interacting with the external world in a cognitive, humanlike way. Achieving this goal requires first gaining a detailed understanding of the brain's operation, and second identifying a scalable microelectronic technology capable of reproducing some of the inherent functions of the human brain, such as the high synaptic connectivity (~10^4) and the peculiar time-dependent synaptic plasticity. Here we demonstrate unsupervised learning and tracking in a spiking neural network with memristive synapses, where synaptic weights are updated via brain-inspired spike-timing-dependent plasticity (STDP). The synaptic conductance is updated by the local time-dependent superposition of pre- and post-synaptic spikes within a hybrid one-transistor/one-resistor (1T1R) memristive synapse. Only two synaptic states, namely the low-resistance state (LRS) and the high-resistance state (HRS), are sufficient to learn and recognize patterns. Unsupervised learning of a static pattern and tracking of a dynamic pattern of up to 4 × 4 pixels are demonstrated, paving the way for intelligent hardware technology with up-scaled memristive neural networks.
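    The two-state (LRS/HRS) STDP rule described above can be caricatured as a pair-based update; the window length and state names below are illustrative, and the real device transition depends on the analog overlap of pre- and post-synaptic pulses within the 1T1R synapse:

```python
def stdp_update(state, t_pre, t_post, window=10.0):
    """Pair-based STDP for a binary memristive synapse: a presynaptic
    spike shortly before the postsynaptic one potentiates the device to
    its low-resistance state (LRS); the reverse order depresses it to the
    high-resistance state (HRS); pairs outside the window leave it alone."""
    dt = t_post - t_pre
    if 0 < dt <= window:
        return "LRS"   # causal pairing -> potentiation
    if -window <= dt < 0:
        return "HRS"   # anti-causal pairing -> depression
    return state       # no pairing within the window: unchanged
```
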

  19. A Comparison Study of Rule Space Method and Neural Network Model for Classifying Individuals and an Application.

    ERIC Educational Resources Information Center

    Hayashi, Atsuhiro

    Both the Rule Space Method (RSM) and the Neural Network Model (NNM) are techniques of statistical pattern recognition and classification approaches developed for applications from different fields. RSM was developed in the domain of educational statistics. It started from the use of an incidence matrix Q that characterizes the underlying cognitive…

  20. Hierarchical singleton-type recurrent neural fuzzy networks for noisy speech recognition.

    PubMed

    Juang, Chia-Feng; Chiou, Chyi-Tian; Lai, Chun-Lung

    2007-05-01

    This paper proposes noisy speech recognition using hierarchical singleton-type recurrent neural fuzzy networks (HSRNFNs). The proposed HSRNFN is a hierarchical connection of two singleton-type recurrent neural fuzzy networks (SRNFNs), where one is used for noise filtering and the other for recognition. The SRNFN is constructed by recurrent fuzzy if-then rules with fuzzy singletons in the consequents, and its recurrent properties make it suitable for processing speech patterns with temporal characteristics. In n-word recognition, n SRNFNs are created to model the n words, where each SRNFN receives the current frame feature and predicts the next one for its modeled word. The prediction error of each SRNFN is used as the recognition criterion. In filtering, one SRNFN is created, and each SRNFN recognizer is connected to the same SRNFN filter, which filters noisy speech patterns in the feature domain before feeding them to the SRNFN recognizer. Experiments with Mandarin word recognition under different types of noise are performed. Other recognizers, including multilayer perceptrons (MLPs), time-delay neural networks (TDNNs), and hidden Markov models (HMMs), are also tested and compared. These experiments and comparisons demonstrate good results with the HSRNFN for noisy speech recognition tasks.

  1. An intelligent sales forecasting system through integration of artificial neural networks and fuzzy neural networks with fuzzy weight elimination.

    PubMed

    Kuo, R J; Wu, P; Wang, C P

    2002-09-01

    Sales forecasting plays a very prominent role in business strategy. Numerous investigations addressing this problem have generally employed statistical methods, such as regression or autoregressive and moving average (ARMA) models. However, sales forecasting is very complicated owing to the influence of internal and external environments. Recently, artificial neural networks (ANNs) have also been applied in sales forecasting, given their promising performance in the areas of control and pattern recognition. However, further improvement is still necessary, since unique circumstances, e.g. promotions, cause sudden changes in the sales pattern. Thus, this study utilizes a proposed fuzzy neural network (FNN), which is able to eliminate unimportant weights, to learn fuzzy IF-THEN rules obtained from marketing experts with respect to promotion. The result from the FNN is further integrated with the time series data through an ANN. Both the simulated and real-world problem results show that the FNN with weight elimination can achieve lower training error than the regular FNN. Moreover, the real-world results also indicate that the proposed estimation system outperforms the conventional statistical method and a single ANN in accuracy.

  2. Abstracts for the symposium on the Application of neural networks to the earth sciences

    USGS Publications Warehouse

    Singer, Donald A.

    2002-01-01

    Artificial neural networks are a group of mathematical methods that attempt to mimic some of the processes in the human mind. Although the foundations for these ideas were laid as early as 1943 (McCulloch and Pitts, 1943), it wasn't until 1986 (Rumelhart and McClelland, 1986; Masters, 1995) that applications to practical problems became possible. It is the acknowledged superiority of the human mind at recognizing patterns that the artificial neural networks are trying to imitate with their interconnected neurons. Interconnections used in the methods that have been developed allow robust learning. Capabilities of neural networks fall into three kinds of applications: (1) function fitting or prediction, (2) noise reduction or pattern recognition, and (3) classification or placing into types. Because of these capabilities and the powerful abilities of artificial neural networks, there have been increasing applications of these methods in the earth sciences. The abstracts in this document represent excellent samples of the range of applications. Talks associated with the abstracts were presented at the Symposium on the Application of Neural Networks to the Earth Sciences: Seventh International Symposium on Mineral Exploration (ISME–02), held August 20–21, 2002, at NASA Moffett Field, Mountain View, California. This symposium was sponsored by the Mining and Materials Processing Institute of Japan (MMIJ), the U.S. Geological Survey, the Circum-Pacific Council, and NASA. The ISME symposia have been held every two years in order to bring together scientists actively working on diverse quantitative methods applied to the earth sciences. Although the title, International Symposium on Mineral Exploration, suggests exclusive focus on mineral exploration, interests and presentations have always been wide-ranging—abstracts presented here are no exception.

  3. Improved head direction command classification using an optimised Bayesian neural network.

    PubMed

    Nguyen, Son T; Nguyen, Hung T; Taylor, Philip B; Middleton, James

    2006-01-01

    Assistive technologies have recently emerged to improve the quality of life of severely disabled people by enhancing their independence in daily activities. Since many of those individuals have limited or non-existent control from the neck downward, alternative hands-free input modalities have become very important for these people to access assistive devices. In hands-free control, head movement has proved to be a very effective user interface, as it can provide a comfortable, reliable, and natural way to access the device. Recently, neural networks have been shown to be useful not only for real-time pattern recognition but also for creating user-adaptive models. Since multi-layer perceptron neural networks trained using standard back-propagation may generalise poorly, the Bayesian technique has been proposed to improve the generalisation and robustness of these networks. This paper describes the use of Bayesian neural networks in developing a hands-free wheelchair control system. The experimental results show that, with the optimised architecture, Bayesian neural network classifiers can detect head commands of wheelchair users accurately, irrespective of their level of injury.

  4. Exact computation of the maximum-entropy potential of spiking neural-network models.

    PubMed

    Cofré, R; Cessac, B

    2014-05-01

    Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The maximum-entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuromimetic models) provide a probabilistic mapping between the stimulus, network architecture, and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuromimetic and maximum-entropy models.

  5. High solar activity predictions through an artificial neural network

    NASA Astrophysics Data System (ADS)

    Orozco-Del-Castillo, M. G.; Ortiz-Alemán, J. C.; Couder-Castañeda, C.; Hernández-Gómez, J. J.; Solís-Santomé, A.

    The effects of high-energy particles coming from the Sun on human health, as well as on the integrity of outer-space electronics, make the prediction of periods of high solar activity (HSA) a task of significant importance. Since periodicities in solar indexes have been identified, long-term predictions can be achieved. In this paper, we present a method based on an artificial neural network to find a pattern in harmonics that represent such periodicities. We used data from 1973 to 2010 to train the neural network, and different historical data for its validation. We also used the neural network, along with a statistical analysis of its performance on known data, to predict periods of HSA with different confidence intervals according to the three-sigma rule, associated with solar cycles 24-26, which we found to occur before 2040.

  6. Multi-modality image fusion based on enhanced fuzzy radial basis function neural networks.

    PubMed

    Chao, Zhen; Kim, Dohyeon; Kim, Hee-Joung

    2018-04-01

    In clinical applications, single-modality images do not provide sufficient diagnostic information. Therefore, it is necessary to combine the advantages or complementarities of different modalities of images. Recently, neural network techniques have been applied to medical image fusion by many researchers, but there are still many deficiencies. In this study, we propose a novel fusion method to combine multi-modality medical images based on the enhanced fuzzy radial basis function neural network (Fuzzy-RBFNN), which includes five layers: input, fuzzy partition, front combination, inference, and output. Moreover, we propose a hybrid of the gravitational search algorithm (GSA) and error back-propagation algorithm (EBPA) to train the network, i.e., to update its parameters. Two different patterns of images are used as inputs to the neural network, and the output is the fused image. A comparison with conventional fusion methods and another neural network method, through subjective observation and objective evaluation indexes, reveals that the proposed method effectively synthesizes the information of the input images and achieves better results. Meanwhile, we also trained the network using the EBPA and GSA individually. The results reveal that the hybrid EBPA-GSA not only outperformed both the EBPA and GSA but also trained the neural network more accurately according to the same evaluation indexes. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  7. Multiple neural network approaches to clinical expert systems

    NASA Astrophysics Data System (ADS)

    Stubbs, Derek F.

    1990-08-01

    We briefly review the concept of computer-aided medical diagnosis and more extensively review the existing literature on neural network applications in the field. Neural networks can function as simple expert systems for diagnosis or prognosis. Using a public database we develop a neural network for the diagnosis of a major presenting symptom while discussing the development process and possible approaches. Biomedicine is an incredibly diverse and multidisciplinary field, and it is not surprising that neural networks are finding more and more applications in this highly non-linear field. I want to concentrate on neural networks as medical expert systems for clinical diagnosis or prognosis. Expert systems started out as a set of computerized "if-then" rules. Everything was reduced to Boolean logic and the promised land of computer experts was said to be in sight. It never came. Why? First, the computer code explodes as the number of "ifs" increases, because all the "ifs" have to interact. Second, experts are not very good at reducing expertise to language; it turns out that experts recognize patterns and rely on non-verbal, intuitive decision processes. Third, learning by example rather than learning by rule is the way natural brains work, and making computers learn by rule is hideously labor intensive. Neural networks can learn from example.

  8. An intelligent control system for failure detection and controller reconfiguration

    NASA Technical Reports Server (NTRS)

    Biswas, Saroj K.

    1994-01-01

    We present an architecture of an intelligent restructurable control system to automatically detect failure of system components, assess its impact on system performance and safety, and reconfigure the controller for performance recovery. Fault detection is based on neural network associative memories and pattern classifiers, and is implemented using a multilayer feedforward network. Details of the fault detection network along with simulation results on health monitoring of a dc motor have been presented. Conceptual developments for fault assessment using an expert system and controller reconfiguration using a neural network are outlined.

  9. Automatic identification of species with neural networks.

    PubMed

    Hernández-Serna, Andrés; Jiménez-Segura, Luz Fernanda

    2014-01-01

    A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% true positive identifications for fish, 92.87% for plants, and 93.25% for butterflies. Our results highlight how neural networks can complement species identification.

  10. Clique-Based Neural Associative Memories with Local Coding and Precoding.

    PubMed

    Mofrad, Asieh Abolpour; Parker, Matthew G; Ferdosi, Zahra; Tadayon, Mohammad H

    2016-08-01

    Techniques from coding theory can improve the efficiency of neuro-inspired and neural associative memories by imposing structural constraints on the network. In this letter, the approach is to embed coding techniques into neural associative memory in order to increase its performance in the presence of partial erasures. The motivation comes from recent work by Gripon, Berrou, and coauthors, which revisited Willshaw networks and presented a neural network with interacting neurons that are partitioned into clusters. The model introduced stores patterns as small cliques that can be retrieved in spite of partial erasures. We focus on improving the success of retrieval by applying two techniques: performing a local coding in each cluster and then applying a precoding step. We use a slightly different decoding scheme, which is appropriate for partial erasures and converges faster. Although the ideas of local coding and precoding are not new, the way we apply them is different. Simulations show an increase in the pattern retrieval capacity for both techniques. Moreover, we use self-dual additive codes over the field [Formula: see text], which have very interesting properties and a simple graph representation.
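
    The clique-based storage scheme this letter builds on (one active unit, or "fanal", per cluster, with all chosen units pairwise connected) can be sketched as follows. The function names, cluster/fanal sizes, and the simple winner-take-all retrieval rule are illustrative assumptions, not the authors' exact decoder:

```python
import numpy as np

def store_cliques(patterns, clusters, fanals):
    """Gripon-Berrou-style storage: each pattern selects one fanal per
    cluster, and all selected units are pairwise connected (a clique).
    W is a binary connection matrix over clusters * fanals units."""
    n = clusters * fanals
    W = np.zeros((n, n), dtype=bool)
    for p in patterns:                      # p: one fanal index per cluster
        units = [c * fanals + p[c] for c in range(clusters)]
        for i in units:
            for j in units:
                if i != j:
                    W[i, j] = True
    return W

def retrieve(W, partial, clusters, fanals):
    """Recover erased clusters (entries set to -1) by choosing, in each
    erased cluster, the fanal with the most connections to known units."""
    known = [c * fanals + partial[c] for c in range(clusters) if partial[c] >= 0]
    out = list(partial)
    for c in range(clusters):
        if out[c] < 0:
            scores = [W[c * fanals + f, known].sum() for f in range(fanals)]
            out[c] = int(np.argmax(scores))
    return out
```

    With few stored patterns, the clique structure makes retrieval from partial erasures reliable; the letter's local coding and precoding steps further raise this capacity.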

  11. Imbalance aware lithography hotspot detection: a deep learning approach

    NASA Astrophysics Data System (ADS)

    Yang, Haoyu; Luo, Luyang; Su, Jing; Lin, Chenxi; Yu, Bei

    2017-07-01

    With the advancement of very large scale integrated circuits (VLSI) technology nodes, lithographic hotspots become a serious problem that affects manufacturing yield. Lithography hotspot detection at the post-OPC stage is imperative to check potential circuit failures when transferring designed patterns onto silicon wafers. Although conventional lithography hotspot detection methods, such as machine learning, have gained satisfactory performance, with the extreme scaling of transistor feature size and layout patterns growing in complexity, conventional methodologies may suffer from performance degradation. For example, manual or ad hoc feature extraction in a machine learning framework may lose important information when predicting potential errors in ultra-large-scale integrated circuit masks. We present a deep convolutional neural network (CNN) that targets representative feature learning in lithography hotspot detection. We carefully analyze the impact and effectiveness of different CNN hyperparameters, through which a hotspot-detection-oriented neural network model is established. Because hotspot patterns are always in the minority in VLSI mask design, the training dataset is highly imbalanced. In this situation, a neural network is no longer reliable, because a trained model with high classification accuracy may still suffer from a high number of false negative results (missing hotspots), which is fatal in hotspot detection problems. To address the imbalance problem, we further apply hotspot upsampling and random-mirror flipping before training the network. Experimental results show that our proposed neural network model achieves comparable or better performance on the ICCAD 2012 contest benchmark compared to state-of-the-art hotspot detectors based on deep or representative machine learning.
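
    The imbalance-handling step described here, upsampling the minority hotspot class and augmenting the duplicates with random mirror flips, can be sketched as follows. The array shapes, the 1:1 target ratio, and the function name are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def balance_with_mirror_flips(patterns, labels, rng=None):
    """Upsample the minority (hotspot, label 1) class to match the majority
    class, applying a random horizontal or vertical mirror flip to each
    duplicate. patterns: (N, H, W) layout clips; labels: (N,) array of 0/1.
    Assumes hotspots are the minority class. Illustrative sketch only."""
    rng = rng or np.random.default_rng(0)
    minority = patterns[labels == 1]
    n_extra = int((labels == 0).sum() - (labels == 1).sum())
    # Draw duplicates from the minority class with replacement.
    picks = rng.integers(0, len(minority), size=n_extra)
    extras = minority[picks].copy()
    # Randomly mirror each duplicate along the vertical or horizontal axis.
    for i in range(n_extra):
        extras[i] = np.flip(extras[i], axis=int(rng.integers(0, 2)))
    new_patterns = np.concatenate([patterns, extras])
    new_labels = np.concatenate([labels, np.ones(n_extra, dtype=labels.dtype)])
    return new_patterns, new_labels
```

    Mirror flips are a natural augmentation for layout clips because a flipped pattern has the same lithographic printability characteristics while presenting the network with a distinct training example.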

  12. Computational Models and Emergent Properties of Respiratory Neural Networks

    PubMed Central

    Lindsey, Bruce G.; Rybak, Ilya A.; Smith, Jeffrey C.

    2012-01-01

    Computational models of the neural control system for breathing in mammals provide a theoretical and computational framework bringing together experimental data obtained from different animal preparations under various experimental conditions. Many of these models were developed in parallel and iteratively with experimental studies and provided predictions guiding new experiments. This data-driven modeling approach has advanced our understanding of respiratory network architecture and neural mechanisms underlying generation of the respiratory rhythm and pattern, including their functional reorganization under different physiological conditions. Models reviewed here vary in neurobiological details and computational complexity and span multiple spatiotemporal scales of respiratory control mechanisms. Recent models describe interacting populations of respiratory neurons spatially distributed within the Bötzinger and pre-Bötzinger complexes and rostral ventrolateral medulla that contain core circuits of the respiratory central pattern generator (CPG). Network interactions within these circuits along with intrinsic rhythmogenic properties of neurons form a hierarchy of multiple rhythm generation mechanisms. The functional expression of these mechanisms is controlled by input drives from other brainstem components, including the retrotrapezoid nucleus and pons, which regulate the dynamic behavior of the core circuitry. The emerging view is that the brainstem respiratory network has rhythmogenic capabilities at multiple levels of circuit organization. This allows flexible, state-dependent expression of different neural pattern-generation mechanisms under various physiological conditions, enabling a wide repertoire of respiratory behaviors. Some models consider control of the respiratory CPG by pulmonary feedback and network reconfiguration during defensive behaviors such as cough. Future directions in modeling of the respiratory CPG are considered. PMID:23687564

  13. Conic section function neural network circuitry for offline signature recognition.

    PubMed

    Erkmen, Burcu; Kahraman, Nihan; Vural, Revna A; Yildirim, Tulay

    2010-04-01

    In this brief, conic section function neural network (CSFNN) circuitry was designed for offline signature recognition. CSFNN is a unified framework for multilayer perceptron (MLP) and radial basis function (RBF) networks that makes simultaneous use of the advantages of both. The CSFNN circuitry architecture was developed using a mixed-mode circuit implementation. The designed circuit system is problem independent; hence, the general-purpose neural network circuit system can be applied to various pattern recognition problems with different network sizes, up to a maximum network size of 16-16-8. In this brief, the CSFNN circuitry system has been applied to two different signature recognition problems. CSFNN circuitry was trained with the chip-in-the-loop learning technique in order to compensate for typical analog process variations. The CSFNN hardware achieved computational performance highly comparable to that of CSFNN software on nonlinear signature recognition problems.

  14. Emergent Auditory Feature Tuning in a Real-Time Neuromorphic VLSI System.

    PubMed

    Sheik, Sadique; Coath, Martin; Indiveri, Giacomo; Denham, Susan L; Wennekers, Thomas; Chicca, Elisabetta

    2012-01-01

    Many sounds of ecological importance, such as communication calls, are characterized by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamo-cortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectro-temporal patterns. The availability of hardware on which the model can be implemented makes this a significant step toward the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems.

  15. Emergent Auditory Feature Tuning in a Real-Time Neuromorphic VLSI System

    PubMed Central

    Sheik, Sadique; Coath, Martin; Indiveri, Giacomo; Denham, Susan L.; Wennekers, Thomas; Chicca, Elisabetta

    2011-01-01

    Many sounds of ecological importance, such as communication calls, are characterized by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamo-cortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectro-temporal patterns. The availability of hardware on which the model can be implemented makes this a significant step toward the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems. PMID:22347163

  16. Robust autoassociative memory with coupled networks of Kuramoto-type oscillators

    NASA Astrophysics Data System (ADS)

    Heger, Daniel; Krischer, Katharina

    2016-08-01

    Uncertain recognition success, unfavorable scaling of connection complexity, or dependence on complex external input impair the usefulness of current oscillatory neural networks for pattern recognition or restrict technical realizations to small networks. We propose a network architecture of coupled oscillators for pattern recognition which shows none of the mentioned flaws. Furthermore, we illustrate the recognition process with simulation results and analyze the dynamics analytically: possible output patterns are isolated attractors of the system. Additionally, simple criteria for recognition success are derived from a lower bound on the basins of attraction.

  17. PatterNet: a system to learn compact physical design pattern representations for pattern-based analytics

    NASA Astrophysics Data System (ADS)

    Lutich, Andrey

    2017-07-01

    This research considers the problem of generating compact vector representations of physical design patterns for analytics purposes in the semiconductor patterning domain. PatterNet uses a deep artificial neural network to learn a mapping of physical design patterns to a compact Euclidean hyperspace. Distances among mapped patterns in this space correspond to dissimilarities among patterns defined at the time of the network training. Once the mapping network has been trained, PatterNet embeddings can be used as feature vectors with standard machine learning algorithms, and pattern search, comparison, and clustering become trivial problems. PatterNet is inspired by the concepts developed within the framework of generative adversarial networks as well as by FaceNet. Our method enables a deep neural network (DNN) to learn the compact representation directly by supplying it with pairs of design patterns and the dissimilarity between these patterns defined by a user. In the simplest case, the dissimilarity is represented by the area of the XOR of two patterns. It is important to realize that our PatterNet approach is very different from the methods developed for deep learning on image data. In contrast to "conventional" pictures, the patterns in the CAD world are lists of polygon vertex coordinates. The method solely relies on the promise of deep learning to discover the internal structure of the incoming data and learn its hierarchical representations. Artificial intelligence arising from the combination of PatterNet and clustering analysis very precisely follows the intuition of patterning/optical proximity correction experts, paving the way toward human-like and human-friendly engineering tools.
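
    The simplest dissimilarity the abstract mentions, the area of the XOR of two patterns, is straightforward once the CAD polygon vertex lists have been rasterized into binary masks. A minimal sketch (rasterization is assumed to happen upstream; the function name is illustrative):

```python
import numpy as np

def xor_area(pattern_a, pattern_b):
    """Dissimilarity of two rasterized layout patterns as the area of
    their XOR: the number of pixels covered by exactly one of the two.
    Inputs are boolean (H, W) masks of equal shape."""
    return int(np.logical_xor(pattern_a, pattern_b).sum())
```

    Identical patterns have XOR area zero, so such a measure is a valid training target for an embedding network whose pairwise distances should reflect pattern dissimilarity.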

  18. Stability and Hopf bifurcation in a simplified BAM neural network with two time delays.

    PubMed

    Cao, Jinde; Xiao, Min

    2007-03-01

    Various local periodic solutions may represent different classes of storage patterns or memory patterns, and arise from the different equilibrium points of neural networks (NNs) by applying the Hopf bifurcation technique. In this paper, a bidirectional associative memory NN with four neurons and multiple delays is considered. By applying the normal form theory and the center manifold theorem, an analysis of its linear stability and Hopf bifurcation is performed. An algorithm is worked out for determining the direction and stability of the bifurcated periodic solutions. Numerical simulation results supporting the theoretical analysis are also given.

  19. Third-dimension information retrieval from a single convergent-beam transmission electron diffraction pattern using an artificial neural network

    NASA Astrophysics Data System (ADS)

    Pennington, Robert S.; Van den Broek, Wouter; Koch, Christoph T.

    2014-05-01

    We have reconstructed third-dimension specimen information from convergent-beam electron diffraction (CBED) patterns simulated using the stacked-Bloch-wave method. By reformulating the stacked-Bloch-wave formalism as an artificial neural network and optimizing with resilient back propagation, we demonstrate specimen orientation reconstructions with depth resolutions down to 5 nm. To show our algorithm's ability to analyze realistic data, we also discuss and demonstrate our algorithm reconstructing from noisy data and using a limited number of CBED disks. Applicability of this reconstruction algorithm to other specimen parameters is discussed.

  20. Curriculum Assessment Using Artificial Neural Network and Support Vector Machine Modeling Approaches: A Case Study. IR Applications. Volume 29

    ERIC Educational Resources Information Center

    Chen, Chau-Kuang

    2010-01-01

    Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…

  1. Spatial-temporal-spectral EEG patterns of BOLD functional network connectivity dynamics

    NASA Astrophysics Data System (ADS)

    Lamoš, Martin; Mareček, Radek; Slavíček, Tomáš; Mikl, Michal; Rektor, Ivan; Jan, Jiří

    2018-06-01

    Objective. Growing interest in the examination of large-scale brain network functional connectivity dynamics is accompanied by an effort to find the electrophysiological correlates. The commonly used constraints applied to spatial and spectral domains during electroencephalogram (EEG) data analysis may leave part of the neural activity unrecognized. We propose an approach that blindly reveals multimodal EEG spectral patterns that are related to the dynamics of the BOLD functional network connectivity. Approach. The blind decomposition of EEG spectrogram by parallel factor analysis has been shown to be a useful technique for uncovering patterns of neural activity. The simultaneously acquired BOLD fMRI data were decomposed by independent component analysis. Dynamic functional connectivity was computed on the component’s time series using a sliding window correlation, and between-network connectivity states were then defined based on the values of the correlation coefficients. ANOVA tests were performed to assess the relationships between the dynamics of between-network connectivity states and the fluctuations of EEG spectral patterns. Main results. We found three patterns related to the dynamics of between-network connectivity states. The first pattern has dominant peaks in the alpha, beta, and gamma bands and is related to the dynamics between the auditory, sensorimotor, and attentional networks. The second pattern, with dominant peaks in the theta and low alpha bands, is related to the visual and default mode network. The third pattern, also with peaks in the theta and low alpha bands, is related to the auditory and frontal network. Significance. Our previous findings revealed a relationship between EEG spectral pattern fluctuations and the hemodynamics of large-scale brain networks. 
In this study, we suggest that the relationship also exists at the level of functional connectivity dynamics among large-scale brain networks when no standard spatial and spectral constraints are applied on the EEG data.

  2. The Reference Ability Neural Network Study: Life-time stability of reference-ability neural networks derived from task maps of young adults.

    PubMed

    Habeck, C; Gazes, Y; Razlighi, Q; Steffener, J; Brickman, A; Barulli, D; Salthouse, T; Stern, Y

    2016-01-15

    Analyses of large test batteries administered to individuals ranging from young to old have consistently yielded a set of latent variables representing reference abilities (RAs) that capture the majority of the variance in age-related cognitive change: Episodic Memory, Fluid Reasoning, Perceptual Processing Speed, and Vocabulary. In a previous paper (Stern et al., 2014), we introduced the Reference Ability Neural Network Study, which administers 12 cognitive neuroimaging tasks (3 for each RA) to healthy adults age 20-80 in order to derive unique neural networks underlying these 4 RAs and investigate how these networks may be affected by aging. We used a multivariate approach, linear indicator regression, to derive a unique covariance pattern or Reference Ability Neural Network (RANN) for each of the 4 RAs. The RANNs were derived from the neural task data of 64 younger adults of age 30 and below. We then prospectively applied the RANNs to fMRI data from the remaining sample of 227 adults of age 31 and above in order to classify each subject-task map into one of the 4 possible reference domains. Overall classification accuracy across subjects in the sample age 31 and above was 0.80±0.18. Classification accuracy by RA domain was also good, but variable; memory: 0.72±0.32; reasoning: 0.75±0.35; speed: 0.79±0.31; vocabulary: 0.94±0.16. Classification accuracy was not associated with cross-sectional age, suggesting that these networks, and their specificity to the respective reference domain, might remain intact throughout the age range. Higher mean brain volume was correlated with increased overall classification accuracy; better overall performance on the tasks in the scanner was also associated with classification accuracy. For the RANN network scores, we observed for each RANN that a higher score was associated with a higher corresponding classification accuracy for that reference ability. 
Despite the absence of behavioral performance information in the derivation of these networks, we also observed some brain-behavioral correlations, notably for the fluid-reasoning network whose network score correlated with performance on the memory and fluid-reasoning tasks. While age did not influence the expression of this RANN, the slope of the association between network score and fluid-reasoning performance was negatively associated with higher ages. These results provide support for the hypothesis that a set of specific, age-invariant neural networks underlies these four RAs, and that these networks maintain their cognitive specificity and level of intensity across age. Activation common to all 12 tasks was identified as another activation pattern resulting from a mean-contrast Partial-Least-Squares technique. This common pattern did show associations with age and some subject demographics for some of the reference domains, lending support to the overall conclusion that aspects of neural processing that are specific to any cognitive reference ability stay constant across age, while aspects that are common to all reference abilities differ across age. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Random Boolean networks for autoassociative memory: Optimization and sequential learning

    NASA Astrophysics Data System (ADS)

    Sherrington, D.; Wong, K. Y. M.

    Conventional neural networks are based on synaptic storage of information, even when the neural states are discrete and bounded. In general, the set of potential local operations is much greater. Here we discuss some aspects of the properties of networks of binary neurons with more general Boolean functions controlling the local dynamics. Two specific aspects are emphasised: (i) optimization in the presence of noise and (ii) a simple model for short-term memory exhibiting primacy and recency in the recall of sequentially taught patterns.

  4. Role of local network oscillations in resting-state functional connectivity.

    PubMed

    Cabral, Joana; Hugues, Etienne; Sporns, Olaf; Deco, Gustavo

    2011-07-01

    Spatio-temporally organized low-frequency fluctuations (<0.1 Hz), observed in BOLD fMRI signal during rest, suggest the existence of underlying network dynamics that emerge spontaneously from intrinsic brain processes. Furthermore, significant correlations between distinct anatomical regions, or functional connectivity (FC), have led to the identification of several widely distributed resting-state networks (RSNs). These slow dynamics seem to be highly structured by anatomical connectivity, but the mechanism behind them and their relationship with neural activity, particularly in the gamma frequency range, remain largely unknown. Indeed, direct measurements of neuronal activity have revealed similar large-scale correlations, particularly in slow power fluctuations of local field potential gamma frequency range oscillations. To address these questions, we investigated neural dynamics in a large-scale model of the human brain's neural activity. A key ingredient of the model was a structural brain network defined by empirically derived long-range brain connectivity together with the corresponding conduction delays. A neural population, assumed to spontaneously oscillate in the gamma frequency range, was placed at each network node. When these oscillatory units are integrated in the network, they behave as weakly coupled oscillators. The time-delayed interaction between nodes is described by the Kuramoto model of phase oscillators, a biologically-based model of coupled oscillatory systems. For a realistic setting of axonal conduction speed, we show that time-delayed network interaction leads to the emergence of slow neural activity fluctuations, whose patterns correlate significantly with the empirically measured FC. The best agreement of the simulated FC with the empirically measured FC is found for a set of parameters where subsets of nodes tend to synchronize although the network is not globally synchronized.
Inside such clusters, the simulated BOLD signal between nodes is found to be correlated, instantiating the empirically observed RSNs. Between clusters, patterns of positive and negative correlations are observed, as described in experimental studies. These results are found to be robust with respect to a biologically plausible range of model parameters. In conclusion, our model suggests how resting-state neural activity can originate from the interplay between the local neural dynamics and the large-scale structure of the brain. Copyright © 2011 Elsevier Inc. All rights reserved.
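
    The Kuramoto model used in this study couples each node's phase to its neighbors through the sine of their phase differences. A minimal integration sketch (the conduction delays central to the full brain model are omitted here for simplicity; parameter values are illustrative):

```python
import numpy as np

def simulate_kuramoto(adj, omega, k=1.0, dt=0.01, steps=5000, seed=0):
    """Euler-integrate the (delay-free) Kuramoto model on a weighted network:
        dtheta_i/dt = omega_i + k * sum_j adj[i, j] * sin(theta_j - theta_i)
    Returns the final phases and the order parameter r in [0, 1],
    where r = 1 means full phase synchrony."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, size=len(omega))
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]        # theta_j - theta_i
        theta = theta + dt * (omega + k * (adj * np.sin(diff)).sum(axis=1))
    r = abs(np.exp(1j * theta).mean())
    return theta, r
```

    With heterogeneous frequencies, weighted connectivity, and transmission delays, such a network settles into the partially synchronized clusters described above rather than global synchrony.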

  5. Synchronization and spatiotemporal patterns in coupled phase oscillators on a weighted planar network

    NASA Astrophysics Data System (ADS)

    Kagawa, Yuki; Takamatsu, Atsuko

    2009-04-01

    To reveal the relation between network structures found in two-dimensional biological systems, such as protoplasmic tube networks in the plasmodium of true slime mold, and spatiotemporal oscillation patterns emerged on the networks, we constructed coupled phase oscillators on weighted planar networks and investigated their dynamics. Results showed that the distribution of edge weights in the networks strongly affects (i) the propensity for global synchronization and (ii) emerging ratios of oscillation patterns, such as traveling and concentric waves, even if the total weight is fixed. In-phase locking, traveling wave, and concentric wave patterns were, respectively, observed most frequently in uniformly weighted, center weighted treelike, and periphery weighted ring-shaped networks. Controlling the global spatiotemporal patterns with the weight distribution given by the local weighting (coupling) rules might be useful in biological network systems including the plasmodial networks and neural networks in the brain.

  6. Network complexity as a measure of information processing across resting-state networks: evidence from the Human Connectome Project

    PubMed Central

    McDonough, Ian M.; Nashiro, Kaoru

    2014-01-01

    An emerging field of research focused on fluctuations in brain signals has provided evidence that the complexity of those signals, as measured by entropy, conveys important information about network dynamics (e.g., local and distributed processing). While much research has focused on how neural complexity differs in populations with different age groups or clinical disorders, substantially less research has focused on the basic understanding of neural complexity in populations with young and healthy brain states. The present study used resting-state fMRI data from the Human Connectome Project (Van Essen et al., 2013) to test the extent to which neural complexity in the BOLD signal, as measured by multiscale entropy, (1) would differ from random noise, (2) would differ between four major resting-state networks previously associated with higher-order cognition, and (3) would be associated with the strength and extent of functional connectivity, a complementary method of estimating information processing. We found that complexity in the BOLD signal exhibited different patterns of complexity from white, pink, and red noise and that neural complexity was differentially expressed between resting-state networks, including the default mode, cingulo-opercular, and left and right frontoparietal networks. Lastly, neural complexity across all networks was negatively associated with functional connectivity at fine scales, but was positively associated with functional connectivity at coarse scales. The present study is the first to characterize neural complexity in BOLD signals at a high temporal resolution and across different networks and might help clarify the inconsistencies between neural complexity and functional connectivity, thus informing the mechanisms underlying neural complexity. PMID:24959130
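
    Multiscale entropy coarse-grains a signal at successive scales and computes sample entropy of each coarse-grained series. A minimal sketch of one common formulation (the parameter choices m = 2 and r = 0.2 × SD are conventional defaults, not necessarily those of this study):

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy: negative log of the conditional probability that
    sequences matching for m points (Chebyshev distance <= r) also match
    for m + 1 points. r is a fraction of the signal's standard deviation."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def match_count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=2)
        return (d <= r).sum() - len(templ)      # exclude self-matches
    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3, 4)):
    """Coarse-grain by non-overlapping averaging at each scale, then
    compute sample entropy of each coarse-grained series."""
    x = np.asarray(x, float)
    out = []
    for s in scales:
        n = len(x) // s
        out.append(sample_entropy(x[: n * s].reshape(n, s).mean(axis=1)))
    return out
```

    Irregular signals such as white noise yield high sample entropy, while highly regular signals yield low values, which is what makes the measure a useful index of signal complexity.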

  7. Modularity Induced Gating and Delays in Neuronal Networks

    PubMed Central

    Shein-Idelson, Mark; Cohen, Gilad; Hanein, Yael

    2016-01-01

    Neural networks, despite their highly interconnected nature, exhibit distinctly localized and gated activation. Modularity, a distinctive feature of neural networks, has recently been proposed as an important parameter determining the manner in which networks support activity propagation. Here we use an engineered biological model, consisting of engineered rat cortical neurons, to study the role of modular topology in gating the activity between cell populations. We show that pairs of connected modules support conditional propagation (transmitting stronger bursts with higher probability), long delays, and propagation asymmetry. Moreover, large modular networks manifest diverse patterns of both local and global activation. Blocking inhibition decreased activity diversity and replaced it with highly consistent transmission patterns. By independently controlling modularity and disinhibition, experimentally and in a model, we propose that modular topology is an important parameter affecting activation localization and is instrumental for population-level gating by disinhibition. PMID:27104350

  8. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning.

    PubMed

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Kwan Chan, Pak; Tin, Chung

    2018-02-01

    Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrada, J.J.; Osborne-Lee, I.W.; Grizzaffi, P.A.

    Expert systems are known to be useful in capturing expertise and applying knowledge to chemical engineering problems such as diagnosis, process control, process simulation, and process advisory. However, expert system applications are traditionally limited to knowledge domains that are heuristic and involve only simple mathematics. Neural networks, on the other hand, represent an emerging technology capable of rapid recognition of patterned behavior without regard to mathematical complexity. Although useful in problem identification, neural networks are not very efficient in providing in-depth solutions and typically do not promote full understanding of the problem or the reasoning behind its solutions. Hence, applications of neural networks have certain limitations. This paper explores the potential for expanding the scope of chemical engineering areas where neural networks might be utilized by incorporating expert systems and neural networks into the same application, a process called hybridization. In addition, hybrid applications are compared with those using more traditional approaches, the results of the different applications are analyzed, and the feasibility of converting the preliminary prototypes described herein into useful final products is evaluated. 12 refs., 8 figs.

  10. The graph neural network model.

    PubMed

    Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele

    2009-01-01

    Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called the graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function τ(G, n) ∈ ℝ^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.
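
The function τ(G, n) described in the abstract can be sketched as a toy fixed-point iteration in the spirit of the GNN model: node states are updated from neighbour states until approximate convergence, then read out into ℝ^m. The weights and the small-scale initialization (chosen so the update is a contraction) are assumptions for illustration, not the paper's parametrization.

```python
import numpy as np

def gnn_embed(adj, features, w_msg, w_out, n_iter=50):
    """Toy fixed-point GNN: iterate node states toward a fixed point of a
    contraction mapping, then read out an m-dimensional vector per node."""
    x = np.zeros((features.shape[0], w_msg.shape[1]))   # hidden state per node
    for _ in range(n_iter):
        msg = adj @ x                                   # sum of neighbour states
        x = np.tanh(np.concatenate([features, msg], axis=1) @ w_msg)
    return x @ w_out                # tau(G, n) in R^m, for all nodes n at once

rng = np.random.default_rng(0)
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # 3-node path graph
feats = rng.normal(size=(3, 4))
w_msg = rng.normal(scale=0.1, size=(4 + 8, 8))  # small scale -> contraction
w_out = rng.normal(size=(8, 2))
emb = gnn_embed(adj, feats, w_msg, w_out)       # one point of R^2 per node
```

Keeping the recurrent weights small makes the state update a contraction, so the iteration converges to a unique fixed point, mirroring the convergence condition of the original model.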

  11. A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.

    PubMed

    Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias

    2008-12-01

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule that includes passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
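
The Jacobian-based view in the abstract can be illustrated numerically: for a generic discrete-time random recurrent network x_{t+1} = tanh(g W x_t), the largest Lyapunov exponent can be estimated from products of Jacobians along a trajectory. The model and gain values below are a standard sketch of this technique, not the paper's learning rule.

```python
import numpy as np

def largest_lyapunov(w, x0, n_steps=2000, g=1.0):
    """Estimate the largest Lyapunov exponent of x_{t+1} = tanh(g W x_t)
    by propagating a tangent vector through the trajectory's Jacobians
    and averaging the log norm growth (with renormalization)."""
    rng = np.random.default_rng(1)
    x = x0.copy()
    v = rng.normal(size=x.size)
    v /= np.linalg.norm(v)
    log_growth = 0.0
    for _ in range(n_steps):
        x = np.tanh(g * (w @ x))
        jac = (g * w) * (1.0 - x ** 2)[:, None]   # diag(1 - tanh^2) @ (g W)
        v = jac @ v
        nv = np.linalg.norm(v)
        log_growth += np.log(nv)
        v /= nv
    return log_growth / n_steps

n = 100
rng = np.random.default_rng(2)
w = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
lam_weak = largest_lyapunov(w, rng.normal(size=n), g=0.5)    # fixed-point regime
lam_strong = largest_lyapunov(w, rng.normal(size=n), g=3.0)  # chaotic regime
```

For such networks chaos sets in as the gain g crosses 1, so the exponent changes sign between the two runs; the abstract's point is that functional sensitivity peaks when this exponent is near 0.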

  12. Development of a neural network for early detection of renal osteodystrophy

    NASA Astrophysics Data System (ADS)

    Cheng, Shirley N.; Chan, Heang-Ping; Adler, Ronald; Niklason, Loren T.; Chang, Chair-Li

    1991-07-01

    Bone erosion presenting as subperiosteal resorption on the phalanges of the hand is an early manifestation of hyperparathyroidism associated with chronic renal failure. At present, the diagnosis is made by trained radiologists through visual inspection of hand radiographs. In this study, a neural network is being developed to assess the feasibility of computer-aided detection of these changes. A two-pass approach is adopted. The digitized image is first compressed by a Laplacian pyramid compact code. The first neural network locates the region of interest using vertical projections along the phalanges and then the horizontal projections across the phalanges. A second neural network is used to classify texture variations of trabecular patterns in the region using a concurrence matrix as the input to a two-dimensional sensor layer to detect the degree of associated osteopenia. Preliminary results demonstrate the feasibility of this approach.

  13. RM-SORN: a reward-modulated self-organizing recurrent neural network.

    PubMed

    Aswolinskiy, Witali; Pipa, Gordon

    2015-01-01

    Neural plasticity plays an important role in learning and memory. Reward-modulation of plasticity offers an explanation for the ability of the brain to adapt its neural activity to achieve a rewarded goal. Here, we define a neural network model that learns through the interaction of intrinsic plasticity (IP) and reward-modulated spike-timing-dependent plasticity (STDP). IP enables the network to explore possible output sequences, and STDP, modulated by reward, reinforces the creation of the rewarded output sequences. The model is tested on tasks for prediction, recall, non-linear computation, pattern recognition, and sequence generation. It achieves performance comparable to networks trained with supervised learning, while using simple, biologically motivated plasticity rules and rewarding strategies. The results confirm the importance of investigating the interaction of several plasticity rules in the context of reward-modulated learning, and raise the question of whether reward-modulated self-organization can explain the remarkable capabilities of the brain.
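
A minimal sketch of reward-modulated STDP, the core mechanism of the record above: spike pairings build an eligibility value via the usual STDP window, and the actual weight change is gated by a scalar reward. The constants and the pairing rule are illustrative assumptions, not the exact RM-SORN update.

```python
import numpy as np

def rstdp_update(pre_spikes, post_spikes, reward, a_plus=0.01, a_minus=0.012,
                 tau=20.0):
    """Toy reward-modulated STDP: accumulate eligibility from pre/post
    spike-time pairings, then gate the weight change by the reward signal."""
    elig = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:                       # pre before post -> potentiation
                elig += a_plus * np.exp(-dt / tau)
            elif dt < 0:                     # post before pre -> depression
                elig -= a_minus * np.exp(dt / tau)
    return reward * elig

# A causal pairing with positive reward strengthens the synapse...
dw_pos = rstdp_update([10.0], [15.0], reward=+1.0)
# ...while the same pairing with negative reward weakens it
dw_neg = rstdp_update([10.0], [15.0], reward=-1.0)
```

The key property, visible in the two calls, is that the same spike statistics can produce opposite weight changes depending on the reward, which is what lets reward steer self-organization.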

  14. The Neural Border: Induction, Specification and Maturation of the territory that generates Neural Crest cells.

    PubMed

    Pla, Patrick; Monsoro-Burq, Anne H

    2018-05-28

    The neural crest is induced at the edge between the neural plate and the nonneural ectoderm, in an area called the neural (plate) border, during gastrulation and neurulation. In recent years, many studies have explored how this domain is patterned, and how the neural crest is induced within this territory, which also contributes to the prospective dorsal neural tube, the dorsalmost nonneural ectoderm, and placode derivatives in the anterior area. This review highlights the tissue interactions, the cell-cell signaling and the molecular mechanisms involved in this dynamic spatiotemporal patterning, resulting in the induction of the premigratory neural crest. Collectively, these studies allow the assembly of a complex gene regulatory network for the neural border and early neural crest, composed mostly of transcriptional regulations but also, more recently, incorporating novel signaling interactions. Copyright © 2018. Published by Elsevier Inc.

  15. Optical Neural Classification Of Binary Patterns

    NASA Astrophysics Data System (ADS)

    Gustafson, Steven C.; Little, Gordon R.

    1988-05-01

    Binary pattern classification that may be implemented using optical hardware and neural network algorithms is considered. Pattern classification problems that have no concise description (as in classifying handwritten characters) or no concise computation (as in NP-complete problems) are expected to be particularly amenable to this approach. For example, optical processors that efficiently classify binary patterns in accordance with their Boolean function complexity might be designed. As a candidate for such a design, an optical neural network model is discussed that is designed for binary pattern classification and that consists of an optical resonator with a dynamic multiplex-recorded reflection hologram and a phase conjugate mirror with thresholding and gain. In this model, learning or training examples of binary patterns may be recorded on the hologram such that one bit in each pattern marks the pattern class. Any input pattern, including one with an unknown class or marker bit, will be modified by a large number of parallel interactions with the reflection hologram and nonlinear mirror. After perhaps several seconds and 100 billion interactions, a steady-state pattern may develop with a marker bit that represents a minimum-Boolean-complexity classification of the input pattern. Computer simulations are presented that illustrate progress in understanding the behavior of this model and in developing a processor design that could have commanding and enduring performance advantages compared to current pattern classification techniques.

  16. Distributed memory approaches for robotic neural controllers

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1990-01-01

    The suitability of two varieties of distributed memory neural networks as trainable controllers for a simulated robotics task is explored. The task requires that two cameras observe an arbitrary target point in space. Coordinates of the target on the camera image planes are passed to a neural controller which must learn to solve the inverse kinematics of a manipulator with one revolute and two prismatic joints. Two new network designs are evaluated. The first, radial basis sparse distributed memory (RBSDM), approximates functional mappings as sums of multivariate Gaussians centered around previously learned patterns. The second network type involves variations of adaptive vector quantizers or self-organizing maps. In these networks, random N-dimensional points are given local connectivities. They are then exposed to training patterns and readjust their locations based on a nearest-neighbor rule. Both approaches are tested on their ability to interpolate manipulator joint coordinates for simulated arm movement while simultaneously performing stereo fusion of the camera data. Comparisons are made with classical k-nearest-neighbor pattern recognition techniques.

  17. Nonlinear channel equalization for QAM signal constellation using artificial neural networks.

    PubMed

    Patra, J C; Pal, R N; Baliarsingh, R; Panda, G

    1999-01-01

    The application of artificial neural networks (ANNs) to adaptive channel equalization in a digital communication system with a 4-QAM signal constellation is reported in this paper. A novel, computationally efficient single-layer functional link ANN (FLANN) is proposed for this purpose. This network has a simple structure in which nonlinearity is introduced by functional expansion of the input pattern with trigonometric polynomials. Because of the input pattern enhancement, the FLANN is capable of forming arbitrarily nonlinear decision boundaries and can perform complex pattern classification tasks. Considering channel equalization as a nonlinear classification problem, the FLANN has been utilized for nonlinear channel equalization. The performance of the FLANN is compared with two other ANN structures [a multilayer perceptron (MLP) and a polynomial perceptron network (PPN)] along with a conventional linear LMS-based equalizer for different linear and nonlinear channel models. The effect of the eigenvalue ratio (EVR) of the input correlation matrix on equalizer performance has been studied. A comparison of the computational complexity of the three ANN structures is also provided.
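
The FLANN idea — a single-layer network made nonlinear only by trigonometric functional expansion of the input pattern — can be sketched as follows. The toy target function and LMS settings are assumptions for illustration; the paper applies the same principle to channel equalization rather than function fitting.

```python
import numpy as np

def trig_expand(x, order=2):
    """FLANN-style functional expansion of a pattern x with trigonometric
    polynomials: [x, sin(k*pi*x), cos(k*pi*x)] for k = 1..order."""
    feats = [x]
    for k in range(1, order + 1):
        feats.append(np.sin(k * np.pi * x))
        feats.append(np.cos(k * np.pi * x))
    return np.concatenate(feats)

def lms_train(patterns, targets, expand, lr=0.01, epochs=200):
    """Single-layer LMS training on (expanded) patterns."""
    w = np.zeros(expand(patterns[0]).size)
    for _ in range(epochs):
        for x, d in zip(patterns, targets):
            z = expand(x)
            w += lr * (d - w @ z) * z       # LMS weight update
    return w

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, size=(200, 2))
# A nonlinear target that a purely linear single-layer net cannot fit
y_train = np.sin(np.pi * x_train[:, 0]) + 0.5 * np.cos(2 * np.pi * x_train[:, 1])
w_flann = lms_train(x_train, y_train, trig_expand)
w_linear = lms_train(x_train, y_train, lambda x: x)
mse_flann = np.mean([(y - w_flann @ trig_expand(x)) ** 2
                     for x, y in zip(x_train, y_train)])
mse_linear = np.mean([(y - w_linear @ x) ** 2
                      for x, y in zip(x_train, y_train)])
```

The expansion puts the nonlinearity into the features rather than into hidden layers, which is why the FLANN stays single-layer and cheap while still forming nonlinear decision boundaries.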

  18. Identification of Correlated GRACE Monthly Harmonic Coefficients Using Pattern Recognition and Neural Networks

    NASA Astrophysics Data System (ADS)

    Piretzidis, D.; Sra, G.; Sideris, M. G.

    2016-12-01

    This study explores new methods for identifying correlation errors in harmonic coefficients derived from monthly solutions of the Gravity Recovery and Climate Experiment (GRACE) satellite mission using pattern recognition and neural network algorithms. These correlation errors are evidenced in the differences between monthly solutions and can be suppressed using a de-correlation filter. In all studies so far, the implementation of the de-correlation filter starts from a specific minimum order (i.e., 11 for RL04 and 38 for RL05) until the maximum order of the monthly solution examined. This implementation method has two disadvantages, namely, the omission of filtering correlated coefficients of order less than the minimum order and the filtering of uncorrelated coefficients of order higher than the minimum order. In the first case, the filtered solution is not completely free of correlated errors, whereas the second case results in a monthly solution that suffers from loss of geophysical signal. In the present study, a new method of implementing the de-correlation filter is suggested, by identifying and filtering only the coefficients that show indications of high correlation. Several numerical and geometric properties of the harmonic coefficient series of all orders are examined. Extreme cases of both correlated and uncorrelated coefficients are selected, and their corresponding properties are used to train a two-layer feed-forward neural network. The objective of the neural network is to identify and quantify the correlation by providing the probability that an order of coefficients is correlated. Results show good performance of the neural network, both in the validation stage of the training procedure and in the subsequent use of the trained network to classify independent coefficients. The neural network is also capable of identifying correlated coefficients even when a small number of training samples and neurons are used (e.g., 100 and 10, respectively).

  19. Deforestation and Industrial Forest Patterns in Colombia: a Case Study

    NASA Astrophysics Data System (ADS)

    Huo, L. Z.; Boschetti, L.; Sparks, A. M.; Clerici, N.

    2017-12-01

    The recent peace agreement between the government and the Revolutionary Armed Forces of Colombia (FARC) offers new opportunities for peaceful and sustainable development, but at the same time requires a timely effort to protect biological resources and ecosystem services (Clerici et al., 2016). In this context, we use the 2001-2017 Landsat data record to prototype a methodology to establish a baseline of deforestation, afforestation and industrial forest practices (i.e., establishment and harvest of forest plantations), and to monitor future changes. Two study areas, which have seen considerable deforestation in recent years, were selected: one in the south of the country, at the edge of the Amazon forest (WRS path 008 row 059), and one in the center, in mixed forest (WRS path 008 row 055). The time series of all available cloud-free Landsat 5, Landsat 7 and Landsat 8 data was classified into a sequence of binary forest/non-forest maps using a deep learning model, successfully used in the natural language processing field, trained to detect forest transitions. Recurrent Neural Networks (RNNs) are a class of artificial neural network that extends the conventional neural network with loops in the connections (Graves et al., 2013). Unlike a feed-forward neural network, an RNN is able to process sequential inputs by having a recurrent hidden state whose activation at each step depends on that of the previous steps. In this manner, the RNN provides a good framework for dynamically modeling time series data, and has been successfully applied to natural language processing at Google (Sutskever et al., 2014). The sequence of forest cover state maps was subsequently post-processed to differentiate between deforestation (i.e., transition from forest to non-forest land use) and industrial forest harvest (i.e., timber harvest followed by regrowth) by integrating the detection of temporal and spatial patterns. References: Clerici, N., et al. (2016). Colombia: Dealing in conservation. Science, 354(6309), 190. Sutskever, I., et al. (2014). Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems, 3104-3112. Graves, A., et al. (2013). Speech recognition with deep recurrent neural networks. In Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 6645-6649.
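
The recurrent-hidden-state idea described in the record can be sketched as a minimal Elman-style RNN step in Python; the weights are random and purely illustrative, whereas the study's actual model is a trained sequence classifier over Landsat time series.

```python
import numpy as np

def rnn_forward(inputs, w_xh, w_hh, w_hy):
    """Minimal Elman RNN: the hidden state at each step depends on the
    current input and on the previous hidden state, so the output for a
    sequence depends on the order of its elements."""
    h = np.zeros(w_hh.shape[0])
    outputs = []
    for x in inputs:
        h = np.tanh(w_xh @ x + w_hh @ h)   # recurrent hidden state update
        outputs.append(w_hy @ h)           # per-step readout
    return np.array(outputs), h

rng = np.random.default_rng(0)
w_xh = rng.normal(scale=0.5, size=(6, 3))
w_hh = rng.normal(scale=0.5, size=(6, 6))
w_hy = rng.normal(scale=0.5, size=(2, 6))
seq = rng.normal(size=(5, 3))              # a length-5 sequence of 3-d inputs
ys, h_final = rnn_forward(seq, w_xh, w_hh, w_hy)
```

Because the hidden state carries history forward, reversing the input sequence changes the outputs, which is exactly what makes such a model suitable for detecting ordered forest transitions.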

  20. Biological modelling of a computational spiking neural network with neuronal avalanches.

    PubMed

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-06-28

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance.This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'. © 2017 The Author(s).

  1. Biological modelling of a computational spiking neural network with neuronal avalanches

    NASA Astrophysics Data System (ADS)

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-05-01

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue `Mathematical methods in medicine: neuroscience, cardiology and pathology'.

  2. Patterns of work attitudes: A neural network approach

    NASA Astrophysics Data System (ADS)

    Mengov, George D.; Zinovieva, Irina L.; Sotirov, George R.

    2000-05-01

    In this paper we introduce a neural-network-based approach to analyzing empirical data and models from work and organizational psychology (WOP), and suggest possible implications for the practice of managers and business consultants. With this method it becomes possible to give quantitative answers to a range of questions such as: What are the characteristics of an organization in terms of its employees' motivation? What distinct attitudes towards work exist? Which pattern is most desirable from the standpoint of productivity and professional achievement? What will be the dynamics of behavior, as quantified by our method, during an ongoing organizational change or consultancy intervention? Our investigation is founded on the theoretical achievements of Maslow (1954, 1970) in human motivation and of Hackman & Oldham (1975, 1980) in job diagnostics, and applies the mathematical algorithm of the dARTMAP variation (Carpenter et al., 1998) of the Adaptive Resonance Theory (ART) neural networks introduced by Grossberg (1976). We exploit the ART capabilities to visualize the knowledge accumulated in the network's long-term memory in order to interpret the findings in organizational research.

  3. Quick fuzzy backpropagation algorithm.

    PubMed

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and FBP algorithms, respectively, are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP algorithm compared with the FBP algorithm. The adaptation of an interactive web system to its users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, it is broadly applicable in areas such as adaptive and adaptable interactive systems and data mining.

  4. Temporal sequence learning in winner-take-all networks of spiking neurons demonstrated in a brain-based device.

    PubMed

    McKinstry, Jeffrey L; Edelman, Gerald M

    2013-01-01

    Animal behavior often involves a temporally ordered sequence of actions learned from experience. Here we describe simulations of interconnected networks of spiking neurons that learn to generate patterns of activity in correct temporal order. The simulation consists of large-scale networks of thousands of excitatory and inhibitory neurons that exhibit short-term synaptic plasticity and spike-timing dependent synaptic plasticity. The neural architecture within each area is arranged to evoke winner-take-all (WTA) patterns of neural activity that persist for tens of milliseconds. In order to generate and switch between consecutive firing patterns in correct temporal order, a reentrant exchange of signals between these areas was necessary. To demonstrate the capacity of this arrangement, we used the simulation to train a brain-based device responding to visual input by autonomously generating temporal sequences of motor actions.

  5. Photonics: From target recognition to lesion detection

    NASA Technical Reports Server (NTRS)

    Henry, E. Michael

    1994-01-01

    Since 1989, Martin Marietta has invested in the development of an innovative concept for robust real-time pattern recognition for any two-dimensional sensor. This concept has been tested in simulation, and in laboratory and field hardware, for a number of DOD and commercial uses from automatic target recognition to manufacturing inspection. We have now joined Rose Health Care Systems in developing its use for medical diagnostics. The concept is based on determining regions of interest by using optical Fourier bandpassing as a scene segmentation technique, enhancing those regions using wavelet filters, passing the enhanced regions to a neural network for analysis and initial pattern identification, and following this initial identification with confirmation by optical correlation. The optical scene segmentation and pattern confirmation are performed by the same optical module. The neural network is a recursive error minimization network with a small number of connections and nodes that rapidly converges to a global minimum.

  6. Study on pattern recognition of Raman spectrum based on fuzzy neural network

    NASA Astrophysics Data System (ADS)

    Zheng, Xiangxiang; Lv, Xiaoyi; Mo, Jiaqing

    2017-10-01

    Hydatid disease is a serious parasitic disease in many regions worldwide, especially in Xinjiang, China. Raman spectra of the serum of patients with echinococcosis were selected as the research object in this paper. Raman spectra of blood samples from healthy people and patients with echinococcosis were measured, and their spectral characteristics were analyzed. The fuzzy neural network not only has the ability of fuzzy logic to deal with uncertain information, but also has the knowledge-storage ability of a neural network, so it is combined with the Raman spectrum for the disease diagnosis problem. Firstly, principal component analysis (PCA) is used to extract the principal components of the Raman spectrum, reducing the network input and accelerating prediction while retaining the essential information of the original data. Then the extracted principal components are used as the input of the neural network; the hidden layer of the network performs rule generation and inference, and the output layer of the network produces the fuzzy classification output. Finally, a subset of samples is randomly selected to train the network, the trained network is used to predict the remaining samples, and the predicted results are compared with a general BP neural network to illustrate the feasibility and advantages of the fuzzy neural network. Success in this endeavor would be helpful for research on the spectroscopic diagnosis of disease, and the approach can be applied in practice in many other spectral analysis fields.
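
The PCA front end described above — reducing each spectrum to a few principal components before classification — can be sketched with NumPy. The spectra below are synthetic stand-ins, not Raman measurements.

```python
import numpy as np

def pca_reduce(spectra, n_components):
    """Project spectra onto their leading principal components, as a compact
    front end for a downstream (fuzzy) neural network classifier."""
    centered = spectra - spectra.mean(axis=0)
    # SVD of the centered data: rows of vt are the principal axes
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:n_components].T
    explained = (s[:n_components] ** 2).sum() / (s ** 2).sum()
    return scores, explained

rng = np.random.default_rng(0)
# 40 synthetic "spectra": two latent factors plus noise, 500 wavenumber bins
basis = rng.normal(size=(2, 500))
spectra = rng.normal(size=(40, 2)) @ basis + 0.05 * rng.normal(size=(40, 500))
scores, explained = pca_reduce(spectra, n_components=2)
```

Two components capture almost all of the variance of this synthetic data, so the classifier input shrinks from 500 values per spectrum to 2, which is the speed-up the record refers to.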

  7. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

    Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood of a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
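
Feeding 3x3 local windows of image data into the network, as described above, amounts to building a per-pixel feature vector from each neighbourhood so the classifier sees local texture as well as the centre value. A minimal sketch (the window layout is the obvious reading of the text, not code from the paper):

```python
import numpy as np

def window_features(image, size=3):
    """Flatten each interior size x size neighbourhood of a 2-D image
    into a feature vector for a per-pixel classifier."""
    r = size // 2
    h, w = image.shape
    feats = np.empty((h - 2 * r, w - 2 * r, size * size))
    for i in range(r, h - r):
        for j in range(r, w - r):
            feats[i - r, j - r] = image[i - r:i + r + 1, j - r:j + r + 1].ravel()
    return feats

img = np.arange(25, dtype=float).reshape(5, 5)
f = window_features(img)   # one 9-d vector per interior pixel
```

The middle element of each 9-d vector is the pixel itself, and the surrounding elements carry the texture context without any explicit texture measure.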

  8. A neural-visualization IDS for honeynet data.

    PubMed

    Herrero, Álvaro; Zurutuza, Urko; Corchado, Emilio

    2012-04-01

    Neural intelligent systems can provide a visualization of network traffic for security staff, in order to reduce the widely known high false-positive rate associated with misuse-based Intrusion Detection Systems (IDSs). Unlike previous work, this study proposes unsupervised neural models that generate an intuitive visualization of the captured traffic, rather than network statistics. These snapshots of network events are immensely useful for security personnel that monitor network behavior. The system is based on the use of different neural projection and unsupervised methods for the visual inspection of honeypot data, and may be seen as a complementary network security tool that sheds light on internal data structures through visual inspection of the traffic itself. Furthermore, it is intended to facilitate verification and assessment of Snort performance (a well-known and widely used misuse-based IDS) through the visualization of attack patterns. Empirical verification and comparison of the proposed projection methods are performed in a real domain, where two different case studies are defined and analyzed.

  9. Spectral pattern recognition of controlled substances in street samples using artificial neural network system

    NASA Astrophysics Data System (ADS)

    Poryvkina, Larisa; Aleksejev, Valeri; Babichenko, Sergey M.; Ivkina, Tatjana

    2011-04-01

    The NarTest fluorescent technique is aimed at the detection of an analyte of interest in street samples by recognition of its specific spectral patterns in three-dimensional Spectral Fluorescent Signatures (SFS) measured with the NTX2000 analyzer, without chromatographic or other separation of controlled substances from a mixture with cutting agents. Illicit drugs have their own characteristic SFS features which can be used for detection and identification of narcotics; however, a typical street sample consists of a mixture with cutting agents: adulterants and diluents. Many of them interfere with the spectral shape of the SFS. An expert system based on Artificial Neural Networks (ANNs) has been developed and applied to such pattern recognition in the SFS of street samples of illicit drugs.

  10. Synchronization transition in neuronal networks composed of chaotic or non-chaotic oscillators.

    PubMed

    Xu, Kesheng; Maidana, Jean Paul; Castro, Samy; Orio, Patricio

    2018-05-30

    Chaotic dynamics have been shown in the dynamics of neurons and neural networks, in experimental data and numerical simulations. Theoretical studies have proposed an underlying role of chaos in neural systems. Nevertheless, whether chaotic neural oscillators make a significant contribution to network behaviour, and whether the dynamical richness of neural networks is sensitive to the dynamics of isolated neurons, still remain open questions. We investigated synchronization transitions in heterogeneous neural networks of neurons connected by electrical coupling in a small-world topology. The nodes in our model are oscillatory neurons that, when isolated, can exhibit either chaotic or non-chaotic behaviour, depending on conductance parameters. We found that the heterogeneity of firing rates and firing patterns makes a greater contribution than chaos to the steepness of the synchronization transition curve. We also show that chaotic dynamics of the isolated neurons do not always make a visible difference in the transition to full synchrony. Moreover, macroscopic chaos is observed regardless of the dynamical nature of the neurons. However, performing a functional connectivity dynamics analysis, we show that chaotic nodes can promote what is known as multi-stable behaviour, in which the network dynamically switches between a number of different semi-synchronized, metastable states.
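
The synchronization-transition setup of the record (oscillators coupled on a small-world graph) can be illustrated with a generic Kuramoto phase model rather than the paper's conductance-based neurons; the topology, coupling values, and frequency spread below are all illustrative assumptions.

```python
import numpy as np

def kuramoto_order(adj, omega, coupling, dt=0.05, n_steps=2000, seed=0):
    """Simulate Kuramoto phase oscillators on a given network and return the
    time-averaged order parameter r in [0, 1], a standard synchrony measure."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, size=omega.size)
    r_vals = []
    for step in range(n_steps):
        diff = np.sin(theta[None, :] - theta[:, None])   # sin(theta_j - theta_i)
        theta += dt * (omega + coupling * (adj * diff).sum(axis=1))
        if step >= n_steps // 2:                          # discard the transient
            r_vals.append(abs(np.exp(1j * theta).mean()))
    return float(np.mean(r_vals))

# Small-world-like graph: ring lattice (2 neighbours each side) plus shortcuts
n = 50
rng = np.random.default_rng(1)
adj = np.zeros((n, n))
for i in range(n):
    for d in (1, 2):
        adj[i, (i + d) % n] = adj[(i + d) % n, i] = 1
for _ in range(25):                                       # random shortcuts
    i, j = rng.integers(0, n, size=2)
    if i != j:
        adj[i, j] = adj[j, i] = 1
omega = rng.normal(scale=0.5, size=n)                     # heterogeneous frequencies
r_weak = kuramoto_order(adj, omega, coupling=0.01)
r_strong = kuramoto_order(adj, omega, coupling=1.0)
```

Sweeping the coupling between these two values traces out the synchronization transition curve whose steepness the study relates to node heterogeneity.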

  11. A solution for two-dimensional mazes with use of chaotic dynamics in a recurrent neural network model.

    PubMed

    Suemitsu, Yoshikazu; Nara, Shigetoshi

    2004-09-01

    Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from the position at t to t + 1 by simply defined motion function calculated from firing patterns of the neural network model at each time step t. We have embedded several prototype attractors that correspond to the simple motion of the object orienting toward several directions in two-dimensional space in our neural network model. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between embedded attractors in a state space, and these dynamics enable the object to move in various directions. System parameter switching between a chaotic and an attractor regime in the state space of the neural network enables the object to move to a set target in a two-dimensional maze. Results of computer simulations show that the success rate for this method over 300 trials is higher than that of random walk. To investigate why the proposed method gives better performance, we calculate and discuss statistical data with respect to dynamical structure.

  12. Disrupted Topological Patterns of Large-Scale Network in Conduct Disorder

    PubMed Central

    Jiang, Yali; Liu, Weixiang; Ming, Qingsen; Gao, Yidian; Ma, Ren; Zhang, Xiaocui; Situ, Weijun; Wang, Xiang; Yao, Shuqiao; Huang, Bingsheng

    2016-01-01

    Regional abnormalities in brain structure and function, as well as disrupted connectivity, have been found repeatedly in adolescents with conduct disorder (CD). Yet the large-scale brain topology associated with CD is not well characterized, and little is known about the systematic neural mechanisms of CD. We employed graph theory to systematically investigate the structural connectivity derived from cortical thickness correlations in a group of patients with CD (N = 43) and healthy controls (HCs, N = 73). Nonparametric permutation tests were applied for between-group comparisons of graphical metrics. Compared with HCs, network measures including global/local efficiency and modularity all pointed to hypo-functioning in CD, despite preserved small-world organization in both groups. The hub distributions of the two groups only partially overlapped. These results indicate that CD is accompanied by both impaired integration and impaired segregation of brain networks, and that the distribution of highly connected neural network ‘hubs’ is distinct between groups. Such misconfiguration extends our understanding of how structural neural network disruptions may underlie behavioral disturbances in adolescents with CD and potentially implicates aberrant cytoarchitectonic profiles in the brains of CD patients. PMID:27841320
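    Graph metrics such as global efficiency, the mean inverse shortest-path length over node pairs, can be sketched as follows (a minimal illustration on unweighted adjacency lists; the study itself worked on structural covariance networks, and the function names are ours):

    ```python
    from collections import deque

    def shortest_paths(adj, src):
        # BFS distances from src; unreachable nodes are simply absent.
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def global_efficiency(adj):
        # Mean of inverse shortest-path lengths over all ordered node pairs.
        nodes = list(adj)
        n = len(nodes)
        total = 0.0
        for u in nodes:
            d = shortest_paths(adj, u)
            for v in nodes:
                if v != u and v in d:
                    total += 1.0 / d[v]
        return total / (n * (n - 1))
    ```

    On a fully connected triangle the efficiency is 1.0; any missing edge lowers it, which is the sense in which reduced efficiency reflects impaired network integration.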

  13. Using neural networks to model the behavior and decisions of gamblers, in particular, cyber-gamblers.

    PubMed

    Chan, Victor K Y

    2010-03-01

    This article describes the use of neural networks (a type of artificial intelligence) and an empirical data sample of, inter alia, the amounts of bets laid and the winnings/losses made in successive games by a number of cyber-gamblers to longitudinally model gamblers' behavior and decisions as to such bet amounts and the temporal trajectory of winnings/losses. The data was collected by videoing Texas Holdem gamblers at a cyber-gambling website. Six "persistent" gamblers were identified, totaling 675 games. The neural networks on average were able to predict bet amounts and cumulative winnings/losses in successive games accurately to three decimal places of the dollar. A more important conclusion is that the influence of a gambler's skills, strategies, and personality on his/her successive bet amounts and cumulative winnings/losses is almost totally reflected by the pattern(s) of his/her winnings/losses in the few initial games and his/her gambling account balance. This partially invalidates gamblers' illusions and fallacies that they can outperform others or even bankers. For government policy-makers, gambling industry operators, economists, sociologists, psychiatrists, and psychologists, this article provides models for gamblers' behavior and decisions. It also explores and exemplifies the usefulness of neural networks and artificial intelligence at large in the research on gambling.

  14. Spectral feature extraction of EEG signals and pattern recognition during mental tasks of 2-D cursor movements for BCI using SVM and ANN.

    PubMed

    Bascil, M Serdar; Tesneli, Ahmet Y; Temurtas, Feyzullah

    2016-09-01

    Brain computer interface (BCI) is a new communication channel between man and machine. It identifies mental task patterns in the electroencephalogram (EEG): it extracts brain electrical activity recorded by EEG and transforms it into machine control commands. The main goal of BCI is to make assistive devices, such as computers, available to paralyzed people and to make their lives easier. This study deals with feature extraction and mental task pattern recognition for 2-D cursor control from EEG as an offline analysis approach. Hemispherical power density changes are computed and compared on the alpha-beta frequency bands with only mental imagination of cursor movements. First, power spectral density (PSD) features of the EEG signals are extracted, and the high-dimensional data are reduced by principal component analysis (PCA) and independent component analysis (ICA), which are statistical algorithms. In the last stage, all features are classified with two types of support vector machine (SVM), linear and least squares (LS-SVM), and three different artificial neural network (ANN) structures, learning vector quantization (LVQ), multilayer neural network (MLNN) and probabilistic neural network (PNN), and mental task patterns are successfully identified via the k-fold cross-validation technique.
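    The first stage described above, extracting band power from an EEG segment, can be sketched with a naive periodogram (a minimal illustration; the study uses proper PSD estimation, and the function name is ours):

    ```python
    import cmath

    def band_power(x, fs, f_lo, f_hi):
        """Naive periodogram: power summed over DFT bins in [f_lo, f_hi] Hz."""
        n = len(x)
        power = 0.0
        for k in range(n // 2 + 1):
            f = k * fs / n          # frequency of DFT bin k
            if f_lo <= f <= f_hi:
                coef = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                           for t in range(n))
                power += abs(coef) ** 2 / n
        return power
    ```

    Comparing, say, alpha-band (8-13 Hz) power against beta-band (14-30 Hz) power on left- and right-hemisphere channels yields the kind of hemispherical power-density feature the abstract refers to.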

  15. Reservoir characterization using core, well log, and seismic data and intelligent software

    NASA Astrophysics Data System (ADS)

    Soto Becerra, Rodolfo

    We have developed intelligent software, Oilfield Intelligence (OI), as an engineering tool to improve the characterization of oil and gas reservoirs. OI integrates neural networks and multivariate statistical analysis. It is composed of five main subsystems: data input, preprocessing, architecture design, graphics design, and inference engine modules. More than 1,200 lines of programming code have been written as MATLAB M-files. The degree of success of many oil and gas drilling, completion, and production activities depends upon the accuracy of the models used in a reservoir description. Neural networks have been applied to the identification of nonlinear systems in almost all scientific fields, and solving reservoir characterization problems is no exception. Neural networks have a number of attractive features that can help to extract and recognize underlying patterns, structures, and relationships among data. However, before developing a neural network model, we must solve the problem of dimensionality, such as determining dominant and irrelevant variables. We can apply principal components and factor analysis to reduce the dimensionality and help the neural networks formulate more realistic models. We validated OI by obtaining confident models in three different oil field problems: (1) A neural network in-situ stress model using lithology and gamma ray logs for the Travis Peak formation of east Texas, (2) A neural network permeability model using porosity and gamma ray logs, and a neural network pseudo-gamma ray log model using 3D seismic attributes, for the reservoir VLE 196 Lamar field located in Block V of south-central Lake Maracaibo (Venezuela), and (3) Neural network primary ultimate oil recovery (PRUR), initial waterflooding ultimate oil recovery (IWUR), and infill drilling ultimate oil recovery (IDUR) models using reservoir parameters for San Andres and Clearfork carbonate formations in west Texas.
In all cases, we compared the results from the neural network models with the results from statistical regression and non-parametric models. The results show that it is possible to obtain the highest cross-correlation coefficient between predicted and actual target variables, and the lowest average absolute errors, using the integrated techniques of multivariate statistical analysis and neural networks in our intelligent software.

  16. Distorted Character Recognition Via An Associative Neural Network

    NASA Astrophysics Data System (ADS)

    Messner, Richard A.; Szu, Harold H.

    1987-03-01

    The purpose of this paper is two-fold. First, it is intended to provide some preliminary results of a character recognition scheme which has foundations in ongoing neural network architecture modeling; second, to apply some of the neural network results in a real application area where thirty years of effort has had little effect on providing machines the ability to recognize distorted objects within the same object class. It is the authors' belief that the time is ripe to start applying in earnest the results of over twenty years of effort in neural modeling to some of the more difficult problems which seem so hard to solve by conventional means. The character recognition scheme proposed utilizes a preprocessing stage which performs a two-dimensional Walsh transform of an input Cartesian image field and then sequency-filters this spectrum into three feature bands. Various features are then extracted and organized into three sets of feature vectors, which are stored and recalled associatively. Two possible associative neural memory models are proposed for further investigation. The first is an outer-product linear matrix associative memory with a threshold function controlling the strength of the output pattern (similar to Kohonen's cross-correlation approach [1]). The second is based upon a modified version of Grossberg's neural architecture [2], which provides better self-organizing properties due to its adaptive nature. Preliminary results of the sequency filtering and feature extraction preprocessing stage, and a discussion of the use of the proposed neural architectures, are included.

  17. A Novel Higher Order Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Xu, Shuxiang

    2010-05-01

    In this paper a new Higher Order Neural Network (HONN) model is introduced and applied to several data mining tasks. Data mining extracts hidden patterns and valuable information from large databases. A hyperbolic tangent function is used as the neuron activation function for the new HONN model. Experiments are conducted to demonstrate the advantages and disadvantages of the new HONN model compared with several conventional Artificial Neural Network (ANN) models: a feedforward ANN with the sigmoid activation function, a feedforward ANN with the hyperbolic tangent activation function, and a Radial Basis Function (RBF) ANN with the Gaussian activation function. The experimental results suggest that the new HONN offers higher generalization capability as well as better handling of missing data.
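    The higher-order idea, feeding products of inputs through a tanh neuron, can be sketched minimally (one common HONN formulation using second-order terms; the paper's exact model may differ, and the function names are ours):

    ```python
    import math

    def honn_features(x):
        """First-order terms plus all second-order (pairwise product) terms."""
        feats = list(x)
        for i in range(len(x)):
            for j in range(i, len(x)):
                feats.append(x[i] * x[j])
        return feats

    def honn_forward(x, weights, bias):
        """Single higher-order neuron with hyperbolic tangent activation."""
        z = bias + sum(w * f for w, f in zip(weights, honn_features(x)))
        return math.tanh(z)
    ```

    The product terms let a single neuron represent input interactions that a first-order perceptron would need hidden layers to capture.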

  18. Statistical process control using optimized neural networks: a case study.

    PubMed

    Addeh, Jalil; Ebrahimzadeh, Ata; Azarbad, Milad; Ranaee, Vahid

    2014-09-01

    The most common statistical process control (SPC) tools employed for monitoring process changes are control charts. A control chart demonstrates that the process has altered by generating an out-of-control signal. This study investigates the design of an accurate system for control chart pattern (CCP) recognition in two respects. First, an efficient system is introduced that includes two main modules: a feature extraction module and a classifier module. In the feature extraction module, a proper set of shape and statistical features is proposed as efficient characteristics of the patterns. In the classifier module, several neural networks, such as the multilayer perceptron, probabilistic neural network and radial basis function network, are investigated. Based on an experimental study, the best classifier is chosen to recognize the CCPs. Second, a hybrid heuristic recognition system based on the cuckoo optimization algorithm (COA) is introduced to improve the generalization performance of the classifier. The simulation results show that the proposed algorithm has high recognition accuracy. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
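    The shape/statistical features mentioned above can be illustrated with a minimal sketch (this particular feature set, mean, standard deviation, least-squares slope and mean crossings, is our assumption of typical CCP features, not the paper's exact set):

    ```python
    def ccp_features(x):
        """Toy feature vector for a control chart window x."""
        n = len(x)
        mean = sum(x) / n
        var = sum((v - mean) ** 2 for v in x) / n
        # Least-squares slope captures trend patterns.
        t_mean = (n - 1) / 2
        slope = sum((t - t_mean) * (v - mean) for t, v in enumerate(x)) / \
                sum((t - t_mean) ** 2 for t in range(n))
        # Mean crossings help separate cyclic patterns from sudden shifts.
        crossings = sum(1 for a, b in zip(x, x[1:])
                        if (a - mean) * (b - mean) < 0)
        return {"mean": mean, "std": var ** 0.5,
                "slope": slope, "crossings": crossings}
    ```

    Feature vectors like this, rather than raw chart values, are what the classifier module would receive.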

  19. Local community detection as pattern restoration by attractor dynamics of recurrent neural networks.

    PubMed

    Okamoto, Hiroshi

    2016-08-01

    Densely connected parts in networks are referred to as "communities". Community structure is a hallmark of a variety of real-world networks. Individual communities in networks form functional modules of complex systems described by networks. Therefore, finding communities in networks is essential to approaching and understanding complex systems described by networks. In fact, network science has made a great deal of effort to develop effective and efficient methods for detecting communities in networks. Here we put forward a type of community detection, which has been little examined so far but will be practically useful. Suppose that we are given a set of source nodes that includes some (but not all) of "true" members of a particular community; suppose also that the set includes some nodes that are not the members of this community (i.e., "false" members of the community). We propose to detect the community from this "imperfect" and "inaccurate" set of source nodes using attractor dynamics of recurrent neural networks. Community detection by the proposed method can be viewed as restoration of the original pattern from a deteriorated pattern, which is analogous to cue-triggered recall of short-term memory in the brain. We demonstrate the effectiveness of the proposed method using synthetic networks and real social networks for which correct communities are known. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Quantum Associative Neural Network with Nonlinear Search Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Rigui; Wang, Huian; Wu, Qian; Shi, Yang

    2012-03-01

    Based on an analysis of the properties of quantum linear superposition, and to overcome the complexity of the existing quantum associative memory proposed by Ventura, a new storage method for multiple patterns is proposed in this paper by constructing the quantum array with binary decision diagrams. The adoption of a nonlinear search algorithm also increases the pattern-recalling speed of this multiple-pattern model to O(log2 2^(n-t)) = O(n-t) time complexity, where n is the number of qubits and t is the number of qubits carrying the quantum information. Results of case analysis show that the associative neural network model proposed in this paper, based on quantum learning, is better and more optimized than other researchers' counterparts in avoiding additional qubits or extraordinary initial operators, in storing patterns, and in improving recalling speed.

  1. Similar patterns of neural activity predict memory function during encoding and retrieval.

    PubMed

    Kragel, James E; Ezzyat, Youssef; Sperling, Michael R; Gorniak, Richard; Worrell, Gregory A; Berry, Brent M; Inman, Cory; Lin, Jui-Jui; Davis, Kathryn A; Das, Sandhitsu R; Stein, Joel M; Jobst, Barbara C; Zaghloul, Kareem A; Sheth, Sameer A; Rizzuto, Daniel S; Kahana, Michael J

    2017-07-15

    Neural networks that span the medial temporal lobe (MTL), prefrontal cortex, and posterior cortical regions are essential to episodic memory function in humans. Encoding and retrieval are supported by the engagement of both distinct neural pathways across the cortex and common structures within the medial temporal lobes. However, the degree to which memory performance can be determined by neural processing that is common to encoding and retrieval remains to be determined. To identify neural signatures of successful memory function, we administered a delayed free-recall task to 187 neurosurgical patients implanted with subdural or intraparenchymal depth electrodes. We developed multivariate classifiers to identify patterns of spectral power across the brain that independently predicted successful episodic encoding and retrieval. During encoding and retrieval, patterns of increased high-frequency activity in prefrontal, MTL, and inferior parietal cortices, accompanied by widespread decreases in low-frequency power across the brain, predicted successful memory function. Using a cross-decoding approach, we demonstrate the ability to predict memory function across distinct phases of the free-recall task. Furthermore, we demonstrate that classifiers that combine information from both encoding and retrieval states can outperform task-independent models. These findings suggest that the engagement of a core memory network during either encoding or retrieval shapes the ability to remember the past, despite distinct neural interactions that facilitate encoding and retrieval. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Fuzzy logic and neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Lea, Robert N.; Savely, Robert T.

    1992-01-01

    Applications of fuzzy logic technologies in NASA projects are reviewed to examine their advantages in the development of neural networks for aerospace and commercial expert systems and control. Examples of fuzzy-logic applications include a 6-DOF spacecraft controller, collision-avoidance systems, and reinforcement-learning techniques. The commercial applications examined include a fuzzy autofocusing system, an air conditioning system, and an automobile transmission application. The practical use of fuzzy logic is set in the theoretical context of artificial neural systems (ANSs) to give the background for an overview of ANS research programs at NASA. The research and application programs include the Network Execution and Training Simulator and faster training algorithms such as the Difference Optimized Training Scheme. The networks are well suited for pattern-recognition applications such as predicting sunspots, controlling posture maintenance, and conducting adaptive diagnoses.

  3. Ground states of partially connected binary neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1990-01-01

    Neural networks defined by outer products of vectors over (-1, 0, 1) are considered. Patterns over (-1, 0, 1) define by their outer products partially connected neural networks consisting of internally strongly connected, externally weakly connected subnetworks. Subpatterns over (-1, 1) define subnetworks, and their combinations that agree in the common bits define permissible words. It is shown that the permissible words are locally stable states of the network, provided that each of the subnetworks stores mutually orthogonal subwords, or, at most, two subwords. It is also shown that when each of the subnetworks stores two mutually orthogonal binary subwords at most, the permissible words, defined as the combinations of the subwords (one corresponding to each subnetwork), that agree in their common bits are the unique ground states of the associated energy function.
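    The outer-product construction and the local-stability condition can be sketched as follows (a minimal illustration for patterns over (-1, 1); function names are ours, not the paper's):

    ```python
    def outer_product_weights(patterns):
        """Sum of outer products with zero diagonal (no self-connections)."""
        n = len(patterns[0])
        W = [[0] * n for _ in range(n)]
        for p in patterns:
            for i in range(n):
                for j in range(n):
                    if i != j:
                        W[i][j] += p[i] * p[j]
        return W

    def is_locally_stable(W, state):
        """A state is locally stable if every nonzero unit's sign agrees
        with the sign of its local field."""
        for i, s in enumerate(state):
            field = sum(W[i][j] * state[j] for j in range(len(state)))
            if s != 0 and s * field <= 0:
                return False
        return True
    ```

    Zero entries in a pattern simply contribute nothing to the outer product, which is how the partially connected subnetwork structure in the abstract arises.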

  4. A Neural Network Design for the Estimation of Nonlinear Behavior of a Magnetically-Excited Piezoelectric Harvester

    NASA Astrophysics Data System (ADS)

    Çelik, Emre; Uzun, Yunus; Kurt, Erol; Öztürk, Nihat; Topaloğlu, Nurettin

    2018-01-01

    An application of an artificial neural network (ANN) is presented in this article to model the nonlinear relationship of the harvested electrical power of a recently developed piezoelectric pendulum with respect to its resistive load R_L and magnetic excitation frequency f. Prediction of the harvested power over a wide range is a difficult task, because the power increases dramatically when f gets close to the natural frequency f_0 of the system. The neural model of the system is designed on the basis of a standard multi-layer network with a back-propagation learning algorithm. Input data (input patterns) presented to the network and the respective output data (output patterns) describing the desired network output are carefully collected from the experiment under several conditions in order to train the developed network accurately. Results indicate that the designed ANN is an effective means of predicting the harvested power of the piezoelectric harvester as a function of R_L and f, with a root mean square error of 6.65 × 10^-3 for training and 1.40 for different test conditions. Using the proposed approach, the harvested power can be estimated reasonably without tackling the difficulty of experimental studies or the complexity of analytical formulas representing the system.

  5. Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure.

    PubMed

    Li, Xiumin; Small, Michael

    2012-06-01

    A neuronal avalanche is a spontaneous neuronal activity event; avalanche sizes obey a power-law distribution of population event sizes with an exponent of -3/2. It has been observed in the superficial layers of cortex both in vivo and in vitro. In this paper, we analyze the information transmission of a novel self-organized neural network with active-neuron-dominant structure. Neuronal avalanches can be observed in this network with appropriate input intensity. We find that the process of network learning via spike-timing dependent plasticity dramatically increases the complexity of the network structure, which is finally self-organized into active-neuron-dominant connectivity. Both the entropy of activity patterns and the complexity of their resulting post-synaptic inputs are maximized when the network dynamics propagate as neuronal avalanches. This emergent topology is beneficial for information transmission with high efficiency and could also be responsible for the large information capacity of this network compared with alternative archetypal networks with different neural connectivity.
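    The avalanche statistic itself can be illustrated with a minimal sketch (the binned population-activity representation is an assumption; real analyses first bin spike times):

    ```python
    def avalanche_sizes(activity):
        """An avalanche is a run of consecutive time bins with nonzero
        population activity; its size is the total spike count in the run."""
        sizes, current = [], 0
        for a in activity:
            if a > 0:
                current += a
            elif current > 0:
                sizes.append(current)
                current = 0
        if current > 0:
            sizes.append(current)
        return sizes
    ```

    Fitting the empirical distribution of these sizes and checking for a slope near -3/2 on a log-log plot is the standard criticality test alluded to in the abstract.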

  6. Detection of Anomalies in Hydrometric Data Using Artificial Intelligence Techniques

    NASA Astrophysics Data System (ADS)

    Lauzon, N.; Lence, B. J.

    2002-12-01

    This work focuses on the detection of anomalies in hydrometric data sequences, such as 1) outliers, which are individual data having statistical properties that differ from those of the overall population; 2) shifts, which are sudden changes over time in the statistical properties of the historical records of data; and 3) trends, which are systematic changes over time in the statistical properties. For the purpose of the design and management of water resources systems, it is important to be aware of these anomalies in hydrometric data, for they can induce a bias in the estimation of water quantity and quality parameters. These anomalies may be viewed as specific patterns affecting the data, and therefore pattern recognition techniques can be used for identifying them. However, the number of possible patterns is very large for each type of anomaly and consequently large computing capacities are required to account for all possibilities using the standard statistical techniques, such as cluster analysis. Artificial intelligence techniques, such as the Kohonen neural network and fuzzy c-means, are clustering techniques commonly used for pattern recognition in several areas of engineering and have recently begun to be used for the analysis of natural systems. They require much less computing capacity than the standard statistical techniques, and therefore are well suited for the identification of outliers, shifts and trends in hydrometric data. This work constitutes a preliminary study, using synthetic data representing hydrometric data that can be found in Canada. The analysis of the results obtained shows that the Kohonen neural network and fuzzy c-means are reasonably successful in identifying anomalies. This work also addresses the problem of uncertainties inherent to the calibration procedures that fit the clusters to the possible patterns for both the Kohonen neural network and fuzzy c-means. 
Indeed, for the same database, different sets of clusters can be established with these calibration procedures. A simple method for analyzing uncertainties associated with the Kohonen neural network and fuzzy c-means is developed here. The method combines the results from several sets of clusters, either from the Kohonen neural network or fuzzy c-means, so as to provide an overall diagnosis as to the identification of outliers, shifts and trends. The results indicate an improvement in the performance for identifying anomalies when the method of combining cluster sets is used, compared with when only one cluster set is used.
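    The combination step described above can be sketched as a simple vote across cluster sets (a minimal sketch; the boolean flag representation and the voting threshold are our assumptions, not the authors' exact procedure):

    ```python
    def combine_diagnoses(flag_sets, threshold=0.5):
        """flag_sets: one list of boolean anomaly flags per cluster set,
        all over the same data points. A point is declared anomalous when
        the fraction of cluster sets flagging it exceeds the threshold."""
        n = len(flag_sets[0])
        votes = [sum(fs[i] for fs in flag_sets) / len(flag_sets)
                 for i in range(n)]
        return [v > threshold for v in votes]
    ```

    Aggregating over several calibrations this way damps the sensitivity of any single Kohonen or fuzzy c-means run to its initialization.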

  7. Modification of a neuronal network direction using stepwise photo-thermal etching of an agarose architecture.

    PubMed

    Suzuki, Ikurou; Sugio, Yoshihiro; Moriguchi, Hiroyuki; Jimbo, Yasuhiko; Yasuda, Kenji

    2004-07-01

    Control over the spatial distribution of individual neurons and the pattern of a neural network provides an important tool for studying information processing pathways during neural network formation. Moreover, knowledge of the direction of synaptic connections between cells in each neural network can provide detailed information on the relationship between forward and feedback signaling. We have developed a method for topographical control of the direction of synaptic connections within a living neuronal network using a new type of individual-cell-based on-chip cell-cultivation system with an agarose microchamber array (AMCA). The advantages of this system include the ability to control the positions and number of cultured cells as well as flexible control of the direction of elongation of axons through stepwise melting of narrow grooves. Such micrometer-order microchannels are obtained by photo-thermal etching of agarose, where a portion of the gel is melted with a 1064-nm infrared laser beam. Using this system, we created a neural network from individual rat hippocampal cells. We were able to control the elongation of individual axons during cultivation (from cells contained within the AMCA) by non-destructive stepwise photo-thermal etching. We have demonstrated the potential of our on-chip AMCA cell-cultivation system for the controlled development of individual-cell-based neural networks.

  8. A review and analysis of neural networks for classification of remotely sensed multispectral imagery

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1993-01-01

    A literature survey and analysis of the use of neural networks for the classification of remotely sensed multispectral imagery is presented. As part of a brief mathematical review, the backpropagation algorithm, which is the most common method of training multi-layer networks, is discussed with an emphasis on its application to pattern recognition. The analysis is divided into five aspects of neural network classification: (1) input data preprocessing, structure, and encoding; (2) output encoding and extraction of classes; (3) network architecture; (4) training algorithms; and (5) comparisons to conventional classifiers. The advantages of the neural network method over traditional classifiers are its non-parametric nature, arbitrary decision boundary capabilities, easy adaptation to different types of data and input structures, fuzzy output values that can enhance classification, and good generalization for use with multiple images. The disadvantages of the method are slow training time, inconsistent results due to random initial weights, and the requirement of obscure initialization values (e.g., learning rate and hidden layer size). Possible techniques for ameliorating these problems are discussed. It is concluded that, although the neural network method has several unique capabilities, it will become a useful tool in remote sensing only if it is made faster, more predictable, and easier to use.

  9. LVQ and backpropagation neural networks applied to NASA SSME data

    NASA Technical Reports Server (NTRS)

    Doniere, Timothy F.; Dhawan, Atam P.

    1993-01-01

    Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors, which in this application may be derived from a number of SSME ground test firings. This yields a large number of training vectors. Large training sets can cause the time required to train the network to be very large, and the network may not be able to generalize from them. To reduce the size of the training sets, the SSME test-firing data is reduced using a learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data for training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using complete data. The LVQ can also be used as a function approximator. The performance of the LVQ as a function approximator using reduced training sets is presented and compared with the performance of the backpropagation network.
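    LVQ-based reduction rests on the standard LVQ1 update, which moves a codebook vector toward same-label samples and away from different-label samples; the trained codebook then stands in for the full training set. A minimal sketch (hyperparameters, function names and the 1-D example are our assumptions):

    ```python
    def lvq1_train(data, labels, codebook, cb_labels, lr=0.1, epochs=10):
        """LVQ1: for each sample, adjust only the nearest codebook vector."""
        for _ in range(epochs):
            for x, y in zip(data, labels):
                # Index of the nearest codebook vector (squared Euclidean).
                k = min(range(len(codebook)),
                        key=lambda i: sum((a - b) ** 2
                                          for a, b in zip(codebook[i], x)))
                sign = 1.0 if cb_labels[k] == y else -1.0
                codebook[k] = [c + sign * lr * (a - c)
                               for c, a in zip(codebook[k], x)]
        return codebook
    ```

    With far fewer codebook vectors than training vectors, training the downstream network on the codebook gives the compression ratios discussed in the abstract.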

  10. Artificial neural networks as quantum associative memory

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen; Schrock, Jonathan; Imam, Neena; Humble, Travis

    We present results related to the recall accuracy and capacity of Hopfield networks implemented on commercially available quantum annealers. The use of Hopfield networks and artificial neural networks as content-addressable memories offers robust storage and retrieval of classical information; however, implementation of these models using currently available quantum annealers faces several challenges: the limits of precision when setting synaptic weights, the effects of spurious spin-glass states, and minor embedding of densely connected graphs into fixed-connectivity hardware. We consider neural networks which are less than fully connected, as well as neural networks which contain multiple sparsely connected clusters. We discuss the effect of weak edge dilution on the accuracy of memory recall, and how the multiple-clique structure affects the storage capacity. Our work focuses on the storage of patterns which can be embedded into physical hardware containing n < 1000 qubits. This work was supported by the United States Department of Defense and used resources of the Computational Research and Development Programs at Oak Ridge National Laboratory under Contract No. DE-AC0500OR22725 with the U.S. Department of Energy.
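    Hebbian storage, edge dilution and recall can be sketched classically as follows (a toy illustration of the classical model being embedded, not of the annealer implementation; function names are ours):

    ```python
    import random

    def hebbian_weights(patterns):
        """Outer-product (Hebbian) weights with zero diagonal."""
        n = len(patterns[0])
        return [[0 if i == j else sum(p[i] * p[j] for p in patterns)
                 for j in range(n)] for i in range(n)]

    def dilute(W, p, rng):
        """Weak edge dilution: delete each symmetric edge with probability p."""
        n = len(W)
        W2 = [row[:] for row in W]
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p:
                    W2[i][j] = W2[j][i] = 0
        return W2

    def recall(W, state, steps=20):
        """Synchronous sign updates until a fixed point (or step limit)."""
        for _ in range(steps):
            new = [1 if sum(W[i][j] * state[j]
                            for j in range(len(state))) >= 0 else -1
                   for i in range(len(state))]
            if new == state:
                break
            state = new
        return state
    ```

    Sweeping the dilution probability and measuring how often `recall` still recovers the stored pattern from a corrupted cue is a classical analogue of the edge-dilution experiments described above.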

  11. A software sensor model based on hybrid fuzzy neural network for rapid estimation water quality in Guangzhou section of Pearl River, China.

    PubMed

    Zhou, Chunshan; Zhang, Chao; Tian, Di; Wang, Ke; Huang, Mingzhi; Liu, Yanbiao

    2018-01-02

    In order to manage water resources, a software sensor model was designed to estimate water quality using a hybrid fuzzy neural network (FNN) in the Guangzhou section of the Pearl River, China. The software sensor system is composed of a data storage module, a fuzzy decision-making module, a neural network module and a fuzzy reasoning generator module. Fuzzy subtractive clustering was employed to capture the character of the model and to optimize the network architecture for enhanced performance. The results indicate that, on the basis of available on-line measured variables, the software sensor model can accurately predict water quality according to the relationship between chemical oxygen demand (COD) and dissolved oxygen (DO), pH and NH4+-N. Owing to its ability to recognize time-series patterns and non-linear characteristics, the software-sensor-based FNN is clearly superior to the traditional neural network model; its R (correlation coefficient), MAPE (mean absolute percentage error) and RMSE (root mean square error) are 0.8931, 10.9051 and 0.4634, respectively.
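    The three reported evaluation metrics can be computed as follows (a minimal sketch; variable names are ours, and MAPE assumes no zero observations):

    ```python
    import math

    def regression_metrics(y_true, y_pred):
        """Return (R, MAPE in percent, RMSE) for paired observations."""
        n = len(y_true)
        rmse = math.sqrt(sum((t - p) ** 2
                             for t, p in zip(y_true, y_pred)) / n)
        mape = 100.0 / n * sum(abs((t - p) / t)
                               for t, p in zip(y_true, y_pred))
        mt = sum(y_true) / n
        mp = sum(y_pred) / n
        cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
        r = cov / math.sqrt(sum((t - mt) ** 2 for t in y_true) *
                            sum((p - mp) ** 2 for p in y_pred))
        return r, mape, rmse
    ```

    A perfect predictor yields R = 1 with MAPE and RMSE of 0; the abstract's figures (0.8931, 10.9051, 0.4634) sit between that ideal and the weaker baseline model.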

  12. A circular model for song motor control in Serinus canaria

    PubMed Central

    Alonso, Rodrigo G.; Trevisan, Marcos A.; Amador, Ana; Goller, Franz; Mindlin, Gabriel B.

    2015-01-01

    Song production in songbirds is controlled by a network of nuclei distributed across several brain regions, which drives respiratory and vocal motor systems to generate sound. We built a model for birdsong production, whose variables are the average activities of different neural populations within these nuclei of the song system. We focus on the predictions of respiratory patterns of song, because these can be easily measured and therefore provide a validation for the model. We test the hypothesis that it is possible to construct a model in which (1) the activity of an expiratory related (ER) neural population fits the observed pressure patterns used by canaries during singing, and (2) a higher forebrain neural population, HVC, is sparsely active, simultaneously with significant motor instances of the pressure patterns. We show that in order to achieve these two requirements, the ER neural population needs to receive two inputs: a direct one, and its copy after being processed by other areas of the song system. The model is capable of reproducing the measured respiratory patterns and makes specific predictions on the timing of HVC activity during their production. These results suggest that vocal production is controlled by a circular network rather than by a simple top-down architecture. PMID:25904860

  13. Automated target recognition and tracking using an optical pattern recognition neural network

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin

    1991-01-01

    The on-going development of an automatic target recognition and tracking system at the Jet Propulsion Laboratory is presented. This system is an optical pattern recognition neural network (OPRNN) that integrates an innovative optical parallel processor with a feature-extraction-based neural net training algorithm. The parallel optical processor provides high speed and vast parallelism as well as full shift invariance. The neural network algorithm enables simultaneous discrimination of multiple noisy targets despite variations in their scale, rotation, perspective, and other deformations. This fully developed OPRNN system can be effectively utilized for the automated spacecraft recognition and tracking that will lead to success in the Automated Rendezvous and Capture (AR&C) of the unmanned Cargo Transfer Vehicle (CTV). One of the most powerful optical parallel processors for automatic target recognition is the multichannel correlator. With the inherent advantages of parallel processing capability and shift invariance, multiple objects can be simultaneously recognized and tracked using this multichannel correlator. This target tracking capability can be greatly enhanced by utilizing a powerful feature-extraction-based neural network training algorithm such as the neocognitron. The OPRNN, currently under investigation at JPL, is constructed with an optical multichannel correlator whose holographic filters have been prepared using the neocognitron training algorithm. The computation speed of the neocognitron-type OPRNN is up to 10(exp 14) analog connections/sec, enabling the OPRNN to outperform its state-of-the-art electronic counterpart by at least two orders of magnitude.

  14. What Neural Substrates Trigger the Adept Scientific Pattern Discovery by Biologists?

    NASA Astrophysics Data System (ADS)

    Lee, Jun-Ki; Kwon, Yong-Ju

    2011-04-01

    This study investigated the neural correlates of experts and novices during biological object pattern detection using an fMRI approach in order to reveal the neural correlates of a biologist's superior pattern discovery ability. Sixteen healthy male participants (8 biologists and 8 non-biologists) volunteered for the study. Participants were shown fifteen series of organism pictures and asked to detect patterns amid stimulus pictures. Primary findings showed significant activations in the right middle temporal gyrus and inferior parietal lobule amongst participants in the biologist (expert) group. Interestingly, the left superior temporal gyrus was activated in participants from the non-biologist (novice) group. These results suggested that superior pattern discovery ability could be related to a functional facilitation of the parieto-temporal network, which is particularly driven by the right middle temporal gyrus and inferior parietal lobule in addition to the recruitment of additional brain regions. Furthermore, the functional facilitation of the network might actually pertain to high coherent processing skills and visual working memory capacity. Hence, study results suggested that adept scientific thinking ability can be detected by neuronal substrates, which may be used as criteria for developing and evaluating a brain-based science curriculum and test instrument.

  15. Improving subjective pattern recognition in chemical senses through reduction of nonlinear effects in evaluation of sparse data

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Rasouli, Firooz; Wrenn, Susan E.; Subbiah, M.

    2002-11-01

    Artificial neural network models are typically useful in pattern recognition and extraction of important features in large data sets. These models are implemented in a wide variety of contexts and with diverse types of input-output data. The underlying mathematics of supervised training of neural networks is ultimately tied to the ability to approximate the nonlinearities that are inherent in the network's generalization ability. The quality and availability of sufficient data points for training and validation play a key role in the generalization ability of the network. A potential domain of application of neural networks is the analysis of subjective data, such as in consumer science, affective neuroscience and perception of chemical senses. In applications of ANN to subjective data, it is common to rely on knowledge of the science and context for data acquisition, for instance as a priori probabilities in the Bayesian framework. In this paper, we discuss the circumstances that create challenges for the success of neural network models for subjective data analysis, such as sparseness of data and the cost of acquisition of additional samples. In particular, in the case of affect and perception of chemical senses, we suggest that the inherent ambiguity of subjective responses could be offset by a combination of human and machine expertise. We propose a method of pre- and post-processing for blind analysis of data that relies on heuristics from human performance in interpretation of data. In particular, we offer an information-theoretic smoothing (ITS) algorithm that optimizes the geometric visualization of multi-dimensional data and improves human interpretation of the input-output view of neural network implementations. The pre- and post-processing algorithms and ITS are unsupervised. Finally, we discuss the details of an example of blind data analysis from actual taste-smell subjective data, and demonstrate the usefulness of PCA in reduction of dimensionality, as well as ITS.

  16. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning

    NASA Astrophysics Data System (ADS)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Chan, Pak Kwan; Tin, Chung

    2018-02-01

    Objective. Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). Approach. The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblinks in the animal during training. Main results. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. Significance. This integrated system provides sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized to include other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.
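    The cerebellar SNN above is far richer than any toy model, but its basic building block, a spiking integrate-and-fire neuron, can be sketched as follows; the constants are illustrative textbook values, not the paper's:

```python
def lif_spike_train(current, dt=1.0, tau=20.0, v_rest=-70.0,
                    v_thresh=-54.0, v_reset=-80.0):
    """Leaky integrate-and-fire neuron: integrate the input current and
    emit a spike (recording its time step) whenever the membrane
    potential crosses threshold, then reset."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(current):
        # Euler step of dv/dt = (v_rest - v + i_in) / tau
        v += dt * (v_rest - v + i_in) / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes
```

On an FPGA, one such update rule per neuron is evaluated in parallel each time step, which is what makes real-time simulation of ~10 000 units feasible.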

  17. From neural plate to cortical arousal-a neuronal network theory of sleep derived from in vitro "model" systems for primordial patterns of spontaneous bioelectric activity in the vertebrate central nervous system.

    PubMed

    Corner, Michael A

    2013-05-22

    In the early 1960s intrinsically generated widespread neuronal discharges were discovered to be the basis for the earliest motor behavior throughout the animal kingdom. The pattern generating system is in fact programmed into the developing nervous system, in a regionally specific manner, as early as the neural plate stage. Such rhythmically modulated phasic bursts were next discovered to be a general feature of developing neural networks and, largely on the basis of experimental interventions in cultured neural tissues, to contribute significantly to their morpho-physiological maturation. In particular, the level of spontaneous synchronized bursting is homeostatically regulated, and has the effect of constraining the development of excessive network excitability. After birth or hatching, this "slow-wave" activity pattern becomes sporadically suppressed in favor of sensory oriented "waking" behaviors better adapted to dealing with environmental contingencies. It nevertheless reappears periodically as "sleep" at several species-specific points in the diurnal/nocturnal cycle. Although this "default" behavior pattern evolves with development, its essential features are preserved throughout the life cycle, and are based upon a few simple mechanisms which can be both experimentally demonstrated and simulated by computer modeling. In contrast, a late onto- and phylogenetic aspect of sleep, viz., the intermittent "paradoxical" activation of the forebrain so as to mimic waking activity, is much less well understood as regards its contribution to brain development. Some recent findings dealing with this question by means of cholinergically induced "aroused" firing patterns in developing neocortical cell cultures, followed by quantitative electrophysiological assays of immediate and long-term sequelae, will be discussed in connection with their putative implications for sleep ontogeny.

  18. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model.

    PubMed

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher; Gail, Alexander

    2015-04-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement ("jump") consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. Copyright © 2015 the American Physiological Society.

  19. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model

    PubMed Central

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher

    2015-01-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement (“jump”) consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. PMID:25609106

  20. Learning and retrieval behavior in recurrent neural networks with pre-synaptic dependent homeostatic plasticity

    NASA Astrophysics Data System (ADS)

    Mizusaki, Beatriz E. P.; Agnes, Everton J.; Erichsen, Rubem; Brunnet, Leonardo G.

    2017-08-01

    The plastic character of brain synapses is considered to be one of the foundations for the formation of memories. Numerous kinds of such phenomena are currently described in the literature, but their role in the development of information pathways in neural networks with recurrent architectures is still not completely clear. In this paper we study the role of an activity-based process, called pre-synaptic dependent homeostatic scaling, in the organization of networks that yield precise-timed spiking patterns. It encodes spatio-temporal information in the synaptic weights as it associates a learned input with a specific response. We introduce a correlation measure to evaluate the precision of the spiking patterns and explore the effects of different inhibitory interactions and learning parameters. We find that long learning periods are important in order to improve the network's learning capacity, and we discuss this ability in the presence of distinct inhibitory currents.
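    The paper's pre-synaptic dependent rule is specific to its model; as a generic illustration of homeostatic synaptic scaling, the sketch below multiplicatively rescales a unit's input weights so that its output rate relaxes to a target value (the rule and constants are our simplification, not the authors'):

```python
def homeostatic_scaling(weights, inputs, target_rate, eta=0.1, steps=200):
    """Multiplicative homeostatic scaling: all input weights of a unit
    are scaled by the same factor each step, pushing the unit's firing
    rate toward `target_rate` while preserving relative weight ratios."""
    w = list(weights)
    for _ in range(steps):
        # Rectified linear rate of the unit for the given input pattern.
        rate = max(0.0, sum(wi * xi for wi, xi in zip(w, inputs)))
        # Common scale factor: grow weights if below target, shrink if above.
        factor = 1.0 + eta * (target_rate - rate) / target_rate
        w = [wi * factor for wi in w]
    return w
```

Because every weight is multiplied by the same factor, the information stored in the weight ratios (the learned input-output association) is untouched while overall excitability is regulated.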

  1. On a phase diagram for random neural networks with embedded spike timing dependent plasticity.

    PubMed

    Turova, Tatyana S; Villa, Alessandro E P

    2007-01-01

    This paper presents an original mathematical framework based on graph theory which is a first attempt to investigate the dynamics of a model of neural networks with embedded spike timing dependent plasticity. The neurons correspond to integrate-and-fire units located at the vertices of a finite subset of a 2D lattice. There are two types of vertices, corresponding to the inhibitory and the excitatory neurons. The edges are directed and labelled by the discrete values of the synaptic strength. We assume that there is an initial firing pattern corresponding to a subset of units that generate a spike. The number of externally activated vertices is a small fraction of the entire network. The model presented here describes how such a pattern propagates throughout the network as a random walk on a graph. Several results are compared with computational simulations and new data are presented for identifying critical parameters of the model.

  2. Neural networks for structural design - An integrated system implementation

    NASA Technical Reports Server (NTRS)

    Berke, Laszlo; Hafez, Wassim; Pao, Yoh-Han

    1992-01-01

    The development of powerful automated procedures to aid the creative designer is becoming increasingly critical for complex design tasks. In the work described here, artificial neural networks are applied to acquire structural analysis and optimization domain expertise. Based on initial instructions from the user, an automated procedure generates random instances of structural analysis and/or optimization 'experiences' that cover a desired domain. It extracts training patterns from the created instances, constructs and trains an appropriate network architecture, and checks the accuracy of net predictions. The final product is a trained neural net that can estimate analysis and/or optimization results instantaneously.

  3. Spiking Neurons for Analysis of Patterns

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance

    2008-01-01

    Artificial neural networks comprising spiking neurons of a novel type have been conceived as improved pattern-analysis and pattern-recognition computational systems. These neurons are represented by a mathematical model denoted the state-variable model (SVM), which among other things, exploits a computational parallelism inherent in spiking-neuron geometry. Networks of SVM neurons offer advantages of speed and computational efficiency, relative to traditional artificial neural networks. The SVM also overcomes some of the limitations of prior spiking-neuron models. There are numerous potential pattern-recognition, tracking, and data-reduction (data preprocessing) applications for these SVM neural networks on Earth and in exploration of remote planets. Spiking neurons imitate biological neurons more closely than do the neurons of traditional artificial neural networks. A spiking neuron includes a central cell body (soma) surrounded by a tree-like interconnection network (dendrites). Spiking neurons are so named because they generate trains of output pulses (spikes) in response to inputs received from sensors or from other neurons. They gain their speed advantage over traditional neural networks by using the timing of individual spikes for computation, whereas traditional artificial neurons use averages of activity levels over time. Moreover, spiking neurons use the delays inherent in dendritic processing in order to efficiently encode the information content of incoming signals. Because traditional artificial neurons fail to capture this encoding, they have less processing capability, and so it is necessary to use more gates when implementing traditional artificial neurons in electronic circuitry. Such higher-order functions as dynamic tasking are effected by use of pools (collections) of spiking neurons interconnected by spike-transmitting fibers. 
The SVM includes adaptive thresholds and submodels of transport of ions (in imitation of such transport in biological neurons). These features enable the neurons to adapt their responses to high-rate inputs from sensors, and to adapt their firing thresholds to mitigate noise or effects of potential sensor failure. The mathematical derivation of the SVM starts from a prior model, known in the art as the point soma model, which captures all of the salient properties of neuronal response while keeping the computational cost low. The point-soma latency time is modified to be an exponentially decaying function of the strength of the applied potential. Choosing computational efficiency over biological fidelity, the dendrites surrounding a neuron are represented by simplified compartmental submodels and there are no dendritic spines. Updates to the dendritic potential, calcium-ion concentrations and conductances, and potassium-ion conductances are done by use of equations similar to those of the point soma. Diffusion processes in dendrites are modeled by averaging among nearest-neighbor compartments. Inputs to each of the dendritic compartments come from sensors. Alternatively or in addition, when an affected neuron is part of a pool, inputs can come from other spiking neurons. At present, SVM neural networks are implemented by computational simulation, using algorithms that encode the SVM and its submodels. However, it should be possible to implement these neural networks in hardware: The differential equations for the dendritic and cellular processes in the SVM model of spiking neurons map to equivalent circuits that can be implemented directly in analog very-large-scale integrated (VLSI) circuits.

  4. Attractor neural networks with resource-efficient synaptic connectivity

    NASA Astrophysics Data System (ADS)

    Pehlevan, Cengiz; Sengupta, Anirvan

    Memories are thought to be stored in the attractor states of recurrent neural networks. Here we explore how resource constraints interplay with memory storage function to shape the synaptic connectivity of attractor networks. We propose that, given a set of memories in the form of population activity patterns, the neural circuit chooses a synaptic connectivity configuration that minimizes a resource usage cost. We argue that the total synaptic weight (l1-norm) in the network measures the resource cost because synaptic weight is correlated with synaptic volume, which is a limited resource, and is proportional to neurotransmitter release and post-synaptic current, both of which cost energy. Using numerical simulations and replica theory, we characterize optimal connectivity profiles in resource-efficient attractor networks. Our theory explains several experimental observations on cortical connectivity profiles: (1) connectivity is sparse, because synapses are costly; (2) bidirectional connections are overrepresented; and (3) bidirectional connections are stronger, because attractor states need strong recurrence.
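    As a toy illustration of attractor recall coexisting with sparse connectivity, the sketch below stores two orthogonal patterns in a standard Hopfield-style network, counts the weights that are exactly zero (and hence free to be pruned), and recalls a corrupted pattern. This is a textbook Hebbian construction, not the paper's l1-minimizing replica-theory model:

```python
def outer_store(patterns, n):
    """Hebbian storage: W[i][j] = sum over patterns of p[i]*p[j]/n,
    with no self-connections (zero diagonal)."""
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=5):
    """Synchronous sign-threshold updates; converges to an attractor."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

n = 20
p1 = [1] * 10 + [-1] * 10        # two mutually orthogonal +/-1 patterns
p2 = [1, -1] * 10
w = outer_store([p1, p2], n)
# Resource-efficient connectivity: many weights cancel to exactly zero,
# so the effective wiring is sparse.
nonzero = sum(1 for row in w for x in row if abs(x) > 1e-9)
noisy = [-p1[0], -p1[1]] + p1[2:]  # corrupt two bits of p1
```

Running `recall(w, noisy)` restores `p1` even though fewer than half of the possible connections carry any weight.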

  5. Six networks on a universal neuromorphic computing substrate.

    PubMed

    Pfeil, Thomas; Grübl, Andreas; Jeltsch, Sebastian; Müller, Eric; Müller, Paul; Petrovici, Mihai A; Schmuker, Michael; Brüderle, Daniel; Schemmel, Johannes; Meier, Karlheinz

    2013-01-01

    In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.

  6. Six Networks on a Universal Neuromorphic Computing Substrate

    PubMed Central

    Pfeil, Thomas; Grübl, Andreas; Jeltsch, Sebastian; Müller, Eric; Müller, Paul; Petrovici, Mihai A.; Schmuker, Michael; Brüderle, Daniel; Schemmel, Johannes; Meier, Karlheinz

    2013-01-01

    In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality. PMID:23423583

  7. Development of programmable artificial neural networks

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J.

    1993-01-01

    Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time consuming and unpredictable process. A general method is being developed to mate the adaptability of the ANN with the speed and precision of the digital computer. This method was successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.

  8. Formal Models of the Network Co-occurrence Underlying Mental Operations.

    PubMed

    Bzdok, Danilo; Varoquaux, Gaël; Grisel, Olivier; Eickenberg, Michael; Poupon, Cyril; Thirion, Bertrand

    2016-06-01

    Systems neuroscience has identified a set of canonical large-scale networks in humans. These have predominantly been characterized by resting-state analyses of the task-unconstrained, mind-wandering brain. Their explicit relationship to defined task performance is largely unknown and remains challenging. The present work contributes a multivariate statistical learning approach that can extract the major brain networks and quantify their configuration during various psychological tasks. The method is validated in two extensive datasets (n = 500 and n = 81) by model-based generation of synthetic activity maps from recombination of shared network topographies. To study a use case, we formally revisited the poorly understood difference between neural activity underlying idling versus goal-directed behavior. We demonstrate that task-specific neural activity patterns can be explained by plausible combinations of resting-state networks. The possibility of decomposing a mental task into the relative contributions of major brain networks, the "network co-occurrence architecture" of a given task, opens an alternative access to the neural substrates of human cognition.

  9. Formal Models of the Network Co-occurrence Underlying Mental Operations

    PubMed Central

    Bzdok, Danilo; Varoquaux, Gaël; Grisel, Olivier; Eickenberg, Michael; Poupon, Cyril; Thirion, Bertrand

    2016-01-01

    Systems neuroscience has identified a set of canonical large-scale networks in humans. These have predominantly been characterized by resting-state analyses of the task-unconstrained, mind-wandering brain. Their explicit relationship to defined task performance is largely unknown and remains challenging. The present work contributes a multivariate statistical learning approach that can extract the major brain networks and quantify their configuration during various psychological tasks. The method is validated in two extensive datasets (n = 500 and n = 81) by model-based generation of synthetic activity maps from recombination of shared network topographies. To study a use case, we formally revisited the poorly understood difference between neural activity underlying idling versus goal-directed behavior. We demonstrate that task-specific neural activity patterns can be explained by plausible combinations of resting-state networks. The possibility of decomposing a mental task into the relative contributions of major brain networks, the "network co-occurrence architecture" of a given task, opens an alternative access to the neural substrates of human cognition. PMID:27310288

  10. A loop-based neural architecture for structured behavior encoding and decoding.

    PubMed

    Gisiger, Thomas; Boukadoum, Mounir

    2018-02-01

    We present a new type of artificial neural network that generalizes on anatomical and dynamical aspects of the mammalian brain. Its main novelty lies in its topological structure, which is built as an array of interacting elementary motifs shaped like loops. These loops come in various types and can implement functions such as gating, inhibitory or executive control, or encoding of task elements, to name a few. Each loop features two sets of neurons and a control region, linked together by non-recurrent projections. The two neural sets do the bulk of the loop's computations while the control unit specifies the timing and the conditions under which the computations implemented by the loop are to be performed. By functionally linking many such loops together, a neural network is obtained that may perform complex cognitive computations. To demonstrate the potential offered by such a system, we present two neural network simulations. The first illustrates the structure and dynamics of a single loop implementing a simple gating mechanism. The second simulation shows how connecting four loops in series can produce neural activity patterns that are sufficient to pass a simplified delayed-response task. We also show that this network reproduces electrophysiological measurements gathered in various regions of the brain of monkeys performing similar tasks. We also demonstrate connections between this type of neural network and recurrent or long short-term memory network models, and suggest ways to generalize them for future artificial intelligence research. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Sentiment analysis: a comparison of deep learning neural network algorithm with SVM and Naïve Bayes for Indonesian text

    NASA Astrophysics Data System (ADS)

    Calvin Frans Mariel, Wahyu; Mariyah, Siti; Pramana, Setia

    2018-03-01

    Deep learning is a new era of machine learning techniques that essentially imitate the structure and function of the human brain. It is a development of deeper Artificial Neural Networks (ANN) that use more than one hidden layer. A Deep Learning Neural Network has a great ability to recognize patterns in various data types such as pictures, audio, text, and many more. In this paper, the authors try to measure the algorithm's ability by applying it to text classification. The classification task herein is done by considering the sentiment content of a text, which is also called sentiment analysis. Using several combinations of text preprocessing and feature extraction techniques, we aim to compare the modelling results of the Deep Learning Neural Network with two other commonly used algorithms, Naïve Bayes and the Support Vector Machine (SVM). This algorithm comparison uses Indonesian text data with balanced and unbalanced sentiment composition. Based on the experimental simulation, the Deep Learning Neural Network clearly outperforms Naïve Bayes and SVM, offering a better F1 score; the feature extraction technique that most improves the modelling result is the bigram.
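    For reference, the multinomial Naïve Bayes baseline that the paper compares against can be written in a few lines; the toy English corpus below is ours, not the paper's Indonesian dataset:

```python
import math
from collections import Counter

def train_nb(docs):
    """Multinomial Naive Bayes with Laplace smoothing.
    `docs` is a list of (token_list, label) pairs."""
    labels = Counter(label for _, label in docs)
    words = {lab: Counter() for lab in labels}
    vocab = set()
    for tokens, lab in docs:
        words[lab].update(tokens)
        vocab.update(tokens)

    def predict(tokens):
        def log_score(lab):
            total = sum(words[lab].values())
            # log prior + sum of smoothed log likelihoods
            s = math.log(labels[lab] / len(docs))
            for t in tokens:
                s += math.log((words[lab][t] + 1) / (total + len(vocab)))
            return s
        return max(labels, key=log_score)
    return predict

docs = [("this film is great fun".split(), "pos"),
        ("great acting and great story".split(), "pos"),
        ("boring and awful film".split(), "neg"),
        ("awful plot terrible acting".split(), "neg")]
predict = train_nb(docs)
```

Swapping the unigram tokenizer for adjacent word pairs would give the bigram features the paper found most effective.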

  12. Convolutional neural networks for event-related potential detection: impact of the architecture.

    PubMed

    Cecotti, H

    2017-07-01

    The detection of brain responses at the single-trial level in the electroencephalogram (EEG), such as event-related potentials (ERPs), is a difficult problem that requires different processing steps to extract relevant discriminant features. While most signal processing and classification techniques for the detection of brain responses are based on linear algebra, pattern recognition techniques such as the convolutional neural network (CNN), a type of deep learning technique, have attracted interest because they can process the signal after limited pre-processing. In this study, we investigate the performance of CNNs in relation to their architecture and to how they are evaluated: a single system for each subject, or one system for all subjects. In particular, we address the change in performance between specializing a neural network to a subject and training a neural network on a group of subjects, taking advantage of the larger number of trials from different subjects. The results support the conclusion that a convolutional neural network trained on different subjects can reach an AUC above 0.9 with an appropriate architecture that uses spatial filtering and shift-invariant layers.
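    The AUC figure quoted above is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen target trial receives a higher classifier score than a randomly chosen non-target trial. A minimal sketch of that computation (toy scores, not the paper's CNN outputs):

```python
def auc(scores_pos, scores_neg):
    """Probability that a random positive outscores a random negative
    (ties count 0.5) -- the Mann-Whitney formulation of ROC AUC."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(scores_pos) * len(scores_neg))

# Perfectly separated scores give AUC = 1.0
print(auc([0.9, 0.8], [0.2, 0.1]))   # 1.0
```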

  13. Prediction of Welded Joint Strength in Plasma Arc Welding: A Comparative Study Using Back-Propagation and Radial Basis Neural Networks

    NASA Astrophysics Data System (ADS)

    Srinivas, Kadivendi; Vundavilli, Pandu R.; Manzoor Hussain, M.; Saiteja, M.

    2016-09-01

    Welding input parameters such as current, gas flow rate and torch angle play a significant role in determining the qualitative mechanical properties of a weld joint. Traditionally, the weld input parameters must be determined for every new welded product to obtain a quality weld joint, which is time consuming. In the present work, the effect of plasma arc welding parameters on mild steel was studied using a neural network approach. To obtain a response equation that governs the input-output relationships, conventional regression analysis was also performed. The experimental data was constructed based on a Taguchi design, and the training data required for the neural networks were randomly generated by varying the input variables within their respective ranges. The responses were calculated for each combination of input variables by using the response equations obtained through the conventional regression analysis. The performance of a Levenberg-Marquardt back-propagation neural network and a radial basis neural network (RBNN) was compared on various randomly generated test cases, which differ from the training cases. From the results, it is interesting to note that for these test cases the RBNN analysis gave improved results compared to the feed-forward back-propagation neural network analysis. The RBNN analysis also showed a pattern of increasing performance as the data points moved away from the initial input values.
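    The RBNN side of this comparison can be sketched compactly: with fixed Gaussian centers and widths, the output weights of a radial basis network are found in closed form by least squares. The toy below fits a smooth one-dimensional response as a stand-in for the regression-generated weld data; the centers, width, and target function are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rbf_design(x, centers, width):
    # Gaussian basis: phi[i, j] = exp(-(x_i - c_j)^2 / (2*width^2))
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x)                    # stand-in for a weld-strength response
centers = np.linspace(0.0, 1.0, 10)          # fixed RBF centers
Phi = rbf_design(x, centers, width=0.1)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output weights in closed form

# Evaluate the trained RBNN at a held-out point; sin(pi/2) = 1.0
pred = (rbf_design(np.array([0.25]), centers, 0.1) @ w)[0]
print(round(float(pred), 2))
```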

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nogueira, C. P. S. M.; Guimarães, J. G.

    In this paper, an auto-associative neural network using single-electron tunneling (SET) devices is proposed and simulated at low temperature. The nanoelectronic auto-associative network is able to converge to a stable state, previously stored during training. The recognition of the pattern involves decreasing the energy of the input state until it achieves a point of local minimum energy, which corresponds to one of the stored patterns.
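    The energy-minimizing recall described above is the classic Hopfield-style auto-associative dynamic, sketched here in conventional NumPy rather than with SET devices: asynchronous updates can only lower the network energy until a stored pattern is reached. The patterns and sizes below are illustrative.

```python
import numpy as np

# Hebbian storage of two bipolar patterns in an auto-associative net
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1,  1, 1, -1, -1, -1]])
W = patterns.T @ patterns / patterns.shape[1]
np.fill_diagonal(W, 0)                  # no self-connections

def energy(s):
    return -0.5 * s @ W @ s             # decreases toward a stored pattern

def recall(s, steps=20):
    """Asynchronous updates: each flip can only lower the energy."""
    s = s.copy()
    for _ in range(steps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

noisy = np.array([1, -1, 1, -1, 1, 1])  # pattern 0 with one bit flipped
print(recall(noisy))                    # converges to [ 1 -1  1 -1  1 -1]
```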

  15. Kannada character recognition system using neural network

    NASA Astrophysics Data System (ADS)

    Kumar, Suresh D. S.; Kamalapuram, Srinivasa K.; Kumar, Ajay B. R.

    2013-03-01

    Handwriting recognition has been one of the active and challenging research areas in the field of pattern recognition. It has numerous applications, including reading aids for the blind, bank cheque processing, and conversion of handwritten documents into structured text form. There is not yet a sufficient body of work on Indian-language character recognition, especially for the Kannada script, one of the 15 major scripts in India. In this paper an attempt is made to recognize handwritten Kannada characters using feed-forward neural networks. A handwritten Kannada character is resized to 20x30 pixels. The resized character is used for training the neural network. Once the training process is completed, the same character is given as input to the neural network with different numbers of neurons in the hidden layer, and the recognition accuracy rates for different Kannada characters have been calculated and compared. The results show that the proposed system yields good recognition accuracy rates, comparable to those of other handwritten character recognition systems.

  16. Influence of quality of images recorded in far infrared on pattern recognition based on neural networks and Eigenfaces algorithm

    NASA Astrophysics Data System (ADS)

    Jelen, Lukasz; Kobel, Joanna; Podbielska, Halina

    2003-11-01

    This paper discusses the possibility of exploiting thermovision registration and artificial neural networks for facial recognition systems. A biometric system that is able to identify people from thermograms is presented. To identify a person we used the Eigenfaces algorithm. For face detection in the picture, a backpropagation neural network was designed. For this purpose, thermograms of 10 people in various external conditions were studied. The Eigenfaces algorithm calculated an average face, and then a set of characteristic features was produced for each studied person. The neural network has to detect the face in the image before it can actually be identified; we used five hidden layers for that purpose. It was shown that recognition errors depend on the feature extraction: for low-quality pictures the error was as high as 30%. However, for pictures with good feature extraction, proper identification rates higher than 90% were obtained.
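    The Eigenfaces step can be sketched as PCA on mean-centered face vectors followed by nearest-neighbour matching in the reduced space. The toy below uses random vectors as stand-ins for thermograms; all data and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for thermogram face vectors: 10 people, 64-pixel "images"
faces = rng.normal(size=(10, 64))
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Eigenfaces = principal components (right singular vectors) of the data
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:5]                       # keep the top 5 components
gallery = centered @ eigenfaces.T         # each person's feature vector

def identify(probe):
    """Nearest neighbour in eigenface space."""
    w = eigenfaces @ (probe - mean_face)
    return int(np.argmin(np.linalg.norm(gallery - w, axis=1)))

# A stored face projects exactly onto its own gallery entry
print(identify(faces[3]))   # 3
```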

  17. Efficient implementation of neural network deinterlacing

    NASA Astrophysics Data System (ADS)

    Seo, Guiwon; Choi, Hyunsoo; Lee, Chulhee

    2009-02-01

    Interlaced scanning has been widely used in most broadcasting systems. However, it produces undesirable artifacts such as jagged patterns, flickering, and line twitter. Moreover, most recent TV monitors use flat-panel display technologies such as LCD or PDP, and these monitors require progressive formats. Consequently, the conversion of interlaced video into progressive video is required in many applications, and a number of deinterlacing methods have been proposed. Recently, deinterlacing methods based on neural networks have been proposed with good results. On the other hand, with high-resolution video content such as HDTV, the amount of video data to be processed is very large; as a result, processing time and hardware complexity become important issues. In this paper, we propose an efficient implementation of neural network deinterlacing using polynomial approximation of the sigmoid function. Experimental results show that these approximations provide equivalent performance with a considerable reduction of complexity. This implementation of neural network deinterlacing can be efficiently incorporated into hardware.
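    The core idea, replacing the sigmoid with a cheap polynomial over the input range the network actually sees, can be sketched as a least-squares fit. The paper's particular approximation scheme is not specified here; the cubic degree and the interval [-4, 4] are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Least-squares cubic fit over the range the activations actually cover
x = np.linspace(-4.0, 4.0, 401)
coeffs = np.polyfit(x, sigmoid(x), deg=3)
approx = np.polyval(coeffs, x)           # only multiplies and adds at runtime

max_err = float(np.max(np.abs(approx - sigmoid(x))))
print(f"max abs error of cubic fit on [-4, 4]: {max_err:.3f}")
```

    In hardware, evaluating the polynomial via Horner's rule replaces the exponential with a handful of multiply-accumulate operations.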

  18. Artificial vision by multi-layered neural networks: neocognitron and its advances.

    PubMed

    Fukushima, Kunihiko

    2013-01-01

    The neocognitron is a neural network model proposed by Fukushima (1980). Its architecture was suggested by neurophysiological findings on the visual systems of mammals. It is a hierarchical multi-layered network. It acquires the ability to robustly recognize visual patterns through learning. Although the neocognitron has a long history, modifications of the network to improve its performance are still going on. For example, a recent neocognitron uses a new learning rule, named add-if-silent, which makes the learning process much simpler and more stable. Nevertheless, a high recognition rate can be kept with a smaller scale of the network. Referring to the history of the neocognitron, this paper discusses recent advances in the neocognitron. We also show that various new functions can be realized by, for example, introducing top-down connections to the neocognitron: mechanism of selective attention, recognition and completion of partly occluded patterns, restoring occluded contours, and so on. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Non-invasive classification of gas-liquid two-phase horizontal flow regimes using an ultrasonic Doppler sensor and a neural network

    NASA Astrophysics Data System (ADS)

    Musa Abbagoni, Baba; Yeung, Hoi

    2016-08-01

    The identification of flow pattern is a key issue in multiphase flow, which is encountered in the petrochemical industry, and it is difficult to identify gas-liquid flow regimes objectively. This paper presents the feasibility of a clamp-on instrument for objective flow regime classification of two-phase flow using an ultrasonic Doppler sensor and an artificial neural network, which records and processes the ultrasonic signals reflected from the two-phase flow. Experimental data were obtained on a horizontal test rig with a total pipe length of 21 m and 5.08 cm internal diameter carrying air-water two-phase flow under slug, elongated bubble, stratified-wavy and stratified flow regimes. Multilayer perceptron neural networks (MLPNNs) are used to develop the classification model. The classifier requires input features that are representative of the signals; ultrasound signal features are extracted by applying both power spectral density (PSD) and discrete wavelet transform (DWT) methods to the flow signals. A 1-of-C coding scheme was adopted to classify the extracted features into one of four flow regime categories. To improve the performance of the flow regime classifier, a second-level neural network was incorporated by using the outputs of the first-level network as input features. Combining the two networks achieved higher accuracy than either single network model. Classification accuracies are evaluated for both the PSD and the DWT features. The success rates of the two models are: (1) using PSD features, the classifier missed 3 of the 24 test datasets, scoring 87.5% accuracy; (2) with the DWT features, the network misclassified only one data point, classifying the flow patterns with 95.8% accuracy.
    This approach demonstrates the feasibility of a clamp-on ultrasonic sensor for flow regime classification in industrial practice. It is considerably more promising than other techniques because it uses a non-invasive, non-radioactive sensor.
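    The "1-of-C" coding mentioned above assigns each flow regime a one-hot target vector and decodes the network outputs by winner-take-all, e.g.:

```python
REGIMES = ["slug", "elongated bubble", "stratified-wavy", "stratified"]

def one_of_c(label):
    """1-of-C target vector for training the flow-regime classifier."""
    return [1.0 if r == label else 0.0 for r in REGIMES]

def decode(outputs):
    """Winner-take-all decoding of the C network outputs."""
    return REGIMES[max(range(len(outputs)), key=outputs.__getitem__)]

print(one_of_c("slug"))               # [1.0, 0.0, 0.0, 0.0]
print(decode([0.1, 0.2, 0.9, 0.3]))   # stratified-wavy
```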

  20. Identifying Broadband Rotational Spectra with Neural Networks

    NASA Astrophysics Data System (ADS)

    Zaleski, Daniel P.; Prozument, Kirill

    2017-06-01

    A typical broadband rotational spectrum may contain several thousand observable transitions, spanning many species. Identifying the individual spectra, particularly when the dynamic range reaches 1,000:1 or even 10,000:1, can be challenging. One approach is to apply automated fitting routines. In this approach, combinations of 3 transitions can be created to form a "triple", which allows fitting of the A, B, and C rotational constants in a Watson-type Hamiltonian. On a standard desktop computer, with a target molecule of interest, a typical AUTOFIT routine takes 2-12 hours depending on the spectral density. A new approach is to utilize machine learning to train a computer to recognize the patterns (frequency spacing and relative intensities) inherent in rotational spectra and to identify the individual spectra in a raw broadband rotational spectrum. Here, recurrent neural networks have been trained to identify different types of rotational spectra and classify them accordingly. Furthermore, early results in applying convolutional neural networks for spectral object recognition in broadband rotational spectra appear promising. Perez et al., "Broadband Fourier transform rotational spectroscopy for structure determination: The water heptamer," Chem. Phys. Lett., 2013, 571, 1-15. Seifert et al., "AUTOFIT, an Automated Fitting Tool for Broadband Rotational Spectra, and Applications to 1-Hexanal," J. Mol. Spectrosc., 2015, 312, 13-21. Bishop, "Neural Networks for Pattern Recognition," Oxford University Press, 1995.
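    For intuition about the patterns such networks learn, the simplest case is a rigid linear rotor, whose transitions are evenly spaced at intervals of 2B; a "triple" of consecutive lines then determines B directly. The asymmetric-top Watson Hamiltonian with A, B, and C constants is considerably more involved; the value of B below is merely illustrative.

```python
def linear_rotor_lines(B, j_max):
    """Rigid linear rotor: the J -> J+1 transition lies at 2B(J+1)."""
    return [2.0 * B * (j + 1) for j in range(j_max)]

def fit_B(triple):
    """Recover B from three consecutive lines: the spacing is constant, 2B."""
    spacings = [b - a for a, b in zip(triple, triple[1:])]
    return sum(spacings) / len(spacings) / 2.0

lines = linear_rotor_lines(B=4959.2, j_max=5)   # B in MHz (illustrative)
print(fit_B(lines[1:4]))                        # recovers ~4959.2
```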

  1. Applications of artificial neural network in AIDS research and therapy.

    PubMed

    Sardari, S; Sardari, D

    2002-01-01

    In recent years considerable effort has been devoted to applying pattern recognition techniques to the complex task of data analysis in drug research. Artificial neural network (ANN) methodology is a modeling method with a great ability to adapt to new situations, or to control an unknown system, using data acquired in previous experiments. In this paper, a brief history of ANNs, the basic concepts behind the computing, the mathematical and algorithmic formulation of each of the techniques, and their developmental background are presented. Based on the abilities of ANNs in pattern recognition and in estimating system outputs from known inputs, the neural network can be considered a tool for molecular data analysis and interpretation. Analysis by neural networks improves classification accuracy and data quantification, and reduces the number of analogues necessary for correct classification of biologically active compounds. Conformational analysis, quantifying the components in mixtures using NMR spectra, aqueous solubility prediction, and structure-activity correlation are among the reported applications of ANNs as a new modeling method. Ranging from drug design and discovery to structure and dosage form design, the potential pharmaceutical applications of the ANN methodology are significant. In the areas of clinical monitoring, molecular simulation, and design of bioactive structures, ANNs could make it possible to study states of health and disease and bring predicted chemotherapeutic responses closer to reality.

  2. Successful Reconstruction of a Physiological Circuit with Known Connectivity from Spiking Activity Alone

    PubMed Central

    Gerhard, Felipe; Kispersky, Tilman; Gutierrez, Gabrielle J.; Marder, Eve; Kramer, Mark; Eden, Uri

    2013-01-01

    Identifying the structure and dynamics of synaptic interactions between neurons is the first step to understanding neural network dynamics. The presence of synaptic connections is traditionally inferred through the use of targeted stimulation and paired recordings or by post-hoc histology. More recently, causal network inference algorithms have been proposed to deduce connectivity directly from electrophysiological signals, such as extracellularly recorded spiking activity. Usually, these algorithms have not been validated on a neurophysiological data set for which the actual circuitry is known. Recent work has shown that traditional network inference algorithms based on linear models typically fail to identify the correct coupling of a small central pattern generating circuit in the stomatogastric ganglion of the crab Cancer borealis. In this work, we show that point process models of observed spike trains can guide inference of relative connectivity estimates that match the known physiological connectivity of the central pattern generator up to a choice of threshold. We elucidate the necessary steps to derive faithful connectivity estimates from a model that incorporates the spike train nature of the data. We then apply the model to measure changes in the effective connectivity pattern in response to two pharmacological interventions, which affect both intrinsic neural dynamics and synaptic transmission. Our results provide the first successful application of a network inference algorithm to a circuit for which the actual physiological synapses between neurons are known. The point process methodology presented here generalizes well to larger networks and can describe the statistics of neural populations. In general we show that advanced statistical models allow for the characterization of effective network structure, deciphering underlying network dynamics and estimating information-processing capabilities. PMID:23874181

  3. Neural net diagnostics for VLSI test

    NASA Technical Reports Server (NTRS)

    Lin, T.; Tseng, H.; Wu, A.; Dogan, N.; Meador, J.

    1990-01-01

    This paper discusses the application of neural network pattern analysis algorithms to the IC fault diagnosis problem. A fault diagnostic is a decision rule combining what is known about an ideal circuit test response with information about how it is distorted by fabrication variations and measurement noise. The rule is used to detect fault existence in fabricated circuits using real test equipment. Traditional statistical techniques may be used to achieve this goal, but they can employ unrealistic a priori assumptions about measurement data. Our approach to this problem employs an adaptive pattern analysis technique based on feedforward neural networks. During training, a feedforward network automatically captures unknown sample distributions. This is important because distributions arising from the nonlinear effects of process variation can be more complex than is typically assumed. A feedforward network is also able to extract measurement features which contribute significantly to making a correct decision. Traditional feature extraction techniques employ matrix manipulations which can be particularly costly for large measurement vectors. In this paper we discuss a software system which we are developing that uses this approach. We also provide a simple example illustrating the use of the technique for fault detection in an operational amplifier.

  4. Long-range synchrony and emergence of neural reentry

    NASA Astrophysics Data System (ADS)

    Keren, Hanna; Marom, Shimon

    2016-11-01

    Neural synchronization across long distances is a functionally important phenomenon in health and disease. In order to access the basis of different modes of long-range synchrony, we monitor spiking activities over centimetre scale in cortical networks and show that the mode of synchrony depends upon a length scale, λ, which is the minimal path that activity should propagate through to find its point of origin ready for reactivation. When λ is larger than the physical dimension of the network, distant neuronal populations operate synchronously, giving rise to irregularly occurring network-wide events that last hundreds of milliseconds to several seconds. In contrast, when λ approaches the dimension of the network, a continuous self-sustained reentry propagation emerges, a regular seizure-like mode that is marked by precise spatiotemporal patterns (‘synfire chains’) and may last many minutes. Termination of a reentry phase is preceded by a decrease of propagation speed to a halt. Stimulation decreases both propagation speed and λ values, which modifies the synchrony mode respectively. The results contribute to the understanding of the origin and termination of different modes of neural synchrony as well as their long-range spatial patterns, while hopefully catering to manipulation of the phenomena in pathological conditions.

  5. On the Role of Situational Stressors in the Disruption of Global Neural Network Stability during Problem Solving.

    PubMed

    Liu, Mengting; Amey, Rachel C; Forbes, Chad E

    2017-12-01

    When individuals are placed in stressful situations, they are likely to exhibit deficits in cognitive capacity over and above situational demands. Despite this, individuals may still persevere and ultimately succeed in these situations. Little is known, however, about the neural network properties that instantiate success or failure in both neutral and stressful situations, particularly with respect to regions integral to the problem-solving processes necessary for optimal performance on more complex tasks. In this study, we outline how hidden Markov modeling based on multivoxel pattern analysis can be used to quantify unique brain states underlying the complex network interactions that yield either successful or unsuccessful problem solving in neutral or stressful situations. We provide evidence that brain network stability and the states underlying synchronous interactions in regions integral to problem-solving processes are key predictors of whether individuals succeed or fail in stressful situations. Findings also suggested that individuals use distinct neural patterns when successfully solving problems in stressful versus neutral situations. Overall, the findings highlight how hidden Markov modeling provides myriad possibilities for quantifying and better understanding the role of global network interactions in the problem-solving process, and how these interactions predict success or failure in different contexts.
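    The hidden Markov machinery underlying such analyses can be sketched with the forward algorithm, which accumulates the likelihood of an observation sequence over hidden brain states. All probabilities below are illustrative, not fitted to any data.

```python
import numpy as np

# Two hidden brain states ("stable" vs "disrupted"), two observation symbols
pi = np.array([0.8, 0.2])       # initial state probabilities
A = np.array([[0.9, 0.1],       # state transition matrix
              [0.3, 0.7]])
B = np.array([[0.7, 0.3],       # P(observation | state)
              [0.2, 0.8]])

def forward(obs):
    """Likelihood of an observation sequence under the HMM."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
    return float(alpha.sum())

print(forward([0, 0, 1]))
```

    Fitting the parameters from data would use Baum-Welch (expectation-maximization), with the forward pass above as its core computation.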

  6. Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks

    PubMed Central

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude

    2017-01-01

    Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms, indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared the MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model and predicted the emerging neural scene-size representations. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework for investigating how spatial layout representations emerge in the human brain. PMID:27039703

  7. A Neural Network Aero Design System for Advanced Turbo-Engines

    NASA Technical Reports Server (NTRS)

    Sanz, Jose M.

    1999-01-01

    An inverse design method calculates the blade shape that produces a prescribed input pressure distribution. By controlling this input pressure distribution, the aerodynamic design objectives can easily be met. Because of the intrinsic relationship between pressure distribution and airfoil physical properties, a neural network can be trained to choose the optimal pressure distribution that would meet a set of physical requirements. Neural network systems have been attempted in the context of direct design methods: from properties ascribed to a set of blades, the neural network is trained to infer the properties of an 'interpolated' blade shape. The problem is that, especially in transonic regimes, where we deal with intrinsically nonlinear and ill-posed problems, small perturbations of the blade shape can produce very large variations of the flow parameters. It is very unlikely that, under these circumstances, a neural network will be able to find the proper solution. The unique situation in the present method is that the neural network can be trained to extract the required input pressure distribution from a database of pressure distributions, while the inverse method still computes the exact blade shape that corresponds to this 'interpolated' input pressure distribution. In other words, the interpolation process is transferred to a smoother problem: finding what pressure distribution would produce the required flow conditions. Once this is done, the inverse method computes the exact solution for this problem. The use of neural networks is, in this context, highly related to the use of proper optimization techniques. The optimization is used essentially as an automation procedure to force the input pressure distributions to achieve the required aerodynamic and structural design parameters. A multilayered feed-forward network with back-propagation is used to train the system for pattern association and classification.

  8. Emerging Frontiers of Neuroengineering: A Network Science of Brain Connectivity

    PubMed Central

    Bassett, Danielle S.; Khambhati, Ankit N.; Grafton, Scott T.

    2018-01-01

    Neuroengineering is faced with unique challenges in repairing or replacing complex neural systems that are composed of many interacting parts. These interactions form intricate patterns over large spatiotemporal scales and produce emergent behaviors that are difficult to predict from individual elements. Network science provides a particularly appropriate framework in which to study and intervene in such systems by treating neural elements (cells, volumes) as nodes in a graph and neural interactions (synapses, white matter tracts) as edges in that graph. Here, we review the emerging discipline of network neuroscience, which uses and develops tools from graph theory to better understand and manipulate neural systems from micro- to macroscales. We present examples of how human brain imaging data are being modeled with network analysis and underscore potential pitfalls. We then highlight current computational and theoretical frontiers and emphasize their utility in informing diagnosis and monitoring, brain–machine interfaces, and brain stimulation. A flexible and rapidly evolving enterprise, network neuroscience provides a set of powerful approaches and fundamental insights that are critical for the neuroengineer’s tool kit. PMID:28375650

  9. Neural network computer simulation of medical aerosols.

    PubMed

    Richardson, C J; Barlow, D J

    1996-06-01

    Preliminary investigations have been conducted to assess the potential for using artificial neural networks to simulate aerosol behaviour, with a view to employing this type of methodology in the evaluation and design of pulmonary drug-delivery systems. Details are presented of the general purpose software developed for these tasks; it implements a feed-forward back-propagation algorithm with weight decay and connection pruning, the user having complete run-time control of the network architecture and mode of training. A series of exploratory investigations is then reported in which different network structures and training strategies are assessed in terms of their ability to simulate known patterns of fluid flow in simple model systems. The first of these involves simulations of cellular automata-generated data for fluid flow through a partially obstructed two-dimensional pipe. The artificial neural networks are shown to be highly successful in simulating the behaviour of this simple linear system, but with important provisos relating to the information content of the training data and the criteria used to judge when the network is properly trained. A second set of investigations is then reported in which similar networks are used to simulate patterns of fluid flow through aerosol generation devices, using training data furnished through rigorous computational fluid dynamics modelling. These more complex three-dimensional systems are modelled with equal success. It is concluded that carefully tailored, well trained networks could provide valuable tools not just for predicting but also for analysing the spatial dynamics of pharmaceutical aerosols.

  10. Development of human locomotion.

    PubMed

    Lacquaniti, Francesco; Ivanenko, Yuri P; Zago, Myrka

    2012-10-01

    Neural control of locomotion in human adults involves the generation of a small set of basic patterned commands directed to the leg muscles. The commands are generated sequentially in time during each step by neural networks located in the spinal cord, called Central Pattern Generators. This review outlines recent advances in understanding how motor commands are expressed at different stages of human development. Similar commands are found in several other vertebrates, indicating that locomotion development follows common principles of organization of the control networks. Movements show a high degree of flexibility at all stages of development, which is instrumental for learning and exploration of variable interactions with the environment. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Identification and interpretation of patterns in rocket engine data: Artificial intelligence and neural network approaches

    NASA Technical Reports Server (NTRS)

    Ali, Moonis; Whitehead, Bruce; Gupta, Uday K.; Ferber, Harry

    1995-01-01

    This paper describes an expert system which is designed to perform automatic data analysis, identify anomalous events and determine the characteristic features of these events. We have employed both artificial intelligence and neural net approaches in the design of this expert system.

  12. Neural network classification of myoelectric signal for prosthesis control.

    PubMed

    Kelly, M F; Parker, P A; Scott, R N

    1991-12-01

    An alternate approach to deriving control for multidegree of freedom prosthetic arms is considered. By analyzing a single-channel myoelectric signal (MES), we can extract information that can be used to identify different contraction patterns in the upper arm. These contraction patterns are generated by subjects without previous training and are naturally associated with specific functions. Using a set of normalized MES spectral features, we can identify contraction patterns for four arm functions, specifically extension and flexion of the elbow and pronation and supination of the forearm. Performing identification independent of signal power is advantageous because this can then be used as a means for deriving proportional rate control for a prosthesis. An artificial neural network implementation is applied in the classification task. By using three single-layer perceptron networks, the MES is classified, with the spectral representations as input features. Trials performed on five subjects with normal limbs resulted in an average classification performance level of 85% for the four functions. Copyright © 1991. Published by Elsevier Ltd.
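    The classification step can be sketched as a single-layer perceptron on power-normalized spectral features; the normalization removes the dependence on signal amplitude that the abstract reserves for proportional control. The two-class toy data below are illustrative, not real MES recordings.

```python
import numpy as np

def normalize(x):
    """Normalize spectral features to remove dependence on signal power."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def train_perceptron(X, y, epochs=50, lr=0.5):
    """Classic perceptron rule on features augmented with a bias column."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if w @ xi > 0 else 0
            w += lr * (yi - pred) * xi      # update only on mistakes
    return w

X = np.array([normalize([5.0, 1.0, 1.0]),   # "flexion-like" spectra
              normalize([6.0, 2.0, 1.0]),
              normalize([1.0, 1.0, 5.0]),   # "extension-like" spectra
              normalize([1.0, 2.0, 6.0])])
y = np.array([1, 1, 0, 0])
w = train_perceptron(X, y)
Xb = np.hstack([X, np.ones((len(X), 1))])
preds = (Xb @ w > 0).astype(int)
print(preds)   # [1 1 0 0]
```

    The paper uses three such single-layer networks to separate the four functions; one network per pairwise decision is one plausible arrangement.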

  13. Learning Universal Computations with Spikes

    PubMed Central

    Thalmeier, Dominik; Uhlmann, Marvin; Kappen, Hilbert J.; Memmesheimer, Raoul-Martin

    2016-01-01

    Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require previous building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. Firstly, we derive constraints under which classes of spiking neural networks lend themselves to substrates of powerful general purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows to learn even difficult benchmark tasks such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them. PMID:27309381

  14. Forecasting influenza-like illness dynamics for military populations using neural networks and social media

    PubMed Central

    Ayton, Ellyn; Porterfield, Katherine; Corley, Courtney D.

    2017-01-01

    This work is the first to take advantage of recurrent neural networks to predict influenza-like illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on time-series analysis of historical ILI data and state-of-the-art machine learning models, we build and evaluate the predictive power of neural network architectures based on Long Short-Term Memory (LSTM) units capable of nowcasting (predicting in “real-time”) and forecasting (predicting the future) ILI dynamics in the 2011–2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, embeddings, word ngrams, stylistic patterns, and communication behavior using hashtags and mentions. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of state-of-the-art regression models with neural networks using a diverse set of evaluation metrics. Finally, we combine ILI and social media signals to build a joint neural network model for ILI dynamics prediction. Unlike the majority of the existing work, we focus on developing models for local rather than national ILI surveillance, specifically for military rather than general populations in 26 U.S. and six international locations, and analyze how model performance depends on the amount of social media data available per location. Our approach demonstrates several advantages: (a) Neural network architectures that rely on LSTM units trained on social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than stylistic and topic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield comparable or better performance than models learned from ILI historical data; thus, signals from social media can potentially be used to accurately forecast ILI dynamics for regions where ILI historical data are not available. (d) Neural network models learned from combined ILI and social media signals significantly outperform models that rely solely on ILI historical data, which underscores the great potential of alternative public sources for ILI dynamics prediction. (e) Location-specific models outperform previously used location-independent models, e.g., U.S. only. (f) Prediction results vary significantly across geolocations depending on the amount of social media data available and ILI activity patterns. (g) Model performance improves with more tweets available per geolocation, i.e., the error decreases and the Pearson correlation increases for locations with more tweets. PMID:29244814
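    The core recurrence of the LSTM units underlying these models can be sketched in a few lines of numpy. The weights and "feature vectors" below are random placeholders, not trained parameters or real social-media features:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order: input, forget, output, candidate."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c_new = f * c + i * g          # gated update of the cell state
    h_new = o * np.tanh(c_new)     # emit the new hidden state
    return h_new, c_new

rng = np.random.default_rng(1)
D, H = 6, 4                        # feature and hidden sizes (arbitrary)
W = rng.normal(scale=0.5, size=(4*H, D))
U = rng.normal(scale=0.5, size=(4*H, H))
b = np.zeros(4*H)

# Run a short sequence of hypothetical weekly feature vectors through the cell.
h, c = np.zeros(H), np.zeros(H)
for _ in range(10):
    x = rng.normal(size=D)
    h, c = lstm_step(x, h, c, W, U, b)

print("final hidden state:", np.round(h, 3))
```

    In a forecasting model such as the one described, the final hidden state would feed a regression head predicting the next period's ILI rate.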

  15. Forecasting influenza-like illness dynamics for military populations using neural networks and social media.

    PubMed

    Volkova, Svitlana; Ayton, Ellyn; Porterfield, Katherine; Corley, Courtney D

    2017-01-01

    This work is the first to take advantage of recurrent neural networks to predict influenza-like illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on time-series analysis of historical ILI data and state-of-the-art machine learning models, we build and evaluate the predictive power of neural network architectures based on Long Short-Term Memory (LSTM) units capable of nowcasting (predicting in "real-time") and forecasting (predicting the future) ILI dynamics in the 2011-2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, embeddings, word ngrams, stylistic patterns, and communication behavior using hashtags and mentions. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of state-of-the-art regression models with neural networks using a diverse set of evaluation metrics. Finally, we combine ILI and social media signals to build a joint neural network model for ILI dynamics prediction. Unlike the majority of the existing work, we focus on developing models for local rather than national ILI surveillance, specifically for military rather than general populations in 26 U.S. and six international locations, and analyze how model performance depends on the amount of social media data available per location. Our approach demonstrates several advantages: (a) Neural network architectures that rely on LSTM units trained on social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than stylistic and topic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield comparable or better performance than models learned from ILI historical data; thus, signals from social media can potentially be used to accurately forecast ILI dynamics for regions where ILI historical data are not available. (d) Neural network models learned from combined ILI and social media signals significantly outperform models that rely solely on ILI historical data, which underscores the great potential of alternative public sources for ILI dynamics prediction. (e) Location-specific models outperform previously used location-independent models, e.g., U.S. only. (f) Prediction results vary significantly across geolocations depending on the amount of social media data available and ILI activity patterns. (g) Model performance improves with more tweets available per geolocation, i.e., the error decreases and the Pearson correlation increases for locations with more tweets.

  16. A Rotational Motion Perception Neural Network Based on Asymmetric Spatiotemporal Visual Information Processing.

    PubMed

    Hu, Bin; Yue, Shigang; Zhang, Zhuhong

    All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences for each of the three motion elements. There are computational models of translation and expansion/contraction perception; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we proposed a neural network that utilizes a specific spatiotemporal arrangement of asymmetric lateral inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts: a presynaptic part and a postsynaptic part. In the presynaptic part, a number of lateral inhibited DSNNs extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts, depending on which rotation, clockwise (cw) or counter-cw (ccw), is to be perceived. Systematic experiments under various conditions and settings have validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step toward dynamic visual information processing.
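    The delay-and-multiply operation at the heart of direction selective units resembles the classic Reichardt correlation detector, sketched below as a much-simplified stand-in for the paper's DSNNs (the two-sensor setup, signals, and delay are invented for illustration):

```python
import numpy as np

def reichardt_response(s_left, s_right, delay):
    """Opponent delay-and-multiply motion detector over two sensor signals.
    Positive output indicates left-to-right motion, negative right-to-left."""
    d = np.zeros_like(s_left)
    d[delay:] = s_left[:-delay]       # delayed copy of the left input
    e = np.zeros_like(s_right)
    e[delay:] = s_right[:-delay]      # delayed copy of the right input
    return np.sum(d * s_right - e * s_left)

T, delay = 100, 5
t = np.arange(T)
pulse = np.exp(-0.5 * ((t - 40) / 3.0) ** 2)   # Gaussian activity bump

# The stimulus reaches the right sensor `delay` steps after the left one.
rightward = reichardt_response(pulse, np.roll(pulse, delay), delay)
leftward = reichardt_response(np.roll(pulse, delay), pulse, delay)
print(f"rightward response: {rightward:.2f}, leftward: {leftward:.2f}")
```

    When the internal delay matches the stimulus travel time, the delayed signal aligns with its neighbour and the multiplication yields a strong signed response; arranging such units in a cyclic order, as the paper does, turns direction selectivity into rotation selectivity.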

  17. Application of artificial neural networks to chemostratigraphy

    NASA Astrophysics Data System (ADS)

    Malmgren, BjöRn A.; Nordlund, Ulf

    1996-08-01

    Artificial neural networks, a branch of artificial intelligence, are computer systems formed by a number of simple, highly interconnected processing units that have the ability to learn a set of target vectors from a set of associated input signals. Neural networks learn by self-adjusting a set of parameters, using some pertinent algorithm to minimize the error between the desired output and network output. We explore the potential of this approach in solving a problem involving classification of geochemical data. The data, taken from the literature, are derived from four late Quaternary zones of volcanic ash of basaltic and rhyolitic origin from the Norwegian Sea. These ash layers span the oxygen isotope zones 1, 5, 7, and 11, respectively (last 420,000 years). The data consist of nine geochemical variables (oxides) determined in each of 183 samples. We employed a three-layer backpropagation neural network to assess its ability to differentiate samples from the four ash zones on the basis of their geochemical composition. For comparison, three statistical pattern recognition techniques, linear discriminant analysis, the k-nearest neighbor (k-NN) technique, and SIMCA (soft independent modeling of class analogy), were applied to the same data. All of these showed considerably higher error rates than the artificial neural network, indicating that the backpropagation network was indeed more powerful in correctly classifying the ash particles to the appropriate zone on the basis of their geochemical composition.
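    A three-layer backpropagation classifier of the kind used here can be sketched in numpy. The two-class toy data below merely stand in for the nine-oxide, four-zone dataset, and the architecture and learning rate are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for geochemical oxide data: two "ash zones" in 9 dimensions.
n, d = 100, 9
X = np.vstack([rng.normal(0.0, 1.0, (n, d)), rng.normal(2.5, 1.0, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Three-layer network (input -> hidden -> output) with sigmoid activations.
H = 6
W1, b1 = rng.normal(scale=0.5, size=(d, H)), np.zeros(H)
W2, b2 = rng.normal(scale=0.5, size=(H, 1)), np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(300):
    h = sig(X @ W1 + b1)
    p = sig(h @ W2 + b2).ravel()
    losses.append(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))
    # Backpropagate the cross-entropy error through both layers.
    dp = (p - y)[:, None] / len(y)
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = dp @ W2.T * h * (1 - h)
    dW1, db1 = X.T @ dh, dh.sum(0)
    for P, G in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 1.0 * G                       # plain gradient descent step

accuracy = np.mean((p > 0.5) == y)
print(f"final loss {losses[-1]:.3f}, accuracy {accuracy:.2f}")
```

    The comparison methods in the paper (LDA, k-NN, SIMCA) would be fitted to the same feature matrix, making the error rates directly comparable.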

  18. Modeling carbachol-induced hippocampal network synchronization using hidden Markov models

    NASA Astrophysics Data System (ADS)

    Dragomir, Andrei; Akay, Yasemin M.; Akay, Metin

    2010-10-01

    In this work we studied the neural state transitions undergone by the hippocampal neural network using a hidden Markov model (HMM) framework. We first employed a measure based on the Lempel-Ziv (LZ) estimator to characterize the changes in the hippocampal oscillation patterns in terms of their complexity. These oscillations correspond to different modes of hippocampal network synchronization induced by the cholinergic agonist carbachol in the CA1 region of the mouse hippocampus. HMMs are then used to model the dynamics of the LZ-derived complexity signals as first-order Markov chains. Consequently, the signals corresponding to our oscillation recordings can be segmented into a sequence of statistically discriminated hidden states. The segmentation is used for detecting transitions in neural synchronization modes in data recorded from wild-type and triple transgenic (3xTG) mouse models of Alzheimer's disease (AD). Our data suggest that the transition from a low-frequency (delta range) continuous oscillation mode into a high-frequency (theta range) oscillation mode, exhibiting repeated burst-type patterns, always occurs through a mode resembling a mixture of the two patterns, continuous with bursts. The relatively random patterns of oscillation during this mode may reflect the fact that the neuronal network undergoes re-organization. Further insight into the time durations of these modes (retrieved via the HMM segmentation of the LZ-derived signals) reveals that the mixed mode lasts significantly longer (p < 10^-4) in 3xTG AD mice. These findings, coupled with the documented cholinergic neurotransmission deficits in the 3xTG mouse model, may be highly relevant for the case of AD.
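    The Lempel-Ziv complexity measure used to characterize the oscillation patterns can be computed with the classic Kaspar-Schuster (LZ76) exhaustive-history parsing, sketched here for binary symbol strings (a binarized recording would be the input in practice):

```python
def lz_complexity(s):
    """Lempel-Ziv (LZ76) complexity of a symbol string: the number of
    distinct phrases in the exhaustive-history parsing (Kaspar-Schuster).
    Assumes len(s) >= 2."""
    i, c, l = 0, 1, 1       # scan position, phrase count, phrase start
    k, k_max = 1, 1         # current and longest match lengths
    n = len(s)
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1                      # extend the current match
            if l + k > n:
                c += 1                  # final, unfinished phrase
                break
        else:
            if k > k_max:
                k_max = k
            i += 1
            if i == l:                  # no earlier copy found: new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1                   # retry the match from the next start

    return c

# A constant string parses as 0|000, an alternating one as 0|1|0101.
print(lz_complexity("0000"), lz_complexity("010101"))  # → 2 3
```

    Higher counts correspond to less regular oscillation patterns, which is exactly the property the HMM segmentation above tracks over time.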

  19. Improving Pattern Recognition and Neural Network Algorithms with Applications to Solar Panel Energy Optimization

    NASA Astrophysics Data System (ADS)

    Zamora Ramos, Ernesto

    Artificial intelligence is a big part of automation, and with today's technological advances it has taken great strides towards positioning itself as the technology of the future to control, enhance and perfect automation. Computer vision, which includes pattern recognition, classification, and machine learning, is at the core of decision making and is a vast and fruitful branch of artificial intelligence. In this work, we expose novel algorithms and techniques built upon existing technologies to improve pattern recognition and neural network training, initially motivated by a multidisciplinary effort to build a robot that helps maintain and optimize solar panel energy production. Our contributions detail an improved non-linear pre-processing technique to enhance poorly illuminated images based on modifications to standard histogram equalization. While the original motivation was to improve nocturnal navigation, the results have applications in surveillance, search and rescue, medical image enhancement, and many others. We created a vision system for precise camera distance positioning, motivated by the need to correctly locate the robot for capture of solar panel images for classification. The classification algorithm marks solar panels as clean or dirty for later processing. Our algorithm extends past image classification and, based on historical and experimental data, identifies the optimal moment at which to perform maintenance on marked solar panels so as to minimize the energy and profit loss. In order to improve upon the classification algorithm, we delved into feedforward neural networks because of their recent advancements, proven universal approximation and classification capabilities, and excellent recognition rates.
    We explore state-of-the-art neural network training techniques, offering pointers and insights, culminating in the implementation of a complete library with support for modern deep learning architectures, multilayer perceptrons and convolutional neural networks. Our research with neural networks encountered a great deal of difficulty regarding hyperparameter estimation for good training convergence rate and accuracy. Most hyperparameters, including architecture, learning rate, regularization, trainable parameter (or weight) initialization, and so on, are chosen via a trial-and-error process with some educated guesses. However, we developed the first quantitative method to compare weight initialization strategies, a critical hyperparameter choice during training, to estimate which among a group of candidate strategies would make the network converge to the highest classification accuracy faster with high probability. Our method provides a quick, objective measure for comparing initialization strategies, to select the best among them beforehand without having to complete multiple training sessions for each candidate strategy to compare final results.
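    The dissertation's initialization-comparison method is not reproduced here, but a related, much cruder diagnostic illustrates why initialization scale matters before any training happens: propagate data through a deep tanh stack and measure how many units are driven into saturation. All layer sizes and scales below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def saturation_fraction(init_std, depth=10, width=256):
    """Forward-propagate random data through a deep tanh stack initialized
    at the given weight scale and report the fraction of saturated units
    (|activation| > 0.99) in the final layer."""
    a = rng.normal(size=(64, width))
    for _ in range(depth):
        W = rng.normal(scale=init_std, size=(width, width))
        a = np.tanh(a @ W)
    return np.mean(np.abs(a) > 0.99)

xavier = saturation_fraction(1.0 / np.sqrt(256))   # variance-preserving scale
large = saturation_fraction(4.0 / np.sqrt(256))    # overly large scale
print(f"saturated units: xavier-like {xavier:.2f}, large {large:.2f}")
```

    Saturated units pass almost no gradient, so an initialization that saturates a deep network will converge slowly or not at all; a quantitative pre-training measure like the one proposed in the dissertation aims to catch such choices cheaply.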

  20. On the improvement of neural cryptography using erroneous transmitted information with error prediction.

    PubMed

    Allam, Ahmed M; Abbas, Hazem M

    2010-12-01

    Neural cryptography deals with the problem of "key exchange" between two neural networks using the mutual learning concept. The two networks exchange their outputs (in bits), and the key between the two communicating parties is eventually represented in the final learned weights, when the two networks are said to be synchronized. Security of neural synchronization is put at risk if an attacker is capable of synchronizing with either of the two parties during the training process. Therefore, diminishing the probability of such a threat improves the reliability of exchanging the output bits through a public channel. The synchronization with feedback algorithm is one of the existing algorithms that enhances the security of neural cryptography. This paper proposes three new algorithms to enhance the mutual learning process. They mainly depend on disrupting the attacker's confidence in the exchanged outputs and input patterns during training. The first algorithm is called "Do not Trust My Partner" (DTMP), which relies on one party sending erroneous output bits, with the other party being capable of predicting and correcting this error. The second algorithm is called "Synchronization with Common Secret Feedback" (SCSFB), where inputs are kept partially secret and the attacker has to train its network on input patterns that are different from the training sets used by the communicating parties. The third algorithm is a hybrid technique combining the features of the DTMP and SCSFB. The proposed approaches are shown to outperform the synchronization with feedback algorithm in the time needed for the parties to synchronize.
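    The mutual-learning "key exchange" underlying neural cryptography is commonly realized with tree parity machines. Below is a minimal sketch of the basic synchronization scheme with plain Hebbian updates and small arbitrary sizes, not the DTMP, SCSFB, or feedback variants discussed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
K, N, L = 3, 8, 3          # hidden units, inputs per unit, weight bound

def tpm_output(w, x):
    """Tree parity machine: output is the product of hidden-unit signs."""
    h = np.sign(np.sum(w * x, axis=1))
    h[h == 0] = 1
    return h, int(np.prod(h))

# Parties A and B start from independent random integer weights.
wA = rng.integers(-L, L + 1, size=(K, N))
wB = rng.integers(-L, L + 1, size=(K, N))

for step in range(20000):
    x = rng.choice([-1, 1], size=(K, N))   # public random input
    hA, tauA = tpm_output(wA, x)
    hB, tauB = tpm_output(wB, x)
    if tauA == tauB:                        # Hebbian update only on agreement
        for w, h, tau in ((wA, hA, tauA), (wB, hB, tauB)):
            for k in range(K):
                if h[k] == tau:             # only units matching the output learn
                    w[k] = np.clip(w[k] + tau * x[k], -L, L)
    if np.array_equal(wA, wB):
        print(f"synchronized after {step + 1} steps")
        break
```

    Once the weights coincide they stay identical under the shared updates, and the common weight vector serves as the exchanged key; the paper's algorithms aim to make it harder for an eavesdropping third machine to ride along on this process.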

  1. Quantum pattern recognition with multi-neuron interactions

    NASA Astrophysics Data System (ADS)

    Fard, E. Rezaei; Aghayar, K.; Amniat-Talab, M.

    2018-03-01

    We present a quantum neural network with multi-neuron interactions for pattern recognition tasks, formed by a combination of an extended classical Hopfield network and adiabatic quantum computation. This scheme can be used as an associative memory to retrieve partial patterns with any number of unknown bits. Also, we propose a preprocessing approach to classifying the pattern space S to suppress spurious patterns. The results of pattern clustering show that for pattern association, the number of weights (η) should equal the number of unknown bits in the input pattern (d). It is also remarkable that the associative memory function depends on the location of the unknown bits, in addition to d and the load parameter α.
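    The classical Hopfield associative memory that this quantum scheme extends can be sketched directly: store a pattern with the Hebbian rule, then retrieve it from a probe with a few unknown (here, flipped) bits. The pattern length and number of corrupted bits are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 16                                   # pattern length in ±1 bits

# Store one pattern with the classic Hebbian rule (zero self-coupling);
# a stored pattern is then a fixed point of the update dynamics.
p = rng.choice([-1, 1], size=N)
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0.0)

def recall(state, steps=5):
    """Synchronous Hopfield updates toward the nearest stored pattern."""
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

# Corrupt 3 of the 16 bits ("unknown bits") and retrieve the stored pattern.
probe = p.copy()
probe[:3] *= -1
recovered = recall(probe)
print("recovered == stored:", np.array_equal(recovered, p))  # → True
```

    With a single stored pattern, one update provably restores every bit as long as fewer than half the bits are corrupted; the quantum formulation addresses the harder regime of many stored patterns and spurious attractors.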

  2. Fluctuation-Driven Neural Dynamics Reproduce Drosophila Locomotor Patterns

    PubMed Central

    Cruchet, Steeve; Gustafson, Kyle; Benton, Richard; Floreano, Dario

    2015-01-01

    The neural mechanisms determining the timing of even simple actions, such as when to walk or rest, are largely mysterious. One intriguing, but untested, hypothesis posits a role for ongoing activity fluctuations in neurons of central action selection circuits that drive animal behavior from moment to moment. To examine how fluctuating activity can contribute to action timing, we paired high-resolution measurements of freely walking Drosophila melanogaster with data-driven neural network modeling and dynamical systems analysis. We generated fluctuation-driven network models whose outputs—locomotor bouts—matched those measured from sensory-deprived Drosophila. From these models, we identified those that could also reproduce a second, unrelated dataset: the complex time-course of odor-evoked walking for genetically diverse Drosophila strains. Dynamical models that best reproduced both Drosophila basal and odor-evoked locomotor patterns exhibited specific characteristics. First, ongoing fluctuations were required. In a stochastic resonance-like manner, these fluctuations allowed neural activity to escape stable equilibria and to exceed a threshold for locomotion. Second, odor-induced shifts of equilibria in these models caused a depression in locomotor frequency following olfactory stimulation. Our models predict that activity fluctuations in action selection circuits cause behavioral output to more closely match sensory drive and may therefore enhance navigation in complex sensory environments. Together these data reveal how simple neural dynamics, when coupled with activity fluctuations, can give rise to complex patterns of animal behavior. PMID:26600381

  3. Unsupervised Extraction of Stable Expression Signatures from Public Compendia with an Ensemble of Neural Networks.

    PubMed

    Tan, Jie; Doing, Georgia; Lewis, Kimberley A; Price, Courtney E; Chen, Kathleen M; Cady, Kyle C; Perchuk, Barret; Laub, Michael T; Hogan, Deborah A; Greene, Casey S

    2017-07-26

    Cross-experiment comparisons in public data compendia are challenged by unmatched conditions and technical noise. The ADAGE method, which performs unsupervised integration with denoising autoencoder neural networks, can identify biological patterns, but because ADAGE models, like many neural networks, are over-parameterized, different ADAGE models perform equally well. To enhance model robustness and better build signatures consistent with biological pathways, we developed an ensemble ADAGE (eADAGE) that integrated stable signatures across models. We applied eADAGE to a compendium of Pseudomonas aeruginosa gene expression profiling experiments performed in 78 media. eADAGE revealed a phosphate starvation response controlled by PhoB in media with moderate phosphate and predicted that a second stimulus provided by the sensor kinase, KinB, is required for this PhoB activation. We validated this relationship using both targeted and unbiased genetic approaches. eADAGE, which captures stable biological patterns, enables cross-experiment comparisons that can highlight measured but undiscovered relationships. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  4. Density-based clustering: A 'landscape view' of multi-channel neural data for inference and dynamic complexity analysis.

    PubMed

    Baglietto, Gabriel; Gigante, Guido; Del Giudice, Paolo

    2017-01-01

    Two partially interwoven hot topics in the analysis and statistical modeling of neural data are the development of efficient and informative representations of the time series derived from multiple neural recordings, and the extraction of information about the connectivity structure of the underlying neural network from the recorded neural activities. In the present paper we show that state-space clustering can provide an easy and effective option for reducing the dimensionality of multiple neural time series, that it can improve inference of synaptic couplings from neural activities, and that it can also allow the construction of a compact representation of the multi-dimensional dynamics that easily lends itself to complexity measures. We apply a variant of the 'mean-shift' algorithm to perform state-space clustering, and validate it on a Hopfield network in the glassy phase, in which metastable states are largely uncorrelated from memories embedded in the synaptic matrix. In this context, we show that the neural states identified as the clusters' centroids offer a parsimonious parametrization of the synaptic matrix, which allows a significant improvement in inferring the synaptic couplings from the neural activities. Moving to the more realistic case of a multi-modular spiking network, with spike-frequency adaptation inducing history-dependent effects, we propose a procedure inspired by Boltzmann learning, but extending its domain of application, to learn inter-module synaptic couplings so that the spiking network reproduces a prescribed pattern of spatial correlations; we then illustrate, in the spiking network, how clustering is effective in extracting relevant features of the network's state-space landscape. 
Finally, we show that the knowledge of the cluster structure allows casting the multi-dimensional neural dynamics in the form of a symbolic dynamics of transitions between clusters; as an illustration of the potential of such reduction, we define and analyze a measure of complexity of the neural time series.
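    A plain Gaussian-kernel variant of the mean-shift state-space clustering used above can be sketched as follows; the two-blob toy "state space" is an invented stand-in for real multi-channel activity vectors:

```python
import numpy as np

def mean_shift(X, bandwidth, iters=50):
    """Gaussian-kernel mean shift: move every point toward the local
    density maximum, then merge nearby converged points into clusters."""
    modes = X.astype(float).copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            w = np.exp(-np.sum((X - m) ** 2, axis=1) / (2 * bandwidth ** 2))
            modes[i] = w @ X / w.sum()     # weighted mean = one shift step
    # Merge converged modes closer than the bandwidth into one centre.
    centres = []
    for m in modes:
        if not any(np.linalg.norm(m - c) < bandwidth for c in centres):
            centres.append(m)
    return np.array(centres)

rng = np.random.default_rng(5)
# Two well-separated "metastable states" in a toy 2-D state space.
X = np.vstack([rng.normal((0, 0), 0.5, (50, 2)),
               rng.normal((10, 10), 0.5, (50, 2))])
centres = mean_shift(X, bandwidth=2.0)
print(f"{len(centres)} cluster centres found")
```

    The cluster centres play the role of the centroids described above: each recorded state vector can be replaced by the label of its nearest centre, yielding the symbolic dynamics on which complexity measures are then defined.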

  5. Multisource Transfer Learning With Convolutional Neural Networks for Lung Pattern Analysis.

    PubMed

    Christodoulidis, Stergios; Anthimopoulos, Marios; Ebner, Lukas; Christe, Andreas; Mougiakakou, Stavroula

    2017-01-01

    Early diagnosis of interstitial lung diseases is crucial for their treatment, but even experienced physicians find it difficult, as their clinical manifestations are similar. In order to assist with the diagnosis, computer-aided diagnosis systems have been developed. These commonly rely on a fixed scale classifier that scans CT images, recognizes textural lung patterns, and generates a map of pathologies. In a previous study, we proposed a method for classifying lung tissue patterns using a deep convolutional neural network (CNN), with an architecture designed for the specific problem. In this study, we present an improved method for training the proposed network by transferring knowledge from the similar domain of general texture classification. Six publicly available texture databases are used to pretrain networks with the proposed architecture, which are then fine-tuned on the lung tissue data. The resulting CNNs are combined in an ensemble and their fused knowledge is compressed back to a network with the original architecture. The proposed approach resulted in an absolute increase of about 2% in the performance of the proposed CNN. The results demonstrate the potential of transfer learning in the field of medical image analysis, indicate the textural nature of the problem and show that the method used for training a network can be as important as designing its architecture.

  6. Generation of optimal artificial neural networks using a pattern search algorithm: application to approximation of chemical systems.

    PubMed

    Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz

    2008-02-01

    A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed-variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
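    The pattern search framework can be illustrated with its simplest instance, a coordinate search with step halving. The quadratic toy objective stands in for the expensive ANN-performance evaluation, and the mixed-variable and surrogate extensions used in the paper are omitted:

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Basic coordinate pattern search: poll +/- step along each axis,
    move on any improvement, otherwise halve the step. Derivative-free,
    so f may be a black box such as a trained network's error."""
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                trial = x.copy()
                trial[i] += s                  # poll one pattern point
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2.0                        # refine the mesh
            if step < tol:
                break
    return x, fx

# Toy objective standing in for the (expensive) ANN-performance measure.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_best, f_best = pattern_search(f, [0.0, 0.0])
print(f"minimum near {np.round(x_best, 4)} with value {f_best:.2e}")
```

    Because polling needs no gradients, the same loop accommodates categorical choices (transfer function type, connectivity) by polling over their discrete values, which is the idea the mixed-variable extension formalizes.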

  7. Classification of time-of-flight secondary ion mass spectrometry spectra from complex Cu-Fe sulphides by principal component analysis and artificial neural networks.

    PubMed

    Kalegowda, Yogesh; Harmer, Sarah L

    2013-01-08

    Artificial neural network (ANN) and hybrid principal component analysis-artificial neural network (PCA-ANN) classifiers have been successfully implemented for the classification of static time-of-flight secondary ion mass spectrometry (ToF-SIMS) mass spectra collected from complex Cu-Fe sulphides (chalcopyrite, bornite, chalcocite and pyrite) at different flotation conditions. ANNs are very good pattern classifiers because of their ability to learn and generalise patterns that are not linearly separable, their fault and noise tolerance, and their high parallelism. In the first approach, fragments from the whole ToF-SIMS spectrum were used as input to the ANN; the model yielded high overall correct classification rates of 100% for feed samples, 88% for conditioned feed samples and 91% for Eh modified samples. In the second approach, the hybrid pattern classifier PCA-ANN was integrated. PCA is a very effective multivariate data analysis tool applied to enhance species features and reduce data dimensionality. Principal component (PC) scores, which accounted for 95% of the raw spectral data variance, were used as input to the ANN; the model yielded high overall correct classification rates of 88% for conditioned feed samples and 95% for Eh modified samples. Copyright © 2012 Elsevier B.V. All rights reserved.
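    The PCA stage of the hybrid classifier, retaining components that explain 95% of the variance and feeding the scores to a classifier, can be sketched as follows. A nearest-centroid rule stands in for the ANN, and the data are invented stand-ins for ToF-SIMS spectra:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy stand-in for ToF-SIMS fragment intensities: 3 mineral classes,
# 20 spectral channels (hypothetical data, not the paper's spectra).
means = rng.normal(size=(3, 20)) * 3
X = np.vstack([m + rng.normal(scale=0.5, size=(30, 20)) for m in means])
y = np.repeat(np.arange(3), 30)

# PCA via SVD on the mean-centred data; keep components covering 95% variance.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var_ratio = S ** 2 / np.sum(S ** 2)
k = int(np.searchsorted(np.cumsum(var_ratio), 0.95)) + 1
scores = Xc @ Vt[:k].T                  # PC scores: input to the classifier

# Simple nearest-centroid classifier on the scores (stands in for the ANN).
centroids = np.array([scores[y == c].mean(axis=0) for c in range(3)])
pred = np.argmin(((scores[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = np.mean(pred == y)
print(f"kept {k} PCs, training accuracy {accuracy:.2f}")
```

    Reducing hundreds of raw fragment intensities to a handful of PC scores both denoises the input and shrinks the network that must be trained, which is the motivation for the hybrid design.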

  8. Selective attention to temporal features on nested time scales.

    PubMed

    Henry, Molly J; Herrmann, Björn; Obleser, Jonas

    2015-02-01

    Meaningful auditory stimuli such as speech and music often vary simultaneously along multiple time scales. Thus, listeners must selectively attend to, and selectively ignore, separate but intertwined temporal features. The current study aimed to identify and characterize the neural network specifically involved in this feature-selective attention to time. We used a novel paradigm where listeners judged either the duration or modulation rate of auditory stimuli, and in which the stimulation, working memory demands, response requirements, and task difficulty were held constant. A first analysis identified all brain regions where individual brain activation patterns were correlated with individual behavioral performance patterns, which thus supported temporal judgments generically. A second analysis then isolated those brain regions that specifically regulated selective attention to temporal features: Neural responses in a bilateral fronto-parietal network including insular cortex and basal ganglia decreased with degree of change of the attended temporal feature. Critically, response patterns in these regions were inverted when the task required selectively ignoring this feature. The results demonstrate how the neural analysis of complex acoustic stimuli with multiple temporal features depends on a fronto-parietal network that simultaneously regulates the selective gain for attended and ignored temporal features. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Damage Detection Using Holography and Interferometry

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2003-01-01

    This paper reviews classical approaches to damage detection using laser holography and interferometry. The paper then details the modern uses of electronic holography and neural-net-processed characteristic patterns to detect structural damage. The design of the neural networks and the preparation of the training sets are discussed. The use of a technique to optimize the training sets, called folding, is explained. Then a training procedure is detailed that uses the holography-measured vibration modes of the undamaged structures to impart damage-detection sensitivity to the neural networks. The inspections of an optical strain gauge mounting plate and an International Space Station cold plate are presented as examples.

  10. Spatiotemporal topology and temporal sequence identification with an adaptive time-delay neural network

    NASA Astrophysics Data System (ADS)

    Lin, Daw-Tung; Ligomenides, Panos A.; Dayhoff, Judith E.

    1993-08-01

    Inspired by the time delays that occur in neurobiological signal transmission, we describe an adaptive time-delay neural network (ATNN), a powerful dynamic learning technique for spatiotemporal pattern transformation and temporal sequence identification. The dynamic properties of this network arise from the adaptation of both time delays and synaptic weights, which are adjusted on-line by gradient-descent rules according to the evolution of the observed inputs and outputs. We have applied the ATNN to examples that possess spatiotemporal complexity, with temporal sequences that are completed by the network, making the ATNN applicable to pattern completion. Simulation results show that the ATNN learns the topology of circular and figure-eight trajectories within 500 on-line training iterations and reproduces each trajectory dynamically with very high accuracy. The ATNN was also trained to model the Fourier series expansion of a sum of different odd harmonics. The resulting network provides more flexibility and efficiency than the TDNN, since it seeks optimal values for the time delays as well as optimal synaptic weights.
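    The ATNN's core idea, adapting delays as well as weights by gradient descent, can be sketched with a single adaptive-delay neuron learning to reproduce a delayed sinusoid. The signal, learning rate, and initial values below are illustrative choices, not taken from the paper:

```python
import math
import random

random.seed(1)

# Known input signal and its derivative (analytic, so the delay
# gradient is exact). The target is the input delayed by 0.5 rad.
x = math.sin
dx = math.cos
TRUE_DELAY = 0.5

w, tau = 1.0, 0.0      # synaptic weight and adaptive time delay
lr = 0.05

for _ in range(4000):
    t = random.uniform(0.0, 2.0 * math.pi)
    y = w * x(t - tau)                 # neuron output y = w * x(t - tau)
    e = y - x(t - TRUE_DELAY)          # error vs. the delayed target
    # Gradient-descent updates for BOTH parameters:
    #   dE/dw   = e * x(t - tau)
    #   dE/dtau = e * dy/dtau = e * w * (-x'(t - tau))
    w -= lr * e * x(t - tau)
    tau -= lr * e * w * (-dx(t - tau))

print(round(w, 2), round(tau, 2))   # converges near w = 1, tau = 0.5
```

Because the model can fit the target exactly, the stochastic gradient noise vanishes as the delay approaches the true value.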

  11. Classification of epileptiform and wicket spike of EEG pattern using backpropagation neural network

    NASA Astrophysics Data System (ADS)

    Puspita, Juni Wijayanti; Jaya, Agus Indra; Gunadharma, Suryani

    2017-03-01

    Epilepsy is characterized by recurrent seizures resulting from permanent brain abnormalities. One of the tools supporting the diagnosis of epilepsy is the electroencephalograph (EEG), which records the brain's electrical activity. Abnormal EEG patterns in epilepsy patients consist of spike and sharp waves. Besides these two waveforms, there is a normal pattern, the wicket spike, that is sometimes misinterpreted as epileptiform by the electroencephalographer (EEGer). The main difference among the three waveforms lies in their time duration, which is related to frequency. In this study, we propose a method to classify an EEG wave into the sharp-wave, spike-wave, or wicket-spike group using a backpropagation neural network based on the frequency and amplitude of each wave. The results show that the proposed method classifies the three groups of waves with good accuracy.
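    A minimal sketch of such a backpropagation classifier, using a one-hidden-layer network on synthetic (frequency, amplitude) features, is shown below. The cluster centers, network size, and learning rate are assumptions for illustration; the study's actual feature values come from measured EEG waveforms:

```python
import math
import random

random.seed(0)

# Hypothetical (frequency in Hz, normalized amplitude) clusters for the
# three waveforms (spike, sharp, wicket); frequency is scaled by 1/10.
def sample(cls):
    f_mu, a_mu = [(25.0, 0.8), (8.0, 0.9), (9.0, 0.4)][cls]
    return [random.gauss(f_mu, 2.0) / 10.0, random.gauss(a_mu, 0.05)], cls

data = [sample(c) for c in range(3) for _ in range(80)]
random.shuffle(data)
train, test = data[:180], data[180:]

NI, NH, NO = 2, 8, 3                      # input, hidden, output sizes
w1 = [[random.gauss(0, 0.5) for _ in range(NI)] for _ in range(NH)]
b1 = [0.0] * NH
w2 = [[random.gauss(0, 0.5) for _ in range(NH)] for _ in range(NO)]
b2 = [0.0] * NO

def forward(xv):
    h = [math.tanh(sum(w * v for w, v in zip(ws, xv)) + b)
         for ws, b in zip(w1, b1)]
    z = [sum(w * v for w, v in zip(ws, h)) + b for ws, b in zip(w2, b2)]
    m = max(z)
    e = [math.exp(v - m) for v in z]      # numerically stable softmax
    s = sum(e)
    return h, [v / s for v in e]

lr = 0.05
for _ in range(60):                       # training epochs
    for xv, y in train:
        h, p = forward(xv)
        dz = [p[k] - (1.0 if k == y else 0.0) for k in range(NO)]
        dh = [(1.0 - h[j] ** 2) * sum(dz[k] * w2[k][j] for k in range(NO))
              for j in range(NH)]
        for k in range(NO):               # output-layer updates
            for j in range(NH):
                w2[k][j] -= lr * dz[k] * h[j]
            b2[k] -= lr * dz[k]
        for j in range(NH):               # hidden-layer updates
            for i in range(NI):
                w1[j][i] -= lr * dh[j] * xv[i]
            b1[j] -= lr * dh[j]

acc = sum(max(range(NO), key=forward(xv)[1].__getitem__) == y
          for xv, y in test) / len(test)
print(round(acc, 2))
```

With well-separated synthetic clusters the network reaches high test accuracy within a few dozen epochs.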

  12. An Interactive Simulation Program for Exploring Computational Models of Auto-Associative Memory.

    PubMed

    Fink, Christian G

    2017-01-01

    While neuroscience students typically learn about activity-dependent plasticity early in their education, they often struggle to conceptually connect modification at the synaptic scale with network-level neuronal dynamics, not to mention with their own everyday experience of recalling a memory. We have developed an interactive simulation program (based on the Hopfield model of auto-associative memory) that enables the user to visualize the connections generated by any pattern of neural activity, as well as to simulate the network dynamics resulting from such connectivity. An accompanying set of student exercises introduces the concepts of pattern completion, pattern separation, and sparse versus distributed neural representations. Results from a conceptual assessment administered before and after students worked through these exercises indicate that the simulation program is a useful pedagogical tool for illustrating fundamental concepts of computational models of memory.
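    The Hopfield model underlying such a simulator can be sketched in a few lines: Hebbian storage of binary patterns in a symmetric weight matrix, followed by asynchronous updates that complete a corrupted cue. The pattern size, number of stored patterns, and corruption level are arbitrary illustrative choices:

```python
import random

random.seed(0)
N = 64  # neurons, e.g. an 8x8 binary image flattened

# Two stored +1/-1 patterns; in the classroom tool these would be
# user-drawn activity patterns.
p1 = [1 if random.random() < 0.5 else -1 for _ in range(N)]
p2 = [1 if random.random() < 0.5 else -1 for _ in range(N)]

# Hebbian learning: w_ij accumulates pairwise correlations over the
# stored patterns (no self-connections).
W = [[0.0] * N for _ in range(N)]
for p in (p1, p2):
    for i in range(N):
        for j in range(N):
            if i != j:
                W[i][j] += p[i] * p[j] / N

def recall(state, steps=5 * N):
    s = list(state)
    for _ in range(steps):            # asynchronous random updates
        i = random.randrange(N)
        h = sum(W[i][j] * s[j] for j in range(N))
        s[i] = 1 if h >= 0 else -1
    return s

# Pattern completion: corrupt 15% of p1 and let the dynamics restore it.
cue = [(-v if random.random() < 0.15 else v) for v in p1]
out = recall(cue)
overlap = sum(a * b for a, b in zip(out, p1)) / N
print(overlap)
```

With only two stored patterns (well under the ~0.14N capacity of the Hopfield network) the corrupted cue is restored almost perfectly.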

  13. Classification of crystal structure using a convolutional neural network

    PubMed Central

    Park, Woon Bae; Chung, Jiyong; Sohn, Keemin; Pyo, Myoungho

    2017-01-01

    A deep machine-learning technique based on a convolutional neural network (CNN) is introduced. It has been used for the classification of powder X-ray diffraction (XRD) patterns in terms of crystal system, extinction group and space group. About 150 000 powder XRD patterns were collected and used as input for the CNN with no handcrafted engineering involved, and thereby an appropriate CNN architecture was obtained that allowed determination of the crystal system, extinction group and space group. In sharp contrast with the traditional use of powder XRD pattern analysis, the CNN never treats powder XRD patterns as a deconvoluted and discrete peak position or as intensity data, but instead the XRD patterns are regarded as nothing but a pattern similar to a picture. The CNN interprets features that humans cannot recognize in a powder XRD pattern. As a result, accuracy levels of 81.14, 83.83 and 94.99% were achieved for the space-group, extinction-group and crystal-system classifications, respectively. The well trained CNN was then used for symmetry identification of unknown novel inorganic compounds. PMID:28875035

  14. Classification of crystal structure using a convolutional neural network.

    PubMed

    Park, Woon Bae; Chung, Jiyong; Jung, Jaeyoung; Sohn, Keemin; Singh, Satendra Pal; Pyo, Myoungho; Shin, Namsoo; Sohn, Kee-Sun

    2017-07-01

    A deep machine-learning technique based on a convolutional neural network (CNN) is introduced. It has been used for the classification of powder X-ray diffraction (XRD) patterns in terms of crystal system, extinction group and space group. About 150 000 powder XRD patterns were collected and used as input for the CNN with no handcrafted engineering involved, and thereby an appropriate CNN architecture was obtained that allowed determination of the crystal system, extinction group and space group. In sharp contrast with the traditional use of powder XRD pattern analysis, the CNN never treats powder XRD patterns as a deconvoluted and discrete peak position or as intensity data, but instead the XRD patterns are regarded as nothing but a pattern similar to a picture. The CNN interprets features that humans cannot recognize in a powder XRD pattern. As a result, accuracy levels of 81.14, 83.83 and 94.99% were achieved for the space-group, extinction-group and crystal-system classifications, respectively. The well trained CNN was then used for symmetry identification of unknown novel inorganic compounds.
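    The key idea above, treating the diffraction profile as a picture processed by learned filters rather than as extracted peak positions, can be illustrated with the forward pass of a single 1-D convolution layer. The synthetic pattern and the hand-picked kernel below are stand-ins; a real CNN learns many such filters from the ~150 000 labelled patterns:

```python
import math

# Synthetic 1-D "powder pattern": a few Gaussian peaks on a baseline.
# A real CNN input would be the full 2-theta intensity profile.
def gaussian(x, mu, sig):
    return math.exp(-0.5 * ((x - mu) / sig) ** 2)

pattern = [0.05 + gaussian(i, 30, 2) + 0.6 * gaussian(i, 70, 3)
           + 0.3 * gaussian(i, 110, 2) for i in range(128)]

def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, v) for v in xs]

def maxpool(xs, size=4):
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

# An untrained, hand-picked edge-like kernel responds to rising peak
# flanks wherever they occur, without any peak deconvolution step.
kernel = [-1.0, -0.5, 0.0, 0.5, 1.0]
fmap = maxpool(relu(conv1d(pattern, kernel)))

print(len(fmap), round(max(fmap), 3))
```

Stacking many such convolution, activation, and pooling stages, with the kernels learned by backpropagation, yields the classifier described in the abstract.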

  15. Pattern Recognition Using Artificial Neural Network: A Review

    NASA Astrophysics Data System (ADS)

    Kim, Tai-Hoon

    Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, artificial neural network techniques have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system using ANN and identify research topics and applications which are at the forefront of this exciting and challenging field.

  16. Classification and Prediction of RF Coupling inside A-320 and A-319 Airplanes using Feed Forward Neural Networks

    NASA Technical Reports Server (NTRS)

    Jafri, Madiha; Ely, Jay; Vahala, Linda

    2006-01-01

    Neural Network Modeling is introduced in this paper to classify and predict Interference Path Loss measurements on Airbus 319 and 320 airplanes. Interference patterns inside the aircraft are classified and predicted based on the locations of the doors, windows, aircraft structures and the communication/navigation system-of-concern. Modeled results are compared with measured data and a plan is proposed to enhance the modeling for better prediction of electromagnetic coupling problems inside aircraft.

  17. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.

  18. Neural Network for Image-to-Image Control of Optical Tweezers

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Anderson, Robert C.; Weiland, Kenneth E.; Wrbanek, Susan Y.

    2004-01-01

    A method is discussed for using neural networks to control optical tweezers. Neural-net outputs are combined with scaling and tiling to generate 480 by 480-pixel control patterns for a spatial light modulator (SLM). The SLM can be combined in various ways with a microscope to create movable tweezers traps with controllable profiles. The neural nets are intended to respond to scattered light from carbon and silicon carbide nanotube sensors. The nanotube sensors are to be held by the traps for manipulation and calibration. Scaling and tiling allow the 100 by 100-pixel maximum resolution of the neural-net software to be applied in stages to exploit the full 480 by 480-pixel resolution of the SLM. One of these stages is intended to create sensitive null detectors for detecting variations in the scattered light from the nanotube sensors.

  19. Forecasting influenza-like illness dynamics for military populations using neural networks and social media

    DOE PAGES

    Volkova, Svitlana; Ayton, Ellyn; Porterfield, Katherine; ...

    2017-12-15

    This work is the first to take advantage of recurrent neural networks to predict influenza-like-illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on time-series analysis of historical ILI data [1, 2] and state-of-the-art machine learning models [3, 4], we build and evaluate the predictive power of Long Short-Term Memory (LSTM) architectures capable of nowcasting (predicting in "real time") and forecasting (predicting the future) ILI dynamics in the 2011-2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, stylistic and syntactic patterns, emotions and opinions, and communication behavior. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of state-of-the-art regression models with neural networks. Finally, we combine ILI and social media signals to build joint neural network models for ILI dynamics prediction. Unlike the majority of existing work, we specifically focus on developing models for local rather than national ILI surveillance [1], and for military rather than general populations [3], in 26 U.S. and six international locations. Our approach demonstrates several advantages: (a) Neural network models learned from social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than syntactic and stylistic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield comparable or better performance than models learned from historical ILI data; thus, signals from social media can potentially be used to accurately forecast ILI dynamics for regions where historical ILI data are not available. (d) Neural network models learned from combined ILI and social media signals significantly outperform models that rely solely on historical ILI data, which points to the great potential of alternative public sources for ILI dynamics prediction. (e) Location-specific models outperform previously used location-independent (e.g., U.S.-only) models. (f) Prediction results vary significantly across geolocations depending on the amount of social media data available and on ILI activity patterns.

  20. Forecasting influenza-like illness dynamics for military populations using neural networks and social media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volkova, Svitlana; Ayton, Ellyn; Porterfield, Katherine

    This work is the first to take advantage of recurrent neural networks to predict influenza-like-illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on time-series analysis of historical ILI data [1, 2] and state-of-the-art machine learning models [3, 4], we build and evaluate the predictive power of Long Short-Term Memory (LSTM) architectures capable of nowcasting (predicting in "real time") and forecasting (predicting the future) ILI dynamics in the 2011-2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, stylistic and syntactic patterns, emotions and opinions, and communication behavior. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of state-of-the-art regression models with neural networks. Finally, we combine ILI and social media signals to build joint neural network models for ILI dynamics prediction. Unlike the majority of existing work, we specifically focus on developing models for local rather than national ILI surveillance [1], and for military rather than general populations [3], in 26 U.S. and six international locations. Our approach demonstrates several advantages: (a) Neural network models learned from social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than syntactic and stylistic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield comparable or better performance than models learned from historical ILI data; thus, signals from social media can potentially be used to accurately forecast ILI dynamics for regions where historical ILI data are not available. (d) Neural network models learned from combined ILI and social media signals significantly outperform models that rely solely on historical ILI data, which points to the great potential of alternative public sources for ILI dynamics prediction. (e) Location-specific models outperform previously used location-independent (e.g., U.S.-only) models. (f) Prediction results vary significantly across geolocations depending on the amount of social media data available and on ILI activity patterns.
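    The LSTM building block used in the two records above can be sketched as a single scalar cell; the gate weights and the toy input sequence are invented for illustration and are not the trained model parameters:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One scalar LSTM cell with hypothetical fixed weights; the real model
# learns vector-valued gates from weekly ILI and social-media features.
W = dict(i=(0.8, 0.4, 0.1), f=(0.5, -0.3, 1.2),
         o=(0.9, 0.2, 0.0), g=(1.0, 0.6, 0.0))  # (w_x, w_h, bias)

def lstm_step(x, h, c):
    i = sigmoid(W['i'][0] * x + W['i'][1] * h + W['i'][2])   # input gate
    f = sigmoid(W['f'][0] * x + W['f'][1] * h + W['f'][2])   # forget gate
    o = sigmoid(W['o'][0] * x + W['o'][1] * h + W['o'][2])   # output gate
    g = math.tanh(W['g'][0] * x + W['g'][1] * h + W['g'][2]) # candidate
    c = f * c + i * g          # cell state carries long-term memory
    h = o * math.tanh(c)       # hidden state is the cell's output
    return h, c

# Feed a toy weekly ILI-like signal through the cell step by step.
h = c = 0.0
for x in [0.1, 0.3, 0.7, 0.9, 0.6, 0.2]:
    h, c = lstm_step(x, h, c)
print(round(h, 3))
```

In the forecasting models, a final regression layer maps the hidden state at each week to the predicted ILI proportion.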

  1. Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition.

    PubMed

    Wang, Runchun; Thakur, Chetan Singh; Cohen, Gregory; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, Andre

    2017-06-01

    We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for performing massively parallel real-time pattern recognition. NEF is a framework that is capable of synthesising large-scale cognitive systems from subnetworks and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was developed based on a compact digital neural core, which consists of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores together. As a proof of concept, we have developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer high-speed and resource-efficient means for performing high-speed, neuromorphic, and massively parallel pattern recognition and classification tasks.
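    The time-multiplexing idea, one physical neuron circuit serving 64 virtual neurons by cycling through their stored states, can be sketched in software. The leaky integrate-and-fire update and all parameters below are illustrative assumptions, not the NEF hardware design:

```python
# One "physical" leaky integrate-and-fire update serves 64 virtual
# neurons by cycling through their stored membrane states each tick.
N = 64
v = [0.0] * N                               # stored membrane potentials
bias = [0.5 + 0.02 * i for i in range(N)]   # per-neuron input currents
TAU, VTH = 0.9, 1.0                         # leak factor, firing threshold

def physical_neuron(vm, current):
    """The single shared update circuit."""
    vm = TAU * vm + 0.2 * current
    if vm >= VTH:
        return 0.0, 1                       # reset and emit a spike
    return vm, 0

spikes = [0] * N
for _ in range(200):                        # 200 multiplexed ticks
    for i in range(N):                      # one pass = one update per virtual neuron
        v[i], s = physical_neuron(v[i], bias[i])
        spikes[i] += s

# Higher input current yields a higher firing rate across the bank.
print(spikes[0], spikes[N - 1])
```

Because only the state memory grows with the neuron count, the same physical circuit scales to large populations, which is what makes combining identical cores attractive on an FPGA.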

  2. Prediction of PM10 grades in Seoul, Korea using a neural network model based on synoptic patterns

    NASA Astrophysics Data System (ADS)

    Hur, S. K.; Oh, H. R.; Ho, C. H.; Kim, J.; Song, C. K.; Chang, L. S.; Lee, J. B.

    2016-12-01

    As of November 2014, the Korean Ministry of Environment (KME) started forecasting the level of ambient particulate matter with diameters ≤ 10 μm (PM10) as four grades: low (PM10 ≤ 30 μg m-3), moderate (30 < PM10 ≤ 80 μg m-3), high (80 < PM10 ≤ 150 μg m-3), and very high (PM10 > 150 μg m-3). Because of the short history of these forecasts, the overall performance of the operational forecasting system and its hit rate for the four PM10 grades are difficult to evaluate. To provide a statistical reference for the current air quality forecasting system, we hindcasted the four PM10 grades for the cold seasons (October-March) of 2001-2014 in Seoul, Korea using a neural network model based on the synoptic patterns of meteorological fields such as geopotential height, air temperature, relative humidity, and wind. Expressed as cosine similarities, the distinctive synoptic patterns for each PM10 grade are well quantified as predictors to train the neural network model. Using these fields as predictors, and including the PM10 concentration in Seoul on the day before prediction as an additional predictor, an overall hit rate of 69% was achieved; the hit rates for the low, moderate, high, and very high PM10 grades were 33%, 83%, 45%, and 33%, respectively. This study reveals that the synoptic patterns of meteorological fields are useful predictors for identifying conditions favorable for each PM10 grade, together with the associated transboundary transport from the industrialized regions of China and the local accumulation of PM10. Consequently, the predictability assessments obtained from the neural network model in this study are reliable as a statistical reference for the current air quality forecasting system.
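    The cosine-similarity predictor described above can be sketched as follows; the six-element vectors stand in for flattened gridded meteorological anomaly fields and are invented for illustration:

```python
import math

# Cosine similarity between a day's (flattened) meteorological field
# anomaly and the composite pattern for one PM10 grade.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Made-up stand-ins for gridded geopotential-height anomalies.
composite_high = [1.2, 0.8, -0.5, 0.3, -1.1, 0.9]   # "high grade" composite
today = [1.0, 0.7, -0.4, 0.2, -0.9, 1.0]            # today's anomaly field
print(round(cosine(today, composite_high), 3))
```

One such similarity per grade and per meteorological field, plus the previous day's PM10 concentration, forms the predictor vector fed to the neural network.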

  3. Neural feedback for instantaneous spatiotemporal modulation of afferent pathways in bi-directional brain-machine interfaces.

    PubMed

    Liu, Jianbo; Khalil, Hassan K; Oweiss, Karim G

    2011-10-01

    In bi-directional brain-machine interfaces (BMIs), precisely controlling the delivery of microstimulation, both in space and in time, is critical to continuously modulate the neural activity patterns that carry information about the state of the brain-actuated device to sensory areas in the brain. In this paper, we investigate the use of neural feedback to control the spatiotemporal firing patterns of neural ensembles in a model of the thalamocortical pathway. Control of pyramidal (PY) cells in the primary somatosensory cortex (S1) is achieved based on microstimulation of thalamic relay cells through multiple-input multiple-output (MIMO) feedback controllers. This closed loop feedback control mechanism is achieved by simultaneously varying the stimulation parameters across multiple stimulation electrodes in the thalamic circuit based on continuous monitoring of the difference between reference patterns and the evoked responses of the cortical PY cells. We demonstrate that it is feasible to achieve a desired level of performance by controlling the firing activity pattern of a few "key" neural elements in the network. Our results suggest that neural feedback could be an effective method to facilitate the delivery of information to the cortex to substitute lost sensory inputs in cortically controlled BMIs.

  4. Evolution of central pattern generators and rhythmic behaviours

    PubMed Central

    Katz, Paul S.

    2016-01-01

    Comparisons of rhythmic movements and the central pattern generators (CPGs) that control them uncover principles about the evolution of behaviour and neural circuits. Over the course of evolutionary history, gradual evolution of behaviours and their neural circuitry within any lineage of animals has been a predominant occurrence. Small changes in gene regulation can lead to divergence of circuit organization and corresponding changes in behaviour. However, some behavioural divergence has resulted from large-scale rewiring of the neural network. Divergence of CPG circuits has also occurred without a corresponding change in behaviour. When analogous rhythmic behaviours have evolved independently, it has generally been with different neural mechanisms. Repeated evolution of particular rhythmic behaviours has occurred within some lineages due to parallel evolution or latent CPGs. Particular motor pattern generating mechanisms have also evolved independently in separate lineages. The evolution of CPGs and rhythmic behaviours shows that although most behaviours and neural circuits are highly conserved, the nature of the behaviour does not dictate the neural mechanism and that the presence of homologous neural components does not determine the behaviour. This suggests that although behaviour is generated by neural circuits, natural selection can act separately on these two levels of biological organization. PMID:26598733

  5. Evolution of central pattern generators and rhythmic behaviours.

    PubMed

    Katz, Paul S

    2016-01-05

    Comparisons of rhythmic movements and the central pattern generators (CPGs) that control them uncover principles about the evolution of behaviour and neural circuits. Over the course of evolutionary history, gradual evolution of behaviours and their neural circuitry within any lineage of animals has been a predominant occurrence. Small changes in gene regulation can lead to divergence of circuit organization and corresponding changes in behaviour. However, some behavioural divergence has resulted from large-scale rewiring of the neural network. Divergence of CPG circuits has also occurred without a corresponding change in behaviour. When analogous rhythmic behaviours have evolved independently, it has generally been with different neural mechanisms. Repeated evolution of particular rhythmic behaviours has occurred within some lineages due to parallel evolution or latent CPGs. Particular motor pattern generating mechanisms have also evolved independently in separate lineages. The evolution of CPGs and rhythmic behaviours shows that although most behaviours and neural circuits are highly conserved, the nature of the behaviour does not dictate the neural mechanism and that the presence of homologous neural components does not determine the behaviour. This suggests that although behaviour is generated by neural circuits, natural selection can act separately on these two levels of biological organization. © 2015 The Author(s).

  6. Cross-Modal Decoding of Neural Patterns Associated with Working Memory: Evidence for Attention-Based Accounts of Working Memory

    PubMed Central

    Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica

    2016-01-01

    Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high–low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. PMID:25146374

  7. Intrinsic Network Connectivity Patterns Underlying Specific Dimensions of Impulsiveness in Healthy Young Adults.

    PubMed

    Kubera, Katharina M; Hirjak, Dusan; Wolf, Nadine D; Sambataro, Fabio; Thomann, Philipp A; Wolf, R Christian

    2018-05-01

    Impulsiveness is a central human personality trait and of high relevance for the development of several mental disorders. Impulsiveness is a multidimensional construct, yet little is known about dimension-specific neural correlates. Here, we address the question whether motor, attentional and non-planning components, as measured by the Barratt Impulsiveness Scale (BIS-11), are associated with distinct or overlapping neural network activity. In this study, we investigated brain activity at rest and its relationship to distinct dimensions of impulsiveness in 30 healthy young adults (m/f = 13/17; age mean/SD = 26.4/2.6 years) using resting-state functional magnetic resonance imaging at 3T. A spatial independent component analysis and a multivariate model selection strategy were used to identify systems loading on distinct impulsivity domains. We first identified eight networks for which we had a-priori hypotheses. These networks included basal ganglia, cortical motor, cingulate and lateral prefrontal systems. From the eight networks, three were associated with impulsiveness measures (p < 0.05, FDR corrected). There were significant relationships between right frontoparietal network function and all three BIS domains. Striatal and midcingulate network activity was associated with motor impulsiveness only. Within the networks regionally confined effects of age and gender were found. These data suggest distinct and overlapping patterns of neural activity underlying specific dimensions of impulsiveness. Motor impulsiveness appears to be specifically related to striatal and midcingulate network activity, in contrast to a domain-unspecific right frontoparietal system. Effects of age and gender have to be considered in young healthy samples.

  8. Neural circuits in anxiety and stress disorders: a focused review

    PubMed Central

    Duval, Elizabeth R; Javanbakht, Arash; Liberzon, Israel

    2015-01-01

    Anxiety and stress disorders are among the most prevalent neuropsychiatric disorders. In recent years, multiple studies have examined brain regions and networks involved in anxiety symptomatology in an effort to better understand the mechanisms involved and to develop more effective treatments. However, much remains unknown regarding the specific abnormalities and interactions between networks of regions underlying anxiety disorder presentations. We examined recent neuroimaging literature that aims to identify neural mechanisms underlying anxiety, searching for patterns of neural dysfunction that might be specific to different anxiety disorder categories. Across different anxiety and stress disorders, patterns of hyperactivation in emotion-generating regions and hypoactivation in prefrontal/regulatory regions are common in the literature. Interestingly, evidence of differential patterns is also emerging, such that within a spectrum of disorders ranging from more fear-based to more anxiety-based, greater involvement of emotion-generating regions is reported in panic disorder and specific phobia, and greater involvement of prefrontal regions is reported in generalized anxiety disorder and posttraumatic stress disorder. We summarize the pertinent literature and suggest areas for continued investigation. PMID:25670901

  9. Unsupervised learning of contextual constraints in neural networks for simultaneous visual processing of multiple objects

    NASA Astrophysics Data System (ADS)

    Marshall, Jonathan A.

    1992-12-01

    A simple self-organizing neural network model, called an EXIN network, is described that learns to process sensory information in a context-sensitive manner. EXIN networks develop efficient representation structures for higher-level visual tasks such as segmentation, grouping, transparency, depth perception, and size perception. Exposure to a perceptual environment during a developmental period serves to configure the network to perform appropriate organization of sensory data. A new anti-Hebbian inhibitory learning rule permits superposition of multiple simultaneous neural activations (multiple winners), while maintaining contextual consistency constraints, instead of forcing winner-take-all pattern classifications. The activations can represent multiple patterns simultaneously and can represent uncertainty. The network performs parallel parsing, credit attribution, and simultaneous constraint satisfaction. EXIN networks can learn to represent multiple oriented edges even where they intersect and can learn to represent multiple transparently overlaid surfaces defined by stereo or motion cues. In the case of stereo transparency, the inhibitory learning implements a uniqueness constraint yet permits coactivation of cells representing multiple disparities at the same image location. Thus two or more disparities can be active simultaneously without interference. This behavior is analogous to that of Prazdny's stereo vision algorithm, with the bonus that each binocular point is assigned a unique disparity. In a large implementation, such a network would also be able to effectively represent the disparities of a cloud of points at random depths, like human observers, and unlike Prazdny's method.

  10. Data fusion with artificial neural networks (ANN) for classification of earth surface from microwave satellite measurements

    NASA Technical Reports Server (NTRS)

    Lure, Y. M. Fleming; Grody, Norman C.; Chiou, Y. S. Peter; Yeh, H. Y. Michael

    1993-01-01

    A data fusion system with artificial neural networks (ANN) is used for fast and accurate classification of five earth surface conditions and surface changes, based on seven SSMI multichannel microwave satellite measurements. The measurements include brightness temperatures at 19, 22, 37, and 85 GHz at both H and V polarizations (only V at 22 GHz). The seven channel measurements are processed through a convolution computation such that all measurements are located at same grid. Five surface classes including non-scattering surface, precipitation over land, over ocean, snow, and desert are identified from ground-truth observations. The system processes sensory data in three consecutive phases: (1) pre-processing to extract feature vectors and enhance separability among detected classes; (2) preliminary classification of Earth surface patterns using two separate and parallely acting classifiers: back-propagation neural network and binary decision tree classifiers; and (3) data fusion of results from preliminary classifiers to obtain the optimal performance in overall classification. Both the binary decision tree classifier and the fusion processing centers are implemented by neural network architectures. The fusion system configuration is a hierarchical neural network architecture, in which each functional neural net will handle different processing phases in a pipelined fashion. There is a total of around 13,500 samples for this analysis, of which 4 percent are used as the training set and 96 percent as the testing set. After training, this classification system is able to bring up the detection accuracy to 94 percent compared with 88 percent for back-propagation artificial neural networks and 80 percent for binary decision tree classifiers. 
The neural network data fusion classification system is currently being integrated into an image processing system at NOAA and implemented in a prototype of a massively parallel and dynamically reconfigurable Modular Neural Ring (MNR).
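Phase (3) of such a system can be reduced, in its simplest form, to combining the class-probability outputs of the two preliminary classifiers. The following sketch (hypothetical outputs and equal weights, not the record's trained fusion network) illustrates the idea with a weighted-average fusion rule:

```python
import numpy as np

def fuse_classifiers(probs_nn, probs_tree, weights=(0.5, 0.5)):
    """Fuse class-probability outputs of two preliminary classifiers by a
    weighted average, then pick the most likely class per sample."""
    fused = weights[0] * np.asarray(probs_nn) + weights[1] * np.asarray(probs_tree)
    return fused.argmax(axis=1)

# The two classifiers disagree on sample 1; fusion resolves the conflict
# in favor of the more confident classifier.
nn_out   = np.array([[0.9, 0.1], [0.4, 0.6]])
tree_out = np.array([[0.8, 0.2], [0.7, 0.3]])
labels = fuse_classifiers(nn_out, tree_out)
```

A trained fusion layer would learn non-uniform, class-dependent weights rather than a fixed average.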

  11. Differentiating malignant from benign breast tumors on acoustic radiation force impulse imaging using fuzzy-based neural networks with principal component analysis

    NASA Astrophysics Data System (ADS)

    Liu, Hsiao-Chuan; Chou, Yi-Hong; Tiu, Chui-Mei; Hsieh, Chi-Wen; Liu, Brent; Shung, K. Kirk

    2017-03-01

Many modalities have been developed as screening tools for breast cancer. A newer method, acoustic radiation force impulse (ARFI) imaging, was created for distinguishing breast lesions based on localized tissue displacement. This displacement is quantified by virtual touch tissue imaging (VTI). However, VTIs sometimes express results contrary to the intensity information seen in clinical observation. In this study, a fuzzy-based neural network with principal component analysis (PCA) was proposed to differentiate the texture patterns of malignant from benign breast tumors. Eighty VTIs were retrospectively and randomly collected. Thirty-four patients were rated as BI-RADS category 2 or 3, and the rest as BI-RADS category 4 or 5, by two leading radiologists. Morphological operations and Boolean algebra were used as image preprocessing to acquire regions of interest (ROIs) on the VTIs. Twenty-four quantitative parameters derived from first-order statistics (FOS), fractal dimension, and the gray level co-occurrence matrix (GLCM) were utilized to analyze the texture patterns of the breast tumors on VTIs. PCA was employed to reduce the dimension of the features, and a fuzzy-based neural network served as the classifier to differentiate malignant from benign breast tumors. An independent-samples test was used to examine the significance of the differences between benign and malignant breast tumors. The area Az under the receiver operating characteristic (ROC) curve, sensitivity, specificity, and accuracy were calculated to evaluate the performance of the system. Almost all texture parameters showed a significant difference between malignant and benign tumors (p < 0.05), the exception being the average fractal dimension. For all features classified by the fuzzy-based neural network, the sensitivity, specificity, accuracy and Az were 95.7%, 97.1%, 95% and 0.964, respectively. 
However, the sensitivity, specificity, accuracy and Az increased to 100%, 97.1%, 98.8% and 0.985, respectively, when PCA was performed to reduce the dimension of the features. Patterns of breast tumors on VTIs can be effectively recognized by quantitative texture parameters, and malignant lesions can be differentiated from benign ones by a fuzzy-based neural network with PCA.
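The PCA step described above can be sketched as follows, using a synthetic stand-in for the 80 VTIs x 24 texture features (the data and component count here are illustrative, not the study's):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors onto their top principal components
    (computed via SVD of the mean-centered data)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 24))   # stand-in: 80 ROIs x 24 texture features
Z = pca_reduce(X, 5)            # keep 5 components (illustrative choice)
```

The reduced vectors Z would then be fed to the classifier in place of the raw 24-dimensional feature vectors.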

  12. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks

    PubMed Central

    Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo

    2015-01-01

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. 
Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns. PMID:26291608
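The plasticity rule described above can be written compactly. The sketch below is a minimal scalar version with assumed threshold and learning-rate values; the actual simulations operate on the full local fields of a recurrent binary network:

```python
def three_threshold_update(w, h, active, lr=0.1,
                           theta_low=-1.0, theta_mid=0.0, theta_high=1.0):
    """Three-threshold rule (sketch): a synapse with an active presynaptic
    input changes only when the postsynaptic local field h lies strictly
    between the outer thresholds; the sign of the change depends on which
    side of the intermediate threshold h falls."""
    if not active or h >= theta_high or h <= theta_low:
        return w                            # outside the plasticity window
    return w + lr if h > theta_mid else w - lr
```

Inactive inputs and fields outside the window leave the weight untouched, which is what makes the rule purely local.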

  13. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.

    PubMed

    Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo

    2015-08-01

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. 
Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.

  14. Self-organization in neural networks - Applications in structural optimization

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat; Fu, B.; Berke, Laszlo

    1993-01-01

    The present paper discusses the applicability of ART (Adaptive Resonance Theory) networks, and the Hopfield and Elastic networks, in problems of structural analysis and design. A characteristic of these network architectures is the ability to classify patterns presented as inputs into specific categories. The categories may themselves represent distinct procedural solution strategies. The paper shows how this property can be adapted in the structural analysis and design problem. A second application is the use of Hopfield and Elastic networks in optimization problems. Of particular interest are problems characterized by the presence of discrete and integer design variables. The parallel computing architecture that is typical of neural networks is shown to be effective in such problems. Results of preliminary implementations in structural design problems are also included in the paper.

  15. Effects of Nerve Injury and Segmental Regeneration on the Cellular Correlates of Neural Morphallaxis

    PubMed Central

    Martinez, Veronica G.; Manson, Josiah M.B.; Zoran, Mark J.

    2009-01-01

    Functional recovery of neural networks after injury requires a series of signaling events similar to the embryonic processes that governed initial network construction. Neural morphallaxis, a form of nervous system regeneration, involves reorganization of adult neural connectivity patterns. Neural morphallaxis in the worm, Lumbriculus variegatus, occurs during asexual reproduction and segmental regeneration, as body fragments acquire new positional identities along the anterior–posterior axis. Ectopic head (EH) formation, induced by ventral nerve cord lesion, generated morphallactic plasticity including the reorganization of interneuronal sensory fields and the induction of a molecular marker of neural morphallaxis. Morphallactic changes occurred only in segments posterior to an EH. Neither EH formation, nor neural morphallaxis was observed after dorsal body lesions, indicating a role for nerve cord injury in morphallaxis induction. Furthermore, a hierarchical system of neurobehavioral control was observed, where anterior heads were dominant and an EH controlled body movements only in the absence of the anterior head. Both suppression of segmental regeneration and blockade of asexual fission, after treatment with boric acid, disrupted the maintenance of neural morphallaxis, but did not block its induction. Therefore, segmental regeneration (i.e., epimorphosis) may not be required for the induction of morphallactic remodeling of neural networks. However, on-going epimorphosis appears necessary for the long-term consolidation of cellular and molecular mechanisms underlying the morphallaxis of neural circuitry. PMID:18561185

  16. Proposal for an All-Spin Artificial Neural Network: Emulating Neural and Synaptic Functionalities Through Domain Wall Motion in Ferromagnets.

    PubMed

    Sengupta, Abhronil; Shim, Yong; Roy, Kaushik

    2016-12-01

Non-Boolean computing based on emerging post-CMOS technologies can potentially pave the way for low-power neural computing platforms. However, existing work on such emerging neuromorphic architectures has focused on mimicking either the neuron or the synapse functionality, but not both. While memristive devices have been proposed to emulate biological synapses, spintronic devices have proved to be efficient at performing the thresholding operation of the neuron at ultra-low currents. In this work, we propose an All-Spin Artificial Neural Network in which a single spintronic device acts as the basic building block of the system. The device offers a direct mapping to synapse and neuron functionalities in the brain, while inter-layer network communication is accomplished via CMOS transistors. To the best of our knowledge, this is the first demonstration of a neural architecture where a single nanoelectronic device is able to mimic both neurons and synapses. The ultra-low voltage operation of low-resistance magneto-metallic neurons enables the low-voltage operation of the array of spintronic synapses, thereby leading to ultra-low power neural architectures. Device-level simulations, calibrated to experimental results, were used to drive the circuit- and system-level simulations of the neural network for a standard pattern recognition problem. Simulation studies indicate energy savings of ∼100× in comparison to a corresponding digital/analog CMOS neuron implementation.

  17. Emergent latent symbol systems in recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Monner, Derek; Reggia, James A.

    2012-12-01

    Fodor and Pylyshyn [(1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2), 3-71] famously argued that neural networks cannot behave systematically short of implementing a combinatorial symbol system. A recent response from Frank et al. [(2009). Connectionist semantic systematicity. Cognition, 110(3), 358-379] claimed to have trained a neural network to behave systematically without implementing a symbol system and without any in-built predisposition towards combinatorial representations. We believe systems like theirs may in fact implement a symbol system on a deeper and more interesting level: one where the symbols are latent - not visible at the level of network structure. In order to illustrate this possibility, we demonstrate our own recurrent neural network that learns to understand sentence-level language in terms of a scene. We demonstrate our model's learned understanding by testing it on novel sentences and scenes. By paring down our model into an architecturally minimal version, we demonstrate how it supports combinatorial computation over distributed representations by using the associative memory operations of Vector Symbolic Architectures. Knowledge of the model's memory scheme gives us tools to explain its errors and construct superior future models. We show how the model designs and manipulates a latent symbol system in which the combinatorial symbols are patterns of activation distributed across the layers of a neural network, instantiating a hybrid of classical symbolic and connectionist representations that combines advantages of both.
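The associative memory operations referred to above can be illustrated with one standard Vector Symbolic Architecture, Holographic Reduced Representations, where binding is circular convolution. This is a generic sketch, not the specific memory scheme learned by the authors' network:

```python
import numpy as np

def bind(a, b):
    """Binding by circular convolution (computed via FFT)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, a):
    """Approximate unbinding: circular correlation with the role vector."""
    return np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(a))))

rng = np.random.default_rng(1)
n = 512
role, filler = rng.normal(0.0, 1.0 / np.sqrt(n), size=(2, n))
trace = bind(role, filler)          # a distributed role-filler pair
recovered = unbind(trace, role)     # noisy reconstruction of the filler
```

The reconstruction is noisy but far more similar to the original filler than to chance, which is what lets distributed patterns act as combinatorial symbols.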

  18. Numerical solution of differential equations by artificial neural networks

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J., Jr.

    1995-01-01

    Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks (ANN's) are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time consuming and unpredictable process. A general method is being developed by the author to mate the adaptability of the ANN with the speed and precision of the digital computer. This method has been successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.
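The idea of solving a differential equation without training examples can be sketched with a linear-in-parameters network fit in a single least-squares step, in the spirit of the single-iteration training mentioned above. Here a radial-basis expansion (an assumed architecture, not necessarily the author's) solves y' = -y, y(0) = 1 by collocation:

```python
import numpy as np

# Trial solution y(x) = 1 + x * sum_j c_j phi_j(x), so y(0) = 1 holds by
# construction; fit c by least squares on the residual of y' = -y at
# collocation points (a single linear solve, no training examples).
def gaussian(x, centers, width=0.3):
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

def dgaussian(x, centers, width=0.3):
    d = x[:, None] - centers[None, :]
    return -2.0 * d / width ** 2 * gaussian(x, centers, width)

x = np.linspace(0.0, 1.0, 50)
centers = np.linspace(0.0, 1.0, 10)
phi, dphi = gaussian(x, centers), dgaussian(x, centers)
# Residual y' + y = (phi + x*dphi + x*phi) @ c + 1; set it to zero:
A = phi + x[:, None] * dphi + x[:, None] * phi
c, *_ = np.linalg.lstsq(A, -np.ones_like(x), rcond=None)
y = 1.0 + x * (phi @ c)             # approximates exp(-x) on [0, 1]
```

Because the initial condition is built into the trial solution, the fit only has to satisfy the differential equation itself.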

  19. Use of pattern recognition and neural networks for non-metric sex diagnosis from lateral shape of calvarium: an innovative model for computer-aided diagnosis in forensic and physical anthropology.

    PubMed

    Cavalli, Fabio; Lusnig, Luca; Trentin, Edmondo

    2017-05-01

Sex determination from skeletal remains is one of the most important diagnoses in forensic cases and in demographic studies of ancient populations. Our purpose is to realize an automatic, operator-independent method to determine sex from bone shape and to test an intelligent, automatic pattern recognition system in an anthropological domain. Our multiple-classifier system is based exclusively on the morphological variants of a curve that represents the sagittal profile of the calvarium, modeled via artificial neural networks, and yields an accuracy higher than 80%. The application of this system to other bone profiles is expected to further improve the sensitivity of the methodology.

  20. Differential theory of learning for efficient neural network pattern recognition

    NASA Astrophysics Data System (ADS)

    Hampshire, John B., II; Vijaya Kumar, Bhagavatula

    1993-09-01

We describe a new theory of differential learning by which a broad family of pattern classifiers (including many well-known neural network paradigms) can learn stochastic concepts efficiently. We describe the relationship between a classifier's ability to generalize well to unseen test examples and the efficiency of the strategy by which it learns. We list a series of proofs that differential learning is efficient in its information and computational resource requirements, whereas traditional probabilistic learning strategies are not. The proofs are illustrated by a simple example that lends itself to closed-form analysis. We conclude with an optical character recognition task for which three different types of differentially generated classifiers generalize significantly better than their probabilistically generated counterparts.

  1. Differential theory of learning for efficient neural network pattern recognition

    NASA Astrophysics Data System (ADS)

    Hampshire, John B., II; Vijaya Kumar, Bhagavatula

    1993-08-01

    We describe a new theory of differential learning by which a broad family of pattern classifiers (including many well-known neural network paradigms) can learn stochastic concepts efficiently. We describe the relationship between a classifier's ability to generalize well to unseen test examples and the efficiency of the strategy by which it learns. We list a series of proofs that differential learning is efficient in its information and computational resource requirements, whereas traditional probabilistic learning strategies are not. The proofs are illustrated by a simple example that lends itself to closed-form analysis. We conclude with an optical character recognition task for which three different types of differentially generated classifiers generalize significantly better than their probabilistically generated counterparts.

  2. Improved GART neural network model for pattern classification and rule extraction with application to power systems.

    PubMed

    Yap, Keem Siah; Lim, Chee Peng; Au, Mau Teng

    2011-12-01

Generalized adaptive resonance theory (GART) is a neural network model that is capable of online learning and is effective in tackling pattern classification tasks. In this paper, we propose an improved GART model (IGART) and demonstrate its applicability to power systems. IGART enhances the dynamics of GART in several aspects, including the use of the Laplacian likelihood function, a new vigilance function, a new match-tracking mechanism, an ordering algorithm for determining the sequence of training data, and a rule extraction capability to elicit if-then rules from the network. To assess the effectiveness of IGART and to compare its performance with that of other methods, three datasets related to power systems are employed. The experimental results demonstrate the usefulness of IGART, with its rule extraction capability, in undertaking classification problems in power systems engineering.
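The vigilance mechanism central to ART-family models such as GART can be sketched as a match test between an input and a stored category prototype (the values here are illustrative, and IGART's actual vigilance function differs in detail):

```python
import numpy as np

def vigilance_check(input_vec, prototype, rho):
    """ART-family match test (sketch): accept a category only when the
    fuzzy-AND overlap between input and prototype, relative to the
    input's magnitude, reaches the vigilance level rho."""
    overlap = np.minimum(input_vec, prototype).sum()
    return bool(overlap / input_vec.sum() >= rho)

x = np.array([0.8, 0.2, 0.6])       # input pattern
w = np.array([0.7, 0.1, 0.9])       # stored category prototype
# Match ratio here is 1.4 / 1.6 = 0.875.
```

When the test fails, ART-style match tracking searches for (or creates) another category, which is how new classes are learned online.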

  3. Unsupervised learning of temporal features for word categorization in a spiking neural network model of the auditory brain.

    PubMed

    Higgins, Irina; Stringer, Simon; Schnupp, Jan

    2017-01-01

    The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-time dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable.
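The STDP learning referred to above is commonly modeled with an exponentially decaying, sign-asymmetric window. A generic pair-based sketch (the parameter values are assumptions, not the model's):

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP window (sketch): potentiate when the presynaptic
    spike precedes the postsynaptic spike (delta_t = t_post - t_pre > 0),
    depress otherwise; magnitude decays exponentially with |delta_t|."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)
```

This spike-timing sensitivity is what allows synapses to latch onto stable, repeating spatio-temporal spike patterns such as PGs.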

  4. Unsupervised learning of temporal features for word categorization in a spiking neural network model of the auditory brain

    PubMed Central

    Stringer, Simon

    2017-01-01

    The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-time dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable. PMID:28797034

  5. A text-based data mining and toxicity prediction modeling system for a clinical decision support in radiation oncology: A preliminary study

    NASA Astrophysics Data System (ADS)

    Kim, Kwang Hyeon; Lee, Suk; Shim, Jang Bo; Chang, Kyung Hwan; Yang, Dae Sik; Yoon, Won Sup; Park, Young Je; Kim, Chul Yong; Cao, Yuan Jie

    2017-08-01

The aim of this preliminary study is integrated research on a text-based data mining and toxicity prediction modeling system for clinical decision support based on big data in radiation oncology. The structured data were prepared from treatment plans, and the unstructured data were extracted by image pattern recognition of dose-volume data for prostate cancer from research articles crawled from the internet. We modeled an artificial neural network to build a predictor system for toxicity of organs at risk, using the text-based data mining approach to build the model for bladder and rectum complication predictions. The pattern recognition method was used to mine the unstructured toxicity data for dose-volume with a detection accuracy of 97.9%. The confusion matrix and training model of the neural network were evaluated with 50 modeled plans (n = 50) for validation. The toxicity level was analyzed and the risk factors for 25% bladder, 50% bladder, 20% rectum, and 50% rectum were calculated by the artificial neural network algorithm. As a result, among the 50 modeled plans, 32 could cause complications while 18 were designed as non-complication plans. We integrated data mining and a toxicity modeling method for toxicity prediction using prostate cancer cases. It is shown that a preprocessing analysis using text-based data mining and prediction modeling can be expanded to personalized patient treatment decision support based on big data.
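The validation step described above rests on a 2x2 confusion matrix. A minimal sketch with made-up labels (not the study's 50 plans):

```python
import numpy as np

def confusion_stats(y_true, y_pred):
    """2x2 confusion counts plus the summary metrics commonly used to
    validate a binary complication predictor."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / len(y_true)}

# Made-up validation labels: 1 = complication, 0 = no complication.
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])
stats = confusion_stats(y_true, y_pred)
```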

  6. Geophysical phenomena classification by artificial neural networks

    NASA Technical Reports Server (NTRS)

    Gough, M. P.; Bruckner, J. R.

    1995-01-01

Space science information systems involve accessing vast databases. There is a need for an automatic process by which properties of the whole data set can be assimilated and presented to the user. Where data are in the form of spectrograms, phenomena can be detected by pattern recognition techniques. Presented are the first results obtained by applying unsupervised Artificial Neural Networks (ANN's) to the classification of magnetospheric wave spectra. The networks used here were a simple unsupervised Hamming network run on a PC and a more sophisticated CALM network run on a Sparc workstation. The ANN's were compared in their geophysical data recognition performance. CALM networks offer such qualities as fast learning, superiority in generalizing, the ability to continuously adapt to changes in the pattern set, and the possibility of modularizing the network to allow the inter-relation between phenomena and data sets. This work is the first step toward an information system interface being developed at Sussex, the Whole Information System Expert (WISE). Phenomena in the data are automatically identified and provided to the user in the form of a data occurrence morphology, the Whole Information System Data Occurrence Morphology (WISDOM), along with relationships to other parameters and phenomena.
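A Hamming network like the one mentioned above classifies a bipolar input by finding the stored exemplar at minimum Hamming distance, which for bipolar vectors is equivalent to maximizing the inner product. A minimal sketch with hypothetical prototypes:

```python
import numpy as np

def hamming_classify(x, prototypes):
    """Hamming-network-style matching (sketch): for bipolar vectors the
    inner product equals n - 2 * HammingDistance, so the highest-scoring
    stored prototype is the nearest one."""
    return int(np.argmax(prototypes @ x))

protos = np.array([[ 1,  1,  1, -1],
                   [-1, -1,  1,  1]])     # two stored spectral classes
```

A full Hamming network computes these scores in a feedforward layer and selects the winner with a competitive (MAXNET) layer; the matrix product above is the functional core.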

  7. Ground-state coding in partially connected neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1989-01-01

Patterns over (-1,0,1) define, by their outer products, partially connected neural networks, consisting of internally strongly connected, externally weakly connected subnetworks. The connectivity patterns may have highly organized structures, such as lattices and fractal trees or nests. Subpatterns over (-1,1) define the subcodes stored in the subnetworks, which agree in their common bits. It is first shown that the code words are locally stable states of the network, provided that each of the subcodes consists of mutually orthogonal words or of, at most, two words. Then it is shown that if each of the subcodes consists of two orthogonal words, the code words are the unique ground states (absolute minima) of the Hamiltonian associated with the network. The regions of attraction associated with the code words are shown to grow with the number of subnetworks sharing each of the neurons. Depending on the particular network architecture, the code sizes of partially connected networks can be vastly greater than those of fully connected ones, and their error correction capabilities can be significantly greater than those of the disconnected subnetworks. The codes associated with lattice-structured and hierarchical networks are discussed in some detail.
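The outer-product construction referred to above can be sketched in its classic, fully connected form (the record's networks are block-structured variants of this):

```python
import numpy as np

def outer_product_weights(patterns):
    """Hebbian outer-product storage; self-connections are kept here so
    that a single synchronous update suffices in this tiny example."""
    n = patterns.shape[1]
    return patterns.T @ patterns / n

def recall(W, x, steps=5):
    """Iterate the bipolar threshold dynamics from an initial state."""
    for _ in range(steps):
        x = np.where(W @ x > 0, 1, -1)
    return x

pats = np.array([[1, 1, -1, -1, 1, -1],
                 [1, -1, 1, -1, 1, 1]])   # two mutually orthogonal words
W = outer_product_weights(pats)
noisy = pats[0].copy()
noisy[0] = -noisy[0]                      # corrupt one bit
```

Stored orthogonal words are stable states of the dynamics, and a one-bit corruption falls inside the first word's region of attraction.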

  8. Intelligent model-based OPC

    NASA Astrophysics Data System (ADS)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.

    2006-03-01

Optical proximity correction is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model is needed to predict the edge position (contour) of patterns on the wafer after lithographic processing. Generally, segmentation of edges is performed prior to the correction. Pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initial drawn position, assisted by the lithographic model, to finally settle on the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time increases with the number of required iterations. An artificial neural network is an information-processing paradigm inspired by biological nervous systems, such as the way the brain processes information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool that is able to capture and represent complex input/output relationships. The network can accurately predict the behavior of a system via the learning procedure. A radial basis function network, a variant of the artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from the segment characteristics to the edge shift from the drawn position. This network can provide a good initial guess for each segment on which OPC is carried out. The good initial guess reduces the required iterations; consequently, cycle time can be shortened effectively. 
The optimization of the radial basis function network for this system was performed by a genetic algorithm, an artificially intelligent optimization method with a high probability of obtaining the global optimum. From preliminary results, the required iterations were reduced from 5 to 2 for a simple dumbbell-shaped layout.
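The role of the radial basis function network here, mapping segment characteristics to an initial edge-shift guess, can be sketched with synthetic data (the features, target function, and basis width below are assumptions, not the paper's):

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian radial basis design matrix."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(2)
X = rng.uniform(size=(200, 3))             # hypothetical segment features
y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1]  # stand-in for measured edge shift
centers = X[rng.choice(200, size=20, replace=False)]
Phi = rbf_design(X, centers, width=0.4)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # fit output weights
pred = Phi @ w                                # initial-guess predictor
```

Because the output weights are fit by a single linear solve, such a network is cheap to retrain as the lithographic model evolves; the genetic algorithm in the paper tunes the remaining free parameters (centers and widths).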

  9. Identification of the connections in biologically inspired neural networks

    NASA Technical Reports Server (NTRS)

    Demuth, H.; Leung, K.; Beale, M.; Hicklin, J.

    1990-01-01

We developed an identification method to find the strength of the connections between neurons from their behavior in small biologically-inspired artificial neural networks. That is, given the network external inputs and the temporal firing pattern of the neurons, we can calculate a solution for the strengths of the connections between neurons and the initial neuron activations, if a solution exists. The method determines directly whether there is a solution to a particular neural network problem. No training of the network is required. It should be noted that this is a first pass at the solution of a difficult problem. The neuron and network models chosen are related to biology but do not contain all of its complexities, some of which we hope to add to the model in future work. A variety of new results have been obtained. First, the method has been tailored to produce connection weight matrix solutions for networks with important features of biological neural (bioneural) networks. Second, a computationally efficient method of finding a robust central solution has been developed. This latter method also enables us to find the most consistent solution in the presence of noisy data. Prospects of applying our method to identify bioneural network connections are exciting because such connections are almost impossible to measure in the laboratory. Knowledge of such connections would facilitate an understanding of bioneural networks and would allow the construction of the electronic counterparts of bioneural networks on very large scale integrated (VLSI) circuits.
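The flavor of such an identification problem can be sketched under a strong simplifying assumption: if the network obeys a linear rate model, every observed time step is a linear equation in the unknown connection strengths, so the weights follow from least squares (the authors' neuron model is biologically richer than this):

```python
import numpy as np

# Assume a linear rate model r(t+1) = W r(t) + u(t): each observed step
# then gives n linear equations in the unknown weight matrix W, and the
# whole trajectory is solved at once by least squares (no training).
rng = np.random.default_rng(3)
n, T = 4, 60
W_true = rng.normal(0.0, 0.3, size=(n, n))
r = np.zeros((T, n))
r[0] = rng.normal(size=n)
u = rng.normal(size=(T, n))                  # known external inputs
for t in range(T - 1):
    r[t + 1] = W_true @ r[t] + u[t]
# r[1:] - u[:-1] = r[:-1] @ W.T  =>  solve for W.T by least squares.
W_est = np.linalg.lstsq(r[:-1], r[1:] - u[:-1], rcond=None)[0].T
```

With noiseless observations the recovery is exact; with noisy data, the least-squares solution plays the role of the "most consistent" solution the abstract mentions.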

  10. A Recurrent Network Model of Somatosensory Parametric Working Memory in the Prefrontal Cortex

    PubMed Central

    Miller, Paul; Brody, Carlos D; Romo, Ranulfo; Wang, Xiao-Jing

    2015-01-01

    A parametric working memory network stores the information of an analog stimulus in the form of persistent neural activity that is monotonically tuned to the stimulus. The family of persistent firing patterns with a continuous range of firing rates must all be realizable under exactly the same external conditions (during the delay when the transient stimulus is withdrawn). How this can be accomplished by neural mechanisms remains an unresolved question. Here we present a recurrent cortical network model of irregularly spiking neurons that was designed to simulate a somatosensory working memory experiment with behaving monkeys. Our model reproduces the observed positively and negatively monotonic persistent activity, and heterogeneous tuning curves of memory activity. We show that fine-tuning mathematically corresponds to a precise alignment of cusps in the bifurcation diagram of the network. Moreover, we show that the fine-tuned network can integrate stimulus inputs over several seconds. Assuming that such time integration occurs in neural populations downstream from a tonically persistent neural population, our model is able to account for the slow ramping-up and ramping-down behaviors of neurons observed in prefrontal cortex. PMID:14576212

  11. Sensitivity Analysis for Probabilistic Neural Network Structure Reduction.

    PubMed

    Kowalski, Piotr A; Kusy, Maciej

    2018-05-01

    In this paper, we propose the use of local sensitivity analysis (LSA) for the structure simplification of the probabilistic neural network (PNN). Three algorithms are introduced. The first algorithm applies LSA to the PNN input layer reduction by selecting significant features of input patterns. The second algorithm utilizes LSA to remove redundant pattern neurons of the network. The third algorithm combines the two preceding ones and shows how they can work together. A PNN with a product kernel estimator is used, where each multiplicand computes a one-dimensional Cauchy function. Therefore, the smoothing parameter is calculated separately for each dimension by means of the plug-in method. The classification qualities of the reduced and full structure PNN are compared. Furthermore, we evaluate the performance of PNN, for which global sensitivity analysis (GSA) and the common reduction methods are applied, both in the input layer and the pattern layer. The models are tested on the classification problems of eight repository data sets. A 10-fold cross-validation procedure is used to determine the prediction ability of the networks. Based on the obtained results, it is shown that the LSA can be used as an alternative PNN reduction approach.
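    A minimal sketch of the PNN structure described above, using the product-form Cauchy kernel with one smoothing parameter per dimension. The sensitivity analysis itself is not reproduced; instead the example illustrates the pattern-layer redundancy that such reduction exploits. The data and smoothing parameters are illustrative stand-ins (the paper computes the latter with the plug-in method):

    ```python
    import random

    random.seed(1)

    # Product-form Cauchy kernel: one multiplicand per input dimension.
    def cauchy_product_kernel(x, c, h):
        k = 1.0
        for xd, cd, hd in zip(x, c, h):
            k *= 1.0 / (1.0 + ((xd - cd) / hd) ** 2)
        return k

    def pnn_predict(x, patterns, h):
        # summation layer: average kernel activation of each class's pattern neurons
        scores = {c: sum(cauchy_product_kernel(x, p, h) for p in ps) / len(ps)
                  for c, ps in patterns.items()}
        return max(scores, key=scores.get)

    # two well-separated synthetic classes, 20 pattern neurons each
    patterns = {
        0: [[random.gauss(-2, 0.3), random.gauss(0, 0.3)] for _ in range(20)],
        1: [[random.gauss(+2, 0.3), random.gauss(0, 0.3)] for _ in range(20)],
    }
    h = [0.5, 0.5]  # illustrative per-dimension smoothing parameters

    # drop every second pattern neuron: predictions on clear cases are unchanged,
    # which is the redundancy a sensitivity-guided reduction would remove
    reduced = {c: ps[::2] for c, ps in patterns.items()}
    for x in ([-2.0, 0.0], [2.0, 0.0]):
        print(pnn_predict(x, patterns, h), pnn_predict(x, reduced, h))
    ```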

  12. Speed and segmentation control mechanisms characterized in rhythmically-active circuits created from spinal neurons produced from genetically-tagged embryonic stem cells

    PubMed Central

    Sternfeld, Matthew J; Hinckley, Christopher A; Moore, Niall J; Pankratz, Matthew T; Hilde, Kathryn L; Driscoll, Shawn P; Hayashi, Marito; Amin, Neal D; Bonanomi, Dario; Gifford, Wesley D; Sharma, Kamal; Goulding, Martyn; Pfaff, Samuel L

    2017-01-01

    Flexible neural networks, such as the interconnected spinal neurons that control distinct motor actions, can switch their activity to produce different behaviors. Both excitatory (E) and inhibitory (I) spinal neurons are necessary for motor behavior, but the influence of recruiting different ratios of E-to-I cells remains unclear. We constructed synthetic microphysical neural networks, called circuitoids, using precise combinations of spinal neuron subtypes derived from mouse stem cells. Circuitoids of purified excitatory interneurons were sufficient to generate oscillatory bursts with properties similar to in vivo central pattern generators. Inhibitory V1 neurons provided dual layers of regulation within excitatory rhythmogenic networks - they increased the rhythmic burst frequency of excitatory V3 neurons, and segmented excitatory motor neuron activity into sub-networks. Accordingly, the speed and pattern of spinal circuits that underlie complex motor behaviors may be regulated by quantitatively gating the intra-network cellular activity ratio of E-to-I neurons. DOI: http://dx.doi.org/10.7554/eLife.21540.001 PMID:28195039

  13. A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis.

    PubMed

    Hayashi, Hideaki; Shibanoki, Taro; Shima, Keisuke; Kurita, Yuichi; Tsuji, Toshio

    2015-12-01

    This paper proposes a probabilistic neural network (NN) developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns. TSDCA involves the compression of high-dimensional time series into a lower dimensional space using a set of orthogonal transformations and the calculation of posterior probabilities based on a continuous-density hidden Markov model with a Gaussian mixture model expressed in the reduced-dimensional space. The analysis can be incorporated into an NN, which is named a time-series discriminant component network (TSDCN), so that parameters of dimensionality reduction and classification can be obtained simultaneously as network coefficients according to a backpropagation through time-based learning algorithm with the Lagrange multiplier method. The TSDCN is considered to enable high-accuracy classification of high-dimensional time-series patterns and to reduce the computation time taken for network training. The validity of the TSDCN is demonstrated for high-dimensional artificial data and electroencephalogram signals in the experiments conducted during the study.

  14. Long-term solar UV radiation reconstructed by Artificial Neural Networks (ANN)

    NASA Astrophysics Data System (ADS)

    Feister, U.; Junk, J.; Woldt, M.

    2008-01-01

    Artificial Neural Networks (ANN) are efficient tools to derive solar UV radiation from measured meteorological parameters such as global radiation, aerosol optical depths and atmospheric column ozone. The ANN model has been tested with different combinations of data from the two sites Potsdam and Lindenberg, and used to reconstruct solar UV radiation at eight European sites back more than 100 years into the past. Annual totals of UV radiation derived from reconstructed daily UV values reflect interannual variations and long-term patterns that are compatible with variabilities and changes of the measured input data, in particular global dimming until about 1980-1990, subsequent global brightening, volcanic eruption effects such as that of Mt. Pinatubo, and the long-term ozone decline since the 1970s. Patterns of annual erythemal UV radiation are very similar at sites located at latitudes close to each other, but different patterns occur between UV radiation at sites in different latitude regions.

  15. Pattern recognition of visible and near-infrared spectroscopy from bayberry juice by use of partial least squares and a backpropagation neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cen Haiyan; Bao Yidan; He Yong

    2006-10-10

    Visible and near-infrared reflectance (visible-NIR) spectroscopy is applied to discriminate different varieties of bayberry juices. The discrimination of visible-NIR spectra from samples is a matter of pattern recognition. By partial least squares (PLS), the spectrum is reduced to certain factors, which are then taken as the input of the backpropagation neural network (BPNN). Through training and prediction, three different varieties of bayberry juice are classified based on the output of the BPNN. In addition, a mathematical model is built and the algorithm is optimized. With proper parameters in the training set, 100% accuracy is obtained by the BPNN. Thus it is concluded that the PLS analysis combined with the BPNN is an alternative for pattern recognition based on visible and NIR spectroscopy.

  16. Evidence for a distributed respiratory rhythm generating network in the goldfish (Carassius auratus).

    PubMed

    Duchcherer, Maryana; Kottick, Andrew; Wilson, R J A

    2010-01-01

    Central pattern generators located in the brainstem regulate ventilatory behaviors in vertebrates. The development of the isolated brainstem preparation has allowed these neural networks to be characterized in a number of aquatic species. The aim of this study was to explore the architecture of the respiratory rhythm-generating site in the goldfish (Carassius auratus) and to determine the utility of a newly developed isolated brainstem preparation, the Sheep Dip. Here we provide evidence for a distributed organization of respiratory rhythm generating neurons along the rostrocaudal axis of the goldfish brainstem and outline the advantages of the Sheep Dip as a tool used to survey neural networks.

  17. Salient regions detection using convolutional neural networks and color volume

    NASA Astrophysics Data System (ADS)

    Liu, Guang-Hai; Hou, Yingkun

    2018-03-01

    Convolutional neural network is an important technique in machine learning, pattern recognition and image processing. In order to reduce the computational burden and extend the classical LeNet-5 model to the field of saliency detection, we propose a simple and novel computing model based on LeNet-5 network. In the proposed model, hue, saturation and intensity are utilized to extract depth cues, and then we integrate depth cues and color volume to saliency detection following the basic structure of the feature integration theory. Experimental results show that the proposed computing model outperforms some existing state-of-the-art methods on MSRA1000 and ECSSD datasets.

  18. Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks.

    PubMed

    Martin Cichy, Radoslaw; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude

    2017-06-01

    Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model and tracked the emerging neural representations of scene size. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  19. The ART of representation: Memory reduction and noise tolerance in a neural network vision system

    NASA Astrophysics Data System (ADS)

    Langley, Christopher S.

    The Feature Cerebellar Model Arithmetic Computer (FCMAC) is a multiple-input-single-output neural network that can provide three-degree-of-freedom (3-DOF) pose estimation for a robotic vision system. The FCMAC provides sufficient accuracy to enable a manipulator to grasp an object from an arbitrary pose within its workspace. The network learns an appearance-based representation of an object by storing coarsely quantized feature patterns. As all unique patterns are encoded, the network size grows uncontrollably. A new architecture is introduced herein, which combines the FCMAC with an Adaptive Resonance Theory (ART) network. The ART module categorizes patterns observed during training into a set of prototypes that are used to build the FCMAC. As a result, the network no longer grows without bound, but constrains itself to a user-specified size. Pose estimates remain accurate since the ART layer tends to discard the least relevant information first. The smaller network performs recall faster, and in some cases is better for generalization, resulting in a reduction of error at recall time. The ART-Under-Constraint (ART-C) algorithm is extended to include initial filling with randomly selected patterns (referred to as ART-F). In experiments using a real-world data set, the new network performed equally well using less than one tenth the number of coarse patterns as a regular FCMAC. The FCMAC is also extended to include real-valued input activations. As a result, the network can be tuned to reject a variety of types of noise in the image feature detection. A quantitative analysis of noise tolerance was performed using four synthetic noise algorithms, and a qualitative investigation was made using noisy real-world image data. 
In validation experiments, the FCMAC system outperformed Radial Basis Function (RBF) networks for the 3-DOF problem, and had accuracy comparable to that of Principal Component Analysis (PCA) and superior to that of Shape Context Matching (SCM), both of which estimate orientation only.
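    The ART layer described above categorizes observed patterns into a bounded set of prototypes. The following is a simplified, distance-based sketch of vigilance-gated categorization with a hard size cap in the spirit of ART-C; it is not the FCMAC/ART-F implementation, and the match function, learning rate, and capacity are invented for illustration:

    ```python
    import math

    class ARTCategorizer:
        """ART-like categorizer: vigilance test plus a hard prototype cap."""

        def __init__(self, vigilance=0.8, capacity=10, lr=0.5):
            self.vigilance, self.capacity, self.lr = vigilance, capacity, lr
            self.prototypes = []

        def _match(self, x, p):
            # similarity in (0, 1]; 1 means identical
            return 1.0 / (1.0 + math.dist(x, p))

        def categorize(self, x):
            if self.prototypes:
                best = max(range(len(self.prototypes)),
                           key=lambda i: self._match(x, self.prototypes[i]))
                if self._match(x, self.prototypes[best]) >= self.vigilance:
                    # resonance: nudge the winning prototype toward the input
                    p = self.prototypes[best]
                    self.prototypes[best] = [pi + self.lr * (xi - pi)
                                             for pi, xi in zip(p, x)]
                    return best
            if len(self.prototypes) < self.capacity:
                self.prototypes.append(list(x))          # recruit a new category
                return len(self.prototypes) - 1
            # at capacity: fall back to the closest existing category
            return max(range(len(self.prototypes)),
                       key=lambda i: self._match(x, self.prototypes[i]))

    art = ARTCategorizer(vigilance=0.8, capacity=4)
    cats = [art.categorize(x) for x in
            [[0, 0], [0.1, 0], [5, 5], [5.1, 5], [9, 0], [0, 9], [4, 4]]]
    print(cats, len(art.prototypes))
    ```

    Nearby inputs share a category, the prototype count never exceeds the user-specified cap, and once the cap is reached new inputs are absorbed by their closest prototype, discarding the least relevant detail first.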

  20. Automated sleep stage detection with a classical and a neural learning algorithm--methodological aspects.

    PubMed

    Schwaibold, M; Schöchlin, J; Bolz, A

    2002-01-01

    For classification tasks in biosignal processing, several strategies and algorithms can be used. Knowledge-based systems allow prior knowledge about the decision process to be integrated, both by the developer and by self-learning capabilities. For the classification stages in a sleep stage detection framework, three inference strategies were compared regarding their specific strengths: a classical signal processing approach, artificial neural networks and neuro-fuzzy systems. Methodological aspects were assessed to attain optimum performance and maximum transparency for the user. Due to their effective and robust learning behavior, artificial neural networks could be recommended for pattern recognition, while neuro-fuzzy systems performed best for the processing of contextual information.

  1. The performance evaluation of a new neural network based traffic management scheme for a satellite communication network

    NASA Technical Reports Server (NTRS)

    Ansari, Nirwan; Liu, Dequan

    1991-01-01

    A neural-network-based traffic management scheme for a satellite communication network is described. The scheme consists of two levels of management. The front end of the scheme is a derivation of Kohonen's self-organization model to configure maps for the satellite communication network dynamically. The model consists of three stages. The first stage is the pattern recognition task, in which an exemplar map that best meets the current network requirements is selected. The second stage is the analysis of the discrepancy between the chosen exemplar map and the state of the network, and the adaptive modification of the chosen exemplar map to conform closely to the network requirement (input data pattern) by means of Kohonen's self-organization. In the third stage, on the basis of certain performance criteria, it is decided whether a new map is generated to replace the originally chosen map. A state-dependent routing algorithm, which assigns each incoming call to a proper path, is used to make the network more efficient and to lower the call block rate. Simulation results demonstrate that the scheme, which combines self-organization and the state-dependent routing mechanism, provides better performance in terms of call block rate than schemes that have only either the self-organization mechanism or the routing mechanism.
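    The Kohonen self-organization step used in the second stage follows the generic winner-take-all update with a neighborhood function. A minimal one-dimensional sketch of that generic rule (not the paper's exemplar-map selection logic; map size, learning rate, and data are invented):

    ```python
    import math, random

    random.seed(0)

    # 1-D Kohonen map: the winning unit and its map neighbors move toward
    # each input pattern, so the map conforms to the input distribution.
    n_units, dim = 10, 2
    weights = [[random.random(), random.random()] for _ in range(n_units)]

    def winner(x):
        return min(range(n_units), key=lambda i: math.dist(x, weights[i]))

    def update(x, lr=0.3, radius=2):
        w = winner(x)
        for i in range(n_units):
            h = math.exp(-((i - w) ** 2) / (2 * radius ** 2))  # neighborhood
            weights[i] = [wi + lr * h * (xi - wi)
                          for wi, xi in zip(weights[i], x)]

    # two synthetic "requirement" clusters standing in for input data patterns
    data = [[random.gauss(0.2, 0.05), random.gauss(0.8, 0.05)] for _ in range(50)] + \
           [[random.gauss(0.8, 0.05), random.gauss(0.2, 0.05)] for _ in range(50)]
    for _ in range(20):
        random.shuffle(data)
        for x in data:
            update(x)
    ```

    After training, different map units specialize for the two clusters, which is the mechanism the scheme relies on to adapt a chosen exemplar map toward the current network state.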

  2. Bio-inspired computational heuristics to study Lane-Emden systems arising in astrophysics model.

    PubMed

    Ahmad, Iftikhar; Raja, Muhammad Asif Zahoor; Bilal, Muhammad; Ashraf, Farooq

    2016-01-01

    This study reports novel hybrid computational methods for the solution of the nonlinear singular Lane-Emden type differential equation arising in astrophysics models, exploiting the strength of unsupervised neural network models and stochastic optimization techniques. In the scheme, the neural network, a subfield of soft computing, is exploited for modelling of the equation in an unsupervised manner. The proposed approximate solutions of the higher order ordinary differential equation are calculated with the weights of neural networks trained with a genetic algorithm, and pattern search hybridized with sequential quadratic programming for rapid local convergence. The results of the proposed solvers for the nonlinear singular systems are in good agreement with the standard solutions. Accuracy and convergence of the design schemes are demonstrated by the results of statistical performance measures based on a sufficiently large number of independent runs.

  3. Pseudo-orthogonalization of memory patterns for associative memory.

    PubMed

    Oku, Makito; Makino, Takaki; Aihara, Kazuyuki

    2013-11-01

    A new method for improving the storage capacity of associative memory models on a neural network is proposed. The storage capacity of the network increases in proportion to the network size in the case of random patterns, but, in general, the capacity suffers from correlation among memory patterns. Numerous solutions to this problem have been proposed so far, but their high computational cost limits their scalability. In this paper, we propose a novel and simple solution that is locally computable without any iteration. Our method involves XNOR masking of the original memory patterns with random patterns, and the masked patterns and masks are concatenated. The resulting decorrelated patterns allow higher storage capacity at the cost of the pattern length. Furthermore, the increase in the pattern length can be reduced through blockwise masking, which results in a small amount of capacity loss. Movie replay and image recognition are presented as examples to demonstrate the scalability of the proposed method.
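    For bipolar (±1) codes, XNOR masking as described above is simply elementwise multiplication by an independent random mask, with the masked pattern and the mask concatenated. A minimal sketch of the decorrelation effect (pattern sizes and overlap are invented for illustration):

    ```python
    import random

    random.seed(42)
    N = 2000

    def correlation(a, b):
        return sum(x * y for x, y in zip(a, b)) / len(a)

    # Two highly correlated bipolar memory patterns (about 90% overlap).
    base = [random.choice([-1, 1]) for _ in range(N)]
    p1 = base[:]
    p2 = [x if random.random() < 0.9 else -x for x in base]

    # XNOR masking: for bipolar codes, XNOR with a mask bit equals
    # multiplication; masked pattern and mask are concatenated.
    def mask_pattern(p):
        m = [random.choice([-1, 1]) for _ in range(len(p))]
        return [x * y for x, y in zip(p, m)] + m

    q1, q2 = mask_pattern(p1), mask_pattern(p2)

    print(correlation(p1, p2))   # strongly positive before masking
    print(correlation(q1, q2))   # near zero after masking
    ```

    The cost is exactly the pattern-length doubling the abstract mentions; blockwise masking would shorten the appended mask at a small loss of capacity.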

  4. Models of Acetylcholine and Dopamine Signals Differentially Improve Neural Representations

    PubMed Central

    Holca-Lamarre, Raphaël; Lücke, Jörg; Obermayer, Klaus

    2017-01-01

    Biological and artificial neural networks (ANNs) represent input signals as patterns of neural activity. In biology, neuromodulators can trigger important reorganizations of these neural representations. For instance, pairing a stimulus with the release of either acetylcholine (ACh) or dopamine (DA) evokes long-lasting increases in the responses of neurons to the paired stimulus. The functional roles of ACh and DA in rearranging representations remain largely unknown. Here, we address this question using a Hebbian-learning neural network model. Our aim is both to gain a functional understanding of ACh and DA transmission in shaping biological representations and to explore neuromodulator-inspired learning rules for ANNs. We model the effects of ACh and DA on synaptic plasticity and confirm that stimuli coinciding with greater neuromodulator activation are over-represented in the network. We then simulate the physiological release schedules of ACh and DA. We measure the impact of neuromodulator release on the network's representation and on its performance on a classification task. We find that ACh and DA trigger distinct changes in neural representations that both improve performance. The putative ACh signal redistributes neural preferences so that more neurons encode stimulus classes that are challenging for the network. The putative DA signal adapts synaptic weights so that they better match the classes of the task at hand. Our model thus offers a functional explanation for the effects of ACh and DA on cortical representations. Additionally, our learning algorithm yields performances comparable to those of state-of-the-art optimisation methods in multi-layer perceptrons while requiring weaker supervision signals and relying on synaptically local weight updates. PMID:28690509
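    The core effect, stimuli paired with stronger neuromodulator release becoming over-represented, can be sketched with a Hebbian update multiplicatively gated by a scalar modulator signal. This toy rule, the network size, and the stimuli are all invented for illustration and are not the paper's full model:

    ```python
    import random

    random.seed(3)

    dim, n_out = 8, 4
    W = [[random.uniform(0, 0.1) for _ in range(dim)] for _ in range(n_out)]

    def response(x, w):
        return max(0.0, sum(wi * xi for wi, xi in zip(w, x)))

    def hebbian_step(x, m, lr=0.05):
        # m is a global modulator level (stand-in for ACh/DA release)
        for j in range(n_out):
            y = response(x, W[j])
            for i in range(dim):
                W[j][i] += lr * m * y * x[i]   # modulator gates plasticity
            # simple normalization keeps weights bounded
            norm = sum(w * w for w in W[j]) ** 0.5
            if norm > 1:
                W[j] = [w / norm for w in W[j]]

    stim_a = [1, 1, 0, 0, 0, 0, 0, 0]   # paired with strong modulator release
    stim_b = [0, 0, 0, 0, 0, 0, 1, 1]   # paired with weak release
    for _ in range(100):
        hebbian_step(stim_a, m=1.0)
        hebbian_step(stim_b, m=0.1)

    total_a = sum(response(stim_a, W[j]) for j in range(n_out))
    total_b = sum(response(stim_b, W[j]) for j in range(n_out))
    # the strongly modulated stimulus ends up over-represented
    ```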

  5. Vibration control of building structures using self-organizing and self-learning neural networks

    NASA Astrophysics Data System (ADS)

    Madan, Alok

    2005-11-01

    Past research in artificial intelligence establishes that artificial neural networks (ANN) are effective and efficient computational processors for performing a variety of tasks including pattern recognition, classification, associative recall, combinatorial problem solving, adaptive control, multi-sensor data fusion, noise filtering and data compression, modelling and forecasting. The paper presents a potentially feasible approach for training ANN in active control of earthquake-induced vibrations in building structures without the aid of teacher signals (i.e. target control forces). A counter-propagation neural network is trained to output the control forces that are required to reduce the structural vibrations in the absence of any feedback on the correctness of the output control forces (i.e. without any information on the errors in output activations of the network). The present study shows that, in principle, the counter-propagation network (CPN) can learn from the control environment to compute the required control forces without the supervision of a teacher (unsupervised learning). Simulated case studies are presented to demonstrate the feasibility of implementing the unsupervised learning approach in ANN for effective vibration control of structures under the influence of earthquake ground motions. The proposed learning methodology obviates the need for developing a mathematical model of structural dynamics or training a separate neural network to emulate the structural response for implementation in practice.

  6. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim to improve the polynomial derivative term series ability to approximate complicated periodic functions, as simple low order polynomials are not able to fully make up for the complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Associative memory in phasing neuron networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nair, Niketh S; Bochove, Erik J.; Braiman, Yehuda

    2014-01-01

    We studied pattern formation in a network of coupled Hindmarsh-Rose model neurons and introduced a new model for associative memory retrieval using networks of Kuramoto oscillators. Hindmarsh-Rose neural networks can exhibit a rich set of collective dynamics that can be controlled by their connectivity. Specifically, we showed an instance of Hebb's rule where spiking was correlated with network topology. Based on this, we presented a simple model of associative memory in coupled phase oscillators.
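    The phase-oscillator substrate behind such models is the Kuramoto network: above a critical coupling, the phases lock, as measured by the order parameter r. The sketch below uses uniform coupling for brevity; an associative-memory variant in the spirit of the abstract would instead pattern the coupling matrix (e.g., Hebbian couplings between stored phase patterns). All parameter values are illustrative:

    ```python
    import math, random

    random.seed(0)

    n, K, dt = 20, 2.0, 0.05
    omega = [random.gauss(0, 0.1) for _ in range(n)]        # natural frequencies
    theta = [random.uniform(0, 2 * math.pi) for _ in range(n)]

    def order_parameter(th):
        # r in [0, 1]: 0 = incoherent, 1 = fully phase-locked
        c = sum(math.cos(t) for t in th) / len(th)
        s = sum(math.sin(t) for t in th) / len(th)
        return math.hypot(c, s)

    # Euler integration of d(theta_i)/dt = omega_i + (K/n) sum_j sin(theta_j - theta_i)
    for _ in range(2000):
        dtheta = [omega[i] + (K / n) * sum(math.sin(theta[j] - theta[i])
                                           for j in range(n))
                  for i in range(n)]
        theta = [(t + dt * d) % (2 * math.pi) for t, d in zip(theta, dtheta)]

    print(order_parameter(theta))  # close to 1 once synchronized
    ```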

  8. Investigation of Back-off Based Interpolation Between Recurrent Neural Network and N-gram Language Models (Author’s Manuscript)

    DTIC Science & Technology

    2016-02-11

    INVESTIGATION OF BACK-OFF BASED INTERPOLATION BETWEEN RECURRENT NEURAL NETWORK AND N-GRAM LANGUAGE MODELS. X. Chen, X. Liu, M. J. F. Gales, and P. C... As the generalization patterns of RNNLMs and n-gram LMs are inherently different, RNNLMs are usually combined with n-gram LMs via a fixed... RNNLMs and n-gram LMs as the n-gram level changes. In order to fully exploit the detailed n-gram level complementary attributes between the two LMs, a

  9. NDRAM: nonlinear dynamic recurrent associative memory for learning bipolar and nonbipolar correlated patterns.

    PubMed

    Chartier, Sylvain; Proulx, Robert

    2005-11-01

    This paper presents a new unsupervised attractor neural network, which, contrary to optimal linear associative memory models, is able to develop nonbipolar attractors as well as bipolar attractors. Moreover, the model is able to develop less spurious attractors and has a better recall performance under random noise than any other Hopfield type neural network. Those performances are obtained by a simple Hebbian/anti-Hebbian online learning rule that directly incorporates feedback from a specific nonlinear transmission rule. Several computer simulations show the model's distinguishing properties.

  10. Talking Drums: Generating drum grooves with neural networks

    NASA Astrophysics Data System (ADS)

    Hutchings, P.

    2017-05-01

    Presented is a method of generating a full drum kit part for a provided kick-drum sequence. A sequence-to-sequence neural network model used in natural language translation was adopted to encode multiple musical styles, and an online survey was developed to test different techniques for sampling the output of the softmax function. The strongest results were found using a sampling technique that drew from the three most probable outputs at each subdivision of the drum pattern, but the consistency of output was found to be heavily dependent on style.
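    The top-three sampling scheme described above can be sketched as drawing from the renormalized three most probable softmax outputs at each subdivision. The logits are invented for illustration:

    ```python
    import math, random

    random.seed(7)

    def softmax(logits):
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def sample_top_k(probs, k=3):
        # keep the k most probable events, renormalize, and draw from them
        top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
        mass = sum(probs[i] for i in top)
        r = random.random() * mass
        for i in top:
            r -= probs[i]
            if r <= 0:
                return i
        return top[-1]

    probs = softmax([2.0, 1.5, 1.2, -1.0, -2.0])
    draws = [sample_top_k(probs, k=3) for _ in range(1000)]
    print(sorted(set(draws)))  # only the three most probable indices appear
    ```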

  11. BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment.

    PubMed

    Kawahara, Jeremy; Brown, Colin J; Miller, Steven P; Booth, Brian G; Chau, Vann; Grunau, Ruth E; Zwicker, Jill G; Hamarneh, Ghassan

    2017-02-01

    We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural-network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain. Copyright © 2016 Elsevier Inc. All rights reserved.
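    The edge-to-edge and edge-to-node filters described above exploit the fact that an edge (i, j) of a connectivity matrix is topologically adjacent to row i and column j. A minimal sketch of that structure with uniform weights (the real filters learn per-position weights, and this adjacency matrix is invented for illustration):

    ```python
    # A (n x n) stands in for a structural brain connectivity matrix.
    A = [[0, 1, 1, 0],
         [1, 0, 0, 1],
         [1, 0, 0, 0],
         [0, 1, 0, 0]]

    def e2e(A, w_row=1.0, w_col=1.0):
        # edge-to-edge: the response at (i, j) combines row i and column j
        n = len(A)
        return [[w_row * sum(A[i]) + w_col * sum(A[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    def e2n(E):
        # edge-to-node: collapse each row of edge responses to a node value
        return [sum(row) for row in E]

    print(e2n(e2e(A)))  # [14.0, 14.0, 10.0, 10.0] for this matrix
    ```

    Stacking learned versions of these filters (plus a node-to-graph reduction) yields graph-level predictions while respecting the topological locality that ordinary image convolutions would ignore.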

  12. Sequential associative memory with nonuniformity of the layer sizes.

    PubMed

    Teramae, Jun-Nosuke; Fukai, Tomoki

    2007-01-01

    Sequence retrieval has a fundamental importance in information processing by the brain, and has extensively been studied in neural network models. Most of the previous sequential associative memory embedded sequences of memory patterns have nearly equal sizes. It was recently shown that local cortical networks display many diverse yet repeatable precise temporal sequences of neuronal activities, termed "neuronal avalanches." Interestingly, these avalanches displayed size and lifetime distributions that obey power laws. Inspired by these experimental findings, here we consider an associative memory model of binary neurons that stores sequences of memory patterns with highly variable sizes. Our analysis includes the case where the statistics of these size variations obey the above-mentioned power laws. We study the retrieval dynamics of such memory systems by analytically deriving the equations that govern the time evolution of macroscopic order parameters. We calculate the critical sequence length beyond which the network cannot retrieve memory sequences correctly. As an application of the analysis, we show how the present variability in sequential memory patterns degrades the power-law lifetime distribution of retrieved neural activities.

  13. A patterned recombinant human IgM guides neurite outgrowth of CNS neurons

    PubMed Central

    Xu, Xiaohua; Wittenberg, Nathan J.; Jordan, Luke R.; Kumar, Shailabh; Watzlawik, Jens O.; Warrington, Arthur E.; Oh, Sang-Hyun; Rodriguez, Moses

    2013-01-01

    Matrix molecules convey biochemical and physical guiding signals to neurons in the central nervous system (CNS) and shape the trajectory of neuronal fibers that constitute neural networks. We have developed recombinant human IgMs that bind to epitopes on neural cells, with the aim of treating neurological diseases. Here we test the hypothesis that recombinant human IgMs (rHIgM) can guide neurite outgrowth of CNS neurons. Microcontact printing was employed to pattern rHIgM12 and rHIgM22, antibodies that were bioengineered to have variable regions capable of binding to neurons or oligodendrocytes, respectively. rHIgM12 promoted neuronal attachment and guided outgrowth of neurites from hippocampal neurons. Processes from spinal neurons followed grid patterns of rHIgM12 and formed a physical network. Comparison between rHIgM12 and rHIgM22 suggested that the biochemistry facilitating anchoring to neuronal surfaces is a prerequisite for the function of IgM, and that spatial properties cooperate in guiding the assembly of neuronal networks. PMID:23881231

  14. Cross-Modal Decoding of Neural Patterns Associated with Working Memory: Evidence for Attention-Based Accounts of Working Memory.

    PubMed

    Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica

    2016-01-01

    Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high-low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. A Deep Neural Network Model for Rainfall Estimation Using Polarimetric WSR-88DP Radar Observations

    NASA Astrophysics Data System (ADS)

    Tan, H.; Chandra, C. V.; Chen, H.

    2016-12-01

    Rainfall estimation based on radar measurements has been an important topic for a few decades. Generally, radar rainfall estimation is conducted through parametric algorithms such as the reflectivity-rainfall relation (i.e., Z-R relation). On the other hand, neural networks have been developed for ground rainfall estimation based on radar measurements. This nonparametric method, which takes into account both radar observations and rainfall measurements from ground rain gauges, has been demonstrated successfully for rainfall rate estimation. However, neural network-based rainfall estimation is limited in practice due to the model complexity and structure, data quality, as well as different rainfall microphysics. Recently, the deep learning approach has been introduced in pattern recognition and machine learning areas. Compared to traditional neural networks, deep learning based methodologies have a larger number of hidden layers and a more complex structure for data representation. Through a hierarchical learning process, high-level structured information and knowledge can be extracted automatically from low-level features of the data. In this paper, we introduce a novel deep neural network model for rainfall estimation based on ground polarimetric radar measurements. The model is designed to capture the complex abstractions of radar measurements at different levels using multiple layers of feature identification and extraction. The abstractions at different levels can be used independently or fused with other data resources such as satellite-based rainfall products and/or topographic data to represent the rain characteristics at a certain location. In particular, the WSR-88DP radar and rain gauge data collected in the Dallas-Fort Worth Metroplex and Florida are used extensively to train the model, and for demonstration purposes.
Quantitative evaluation of the deep neural network based rainfall products will also be presented, which is based on an independent rain gauge network.
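
    As a concrete point of reference for the parametric baseline the abstract contrasts with, a Z-R inversion can be sketched in a few lines; the Marshall-Palmer coefficients a = 200, b = 1.6 used here are common illustrative defaults, not values from the paper.

```python
import numpy as np

def zr_rain_rate(dbz, a=200.0, b=1.6):
    """Invert the Z-R power law Z = a * R**b for rain rate R (mm/h).

    dbz: radar reflectivity in dBZ; linear Z (mm^6/m^3) = 10**(dbz/10).
    """
    z_linear = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)
    return (z_linear / a) ** (1.0 / b)

# Higher reflectivity maps to higher estimated rain rate.
rates = zr_rain_rate([20.0, 40.0, 50.0])
```

    A deep model replaces this single fixed power law with a learned, multi-level mapping from polarimetric measurements to rain rate.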

  16. Detecting central fixation by means of artificial neural networks in a pediatric vision screener using retinal birefringence scanning.

    PubMed

    Gramatikov, Boris I

    2017-04-27

    Reliable detection of central fixation and eye alignment is essential in the diagnosis of amblyopia ("lazy eye"), which can lead to blindness. Our lab has previously developed and reported a pediatric vision screener that scans the retina around the fovea and analyzes changes in the polarization state of light as the scan progresses. Depending on the direction of gaze and the instrument design, the screener produces several signal frequencies that can be utilized in the detection of central fixation. The objective of this study was to compare artificial neural networks with classical statistical methods with respect to their ability to detect central fixation reliably. A classical feedforward, pattern-recognition, two-layer neural network architecture was used, consisting of one hidden layer and one output layer. The network has four inputs, representing normalized spectral powers at four signal frequencies generated during retinal birefringence scanning. The hidden layer contains four neurons. The output suggests presence or absence of central fixation. Backpropagation was used to train the network, using the gradient descent algorithm and the cross-entropy error as the performance function. The network was trained, validated and tested on a set of controlled calibration data obtained from 600 measurements from ten eyes in a previous study, and was additionally tested on a clinical set of 78 eyes, independently diagnosed by an ophthalmologist. In the first part of this study, a neural network was designed around the calibration set. With a proper architecture and training, the network provided performance comparable to classical statistical methods, allowing perfect separation between the central and paracentral fixation data, with both the sensitivity and the specificity of the instrument being 100%. In the second part of the study, the neural network was applied to the clinical data. It allowed reliable separation between normal and affected subjects, its accuracy again matching that of the statistical methods. With a proper choice of neural network architecture and a good, uncontaminated training data set, the artificial neural network can be an efficient classification tool for detecting central fixation based on retinal birefringence scanning.
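
    The described architecture (four spectral-power inputs, one hidden layer of four neurons, one output, cross-entropy loss, gradient-descent backpropagation) is small enough to sketch directly; the data below are synthetic stand-ins, not the instrument's calibration measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic stand-in for the task: 4 spectral-power inputs,
# separable binary labels (central fixation present or absent).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)[:, None]

W1 = rng.normal(scale=0.5, size=(4, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)   # output layer
lr = 0.5

for _ in range(2000):
    h = sigmoid(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2)          # output probability
    d2 = (p - y) / len(X)             # grad of mean cross-entropy wrt logits
    d1 = (d2 @ W2.T) * h * (1 - h)    # backpropagated error
    W2 -= lr * (h.T @ d2); b2 -= lr * d2.sum(0)
    W1 -= lr * (X.T @ d1); b1 -= lr * d1.sum(0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5
accuracy = (pred == (y > 0.5)).mean()
```

    On clean, separable data such a network reaches near-perfect training accuracy, mirroring the perfect separation reported on the calibration set.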

  17. An Effective and Novel Neural Network Ensemble for Shift Pattern Detection in Control Charts.

    PubMed

    Barghash, Mahmoud

    2015-01-01

    Pattern recognition in control charts is critical to striking a balance between discovering faults as early as possible and reducing the number of false alarms. This work is devoted to designing a multistage neural network ensemble that achieves this balance, reducing rework and scrap without reducing productivity. The ensemble under focus is composed of a series of neural network stages and a series of decision points. Initially, this work compared the performance of the ANN using multiple decision points versus a single decision point, showing that multiple decision points are highly preferable. This work also tested the effect of population percentages on the ANN and used this to optimize the ANN's performance. In addition, it used both optimized and nonoptimized ANNs in an ensemble and showed that using nonoptimized ANNs may reduce the performance of the ensemble. The ensemble that used only optimized ANNs improved performance over individual ANNs and the three-sigma rule. In that respect, the designed ensemble can help reduce the number of false stops and increase productivity. It can also be used to discover even small shifts in the mean as early as possible.
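
    For context, the three-sigma baseline that the ensemble is compared against can be sketched on synthetic data; the process parameters and shift size here are illustrative, not those of the paper. The sketch shows why a plain three-sigma chart misses small mean shifts:

```python
import numpy as np

def three_sigma_flags(samples, mu, sigma):
    """Flag observations outside the mu +/- 3*sigma control limits."""
    x = np.asarray(samples, dtype=float)
    return np.abs(x - mu) > 3.0 * sigma

rng = np.random.default_rng(1)
in_control = rng.normal(0.0, 1.0, size=500)   # process on target
shifted = rng.normal(1.0, 1.0, size=500)      # small 1-sigma mean shift

false_alarm_rate = three_sigma_flags(in_control, 0.0, 1.0).mean()
detection_rate = three_sigma_flags(shifted, 0.0, 1.0).mean()
```

    With limits this wide, false alarms are rare but a one-sigma shift is flagged only a few percent of the time per point; closing that gap without raising false stops is what the neural ensemble targets.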

  18. Analyzing psychotherapy process as intersubjective sensemaking: an approach based on discourse analysis and neural networks.

    PubMed

    Nitti, Mariangela; Ciavolino, Enrico; Salvatore, Sergio; Gennaro, Alessandro

    2010-09-01

    The authors propose a method for analyzing the psychotherapy process: discourse flow analysis (DFA). DFA is a technique that represents the verbal interaction between therapist and patient as a discourse network, aimed at measuring the ability of the therapist-patient discourse to generate new meanings through time. DFA assumes that the main function of psychotherapy is to produce semiotic novelty. DFA is applied to the verbatim transcript of the psychotherapy. It defines the main meanings active within the therapeutic discourse by means of the combined use of text analysis and statistical techniques. Subsequently, it represents the dynamic interconnections among these meanings in terms of a "discursive network." The dynamic and structural indexes of the discursive network have been shown to provide a valid representation of the patient-therapist communicative flow as well as an estimation of its clinical quality. Finally, a neural network is designed specifically to identify patterns of functioning of the discursive network and to verify the clinical validity of these patterns in terms of their association with specific phases of the psychotherapy process. An application of DFA to a case of psychotherapy is provided to illustrate the method and the kinds of results it produces.

  19. How to cluster in parallel with neural networks

    NASA Technical Reports Server (NTRS)

    Kamgar-Parsi, Behzad; Gualtieri, J. A.; Devaney, Judy E.; Kamgar-Parsi, Behrooz

    1988-01-01

    Partitioning a set of N patterns in a d-dimensional metric space into K clusters - such that the patterns in a given cluster are more similar to each other than to the rest - is a problem of interest in astrophysics, image analysis and other fields. As there are approximately K^N/K! possible ways of partitioning the patterns among K clusters, finding the best solution is beyond exhaustive search when N is large. Researchers show that this problem can be formulated as an optimization problem for which very good, but not necessarily optimal, solutions can be found by using a neural network. To do this the network must start from many randomly selected initial states. The network is simulated on the MPP (a 128 x 128 SIMD array machine), where researchers exploit the massive parallelism not only in solving the differential equations that govern the evolution of the network, but also by starting the network from many initial states at once, thus obtaining many solutions in one run. Researchers obtain speedups of two to three orders of magnitude over serial implementations, and analog VLSI implementations promise speedups commensurate with human perceptual abilities.
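
    The strategy of starting from many random initial states and keeping the best solution can be illustrated with a simple k-means analogue (the paper's method is a neural network governed by differential equations; plain Lloyd iterations are used here only as a stand-in):

```python
import numpy as np

def kmeans_once(X, k, rng, iters=50):
    """One clustering run from a random initial state; returns (cost, labels)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                      # assign to nearest center
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    cost = ((X - centers[labels]) ** 2).sum()     # within-cluster scatter
    return cost, labels

rng = np.random.default_rng(2)
# Two well-separated blobs of 50 patterns each.
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])

# Many restarts, keep the lowest-cost solution (done serially here;
# on the MPP all initial states evolve in parallel).
best_cost, best_labels = min(
    (kmeans_once(X, 2, rng) for _ in range(10)), key=lambda t: t[0]
)
```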

  20. ChemNet: A Transferable and Generalizable Deep Neural Network for Small-Molecule Property Prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goh, Garrett B.; Siegel, Charles M.; Vishnu, Abhinav

    With access to large datasets, deep neural networks through representation learning have been able to identify patterns from raw data, achieving human-level accuracy in image and speech recognition tasks. However, in chemistry, availability of large standardized and labelled datasets is scarce, and with a multitude of chemical properties of interest, chemical data is inherently small and fragmented. In this work, we explore transfer learning techniques in conjunction with the existing Chemception CNN model to create a transferable and generalizable deep neural network for small-molecule property prediction. Our latest model, ChemNet, learns in a semi-supervised manner from inexpensive labels computed from the ChEMBL database. When fine-tuned to the Tox21, HIV and FreeSolv datasets, which are 3 separate chemical tasks that ChemNet was not originally trained on, we demonstrate that ChemNet exceeds the performance of existing Chemception models and contemporary MLP models that train on molecular fingerprints, and matches the performance of the ConvGraph algorithm, the current state of the art. Furthermore, as ChemNet has been pre-trained on a large diverse chemical database, it can be used as a universal “plug-and-play” deep neural network, which accelerates the deployment of deep neural networks for the prediction of novel small-molecule chemical properties.

  1. Parametric motion control of robotic arms: A biologically based approach using neural networks

    NASA Technical Reports Server (NTRS)

    Bock, O.; D'Eleuterio, G. M. T.; Lipitkas, J.; Grodski, J. J.

    1993-01-01

    A neural network based system is presented which is able to generate point-to-point movements of robotic manipulators. The foundation of this approach is the use of prototypical control torque signals which are defined by a set of parameters. The parameter set is used for scaling and shaping of these prototypical torque signals to effect a desired outcome of the system. This approach is based on neurophysiological findings that the central nervous system stores generalized cognitive representations of movements called synergies, schemas, or motor programs. It has been proposed that these motor programs may be stored as torque-time functions in central pattern generators which can be scaled with appropriate time and magnitude parameters. The central pattern generators use these parameters to generate stereotypical torque-time profiles, which are then sent to the joint actuators. Hence, only a small number of parameters need to be determined for each point-to-point movement instead of the entire torque-time trajectory. This same principle is implemented for controlling the joint torques of robotic manipulators where a neural network is used to identify the relationship between the task requirements and the torque parameters. Movements are specified by the initial robot position in joint coordinates and the desired final end-effector position in Cartesian coordinates. This information is provided to the neural network which calculates six torque parameters for a two-link system. The prototypical torque profiles (one per joint) are then scaled by those parameters. After appropriate training of the network, our parametric control design allowed the reproduction of a trained set of movements with relatively high accuracy, and the production of previously untrained movements with comparable accuracy. We conclude that our approach was successful in discriminating between trained movements and in generalizing to untrained movements.
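
    The core idea - a stored prototypical torque-time profile scaled in magnitude and time by a small parameter set - can be sketched as follows; the biphasic sine prototype is an illustrative assumption, not the profile used in the paper.

```python
import numpy as np

def scaled_torque(t, amplitude, duration):
    """Scale a prototypical torque-time profile in magnitude and time.

    The prototype is a smooth biphasic pulse (accelerate, then brake),
    so only two parameters specify the whole torque trajectory.
    """
    s = np.clip(t / duration, 0.0, 1.0)          # normalized movement time
    return amplitude * np.sin(2.0 * np.pi * s)   # biphasic: + then -

t = np.linspace(0.0, 0.8, 200)
tau = scaled_torque(t, amplitude=2.0, duration=0.8)
```

    In the paper's scheme, a neural network maps task requirements (initial joint angles, target end-effector position) to such parameters - six for the two-link system - instead of generating the entire torque-time trajectory.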

  2. Patterns of thought: Population variation in the associations between large-scale network organisation and self-reported experiences at rest.

    PubMed

    Wang, Hao-Ting; Bzdok, Danilo; Margulies, Daniel; Craddock, Cameron; Milham, Michael; Jefferies, Elizabeth; Smallwood, Jonathan

    2018-08-01

    Contemporary cognitive neuroscience recognises that unconstrained processing varies across individuals, and that this variation relates to meaningful attributes such as intelligence; it may also be linked to patterns of on-going experience. This study examined whether dimensions of population variation in different modes of unconstrained processing can be described by the associations between patterns of neural activity and self-reports of experience during the same period. We selected 258 individuals from a publicly available data set who had measures of resting-state functional magnetic resonance imaging and self-reports of experience during the scan. We used machine learning to determine patterns of association between the neural and self-reported data, finding variation along four dimensions. 'Purposeful' experiences were associated with lower connectivity - in particular, default mode and limbic networks were less correlated with attention and sensorimotor networks. 'Emotional' experiences were associated with higher connectivity, especially between limbic and ventral attention networks. Experiences focused on themes of 'personal importance' were associated with reduced functional connectivity within attention and control systems. Finally, visual experiences were associated with stronger connectivity between visual and other networks, in particular the limbic system. Some of these patterns had contrasting links with cognitive function as assessed in a separate laboratory session - purposeful thinking was linked to greater intelligence and better abstract reasoning, while a focus on personal importance had the opposite relationship. Together these findings are consistent with an emerging literature on unconstrained states and also underline that these states are heterogeneous, with distinct modes of population variation reflecting the interplay of different large-scale networks. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Information Compression, Multiple Alignment, and the Representation and Processing of Knowledge in the Brain.

    PubMed

    Wolff, J Gerard

    2016-01-01

    The SP theory of intelligence, with its realization in the SP computer model, aims to simplify and integrate observations and concepts across artificial intelligence, mainstream computing, mathematics, and human perception and cognition, with information compression as a unifying theme. This paper describes how abstract structures and processes in the theory may be realized in terms of neurons, their interconnections, and the transmission of signals between neurons. This part of the SP theory - SP-neural - is a tentative and partial model for the representation and processing of knowledge in the brain. Empirical support for the SP theory - outlined in the paper - provides indirect support for SP-neural. In the abstract part of the SP theory (SP-abstract), all kinds of knowledge are represented with patterns, where a pattern is an array of atomic symbols in one or two dimensions. In SP-neural, the concept of a "pattern" is realized as an array of neurons called a pattern assembly, similar to Hebb's concept of a "cell assembly" but with important differences. Central to the processing of information in SP-abstract is information compression via the matching and unification of patterns (ICMUP) and, more specifically, information compression via the powerful concept of multiple alignment, borrowed and adapted from bioinformatics. Processes such as pattern recognition, reasoning and problem solving are achieved via the building of multiple alignments, while unsupervised learning is achieved by creating patterns from sensory information and also by creating patterns from multiple alignments in which there is a partial match between one pattern and another. It is envisaged that, in SP-neural, short-lived neural structures equivalent to multiple alignments will be created via an interplay of excitatory and inhibitory neural signals. It is also envisaged that unsupervised learning will be achieved by the creation of pattern assemblies from sensory information and from the neural equivalents of multiple alignments, much as in the non-neural SP theory - and significantly different from the "Hebbian" kinds of learning which are widely used in the kinds of artificial neural network that are popular in computer science. The paper discusses several associated issues, with relevant empirical evidence.
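
    The ICMUP idea of finding a partial match between two symbol patterns can be illustrated with a standard sequence matcher from the Python standard library; this is a generic stand-in, not the SP computer model's own matching process.

```python
from difflib import SequenceMatcher

# Two "patterns" as arrays of atomic symbols with a partial match.
p1 = ["t", "h", "e", "c", "a", "t", "s", "a", "t"]
p2 = ["t", "h", "e", "d", "o", "g", "s", "a", "t"]

m = SequenceMatcher(a=p1, b=p2)
blocks = [b for b in m.get_matching_blocks() if b.size > 0]

# Unifying the matched runs compresses the pair: shared symbols are
# stored once, with the unmatched symbols kept as alternatives.
shared = [p1[b.a:b.a + b.size] for b in blocks]
```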

  4. Spontaneous network activity and synaptic development

    PubMed Central

    Kerschensteiner, Daniel

    2014-01-01

    Throughout development, the nervous system produces patterned spontaneous activity. Research over the last two decades has revealed a core group of mechanisms that mediate spontaneous activity in diverse circuits. Many circuits engage several of these mechanisms sequentially to accommodate developmental changes in connectivity. In addition to shared mechanisms, activity propagates through developing circuits and neuronal pathways (i.e. linked circuits in different brain areas) in stereotypic patterns. Increasing evidence suggests that spontaneous network activity shapes synaptic development in vivo. Variations in activity-dependent plasticity may explain how similar mechanisms and patterns of activity can be employed to establish diverse circuits. Here, I will review common mechanisms and patterns of spontaneous activity in emerging neural networks and discuss recent insights into their contribution to synaptic development. PMID:24280071

  5. Embracing the comparative approach: how robust phylogenies and broader developmental sampling impacts the understanding of nervous system evolution.

    PubMed

    Hejnol, Andreas; Lowe, Christopher J

    2015-12-19

    Molecular biology has provided a rich dataset to develop hypotheses of nervous system evolution. The startling patterning similarities between distantly related animals during the development of their central nervous system (CNS) have resulted in the hypothesis that a CNS with a single centralized medullary cord and a partitioned brain is homologous across bilaterians. However, the ability to precisely reconstruct ancestral neural architectures from molecular genetic information requires that these gene networks specifically map with particular neural anatomies. A growing body of literature representing the development of a wider range of metazoan neural architectures demonstrates that patterning gene network complexity is maintained in animals with more modest levels of neural complexity. Furthermore, a robust phylogenetic framework that provides the basis for testing the congruence of these homology hypotheses has been lacking since the advent of the field of 'evo-devo'. Recent progress in molecular phylogenetics is refining the necessary framework to test previous homology statements that span large evolutionary distances. In this review, we describe recent advances in animal phylogeny and exemplify for two neural characters-the partitioned brain of arthropods and the ventral centralized nerve cords of annelids-a test for congruence using this framework. The sequential sister taxa at the base of Ecdysozoa and Spiralia comprise small, interstitial groups. This topology is not consistent with the hypothesis of homology of tripartitioned brain of arthropods and vertebrates as well as the ventral arthropod and rope-like ladder nervous system of annelids. There can be exquisite conservation of gene regulatory networks between distantly related groups with contrasting levels of nervous system centralization and complexity. Consequently, the utility of molecular characters to reconstruct ancestral neural organization in deep time is limited. © 2015 The Authors.

  6. Embracing the comparative approach: how robust phylogenies and broader developmental sampling impacts the understanding of nervous system evolution

    PubMed Central

    Hejnol, Andreas; Lowe, Christopher J.

    2015-01-01

    Molecular biology has provided a rich dataset to develop hypotheses of nervous system evolution. The startling patterning similarities between distantly related animals during the development of their central nervous system (CNS) have resulted in the hypothesis that a CNS with a single centralized medullary cord and a partitioned brain is homologous across bilaterians. However, the ability to precisely reconstruct ancestral neural architectures from molecular genetic information requires that these gene networks specifically map with particular neural anatomies. A growing body of literature representing the development of a wider range of metazoan neural architectures demonstrates that patterning gene network complexity is maintained in animals with more modest levels of neural complexity. Furthermore, a robust phylogenetic framework that provides the basis for testing the congruence of these homology hypotheses has been lacking since the advent of the field of ‘evo-devo’. Recent progress in molecular phylogenetics is refining the necessary framework to test previous homology statements that span large evolutionary distances. In this review, we describe recent advances in animal phylogeny and exemplify for two neural characters—the partitioned brain of arthropods and the ventral centralized nerve cords of annelids—a test for congruence using this framework. The sequential sister taxa at the base of Ecdysozoa and Spiralia comprise small, interstitial groups. This topology is not consistent with the hypothesis of homology of tripartitioned brain of arthropods and vertebrates as well as the ventral arthropod and rope-like ladder nervous system of annelids. There can be exquisite conservation of gene regulatory networks between distantly related groups with contrasting levels of nervous system centralization and complexity. Consequently, the utility of molecular characters to reconstruct ancestral neural organization in deep time is limited. PMID:26554039

  7. Initial results on fault diagnosis of DSN antenna control assemblies using pattern recognition techniques

    NASA Technical Reports Server (NTRS)

    Smyth, P.; Mellstrom, J.

    1990-01-01

    Initial results are described from an investigation using pattern recognition techniques to identify fault modes in the Deep Space Network (DSN) 70 m antenna control loops. The overall background to the problem is described, and the motivation and potential benefits of this approach are outlined. In particular, an experiment is described in which fault modes were introduced into a state-space simulation of the antenna control loops. By training a multilayer feed-forward neural network on the simulated sensor output, classification rates of over 95 percent were achieved with a false alarm rate of zero on unseen test data. It is concluded that although the neural classifier has certain practical limitations at present, it also has considerable potential for problems of this nature.

  8. Automatic comparison of striation marks and automatic classification of shoe prints

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Keijzer, Jan; Keereweer, Isaac

    1995-09-01

    A database for toolmarks (named TRAX) and a database for footwear outsole designs (named REBEZO) have been developed on a PC. The databases are filled with video images and administrative data about the toolmarks and the footwear designs. An algorithm for the automatic comparison of the digitized striation patterns has been developed for TRAX. The algorithm appears to work well for deep and complete striation marks and will be implemented in TRAX. For REBEZO, some efforts have been made toward the automatic classification of outsole patterns. The algorithm first segments the shoe profile. Fourier features are selected for the separate elements and are classified with a neural network. Future developments will include information on invariant moments of the shape and on rotation angle in the neural network.

  9. Software tool for data mining and its applications

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Ye, Chenzhou; Chen, Nianyi

    2002-03-01

    A software tool for data mining is introduced, which integrates pattern recognition (PCA, Fisher discriminant analysis, clustering, hyperenvelope, regression), artificial intelligence (knowledge representation, decision trees), statistical learning (rough sets, support vector machines), and computational intelligence (neural networks, genetic algorithms, fuzzy systems). It consists of nine function modules: pattern recognition, decision trees, association rules, fuzzy rules, neural networks, genetic algorithms, hyperenvelope, support vector machines, and visualization. The principles and knowledge representation of some function modules are described. The software tool is implemented in Visual C++ under Windows 2000. Nonmonotonicity in data mining is dealt with by concept hierarchy and layered mining. The tool has been satisfactorily applied to the prediction of regularities in the formation of ternary intermetallic compounds in alloy systems and to the diagnosis of brain glioma.

  10. Hybrid multiphoton volumetric functional imaging of large-scale bioengineered neuronal networks

    NASA Astrophysics Data System (ADS)

    Dana, Hod; Marom, Anat; Paluch, Shir; Dvorkin, Roman; Brosh, Inbar; Shoham, Shy

    2014-06-01

    Planar neural networks and interfaces serve as versatile in vitro models of central nervous system physiology, but adaptations of related methods to three dimensions (3D) have met with limited success. Here, we demonstrate for the first time volumetric functional imaging in a bioengineered neural tissue growing in a transparent hydrogel with cortical cellular and synaptic densities, by introducing complementary new developments in nonlinear microscopy and neural tissue engineering. Our system uses a novel hybrid multiphoton microscope design combining a 3D scanning-line temporal-focusing subsystem and a conventional laser-scanning multiphoton microscope to provide functional and structural volumetric imaging capabilities: dense microscopic 3D sampling at tens of volumes per second of structures with mm-scale dimensions containing a network of over 1,000 developing cells with complex spontaneous activity patterns. These developments open new opportunities for large-scale neuronal interfacing and for applications of 3D engineered networks ranging from basic neuroscience to the screening of neuroactive substances.

  11. Classification of images acquired with colposcopy using artificial neural networks.

    PubMed

    Simões, Priscyla W; Izumi, Narjara B; Casagrande, Ramon S; Venson, Ramon; Veronezi, Carlos D; Moretti, Gustavo P; da Rocha, Edroaldo L; Cechinel, Cristian; Ceretta, Luciane B; Comunello, Eros; Martins, Paulo J; Casagrande, Rogério A; Snoeyer, Maria L; Manenti, Sandra A

    2014-01-01

    To explore the advantages of using artificial neural networks (ANNs) to recognize patterns and classify images in colposcopy. Transversal, descriptive, and analytical study of a quantitative approach with an emphasis on diagnosis. The training, test, and validation sets were composed of images collected from patients who underwent colposcopy, provided by a gynecology clinic located in the city of Criciúma (Brazil). The image database (n = 170) was divided as follows: 48 images were used for training, 58 for testing, and 64 for validation. A hybrid neural network based on Kohonen self-organizing maps and multilayer perceptron (MLP) networks was used. After 126 cycles, the validation was performed. The best results reached an accuracy of 72.15%, a sensitivity of 69.78%, and a specificity of 68%. Although the preliminary results still exhibit only average efficiency, the present approach is an innovative and promising technique that should be explored in depth in the context of the present study.
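
    The reported figures are standard confusion-matrix metrics; a minimal sketch of their definitions, using hypothetical counts rather than the study's raw data:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts summing to the 64 validation images.
acc, sens, spec = classification_metrics(tp=32, fp=8, tn=17, fn=7)
```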

  12. Motor Patterns in Walking.

    PubMed

    Lacquaniti, F.; Grasso, R.; Zago, M.

    1999-08-01

    Despite the fact that locomotion may differ widely in mammals, common principles of kinematic control are at work. These reflect common mechanical and neural constraints. The former are related to the need to maintain balance and to limit energy expenditure. The latter are related to the organization of the central pattern-generating networks.

  13. Investigation of Dynamic Algorithms for Pattern Recognition Identified in Cerebral Cortex

    DTIC Science & Technology

    1991-12-02

    Investigates oscillatory and possibly chaotic activity in the actual cortical substrate of the diverse sensory, motor, and cognitive operations now studied. Related presentations and citations: Neural Information Processing Systems - Natural and Synthetic, Denver, Colo., November 1989; U.C. San Diego, Cognitive Science Dept.; Baird, "Biologically applied neural networks may foster the co-evolution of neurobiology and cognitive psychology," Brain and Behavioral Sciences, 37.

  14. Image texture segmentation using a neural network

    NASA Astrophysics Data System (ADS)

    Sayeh, Mohammed R.; Athinarayanan, Ragu; Dhali, Pushpuak

    1992-09-01

    In this paper we use a neural network called the Lyapunov associative memory (LYAM) system to segment image texture into different categories or clusters. The LYAM system is constructed by a set of ordinary differential equations which are simulated on a digital computer. The clustering can be achieved by using a single tuning parameter in the simplest model. Pattern classes are represented by the stable equilibrium states of the system. Design of the system is based on synthesizing two local energy functions, namely, the learning and recall energy functions. Before the implementation of the segmentation process, a Gauss-Markov random field (GMRF) model is applied to the raw image. This application suitably reduces the image data and prepares the texture information for the neural network process. We give a simple image example illustrating the capability of the technique. The GMRF-generated features are also used for a clustering, based on the Euclidean distance.

  15. Multi-objective evolutionary optimization for constructing neural networks for virtual reality visual data mining: application to geophysical prospecting.

    PubMed

    Valdés, Julio J; Barton, Alan J

    2007-05-01

    A method is presented for the construction of virtual reality spaces for visual data mining, using multi-objective optimization with genetic algorithms on nonlinear discriminant (NDA) neural networks. Two neural network layers (the output and the last hidden) are used for the construction of simultaneous solutions for: (i) a supervised classification of data patterns and (ii) an unsupervised similarity structure preservation between the original data matrix and its image in the new space. A set of spaces is constructed from selected solutions along the Pareto front. This strategy represents a conceptual improvement over spaces computed by single-objective optimization. In addition, genetic programming (in particular, gene expression programming) is used for finding analytic representations of the complex mappings generating the spaces (a composition of NDA and orthogonal principal components). The presented approach is domain independent and is illustrated via application to the geophysical prospecting of caves.
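
    Selecting solutions along the Pareto front amounts to filtering out dominated candidates; a minimal sketch for two minimized objectives (the objective values below are hypothetical, not results from the paper):

```python
import numpy as np

def pareto_front(costs):
    """Indices of non-dominated points when all objectives are minimized."""
    costs = np.asarray(costs, dtype=float)
    keep = []
    for i, c in enumerate(costs):
        # A point is dominated if some other point is no worse in every
        # objective and strictly better in at least one.
        dominated = np.any(
            np.all(costs <= c, axis=1) & np.any(costs < c, axis=1)
        )
        if not dominated:
            keep.append(i)
    return keep

# objectives: (classification error, structure-preservation error)
costs = [(0.10, 0.80), (0.20, 0.30), (0.15, 0.60), (0.25, 0.25), (0.30, 0.40)]
front = pareto_front(costs)
```

    Each surviving trade-off solution can then seed its own virtual reality space, which is the conceptual gain over a single-objective optimum.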

  16. Vehicle Signal Analysis Using Artificial Neural Networks for a Bridge Weigh-in-Motion System

    PubMed Central

    Kim, Sungkon; Lee, Jungwhee; Park, Min-Seok; Jo, Byung-Wan

    2009-01-01

    This paper describes the procedures for development of signal analysis algorithms using artificial neural networks for Bridge Weigh-in-Motion (B-WIM) systems. Through the analysis procedure, the extraction of information concerning heavy traffic vehicles such as weight, speed, and number of axles from the time domain strain data of the B-WIM system was attempted. As one of the several possible pattern recognition techniques, an Artificial Neural Network (ANN) was employed since it could effectively include dynamic effects and bridge-vehicle interactions. A number of vehicle traveling experiments with sufficient load cases were executed on two different types of bridges, a simply supported pre-stressed concrete girder bridge and a cable-stayed bridge. Different types of WIM systems such as high-speed WIM or low-speed WIM were also utilized during the experiments for cross-checking and to validate the performance of the developed algorithms. PMID:22408487

  17. Copula Entropy coupled with Wavelet Neural Network Model for Hydrological Prediction

    NASA Astrophysics Data System (ADS)

    Wang, Yin; Yue, JiGuang; Liu, ShuGuang; Wang, Li

    2018-02-01

    Artificial neural networks (ANNs) have been widely used in hydrological forecasting. In this paper an attempt is made to find an alternative method for hydrological prediction by combining Copula Entropy (CE) with a Wavelet Neural Network (WNN). CE theory permits the calculation of mutual information (MI) for selecting input variables, which avoids the limitations of traditional linear correlation coefficient (LCC) analysis. Wavelet analysis can localize changes in the dynamical patterns of a sequence exactly, and coupled with the strong nonlinear fitting ability of the ANN, the WNN model was able to provide a good fit to the hydrological data. Finally, the hybrid model (CE+WNN) was applied to daily water levels of the Taihu Lake Basin and compared with CE+ANN, LCC+WNN and LCC+ANN models. Results showed that the hybrid model produced better results in estimating the hydrograph properties than the latter models.
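
    The motivation for MI-based input selection over linear correlation can be sketched with a simple histogram MI estimate; the data are synthetic and the estimator is a generic stand-in for the paper's copula-entropy calculation. A purely nonlinear dependence shows near-zero linear correlation but clearly elevated MI:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of mutual information (in nats) between two series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # skip empty cells (log 0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(3)
n = 5000
candidate = rng.normal(size=n)
target = candidate ** 2 + 0.1 * rng.normal(size=n)  # nonlinear dependence
irrelevant = rng.normal(size=n)                     # no dependence

linear_corr = abs(np.corrcoef(candidate, target)[0, 1])  # near zero
mi_candidate = mutual_information(candidate, target)
mi_irrelevant = mutual_information(irrelevant, target)
```

    LCC screening would discard `candidate` despite its strong (nonlinear) predictive value, whereas an MI criterion retains it.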

  18. Vehicle Signal Analysis Using Artificial Neural Networks for a Bridge Weigh-in-Motion System.

    PubMed

    Kim, Sungkon; Lee, Jungwhee; Park, Min-Seok; Jo, Byung-Wan

    2009-01-01

    This paper describes the procedures for development of signal analysis algorithms using artificial neural networks for Bridge Weigh-in-Motion (B-WIM) systems. Through the analysis procedure, the extraction of information concerning heavy traffic vehicles such as weight, speed, and number of axles from the time domain strain data of the B-WIM system was attempted. As one of the several possible pattern recognition techniques, an Artificial Neural Network (ANN) was employed since it could effectively include dynamic effects and bridge-vehicle interactions. A number of vehicle traveling experiments with sufficient load cases were executed on two different types of bridges, a simply supported pre-stressed concrete girder bridge and a cable-stayed bridge. Different types of WIM systems such as high-speed WIM or low-speed WIM were also utilized during the experiments for cross-checking and to validate the performance of the developed algorithms.
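
One sub-task of the B-WIM signal analysis above, extracting the number of axles from a time-domain strain record, can be sketched with a simple local-maximum count on a synthetic signal. This is a hypothetical non-neural baseline for intuition only; the paper performs the extraction with a trained ANN that also accounts for dynamic effects and bridge-vehicle interaction.

```python
import numpy as np

def count_axles(strain, threshold):
    """Count strict local maxima above a threshold: a crude stand-in
    for the ANN-based axle detection described in the abstract."""
    s = np.asarray(strain, dtype=float)
    peaks = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]) & (s[1:-1] > threshold)
    return int(peaks.sum())

# Synthetic strain record: three axle passages over a simply supported span,
# each modelled as a Gaussian pulse (hypothetical shape and timing).
t = np.linspace(0.0, 1.0, 501)
signal = sum(np.exp(-((t - c) / 0.03) ** 2) for c in (0.3, 0.5, 0.7))
assert count_axles(signal, threshold=0.5) == 3
```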

  19. Review of Medical Image Classification using the Adaptive Neuro-Fuzzy Inference System

    PubMed Central

    Hosseini, Monireh Sheikh; Zekri, Maryam

    2012-01-01

    Image classification is an issue that utilizes image processing, pattern recognition and classification methods. Automatic medical image classification is a progressive area in image classification, and it is expected to be more developed in the future. Because of this fact, automatic diagnosis can assist pathologists by providing second opinions and reducing their workload. This paper reviews the application of the adaptive neuro-fuzzy inference system (ANFIS) as a classifier in medical image classification during the past 16 years. ANFIS is a fuzzy inference system (FIS) implemented in the framework of an adaptive fuzzy neural network. It combines the explicit knowledge representation of an FIS with the learning power of artificial neural networks. The objective of ANFIS is to integrate the best features of fuzzy systems and neural networks. A brief comparison with other classifiers, main advantages and drawbacks of this classifier are investigated. PMID:23493054
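
The inference core of ANFIS can be sketched as a first-order Sugeno system: Gaussian memberships fire linear consequents, and the output is their firing-strength-weighted average. The rules and parameters below are hypothetical, and the adaptive (parameter-learning) layers of ANFIS are omitted; only the forward pass is shown.

```python
import math

def sugeno_forward(x, rules):
    """First-order Sugeno inference: each rule (c, s, (p, q)) has a
    Gaussian membership exp(-(x-c)^2 / 2s^2) and consequent p*x + q.
    ANFIS would additionally tune c, s, p, q from data."""
    w = [math.exp(-((x - c) ** 2) / (2.0 * s ** 2)) for c, s, _ in rules]
    y = [p * x + q for _, _, (p, q) in rules]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Two hypothetical rules: "x is LOW -> y = 0" and "x is HIGH -> y = 1".
rules = [(0.0, 1.0, (0.0, 0.0)), (4.0, 1.0, (0.0, 1.0))]
assert sugeno_forward(0.0, rules) < 0.1
assert sugeno_forward(4.0, rules) > 0.9
```

Between the rule centres the output interpolates smoothly, which is the fuzzy-blending behaviour that gives ANFIS its appeal as a medical image classifier.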

  20. Perceiving nonverbal behavior: neural correlates of processing movement fluency and contingency in dyadic interactions.

    PubMed

    Georgescu, Alexandra L; Kuzmanovic, Bojana; Santos, Natacha S; Tepest, Ralf; Bente, Gary; Tittgemeyer, Marc; Vogeley, Kai

    2014-04-01

    Despite the fact that nonverbal dyadic social interactions are abundant in the environment, the neural mechanisms underlying their processing are not yet fully understood. Research in the field of social neuroscience has suggested that two neural networks appear to be involved in social understanding: (1) the action observation network (AON) and (2) the social neural network (SNN). The aim of this study was to determine the differential contributions of the AON and the SNN to the processing of nonverbal behavior as observed in dyadic social interactions. To this end, we used short computer animation sequences displaying dyadic social interactions between two virtual characters and systematically manipulated two key features of movement activity, which are known to influence the perception of meaning in nonverbal stimuli: (1) movement fluency and (2) contingency of movement patterns. A group of 21 male participants rated the "naturalness" of the observed scenes on a four-point scale while undergoing fMRI. Behavioral results showed that both fluency and contingency significantly influenced the "naturalness" experience of the presented animations. Neurally, the AON was preferentially engaged when processing contingent movement patterns, but did not discriminate between different degrees of movement fluency. In contrast, regions of the SNN were engaged more strongly when observing dyads with disturbed movement fluency. In conclusion, while the AON is involved in the general processing of contingent social actions, irrespective of their kinematic properties, the SNN is preferentially recruited when atypical kinematic properties prompt inferences about the agents' intentions. Copyright © 2013 Wiley Periodicals, Inc.

  1. A Feasibility Study for Perioperative Ventricular Tachycardia Prognosis and Detection and Noise Detection Using a Neural Network and Predictive Linear Operators

    NASA Technical Reports Server (NTRS)

    Moebes, T. A.

    1994-01-01

    To locate the accessory pathway(s) in preexcitation syndromes, epicardial and endocardial ventricular mapping is performed during anterograde ventricular activation via the accessory pathway(s), from data originally received in signal form. As the number of channels increases, it is pertinent that more automated detection of coherent/incoherent signals is achieved, as well as the prediction and prognosis of ventricular tachycardia (VT). Today's computers and computer program algorithms are not good at simple perceptual tasks such as recognizing a pattern or identifying a sound. This discrepancy, among other things, has been a major motivating factor in developing brain-based, massively parallel computing architectures. Neural net paradigms have proven to be effective at pattern recognition tasks. In signal processing, the picking of coherent/incoherent signals represents a pattern recognition task for computer systems. The picking of signals representing the onset of VT also represents such a task. We attacked this problem by defining four signal attributes for each potential first maximal arrival peak and one signal attribute over the entire signal as input to a back propagation neural network. One attribute was the predicted amplitude value after the maximum amplitude over a data window. Then, by using a set of known (user selected) coherent/incoherent signals, and signals representing the onset of VT, we trained the back propagation network to recognize coherent/incoherent signals and signals indicating the onset of VT. Since our output scheme involves a true or false decision, and since the output unit computes values between 0 and 1, we used a Fuzzy Arithmetic approach to classify data as coherent/incoherent signals. Furthermore, a Mean-Square Error Analysis was used to determine system stability. The neural net based coherent/incoherent signal picking system achieved high accuracy on different patients. The system also achieved high accuracy in picking signals which represent the onset of VT, that is, signals immediately followed by VT. A special binary representation of the input and output data allowed the neural network to train very rapidly compared to other standard decimal or normalized representations of the data.
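
The fuzzy-arithmetic step above, turning a sigmoid output in [0, 1] into a true/false decision, can be sketched as follows. The membership functions and thresholds are hypothetical stand-ins, since the abstract does not specify them.

```python
def fuzzy_decision(y, low=0.4, high=0.6):
    """Map a network output in [0, 1] to a label via two ramp
    memberships; outputs near 0.5 are flagged as uncertain.
    (Hypothetical stand-in for the paper's fuzzy scheme.)"""
    mu_incoherent = max(0.0, min(1.0, (high - y) / (high - low)))
    mu_coherent = max(0.0, min(1.0, (y - low) / (high - low)))
    if abs(mu_coherent - mu_incoherent) < 0.2:
        return "uncertain"
    return "coherent" if mu_coherent > mu_incoherent else "incoherent"

assert fuzzy_decision(0.95) == "coherent"
assert fuzzy_decision(0.05) == "incoherent"
assert fuzzy_decision(0.5) == "uncertain"
```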

  2. Neural computing for numeric-to-symbolic conversion in control systems

    NASA Technical Reports Server (NTRS)

    Passino, Kevin M.; Sartori, Michael A.; Antsaklis, Panos J.

    1989-01-01

    A type of neural network, the multilayer perceptron, is used to classify numeric data and assign appropriate symbols to various classes. This numeric-to-symbolic conversion results in a type of information extraction, which is similar to what is called data reduction in pattern recognition. The use of the neural network as a numeric-to-symbolic converter is introduced, its application in autonomous control is discussed, and several applications are studied. The perceptron is used as a numeric-to-symbolic converter for a discrete-event system controller supervising a continuous variable dynamic system. It is also shown how the perceptron can implement fault trees, which provide useful information (alarms) in a biological system and information for failure diagnosis and control purposes in an aircraft example.
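
A numeric-to-symbolic converter of the kind described can be sketched as a single-layer network whose output units each score one symbol; the winning unit labels the numeric input for the discrete-event supervisor. In the paper the perceptron is trained; here the weights and the temperature-alarm example are hand-set and hypothetical.

```python
import numpy as np

def numeric_to_symbol(x, W, b, symbols):
    """One-layer perceptron as a numeric-to-symbolic converter:
    each output unit scores one symbol; the winner labels the input."""
    scores = W @ np.asarray(x, dtype=float) + b
    return symbols[int(np.argmax(scores))]

# Hypothetical weights separating 'nominal' from 'alarm' readings,
# with the decision boundary at x = 50.
W = np.array([[-1.0], [1.0]])
b = np.array([50.0, -50.0])
symbols = ["nominal", "alarm"]
assert numeric_to_symbol([20.0], W, b, symbols) == "nominal"
assert numeric_to_symbol([80.0], W, b, symbols) == "alarm"
```

The symbol stream ("nominal", "alarm", ...) is the data-reduced representation a discrete-event controller can consume, which is the conversion the abstract describes.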

  3. Novel probabilistic neuroclassifier

    NASA Astrophysics Data System (ADS)

    Hong, Jiang; Serpen, Gursel

    2003-09-01

    A novel probabilistic potential function neural network classifier algorithm to deal with classes which are multi-modally distributed and formed from sets of disjoint pattern clusters is proposed in this paper. The proposed classifier has a number of desirable properties which distinguish it from other neural network classifiers. A complete description of the algorithm in terms of its architecture and the pseudocode is presented. Simulation analysis of the newly proposed neuro-classifier algorithm on a set of benchmark problems is presented. Benchmark problems tested include IRIS, Sonar, Vowel Recognition, Two-Spiral, Wisconsin Breast Cancer, Cleveland Heart Disease and Thyroid Gland Disease. Simulation results indicate that the proposed neuro-classifier performs consistently better for a subset of problems for which other neural classifiers perform relatively poorly.
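
A closely related classic, the Parzen-window probabilistic neural network, sketches how potential-function classifiers handle multi-modally distributed classes: each class density is estimated by averaging Gaussian potentials over its stored exemplars, so disjoint clusters of one class are covered naturally. This is an illustration of the family, not the authors' exact algorithm.

```python
import numpy as np

def pnn_classify(x, X, y, sigma=0.5):
    """Parzen-window PNN: average Gaussian potentials per class and
    predict the class with the highest estimated density."""
    x, X, y = np.asarray(x, dtype=float), np.asarray(X, dtype=float), np.asarray(y)
    best, best_score = None, -1.0
    for c in np.unique(y):
        d2 = ((X[y == c] - x) ** 2).sum(axis=1)
        score = np.exp(-d2 / (2.0 * sigma ** 2)).mean()
        if score > best_score:
            best, best_score = c, score
    return best

# Toy 2-D data: two well-separated clusters, one per class.
X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9]])
y = np.array([0, 0, 1, 1])
assert pnn_classify([0.1, 0.0], X, y) == 0
assert pnn_classify([3.0, 3.1], X, y) == 1
```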

  4. Self-organized network with a supervised training and its comparison with FALVQ in artificial odor recognition system

    NASA Astrophysics Data System (ADS)

    Kusumoputro, Benyamin; Rostiviani, Linda; Saptawijaya, Ari

    2000-07-01

    An artificial odor recognition system has been developed in order to mimic the human sensory test in the cosmetics, perfume and beverage industries. The developed system, however, lacks the ability to recognize unknown types of odor. To improve the system's capability, a hybrid neural system with a supervised learning paradigm is developed and used as a pattern classifier. In this paper, the performance of the hybrid neural system is investigated, together with that of the FALVQ neural system.

  5. Microfluidic neurite guidance to study structure-function relationships in topologically-complex population-based neural networks.

    PubMed

    Honegger, Thibault; Thielen, Moritz I; Feizi, Soheil; Sanjana, Neville E; Voldman, Joel

    2016-06-22

    The central nervous system is a dense, layered, 3D interconnected network of populations of neurons, and thus recapitulating that complexity for in vitro CNS models requires methods that can create defined topologically-complex neuronal networks. Several three-dimensional patterning approaches have been developed but none have demonstrated the ability to control the connections between populations of neurons. Here we report a method using AC electrokinetic forces that can guide, accelerate, slow down and push up neurites in un-modified collagen scaffolds. We present a means to create in vitro neural networks of arbitrary complexity by using such forces to create 3D intersections of primary neuronal populations that are plated in a 2D plane. We report for the first time in vitro basic brain motifs that have been previously observed in vivo and show that their functional network is highly decorrelated to their structure. This platform can provide building blocks to reproduce in vitro the complexity of neural circuits and provide a minimalistic environment to study the structure-function relationship of the brain circuitry.

  6. Microfluidic neurite guidance to study structure-function relationships in topologically-complex population-based neural networks

    NASA Astrophysics Data System (ADS)

    Honegger, Thibault; Thielen, Moritz I.; Feizi, Soheil; Sanjana, Neville E.; Voldman, Joel

    2016-06-01

    The central nervous system is a dense, layered, 3D interconnected network of populations of neurons, and thus recapitulating that complexity for in vitro CNS models requires methods that can create defined topologically-complex neuronal networks. Several three-dimensional patterning approaches have been developed but none have demonstrated the ability to control the connections between populations of neurons. Here we report a method using AC electrokinetic forces that can guide, accelerate, slow down and push up neurites in un-modified collagen scaffolds. We present a means to create in vitro neural networks of arbitrary complexity by using such forces to create 3D intersections of primary neuronal populations that are plated in a 2D plane. We report for the first time in vitro basic brain motifs that have been previously observed in vivo and show that their functional network is highly decorrelated to their structure. This platform can provide building blocks to reproduce in vitro the complexity of neural circuits and provide a minimalistic environment to study the structure-function relationship of the brain circuitry.

  7. The use of decision tree induction and artificial neural networks for recognizing the geochemical distribution patterns of LREE in the Choghart deposit, Central Iran

    NASA Astrophysics Data System (ADS)

    Zaremotlagh, S.; Hezarkhani, A.

    2017-04-01

    Some evidence of rare earth element (REE) concentrations is found in iron oxide-apatite (IOA) deposits which are located in the Central Iranian microcontinent. There are many unsolved problems about the origin and metallogenesis of IOA deposits in this district. Although it is considered that felsic magmatism and mineralization were simultaneous in the district, interaction of multi-stage hydrothermal-magmatic processes within the Early Cambrian volcano-sedimentary sequence probably caused some epigenetic mineralizations. Secondary geological processes (e.g., multi-stage mineralization, alteration, and weathering) have affected the variations of major elements and the possible redistribution of REE in IOA deposits. Hence, the geochemical behaviors and distribution patterns of REE are expected to be complicated in different zones of these deposits. The aim of this paper is to recognize LREE distribution patterns based on whole-rock chemical compositions and to automatically discover their geochemical rules. For this purpose, pattern recognition techniques including decision trees and neural networks were applied to a high-dimensional geochemical dataset from the Choghart IOA deposit. Because some data features were irrelevant or redundant in recognizing the distribution patterns of each LREE, a greedy attribute subset selection technique was employed to select the best subset of predictors used in the classification tasks. The decision trees (CART algorithm) were pruned optimally to categorize independent test data more accurately than unpruned ones. The most effective classification rules were extracted from the pruned tree to describe the meaningful relationships between the predictors and different concentrations of LREE. A feed-forward artificial neural network was also applied to reliably predict the influence of various rock compositions on the spatial distribution patterns of LREE, with a better performance than the decision tree induction. 
The findings of this study could be effectively used to visualize the LREE distribution patterns as geochemical maps.

  8. Chimera states in brain networks: Empirical neural vs. modular fractal connectivity

    NASA Astrophysics Data System (ADS)

    Chouzouris, Teresa; Omelchenko, Iryna; Zakharova, Anna; Hlinka, Jaroslav; Jiruska, Premysl; Schöll, Eckehard

    2018-04-01

    Complex spatiotemporal patterns, called chimera states, consist of coexisting coherent and incoherent domains and can be observed in networks of coupled oscillators. The interplay of synchrony and asynchrony in complex brain networks is an important aspect in studies of both the brain function and disease. We analyse the collective dynamics of FitzHugh-Nagumo neurons in complex networks motivated by its potential application to epileptology and epilepsy surgery. We compare two topologies: an empirical structural neural connectivity derived from diffusion-weighted magnetic resonance imaging and a mathematically constructed network with modular fractal connectivity. We analyse the properties of chimeras and partially synchronized states and obtain regions of their stability in the parameter planes. Furthermore, we qualitatively simulate the dynamics of epileptic seizures and study the influence of the removal of nodes on the network synchronizability, which can be useful for applications to epileptic surgery.
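
A minimal version of the dynamics studied here, FitzHugh-Nagumo units coupled diffusively, can be sketched with explicit Euler integration. The ring topology, parameters and initial conditions below are illustrative simplifications of the empirical and modular-fractal connectivities analysed in the paper.

```python
import numpy as np

def simulate_fhn_ring(n=20, eps=0.05, a=0.5, coupling=0.1,
                      steps=20000, dt=0.005):
    """Explicit-Euler integration of FitzHugh-Nagumo units
    (eps * du/dt = u - u^3/3 - v + coupling, dv/dt = u + a)
    with nearest-neighbour diffusive coupling on a ring."""
    rng = np.random.default_rng(1)
    u = rng.uniform(-1.0, 1.0, n)   # fast (activator) variables
    v = rng.uniform(-1.0, 1.0, n)   # slow (inhibitor) variables
    for _ in range(steps):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u   # ring Laplacian
        du = (u - u ** 3 / 3.0 - v + coupling * lap) / eps
        dv = u + a
        u = u + dt * du
        v = v + dt * dv
    return u, v

u, v = simulate_fhn_ring()
assert np.all(np.isfinite(u)) and np.all(np.abs(u) < 10)  # bounded oscillations
```

With |a| < 1 the single-unit fixed point sits on the unstable branch of the cubic nullcline, so each unit oscillates; coherence measures over such trajectories are what distinguish chimera from synchronized states.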

  9. Pattern reverberation in networks of excitable systems with connection delays

    NASA Astrophysics Data System (ADS)

    Lücken, Leonhard; Rosin, David P.; Worlitzer, Vasco M.; Yanchuk, Serhiy

    2017-01-01

    We consider the recurrent pulse-coupled networks of excitable elements with delayed connections, which are inspired by the biological neural networks. If the delays are tuned appropriately, the network can either stay in the steady resting state, or alternatively, exhibit a desired spiking pattern. It is shown that such a network can be used as a pattern-recognition system. More specifically, the application of the correct pattern as an external input to the network leads to a self-sustained reverberation of the encoded pattern. In terms of the coupling structure, the tolerance and the refractory time of the individual systems, we determine the conditions for the uniqueness of the sustained activity, i.e., for the functionality of the network as an unambiguous pattern detector. We point out the relation of the considered systems with cyclic polychronous groups and show how the assumed delay configurations may arise in a self-organized manner when a spike-time dependent plasticity of the connection delays is assumed. As excitable elements, we employ the simplistic coincidence detector models as well as the Hodgkin-Huxley neuron models. Moreover, the system is implemented experimentally on a Field-Programmable Gate Array.
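
The delay-tuned reverberation mechanism can be sketched in an event-driven toy model: with connection delays matched to a ring structure, a single correctly timed input spike keeps circulating after the input ends. The coincidence-detection and refractoriness conditions of the paper are simplified here to single-spike triggering, so this only illustrates the self-sustained reverberation, not the uniqueness analysis.

```python
def reverberate(n, delay, input_spikes, horizon):
    """Pulse-coupled ring with connection delays: a spike on neuron i
    at time t re-triggers neuron (i+1) mod n at time t + delay."""
    events = set(input_spikes)              # {(time, neuron)}
    for t in range(horizon):
        for i in range(n):
            if (t, i) in events:
                events.add((t + delay, (i + 1) % n))
    return events

ev = reverberate(n=3, delay=2, input_spikes={(0, 0)}, horizon=20)
# The single input spike keeps circulating long after the input ended.
assert (18, 0) in ev
```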

  10. Optimal Design for Hetero-Associative Memory: Hippocampal CA1 Phase Response Curve and Spike-Timing-Dependent Plasticity

    PubMed Central

    Miyata, Ryota; Ota, Keisuke; Aonishi, Toru

    2013-01-01

    Recently reported experimental findings suggest that the hippocampal CA1 network stores spatio-temporal spike patterns and retrieves temporally reversed and spread-out patterns. In this paper, we explore the idea that the properties of the neural interactions and the synaptic plasticity rule in the CA1 network enable it to function as a hetero-associative memory recalling such reversed and spread-out spike patterns. In line with Lengyel’s speculation (Lengyel et al., 2005), we first derive optimally designed spike-timing-dependent plasticity (STDP) rules that are matched to neural interactions formalized in terms of phase response curves (PRCs) for performing the hetero-associative memory function. By maximizing objective functions formulated in terms of mutual information for evaluating memory retrieval performance, we search for STDP window functions that are optimal for retrieval of normal and doubly spread-out patterns under the constraint that the PRCs are those of CA1 pyramidal neurons. The system, which can retrieve normal and doubly spread-out patterns, can also retrieve reversed patterns with the same quality. Finally, we demonstrate that the purposely designed STDP window functions qualitatively conform to typical ones found in CA1 pyramidal neurons. PMID:24204822

  11. Resting-State Brain and the FTO Obesity Risk Allele: Default Mode, Sensorimotor, and Salience Network Connectivity Underlying Different Somatosensory Integration and Reward Processing between Genotypes.

    PubMed

    Olivo, Gaia; Wiemerslage, Lyle; Nilsson, Emil K; Solstrand Dahlberg, Linda; Larsen, Anna L; Olaya Búcaro, Marcela; Gustafsson, Veronica P; Titova, Olga E; Bandstein, Marcus; Larsson, Elna-Marie; Benedict, Christian; Brooks, Samantha J; Schiöth, Helgi B

    2016-01-01

    Single-nucleotide polymorphisms (SNPs) of the fat mass and obesity associated (FTO) gene are linked to obesity, but how these SNPs influence resting-state neural activation is unknown. Few brain-imaging studies have investigated the influence of obesity-related SNPs on neural activity, and no study has investigated resting-state connectivity patterns. We tested connectivity within three, main resting-state networks: default mode (DMN), sensorimotor (SMN), and salience network (SN) in 30 male participants, grouped based on genotype for the rs9939609 FTO SNP, as well as punishment and reward sensitivity measured by the Behavioral Inhibition (BIS) and Behavioral Activation System (BAS) questionnaires. Because obesity is associated with anomalies in both systems, we calculated a BIS/BAS ratio (BBr) accounting for features of both scores. A prominence of BIS over BAS (higher BBr) resulted in increased connectivity in frontal and paralimbic regions. These alterations were more evident in the obesity-associated AA genotype, where a high BBr was also associated with increased SN connectivity in dopaminergic circuitries, and in a subnetwork involved in somatosensory integration regarding food. Participants with AA genotype and high BBr, compared to corresponding participants in the TT genotype, also showed greater DMN connectivity in regions involved in the processing of food cues, and in the SMN for regions involved in visceral perception and reward-based learning. These findings suggest that neural connectivity patterns influence the sensitivity toward punishment and reward more closely in the AA carriers, predisposing them to developing obesity. Our work explains a complex interaction between genetics, neural patterns, and behavioral measures in determining the risk for obesity and may help develop individually-tailored strategies for obesity prevention.

  12. Resting-State Brain and the FTO Obesity Risk Allele: Default Mode, Sensorimotor, and Salience Network Connectivity Underlying Different Somatosensory Integration and Reward Processing between Genotypes

    PubMed Central

    Olivo, Gaia; Wiemerslage, Lyle; Nilsson, Emil K.; Solstrand Dahlberg, Linda; Larsen, Anna L.; Olaya Búcaro, Marcela; Gustafsson, Veronica P.; Titova, Olga E.; Bandstein, Marcus; Larsson, Elna-Marie; Benedict, Christian; Brooks, Samantha J.; Schiöth, Helgi B.

    2016-01-01

    Single-nucleotide polymorphisms (SNPs) of the fat mass and obesity associated (FTO) gene are linked to obesity, but how these SNPs influence resting-state neural activation is unknown. Few brain-imaging studies have investigated the influence of obesity-related SNPs on neural activity, and no study has investigated resting-state connectivity patterns. We tested connectivity within three, main resting-state networks: default mode (DMN), sensorimotor (SMN), and salience network (SN) in 30 male participants, grouped based on genotype for the rs9939609 FTO SNP, as well as punishment and reward sensitivity measured by the Behavioral Inhibition (BIS) and Behavioral Activation System (BAS) questionnaires. Because obesity is associated with anomalies in both systems, we calculated a BIS/BAS ratio (BBr) accounting for features of both scores. A prominence of BIS over BAS (higher BBr) resulted in increased connectivity in frontal and paralimbic regions. These alterations were more evident in the obesity-associated AA genotype, where a high BBr was also associated with increased SN connectivity in dopaminergic circuitries, and in a subnetwork involved in somatosensory integration regarding food. Participants with AA genotype and high BBr, compared to corresponding participants in the TT genotype, also showed greater DMN connectivity in regions involved in the processing of food cues, and in the SMN for regions involved in visceral perception and reward-based learning. These findings suggest that neural connectivity patterns influence the sensitivity toward punishment and reward more closely in the AA carriers, predisposing them to developing obesity. Our work explains a complex interaction between genetics, neural patterns, and behavioral measures in determining the risk for obesity and may help develop individually-tailored strategies for obesity prevention. PMID:26924971

  13. Memory and pattern storage in neural networks with activity dependent synapses

    NASA Astrophysics Data System (ADS)

    Mejias, J. F.; Torres, J. J.

    2009-01-01

    We present recently obtained results on the influence of the interplay between several activity dependent synaptic mechanisms, such as short-term depression and facilitation, on the maximum memory storage capacity in an attractor neural network [1]. In contrast with the case of synaptic depression, which drastically reduces the capacity of the network to store and retrieve activity patterns [2], synaptic facilitation is able to enhance the memory capacity in different situations. In particular, we find that a convenient balance between depression and facilitation can enhance the memory capacity, reaching maximal values similar to those obtained with static synapses, that is, without activity-dependent processes. We also argue, employing simple arguments, that this level of balance is compatible with experimental data recorded from some cortical areas, where depression and facilitation may play an important role for both memory-oriented tasks and information processing. We conclude that depressing synapses with a certain level of facilitation allow the network to recover the good retrieval properties of networks with static synapses while maintaining the nonlinear properties of dynamic synapses, which are convenient for information processing and coding.
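
The static-synapse baseline that this abstract compares against is the classical Hopfield attractor network. A minimal sketch with Hebbian couplings and retrieval from a corrupted cue (the dynamic depression/facilitation terms of the paper are not modelled here):

```python
import numpy as np

def hebbian_weights(patterns):
    """Static Hebbian couplings J = (1/N) * sum_mu outer(xi_mu, xi_mu)."""
    P = np.asarray(patterns, dtype=float)
    J = P.T @ P / P.shape[1]
    np.fill_diagonal(J, 0.0)     # no self-coupling
    return J

def retrieve(J, state, steps=10):
    """Synchronous sign-threshold updates from a cue state."""
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        s = np.sign(J @ s)
        s[s == 0] = 1.0          # break rare ties deterministically
    return s

rng = np.random.default_rng(2)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))   # 3 patterns, 64 units
J = hebbian_weights(patterns)
cue = patterns[0].copy()
cue[:4] *= -1                                      # corrupt 4 of 64 bits
overlap = float((retrieve(J, cue) * patterns[0]).mean())
assert overlap > 0.9                               # pattern recovered
```

At this low storage load (3 patterns in 64 units) the corrupted cue falls well inside the attractor basin; capacity studies like the one above ask how many patterns can be stored before such retrieval fails.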

  14. Intelligent-based Structural Damage Detection Model

    NASA Astrophysics Data System (ADS)

    Lee, Eric Wai Ming; Yu, Kin Fung

    2010-05-01

    This paper presents the application of a novel Artificial Neural Network (ANN) model for the diagnosis of structural damage. The ANN model, denoted as the GRNNFA, is a hybrid model combining the General Regression Neural Network (GRNN) and the Fuzzy ART (FA) model. It not only retains the important features of the GRNN and FA models (i.e. fast and stable network training and incremental growth of the network structure) but also facilitates the removal of the noise embedded in the training samples. Structural damage alters the stiffness distribution of the structure and so changes the natural frequencies and mode shapes of the system. The measured modal parameter changes due to a particular damage are treated as patterns for that damage. The proposed GRNNFA model was trained to learn those patterns in order to detect the possible damage location of the structure. Simulated data are employed to verify and illustrate the procedures of the proposed ANN-based damage diagnosis methodology. The results of this study have demonstrated the feasibility of applying the GRNNFA model to structural damage diagnosis even when the training samples were noise-contaminated.
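
The GRNN half of the GRNNFA hybrid is essentially Nadaraya-Watson kernel regression over stored training samples. A minimal sketch follows; the Fuzzy ART clustering/noise-removal stage is omitted, and the mapping from a modal-frequency shift to a damage index is hypothetical illustration data.

```python
import numpy as np

def grnn_predict(x, X_train, y_train, sigma=0.03):
    """General Regression Neural Network prediction: a Gaussian-kernel
    weighted average of the stored training targets."""
    d2 = ((np.asarray(X_train, dtype=float) - np.asarray(x, dtype=float)) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(w @ np.asarray(y_train, dtype=float) / w.sum())

# Hypothetical training pairs: modal-frequency shift -> damage index.
X = [[0.00], [0.05], [0.10], [0.15]]
y = [0.0, 0.3, 0.6, 0.9]
pred = grnn_predict([0.10], X, y)
assert abs(pred - 0.6) < 0.1
```

Because every training sample contributes through a kernel weight, training is one-shot and stable (just storing samples), which is the "fast and stable network training" feature the abstract highlights.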

  15. Modelling fuel cell performance using artificial intelligence

    NASA Astrophysics Data System (ADS)

    Ogaji, S. O. T.; Singh, R.; Pilidis, P.; Diacakis, M.

    Over the last few years, fuel cell technology has been promisingly increasing its share in the generation of stationary power. Numerous pilot projects are operating worldwide, continuously increasing the number of operating hours either as stand-alone devices or as part of gas turbine combined cycles. An essential tool for the adequate and dynamic analysis of such systems is a software model that enables the user to assess a large number of alternative options in the least possible time. On the other hand, the sphere of application of artificial neural networks has widened, covering such endeavours as medicine, finance and, unsurprisingly, engineering (diagnostics of faults in machines). Artificial neural networks have been described as a diagrammatic representation of a mathematical equation that receives values (inputs) and gives out results (outputs). Artificial neural network systems have the capacity to recognise and associate patterns, and because of their inherent design features, they can be applied to linear and non-linear problem domains. In this paper, the performance of the fuel cell is modelled using artificial neural networks. The inputs to the network are variables that are critical to the performance of the fuel cell, while the outputs are the effects of changes in any one or all of the fuel cell design variables on its performance. Critical parameters for the cell include the geometrical configuration as well as the operating conditions. For the neural network, various network design parameters such as the network size, training algorithm and activation functions, and their effects on the performance modelling, are discussed. Results from the analysis as well as the limitations of the approach are presented and discussed.

  16. Discriminant analysis of fused positive and negative ion mobility spectra using multivariate self-modeling mixture analysis and neural networks.

    PubMed

    Chen, Ping; Harrington, Peter B

    2008-02-01

    A new method coupling multivariate self-modeling mixture analysis and pattern recognition has been developed to identify toxic industrial chemicals using fused positive and negative ion mobility spectra (dual scan spectra). A Smiths lightweight chemical detector (LCD), which can measure positive and negative ion mobility spectra simultaneously, was used to acquire the data. Simple-to-use interactive self-modeling mixture analysis (SIMPLISMA) was used to separate the analytical peaks in the ion mobility spectra from the background reactant ion peaks (RIP). The SIMPLISMA analytical components of the positive and negative ion peaks were combined together in a butterfly representation (i.e., negative spectra are reported with negative drift times and reflected with respect to the ordinate and juxtaposed with the positive ion mobility spectra). Temperature constrained cascade-correlation neural network (TCCCN) models were built to classify the toxic industrial chemicals. Seven common toxic industrial chemicals were used in this project to evaluate the performance of the algorithm. Ten bootstrapped Latin partitions demonstrated that classification by neural networks using the SIMPLISMA components was statistically better than by neural network models trained with the fused ion mobility spectra (IMS).

  17. COMPUTATIONAL ANALYSIS BASED ON ARTIFICIAL NEURAL NETWORKS FOR AIDING IN DIAGNOSING OSTEOARTHRITIS OF THE LUMBAR SPINE.

    PubMed

    Veronezi, Carlos Cassiano Denipotti; de Azevedo Simões, Priscyla Waleska Targino; Dos Santos, Robson Luiz; da Rocha, Edroaldo Lummertz; Meláo, Suelen; de Mattos, Merisandra Côrtes; Cechinel, Cristian

    2011-01-01

    To ascertain the advantages of applying artificial neural networks to recognize patterns on lumbar spine radiographs in order to aid in the process of diagnosing primary osteoarthritis. This was a cross-sectional descriptive analytical study with a quantitative approach and an emphasis on diagnosis. The training set was composed of images collected between January and July 2009 from patients who had undergone lateral-view digital radiographs of the lumbar spine, which were provided by a radiology clinic located in the municipality of Criciúma (SC). Out of the total of 260 images gathered, those with distortions, those presenting pathological conditions that altered the architecture of the lumbar spine and those with patterns that were difficult to characterize were discarded, resulting in 206 images. The image database (n = 206) was then subdivided, resulting in 68 radiographs for the training stage, 68 images for tests and 70 for validation. A hybrid neural network based on Kohonen self-organizing maps and on Multilayer Perceptron networks was used. After 90 cycles, the validation was carried out on the best results, achieving accuracy of 62.85%, sensitivity of 65.71% and specificity of 60%. Even though the effectiveness shown was moderate, this study is still innovative. The values show that the technique used has a promising future, pointing towards further studies on image and cycle processing methodology with a larger quantity of radiographs.
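
The reported figures are consistent with a 70-image validation split of 35 osteoarthritis and 35 control cases (an assumption, not stated in the abstract). The sketch below shows how accuracy, sensitivity and specificity follow from confusion-matrix counts.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)       # true positive rate
    specificity = tn / (tn + fp)       # true negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts matching the reported 62.85% / 65.71% / 60%
# (the abstract's 62.85% appears to truncate 44/70 = 62.857...%).
acc, sens, spec = diagnostic_metrics(tp=23, fn=12, tn=21, fp=14)
assert round(acc * 100, 2) == 62.86
assert round(sens * 100, 2) == 65.71
assert round(spec * 100, 1) == 60.0
```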

  18. Implementations of back propagation algorithm in ecosystems applications

    NASA Astrophysics Data System (ADS)

    Ali, Khalda F.; Sulaiman, Riza; Elamir, Amir Mohamed

    2015-05-01

    Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is in solving problems that are too complex for conventional technologies: problems that either have no algorithmic solution or whose algorithmic solution is too complex to be found. ANNs are abstractions of the biological brain, developed from concepts that emerged from late twentieth-century neurophysiological experiments on cells of the human brain, and they have been adopted to overcome perceived inadequacies of conventional ecological data analysis methods. ANNs have gained increasing attention in ecosystem applications because of their capacity to detect patterns in data through non-linear relationships, a characteristic that confers on them a superior predictive ability. In this research, ANNs are applied to the analysis of an ecological system. The networks use the well-known Back Propagation (BP) algorithm with the delta rule for adaptation. BP uses supervised learning: the algorithm is provided with examples of the inputs and outputs the network is to compute, and the error between the network's output and the target is calculated. The idea of the back propagation algorithm is to reduce this error until the ANN learns the training data; training begins with random weights, and the goal is to adjust them so that the error is minimal. This research evaluated the use of ANN techniques in the analysis and modeling of an ecological system. The experimental results demonstrate that an artificial neural network can be trained to act as an expert ecosystem analyzer for many applications in ecological fields. The pilot ecosystem analyzer shows promising ability for generalization, and further tuning and refinement of the underlying neural network are required for optimal performance.
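    The training loop described in this abstract (random initial weights, a forward pass, an error calculation, and delta-rule weight adjustments that reduce the error over the training data) can be sketched on a toy task. This is a generic illustration, not the authors' ecosystem model; the XOR data set, layer sizes and learning rate are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised task (XOR) standing in for the input/output examples.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training begins with random weights, as the abstract describes.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))
lr = 0.5

initial_error = None
for epoch in range(10000):
    h = sigmoid(X @ W1)          # forward pass: hidden layer
    out = sigmoid(h @ W2)        # forward pass: output layer
    err = out - y                # error between network output and target
    if initial_error is None:
        initial_error = float(np.mean(err ** 2))
    # Delta rule: propagate the error backwards and adjust each
    # weight layer against its error gradient.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

final_error = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))
print(final_error < initial_error)   # training reduces the error
```

The goal, as in the abstract, is only that repeated error-driven adjustment drives the training error down from its random-weight starting value.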

  19. Network-Level Structure-Function Relationships in Human Neocortex

    PubMed Central

    Mišić, Bratislav; Betzel, Richard F.; de Reus, Marcel A.; van den Heuvel, Martijn P.; Berman, Marc G.; McIntosh, Anthony R.; Sporns, Olaf

    2016-01-01

    The dynamics of spontaneous fluctuations in neural activity are shaped by underlying patterns of anatomical connectivity. While numerous studies have demonstrated edge-wise correspondence between structural and functional connections, much less is known about how large-scale coherent functional network patterns emerge from the topology of structural networks. In the present study, we deploy a multivariate statistical technique, partial least squares, to investigate the association between spatially extended structural networks and functional networks. We find multiple statistically robust patterns, reflecting reliable combinations of structural and functional subnetworks that are optimally associated with one another. Importantly, these patterns generally do not show a one-to-one correspondence between structural and functional edges, but are instead distributed and heterogeneous, with many functional relationships arising from nonoverlapping sets of anatomical connections. We also find that structural connections between high-degree hubs are disproportionately represented, suggesting that these connections are particularly important in establishing coherent functional networks. Altogether, these results demonstrate that the network organization of the cerebral cortex supports the emergence of diverse functional network configurations that often diverge from the underlying anatomical substrate. PMID:27102654

  20. Application of artificial neural networks to identify equilibration in computer simulations

    NASA Astrophysics Data System (ADS)

    Leibowitz, Mitchell H.; Miller, Evan D.; Henry, Michael M.; Jankowski, Eric

    2017-11-01

    Determining which microstates generated by a thermodynamic simulation are representative of the ensemble for which sampling is desired is a ubiquitous, underspecified problem. Artificial neural networks are one type of machine learning algorithm that can provide a reproducible way to apply pattern recognition heuristics to underspecified problems. Here we use the open-source TensorFlow machine learning library and apply it to the problem of identifying which hypothetical observation sequences from a computer simulation are “equilibrated” and which are not. We generate training populations and test populations of observation sequences with embedded linear and exponential correlations. We train a two-neuron artificial network to distinguish the correlated and uncorrelated sequences. We find that this simple network is good enough for > 98% accuracy in identifying exponentially-decaying energy trajectories from molecular simulations.
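    The pipeline in this abstract (generate labeled observation sequences with and without an embedded correlation, train a very small network, score accuracy on a held-out population) can be imitated without TensorFlow. The sketch below is an illustrative stand-in, not the authors' code: it uses a single logistic unit on a hand-chosen drift feature, and the sequence length, decay amplitude and timescale are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_sequence(equilibrated, n=200):
    noise = rng.normal(size=n)
    if equilibrated:
        return noise                            # stationary fluctuations
    t = np.arange(n)
    return 3.0 * np.exp(-t / 50.0) + noise      # embedded exponential decay

def features(seq):
    # Drift feature: a still-relaxing sequence has a higher mean in its
    # first half than in its second half; an equilibrated one does not.
    half = len(seq) // 2
    return np.array([seq[:half].mean() - seq[half:].mean(), 1.0])  # + bias

# Training population.
X = np.array([features(make_sequence(eq)) for eq in [True, False] * 200])
y = np.array([1.0, 0.0] * 200)          # 1 = equilibrated

# One logistic unit trained by gradient ascent on the log-likelihood.
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.5 * X.T @ (y - p) / len(y)

# Held-out test population.
Xt = np.array([features(make_sequence(eq)) for eq in [True, False] * 100])
yt = np.array([1.0, 0.0] * 100)
acc = ((1.0 / (1.0 + np.exp(-Xt @ w)) > 0.5) == yt.astype(bool)).mean()
print(f"test accuracy: {acc:.2f}")
```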

  1. A feasibility study for long-path multiple detection using a neural network

    NASA Technical Reports Server (NTRS)

    Feuerbacher, G. A.; Moebes, T. A.

    1994-01-01

    Least-squares inverse filters have found widespread use in the deconvolution of seismograms and the removal of multiples. The use of least-squares prediction filters with prediction distances greater than unity leads to the method of predictive deconvolution, which can be used for the removal of long-path multiples. The predictive technique allows one to control the length of the desired output wavelet through the predictive distance, and hence to specify the desired degree of resolution. Events which are periodic within given repetition ranges can be attenuated selectively; the method is thus effective in the suppression of rather complex reverberation patterns. A back propagation (BP) neural network is constructed to detect the first arrivals of the multiples and therefore aid in the more accurate determination of the predictive distance of the multiples. The neural detector is applied to synthetic reflection coefficients and synthetic seismic traces. The processing results show that the neural detector is accurate and should lead to an automated, fast method for determining predictive distances across vast amounts of data such as seismic field records. The neural network system used in this study was the NASA Software Technology Branch's NETS system.
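    For illustration, predictive deconvolution itself (setting aside the neural first-arrival detector) can be sketched as a least-squares prediction filter whose predictive distance selects which periodicities are attenuated. The filter length, predictive distance and synthetic multiple train below are invented for the example.

```python
import numpy as np

def predictive_decon(trace, n_filter, alpha):
    """Least-squares prediction filter with predictive distance alpha.

    The filter predicts trace[t + alpha] from the n_filter samples up to
    trace[t]; subtracting the prediction attenuates events periodic with
    lags near alpha (long-path multiples), keeping the unpredictable part.
    """
    r = np.correlate(trace, trace, mode="full")[len(trace) - 1:]  # lags 0..N-1
    # Normal equations: Toeplitz autocorrelation matrix, lagged right side.
    R = np.array([[r[abs(i - j)] for j in range(n_filter)]
                  for i in range(n_filter)])
    g = r[alpha:alpha + n_filter]
    f = np.linalg.solve(R + 1e-6 * r[0] * np.eye(n_filter), g)
    pred = np.zeros_like(trace)
    for t in range(n_filter - 1, len(trace) - alpha):
        pred[t + alpha] = f @ trace[t - n_filter + 1:t + 1][::-1]
    return trace - pred   # prediction-error output

# Synthetic trace: a primary at sample 20 with decaying multiples every 25.
trace = np.zeros(200)
for k, amp in enumerate([1.0, -0.6, 0.36, -0.216]):
    trace[20 + 25 * k] = amp
out = predictive_decon(trace, n_filter=10, alpha=20)
print(abs(out[45]) < 0.1 * abs(trace[45]))   # first multiple attenuated
```

With the predictive distance set just below the multiple period, the periodic reverberation is predictable and is subtracted out, while the primary arrival (which nothing earlier predicts) passes through unchanged.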

  2. The Topographical Mapping in Drosophila Central Complex Network and Its Signal Routing

    PubMed Central

    Chang, Po-Yen; Su, Ta-Shun; Shih, Chi-Tin; Lo, Chung-Chuan

    2017-01-01

    Neural networks regulate brain functions by routing signals. Therefore, investigating the detailed organization of a neural circuit at the cellular level is a crucial step toward understanding the neural mechanisms of brain functions. To study how a complicated neural circuit is organized, we analyzed recently published data on the neural circuit of the Drosophila central complex, a brain structure associated with a variety of functions including sensory integration and coordination of locomotion. We discovered that, except for a small number of “atypical” neuron types, the network structure formed by the 194 identified neuron types can be described by only a few simple mathematical rules. Specifically, the topological mapping formed by these neurons can be reconstructed by applying a generation matrix to a small set of initial neurons. By analyzing how information flows propagate with or without the atypical neurons, we found that while the general pattern of signal propagation in the central complex follows the simple topological mapping formed by the “typical” neurons, some atypical neurons can substantially re-route the signal pathways, implying specific roles for these neurons in sensory signal integration. The present study provides insights into the organizational principle and signal integration in the central complex. PMID:28443014

  3. Evaluation of tactical training in team handball by means of artificial neural networks.

    PubMed

    Hassan, Amr; Schrapf, Norbert; Ramadan, Wael; Tilp, Markus

    2017-04-01

    While tactical performance in competition has been analysed extensively, the assessment of training processes for tactical behaviour has rather been neglected in the literature. The purpose of this study is therefore to provide a methodology to assess the acquisition and implementation of offensive tactical behaviour in team handball. Game analysis software combined with artificial neural network (ANN) software enabled tactical target patterns to be identified from high-level junior players based on their positions during offensive actions. These patterns were then trained by an amateur junior handball team (n = 14; age 17 (0.5) years). Following 6 weeks of tactical training, an exhibition game was performed in which the players were advised to use the target patterns as often as possible. Subsequently, the position data from the game were analysed with an ANN. The test revealed that 58% of the played patterns could be related to the trained target patterns. The similarity between executed and target patterns was assessed by calculating the mean distance between key positions of the players in the game and in the target pattern, which was 0.49 (0.20) m. In summary, the presented method appears to be a valid instrument for assessing tactical training.

  4. Do Convolutional Neural Networks Learn Class Hierarchy?

    PubMed

    Bilal, Alsallakh; Jourabloo, Amin; Ye, Mao; Liu, Xiaoming; Ren, Liu

    2018-01-01

    Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, accuracy usually drops as the possibilities for confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation to CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it also dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the later layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvements in accuracy: designing hierarchy-aware CNNs accelerates model convergence and alleviates overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.

  5. The neural processing of taste

    PubMed Central

    Lemon, Christian H; Katz, Donald B

    2007-01-01

    Although there have been many recent advances in the field of gustatory neurobiology, our knowledge of how the nervous system is organized to process information about taste is still far from complete. Many studies on this topic have focused on understanding how gustatory neural circuits are spatially organized to represent information about taste quality (e.g., "sweet", "salty", "bitter", etc.). Arguments pertaining to this issue have largely centered on whether taste is carried by dedicated neural channels or a pattern of activity across a neural population. But there is now mounting evidence that the timing of neural events may also importantly contribute to the representation of taste. In this review, we attempt to summarize recent findings in the field that pertain to these issues. Both space and time are variables likely related to the mechanism of the gustatory neural code: information about taste appears to reside in spatial and temporal patterns of activation in gustatory neurons. What is more, the organization of the taste network in the brain would suggest that the parameters of space and time extend to the neural processing of gustatory information on a much grander scale. PMID:17903281

  6. Dynamic neural network models of the premotoneuronal circuitry controlling wrist movements in primates.

    PubMed

    Maier, M A; Shupe, L E; Fetz, E E

    2005-10-01

    Dynamic recurrent neural networks were derived to simulate neuronal populations generating bidirectional wrist movements in the monkey. The models incorporate anatomical connections of cortical and rubral neurons, muscle afferents, segmental interneurons and motoneurons; they also incorporate the response profiles of four populations of neurons observed in behaving monkeys. The networks were derived by gradient descent algorithms to generate the eight characteristic patterns of motor unit activations observed during alternating flexion-extension wrist movements. The resulting model generated the appropriate input-output transforms and developed connection strengths resembling those in physiological pathways. We found that this network could be further trained to simulate additional tasks, such as experimentally observed reflex responses to limb perturbations that stretched or shortened the active muscles, and scaling of response amplitudes in proportion to inputs. In the final comprehensive network, motor units are driven by the combined activity of cortical, rubral, spinal and afferent units during step tracking and perturbations. The model displayed many emergent properties corresponding to physiological characteristics. The resulting neural network provides a working model of premotoneuronal circuitry and elucidates the neural mechanisms controlling motoneuron activity. It also predicts several features to be experimentally tested, for example, the consequences of eliminating inhibitory connections in the cortex and red nucleus. Finally, it reveals that co-contraction can be achieved by simultaneous activation of the flexor and extensor circuits without invoking features specific to co-contraction.

  7. Multiscale Rotation-Invariant Convolutional Neural Networks for Lung Texture Classification.

    PubMed

    Wang, Qiangchang; Zheng, Yuanjie; Yang, Gongping; Jin, Weidong; Chen, Xinjian; Yin, Yilong

    2018-01-01

    We propose a new multiscale rotation-invariant convolutional neural network (MRCNN) model for classifying various lung tissue types on high-resolution computed tomography. MRCNN employs the Gabor local binary pattern, which introduces a property valuable in image analysis: invariance to image scale and rotation. In addition, we offer an approach to the class-imbalance problem that affects most existing work, accomplished by varying the overlap between adjacent patches. Experimental results on a public interstitial lung disease database show superior performance of the proposed method relative to the state of the art.
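    For background, the plain 8-neighbor local binary pattern that underlies Gabor-LBP descriptors compares each pixel with its eight neighbors and packs the comparisons into one byte. The sketch below shows only this basic operation, not the authors' MRCNN; a rotation-invariant variant would additionally take the minimum over circular bit rotations of each code.

```python
import numpy as np

def lbp_8(img):
    """Plain 8-neighbor local binary pattern over the interior pixels."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:-1, 1:-1]
    code = np.zeros(center.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # Set this bit when the neighbor is at least as bright as the center.
        code |= ((neigh >= center).astype(np.uint8) << bit).astype(np.uint8)
    return code

flat = np.full((4, 4), 7.0)
print(lbp_8(flat))  # ties count as 1, so every interior code is 255
```

Texture classifiers typically histogram these codes per patch rather than using them directly.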

  8. Interference Path Loss Prediction in A319/320 Airplanes Using Modulated Fuzzy Logic and Neural Networks

    NASA Technical Reports Server (NTRS)

    Jafri, Madiha J.; Ely, Jay J.; Vahala, Linda L.

    2007-01-01

    In this paper, neural network (NN) modeling is combined with fuzzy logic to estimate Interference Path Loss measurements on Airbus 319 and 320 airplanes. Interference patterns inside the aircraft are classified and predicted based on the locations of the doors, windows, aircraft structures and the communication/navigation system-of-concern. Modeled results are compared with measured data. Combining fuzzy logic and NN modeling is shown to improve estimates of measured data over estimates obtained with NN alone. A plan is proposed to enhance the modeling for better prediction of electromagnetic coupling problems inside aircraft.

  9. Connectomics and graph theory analyses: Novel insights into network abnormalities in epilepsy.

    PubMed

    Gleichgerrcht, Ezequiel; Kocher, Madison; Bonilha, Leonardo

    2015-11-01

    The assessment of neural networks in epilepsy has become increasingly relevant in the context of translational research, given that localized forms of epilepsy are more likely to be related to abnormal function within specific brain networks, as opposed to isolated focal brain pathology. It is notable that variability in clinical outcomes from epilepsy treatment may be a reflection of individual patterns of network abnormalities. As such, network endophenotypes may be important biomarkers for the diagnosis and treatment of epilepsy. Despite its exceptional potential, measuring abnormal networks in translational research has thus far been constrained by methodologic limitations. Fortunately, recent advancements in neuroscience, particularly in the field of connectomics, permit a detailed assessment of network organization, dynamics, and function at an individual level. Data from the personal connectome can be assessed using principled forms of network analyses based on graph theory, which may disclose patterns of organization that are prone to abnormal dynamics and epileptogenesis. Although the field of connectomics is relatively new, there is already a rapidly growing body of evidence to suggest that it can elucidate several important and fundamental aspects of abnormal networks in epilepsy. In this article, we provide a review of the emerging evidence from connectomics research regarding neural network architecture, dynamics, and function related to epilepsy. We discuss how connectomics may bring together pathophysiologic hypotheses from conceptual and basic models of epilepsy and in vivo biomarkers for clinical translational research. By providing neural network information unique to each individual, the field of connectomics may help to elucidate variability in clinical outcomes and open opportunities for personalized medicine approaches to epilepsy. 
Connectomics involves complex and rich data from each subject, thus collaborative efforts to enable the systematic and rigorous evaluation of this form of "big data" are paramount to leverage the full potential of this new approach. Wiley Periodicals, Inc. © 2015 International League Against Epilepsy.

  10. Using an Artificial Neural Bypass to Restore Cortical Control of Rhythmic Movements in a Human with Quadriplegia

    NASA Astrophysics Data System (ADS)

    Sharma, Gaurav; Friedenberg, David A.; Annetta, Nicholas; Glenn, Bradley; Bockbrader, Marcie; Majstorovic, Connor; Domas, Stephanie; Mysiw, W. Jerry; Rezai, Ali; Bouton, Chad

    2016-09-01

    Neuroprosthetic technology has been used to restore cortical control of discrete (non-rhythmic) hand movements in a paralyzed person. However, cortical control of rhythmic movements, which originate in the brain but are coordinated by Central Pattern Generator (CPG) neural networks in the spinal cord, has not been demonstrated previously. Here we demonstrate an artificial neural bypass technology that decodes cortical activity and emulates spinal cord CPG function, allowing volitional rhythmic hand movement. The technology uses a combination of signals recorded from the brain, machine-learning algorithms to decode the signals, a numerical model of a CPG network, and a neuromuscular electrical stimulation system to evoke rhythmic movements. Using the neural bypass, a quadriplegic participant was able to initiate, sustain, and switch between rhythmic and discrete finger movements using his thoughts alone. These results have implications for advancing neuroprosthetic technology to restore complex movements in people living with paralysis.

  11. Three-dimensional neural cultures produce networks that mimic native brain activity.

    PubMed

    Bourke, Justin L; Quigley, Anita F; Duchi, Serena; O'Connell, Cathal D; Crook, Jeremy M; Wallace, Gordon G; Cook, Mark J; Kapsa, Robert M I

    2018-02-01

    Development of brain function is critically dependent on neuronal networks organized through three dimensions. Culture of central nervous system neurons has traditionally been limited to two dimensions, restricting growth patterns and network formation to a single plane. Here, with the use of multichannel extracellular microelectrode arrays, we demonstrate that neurons cultured in a true three-dimensional environment recapitulate native neuronal network formation and produce functional outcomes more akin to in vivo neuronal network activity. Copyright © 2017 John Wiley & Sons, Ltd.

  12. Inference in the brain: Statistics flowing in redundant population codes

    PubMed Central

    Pitkow, Xaq; Angelaki, Dora E

    2017-01-01

    It is widely believed that the brain performs approximate probabilistic inference to estimate causal variables in the world from ambiguous sensory data. To understand these computations, we need to analyze how information is represented and transformed by the actions of nonlinear recurrent neural networks. We propose that these probabilistic computations function by a message-passing algorithm operating at the level of redundant neural populations. To explain this framework, we review its underlying concepts, including graphical models, sufficient statistics, and message-passing, and then describe how these concepts could be implemented by recurrently connected probabilistic population codes. The relevant information flow in these networks will be most interpretable at the population level, particularly for redundant neural codes. We therefore outline a general approach to identify the essential features of a neural message-passing algorithm. Finally, we argue that to reveal the most important aspects of these neural computations, we must study large-scale activity patterns during moderately complex, naturalistic behaviors. PMID:28595050

  13. Ion track based tunable device as humidity sensor: a neural network approach

    NASA Astrophysics Data System (ADS)

    Sharma, Mamta; Sharma, Anuradha; Bhattacherjee, Vandana

    2013-01-01

    Artificial Neural Networks (ANNs) have been applied in statistical model development, adaptive control systems, pattern recognition in data mining, and decision making under uncertainty. The nonlinear dependence of any sensor output on the input physical variable has motivated many researchers to attempt unconventional modeling techniques such as neural networks and other machine learning approaches. An ANN is a computational tool inspired by the network of neurons in the biological nervous system: a network consisting of arrays of artificial neurons linked together with different connection weights. The states of the neurons, as well as the weights of the connections among them, evolve according to certain learning rules. In the present work we focus on the category of sensors that respond to electrical property changes such as impedance or capacitance. Recently, sensor materials have been embedded in etched tracks, whose nanometric dimensions and high aspect ratio give a large surface area for exposure to the sensing material. Various materials can be used for this purpose to probe physical (light intensity, temperature, etc.), chemical (humidity, ammonia gas, alcohol, etc.) or biological (germs, hormones, etc.) parameters. The present work involves the application of TEMPOS structures as humidity sensors. The sample was prepared using a polymer electrolyte (PEO/NH4ClO4) with CdS nanoparticles dispersed in it. We have attempted to correlate the combined effects of voltage and frequency on the impedance of humidity sensors using a neural network model; the mean absolute error of the ANN model was 3.95% for the training data and 4.65% for the validation data. The corresponding values for the linear regression (LR) model were 8.28% and 8.35%, respectively. The percentage improvement of the ANN model with respect to the linear regression model was also demonstrated, confirming the suitability of neural networks for such modeling.

  14. Effects of low frequency rTMS treatment on brain networks for inner speech in patients with schizophrenia and auditory verbal hallucinations.

    PubMed

    Bais, Leonie; Liemburg, Edith; Vercammen, Ans; Bruggeman, Richard; Knegtering, Henderikus; Aleman, André

    2017-08-01

    Efficacy of repetitive Transcranial Magnetic Stimulation (rTMS) targeting the temporo-parietal junction (TPJ) for the treatment of auditory verbal hallucinations (AVH) remains under debate. We assessed the influence of a 1 Hz rTMS treatment on neural networks involved in a cognitive mechanism proposed to subserve AVH. Patients with schizophrenia (N = 24) experiencing medication-resistant AVH completed a 10-day 1 Hz rTMS treatment. Participants were randomized to active stimulation of the left or bilateral TPJ, or sham stimulation. The effects of rTMS on neural networks were investigated with an inner speech task during fMRI. Changes within and between neural networks were analyzed using Independent Component Analysis. rTMS of the left and bilateral TPJ areas resulted in a weaker network contribution of the left supramarginal gyrus to the bilateral fronto-temporal network. Left-sided rTMS resulted in stronger network contributions of the right superior temporal gyrus to the auditory-sensorimotor network, the right inferior gyrus to the left fronto-parietal network, and the left middle frontal gyrus to the default mode network. Bilateral rTMS was associated with a predominantly inhibitory effect on network contribution. Sham stimulation showed different patterns of change compared with active rTMS. rTMS of the left temporo-parietal region decreased the contribution of the left supramarginal gyrus to the bilateral fronto-temporal network, which may reduce the likelihood of speech intrusions. On the other hand, left rTMS appeared to increase the contribution of functionally connected regions involved in perception, cognitive control and self-referential processing. These findings hint at potential neural mechanisms underlying rTMS for hallucinations but need corroboration in larger samples. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Recursive least-squares learning algorithms for neural networks

    NASA Astrophysics Data System (ADS)

    Lewis, Paul S.; Hwang, Jenq N.

    1990-11-01

    This paper presents the development of a pair of recursive least-squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second-order information about the training error surface in order to achieve faster learning rates than are possible using first-order gradient descent algorithms such as the generalized delta rule. A least-squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least-squares approximation, either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is O(N²), where N is the number of network parameters; this is due to the estimation of the N × N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can easily be derived by using only the block-diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two-dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6…).
BACKGROUND: Artificial neural networks (ANNs) offer an interesting and potentially useful paradigm for signal processing and pattern recognition. The majority of ANN applications employ the feed-forward multilayer perceptron (MLP) network architecture, in which network parameters are "trained" by a supervised learning algorithm employing the generalized delta rule (GDR) [1, 2]. The GDR algorithm approximates a fixed-step steepest descent algorithm using derivatives computed by error backpropagation. The GDR algorithm is sometimes referred to as the backpropagation algorithm; however, in this paper we will use the term backpropagation to refer only to the process of computing error derivatives. While multilayer perceptrons provide a very powerful nonlinear modeling capability, GDR training can be very slow and inefficient. In linear adaptive filtering, the analog of the GDR algorithm is the least-mean-squares (LMS) algorithm. Steepest descent-based algorithms such as GDR or LMS are first order because they use only first-derivative, or gradient, information about the training error to be minimized. To speed up the training process, second-order algorithms may be employed that take advantage of second-derivative, or Hessian matrix, information. Second-order information can be incorporated into MLP training in different ways. In many applications, especially in the area of pattern recognition, the training set is finite. In these cases, block learning can be applied using standard nonlinear optimization techniques [3, 4, 5].
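    The per-pattern RLS recursion is easiest to see in its linear adaptive-filtering form, the analog the paper itself draws on; the MLP version applies the same recursion to errors linearized about the current network parameters. In this sketch the model is linear, and the dimension, forgetting factor and noise level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Classic RLS on a linear-in-parameters model d = w_true @ x + noise.
# Each training pattern updates both the weight vector and the N x N
# inverse (pseudo-)Hessian P, which is where the O(N^2) per-update
# cost noted in the abstract comes from.
n = 4
w_true = rng.normal(size=n)
w = np.zeros(n)
P = 1e3 * np.eye(n)     # large initial P: a weak prior on the weights
lam = 1.0               # forgetting factor; 1.0 recovers ordinary LS

for _ in range(200):
    x = rng.normal(size=n)
    d = w_true @ x + 0.01 * rng.normal()   # desired output for this pattern
    k = P @ x / (lam + x @ P @ x)          # gain vector
    w = w + k * (d - w @ x)                # error-driven weight update
    P = (P - np.outer(k, x @ P)) / lam     # inverse-Hessian recursion

print(np.max(np.abs(w - w_true)) < 0.05)   # converges close to w_true
```

Restricting P to block-diagonal form, as the abstract suggests, would reduce the per-update cost at the price of ignoring cross-block curvature.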

  16. Comparison between sparsely distributed memory and Hopfield-type neural network models

    NASA Technical Reports Server (NTRS)

    Keeler, James D.

    1986-01-01

    The Sparsely Distributed Memory (SDM) model (Kanerva, 1984) is compared to Hopfield-type neural-network models. A mathematical framework for comparing the two is developed, and the capacity of each model is investigated. The capacity of the SDM can be increased independently of the dimension of the stored vectors, whereas the Hopfield capacity is limited to a fraction of this dimension. However, the total number of stored bits per matrix element is the same in the two models, as it is for extended models with higher-order interactions. The models are also compared in their ability to store sequences of patterns. The SDM is extended to include time delays so that contextual information can be used to recover sequences. Finally, it is shown how a generalization of the SDM allows storage of correlated input pattern vectors.
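    The Hopfield side of this capacity comparison is simple to demonstrate: with Hebbian outer-product storage, recall is reliable only while the number of stored patterns remains a small fraction (roughly 0.14) of the pattern dimension. A minimal sketch, with sizes chosen well inside that limit:

```python
import numpy as np

rng = np.random.default_rng(3)

n, m = 100, 5                         # 5 patterns, well under ~0.14 * 100
patterns = rng.choice([-1, 1], size=(m, n))

# Hebbian outer-product storage; zero the diagonal (no self-connections).
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# Corrupt a stored pattern in 10 positions and let the dynamics clean it up.
noisy = patterns[0].copy()
noisy[rng.choice(n, size=10, replace=False)] *= -1
overlap = (recall(noisy) == patterns[0]).mean()
print(overlap)
```

Pushing m toward and past ~0.14 n makes crosstalk between stored patterns overwhelm the signal and recall degrades sharply, which is the fractional-capacity limit the abstract contrasts with the SDM.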

  17. Machine vision inspection of lace using a neural network

    NASA Astrophysics Data System (ADS)

    Sanby, Christopher; Norton-Wayne, Leonard

    1995-03-01

    Lace is particularly difficult to inspect using machine vision, since it comprises a fine and complex pattern of threads which must be verified on line and in real time, and small distortions in the pattern are unavoidable. This paper describes instrumentation for inspecting lace actually on the knitting machine. A CCD linescan camera synchronized to machine motions grabs an image of the lace. Differences between this lace image and a perfect prototype image are detected by comparison methods, thresholding techniques and, finally, a neural network (to distinguish real defects from false alarms). Though developed originally in a laboratory on Sun SPARC workstations, the processing has subsequently been implemented on a 50 MHz 486 PC compatible. Successful operation has been demonstrated in a factory, but over a restricted width; full-width coverage awaits faster processing.

  18. The characteristic patterns of neuronal avalanches in mice under anesthesia and at rest: An investigation using constrained artificial neural networks

    PubMed Central

    Knöpfel, Thomas; Leech, Robert

    2018-01-01

    Local perturbations within complex dynamical systems can trigger cascade-like events that spread across significant portions of the system. Cascades of this type have been observed across a broad range of scales in the brain. Studies of these cascades, known as neuronal avalanches, usually report the statistics of large numbers of avalanches, without probing the characteristic patterns produced by the avalanches themselves. This is partly due to limitations in the extent or spatiotemporal resolution of commonly used neuroimaging techniques. In this study, we overcome these limitations by using optical voltage imaging with genetically encoded voltage indicators. This allows us to record cortical activity in vivo across an entire cortical hemisphere, at both high spatial (~30 µm) and temporal (~20 ms) resolution, in mice that are either anesthetized or awake. We then use artificial neural networks to identify the characteristic patterns created by neuronal avalanches in our data. The avalanches in the anesthetized cortex are most accurately classified by an artificial neural network architecture that simultaneously connects spatial and temporal information. This is in contrast with the awake cortex, in which avalanches are most accurately classified by an architecture that treats spatial and temporal information separately, due to the increased levels of spatiotemporal complexity. This is in keeping with reports of higher levels of spatiotemporal complexity in the awake brain coinciding with features of a dynamical system operating close to criticality. PMID:29795654
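    Before any classification, avalanches must first be extracted from the recording. The standard convention (assumed here, not spelled out in the abstract) is to bin activity in time and define an avalanche as a run of consecutive non-empty bins, its size being the total event count in the run. A minimal sketch on synthetic event counts:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic event counts per time bin; real data would be binarized
    # suprathreshold activity from the optical-voltage frames.
    events = rng.poisson(0.7, size=200)

    def avalanche_sizes(counts):
        """Sizes of avalanches: runs of consecutive non-empty bins,
        each terminated by an empty bin."""
        sizes, current = [], 0
        for c in counts:
            if c > 0:
                current += c           # extend the ongoing avalanche
            elif current > 0:
                sizes.append(current)  # an empty bin terminates it
                current = 0
        if current > 0:
            sizes.append(current)      # avalanche running at end of record
        return sizes

    sizes = avalanche_sizes(events)
    ```

    Near criticality, the distribution of these sizes is typically heavy-tailed; the paper's contribution is to go beyond such summary statistics to the avalanche patterns themselves.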

  19. Quasi-periodic patterns (QPP): large-scale dynamics in resting state fMRI that correlate with local infraslow electrical activity.

    PubMed

    Thompson, Garth John; Pan, Wen-Ju; Magnuson, Matthew Evan; Jaeger, Dieter; Keilholz, Shella Dawn

    2014-01-01

    Functional connectivity measurements from resting state blood-oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) are proving a powerful tool to probe both normal brain function and neuropsychiatric disorders. However, the neural mechanisms that coordinate these large networks are poorly understood, particularly in the context of the growing interest in network dynamics. Recent work in anesthetized rats has shown that the spontaneous BOLD fluctuations are tightly linked to infraslow local field potentials (LFPs) that are seldom recorded but comparable in frequency to the slow BOLD fluctuations. These findings support the hypothesis that long-range coordination involves low frequency neural oscillations and establishes infraslow LFPs as an excellent candidate for probing the neural underpinnings of the BOLD spatiotemporal patterns observed in both rats and humans. To further examine the link between large-scale network dynamics and infraslow LFPs, simultaneous fMRI and microelectrode recording were performed in anesthetized rats. Using an optimized filter to isolate shared components of the signals, we found that time-lagged correlation between infraslow LFPs and BOLD is comparable in spatial extent and timing to a quasi-periodic pattern (QPP) found from BOLD alone, suggesting that fMRI-measured QPPs and the infraslow LFPs share a common mechanism. As fMRI allows spatial resolution and whole brain coverage not available with electroencephalography, QPPs can be used to better understand the role of infraslow oscillations in normal brain function and neurological or psychiatric disorders. © 2013.
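    The core operation behind QPP detection, sliding a candidate template along the scan and computing its correlation at every lag, can be sketched on a synthetic 1-D signal. This is a simplified one-pass illustration, not the published iterative algorithm, and the signal, template, and thresholds are assumptions for the toy example:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic signal: a short template recurring quasi-periodically in noise
    # (a stand-in for a repeating spatiotemporal BOLD pattern).
    template = np.sin(np.linspace(0, 2 * np.pi, 20))
    signal = rng.normal(0, 0.3, size=300)
    for start in (40, 120, 220):
        signal[start:start + 20] += template

    def sliding_correlation(sig, templ):
        """Pearson correlation of the template with every window of the signal."""
        w = len(templ)
        out = np.empty(len(sig) - w + 1)
        for i in range(len(out)):
            out[i] = np.corrcoef(sig[i:i + w], templ)[0, 1]
        return out

    corr = sliding_correlation(signal, template)
    occurrences = np.where(corr > 0.7)[0]   # lags where the pattern recurs
    ```

    Peaks in the correlation time course mark recurrences of the pattern; in the paper, the analogous time-lagged correlation between infraslow LFPs and BOLD matched the QPP found from BOLD alone.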

Top