Sample records for neural network structures

  1. Resolution of Singularities Introduced by Hierarchical Structure in Deep Neural Networks.

    PubMed

    Nitta, Tohru

    2017-10-01

    We present a theoretical analysis of singular points of artificial deep neural networks, leading to deep neural network models that have no critical points introduced by a hierarchical structure. Such models are considered well suited to gradient-based optimization. First, we show that deep neural networks contain a large number of critical points introduced by a hierarchical structure, arranged along straight lines, whose number depends on the number of hidden layers and hidden neurons. Second, we derive a sufficient condition for deep neural networks to have no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. It is also shown that, for a specific class of deep neural networks, the existence of such critical points is determined by the rank and the regularity of the weight matrices. Finally, two methods of implementing the sufficient condition are provided. One is a learning algorithm that avoids critical points introduced by the hierarchical structure during learning (called an avoidant learning algorithm). The other is a neural network that, as an inherent property, lacks some of these critical points (called an avoidant neural network).
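
    As a rough illustration of the rank condition mentioned above, the following sketch (an assumption-laden simplification, not the paper's construction) builds random weight matrices for a small deep network and checks whether any of them is rank deficient, the kind of degeneracy associated with critical points introduced by the hierarchy.

      import numpy as np

      def hidden_weight_ranks(layer_sizes, rng):
          """Draw random weight matrices for consecutive layers and report their ranks."""
          report = []
          for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
              W = rng.standard_normal((n_out, n_in))
              # Force one matrix to be rank deficient to show how the check reacts.
              if n_out == layer_sizes[-2]:
                  W[-1] = W[0]            # duplicate a row -> rank drops by one
              r = np.linalg.matrix_rank(W)
              report.append((W.shape, r, r == min(W.shape)))
          return report

      rng = np.random.default_rng(0)
      for shape, rank, full in hidden_weight_ranks([8, 6, 6, 4, 1], rng):
          print(f"W {shape}: rank={rank}, full rank={full}")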

  2. Control of magnetic bearing systems via the Chebyshev polynomial-based unified model (CPBUM) neural network.

    PubMed

    Jeng, J T; Lee, T T

    2000-01-01

    A Chebyshev polynomial-based unified model (CPBUM) neural network is introduced and applied to the control of magnetic bearing systems. First, we show that the CPBUM neural network not only has the same universal approximation capability as conventional feedforward/recurrent neural networks but also learns faster. It follows that the CPBUM neural network is more suitable for controller design than the conventional feedforward/recurrent neural network. Second, we propose an inverse system method, based on CPBUM neural networks, to control a magnetic bearing system. The proposed controller has two structures, namely off-line and on-line learning structures, and a new learning algorithm is derived for each. The experimental results show that the proposed neural network architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
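
    The Chebyshev expansion idea can be sketched as a functional-link model: expand the input through Chebyshev polynomials and fit a linear output layer. This is a hypothetical simplification of the CPBUM network (single input, least-squares training instead of the paper's learning algorithms, toy data standing in for magnetic-bearing measurements).

      import numpy as np
      from numpy.polynomial import chebyshev as C

      # Toy plant response to approximate (stands in for magnetic-bearing data).
      x = np.linspace(-1.0, 1.0, 200)
      y = np.sin(3.0 * x) + 0.1 * x**2

      degree = 8
      Phi = C.chebvander(x, degree)                    # Chebyshev basis T_0..T_8 evaluated at x
      w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # linear output layer weights
      y_hat = Phi @ w

      print("max abs error:", np.max(np.abs(y - y_hat)))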

  3. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule.

    PubMed

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin

    2015-11-01

    In this paper, the generation of a multi-clustered structure in self-organized neural networks with different neuronal firing patterns, i.e., bursting or spiking, is investigated. An initially all-to-all-connected spiking or bursting neural network can self-organize into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the clustering procedure of the burst-based self-organized neural network (BSON) takes much less time than that of the spike-based self-organized neural network (SSON). Our results show that the BSON network has more pronounced small-world properties, i.e., a higher clustering coefficient and a shorter characteristic path length, than the SSON network. The larger structure entropy and activity entropy of the BSON network also demonstrate that it has higher topological complexity and dynamical diversity, which benefits information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and that the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. We therefore believe that a multi-clustered neural network self-organized from bursting dynamics has high efficiency in information processing.
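
    A minimal sketch of a symmetric STDP rule of the kind the abstract relies on, assuming a Gaussian-shaped plasticity window and pre-recorded spike times; the block structure that emerges in the weight matrix from correlated firing stands in for the self-organized clustering studied in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 20                                   # neurons, two hidden "assemblies"
      group = np.repeat([0, 1], n // 2)

      # Synthetic spike times: neurons in the same assembly fire close together.
      spikes = 50.0 * group + 5.0 * rng.standard_normal(n)   # ms

      def symmetric_stdp(dt, A=0.01, tau=10.0):
          """Symmetric (sign-free) STDP: potentiation depends only on |dt|."""
          return A * np.exp(-(dt / tau) ** 2)

      W = np.full((n, n), 0.5) - 0.5 * np.eye(n)              # all-to-all, no self-loops
      for i in range(n):
          for j in range(n):
              if i != j:
                  # small negative drift prunes weakly correlated pairs
                  W[i, j] += symmetric_stdp(spikes[i] - spikes[j]) - 0.005
      W = np.clip(W, 0.0, 1.0)

      within = W[np.ix_(group == 0, group == 0)].mean()
      across = W[np.ix_(group == 0, group == 1)].mean()
      print(f"mean weight within assembly: {within:.3f}, across assemblies: {across:.3f}")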

  5. Structural reliability calculation method based on the dual neural network and direct integration method.

    PubMed

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty receives wide attention from engineers and researchers because it reflects structural characteristics and actual loading conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but evaluating the required multiple integrals remains mathematically difficult. Therefore, a dual neural network method for calculating multiple integrals is proposed in this paper. The dual neural network consists of two neural networks: network A learns the integrand, while network B simulates the original (antiderivative) function. Using the derivative relationship between the network output and the network input, network B is derived from network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons of the proposed method with the Monte Carlo simulation method, the Hasofer-Lind method, and the mean-value first-order second-moment method demonstrate that it is an efficient and accurate method for structural reliability problems.
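
    The dual-network idea (network B acting as an antiderivative of network A) can be sketched with a one-hidden-layer tanh network, for which a closed-form antiderivative exists. This is a simplified stand-in for the paper's construction, with random fixed hidden weights, least-squares output training, and a Gaussian density as the toy integrand.

      import numpy as np

      rng = np.random.default_rng(2)

      # Integrand to learn (stand-in for a normalized performance function).
      f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

      # Network A: fixed random tanh hidden layer, linear output trained by least squares.
      H = 40
      w, b = rng.standard_normal(H), rng.standard_normal(H)
      x = np.linspace(-5, 5, 400)
      Phi = np.tanh(np.outer(x, w) + b)
      c, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)

      def A(x):                      # approximates the integrand
          return np.tanh(np.outer(np.atleast_1d(x), w) + b) @ c

      def B(x):                      # antiderivative of A, since d/dx log(cosh(wx+b))/w = tanh(wx+b)
          return (np.log(np.cosh(np.outer(np.atleast_1d(x), w) + b)) / w) @ c

      print("fit error of A:", np.max(np.abs(A(x) - f(x))))
      # Definite integral of f over [-3, 3] evaluated through the "dual" network B.
      approx = (B(3.0) - B(-3.0))[0]
      print("NN estimate:", approx, " reference:", 0.9973)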

  6. The Energy Coding of a Structural Neural Network Based on the Hodgkin-Huxley Model.

    PubMed

    Zhu, Zhenyu; Wang, Rubin; Zhu, Fengyun

    2018-01-01

    Based on the Hodgkin-Huxley model, the present study established a fully connected structural neural network to simulate the neural activity and energy consumption of the network using neural energy coding theory. The numerical simulation results showed that the periodicity of the network energy distribution was positively correlated with the number of neurons and the coupling strength, but negatively correlated with the signal transmission delay. Moreover, a relationship was established between the energy distribution features and the synchronous oscillation of the neural network: when the proportion of negative energy in the power consumption curve was high, the synchronous oscillation of the neural network was apparent. In addition, comparison with the simulation results of a structural neural network based on the Wang-Zhang biophysical neuron model showed that the two models were essentially consistent.
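
    A single Hodgkin-Huxley neuron can be simulated in a few dozen lines; the sketch below uses the standard textbook parameters and a crude energy proxy (the time integral of ionic power, in arbitrary units), which is only loosely related to the energy coding measure used in the paper and is shown purely for orientation.

      import numpy as np

      # Standard Hodgkin-Huxley parameters (membrane potential in mV, time in ms).
      C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
      ENa, EK, EL = 50.0, -77.0, -54.387

      a_n = lambda V: 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
      b_n = lambda V: 0.125 * np.exp(-(V + 65) / 80)
      a_m = lambda V: 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
      b_m = lambda V: 4.0 * np.exp(-(V + 65) / 18)
      a_h = lambda V: 0.07 * np.exp(-(V + 65) / 20)
      b_h = lambda V: 1.0 / (1 + np.exp(-(V + 35) / 10))

      dt, T, I_ext = 0.01, 100.0, 10.0          # ms, ms, uA/cm^2
      V, n, m, h = -65.0, 0.317, 0.053, 0.596   # near-resting initial values
      energy = 0.0
      for _ in range(int(T / dt)):
          INa = gNa * m**3 * h * (V - ENa)
          IK = gK * n**4 * (V - EK)
          IL = gL * (V - EL)
          # Crude energy proxy: power dissipated by the ionic currents (arbitrary units).
          energy += dt * (INa * (V - ENa) + IK * (V - EK) + IL * (V - EL)) * 1e-3
          n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
          m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
          h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
          V += dt * (I_ext - INa - IK - IL) / C

      print(f"final V = {V:.1f} mV, accumulated ionic energy proxy = {energy:.2f}")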

  7. Structure-function clustering in multiplex brain networks

    NASA Astrophysics Data System (ADS)

    Crofts, J. J.; Forrester, M.; O'Dea, R. D.

    2016-10-01

    A key question in neuroscience is to understand how a rich functional repertoire of brain activity arises within relatively static networks of structurally connected neural populations: elucidating the subtle interactions between evoked “functional connectivity” and the underlying “structural connectivity” has the potential to address this. These structural-functional networks (and neural networks more generally) are more naturally described using a multilayer or multiplex network approach than by the standard single-layer network analyses more typically applied to such systems. In this letter, we address such issues by exploring important structure-function relations in the Macaque cortical network by modelling it as a duplex network that comprises an anatomical layer, describing the known (macro-scale) network topology of the Macaque monkey, and a functional layer derived from simulated neural activity. We investigate and characterize correlations between structural and functional layers, as system parameters controlling simulated neural activity are varied, by employing recently described multiplex network measures. Moreover, we propose a novel measure of multiplex structure-function clustering which allows us to investigate the emergence of functional connections that are distinct from the underlying cortical structure, and to highlight the dependence of multiplex structure on the neural dynamical regime.
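
    One simple way to quantify structure-function clustering in a two-layer (duplex) network, broadly in the spirit of the measures discussed above, is to count triangles whose edges mix the anatomical and functional layers; the definition below is a hypothetical simplification, not the authors' exact measure, and the data are synthetic.

      import numpy as np

      rng = np.random.default_rng(3)
      N = 30

      # Structural layer: sparse symmetric adjacency (stand-in for anatomical connectivity).
      S = (rng.random((N, N)) < 0.15).astype(float)
      S = np.triu(S, 1); S = S + S.T

      # Functional layer: thresholded correlations of toy activity partly driven by S.
      activity = rng.standard_normal((N, 500)) + 0.8 * (S @ rng.standard_normal((N, 500)))
      Fc = np.corrcoef(activity)
      F = (np.abs(Fc) > 0.3).astype(float); np.fill_diagonal(F, 0.0)

      # Mixed-layer clustering: closed S-F-S triangles relative to open S-S paths.
      tri_mixed = np.trace(S @ F @ S)
      open_paths = (S @ S).sum() - np.trace(S @ S)
      coeff = tri_mixed / open_paths if open_paths else 0.0
      print(f"duplex structure-function clustering (toy measure): {coeff:.3f}")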

  8. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    PubMed

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

    In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss augmented inference and the backpropagation calculates the gradient from the loss augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Kohonen and counterpropagation neural networks applied for mapping and interpretation of IR spectra.

    PubMed

    Novic, Marjana

    2008-01-01

    The principles of learning strategy of Kohonen and counterpropagation neural networks are introduced. The advantages of unsupervised learning are discussed. The self-organizing maps produced in both methods are suitable for a wide range of applications. Here, we present an example of Kohonen and counterpropagation neural networks used for mapping, interpretation, and simulation of infrared (IR) spectra. The artificial neural network models were trained for prediction of structural fragments of an unknown compound from its infrared spectrum. The training set contained over 3,200 IR spectra of diverse compounds of known chemical structure. The structure-spectra relationship was encompassed by the counterpropagation neural network, which assigned structural fragments to individual compounds within certain probability limits, assessed from the predictions of test compounds. The counterpropagation neural network model for prediction of fragments of chemical structure is reversible, which means that, for a given structural domain, limited to the training data set in the study, it can be used to simulate the IR spectrum of a chemical defined with a set of structural fragments.
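
    A counterpropagation network can be sketched as a Kohonen layer trained on the spectra plus a Grossberg output layer that averages the structural-fragment labels of the spectra mapped to each neuron. Everything below (data, map size, learning schedule, omitted neighbourhood function) is synthetic and illustrative, not the model trained on the 3,200-spectrum set described above.

      import numpy as np

      rng = np.random.default_rng(4)
      n_spectra, n_channels, n_fragments = 300, 64, 5
      X = rng.random((n_spectra, n_channels))                         # toy "IR spectra"
      Y = (rng.random((n_spectra, n_fragments)) < 0.3).astype(float)  # fragment present/absent

      side = 6                                                # 6 x 6 Kohonen map
      W = rng.random((side * side, n_channels))               # Kohonen (input) weights
      G = np.zeros((side * side, n_fragments))                # Grossberg (output) weights
      counts = np.zeros(side * side)

      for epoch in range(20):
          lr = 0.5 * (1 - epoch / 20)
          for x, y in zip(X, Y):
              bmu = np.argmin(np.sum((W - x) ** 2, axis=1))   # best matching unit
              W[bmu] += lr * (x - W[bmu])                     # Kohonen update (neighbourhood omitted)
              counts[bmu] += 1
              G[bmu] += (y - G[bmu]) / counts[bmu]            # running mean of fragment labels

      # Predict fragment probabilities for a new spectrum.
      x_new = rng.random(n_channels)
      pred = G[np.argmin(np.sum((W - x_new) ** 2, axis=1))]
      print("predicted fragment probabilities:", np.round(pred, 2))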

  10. Geometrical structure of Neural Networks: Geodesics, Jeffrey's Prior and Hyper-ribbons

    NASA Astrophysics Data System (ADS)

    Hayden, Lorien; Alemi, Alex; Sethna, James

    2014-03-01

    Neural networks are learning algorithms which are employed in a host of Machine Learning problems including speech recognition, object classification and data mining. In practice, neural networks learn a low dimensional representation of high dimensional data and define a model manifold which is an embedding of this low dimensional structure in the higher dimensional space. In this work, we explore the geometrical structure of a neural network model manifold. A Stacked Denoising Autoencoder and a Deep Belief Network are trained on handwritten digits from the MNIST database. Construction of geodesics along the surface and of slices taken from the high dimensional manifolds reveals a hierarchy of widths corresponding to a hyper-ribbon structure. This property indicates that neural networks fall into the class of sloppy models, in which certain parameter combinations dominate the behavior. Employing this information could prove valuable in designing both neural network architectures and training algorithms. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144153.

  11. Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene.

    PubMed

    Li, Jun; Mei, Xue; Prokhorov, Danil; Tao, Dacheng

    2017-03-01

    Hierarchical neural networks have been shown to be effective in learning representative image features and recognizing object classes. However, most existing networks combine the low/middle level cues for classification without accounting for any spatial structures. For applications such as understanding a scene, how the visual cues are spatially distributed in an image becomes essential for successful analysis. This paper extends the framework of deep neural networks by accounting for the structural cues in the visual signals. In particular, two kinds of neural networks have been proposed. First, we develop a multitask deep convolutional network, which simultaneously detects the presence of the target and the geometric attributes (location and orientation) of the target with respect to the region of interest. Second, a recurrent neuron layer is adopted for structured visual detection. The recurrent neurons can deal with the spatial distribution of visible cues belonging to an object whose shape or structure is difficult to explicitly define. Both the networks are demonstrated by the practical task of detecting lane boundaries in traffic scenes. The multitask convolutional neural network provides auxiliary geometric information to help the subsequent modeling of the given lane structures. The recurrent neural network automatically detects lane boundaries, including those areas containing no marks, without any explicit prior knowledge or secondary modeling.

  12. The effect of the neural activity on topological properties of growing neural networks.

    PubMed

    Gafarov, F M; Gafarova, V R

    2016-09-01

    The connectivity structure in cortical networks defines how information is transmitted and processed, and it is a source of the complex spatiotemporal patterns of network development; the creation and deletion of connections continue throughout the life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrate the growth of a neural network from disconnected neurons to a fully connected network. To quantify the influence of network activity on topological properties, we compare it with a random growth network that does not depend on activity. Using methods from random graph theory to analyze the connection structure, we show that growth in neural networks results in the formation of a well-known "small-world" network.
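
    Whether a grown network is "small-world" is usually checked by comparing its clustering coefficient and characteristic path length against a random graph of the same size and density. A sketch using networkx follows, with a Watts-Strogatz graph standing in for the activity-dependent grown network (an assumption for illustration only).

      import networkx as nx

      # Stand-in for the activity-dependent grown network.
      grown = nx.watts_strogatz_graph(n=200, k=8, p=0.1, seed=0)
      # Density-matched random reference network.
      random_ref = nx.gnm_random_graph(n=200, m=grown.number_of_edges(), seed=0)

      def small_world_stats(g):
          giant = g.subgraph(max(nx.connected_components(g), key=len))
          return nx.average_clustering(giant), nx.average_shortest_path_length(giant)

      c_g, l_g = small_world_stats(grown)
      c_r, l_r = small_world_stats(random_ref)
      sigma = (c_g / c_r) / (l_g / l_r)      # > 1 suggests small-world organisation
      print(f"clustering {c_g:.3f} vs {c_r:.3f}, path length {l_g:.2f} vs {l_r:.2f}, sigma={sigma:.2f}")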

  13. Neural-like growing networks

    NASA Astrophysics Data System (ADS)

    Yashchenko, Vitaliy A.

    2000-03-01

    Based on an analysis of scientific ideas about the structure and functioning of biological brain structures, together with the analysis and synthesis of knowledge developed in various branches of computer science, the foundations of a theory of a new class of neural-like growing networks, with no direct analogue in current practice, were developed. Neural-like growing networks combine the knowledge developed by two classical theories: semantic networks and neural networks. The former makes it possible to represent meaning as objects and the connections between them, in accordance with the construction of the network; each meaning is assigned a separate component of the network, a vertex connected to other vertices. This broadly corresponds to the structure assumed for the brain, where each explicit concept is represented by a particular structure and has a designating symbol. Second, such a network gains semantic clarity because not only the connections between neural elements but also the elements themselves are formed; that is, the network is not merely built by placing semantic structures in an environment of neural elements, but by creating that environment itself as an equivalent of a memory medium. Neural-like growing networks are therefore a convenient apparatus for modeling mechanisms of goal-directed (teleological) thinking as the fulfillment of certain psychophysiological functions.

  14. Modular, Hierarchical Learning By Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    Modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks more structured than neural networks in which all neurons fully interconnected. These networks utilize general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, sparsity of modular units and connections, and fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning streamlined by imitating some aspects of biological neural networks.

  15. Nested Neural Networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1992-01-01

    Report presents analysis of nested neural networks, consisting of interconnected subnetworks. Analysis based on simplified mathematical models more appropriate for artificial electronic neural networks, partly applicable to biological neural networks. Nested structure allows for retrieval of individual subpatterns. Requires fewer wires and connection devices than fully connected networks, and allows for local reconstruction of damaged subnetworks without rewiring entire network.

  16. The neural network to determine the mechanical properties of the steels

    NASA Astrophysics Data System (ADS)

    Yemelyanov, Vitaliy; Yemelyanova, Nataliya; Safonova, Marina; Nedelkin, Aleksey

    2018-04-01

    The authors describe the neural network structure and software designed and developed to determine the mechanical properties of steels. The neural network is developed to refine the values of the steel properties. Results of simulations of the developed neural network are shown, and the authors note its low standard error. Specialized software has been developed to implement the proposed neural network.

  17. The Laplacian spectrum of neural networks

    PubMed Central

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
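
    The normalized Laplacian spectrum used in this analysis is straightforward to compute from an adjacency matrix; the sketch below uses a random modular network as a stand-in for the anatomical connectomes examined in the paper.

      import numpy as np

      rng = np.random.default_rng(5)
      N, blocks = 60, 3
      labels = np.repeat(np.arange(blocks), N // blocks)

      # Modular toy network: dense within modules, sparse between (stand-in for a connectome).
      p = np.where(labels[:, None] == labels[None, :], 0.4, 0.05)
      A = (rng.random((N, N)) < p).astype(float)
      A = np.triu(A, 1); A = A + A.T

      deg = A.sum(axis=1)
      D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
      L_norm = np.eye(N) - D_inv_sqrt @ A @ D_inv_sqrt

      eigvals = np.linalg.eigvalsh(L_norm)        # spectrum lies in [0, 2]
      print("smallest eigenvalues:", np.round(eigvals[:5], 3))
      print("eigenvalues below 0.3 (rough community indicator):", int(np.sum(eigvals < 0.3)))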

  18. Establishing a Dynamic Self-Adaptation Learning Algorithm of the BP Neural Network and Its Applications

    NASA Astrophysics Data System (ADS)

    Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min

    2015-12-01

    In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence, a tendency to become trapped in local minima, poor generalization ability, and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward to improve its performance. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis, and a self-adaptive model, and hence can effectively solve the problems of selecting the structural parameters, initial connection weights and thresholds, and learning rates of the BP neural network. The new algorithm not only reduces human intervention, optimizes the topological structure of BP neural networks, and improves network generalization ability, but also accelerates convergence, avoids trapping in local minima, and enhances network adaptation and prediction ability. The dynamic self-adaptive learning algorithm of the BP neural network is used to forecast the total retail sales of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP network algorithm in prediction accuracy and time consumption, which shows the feasibility and effectiveness of the new algorithm.

  19. Neural net target-tracking system using structured laser patterns

    NASA Astrophysics Data System (ADS)

    Cho, Jae-Wan; Lee, Yong-Bum; Lee, Nam-Ho; Park, Soon-Yong; Lee, Jongmin; Choi, Gapchu; Baek, Sunghyun; Park, Dong-Sun

    1996-06-01

    In this paper, we describe a robot end-effector tracking system using sensory information from recently announced structured-pattern laser diodes, which can generate images with several different types of structured pattern. A neural network approach is employed to recognize the robot end-effector under three types of motion: translation, scaling, and rotation. Features for detecting the position of the end-effector are extracted from the preprocessed images. Artificial neural networks are used to store models and to match unknown input features when recognizing the position of the robot end-effector. Since a minimal number of samples are used for the different orientations of the end-effector, a neural network with generalization capability can be utilized for unknown input features. A feedforward neural network trained with back-propagation learning is used to detect the position of the robot end-effector. Another feedforward neural network module is used to estimate the motion from a sequence of images and to control the movements of the end-effector. Combining the two neural networks for recognizing the end-effector and estimating the motion with the preprocessing stage, the whole system tracks the robot end-effector effectively.

  20. Selection of neural network structure for system error correction of electro-optical tracker system with horizontal gimbal

    NASA Astrophysics Data System (ADS)

    Liu, Xing-fa; Cen, Ming

    2007-12-01

    The neural network system error correction method is more precise than the least-squares and spherical harmonics system error correction methods. Its accuracy depends mainly on the structure of the neural network. Analysis and simulation show that both the BP and the RBF neural network system error correction methods achieve high correction accuracy; for small training sample sets, and considering training speed and network scale, the RBF network method is preferable to the BP network method.
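
    A minimal sketch of RBF-based pointing error correction: fit Gaussian radial basis functions to (azimuth, elevation) -> error samples and evaluate the correction on new pointings. The network size, basis width, and the synthetic systematic-error model are assumptions for illustration, not the tracker data used in the paper.

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic systematic pointing error of an electro-optical tracker (arcsec).
      def true_error(az, el):
          return 20 * np.sin(az) * np.cos(el) + 10 * np.cos(2 * az)

      az = rng.uniform(0, 2 * np.pi, 150)
      el = rng.uniform(0.1, np.pi / 2, 150)
      X = np.column_stack([az, el])
      y = true_error(az, el) + rng.normal(0, 0.5, 150)

      # RBF network: fixed Gaussian centres drawn from the data, linear output weights.
      centres = X[rng.choice(len(X), 30, replace=False)]
      width = 0.8
      Phi = np.exp(-np.sum((X[:, None, :] - centres[None, :, :]) ** 2, axis=2) / (2 * width**2))
      w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

      # Evaluate correction accuracy on unseen pointings.
      az_t = rng.uniform(0, 2 * np.pi, 50); el_t = rng.uniform(0.1, np.pi / 2, 50)
      Xt = np.column_stack([az_t, el_t])
      Pt = np.exp(-np.sum((Xt[:, None, :] - centres[None, :, :]) ** 2, axis=2) / (2 * width**2))
      residual = true_error(az_t, el_t) - Pt @ w
      print("RMS residual after correction (arcsec):", np.sqrt(np.mean(residual**2)))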

  1. Automated selection of computed tomography display parameters using neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Di; Neu, Scott; Valentino, Daniel J.

    2001-07-01

    A collection of artificial neural networks (ANN's) was trained to identify simple anatomical structures in a set of x-ray computed tomography (CT) images. These neural networks learned to associate a point in an image with the anatomical structure containing the point by using the image pixels located on the horizontal and vertical lines that ran through the point. The neural networks were integrated into a computer software tool whose function is to select an index into a list of CT window/level values from the location of the user's mouse cursor. Based upon the anatomical structure selected by the user, the software tool automatically adjusts the image display to optimally view the structure.

  2. Protein secondary structure prediction using modular reciprocal bidirectional recurrent neural networks.

    PubMed

    Babaei, Sepideh; Geranmayeh, Amir; Seyyedsalehi, Seyyed Ali

    2010-12-01

    The supervised learning of recurrent neural networks well suited for prediction of protein secondary structures from the underlying amino acid sequence is studied. Modular reciprocal recurrent neural networks (MRR-NN) are proposed to model the strong correlations between adjacent secondary structure elements. Besides, a multilayer bidirectional recurrent neural network (MBR-NN) is introduced to capture the long-range intramolecular interactions between amino acids in the formation of the secondary structure. The final modular prediction system is devised based on the interactive integration of the MRR-NN and the MBR-NN structures to arbitrarily engage the neighboring effects of the secondary structure types concurrent with memorizing the sequential dependencies of amino acids along the protein chain. The advanced combined network augments the percentage accuracy (Q₃) to 79.36% and boosts the segment overlap (SOV) up to 70.09% when tested on the PSIPRED dataset in three-fold cross-validation. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  3. Patterns of synchrony for feed-forward and auto-regulation feed-forward neural networks.

    PubMed

    Aguiar, Manuela A D; Dias, Ana Paula S; Ferreira, Flora

    2017-01-01

    We consider feed-forward and auto-regulation feed-forward neural (weighted) coupled cell networks. In feed-forward neural networks, cells are arranged in layers such that the cells of the first layer have empty input set and cells of each other layer receive only inputs from cells of the previous layer. An auto-regulation feed-forward neural coupled cell network is a feed-forward neural network where additionally some cells of the first layer have auto-regulation, that is, they have a self-loop. Given a network structure, a robust pattern of synchrony is a space defined in terms of equalities of cell coordinates that is flow-invariant for any coupled cell system (with additive input structure) associated with the network. In this paper, we describe the robust patterns of synchrony for feed-forward and auto-regulation feed-forward neural networks. Regarding feed-forward neural networks, we show that only cells in the same layer can synchronize. On the other hand, in the presence of auto-regulation, we prove that cells in different layers can synchronize in a robust way and we give a characterization of the possible patterns of synchrony that can occur for auto-regulation feed-forward neural networks.

  4. Stability analysis of fractional-order Hopfield neural networks with time delays.

    PubMed

    Wang, Hu; Yu, Yongguang; Wen, Guoguang

    2014-07-01

    This paper investigates the stability for fractional-order Hopfield neural networks with time delays. Firstly, the fractional-order Hopfield neural networks with hub structure and time delays are studied. Some sufficient conditions for stability of the systems are obtained. Next, two fractional-order Hopfield neural networks with different ring structures and time delays are developed. By studying the developed neural networks, the corresponding sufficient conditions for stability of the systems are also derived. It is shown that the stability conditions are independent of time delays. Finally, numerical simulations are given to illustrate the effectiveness of the theoretical results obtained in this paper. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Neural network to diagnose lining condition

    NASA Astrophysics Data System (ADS)

    Yemelyanov, V. A.; Yemelyanova, N. Y.; Nedelkin, A. A.; Zarudnaya, M. V.

    2018-03-01

    The paper presents data on the problem of diagnosing the lining condition at the iron and steel works. The authors describe the neural network structure and software that are designed and developed to determine the lining burnout zones. The simulation results of the proposed neural networks are presented. The authors note the low learning and classification errors of the proposed neural networks. To realize the proposed neural network, the specialized software has been developed.

  6. Generalised Transfer Functions of Neural Networks

    NASA Astrophysics Data System (ADS)

    Fung, C. F.; Billings, S. A.; Zhang, H.

    1997-11-01

    When artificial neural networks are used to model non-linear dynamical systems, the system structure, which can be extremely useful for analysis and design, is buried within the network architecture. In this paper, explicit expressions for the frequency response or generalised transfer functions of both feedforward and recurrent neural networks are derived in terms of the network weights. The derivation of the algorithm is established on the basis of the Taylor series expansion of the activation functions used in a particular neural network. This leads to a representation which is equivalent to the non-linear recursive polynomial model and enables the derivation of the transfer functions to be based on the harmonic expansion method. By mapping the neural network into the frequency domain information about the structure of the underlying non-linear system can be recovered. Numerical examples are included to demonstrate the application of the new algorithm. These examples show that the frequency response functions appear to be highly sensitive to the network topology and training, and that the time domain properties fail to reveal deficiencies in the trained network structure.

  7. Bio-inspired spiking neural network for nonlinear systems control.

    PubMed

    Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M

    2018-08-01

    Spiking neural networks (SNN) are the third generation of artificial neural networks. SNN are the closest approximation to biological neural networks. SNNs make use of temporal spike trains to command inputs and outputs, allowing a faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which the performance of controllers developed by conventional techniques is not satisfactory or difficult to implement. SNN-based controllers exploit their ability for online learning and self-adaptation to evolve when transferred from simulations to the real world. SNN's inherent binary and temporary way of information codification facilitates their hardware implementation compared to analog neurons. Biological neural networks often require a lower number of neurons compared to other controllers based on artificial neural networks. In this work, these neuronal systems are imitated to perform the control of non-linear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed. Particular attention has been paid to optimizing the structure and size of the neural network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to perform controller training. The efficiency of the proposed network has been verified in two examples of dynamic systems control. Simulations show that the proposed control based on SNN exhibits superior performance compared to other approaches based on Neural Networks and SNNs. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Evolutionary neural networks for anomaly detection based on the behavior of a program.

    PubMed

    Han, Sang-Jun; Cho, Sung-Bae

    2006-06-01

    The process of learning the behavior of a given program by using machine-learning techniques (based on system-call audit data) is effective to detect intrusions. Rule learning, neural networks, statistics, and hidden Markov models (HMMs) are some of the representative methods for intrusion detection. Among them, neural networks are known for good performance in learning system-call sequences. In order to apply this knowledge to real-world problems successfully, it is important to determine the structures and weights of the neural networks. However, finding the appropriate structures requires very long time periods because there are no suitable analytical solutions. In this paper, a novel intrusion-detection technique based on evolutionary neural networks (ENNs) is proposed. One advantage of using ENNs is that it takes less time to obtain superior neural networks than when using conventional approaches. This is because they discover the structures and weights of the neural networks simultaneously. Experimental results with the 1999 Defense Advanced Research Projects Agency (DARPA) Intrusion Detection Evaluation (IDEVAL) data confirm that ENNs are promising tools for intrusion detection.

  9. Application of artificial neural networks to the design optimization of aerospace structural components

    NASA Technical Reports Server (NTRS)

    Berke, Laszlo; Patnaik, Surya N.; Murthy, Pappu L. N.

    1993-01-01

    The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated by using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network with the code NETS. Optimum designs for new design conditions were predicted by using the trained network. Neural net prediction of optimum designs was found to be satisfactory for most of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.

  10. Optimum Design of Aerospace Structural Components Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Berke, L.; Patnaik, S. N.; Murthy, P. L. N.

    1993-01-01

    The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires a trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network using the code NETS. Optimum designs for new design conditions were predicted using the trained network. Neural net prediction of optimum designs was found to be satisfactory for the majority of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.

  11. Adaptive neural network/expert system that learns fault diagnosis for different structures

    NASA Astrophysics Data System (ADS)

    Simon, Solomon H.

    1992-08-01

    Corporations need better real-time monitoring and control systems to improve productivity by watching quality and increasing production flexibility. The innovative technology to achieve this goal is evolving in the form of artificial intelligence and neural networks applied to sensor processing, fusion, and interpretation. By using these advanced AI techniques, we can leverage existing systems and add value to conventional techniques. Neural networks and knowledge-based expert systems can be combined into intelligent sensor systems which provide real-time monitoring, control, evaluation, and fault diagnosis for production systems. Neural network-based intelligent sensor systems are more reliable because they can provide continuous, non-destructive monitoring and inspection. Use of neural networks can result in sensor fusion and the ability to model highly non-linear systems. Improved models can provide a foundation for more accurate performance parameters and predictions. We discuss a research software/hardware prototype which integrates neural networks, expert systems, and sensor technologies and which can adapt across a variety of structures to perform fault diagnosis. The flexibility and adaptability of the prototype in learning two structures is presented. Potential applications are discussed.

  12. Improvement of the Hopfield Neural Network by MC-Adaptation Rule

    NASA Astrophysics Data System (ADS)

    Zhou, Zhen; Zhao, Hong

    2006-06-01

    We show that the performance of Hopfield neural networks, especially the quality of recall and the effective storage capacity, can be greatly improved by making use of a recently presented neural network designing method, without altering the overall structure of the network. In the improved neural network, a memory pattern is recalled exactly from initial states having a given degree of similarity with the memory pattern, so that one can avoid applying the overlap criterion as carried out in the Hopfield neural networks.
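
    For reference, a standard Hopfield network with Hebbian storage and asynchronous recall is sketched below; the MC-adaptation designing method proposed in the paper is not reproduced here, only the baseline it improves upon.

      import numpy as np

      rng = np.random.default_rng(7)
      N, P = 100, 5
      patterns = rng.choice([-1, 1], size=(P, N))

      # Hebbian storage (baseline Hopfield rule).
      W = (patterns.T @ patterns) / N
      np.fill_diagonal(W, 0.0)

      def recall(state, steps=5):
          state = state.copy()
          for _ in range(steps):
              for i in rng.permutation(N):              # asynchronous updates
                  state[i] = 1 if W[i] @ state >= 0 else -1
          return state

      # Start from a corrupted version of pattern 0 (about 20% of bits flipped).
      probe = patterns[0] * np.where(rng.random(N) < 0.2, -1, 1)
      recovered = recall(probe)
      overlap = (recovered @ patterns[0]) / N
      print(f"overlap with stored pattern after recall: {overlap:.2f}")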

  13. Neural model of gene regulatory network: a survey on supportive meta-heuristics.

    PubMed

    Biswas, Surama; Acharyya, Sriyankar

    2016-06-01

    Gene regulatory network (GRN) is produced as a result of regulatory interactions between different genes through their coded proteins in cellular context. Having immense importance in disease detection and drug finding, GRN has been modelled through various mathematical and computational schemes and reported in survey articles. Neural and neuro-fuzzy models have been the focus of attraction in bioinformatics. The predominant use of meta-heuristic algorithms in training neural models has proved effective. Considering these facts, this paper is organized to survey neural modelling schemes of GRN and the efficacy of meta-heuristic algorithms towards parameter learning (i.e. weighting connections) within the model. This survey paper renders two different structure-related approaches to infer GRN, namely the global structure approach and the substructure approach. It also describes two neural modelling schemes, such as artificial neural network/recurrent neural network based modelling and neuro-fuzzy modelling. The meta-heuristic algorithms applied so far to learn the structure and parameters of neurally modelled GRN have been reviewed here.

  14. Artificial neural network prediction of aircraft aeroelastic behavior

    NASA Astrophysics Data System (ADS)

    Pesonen, Urpo Juhani

    An Artificial Neural Network that predicts aeroelastic behavior of aircraft is presented. The neural net was designed to predict the shape of a flexible wing in static flight conditions using results from a structural analysis and an aerodynamic analysis performed with traditional computational tools. To generate reliable training and testing data for the network, an aeroelastic analysis code using these tools as components was designed and validated. To demonstrate the advantages and reliability of Artificial Neural Networks, a network was also designed and trained to predict airfoil maximum lift at low Reynolds numbers where wind tunnel data was used for the training. Finally, a neural net was designed and trained to predict the static aeroelastic behavior of a wing without the need to iterate between the structural and aerodynamic solvers.

  15. Modular representation of layered neural networks.

    PubMed

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Explicitly integrating parameter, input, and structure uncertainties into Bayesian Neural Networks for probabilistic hydrologic forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xuesong; Liang, Faming; Yu, Beibei

    2011-11-09

    Estimating uncertainty of hydrologic forecasting is valuable to water resources and other relevant decision making processes. Recently, Bayesian Neural Networks (BNNs) have proved to be powerful tools for quantifying uncertainty of streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework to incorporate the uncertainties associated with input, model structure, and parameters into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform the BNNs that only consider uncertainties associated with parameters and model structure. Critical evaluation of the posterior distribution of neural network weights, number of effective connections, rainfall multipliers, and hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of different uncertainty sources and incorporating output error into the MCMC framework are expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting.

  17. Linking structure and activity in nonlinear spiking networks

    PubMed Central

    Josić, Krešimir; Shea-Brown, Eric

    2017-01-01

    Recent experimental advances are producing an avalanche of data on both neural connectivity and neural activity. To take full advantage of these two emerging datasets we need a framework that links them, revealing how collective neural activity arises from the structure of neural connectivity and intrinsic neural dynamics. This problem of structure-driven activity has drawn major interest in computational neuroscience. Existing methods for relating activity and architecture in spiking networks rely on linearizing activity around a central operating point and thus fail to capture the nonlinear responses of individual neurons that are the hallmark of neural information processing. Here, we overcome this limitation and present a new relationship between connectivity and activity in networks of nonlinear spiking neurons by developing a diagrammatic fluctuation expansion based on statistical field theory. We explicitly show how recurrent network structure produces pairwise and higher-order correlated activity, and how nonlinearities impact the networks’ spiking activity. Our findings open new avenues to investigating how single-neuron nonlinearities—including those of different cell types—combine with connectivity to shape population activity and function. PMID:28644840

  18. The optimization of force inputs for active structural acoustic control using a neural network

    NASA Technical Reports Server (NTRS)

    Cabell, R. H.; Lester, H. C.; Silcox, R. J.

    1992-01-01

    This paper investigates the use of a neural network to determine which force actuators, of a multi-actuator array, are best activated in order to achieve structural-acoustic control. The concept is demonstrated using a cylinder/cavity model on which the control forces, produced by piezoelectric actuators, are applied with the objective of reducing the interior noise. A two-layer neural network is employed and the back propagation solution is compared with the results calculated by a conventional, least-squares optimization analysis. The ability of the neural network to accurately and efficiently control actuator activation for interior noise reduction is demonstrated.

  19. Computational neural networks in chemistry: Model free mapping devices for predicting chemical reactivity from molecular structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elrod, D.W.

    1992-01-01

    Computational neural networks (CNNs) are a computational paradigm inspired by the brain's massively parallel network of highly interconnected neurons. The power of computational neural networks derives not so much from their ability to model the brain as from their ability to learn by example and to map highly complex, nonlinear functions, without the need to explicitly specify the functional relationship. Two central questions about CNNs were investigated in the context of predicting chemical reactions: (1) the mapping properties of neural networks and (2) the representation of chemical information for use in CNNs. Chemical reactivity is here considered an example of a complex, nonlinear function of molecular structure. CNNs were trained using modifications of the back propagation learning rule to map a three dimensional response surface similar to those typically observed in quantitative structure-activity and structure-property relationships. The computational neural network's mapping of the response surface was found to be robust to the effects of training sample size, noisy data and intercorrelated input variables. The investigation of chemical structure representation led to the development of a molecular structure-based connection-table representation suitable for neural network training. An extension of this work led to a BE-matrix structure representation that was found to be general for several classes of reactions. The CNN prediction of chemical reactivity and regiochemistry was investigated for electrophilic aromatic substitution reactions, Markovnikov addition to alkenes, Saytzeff elimination from haloalkanes, Diels-Alder cycloaddition, and retro Diels-Alder ring opening reactions using these connectivity-matrix derived representations. The reaction predictions made by the CNNs were more accurate than those of an expert system and were comparable to predictions made by chemists.

  20. Prediction of strain values in reinforcements and concrete of a RC frame using neural networks

    NASA Astrophysics Data System (ADS)

    Vafaei, Mohammadreza; Alih, Sophia C.; Shad, Hossein; Falah, Ali; Halim, Nur Hajarul Falahi Abdul

    2018-03-01

    The level of strain in structural elements is an important indicator of the presence of damage and its intensity. For this reason, structural health monitoring systems often employ strain gauges to measure strains in critical elements. However, because of their sensitivity to magnetic fields, inadequate long-term durability especially in harsh environments, difficulties in installation on existing structures, and maintenance cost, installation of strain gauges is not always possible for all structural components. Therefore, a reliable method that can accurately estimate strain values in critical structural elements is necessary for damage identification. In this study, a full-scale test was conducted on a planar RC frame to investigate the capability of neural networks for predicting strain values. Two neural networks, each with a single hidden layer, were trained to relate the measured rotations and vertical displacements of the frame to the strain values measured at different locations of the frame. Results indicated that the trained neural networks accurately estimated the strain values both in the reinforcements and in the concrete. In addition, the trained neural networks were capable of predicting strains for unseen input data.

  1. Variable Neural Adaptive Robust Control: A Switched System Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lian, Jianming; Hu, Jianghai; Zak, Stanislaw H.

    2015-05-01

    Variable neural adaptive robust control strategies are proposed for the output tracking control of a class of multi-input multi-output uncertain systems. The controllers incorporate a variable-structure radial basis function (RBF) network as the self-organizing approximator for unknown system dynamics. The variable-structure RBF network solves the problem of structure determination associated with fixed-structure RBF networks. It can determine the network structure on-line dynamically by adding or removing radial basis functions according to the tracking performance. The structure variation is taken into account in the stability analysis of the closed-loop system using a switched system approach with the aid of the piecewise quadratic Lyapunov function. The performance of the proposed variable neural adaptive robust controllers is illustrated with simulations.
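
    The on-line add/remove logic of a variable-structure RBF approximator can be sketched roughly as follows: a new centre is added when the approximation error is large and the input is far from all existing centres, and centres whose weights stay negligible are pruned. The thresholds and the simple gradient adaptation law below are illustrative assumptions, not the paper's stability-proved design.

      import numpy as np

      class VariableRBF:
          def __init__(self, width=0.5, add_err=0.2, add_dist=0.6, prune_w=1e-3):
              self.c = np.empty((0, 1)); self.w = np.empty(0)
              self.width, self.add_err, self.add_dist, self.prune_w = width, add_err, add_dist, prune_w

          def phi(self, x):
              if len(self.c) == 0:
                  return np.empty(0)
              return np.exp(-((x - self.c[:, 0]) ** 2) / (2 * self.width**2))

          def predict(self, x):
              return float(self.phi(x) @ self.w) if len(self.w) else 0.0

          def update(self, x, y, lr=0.3):
              err = y - self.predict(x)
              far = len(self.c) == 0 or np.min(np.abs(x - self.c[:, 0])) > self.add_dist
              if abs(err) > self.add_err and far:                 # grow: add a centre at x
                  self.c = np.vstack([self.c, [[x]]]); self.w = np.append(self.w, err)
              elif len(self.w):
                  self.w += lr * err * self.phi(x)                # gradient step on existing weights
                  keep = np.abs(self.w) > self.prune_w            # prune: drop useless centres
                  self.c, self.w = self.c[keep], self.w[keep]

      net = VariableRBF()
      rng = np.random.default_rng(8)
      for x in rng.uniform(-3, 3, 2000):                          # "unknown dynamics" to approximate
          net.update(x, np.sin(x))
      print("centres in use:", len(net.c), " error at 1.0:", abs(net.predict(1.0) - np.sin(1.0)))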

  2. Artificial Intelligence in Prediction of Secondary Protein Structure Using CB513 Database

    PubMed Central

    Avdagic, Zikrija; Purisevic, Elvir; Omanovic, Samir; Coralic, Zlatan

    2009-01-01

    In this paper we describe CB513, a non-redundant dataset suitable for the development of algorithms for the prediction of secondary protein structure. A program was written in Borland Delphi to transform data from the dataset into a form suitable for training a neural network for secondary protein structure prediction, implemented in the MATLAB Neural Network Toolbox. Learning (training and testing) of the neural network is investigated with different window sizes, different numbers of neurons in the hidden layer, and different numbers of training epochs, using the CB513 dataset. PMID:21347158
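
    The window-based encoding that such experiments rely on can be sketched as follows: each residue is represented by a one-hot encoding of the amino acids inside a sliding window centred on it, which becomes one training example for the network. Alphabet handling and padding below are assumptions for illustration; the Delphi/MATLAB tooling is not reproduced, and the sequence shown is made up.

      import numpy as np

      AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
      AA_INDEX = {a: i for i, a in enumerate(AMINO_ACIDS)}
      SS_INDEX = {"H": 0, "E": 1, "C": 2}          # helix, strand, coil

      def windows(sequence, labels, half=6):
          """One-hot encode a sliding window of 2*half+1 residues per position."""
          X, y = [], []
          for i in range(len(sequence)):
              vec = np.zeros((2 * half + 1, len(AMINO_ACIDS) + 1))   # +1 column marks padding
              for k, j in enumerate(range(i - half, i + half + 1)):
                  if 0 <= j < len(sequence):
                      vec[k, AA_INDEX[sequence[j]]] = 1.0
                  else:
                      vec[k, -1] = 1.0                               # outside the chain
              X.append(vec.ravel())
              y.append(SS_INDEX[labels[i]])
          return np.array(X), np.array(y)

      X, y = windows("MKVLAAGIVLLLISV", "CCHHHHHHEEEECCC", half=6)
      print("examples:", X.shape, "targets:", y.shape)   # (15, 13*21) and (15,)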

  3. Use long short-term memory to enhance Internet of Things for combined sewer overflow monitoring

    NASA Astrophysics Data System (ADS)

    Zhang, Duo; Lindholm, Geir; Ratnaweera, Harsha

    2018-01-01

    Combined sewer overflow (CSO) causes severe water pollution, urban flooding, and reduced treatment plant efficiency. Understanding the behavior of CSO structures is vital for urban flooding prevention and overflow control. Neural networks have been extensively applied in water resource related fields. In this study, we collect data from an Internet of Things system monitoring a CSO structure and build different neural network models for simulating and predicting the water level of the CSO structure. Through a comparison of four different neural networks, namely the multilayer perceptron (MLP), wavelet neural network (WNN), long short-term memory (LSTM) and gated recurrent unit (GRU), the LSTM and GRU present superior capabilities for multi-step-ahead time series prediction. Furthermore, the GRU achieves prediction performance similar to the LSTM with a quicker learning curve.
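
    The multi-step-ahead task can be framed by turning the water-level series into (history window -> future horizon) pairs, which any of the four networks compared above would then be trained on. The helper below is an illustrative sketch with synthetic data and a naive persistence baseline instead of an actual LSTM/GRU.

      import numpy as np

      def make_supervised(series, n_lags=24, horizon=6):
          """Build (X, Y): X holds the last n_lags levels, Y the next horizon levels."""
          X, Y = [], []
          for t in range(n_lags, len(series) - horizon + 1):
              X.append(series[t - n_lags:t])
              Y.append(series[t:t + horizon])
          return np.array(X), np.array(Y)

      # Toy water-level record (daily cycle plus rain spikes) standing in for the IoT sensor data.
      rng = np.random.default_rng(9)
      t = np.arange(2000)
      level = 1.0 + 0.3 * np.sin(2 * np.pi * t / 288) + 0.5 * (rng.random(2000) < 0.01)

      X, Y = make_supervised(level, n_lags=24, horizon=6)
      persistence = np.repeat(X[:, -1:], 6, axis=1)       # naive baseline: repeat last observation
      print("samples:", X.shape, " baseline RMSE:", np.sqrt(np.mean((persistence - Y) ** 2)))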

  4. eLoom and Flatland: specification, simulation and visualization engines for the study of arbitrary hierarchical neural architectures.

    PubMed

    Caudell, Thomas P; Xiao, Yunhai; Healy, Michael J

    2003-01-01

    eLoom is an open source graph simulation software tool, developed at the University of New Mexico (UNM), that enables users to specify and simulate neural network models. Its specification language and libraries enable users to construct and simulate arbitrary, potentially hierarchical network structures on serial and parallel processing systems. In addition, eLoom is integrated with UNM's Flatland, an open source virtual environments development tool, to provide real-time visualizations of the network structure and activity. Visualization is a useful method for understanding both learning and computation in artificial neural networks. Through animated 3-D pictorial representations of the state and flow of information in the network, a better understanding of network functionality is achieved. ART-1, LAPART-II, MLP, and SOM neural networks are presented to illustrate eLoom and Flatland's capabilities.

  5. Functional approximation using artificial neural networks in structural mechanics

    NASA Technical Reports Server (NTRS)

    Alam, Javed; Berke, Laszlo

    1993-01-01

    The artificial neural network (ANN) methodology is an outgrowth of research in artificial intelligence. In this study, the feed-forward network model proposed by Rumelhart, Hinton, and Williams was applied to the mapping of functions encountered in structural mechanics problems. Several different network configurations were trained on the available data for problems in materials characterization and structural analysis of plates and shells. Using the recall process, the accuracy of these trained networks was assessed.

  6. A Feasibility Study of Synthesizing Substructures Modeled with Computational Neural Networks

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Housner, Jerrold M.; Szewczyk, Z. Peter

    1998-01-01

    This paper investigates the feasibility of synthesizing substructures modeled with computational neural networks. Substructures are modeled individually with computational neural networks and the response of the assembled structure is predicted by synthesizing the neural networks. A superposition approach is applied to synthesize models for statically determinate substructures while an interface displacement collocation approach is used to synthesize statically indeterminate substructure models. Beam and plate substructures along with components of a complicated Next Generation Space Telescope (NGST) model are used in this feasibility study. In this paper, the limitations and difficulties of synthesizing substructures modeled with neural networks are also discussed.

  7. Classification of 2-dimensional array patterns: assembling many small neural networks is better than using a large one.

    PubMed

    Chen, Liang; Xue, Wei; Tokuda, Naoyuki

    2010-08-01

    In many pattern classification/recognition applications of artificial neural networks, an object to be classified is represented by a fixed-size 2-dimensional array of uniform type, which corresponds to the cells of a 2-dimensional grid of the same size. A general neural network structure, called an undistricted neural network, which takes all the elements in the array as inputs, could be used for problems such as these. However, a districted neural network can be used to reduce the training complexity. A districted neural network usually consists of two levels of sub-neural networks. Each of the lower level neural networks, called a regional sub-neural network, takes the elements in a region of the array as its inputs and is expected to output a temporary class label, called an individual opinion, based on the partial information of the entire array. The higher level neural network, called an assembling sub-neural network, uses the outputs (opinions) of the regional sub-neural networks as inputs, and by consensus derives the label decision for the object. Each of the sub-neural networks can be trained separately and thus the training is less expensive. The regional sub-neural networks can be trained and run in parallel and independently, so high speed can be achieved. We prove theoretically in this paper, using a simple model, that a districted neural network is actually more stable than an undistricted neural network in noisy environments. We conjecture that the result is valid for all neural networks. This theory is verified by experiments involving gender classification and human face recognition. We conclude that a districted neural network is highly recommended for neural network applications in recognition or classification of 2-dimensional array patterns in highly noisy environments. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
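
    The two-level arrangement described above might be sketched roughly as follows; the region layout, hidden sizes, and the way the individual opinions are fused are illustrative assumptions (and the sub-networks would normally be trained separately, as the abstract notes).

        import torch
        import torch.nn as nn

        class DistrictedNet(nn.Module):
            """Regional sub-networks give opinions on their patch; an assembling network fuses them."""

            def __init__(self, grid=16, regions_per_side=4, n_classes=2):
                super().__init__()
                self.patch = grid // regions_per_side                       # side length of one region
                self.regions_per_side = regions_per_side
                n_regions = regions_per_side ** 2
                self.regional = nn.ModuleList([
                    nn.Sequential(nn.Linear(self.patch ** 2, 32), nn.ReLU(), nn.Linear(32, n_classes))
                    for _ in range(n_regions)
                ])
                self.assembler = nn.Sequential(nn.Linear(n_regions * n_classes, 32),
                                               nn.ReLU(), nn.Linear(32, n_classes))

            def forward(self, x):                                           # x: (batch, grid, grid)
                opinions = []
                for i in range(self.regions_per_side):
                    for j in range(self.regions_per_side):
                        patch = x[:, i * self.patch:(i + 1) * self.patch,
                                     j * self.patch:(j + 1) * self.patch]
                        k = i * self.regions_per_side + j
                        opinions.append(self.regional[k](patch.flatten(1))) # individual opinion
                return self.assembler(torch.cat(opinions, dim=1))           # consensus decision

        logits = DistrictedNet()(torch.randn(4, 16, 16))                    # shape: (4, 2)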

  8. Evolvable rough-block-based neural network and its biomedical application to hypoglycemia detection system.

    PubMed

    San, Phyo Phyo; Ling, Sai Ho; Nuryani; Nguyen, Hung

    2014-08-01

    This paper focuses on the hybridization technology using rough sets concepts and neural computing for decision and classification purposes. Based on the rough set properties, the lower region and boundary region are defined to partition the input signal to a consistent (predictable) part and an inconsistent (random) part. In this way, the neural network is designed to deal only with the boundary region, which mainly consists of an inconsistent part of applied input signal causing inaccurate modeling of the data set. Owing to different characteristics of neural network (NN) applications, the same structure of conventional NN might not give the optimal solution. Based on the knowledge of application in this paper, a block-based neural network (BBNN) is selected as a suitable classifier due to its ability to evolve internal structures and adaptability in dynamic environments. This architecture will systematically incorporate the characteristics of application to the structure of hybrid rough-block-based neural network (R-BBNN). A global training algorithm, hybrid particle swarm optimization with wavelet mutation is introduced for parameter optimization of proposed R-BBNN. The performance of the proposed R-BBNN algorithm was evaluated by an application to the field of medical diagnosis using real hypoglycemia episodes in patients with Type 1 diabetes mellitus. The performance of the proposed hybrid system has been compared with some of the existing neural networks. The comparison results indicated that the proposed method has improved classification performance and results in early convergence of the network.

  9. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. Corpus callosum segmentation using deep neural networks with prior information from multi-atlas images

    NASA Astrophysics Data System (ADS)

    Park, Gilsoon; Hong, Jinwoo; Lee, Jong-Min

    2018-03-01

    In the human brain, the corpus callosum (CC) is the largest white matter structure, connecting the right and left hemispheres. Structural features such as the shape and size of the CC in the midsagittal plane are of great significance for analyzing various neurological diseases, for example Alzheimer's disease, autism and epilepsy. For quantitative and qualitative studies of the CC in brain MR images, robust segmentation of the CC is important. In this paper, we present a novel method for CC segmentation. Our approach is based on deep neural networks and prior information generated from multi-atlas images. Deep neural networks have recently shown good performance in various image processing fields; in particular, convolutional neural networks (CNN) have shown outstanding performance for classification and segmentation in medical imaging. We used a convolutional neural network for CC segmentation. Multi-atlas based segmentation models have been widely used in medical image segmentation because an atlas, consisting of MR images and corresponding manual segmentations of the target structure, carries strong prior information about the structure to be segmented. We incorporated prior information derived from the multi-atlas images, such as the location and intensity distribution of the target structure (i.e., the CC), into the CNN training process to improve training. The CNN with prior information showed better segmentation performance than the CNN without it.

  11. Using Neural Networks in the Mapping of Mixed Discrete/Continuous Design Spaces With Application to Structural Design

    DTIC Science & Technology

    1994-02-01

    …desired that the problem to which the design space mapping techniques were applied be easily analyzed, yet provide a design space with realistic complexity … consistent fully stressed solution. [3 DESIGN SPACE MAPPING] In order to reduce the computational expense required to optimize design spaces, neural networks … employed in this study. Some of the issues involved in using neural networks to do design space mapping are how to configure the neural network, how much …

  12. Vibration control of building structures using self-organizing and self-learning neural networks

    NASA Astrophysics Data System (ADS)

    Madan, Alok

    2005-11-01

    Past research in artificial intelligence establishes that artificial neural networks (ANN) are effective and efficient computational processors for performing a variety of tasks including pattern recognition, classification, associative recall, combinatorial problem solving, adaptive control, multi-sensor data fusion, noise filtering and data compression, modelling and forecasting. The paper presents a potentially feasible approach for training ANN in active control of earthquake-induced vibrations in building structures without the aid of teacher signals (i.e. target control forces). A counter-propagation neural network is trained to output the control forces that are required to reduce the structural vibrations in the absence of any feedback on the correctness of the output control forces (i.e. without any information on the errors in output activations of the network). The present study shows that, in principle, the counter-propagation network (CPN) can learn from the control environment to compute the required control forces without the supervision of a teacher (unsupervised learning). Simulated case studies are presented to demonstrate the feasibility of implementing the unsupervised learning approach in ANN for effective vibration control of structures under the influence of earthquake ground motions. The proposed learning methodology obviates the need for developing a mathematical model of structural dynamics or training a separate neural network to emulate the structural response for implementation in practice.

  13. Hierarchical classification with a competitive evolutionary neural tree.

    PubMed

    Adams, R G.; Butchart, K; Davey, N

    1999-04-01

    A new, dynamic, tree-structured network, the Competitive Evolutionary Neural Tree (CENT), is introduced. The network is able to provide a hierarchical classification of unlabelled data sets. The main advantage that the CENT offers over other hierarchical competitive networks is its ability to self-determine the number and structure of the competitive nodes in the network, without the need for externally set parameters. The network produces stable classificatory structures by halting its growth using locally calculated heuristics. The results of network simulations are presented over a range of data sets, including Anderson's IRIS data set. The CENT network demonstrates its ability to produce a representative hierarchical structure to classify a broad range of data sets.

  14. Lunar Circular Structure Classification from Chang 'e 2 High Resolution Lunar Images with Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Zeng, X. G.; Liu, J. J.; Zuo, W.; Chen, W. L.; Liu, Y. X.

    2018-04-01

    Circular structures are widely distributed across the lunar surface; the most typical of these are lunar impact craters and lunar domes. In this work, we use a convolutional neural network to classify lunar circular structures in lunar images.

  15. A neural network for controlling the configuration of frame structure with elastic members

    NASA Technical Reports Server (NTRS)

    Tsutsumi, Kazuyoshi

    1989-01-01

    A neural network for controlling the configuration of a frame structure with elastic members is proposed. In the present network, the structure is modeled not by using the relative angles of the members but by using only the distances between the joint locations. The relationship between the environment and the joints is also defined by their mutual distances. The analog neural network realizes the reaching motion of the manipulator as the minimization of an energy function constructed from the distances between the joints, the target, and the obstacles. The network can generate not only the final but also the transient configurations and the trajectory. This flexible, parallel framework is well suited to controlling space telerobotic systems with many degrees of freedom.

  16. Research on image retrieval using deep convolutional neural network combining L1 regularization and PRelu activation function

    NASA Astrophysics Data System (ADS)

    QingJie, Wei; WenBin, Wang

    2017-06-01

    In this paper, image retrieval using a deep convolutional neural network combined with L1 regularization and the PReLU activation function is studied, improving image retrieval accuracy. A deep convolutional neural network not only loosely mimics the way the human brain receives and transmits information, but also contains convolution operations that are well suited to processing images. Using a deep convolutional neural network outperforms direct extraction of image visual features for image retrieval. However, the structure of a deep convolutional neural network is complex and prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting and improves the accuracy of image retrieval.
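
    The combination described above might look roughly like the following in PyTorch, where an L1 penalty on the weights is added to the task loss and PReLU is used as the activation; the architecture, penalty weight, and the placeholder regression loss are illustrative assumptions, not the network from the paper.

        import torch
        import torch.nn as nn

        class RetrievalCNN(nn.Module):
            """Small convolutional feature extractor with PReLU activations (illustrative)."""

            def __init__(self, embed_dim=128):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 32, 3, padding=1), nn.PReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.PReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.embed = nn.Linear(64, embed_dim)    # descriptor used for retrieval

            def forward(self, x):
                return self.embed(self.features(x).flatten(1))

        def l1_penalty(model, weight=1e-5):
            """L1 regularization term added to the training loss to discourage over-fitting."""
            return weight * sum(p.abs().sum() for p in model.parameters())

        model = RetrievalCNN()
        x, target = torch.randn(4, 3, 64, 64), torch.randn(4, 128)
        loss = nn.functional.mse_loss(model(x), target) + l1_penalty(model)
        loss.backward()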

  17. LavaNet—Neural network development environment in a general mine planning package

    NASA Astrophysics Data System (ADS)

    Kapageridis, Ioannis Konstantinou; Triantafyllou, A. G.

    2011-04-01

    LavaNet is a series of scripts written in Perl that gives access to a neural network simulation environment inside a general mine planning package. A well-known and very popular neural network development environment, the Stuttgart Neural Network Simulator, is used as the base for the development of neural networks. LavaNet runs inside VULCAN™, a complete mine planning package with advanced database, modelling and visualisation capabilities. LavaNet takes advantage of VULCAN's Perl-based scripting environment, Lava, to bring all the benefits of neural network development and application to geologists, mining engineers and other users of the specific mine planning package. LavaNet enables easy development of neural network training data sets using information from any of the available data and model structures, such as block models and drillhole databases. Neural networks can be trained inside VULCAN™ and the results used to generate new models that can be visualised in 3D. Direct comparison of developed neural network models with conventional and geostatistical techniques is now possible within the same mine planning software package. LavaNet supports Radial Basis Function networks, Multi-Layer Perceptrons and Self-Organised Maps.

  18. Complex Networks in Psychological Models

    NASA Astrophysics Data System (ADS)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes, by a neurocomputational substrate. These models are examples of real world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.

  19. An improved wavelet neural network medical image segmentation algorithm with combined maximum entropy

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoqian; Tao, Jinxu; Ye, Zhongfu; Qiu, Bensheng; Xu, Jinzhang

    2018-05-01

    In order to solve the problem of medical image segmentation, a wavelet neural network medical image segmentation algorithm based on a combined maximum entropy criterion is proposed. First, we use an artificial bee colony algorithm to optimize the parameters of the wavelet neural network, obtaining the network structure, initial weights, threshold values, and so on; this allows training to converge quickly to high precision and avoids falling into local extrema. Then the optimal number of iterations is obtained by calculating the maximum entropy of the segmented image, so as to achieve automatic and accurate segmentation. Medical image segmentation experiments show that the proposed algorithm effectively reduces training time and improves convergence precision, and that its segmentation is more accurate and effective than a traditional BP neural network (back-propagation neural network: a multilayer feed-forward neural network trained with the error back-propagation algorithm).

  20. A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.

    PubMed

    Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias

    2008-12-01

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule, including passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
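
    For reference, the quantities referred to above can be written in their standard form (these are textbook definitions, not formulas reproduced from the paper): the Jacobian of the one-step network map along a trajectory and the largest Lyapunov exponent estimated from products of Jacobians.

        % Jacobian of the one-step network map F at state x_t, and the largest Lyapunov
        % exponent estimated from products of Jacobians along a trajectory.
        \[
          J(x_t) = \frac{\partial F(x_t)}{\partial x}, \qquad
          \lambda_{\max} = \lim_{T \to \infty} \frac{1}{T}
            \ln \left\| J(x_{T-1}) J(x_{T-2}) \cdots J(x_0)\, u_0 \right\|
        \]
        % where u_0 is an arbitrary initial perturbation; the abstract reports that sensitivity
        % to a learned pattern is maximal when \lambda_{\max} is close to 0.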

  1. Synchronization in a noise-driven developing neural network

    NASA Astrophysics Data System (ADS)

    Lin, I.-H.; Wu, R.-K.; Chen, C.-M.

    2011-11-01

    We use computer simulations to investigate the structural and dynamical properties of a developing neural network whose activity is driven by noise. Structurally, the constructed neural networks in our simulations exhibit the small-world properties that have been observed in several neural networks. The dynamical change of neuronal membrane potential is described by the Hodgkin-Huxley model, and two types of learning rules, including spike-timing-dependent plasticity (STDP) and inverse STDP, are considered to restructure the synaptic strength between neurons. Clustered synchronized firing (SF) of the network is observed when the network connectivity (number of connections/maximal connections) is about 0.75, in which the firing rate of neurons is only half of the network frequency. At the connectivity of 0.86, all neurons fire synchronously at the network frequency. The network SF frequency increases logarithmically with the culturing time of a growing network and decreases exponentially with the delay time in signal transmission. These conclusions are consistent with experimental observations. The phase diagrams of SF in a developing network are investigated for both learning rules.

  2. Application of dynamic recurrent neural networks in nonlinear system identification

    NASA Astrophysics Data System (ADS)

    Du, Yun; Wu, Xueli; Sun, Huiqin; Zhang, Suying; Tian, Qiang

    2006-11-01

    An adaptive identification method using a simple dynamic recurrent neural network (SRNN) for nonlinear dynamic systems is presented in this paper. Starting from the idea that feeding back the internal states of a dynamic network describes the nonlinear characteristics of a system more directly, the method derives the recursive prediction error (RPE) learning algorithm for the SRNN and improves the algorithm by adopting a topological structure for the recursion layer that carries no weight values. The simulation results indicate that this kind of neural network is suitable for real-time control because of its fewer weights, simpler learning algorithm, faster identification and higher model precision. It avoids the intricate training and slow convergence caused by the complicated topological structure of the usual dynamic recurrent neural network.

  3. Neural network-based model reference adaptive control system.

    PubMed

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to compensate adaptively the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.

  4. A loop-based neural architecture for structured behavior encoding and decoding.

    PubMed

    Gisiger, Thomas; Boukadoum, Mounir

    2018-02-01

    We present a new type of artificial neural network that generalizes on anatomical and dynamical aspects of the mammal brain. Its main novelty lies in its topological structure which is built as an array of interacting elementary motifs shaped like loops. These loops come in various types and can implement functions such as gating, inhibitory or executive control, or encoding of task elements to name a few. Each loop features two sets of neurons and a control region, linked together by non-recurrent projections. The two neural sets do the bulk of the loop's computations while the control unit specifies the timing and the conditions under which the computations implemented by the loop are to be performed. By functionally linking many such loops together, a neural network is obtained that may perform complex cognitive computations. To demonstrate the potential offered by such a system, we present two neural network simulations. The first illustrates the structure and dynamics of a single loop implementing a simple gating mechanism. The second simulation shows how connecting four loops in series can produce neural activity patterns that are sufficient to pass a simplified delayed-response task. We also show that this network reproduces electrophysiological measurements gathered in various regions of the brain of monkeys performing similar tasks. We also demonstrate connections between this type of neural network and recurrent or long short-term memory network models, and suggest ways to generalize them for future artificial intelligence research. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Hybrid computing using a neural network with dynamic external memory.

    PubMed

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.

  6. Deep learning for computational chemistry.

    PubMed

    Goh, Garrett B; Hodas, Nathan O; Vishnu, Abhinav

    2017-06-15

    The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on multilayer neural networks. Within the last few years, we have seen the transformative impact of deep learning in many domains, particularly in speech recognition and computer vision, to the extent that the majority of expert practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties that distinguish them from traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including quantitative structure activity relationship, virtual screening, protein structure prediction, quantum chemistry, materials design, and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance against non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a valuable tool for computational chemistry. © 2017 Wiley Periodicals, Inc.

  7. Advanced obstacle avoidance for a laser based wheelchair using optimised Bayesian neural networks.

    PubMed

    Trieu, Hoang T; Nguyen, Hung T; Willey, Keith

    2008-01-01

    In this paper we present an advanced method of obstacle avoidance for a laser-based intelligent wheelchair using optimized Bayesian neural networks. Three neural networks are designed for three separate sub-tasks: passing through a doorway, corridor and wall following, and general obstacle avoidance. The accurate usable accessible space is determined by including the actual wheelchair dimensions in a real-time map used as input to each network. Data acquisition is performed separately to collect the patterns required for each sub-task. A Bayesian framework is used to determine the optimal neural network structure in each case, and these networks are then trained under the supervision of the Bayesian rule. Experimental results showed that, compared to the VFH algorithm, our neural networks navigated a smoother path following a near-optimum trajectory.

  8. Neural network for solving convex quadratic bilevel programming problems.

    PubMed

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie

    2014-03-01

    In this paper, using the idea of successive approximation, we propose a neural network to solve convex quadratic bilevel programming problems (CQBPPs), which is modeled by a nonautonomous differential inclusion. Different from the existing neural network for CQBPP, the model has the least number of state variables and simple structure. Based on the theory of nonsmooth analysis, differential inclusions and Lyapunov-like method, the limit equilibrium points sequence of the proposed neural networks can approximately converge to an optimal solution of CQBPP under certain conditions. Finally, simulation results on two numerical examples and the portfolio selection problem show the effectiveness and performance of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. A Neural Network Model of the Structure and Dynamics of Human Personality

    ERIC Educational Resources Information Center

    Read, Stephen J.; Monroe, Brian M.; Brownstein, Aaron L.; Yang, Yu; Chopra, Gurveen; Miller, Lynn C.

    2010-01-01

    We present a neural network model that aims to bridge the historical gap between dynamic and structural approaches to personality. The model integrates work on the structure of the trait lexicon, the neurobiology of personality, temperament, goal-based models of personality, and an evolutionary analysis of motives. It is organized in terms of two…

  10. Neurocomputing

    NASA Technical Reports Server (NTRS)

    Hecht-Nielsen, Robert

    1990-01-01

    The present work is intended to give technologists, research scientists, and mathematicians a graduate-level overview of the field of neurocomputing. After exploring the relationship of this field to general neuroscience, attention is given to neural network building blocks, the self-adaptation equations of learning laws, the data-transformation structures of associative networks, and the multilayer data-transformation structures of mapping networks. Also treated are the neurocomputing frontiers of spatiotemporal, stochastic, and hierarchical networks, 'neurosoftware', the creation of neural network-based computers, and neurocomputing applications in sensor processing, control, and data analysis.

  11. Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure.

    PubMed

    Li, Xiumin; Small, Michael

    2012-06-01

    Neuronal avalanche is a spontaneous neuronal activity which obeys a power-law distribution of population event sizes with an exponent of -3/2. It has been observed in the superficial layers of cortex both in vivo and in vitro. In this paper, we analyze the information transmission of a novel self-organized neural network with active-neuron-dominant structure. Neuronal avalanches can be observed in this network with appropriate input intensity. We find that the process of network learning via spike-timing dependent plasticity dramatically increases the complexity of network structure, which is finally self-organized to be active-neuron-dominant connectivity. Both the entropy of activity patterns and the complexity of their resulting post-synaptic inputs are maximized when the network dynamics are propagated as neuronal avalanches. This emergent topology is beneficial for information transmission with high efficiency and also could be responsible for the large information capacity of this network compared with alternative archetypal networks with different neural connectivity.
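
    For reference, the avalanche statistic quoted above is the standard power-law form for the probability of an avalanche involving s neurons (written here from the exponent stated in the abstract, not copied from the paper):

        \[
          P(s) \propto s^{-3/2}
        \]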

  12. Network evolution induced by asynchronous stimuli through spike-timing-dependent plasticity.

    PubMed

    Yuan, Wu-Jie; Zhou, Jian-Fang; Zhou, Changsong

    2013-01-01

    In sensory neural system, external asynchronous stimuli play an important role in perceptual learning, associative memory and map development. However, the organization of structure and dynamics of neural networks induced by external asynchronous stimuli are not well understood. Spike-timing-dependent plasticity (STDP) is a typical synaptic plasticity that has been extensively found in the sensory systems and that has received much theoretical attention. This synaptic plasticity is highly sensitive to correlations between pre- and postsynaptic firings. Thus, STDP is expected to play an important role in response to external asynchronous stimuli, which can induce segregative pre- and postsynaptic firings. In this paper, we study the impact of external asynchronous stimuli on the organization of structure and dynamics of neural networks through STDP. We construct a two-dimensional spatial neural network model with local connectivity and sparseness, and use external currents to stimulate alternately on different spatial layers. The adopted external currents imposed alternately on spatial layers can be here regarded as external asynchronous stimuli. Through extensive numerical simulations, we focus on the effects of stimulus number and inter-stimulus timing on synaptic connecting weights and the property of propagation dynamics in the resulting network structure. Interestingly, the resulting feedforward structure induced by stimulus-dependent asynchronous firings and its propagation dynamics reflect both the underlying property of STDP. The results imply a possible important role of STDP in generating feedforward structure and collective propagation activity required for experience-dependent map plasticity in developing in vivo sensory pathways and cortices. The relevance of the results to cue-triggered recall of learned temporal sequences, an important cognitive function, is briefly discussed as well. Furthermore, this finding suggests a potential application for examining STDP by measuring neural population activity in a cultured neural network.

  13. Structure-function relationships during segregated and integrated network states of human brain functional connectivity.

    PubMed

    Fukushima, Makoto; Betzel, Richard F; He, Ye; van den Heuvel, Martijn P; Zuo, Xi-Nian; Sporns, Olaf

    2018-04-01

    Structural white matter connections are thought to facilitate integration of neural information across functionally segregated systems. Recent studies have demonstrated that changes in the balance between segregation and integration in brain networks can be tracked by time-resolved functional connectivity derived from resting-state functional magnetic resonance imaging (rs-fMRI) data and that fluctuations between segregated and integrated network states are related to human behavior. However, how these network states relate to structural connectivity is largely unknown. To obtain a better understanding of structural substrates for these network states, we investigated how the relationship between structural connectivity, derived from diffusion tractography, and functional connectivity, as measured by rs-fMRI, changes with fluctuations between segregated and integrated states in the human brain. We found that the similarity of edge weights between structural and functional connectivity was greater in the integrated state, especially at edges connecting the default mode and the dorsal attention networks. We also demonstrated that the similarity of network partitions, evaluated between structural and functional connectivity, increased and the density of direct structural connections within modules in functional networks was elevated during the integrated state. These results suggest that, when functional connectivity exhibited an integrated network topology, structural connectivity and functional connectivity were more closely linked to each other and direct structural connections mediated a larger proportion of neural communication within functional modules. Our findings point out the possibility of significant contributions of structural connections to integrative neural processes underlying human behavior.

  14. On the neural substrates leading to the emergence of mental operational structures

    NASA Technical Reports Server (NTRS)

    Ogmen, H.

    1993-01-01

    A developmental approach to the study of the emergence of mental operational structures in neural networks is presented. Neural architectures proposed to underlie the six stages of the sensory-motor period are discussed.

  15. Effect of dilution in asymmetric recurrent neural networks.

    PubMed

    Folli, Viola; Gosti, Giorgio; Leonetti, Marco; Ruocco, Giancarlo

    2018-04-16

    We study with numerical simulation the possible limit behaviors of synchronous discrete-time deterministic recurrent neural networks composed of N binary neurons as a function of a network's level of dilution and asymmetry. The network dilution measures the fraction of neuron couples that are connected, and the network asymmetry measures to what extent the underlying connectivity matrix is asymmetric. For each given neural network, we study the dynamical evolution of all the different initial conditions, thus characterizing the full dynamical landscape without imposing any learning rule. Because of the deterministic dynamics, each trajectory converges to an attractor, that can be either a fixed point or a limit cycle. These attractors form the set of all the possible limit behaviors of the neural network. For each network we then determine the convergence times, the limit cycles' length, the number of attractors, and the sizes of the attractors' basins. We show that there are two network structures that maximize the number of possible limit behaviors. The first optimal network structure is fully connected and symmetric. On the contrary, the second optimal network structure is highly sparse and asymmetric. The latter optimum is similar to what is observed in different biological neuronal circuits. These observations lead us to hypothesize that, independently of any given learning model, an efficient and effective biological network that stores a number of limit behaviors close to its maximum capacity tends to develop a connectivity structure similar to one of the optimal networks we found. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.

  16. Neural substrates of decision-making.

    PubMed

    Broche-Pérez, Y; Herrera Jiménez, L F; Omar-Martínez, E

    2016-06-01

    Decision-making is the process of selecting a course of action from among 2 or more alternatives by considering the potential outcomes of selecting each option and estimating its consequences in the short, medium and long term. The prefrontal cortex (PFC) has traditionally been considered the key neural structure in decision-making process. However, new studies support the hypothesis that describes a complex neural network including both cortical and subcortical structures. The aim of this review is to summarise evidence on the anatomical structures underlying the decision-making process, considering new findings that support the existence of a complex neural network that gives rise to this complex neuropsychological process. Current evidence shows that the cortical structures involved in decision-making include the orbitofrontal cortex (OFC), anterior cingulate cortex (ACC), and dorsolateral prefrontal cortex (DLPFC). This process is assisted by subcortical structures including the amygdala, thalamus, and cerebellum. Findings to date show that both cortical and subcortical brain regions contribute to the decision-making process. The neural basis of decision-making is a complex neural network of cortico-cortical and cortico-subcortical connections which includes subareas of the PFC, limbic structures, and the cerebellum. Copyright © 2014 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  17. Classification of Magneto-Optic Images using Neural Networks

    NASA Technical Reports Server (NTRS)

    Nath, Shridhar; Wincheski, Buzz; Fulton, Jim; Namkung, Min

    1994-01-01

    A real-time imaging system with a neural network classifier has been incorporated on a Macintosh computer in conjunction with an MOI system. This system images rivets on aircraft aluminium structures using eddy currents and magnetic imaging. Moment invariant functions computed from the image of a rivet are used to train a multilayer perceptron neural network to classify the rivets as good or bad (rivets with cracks).
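
    The moment-invariant features mentioned above are commonly computed as Hu's seven invariants; a small sketch using OpenCV (our choice of library, not what the original Macintosh system used) is shown below, with log-scaling as a common convention to bring the invariants into a usable numeric range.

        import cv2
        import numpy as np

        def rivet_features(gray_image):
            """Seven Hu moment invariants of a rivet image, log-scaled (illustrative)."""
            hu = cv2.HuMoments(cv2.moments(gray_image)).flatten()
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

        # A feature vector like this would be fed to the multilayer perceptron classifier.
        image = (np.random.rand(64, 64) * 255).astype(np.uint8)
        features = rivet_features(image)      # shape: (7,)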

  18. Natural language acquisition in large scale neural semantic networks

    NASA Astrophysics Data System (ADS)

    Ealey, Douglas

    This thesis puts forward the view that a purely signal-based approach to natural language processing is both plausible and desirable. By questioning the veracity of symbolic representations of meaning, it argues for a unified, non-symbolic model of knowledge representation that is both biologically plausible and, potentially, highly efficient. Processes to generate a grounded, neural form of this model, dubbed the semantic filter, are discussed. The combined effects of local neural organisation, coincident with perceptual maturation, are used to hypothesise its nature. This theoretical model is then validated in light of a number of fundamental neurological constraints and milestones. The mechanisms of semantic and episodic development that the model predicts are then used to explain linguistic properties, such as propositions and verbs, syntax and scripting. To mimic the growth of locally densely connected structures upon an unbounded neural substrate, a system is developed that can grow arbitrarily large, data-dependent structures composed of individual self-organising neural networks. The maturational nature of the data used results in a structure in which the perception of concepts is refined by the networks, but demarcated by subsequent structure. As a consequence, the overall structure shows significant memory and computational benefits, as predicted by the cognitive and neural models. Furthermore, the localised nature of the neural architecture also avoids the increasing error sensitivity and redundancy of traditional systems as the training domain grows. The semantic and episodic filters have been demonstrated to perform as well as, or better than, more specialist networks, whilst using significantly larger vocabularies, more complex sentence forms and more natural corpora.

  19. Research on FBG-Based CFRP Structural Damage Identification Using BP Neural Network

    NASA Astrophysics Data System (ADS)

    Geng, Xiangyi; Lu, Shizeng; Jiang, Mingshun; Sui, Qingmei; Lv, Shanshan; Xiao, Hang; Jia, Yuxi; Jia, Lei

    2018-06-01

    A damage identification system for carbon fiber reinforced plastic (CFRP) structures is investigated using fiber Bragg grating (FBG) sensors and a back propagation (BP) neural network. FBG sensors are applied to construct the sensing network to detect the structural dynamic response signals generated by active actuation. The damage identification model is built on the BP neural network: the dynamic signal characteristics extracted by the Fourier transform are the inputs, and the damage states are the outputs of the model. In addition, damage is simulated by placing lumped masses with different weights instead of inducing real damage, which is confirmed to be feasible by finite element analysis (FEA). Finally, the damage identification system is verified on a CFRP plate with a 300 mm × 300 mm experimental area, accurately identifying the various damage states. The system provides a practical way for CFRP structural damage identification.

  20. Modified neural networks for rapid recovery of tokamak plasma parameters for real time control

    NASA Astrophysics Data System (ADS)

    Sengupta, A.; Ranjan, P.

    2002-07-01

    Two modified neural network techniques are used for the identification of the equilibrium plasma parameters of the Superconducting Steady State Tokamak I from external magnetic measurements. This is expected to ultimately assist in real-time plasma control. Unlike the conventional structure, in which a single network with the optimum number of processing elements calculates the outputs, the first method uses a multinetwork system connected in parallel to do the calculations. This network is called the double neural network. The accuracy of the recovered parameters is clearly higher than that of the conventional network. The other type of neural network used here is based on statistical function parametrization combined with a neural network. A principal component transformation removes linear dependences from the measurements, and a dimensional reduction process reduces the dimensionality of the input space. This reduced and transformed input set, rather than the entire set, is fed into the neural network input. This is known as the principal component transformation-based neural network. The accuracy of the parameters recovered by the latter type of modified network is found to be a further improvement over the accuracy of the double neural network. This result differs from that obtained in an earlier work where the double neural network showed better performance. The conventional network and the function parametrization methods have also been used for comparison. The conventional network has been used for an optimization of the set of magnetic diagnostics. The effective set of sensors, as assessed by this network, is compared with that of the principal component based network. Fault tolerance of the neural networks has been tested: the double neural network showed the maximum resistance to faults in the diagnostics, while the principal component based network performed poorly. Finally, the processing times of the methods have been compared. The double network and the principal component network involve the minimum computation time, although the conventional network also performs well enough to be used in real time.
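
    A rough sketch of the principal-component-based variant described above: the magnetic measurements are projected onto a reduced set of principal directions before being fed to a conventional feed-forward network. The sensor count, number of retained components, and network sizes are assumptions for illustration only.

        import numpy as np
        import torch
        import torch.nn as nn

        def pca_transform(measurements, n_components=10):
            """Project measurements onto their leading principal components (removes linear dependences)."""
            X = measurements - measurements.mean(axis=0)
            _, _, Vt = np.linalg.svd(X, full_matrices=False)   # rows of Vt are principal directions
            return X @ Vt[:n_components].T                     # reduced, decorrelated inputs

        signals = np.random.randn(200, 60)                     # e.g. 200 samples from 60 magnetic sensors
        reduced = torch.tensor(pca_transform(signals), dtype=torch.float32)
        mlp = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 4))   # 4 equilibrium parameters
        plasma_params = mlp(reduced)                           # shape: (200, 4)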

  1. Robust neural network with applications to credit portfolio data analysis.

    PubMed

    Feng, Yijia; Li, Runze; Sudjianto, Agus; Zhang, Yiyun

    2010-01-01

    In this article, we study nonparametric conditional quantile estimation via a neural network structure. We propose an estimation method that combines quantile regression and neural networks (robust neural network, RNN). It provides good smoothing performance in the presence of outliers and can be used to construct prediction bands. A Majorization-Minimization (MM) algorithm was developed for optimization. A Monte Carlo simulation study is conducted to assess the performance of the RNN. Comparison with other nonparametric regression methods (e.g., local linear regression and regression splines) in a real data application demonstrates the advantage of the newly proposed procedure.
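
    The quantile-regression ingredient mentioned above is usually implemented with the check (pinball) loss; a minimal version is sketched below, with a small feed-forward network standing in for the RNN of the paper and plain gradient descent standing in for the MM algorithm.

        import torch
        import torch.nn as nn

        def pinball_loss(pred, target, tau=0.5):
            """Check loss for the tau-th conditional quantile (tau = 0.5 gives the median)."""
            diff = target - pred
            return torch.mean(torch.maximum(tau * diff, (tau - 1.0) * diff))

        net = nn.Sequential(nn.Linear(5, 16), nn.Tanh(), nn.Linear(16, 1))
        x, y = torch.randn(64, 5), torch.randn(64, 1)
        opt = torch.optim.Adam(net.parameters(), lr=1e-2)
        for _ in range(200):
            opt.zero_grad()
            pinball_loss(net(x), y, tau=0.9).backward()   # fit the 0.9 conditional quantile
            opt.step()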

  2. Experimental Verification of Electric Drive Technologies Based on Artificial Intelligence Tools

    NASA Technical Reports Server (NTRS)

    Rubaai, Ahmed; Ricketts, Daniel; Kotaru, Raj; Thomas, Robert; Noga, Donald F. (Technical Monitor); Kankam, Mark D. (Technical Monitor)

    2000-01-01

    In this report, a fully integrated prototype of a flight servo control system is successfully developed and implemented using brushless dc motors. The control system is developed by the fuzzy logic theory, and implemented with a multilayer neural network. First, a neural network-based architecture is introduced for fuzzy logic control. The characteristic rules and their membership functions of fuzzy systems are represented as the processing nodes in the neural network structure. The network structure and the parameter learning are performed simultaneously and online in the fuzzy-neural network system. The structure learning is based on the partition of input space. The parameter learning is based on the supervised gradient descent method, using a delta adaptation law. Using an experimental setup, the performance of the proposed control system is evaluated under various operating conditions. Test results are presented and discussed in the report. The proposed learning control system has several advantages, namely, simple structure and learning capability, robustness, high tracking performance, and few nodes in the hidden layers. In comparison with the PI controller, the proposed fuzzy-neural network system can yield better dynamic performance with shorter settling time and without overshoot. Experimental results have shown that the proposed control system is adaptive and robust in responding to a wide range of operating conditions. In summary, the goal of this study is to design and implement advanced servosystems to actuate control surfaces for flight vehicles, namely, aircraft and helicopters, missiles and interceptors, and mini- and micro-air vehicles.

  3. Properties of a memory network in psychology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wedemann, Roseli S.; Donangelo, Raul; Carvalho, Luis A. V. de

    We have previously described neurotic psychopathology and psychoanalytic working-through by an associative memory mechanism, based on a neural network model, where memory was modelled by a Boltzmann machine (BM). Since brain neural topology is selectively structured, we simulated known microscopic mechanisms that control synaptic properties, showing that the network self-organizes to a hierarchical, clustered structure. Here, we show some statistical mechanical properties of the complex networks which result from this self-organization. They indicate that a generalization of the BM may be necessary to model memory.

  4. Properties of a memory network in psychology

    NASA Astrophysics Data System (ADS)

    Wedemann, Roseli S.; Donangelo, Raul; de Carvalho, Luís A. V.

    2007-12-01

    We have previously described neurotic psychopathology and psychoanalytic working-through by an associative memory mechanism, based on a neural network model, where memory was modelled by a Boltzmann machine (BM). Since brain neural topology is selectively structured, we simulated known microscopic mechanisms that control synaptic properties, showing that the network self-organizes to a hierarchical, clustered structure. Here, we show some statistical mechanical properties of the complex networks which result from this self-organization. They indicate that a generalization of the BM may be necessary to model memory.

  5. A renaissance of neural networks in drug discovery.

    PubMed

    Baskin, Igor I; Winkler, David; Tetko, Igor V

    2016-08-01

    Neural networks are becoming a very popular method for solving machine learning and artificial intelligence problems. The variety of neural network types and their application to drug discovery requires expert knowledge to choose the most appropriate approach. In this review, the authors discuss traditional and newly emerging neural network approaches to drug discovery. Their focus is on backpropagation neural networks and their variants, self-organizing maps and associated methods, and a relatively new technique, deep learning. The most important technical issues are discussed, including overfitting and its prevention through regularization, ensemble and multitask modeling, model interpretation, and estimation of the applicability domain. Different aspects of using neural networks in drug discovery are considered: building structure-activity models with respect to various targets; predicting drug selectivity, toxicity profiles, ADMET and physicochemical properties; characteristics of drug-delivery systems; and virtual screening. Neural networks continue to grow in importance for drug discovery. Recent developments in deep learning suggest further improvements may be gained in the analysis of large chemical data sets. It is anticipated that neural networks will be more widely used in drug discovery in the future, and applied in non-traditional areas such as drug delivery systems, biologically compatible materials, and regenerative medicine.

  6. Deinterlacing using modular neural network

    NASA Astrophysics Data System (ADS)

    Woo, Dong H.; Eom, Il K.; Kim, Yoo S.

    2004-05-01

    Deinterlacing is the conversion process from interlaced scan to progressive scan. While many previous algorithms based on weighted sums cause blurring in edge regions, deinterlacing using a neural network can reduce this blurring by recovering high-frequency components through the learning process, and is found to be robust to noise. In the proposed algorithm, the input image is divided into edge and smooth regions, and one neural network is assigned to each region. Through this process, each neural network learns only patterns that are similar, which makes learning more effective and estimation more accurate. However, even within a region there are various patterns, such as long edges and texture within the edge region. To solve this problem, a modular neural network is proposed. In the proposed modular neural network, two modules are combined at the output node. One handles the low-frequency features of the local area of the input image, and the other handles the high-frequency features. With this structure, each module can learn different patterns while compensating for the drawbacks of its counterpart, and can therefore adapt effectively to the various patterns within each region. In simulation, the proposed algorithm shows better performance than conventional deinterlacing methods and the single-neural-network method.
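
    A rough sketch of the region split plus two-module arrangement described above; the neighborhood size, edge test, and the way the two modules are combined at the output node are illustrative assumptions rather than the authors' configuration.

        import torch
        import torch.nn as nn

        class ModularInterp(nn.Module):
            """Two modules combined at the output node: one for low-, one for high-frequency features."""

            def __init__(self, n_inputs=6):                  # e.g. 3 pixels above + 3 below the missing line
                super().__init__()
                self.low = nn.Sequential(nn.Linear(n_inputs, 8), nn.Tanh(), nn.Linear(8, 1))
                self.high = nn.Sequential(nn.Linear(n_inputs, 8), nn.Tanh(), nn.Linear(8, 1))

            def forward(self, neighborhood):
                return self.low(neighborhood) + self.high(neighborhood)

        def is_edge(neighborhood, threshold=0.2):
            """Crude region test: strong vertical variation routes a window to the edge-region network."""
            return (neighborhood[:, :3].mean(dim=1) - neighborhood[:, 3:].mean(dim=1)).abs() > threshold

        edge_net, smooth_net = ModularInterp(), ModularInterp()   # one modular network per region
        windows = torch.rand(32, 6)
        mask = is_edge(windows)
        pred = torch.where(mask.unsqueeze(1), edge_net(windows), smooth_net(windows))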

  7. An evolutionary algorithm that constructs recurrent neural networks.

    PubMed

    Angeline, P J; Saunders, G M; Pollack, J B

    1994-01-01

    Standard methods for simultaneously inducing the structure and weights of recurrent neural networks limit every task to an assumed class of architectures. Such a simplification is necessary since the interactions between network structure and function are not well understood. Evolutionary computations, which include genetic algorithms and evolutionary programming, are population-based search methods that have shown promise in many similarly complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. GNARL's empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods.

  8. Predicting protein complex geometries with a neural network.

    PubMed

    Chae, Myong-Ho; Krull, Florian; Lorenzen, Stephan; Knapp, Ernst-Walter

    2010-03-01

    A major challenge of the protein docking problem is to define scoring functions that can distinguish near-native protein complex geometries from a large number of non-native geometries (decoys) generated with noncomplexed protein structures (unbound docking). In this study, we have constructed a neural network that employs the information from atom-pair distance distributions of a large number of decoys to predict protein complex geometries. We found that docking prediction can be significantly improved using two different types of polar hydrogen atoms. To train the neural network, 2000 near-native decoys of even distance distribution were used for each of the 185 considered protein complexes. The neural network normalizes the information from different protein complexes using an additional protein complex identity input neuron for each complex. The parameters of the neural network were determined such that they mimic a scoring funnel in the neighborhood of the native complex structure. The neural network approach avoids the reference state problem, which occurs in deriving knowledge-based energy functions for scoring. We show that a distance-dependent atom pair potential performs much better than a simple atom-pair contact potential. We have compared the performance of our scoring function with other empirical and knowledge-based scoring functions such as ZDOCK 3.0, ZRANK, ITScore-PP, EMPIRE, and RosettaDock. In spite of the simplicity of the method and its functional form, our neural network-based scoring function achieves a reasonable performance in rigid-body unbound docking of proteins. Proteins 2010. (c) 2009 Wiley-Liss, Inc.

  9. Solving differential equations with unknown constitutive relations as recurrent neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagge, Tobias J.; Stinis, Panagiotis; Yeung, Enoch H.

    We solve a system of ordinary differential equations with an unknown functional form of a sink (reaction rate) term. We assume that the measurements (time series) of state variables are partially available, and use a recurrent neural network to “learn” the reaction rate from this data. This is achieved by including discretized ordinary differential equations as part of a recurrent neural network training problem. We extend TensorFlow’s recurrent neural network architecture to create a simple but scalable and effective solver for the unknown functions, and apply it to a fed-batch bioreactor simulation problem. Use of techniques from recent deep learning literature enables training of functions with behavior manifesting over thousands of time steps. Our networks are structurally similar to recurrent neural networks, but differ in purpose, and require modified training strategies.
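
    As a framework-free illustration of the idea of embedding a discretized ODE in a training problem (the paper itself extends TensorFlow's recurrent architecture), the sketch below unrolls a forward-Euler discretization with a small parametric model for the unknown sink term and fits it to partially observed states; the dynamics, model form, and optimizer choice are assumptions, not the authors' setup.

```python
import numpy as np
from scipy.optimize import minimize

dt, steps = 0.1, 100
t = np.arange(steps) * dt
feed = 0.5 * (1 + np.sin(0.3 * t))          # known feed (input) term

def sink(x, params):
    """Small one-hidden-unit model of the unknown reaction-rate term."""
    w1, b1, w2 = params
    return w2 * np.tanh(w1 * x + b1)

def rollout(params):
    """Forward-Euler unrolling, analogous to an RNN over time steps."""
    x = np.zeros(steps)
    for k in range(steps - 1):
        x[k + 1] = x[k] + dt * (feed[k] - sink(x[k], params))
    return x

true_params = np.array([1.5, -0.2, 0.8])
measured = rollout(true_params)[::5]        # partially available time series

def loss(params):
    return np.mean((rollout(params)[::5] - measured) ** 2)

fit = minimize(loss, x0=np.array([1.0, 0.0, 0.5]), method="Nelder-Mead")
print(fit.x, loss(fit.x))
```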

  10. Development of a computational model on the neural activity patterns of a visual working memory in a hierarchical feedforward Network

    NASA Astrophysics Data System (ADS)

    An, Soyoung; Choi, Woochul; Paik, Se-Bum

    2015-11-01

    Understanding the mechanism of information processing in the human brain remains a unique challenge because the nonlinear interactions between the neurons in the network are extremely complex and because controlling every relevant parameter during an experiment is difficult. Therefore, a simulation using simplified computational models may be an effective approach. In the present study, we developed a general model of neural networks that can simulate nonlinear activity patterns in the hierarchical structure of a neural network system. To test our model, we first examined whether our simulation could match the previously-observed nonlinear features of neural activity patterns. Next, we performed a psychophysics experiment for a simple visual working memory task to evaluate whether the model could predict the performance of human subjects. Our studies show that the model is capable of reproducing the relationship between memory load and performance and may contribute, in part, to our understanding of how the structure of neural circuits can determine the nonlinear neural activity patterns in the human brain.

  11. A New Measure for Neural Compensation Is Positively Correlated With Working Memory and Gait Speed.

    PubMed

    Ji, Lanxin; Pearlson, Godfrey D; Hawkins, Keith A; Steffens, David C; Guo, Hua; Wang, Lihong

    2018-01-01

    Neuroimaging studies suggest that older adults may compensate for declines in brain function and cognition through reorganization of neural resources. A limitation of prior research is reliance on between-group comparisons of neural activation (e.g., younger vs. older), which cannot be used to assess compensatory ability quantitatively. It also remains unclear how compensatory ability relates to cognitive function, or how other factors such as physical exercise modulate compensatory ability. Here, we proposed a data-driven method to semi-quantitatively measure neural compensation under a challenging cognitive task, and we then explored how neural compensation relates to cognitive engagement and cognitive reserve (CR). Functional and structural magnetic resonance imaging scans were acquired for 26 healthy older adults during a face-name memory task. Spatial independent component analysis (ICA) identified the visual, attentional and left executive networks as core networks. Results show that the smaller the volumes of the gray matter (GM) structures within core networks, the more networks were needed to conduct the task (r = -0.408, p = 0.035). Therefore, the number of task-activated networks controlling for the GM volume within core networks was defined as a measure of neural compensatory ability. We found that compensatory ability correlated with working memory performance (r = 0.528, p = 0.035). Among subjects with good memory task performance, those with higher CR used fewer networks than subjects with lower CR. Among poor-performance subjects, those using more networks had higher CR. Our results indicated that using a highly cognitively demanding task to measure the number of activated neural networks could be a useful and sensitive measure of neural compensation in older adults.

  12. Applying Gradient Descent in Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people care more about solving practical issues with information technologies, and a new subject called Artificial Intelligence (AI) has emerged. One popular research interest in AI is recognition algorithms. In this paper, one of the most common algorithms for image recognition, the Convolutional Neural Network (CNN), is introduced. Understanding its theory and structure is of great significance for every scholar interested in this field. A Convolutional Neural Network is an artificial neural network that combines the mathematical operation of convolution with a neural network. The hierarchical structure of the CNN provides reliable computing speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Combined with the Back Propagation (BP) mechanism and the Gradient Descent (GD) method, CNNs have the ability to self-study and to learn in depth. Basically, BP provides backward feedback for enhancing reliability, and GD is used for the self-training process. This paper mainly discusses the CNN and the related BP and GD algorithms, including the basic structure and function of the CNN, the details of each layer, the principles and features of BP and GD, and some practical examples, with a summary at the end.
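
    A minimal sketch of backpropagation combined with gradient descent on a tiny fully connected network (not a CNN, and not code from the paper) may make the BP/GD interplay concrete; all sizes and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)   # XOR-like toy labels

W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
lr = 0.5

for epoch in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))               # sigmoid output
    # Backward pass (backpropagation of the cross-entropy gradient)
    d_out = (p - y) / len(X)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)                    # through tanh
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
    # Gradient-descent parameter update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", ((p > 0.5) == y).mean())
```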

  13. Flexible body control using neural networks

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1992-01-01

    Progress is reported on the control of the Control Structures Interaction suitcase demonstrator (a flexible structure) using neural networks and fuzzy logic. It is concluded that while control by neural nets alone (i.e., allowing the net to design a controller with no human intervention) has yielded less than optimal results, a neural net trained to emulate the existing fuzzy logic controller does produce acceptable system responses for the initial conditions examined. Also, a neural net was found to be very successful in performing the emulation step necessary for the anticipatory fuzzy controller for the CSI suitcase demonstrator. The fuzzy-neural hybrid, which exhibits good robustness and noise rejection properties, shows promise as a controller for practical flexible systems and should be evaluated further.

  14. Predicting CYP2C19 Catalytic Parameters for Enantioselective Oxidations Using Artificial Neural Networks and a Chirality Code

    PubMed Central

    Hartman, Jessica H.; Cothren, Steven D.; Park, Sun-Ha; Yun, Chul-Ho; Darsey, Jerry A.; Miller, Grover P.

    2013-01-01

    Cytochromes P450 (CYP for isoforms) play a central role in biological processes especially metabolism of chiral molecules; thus, development of computational methods to predict parameters for chiral reactions is important for advancing this field. In this study, we identified the most optimal artificial neural networks using conformation-independent chirality codes to predict CYP2C19 catalytic parameters for enantioselective reactions. Optimization of the neural networks required identifying the most suitable representation of structure among a diverse array of training substrates, normalizing distribution of the corresponding catalytic parameters (kcat, Km, and kcat/Km), and determining the best topology for networks to make predictions. Among different structural descriptors, the use of partial atomic charges according to the CHelpG scheme and inclusion of hydrogens yielded the most optimal artificial neural networks. Their training also required resolution of poorly distributed output catalytic parameters using a Box-Cox transformation. End point leave-one-out cross correlations of the best neural networks revealed that predictions for individual catalytic parameters (kcat and Km) were more consistent with experimental values than those for catalytic efficiency (kcat/Km). Lastly, neural networks predicted correctly enantioselectivity and comparable catalytic parameters measured in this study for previously uncharacterized CYP2C19 substrates, R- and S-propranolol. Taken together, these seminal computational studies for CYP2C19 are the first to predict all catalytic parameters for enantioselective reactions using artificial neural networks and thus provide a foundation for expanding the prediction of cytochrome P450 reactions to chiral drugs, pollutants, and other biologically active compounds. PMID:23673224
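
    The data-conditioning steps mentioned above (a Box-Cox transform of skewed catalytic parameters followed by leave-one-out evaluation) can be sketched as follows, assuming SciPy and scikit-learn are available; the descriptors and catalytic parameters here are synthetic stand-ins, and the network is a generic regressor rather than the authors' optimized topology.

```python
import numpy as np
from scipy.stats import boxcox
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 6))                    # synthetic chirality-code descriptors
kcat = np.exp(rng.normal(size=40)) + 0.1        # skewed catalytic parameter

y, lam = boxcox(kcat)                           # normalise the output distribution
print("Box-Cox lambda:", lam)

preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    preds[test_idx] = model.predict(X[test_idx])

r = np.corrcoef(y, preds)[0, 1]
print("leave-one-out correlation:", r)
```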

  15. Self-organization in neural networks - Applications in structural optimization

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat; Fu, B.; Berke, Laszlo

    1993-01-01

    The present paper discusses the applicability of ART (Adaptive Resonance Theory) networks, and the Hopfield and Elastic networks, in problems of structural analysis and design. A characteristic of these network architectures is the ability to classify patterns presented as inputs into specific categories. The categories may themselves represent distinct procedural solution strategies. The paper shows how this property can be adapted in the structural analysis and design problem. A second application is the use of Hopfield and Elastic networks in optimization problems. Of particular interest are problems characterized by the presence of discrete and integer design variables. The parallel computing architecture that is typical of neural networks is shown to be effective in such problems. Results of preliminary implementations in structural design problems are also included in the paper.

  16. Modeling Aircraft Wing Loads from Flight Data Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Allen, Michael J.; Dibley, Ryan P.

    2003-01-01

    Neural networks were used to model wing bending-moment loads, torsion loads, and control surface hinge-moments of the Active Aeroelastic Wing (AAW) aircraft. Accurate loads models are required for the development of control laws designed to increase roll performance through wing twist while not exceeding load limits. Inputs to the model include aircraft rates, accelerations, and control surface positions. Neural networks were chosen to model aircraft loads because they can account for uncharacterized nonlinear effects while retaining the capability to generalize. The accuracy of the neural network models was improved by first developing linear loads models to use as starting points for network training. Neural networks were then trained with flight data for rolls, loaded reversals, wind-up-turns, and individual control surface doublets for load excitation. Generalization was improved by using gain weighting and early stopping. Results are presented for neural network loads models of four wing loads and four control surface hinge moments at Mach 0.90 and an altitude of 15,000 ft. An average model prediction error reduction of 18.6 percent was calculated for the neural network models when compared to the linear models. This paper documents the input data conditioning, input parameter selection, structure, training, and validation of the neural network models.
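
    Early stopping, one of the generalization aids mentioned above, can be sketched as a patience rule on a validation metric; the loss curve below is simulated, and the epoch counts and patience value are illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def validation_loss(epoch):
    """Stand-in for a real validation pass: decreases, then drifts upward (over-fitting)."""
    return (epoch - 30) ** 2 / 900.0 + 0.05 * rng.random()

best_loss, best_epoch, patience, wait = np.inf, 0, 10, 0
for epoch in range(200):
    loss = validation_loss(epoch)       # in practice: train one epoch, then validate
    if loss < best_loss:
        best_loss, best_epoch, wait = loss, epoch, 0   # would also checkpoint weights here
    else:
        wait += 1
        if wait >= patience:            # stop when validation stops improving
            break

print("stopped at epoch", epoch, "- best epoch was", best_epoch)
```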

  17. Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Disney, Adam; Reynolds, John

    2015-01-01

    Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.

  18. Nondestructive pavement evaluation using ILLI-PAVE based artificial neural network models.

    DOT National Transportation Integrated Search

    2008-09-01

    The overall objective in this research project is to develop advanced pavement structural analysis models for more accurate solutions with fast computation schemes. Soft computing and modeling approaches, specifically the Artificial Neural Network (A...

  19. Spatial interpolation and radiological mapping of ambient gamma dose rate by using artificial neural networks and fuzzy logic methods.

    PubMed

    Yeşilkanat, Cafer Mert; Kobya, Yaşar; Taşkın, Halim; Çevik, Uğur

    2017-09-01

    The aim of this study was to determine the spatial risk dispersion of the ambient gamma dose rate (AGDR) by using both artificial neural network (ANN) and fuzzy logic (FL) methods, compare the performances of the methods, make dose estimations for intermediate stations with no previous measurements, and create dose-rate risk maps of the study area. In order to determine the dose distribution by using artificial neural networks, two main network families and five different network structures were used: feed-forward ANNs (multi-layer perceptron (MLP), radial basis function neural network (RBFNN), quantile regression neural network (QRNN)) and recurrent ANNs (Jordan networks (JN), Elman networks (EN)). In the evaluation of the estimation performance obtained for the test data, all models appear to give similar results. According to the cross-validation results obtained for explaining the AGDR distribution, Pearson's r coefficients were calculated as 0.94, 0.91, 0.89, 0.91, 0.91 and 0.92 and RMSE values were calculated as 34.78, 43.28, 63.92, 44.86, 46.77 and 37.92 for MLP, RBFNN, QRNN, JN, EN and FL, respectively. In addition, spatial risk maps showing the distribution of AGDR over the study area were created by all models, and the results were compared with the geological, topological and soil structure. Copyright © 2017 Elsevier Ltd. All rights reserved.
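
    The two comparison metrics reported above can be computed as in this short sketch; the observed and predicted dose-rate values are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
observed = 100 + 30 * rng.random(50)                    # synthetic AGDR observations
predicted = observed + rng.normal(scale=35, size=50)    # synthetic model output

r = np.corrcoef(observed, predicted)[0, 1]              # Pearson's correlation coefficient
rmse = np.sqrt(np.mean((observed - predicted) ** 2))    # root-mean-square error
print(f"Pearson r = {r:.2f}, RMSE = {rmse:.2f}")
```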

  20. Pattern classification and recognition of invertebrate functional groups using self-organizing neural networks.

    PubMed

    Zhang, WenJun

    2007-07-01

    Self-organizing neural networks can be used to mimic non-linear systems. The main objective of this study is to make pattern classification and recognition on sampling information using two self-organizing neural network models. Invertebrate functional groups sampled in the irrigated rice field were classified and recognized using one-dimensional self-organizing map and self-organizing competitive learning neural networks. Comparisons between neural network models, distance (similarity) measures, and number of neurons were conducted. The results showed that self-organizing map and self-organizing competitive learning neural network models were effective in pattern classification and recognition of sampling information. Overall the performance of one-dimensional self-organizing map neural network was better than self-organizing competitive learning neural network. The number of neurons could determine the number of classes in the classification. Different neural network models with various distance (similarity) measures yielded similar classifications. Some differences, dependent upon the specific network structure, would be found. The pattern of an unrecognized functional group was recognized with the self-organizing neural network. A relative consistent classification indicated that the following invertebrate functional groups, terrestrial blood sucker; terrestrial flyer; tourist (nonpredatory species with no known functional role other than as prey in ecosystem); gall former; collector (gather, deposit feeder); predator and parasitoid; leaf miner; idiobiont (acarine ectoparasitoid), were classified into the same group, and the following invertebrate functional groups, external plant feeder; terrestrial crawler, walker, jumper or hunter; neustonic (water surface) swimmer (semi-aquatic), were classified into another group. It was concluded that reliable conclusions could be drawn from comparisons of different neural network models that use different distance (similarity) measures. Results with the larger consistency will be more reliable.
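
    A minimal one-dimensional self-organizing map, in the spirit of the method described above (but not the authors' code), can be sketched as follows; the data, map size, and learning schedules are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
samples = rng.random((60, 4))            # synthetic invertebrate sampling vectors
n_neurons, epochs = 5, 200               # number of neurons ~ number of classes
weights = rng.random((n_neurons, 4))

for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)                      # decaying learning rate
    radius = max(1.0, n_neurons / 2 * (1 - epoch / epochs))
    for x in samples:
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))    # best-matching unit
        dist = np.abs(np.arange(n_neurons) - winner)                # 1D map distance
        influence = np.exp(-(dist ** 2) / (2 * radius ** 2))
        weights += lr * influence[:, None] * (x - weights)          # pull neighbours toward x

classes = [int(np.argmin(np.linalg.norm(weights - x, axis=1))) for x in samples]
print("class sizes:", np.bincount(classes, minlength=n_neurons))
```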

  1. Equivalent Skin Analysis of Wing Structures Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Liu, Youhua; Kapania, Rakesh K.

    2000-01-01

    An efficient method of modeling trapezoidal built-up wing structures is developed by coupling, in an indirect way, an Equivalent Plate Analysis (EPA) with Neural Networks (NN). Assumed to behave like a Mindlin plate, the wing is solved using the Ritz method with Legendre polynomials employed as the trial functions. This analysis method can be made more efficient by avoiding most of the computational effort spent on calculating contributions to the stiffness and mass matrices from each spar and rib. This is accomplished by replacing the wing inner structure with an "equivalent" material that is combined with the skin and whose properties are simulated by neural networks. The constitutive matrix, which relates the stress vector to the strain vector, and the density of the equivalent material are obtained by enforcing mass and stiffness matrix equalities with regard to the EPA in a least-squares sense. Neural networks for the material properties are trained in terms of the design variables of the wing structure. Examples show that the present method, which can be called an Equivalent Skin Analysis (ESA) of the wing structure, is more efficient than the EPA while still giving fairly good results. The present ESA is very promising for use in the early stages of wing structure design.

  2. Neural networks for structural design - An integrated system implementation

    NASA Technical Reports Server (NTRS)

    Berke, Laszlo; Hafez, Wassim; Pao, Yoh-Han

    1992-01-01

    The development of powerful automated procedures to aid the creative designer is becoming increasingly critical for complex design tasks. In the work described here Artificial Neural Nets are applied to acquire structural analysis and optimization domain expertise. Based on initial instructions from the user an automated procedure generates random instances of structural analysis and/or optimization 'experiences' that cover a desired domain. It extracts training patterns from the created instances, constructs and trains an appropriate network architecture and checks the accuracy of net predictions. The final product is a trained neural net that can estimate analysis and/or optimization results instantaneously.

  3. A review and analysis of neural networks for classification of remotely sensed multispectral imagery

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1993-01-01

    A literature survey and analysis of the use of neural networks for the classification of remotely sensed multispectral imagery is presented. As part of a brief mathematical review, the backpropagation algorithm, which is the most common method of training multi-layer networks, is discussed with an emphasis on its application to pattern recognition. The analysis is divided into five aspects of neural network classification: (1) input data preprocessing, structure, and encoding; (2) output encoding and extraction of classes; (3) network architecture, (4) training algorithms; and (5) comparisons to conventional classifiers. The advantages of the neural network method over traditional classifiers are its non-parametric nature, arbitrary decision boundary capabilities, easy adaptation to different types of data and input structures, fuzzy output values that can enhance classification, and good generalization for use with multiple images. The disadvantages of the method are slow training time, inconsistent results due to random initial weights, and the requirement of obscure initialization values (e.g., learning rate and hidden layer size). Possible techniques for ameliorating these problems are discussed. It is concluded that, although the neural network method has several unique capabilities, it will become a useful tool in remote sensing only if it is made faster, more predictable, and easier to use.

  4. High variation subarctic topsoil pollutant concentration prediction using neural network residual kriging

    NASA Astrophysics Data System (ADS)

    Sergeev, A. P.; Tarasov, D. A.; Buevich, A. G.; Subbotina, I. E.; Shichkin, A. V.; Sergeeva, M. V.; Lvova, O. A.

    2017-06-01

    The work deals with the application of neural network residual kriging (NNRK) to the spatial prediction of an abnormally distributed soil pollutant (Cr). It is known that the combination of geostatistical interpolation approaches (kriging) and neural networks leads to significantly better prediction accuracy and productivity. Generalized regression neural networks and multilayer perceptrons are classes of neural networks widely used for continuous function mapping. Each network has its own pros and cons; however, both demonstrated fast training and good mapping possibilities. In the work, we examined and compared two combined techniques: generalized regression neural network residual kriging (GRNNRK) and multilayer perceptron residual kriging (MLPRK). The case study is based on real data sets on surface contamination by chromium at a particular location in the subarctic city of Novy Urengoy, Russia, obtained during a previously conducted screening. The proposed models have been built, implemented and validated using the ArcGIS and MATLAB environments. The network structures were chosen during a computer simulation based on minimization of the RMSE. MLPRK showed the best predictive accuracy compared to the geostatistical approach (kriging) and even to GRNNRK.
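
    The two-step logic of residual kriging can be sketched as below, with scikit-learn's MLPRegressor standing in for the networks and a SciPy radial-basis-function interpolator standing in for the kriging step (an assumption for illustration, not the paper's geostatistical implementation); the coordinates and concentrations are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(6)
xy = rng.uniform(0, 10, size=(120, 2))                                    # sampling coordinates
cr = np.sin(xy[:, 0]) + 0.1 * xy[:, 1] + rng.normal(scale=0.1, size=120)  # synthetic Cr levels

# Step 1: a neural network captures the large-scale trend.
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
net.fit(xy, cr)

# Step 2: interpolate the residuals spatially (stand-in for kriging).
residuals = cr - net.predict(xy)
res_interp = RBFInterpolator(xy, residuals, smoothing=1e-3)

# Prediction at unsampled locations = network trend + interpolated residual.
grid = rng.uniform(0, 10, size=(5, 2))
prediction = net.predict(grid) + res_interp(grid)
print(prediction)
```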

  5. The neural representation of social networks.

    PubMed

    Weaverdyck, Miriam E; Parkinson, Carolyn

    2018-05-24

    The computational demands associated with navigating large, complexly bonded social groups are thought to have significantly shaped human brain evolution. Yet, research on social network representation and cognitive neuroscience have progressed largely independently. Thus, little is known about how the human brain encodes the structure of the social networks in which it is embedded. This review highlights recent work seeking to bridge this gap in understanding. While the majority of research linking social network analysis and neuroimaging has focused on relating neuroanatomy to social network size, researchers have begun to define the neural architecture that encodes social network structure, cognitive and behavioral consequences of encoding this information, and individual differences in how people represent the structure of their social world. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. A Deep Neural Network Model for Rainfall Estimation Using Polarimetric WSR-88DP Radar Observations

    NASA Astrophysics Data System (ADS)

    Tan, H.; Chandra, C. V.; Chen, H.

    2016-12-01

    Rainfall estimation based on radar measurements has been an important topic for decades. Generally, radar rainfall estimation is conducted through parametric algorithms such as the reflectivity-rainfall relation (i.e., the Z-R relation). On the other hand, neural networks have been developed for ground rainfall estimation based on radar measurements. This nonparametric approach, which takes into account both radar observations and rainfall measurements from ground rain gauges, has been demonstrated successfully for rainfall rate estimation. However, neural network-based rainfall estimation is limited in practice by model complexity and structure, data quality, and differing rainfall microphysics. Recently, the deep learning approach has been introduced in pattern recognition and machine learning. Compared to traditional neural networks, deep learning-based methodologies have a larger number of hidden layers and a more complex structure for data representation. Through a hierarchical learning process, high-level structured information and knowledge can be extracted automatically from low-level features of the data. In this paper, we introduce a novel deep neural network model for rainfall estimation based on ground polarimetric radar measurements. The model is designed to capture the complex abstractions of radar measurements at different levels using multiple layers of feature identification and extraction. The abstractions at different levels can be used independently or fused with other data sources such as satellite-based rainfall products and/or topographic data to represent the rain characteristics at a given location. In particular, the WSR-88DP radar and rain gauge data collected in the Dallas - Fort Worth Metroplex and Florida are used extensively to train the model and for demonstration purposes. A quantitative evaluation of the deep neural network-based rainfall products, based on an independent rain gauge network, will also be presented.
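
    For contrast with the deep-learning approach, the classical parametric Z-R conversion mentioned above can be sketched in a few lines; the Marshall-Palmer coefficients (a = 200, b = 1.6) are one common textbook choice, not values from this paper.

```python
import numpy as np

def rain_rate_from_reflectivity(dbz, a=200.0, b=1.6):
    """Invert the parametric Z-R relation Z = a * R**b (Z in mm^6/m^3, R in mm/h)."""
    z_linear = 10.0 ** (dbz / 10.0)       # convert reflectivity from dBZ to linear units
    return (z_linear / a) ** (1.0 / b)

for dbz in (20, 35, 50):
    print(dbz, "dBZ ->", round(rain_rate_from_reflectivity(dbz), 2), "mm/h")
```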

  7. On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks

    PubMed Central

    Tonelli, Paul; Mouret, Jean-Baptiste

    2013-01-01

    A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have been mainly studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding is, networks that are the more regular are statistically those that have the best learning abilities. PMID:24236099

  8. Tutorial: Neural networks and their potential application in nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uhrig, R.E.

    A neural network is a data processing system consisting of a number of simple, highly interconnected processing elements in an architecture inspired by the structure of the cerebral cortex portion of the brain. Hence, neural networks are often capable of doing things which humans or animals do well but which conventional computers often do poorly. Neural networks have emerged in the past few years as an area of unusual opportunity for research, development and application to a variety of real world problems. Indeed, neural networks exhibit characteristics and capabilities not provided by any other technology. Examples include reading Japanese Kanji characters and human handwriting, reading a typewritten manuscript aloud, compensating for alignment errors in robots, interpreting very "noisy" signals (e.g. electroencephalograms), modeling complex systems that cannot be modelled mathematically, and predicting whether proposed loans will be good or fail. This paper presents a brief tutorial on neural networks and describes research on the potential applications to nuclear power plants.

  9. Prediction of protein tertiary structure from sequences using a very large back-propagation neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, X.; Wilcox, G.L.

    1993-12-31

    We have implemented large scale back-propagation neural networks on a 544 node Connection Machine, CM-5, using the C language in MIMD mode. The program running on 512 processors performs backpropagation learning at 0.53 Gflops, which provides 76 million connection updates per second. We have applied the network to the prediction of protein tertiary structure from sequence information alone. A neural network with one hidden layer and 40 million connections is trained to learn the relationship between sequence and tertiary structure. The trained network yields predicted structures of some proteins on which it has not been trained, given only their sequences. Presentation of the Fourier transform of the sequences accentuates periodicity in the sequence and yields good generalization with greatly increased training efficiency. Training simulations with a large, heterologous set of protein structures (111 proteins) converge to solutions with under 2% RMS residual error within the training set (random responses give an RMS error of about 20%). Presentation of 15 sequences of related proteins in a testing set of 24 proteins yields predicted structures with less than 8% RMS residual error, indicating good apparent generalization.

  10. Computational modeling of neural plasticity for self-organization of neural networks.

    PubMed

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building up computational models of neural plasticity to replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypothesis have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models for neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line that combines findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. Predicting Item Difficulty in a Reading Comprehension Test with an Artificial Neural Network.

    ERIC Educational Resources Information Center

    Perkins, Kyle; And Others

    1995-01-01

    This article reports the results of using a three-layer back propagation artificial neural network to predict item difficulty in a reading comprehension test. Three classes of variables were examined: text structure, propositional analysis, and cognitive demand. Results demonstrate that the networks can consistently predict item difficulty. (JL)

  12. Global synchronization of memristive neural networks subject to random disturbances via distributed pinning control.

    PubMed

    Guo, Zhenyuan; Yang, Shaofu; Wang, Jun

    2016-12-01

    This paper presents theoretical results on global exponential synchronization of multiple memristive neural networks in the presence of external noise by means of two types of distributed pinning control. The multiple memristive neural networks are coupled in a general structure via a nonlinear function, which consists of a linear diffusive term and a discontinuous sign term. A pinning impulsive control law is introduced in the coupled system to synchronize all neural networks. Sufficient conditions are derived for ascertaining global exponential synchronization in mean square. In addition, a pinning adaptive control law is developed to achieve global exponential synchronization in mean square. Both pinning control laws utilize only partial state information received from the neighborhood of the controlled neural network. Simulation results are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Neural constraints on learning.

    PubMed

    Sadtler, Patrick T; Quick, Kristin M; Golub, Matthew D; Chase, Steven M; Ryu, Stephen I; Tyler-Kabara, Elizabeth C; Yu, Byron M; Batista, Aaron P

    2014-08-28

    Learning, whether motor, sensory or cognitive, requires networks of neurons to generate new activity patterns. As some behaviours are easier to learn than others, we asked if some neural activity patterns are easier to generate than others. Here we investigate whether an existing network constrains the patterns that a subset of its neurons is capable of exhibiting, and if so, what principles define this constraint. We employed a closed-loop intracortical brain-computer interface learning paradigm in which Rhesus macaques (Macaca mulatta) controlled a computer cursor by modulating neural activity patterns in the primary motor cortex. Using the brain-computer interface paradigm, we could specify and alter how neural activity mapped to cursor velocity. At the start of each session, we observed the characteristic activity patterns of the recorded neural population. The activity of a neural population can be represented in a high-dimensional space (termed the neural space), wherein each dimension corresponds to the activity of one neuron. These characteristic activity patterns comprise a low-dimensional subspace (termed the intrinsic manifold) within the neural space. The intrinsic manifold presumably reflects constraints imposed by the underlying neural circuitry. Here we show that the animals could readily learn to proficiently control the cursor using neural activity patterns that were within the intrinsic manifold. However, animals were less able to learn to proficiently control the cursor using activity patterns that were outside of the intrinsic manifold. These results suggest that the existing structure of a network can shape learning. On a timescale of hours, it seems to be difficult to learn to generate neural activity patterns that are not consistent with the existing network structure. These findings offer a network-level explanation for the observation that we are more readily able to learn new skills when they are related to the skills that we already possess.

  14. Invariant 2D object recognition using the wavelet transform and structured neural networks

    NASA Astrophysics Data System (ADS)

    Khalil, Mahmoud I.; Bayoumi, Mohamed M.

    1999-03-01

    This paper applies the dyadic wavelet transform and the structured neural networks approach to recognize 2D objects under translation, rotation, and scale transformation. Experimental results are presented and compared with traditional methods. The experimental results showed that this refined technique successfully classified the objects and outperformed some traditional methods especially in the presence of noise.

  15. Artificial Neural Network with Regular Graph for Maximum Air Temperature Forecasting:. the Effect of Decrease in Nodes Degree on Learning

    NASA Astrophysics Data System (ADS)

    Ghaderi, A. H.; Darooneh, A. H.

    The behavior of nonlinear systems can be analyzed by artificial neural networks. Air temperature change is one example of such a nonlinear system. In this work, a new neural network method is proposed for forecasting the maximum air temperature in two cities. In this method, the regular-graph concept is used to construct partially connected neural networks that have regular structures. The learning results of a fully connected ANN and of networks built with the proposed method are compared. In some cases, the proposed method gives better results than the conventional ANN. After specifying the best network, the effect of the number of input patterns on the prediction is studied, and the results show that increasing the number of input patterns has a direct effect on the prediction accuracy.
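
    One possible reading of "partially connected networks with regular structures" is a connectivity mask drawn from a random regular graph; the sketch below is an assumption-level illustration (not the authors' construction) that builds such a mask with networkx and applies it in a forward pass.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)
n_in, n_hidden, degree = 12, 12, 4          # illustrative sizes; degree < number of nodes

# Draw a d-regular graph over (input + hidden) nodes and keep only the cross connections,
# so every retained link comes from a graph in which all nodes have the same degree.
g = nx.random_regular_graph(degree, n_in + n_hidden, seed=0)
mask = np.zeros((n_in, n_hidden))
for u, v in g.edges():
    if u < n_in and v >= n_in:
        mask[u, v - n_in] = 1.0
    elif v < n_in and u >= n_in:
        mask[v, u - n_in] = 1.0

W = rng.normal(size=(n_in, n_hidden)) * mask     # partially connected weight matrix
x = rng.random(n_in)
hidden = np.tanh(x @ W)                          # forward pass uses only the masked links
print("kept connections:", int(mask.sum()), "of", n_in * n_hidden)
```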

  16. Application of neural networks to prediction of advanced composite structures mechanical response and behavior

    NASA Technical Reports Server (NTRS)

    Cios, K. J.; Vary, A.; Berke, L.; Kautz, H. E.

    1992-01-01

    Two types of neural networks were used to evaluate acousto-ultrasonic (AU) data for material characterization and mechanical response prediction. The neural networks included a simple feedforward network (backpropagation) and a radial basis function network. Comparisons of results in terms of accuracy and training time are given. Acousto-ultrasonic (AU) measurements were performed on a series of tensile specimens composed of eight laminated layers of continuous SiC fiber reinforced Ti-15-3 matrix. The frequency spectrum was dominated by frequencies of longitudinal wave resonance through the thickness of the specimen at the sending transducer. The magnitude of the frequency spectrum of the AU signal was used for calculating a stress-wave factor based on integrating the spectral distribution function, which was used for comparison with the neural network results.

  17. Development of the disable software reporting system on the basis of the neural network

    NASA Astrophysics Data System (ADS)

    Gavrylenko, S.; Babenko, O.; Ignatova, E.

    2018-04-01

    The PE structure of malicious and benign software is analyzed, features are extracted, and binary feature vectors are obtained and used as inputs for training the neural network. A software model for detecting malware based on the ART-1 neural network was developed, optimal similarity coefficients were found, and testing was performed. The results showed that the developed system can be used to identify malicious software as part of computer system protection.

  18. Spontaneous scale-free structure in adaptive networks with synchronously dynamical linking

    NASA Astrophysics Data System (ADS)

    Yuan, Wu-Jie; Zhou, Jian-Fang; Li, Qun; Chen, De-Bao; Wang, Zhen

    2013-08-01

    Inspired by the anti-Hebbian learning rule in neural systems, we study how the feedback from dynamical synchronization shapes network structure by adding new links. Through extensive numerical simulations, we find that an adaptive network spontaneously forms scale-free structure, as confirmed in many real systems. Moreover, the adaptive process produces two nontrivial power-law behaviors of deviation strength from mean activity of the network and negative degree correlation, which exists widely in technological and biological networks. Importantly, these scalings are robust to variation of the adaptive network parameters, which may have meaningful implications in the scale-free formation and manipulation of dynamical networks. Our study thus suggests an alternative adaptive mechanism for the formation of scale-free structure with negative degree correlation, which means that nodes of high degree tend to connect, on average, with others of low degree and vice versa. The relevance of the results to structure formation and dynamical property in neural networks is briefly discussed as well.

  19. Optical Calibration Process Developed for Neural-Network-Based Optical Nondestructive Evaluation Method

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2004-01-01

    A completely optical calibration process has been developed at Glenn for calibrating a neural-network-based nondestructive evaluation (NDE) method. The NDE method itself detects very small changes in the characteristic patterns or vibration mode shapes of vibrating structures as discussed in many references. The mode shapes or characteristic patterns are recorded using television or electronic holography and change when a structure experiences, for example, cracking, debonds, or variations in fastener properties. An artificial neural network can be trained to be very sensitive to changes in the mode shapes, but quantifying or calibrating that sensitivity in a consistent, meaningful, and deliverable manner has been challenging. The standard calibration approach has been difficult to implement, where the response to damage of the trained neural network is compared with the responses of vibration-measurement sensors. In particular, the vibration-measurement sensors are intrusive, insufficiently sensitive, and not numerous enough. In response to these difficulties, a completely optical alternative to the standard calibration approach was proposed and tested successfully. Specifically, the vibration mode to be monitored for structural damage was intentionally contaminated with known amounts of another mode, and the response of the trained neural network was measured as a function of the peak-to-peak amplitude of the contaminating mode. The neural network calibration technique essentially uses the vibration mode shapes of the undamaged structure as standards against which the changed mode shapes are compared. The published response of the network can be made nearly independent of the contaminating mode, if enough vibration modes are used to train the net. The sensitivity of the neural network can be adjusted for the environment in which the test is to be conducted. The response of a neural network trained with measured vibration patterns for use on a vibration isolation table in the presence of various sources of laboratory noise is shown. The output of the neural network is called the degradable classification index. The curve was generated by a simultaneous comparison of means, and it shows a peak-to-peak sensitivity of about 100 nm. The following graph uses model generated data from a compressor blade to show that much higher sensitivities are possible when the environment can be controlled better. The peak-to-peak sensitivity here is about 20 nm. The training procedure was modified for the second graph, and the data were subjected to an intensity-dependent transformation called folding. All the measurements for this approach to calibration were optical. The peak-to-peak amplitudes of the vibration modes were measured using heterodyne interferometry, and the modes themselves were recorded using television (electronic) holography.

  20. Reconstruction of three-dimensional porous media using generative adversarial neural networks

    NASA Astrophysics Data System (ADS)

    Mosser, Lukas; Dubrule, Olivier; Blunt, Martin J.

    2017-10-01

    To evaluate the variability of multiphase flow properties of porous media at the pore scale, it is necessary to acquire a number of representative samples of the void-solid structure. While modern x-ray computer tomography has made it possible to extract three-dimensional images of the pore space, assessment of the variability in the inherent material properties is often experimentally not feasible. We present a method to reconstruct the solid-void structure of porous media by applying a generative neural network that allows an implicit description of the probability distribution represented by three-dimensional image data sets. We show, by using an adversarial learning approach for neural networks, that this method of unsupervised learning is able to generate representative samples of porous media that honor their statistics. We successfully compare measures of pore morphology, such as the Euler characteristic, two-point statistics, and directional single-phase permeability of synthetic realizations with the calculated properties of a bead pack, Berea sandstone, and Ketton limestone. Results show that generative adversarial networks can be used to reconstruct high-resolution three-dimensional images of porous media at different scales that are representative of the morphology of the images used to train the neural network. The fully convolutional nature of the trained neural network allows the generation of large samples while maintaining computational efficiency. Compared to classical stochastic methods of image reconstruction, the implicit representation of the learned data distribution can be stored and reused to generate multiple realizations of the pore structure very rapidly.

  1. A reverse engineering algorithm for neural networks, applied to the subthalamopallidal network of basal ganglia.

    PubMed

    Floares, Alexandru George

    2008-01-01

    Modeling neural networks with ordinary differential equations systems is a sensible approach, but also very difficult. This paper describes a new algorithm based on linear genetic programming which can be used to reverse engineer neural networks. The RODES algorithm automatically discovers the structure of the network, including neural connections, their signs and strengths, estimates its parameters, and can even be used to identify the biophysical mechanisms involved. The algorithm is tested on simulated time series data, generated using a realistic model of the subthalamopallidal network of basal ganglia. The resulting ODE system is highly accurate, and results are obtained in a matter of minutes. This is because the problem of reverse engineering a system of coupled differential equations is reduced to one of reverse engineering individual algebraic equations. The algorithm allows the incorporation of common domain knowledge to restrict the solution space. To our knowledge, this is the first time a realistic reverse engineering algorithm based on linear genetic programming has been applied to neural networks.

  2. A neural network device for on-line particle identification in cosmic ray experiments

    NASA Astrophysics Data System (ADS)

    Scrimaglio, R.; Finetti, N.; D'Altorio, L.; Rantucci, E.; Raso, M.; Segreto, E.; Tassoni, A.; Cardarilli, G. C.

    2004-05-01

    On-line particle identification is one of the main goals of many experiments in space both for rare event studies and for optimizing measurements along the orbital trajectory. Neural networks can be a useful tool for signal processing and real time data analysis in such experiments. In this document we report on the performances of a programmable neural device which was developed in VLSI analog/digital technology. Neurons and synapses were accomplished by making use of Operational Transconductance Amplifier (OTA) structures. In this paper we report on the results of measurements performed in order to verify the agreement of the characteristic curves of each elementary cell with simulations and on the device performances obtained by implementing simple neural structures on the VLSI chip. A feed-forward neural network (Multi-Layer Perceptron, MLP) was implemented on the VLSI chip and trained to identify particles by processing the signals of two-dimensional position-sensitive Si detectors. The radiation monitoring device consisted of three double-sided silicon strip detectors. From the analysis of a set of simulated data it was found that the MLP implemented on the neural device gave results comparable with those obtained with the standard method of analysis confirming that the implemented neural network could be employed for real time particle identification.

  3. Landslide Susceptibility Index Determination Using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Kawabata, D.; Bandibas, J.; Urai, M.

    2004-12-01

    The occurrence of landslides is the result of the interaction of complex and diverse environmental factors. Geomorphic features, rock types and geologic structure are especially important base factors of landslide occurrence. Generating a landslide susceptibility index by defining the relationship between landslide occurrence and these base factors using conventional mathematical and statistical methods is very difficult and inaccurate. This study focuses on generating a landslide susceptibility index using artificial neural networks in the Southern Japanese Alps. The training data are geomorphic parameters (e.g. altitude, slope and aspect), geologic parameters (e.g. rock type, distance from a geologic boundary and geologic dip-strike angle) and landslide occurrences. An artificial neural network structure and training scheme are formulated to generate the index. Data from areas with and without landslide occurrences are used to train the network. The network is trained to output 1 when the input data are from areas with landslides and 0 when no landslide occurred. The trained network generates an output ranging from 0 to 1, reflecting the possibility of landslide occurrence based on the input data. Output values nearer to 1 mean a higher possibility of landslide occurrence. The artificial neural network model is incorporated into GIS software to generate a landslide susceptibility map.
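
    A minimal sketch of the 0/1 training scheme described above, with scikit-learn's MLPClassifier as a stand-in for the network and synthetic geomorphic/geologic inputs; the class-1 probability is read as the susceptibility index in [0, 1].

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(8)
# Synthetic cells: [altitude, slope, aspect, distance to geologic boundary] (normalized).
X = rng.random((300, 4))
y = (X[:, 1] + 0.5 * X[:, 3] + 0.2 * rng.random(300) > 0.9).astype(int)  # 1 = landslide

net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
net.fit(X, y)

# The class-1 probability plays the role of the susceptibility index.
new_cells = rng.random((3, 4))
susceptibility = net.predict_proba(new_cells)[:, 1]
print(np.round(susceptibility, 2))
```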

  4. Density-based clustering: A 'landscape view' of multi-channel neural data for inference and dynamic complexity analysis.

    PubMed

    Baglietto, Gabriel; Gigante, Guido; Del Giudice, Paolo

    2017-01-01

    Two, partially interwoven, hot topics in the analysis and statistical modeling of neural data, are the development of efficient and informative representations of the time series derived from multiple neural recordings, and the extraction of information about the connectivity structure of the underlying neural network from the recorded neural activities. In the present paper we show that state-space clustering can provide an easy and effective option for reducing the dimensionality of multiple neural time series, that it can improve inference of synaptic couplings from neural activities, and that it can also allow the construction of a compact representation of the multi-dimensional dynamics, that easily lends itself to complexity measures. We apply a variant of the 'mean-shift' algorithm to perform state-space clustering, and validate it on an Hopfield network in the glassy phase, in which metastable states are largely uncorrelated from memories embedded in the synaptic matrix. In this context, we show that the neural states identified as clusters' centroids offer a parsimonious parametrization of the synaptic matrix, which allows a significant improvement in inferring the synaptic couplings from the neural activities. Moving to the more realistic case of a multi-modular spiking network, with spike-frequency adaptation inducing history-dependent effects, we propose a procedure inspired by Boltzmann learning, but extending its domain of application, to learn inter-module synaptic couplings so that the spiking network reproduces a prescribed pattern of spatial correlations; we then illustrate, in the spiking network, how clustering is effective in extracting relevant features of the network's state-space landscape. Finally, we show that the knowledge of the cluster structure allows casting the multi-dimensional neural dynamics in the form of a symbolic dynamics of transitions between clusters; as an illustration of the potential of such reduction, we define and analyze a measure of complexity of the neural time series.
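
    A minimal sketch of state-space clustering of multi-channel activity, using scikit-learn's stock MeanShift as a stand-in for the paper's mean-shift variant, followed by a symbolic sequence of cluster transitions; the activity data are synthetic and hop between three metastable states.

```python
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(9)
# Synthetic multi-channel "activity": the network hops between three metastable states.
centers = rng.normal(scale=3.0, size=(3, 10))
states = rng.integers(0, 3, size=400)
activity = centers[states] + rng.normal(scale=0.3, size=(400, 10))

ms = MeanShift()                       # bandwidth estimated automatically
labels = ms.fit_predict(activity)      # one cluster symbol per time bin

# Symbolic dynamics: sequence of transitions between cluster centroids.
transitions = [(int(a), int(b)) for a, b in zip(labels[:-1], labels[1:]) if a != b]
print("clusters found:", len(ms.cluster_centers_), "| first transitions:", transitions[:5])
```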

  5. Two-Stage Approach to Image Classification by Deep Neural Networks

    NASA Astrophysics Data System (ADS)

    Ososkov, Gennady; Goncharov, Pavel

    2018-02-01

    The paper demonstrates the advantages of deep learning networks over ordinary neural networks in a comparative application to image classification. An autoassociative neural network is used as a standalone autoencoder for prior extraction of the most informative features of the input data for the neural networks that are then compared as classifiers. Most of the effort in working with deep learning networks is spent on the painstaking work of optimizing the structures of those networks and their components, such as activation functions and weights, as well as on the procedures for minimizing their loss function to improve performance and speed up learning. It is also shown that deep autoencoders develop a remarkable ability to denoise images after being specially trained. Convolutional Neural Networks are also used to solve a topical problem of protein genetics on the example of durum wheat classification. The results of our comparative study demonstrate the clear advantage of the deep networks, as well as the denoising power of the autoencoders. In our work we use both GPU and cloud services to speed up the calculations.

  6. Forecasting of the electrical actuators condition using stator’s current signals

    NASA Astrophysics Data System (ADS)

    Kruglova, T. N.; Yaroshenko, I. V.; Rabotalov, N. N.; Melnikov, M. A.

    2017-02-01

    This article describes a forecasting method for electrical actuators realized through the combination of Fourier transformation and neural network techniques. The method allows finding the value of diagnostic functions in the iterating operating cycle and the number of operational cycles in time before the BLDC actuator fails. For forecasting of the condition of the actuator, we propose a hierarchical structure of the neural network aiming to reduce the training time of the neural network and improve estimation accuracy.

  7. Fuzzy Logic Module of Convolutional Neural Network for Handwritten Digits Recognition

    NASA Astrophysics Data System (ADS)

    Popko, E. A.; Weinstein, I. A.

    2016-08-01

    Optical character recognition is one of the important issues in the field of pattern recognition. This paper presents a method for recognizing handwritten digits based on the modeling of a convolutional neural network. An integrated fuzzy logic module based on a structural approach was developed. The system architecture uses this module to adjust the output of the neural network and improve the quality of symbol identification. It was shown that the proposed algorithm is flexible, and a high recognition rate of 99.23% was achieved.

  8. Neural network modeling of nonlinear systems based on Volterra series extension of a linear model

    NASA Technical Reports Server (NTRS)

    Soloway, Donald I.; Bialasiewicz, Jan T.

    1992-01-01

    A Volterra series approach was applied to the identification of nonlinear systems which are described by a neural network model. A procedure is outlined by which a mathematical model can be developed from experimental data obtained from the network structure. Applications of the results to the control of robotic systems are discussed.

  9. Elements of an algorithm for optimizing a parameter-structural neural network

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2016-06-01

    The field of processing information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves classic algorithms for numerical calculations in the aspect of analytical solutions that are difficult to achieve. Algorithms based on artificial intelligence in the form of artificial neural networks, including the topology of connections between neurons have become an important instrument connected to the problem of processing and modelling processes. This concept results from the integration of neural networks and parameter optimization methods and makes it possible to avoid the necessity to arbitrarily define the structure of a network. This kind of extension of the training process is exemplified by the algorithm called the Group Method of Data Handling (GMDH), which belongs to the class of evolutionary algorithms. The article presents a GMDH type network, used for modelling deformations of the geometrical axis of a steel chimney during its operation.

  10. Standard cell-based implementation of a digital optoelectronic neural-network hardware.

    PubMed

    Maier, K D; Beckstein, C; Blickhan, R; Erhard, W

    2001-03-10

    A standard cell-based implementation of a digital optoelectronic neural-network architecture is presented. The overall structure of the multilayer perceptron network that was used, the optoelectronic interconnection system between the layers, and all components required in each layer are defined. The design process, from VHDL-based modeling through synthesis and partly automatic placing and routing to the final editing of one layer of the circuit of the multilayer perceptron, is described. A suitable approach for the standard cell-based design of optoelectronic systems is presented, and shortcomings of the design tool that was used are pointed out. The layout for the microelectronic circuit of one layer in a multilayer perceptron neural network, with a performance potential one order of magnitude higher than that of purely electronic neural networks, has been successfully designed.

  11. Applications of self-organizing neural networks in virtual screening and diversity selection.

    PubMed

    Selzer, Paul; Ertl, Peter

    2006-01-01

    Artificial neural networks provide a powerful technique for the analysis and modeling of nonlinear relationships between molecular structures and pharmacological activity. Many network types, including Kohonen and counterpropagation, also provide an intuitive method for the visual assessment of correspondence between the input and output data. This work shows how a combination of neural networks and radial distribution function molecular descriptors can be applied in various areas of industrial pharmaceutical research. These applications include the prediction of biological activity, the selection of screening candidates (cherry picking), and the extraction of representative subsets from large compound collections such as combinatorial libraries. The methods described have also been implemented as an easy-to-use Web tool, allowing chemists to perform interactive neural network experiments on the Novartis intranet.

  12. Artificial Neural Networks for Processing Graphs with Application to Image Understanding: A Survey

    NASA Astrophysics Data System (ADS)

    Bianchini, Monica; Scarselli, Franco

    In graphical pattern recognition, each data item is represented as an arrangement of elements that encodes both the properties of each element and the relations among them. Hence, patterns are modelled as labelled graphs where, in general, labels can be attached to both nodes and edges. Artificial neural networks able to process graphs are a powerful tool for addressing a great variety of real-world problems in which the information is naturally organized in entities and relationships among entities; in fact, they have been widely used in computer vision, for instance in logo recognition, similarity retrieval, and object detection. In this chapter, we propose a survey of neural network models able to process structured information, with a particular focus on those architectures tailored to image understanding applications. Starting from the original recursive neural network model (RNNs), we subsequently present different ways to represent images - by trees, forests of trees, multiresolution trees, directed acyclic graphs with labelled edges, and general graphs - and, correspondingly, the neural network architectures appropriate to process such structures.

  13. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    NASA Astrophysics Data System (ADS)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently there has been continuously increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles which distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structure and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to develop models of LSM with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that LSM with STDP+IP performs better than LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the better-reflected competition among different neurons in the developed SNN model, as well as the more effectively encoded and processed relevant dynamic information through its learning and self-organizing mechanism. This result gives insights into the optimization of computational models of spiking neural networks with neural plasticity.
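
    The interplay of the two plasticity rules can be illustrated with a minimal sketch (a toy recurrent network, not the paper's LSM): connection weights are updated by a pair-based STDP rule from spike-time differences, while an intrinsic-plasticity-style rule nudges each neuron's firing threshold toward a target activity level. Network size, the STDP constants, the target rate and the noise drive are all assumptions.

        import numpy as np

        rng = np.random.default_rng(2)

        N, steps = 20, 2000
        target_rate = 0.05                         # desired mean firing probability per step (assumption)
        A_plus, A_minus, tau = 0.01, 0.012, 20.0   # STDP parameters (assumptions)

        W = np.abs(0.1 * rng.normal(size=(N, N))); np.fill_diagonal(W, 0.0)
        threshold = np.full(N, 0.25)               # excitability regulated by IP
        last_spike = np.full(N, -np.inf)

        for t in range(steps):
            drive = W @ (last_spike == t - 1) + 0.3 * rng.random(N)
            spiking = drive > threshold
            last_spike[spiking] = t

            # STDP: potentiate pre-before-post pairs, depress the reciprocal post-before-pre synapses.
            for post in np.flatnonzero(spiking):
                dt = t - last_spike                        # time since each presynaptic spike
                causal = (dt > 0) & np.isfinite(dt)
                W[post, causal] += A_plus * np.exp(-dt[causal] / tau)
                W[causal, post] -= A_minus * np.exp(-dt[causal] / tau)
            W = np.clip(W, 0.0, 1.0)

            # Intrinsic plasticity: nudge each neuron's threshold toward the target rate.
            threshold += 0.01 * (spiking.astype(float) - target_rate)

        print("mean weight:", W.mean().round(3), " mean threshold:", threshold.mean().round(3))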

  14. THE CHOICE OF OPTIMAL STRUCTURE OF ARTIFICIAL NEURAL NETWORK CLASSIFIER INTENDED FOR CLASSIFICATION OF WELDING FLAWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sikora, R.; Chady, T.; Baniukiewicz, P.

    2010-02-22

    Nondestructive testing and evaluation are under continuous development. Currently, research is concentrated on three main topics: advancement of existing methods, introduction of novel methods, and development of artificial intelligence systems for automatic defect recognition (ADR). An automatic defect classification algorithm comprises two main tasks: creating a defect database and preparing a defect classifier. Here, the database was built using defect features that describe all geometrical and texture properties of a defect. Almost twenty carefully selected features calculated for flaws extracted from real radiograms were used. The radiograms were obtained from the shipbuilding industry and verified by a qualified operator. Two weld defect classifiers based on artificial neural networks were proposed and compared. The first model consisted of one neural network in which each output neuron corresponded to a different defect group. The second model contained five neural networks; each network had one output neuron and was responsible for detecting defects from one group. In order to evaluate the effectiveness of the neural network classifiers, the mean square errors were calculated for test radiograms and compared.

  15. The Choice of Optimal Structure of Artificial Neural Network Classifier Intended for Classification of Welding Flaws

    NASA Astrophysics Data System (ADS)

    Sikora, R.; Chady, T.; Baniukiewicz, P.; Caryk, M.; Piekarczyk, B.

    2010-02-01

    Nondestructive testing and evaluation are under continuous development. Currently, research is concentrated on three main topics: advancement of existing methods, introduction of novel methods, and development of artificial intelligence systems for automatic defect recognition (ADR). An automatic defect classification algorithm comprises two main tasks: creating a defect database and preparing a defect classifier. Here, the database was built using defect features that describe all geometrical and texture properties of a defect. Almost twenty carefully selected features calculated for flaws extracted from real radiograms were used. The radiograms were obtained from the shipbuilding industry and verified by a qualified operator. Two weld defect classifiers based on artificial neural networks were proposed and compared. The first model consisted of one neural network in which each output neuron corresponded to a different defect group. The second model contained five neural networks; each network had one output neuron and was responsible for detecting defects from one group. In order to evaluate the effectiveness of the neural network classifiers, the mean square errors were calculated for test radiograms and compared.
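
    The two classifier layouts compared above can be sketched structurally as follows (untrained networks with random weights; only the architecture and the mean-square-error evaluation step are illustrated, and the feature dimensionality, hidden-layer size and synthetic test data are assumptions):

        import numpy as np

        rng = np.random.default_rng(3)
        n_features, n_groups, hidden = 20, 5, 12   # ~20 geometrical/texture features, 5 defect groups (assumed)

        def mlp_forward(x, W1, b1, W2, b2):
            """One hidden layer with sigmoid units."""
            h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
            return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

        # Variant 1: a single network whose five output neurons code the five defect groups.
        W1 = rng.normal(scale=0.3, size=(n_features, hidden)); b1 = np.zeros(hidden)
        W2 = rng.normal(scale=0.3, size=(hidden, n_groups));   b2 = np.zeros(n_groups)

        # Variant 2: five separate networks, each with a single output neuron for "its" group.
        nets = [(rng.normal(scale=0.3, size=(n_features, hidden)), np.zeros(hidden),
                 rng.normal(scale=0.3, size=(hidden, 1)), np.zeros(1)) for _ in range(n_groups)]

        # Evaluate both layouts on synthetic test flaws with one-hot group labels,
        # using the mean square error criterion mentioned in the abstract.
        X_test = rng.normal(size=(50, n_features))
        y_test = np.eye(n_groups)[rng.integers(0, n_groups, size=50)]

        out_single = mlp_forward(X_test, W1, b1, W2, b2)
        out_multi = np.hstack([mlp_forward(X_test, *net) for net in nets])

        print("MSE, single 5-output network :", np.mean((out_single - y_test) ** 2).round(3))
        print("MSE, five 1-output networks  :", np.mean((out_multi - y_test) ** 2).round(3))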

  16. Performance of Deep and Shallow Neural Networks, the Universal Approximation Theorem, Activity Cliffs, and QSAR.

    PubMed

    Winkler, David A; Le, Tu C

    2017-01-01

    Neural networks have generated valuable Quantitative Structure-Activity/Property Relationship (QSAR/QSPR) models for a wide variety of small molecules and materials properties. They have grown in sophistication, and many of their initial problems have been overcome by modern mathematical techniques. QSAR studies have almost always used so-called "shallow" neural networks, in which there is a single hidden layer between the input and output layers. Recently, a new and potentially paradigm-shifting type of neural network based on deep learning has appeared. Deep learning methods have generated impressive improvements in image and voice recognition and are now being applied to QSAR and QSPR modelling. This paper describes the differences in approach between deep and shallow neural networks, compares their abilities to predict the properties of test sets for 15 large drug data sets (the kaggle set), discusses the results in terms of the Universal Approximation theorem for neural networks, and describes how DNN may ameliorate or remove troublesome "activity cliffs" in QSAR data sets. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Role of local network oscillations in resting-state functional connectivity.

    PubMed

    Cabral, Joana; Hugues, Etienne; Sporns, Olaf; Deco, Gustavo

    2011-07-01

    Spatio-temporally organized low-frequency fluctuations (<0.1 Hz), observed in BOLD fMRI signal during rest, suggest the existence of underlying network dynamics that emerge spontaneously from intrinsic brain processes. Furthermore, significant correlations between distinct anatomical regions-or functional connectivity (FC)-have led to the identification of several widely distributed resting-state networks (RSNs). This slow dynamics seems to be highly structured by anatomical connectivity but the mechanism behind it and its relationship with neural activity, particularly in the gamma frequency range, remains largely unknown. Indeed, direct measurements of neuronal activity have revealed similar large-scale correlations, particularly in slow power fluctuations of local field potential gamma frequency range oscillations. To address these questions, we investigated neural dynamics in a large-scale model of the human brain's neural activity. A key ingredient of the model was a structural brain network defined by empirically derived long-range brain connectivity together with the corresponding conduction delays. A neural population, assumed to spontaneously oscillate in the gamma frequency range, was placed at each network node. When these oscillatory units are integrated in the network, they behave as weakly coupled oscillators. The time-delayed interaction between nodes is described by the Kuramoto model of phase oscillators, a biologically-based model of coupled oscillatory systems. For a realistic setting of axonal conduction speed, we show that time-delayed network interaction leads to the emergence of slow neural activity fluctuations, whose patterns correlate significantly with the empirically measured FC. The best agreement of the simulated FC with the empirically measured FC is found for a set of parameters where subsets of nodes tend to synchronize although the network is not globally synchronized. Inside such clusters, the simulated BOLD signal between nodes is found to be correlated, instantiating the empirically observed RSNs. Between clusters, patterns of positive and negative correlations are observed, as described in experimental studies. These results are found to be robust with respect to a biologically plausible range of model parameters. In conclusion, our model suggests how resting-state neural activity can originate from the interplay between the local neural dynamics and the large-scale structure of the brain. Copyright © 2011 Elsevier Inc. All rights reserved.
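
    A minimal sketch of the delayed-interaction dynamics described above, using a random coupling matrix and random conduction delays as stand-ins for the empirical connectome: each node is a phase oscillator at a gamma-band natural frequency obeying dtheta_i/dt = omega_i + K * sum_j C_ij * sin(theta_j(t - tau_ij) - theta_i(t)), integrated with a simple Euler scheme. Network size, coupling strength, delay range and integration settings are assumptions.

        import numpy as np

        rng = np.random.default_rng(4)

        N, dt, T = 10, 1e-3, 2.0                 # oscillators, time step (s), duration (s)
        omega = 2 * np.pi * 40.0 * np.ones(N)    # intrinsic gamma-band frequency (40 Hz)
        K = 5.0                                  # global coupling strength (assumption)

        # Random stand-ins for the empirical structural connectivity and conduction delays.
        C = rng.random((N, N)); np.fill_diagonal(C, 0.0)
        delays = rng.uniform(0.005, 0.02, size=(N, N))        # 5-20 ms
        delay_steps = np.round(delays / dt).astype(int)

        steps = int(T / dt)
        max_delay = delay_steps.max()
        theta = np.zeros((steps + max_delay, N))
        theta[:max_delay + 1] = rng.uniform(0, 2 * np.pi, size=N)   # constant phase history

        for t in range(max_delay, steps + max_delay - 1):
            # Delayed Kuramoto interaction between the nodes.
            delayed = theta[t - delay_steps, np.arange(N)]          # theta_j(t - tau_ij), shape (N, N)
            coupling = (C * np.sin(delayed - theta[t][:, None])).sum(axis=1)
            theta[t + 1] = theta[t] + dt * (omega + K * coupling)

        # Kuramoto order parameter over the last 0.5 s: 1 = full synchrony, 0 = incoherence.
        r = np.abs(np.exp(1j * theta[-500:]).mean(axis=1)).mean()
        print("mean order parameter:", round(r, 3))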

  18. An optimally evolved connective ratio of neural networks that maximizes the occurrence of synchronized bursting behavior

    PubMed Central

    2012-01-01

    Background Synchronized bursting activity (SBA) is a remarkable dynamical behavior in both ex vivo and in vivo neural networks. Investigations of the underlying structural characteristics associated with SBA are crucial to understanding the system-level regulatory mechanism of neural network behaviors. Results In this study, artificial pulsed neural networks were established using spike response models to capture fundamental dynamics of large-scale ex vivo cortical networks. Network simulations with synaptic parameter perturbations showed the following two findings. (i) In a network with an excitatory ratio (ER) of 80-90%, its connective ratio (CR) was within a range of 10-30% when the occurrence of SBA reached the highest expectation. This result was consistent with the experimental observation in ex vivo neuronal networks, which were reported to possess a matured inhibitory synaptic ratio of 10-20% and a CR of 10-30%. (ii) No SBA occurred when a network did not contain any all-positive-interaction feedback loop (APFL) motif. In a neural network containing APFLs, the number of APFLs presented an optimal range corresponding to the maximal occurrence of SBA, which was very similar to the optimal CR. Conclusions In a neural network, the evolutionarily selected CR (10-30%) optimizes the occurrence of SBA, and the APFL serves as a pivotal network motif required to maximize the occurrence of SBA. PMID:22462685

  19. Matching algorithm of missile tail flame based on back-propagation neural network

    NASA Astrophysics Data System (ADS)

    Huang, Da; Huang, Shucai; Tang, Yidong; Zhao, Wei; Cao, Wenhuan

    2018-02-01

    This work presents a spectral matching algorithm for missile plume detection based on a back-propagation neural network. The radiation values of the characteristic spectrum of the missile tail flame are taken as the input of the network. The network's structure, including the number of nodes and layers, is determined according to the number of characteristic spectral bands and missile types. The network weight matrices and threshold vectors are obtained by training the network on training samples, and the performance of the network is determined by testing it on test samples. Because only a small amount of data is required, the network has the advantages of a simple structure and practicality. The network structure, composed of weight matrices and threshold vectors, can complete the spectrum-matching task without the support of a large database and can meet real-time requirements with a small quantity of data. Experimental results show that the algorithm matches the spectrum precisely and is strongly robust.

  20. Predicting Item Difficulty in a Reading Comprehension Test with an Artificial Neural Network.

    ERIC Educational Resources Information Center

    Perkins, Kyle; And Others

    This paper reports the results of using a three-layer backpropagation artificial neural network to predict item difficulty in a reading comprehension test. Two network structures were developed, one with and one without a sigmoid function in the output processing unit. The data set, which consisted of a table of coded test items and corresponding…

  1. Review On Applications Of Neural Network To Computer Vision

    NASA Astrophysics Data System (ADS)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications to computer vision due to their parallel structures, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memory, the multilayer back-propagation perceptron, the self-stabilized adaptive resonance network, the hierarchically structured neocognitron, high-order correlators, networks with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers; some have been implemented with special hardware. Some systems use features of images, such as edges and profiles, as the input data form, while other systems use raw data as input signals to the networks. We present some novel ideas contained in these approaches and provide a comparison of these methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Perspectives on applications of some human vision models and neural network models are analyzed.

  2. Black Holes as Brains: Neural Networks with Area Law Entropy

    NASA Astrophysics Data System (ADS)

    Dvali, Gia

    2018-04-01

    Motivated by the potential similarities between the underlying mechanisms of the enhanced memory storage capacity in black holes and in brain networks, we construct an artificial quantum neural network based on gravity-like synaptic connections and a symmetry structure that allows us to describe the network in terms of the geometry of a d-dimensional space. We show that the network possesses a critical state in which gapless neurons emerge that appear to inhabit a (d-1)-dimensional surface, with their number given by the surface area. In the excitations of these neurons, the network can store and retrieve an exponentially large number of patterns within an arbitrarily narrow energy gap. The corresponding micro-state entropy of the brain network exhibits an area law. The neural network can be described in terms of a quantum field, by identifying the different neurons with the different momentum modes of the field, while identifying the synaptic connections among the neurons with the interactions among the corresponding momentum modes. Such a mapping allows us to attribute a well-defined sense of geometry to an intrinsically non-local system, such as the neural network, and, vice versa, it allows us to represent the quantum field model as a neural network.

  3. Parallel protein secondary structure prediction based on neural networks.

    PubMed

    Zhong, Wei; Altun, Gulsah; Tian, Xinmin; Harrison, Robert; Tai, Phang C; Pan, Yi

    2004-01-01

    Protein secondary structure prediction has a fundamental influence on today's bioinformatics research. In this work, binary and tertiary classifiers for protein secondary structure prediction are implemented on the Denoeux belief neural network (DBNN) architecture. A hydrophobicity matrix, an orthogonal matrix, BLOSUM62 and PSSM (position-specific scoring matrix) are tested separately as the encoding schemes for the DBNN. The experimental results contribute to the design of new encoding schemes. The new binary classifier for Helix versus not Helix (~H) for the DBNN produces a prediction accuracy of 87% when PSSM is used for the input profile. The performance of the DBNN binary classifier is comparable to that of other leading prediction methods. The good test results for binary classifiers open a new approach for protein structure prediction with neural networks. Because training the neural networks is time consuming, Pthreads and OpenMP are employed to parallelize the DBNN on a hyperthreading-enabled Intel architecture. The speedup for 16 Pthreads is 4.9 and the speedup for 16 OpenMP threads is 4 on the 4-processor shared-memory architecture; both are superior to the speedups reported in other research. With the new parallel training algorithm, thousands of amino acids can be processed in a reasonable amount of time. Our research also shows that hyperthreading technology for the Intel architecture is efficient for parallel biological algorithms.

  4. The C. elegans Connectome Consists of Homogenous Circuits with Defined Functional Roles

    PubMed Central

    Azulay, Aharon; Zaslaver, Alon

    2016-01-01

    A major goal of systems neuroscience is to decipher the structure-function relationship in neural networks. Here we study network functionality in light of the common-neighbor-rule (CNR) in which a pair of neurons is more likely to be connected the more common neighbors it shares. Focusing on the fully-mapped neural network of C. elegans worms, we establish that the CNR is an emerging property in this connectome. Moreover, sets of common neighbors form homogenous structures that appear in defined layers of the network. Simulations of signal propagation reveal their potential functional roles: signal amplification and short-term memory at the sensory/inter-neuron layer, and synchronized activity at the motoneuron layer supporting coordinated movement. A coarse-grained view of the neural network based on homogenous connected sets alone reveals a simple modular network architecture that is intuitive to understand. These findings provide a novel framework for analyzing larger, more complex, connectomes once these become available. PMID:27606684
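
    The common-neighbor-rule itself is easy to make concrete: for every pair of nodes, count how many neighbours they share and check how the probability of being connected grows with that count. The sketch below does this on a small modular random graph standing in for the connectome; the module sizes and connection probabilities are assumptions.

        import numpy as np

        rng = np.random.default_rng(5)

        # Toy modular network standing in for the connectome adjacency matrix.
        N = 60
        groups = np.repeat(np.arange(4), 15)            # 4 modules of 15 neurons (assumption)
        same = groups[:, None] == groups[None, :]
        p = np.where(same, 0.4, 0.05)                   # within- vs between-module connection probability
        A = (rng.random((N, N)) < p).astype(int)
        A = np.triu(A, 1); A = A + A.T                  # symmetric, no self-connections

        common = A @ A                                  # common[i, j] = number of shared neighbours
        pairs = np.triu_indices(N, k=1)
        n_common = common[pairs]
        connected = A[pairs]

        # Common-neighbor rule: P(connected) should grow with the number of common neighbours.
        for k in range(n_common.max() + 1):
            mask = n_common == k
            if mask.sum() >= 10:                        # skip poorly sampled bins
                print(f"{k} common neighbours: P(connected) = {connected[mask].mean():.2f} "
                      f"({mask.sum()} pairs)")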

  5. Adaptive Neural Network Based Control of Noncanonical Nonlinear Systems.

    PubMed

    Zhang, Yanjun; Tao, Gang; Chen, Mou

    2016-09-01

    This paper presents a new study on the adaptive neural network-based control of a class of noncanonical nonlinear systems with large parametric uncertainties. Unlike commonly studied canonical form nonlinear systems whose neural network approximation system models have explicit relative degree structures, which can directly be used to derive parameterized controllers for adaptation, noncanonical form nonlinear systems usually do not have explicit relative degrees, and thus their approximation system models are also in noncanonical forms. It is well-known that the adaptive control of noncanonical form nonlinear systems involves the parameterization of system dynamics. As demonstrated in this paper, it is also the case for noncanonical neural network approximation system models. Effective control of such systems is an open research problem, especially in the presence of uncertain parameters. This paper shows that it is necessary to reparameterize such neural network system models for adaptive control design, and that such reparameterization can be realized using a relative degree formulation, a concept yet to be studied for general neural network system models. This paper then derives the parameterized controllers that guarantee closed-loop stability and asymptotic output tracking for noncanonical form neural network system models. An illustrative example is presented with the simulation results to demonstrate the control design procedure, and to verify the effectiveness of such a new design method.

  6. Analysis of structural patterns in the brain with the complex network approach

    NASA Astrophysics Data System (ADS)

    Maksimenko, Vladimir A.; Makarov, Vladimir V.; Kharchenko, Alexander A.; Pavlov, Alexey N.; Khramova, Marina V.; Koronovskii, Alexey A.; Hramov, Alexander E.

    2015-03-01

    In this paper we study mechanisms of the phase synchronization in a model network of Van der Pol oscillators and in the neural network of the brain by consideration of macroscopic parameters of these networks. As the macroscopic characteristics of the model network we consider a summary signal produced by oscillators. Similar to the model simulations, we study EEG signals reflecting the macroscopic dynamics of neural network. We show that the appearance of the phase synchronization leads to an increased peak in the wavelet spectrum related to the dynamics of synchronized oscillators. The observed correlation between the phase relations of individual elements and the macroscopic characteristics of the whole network provides a way to detect phase synchronization in the neural networks in the cases of normal and pathological activity.

  7. Nuevas tecnicas basadas en redes neuronales para el diseno de filtros de microondas multicapa apantallados (New neural-network-based techniques for the design of shielded multilayer microwave filters)

    NASA Astrophysics Data System (ADS)

    Pascual Garcia, Juan

    In this PhD thesis, a neural-network-based method for the analysis of shielded multilayer circuits has been developed. One of the most successful analysis procedures for this kind of structure is the Integral Equation (IE) technique solved by the Method of Moments (MoM). In order to solve the IE, in the version which uses the potentials relevant to the media, it is necessary to have a formulation of the Green's functions associated with those potentials. The main computational burden of the IE resolution lies in the numerical evaluation of the Green's functions. In this work, circuit analysis has been drastically accelerated by approximating the Green's functions with neural networks: once trained, the neural networks substitute for the Green's functions in the IE. Two different types of neural networks have been used: radial basis function neural networks (RBFNN) and Chebyshev neural networks. The correct approximation of the Green's functions was made possible mainly by two operations. On the one hand, a very effective division of the input space has been developed. On the other hand, the elimination of the singularity makes it feasible to approximate slowly varying functions. Two different singularity-elimination strategies have been developed: the first is based on multiplication by the distance (rho) between the source and observation points; the second, which outperforms the first, consists of extracting two layers of spatial images from the whole summation of images. With regard to the Chebyshev neural networks, the OLS training algorithm has been applied in a novel fashion. This method allows the optimum design of this kind of neural network, so that their performance greatly exceeds that of the RBFNNs. For both networks, the time gain achieved makes the neural method profitable: the time invested in the input-space division and in the neural training is negligible after only a few circuit analyses. To demonstrate, in a practical way, the ability of the neural-network-based analysis method, two new design procedures have been developed. The first method uses genetic algorithms to optimize an initial filter which does not fulfill the established specifications. A new fitness function, especially well suited to filter design, has been defined in order to ensure the correct convergence of the optimization process; this function measures the fulfillment of the specifications and also prevents the appearance of the premature-convergence problem. The second method is founded on approximating, by means of neural networks, the relations between the electrical parameters which define the circuit response and the physical dimensions that synthesize those parameters. The neural networks trained with these data can be used in the design of many circuits in a given structure. Both methods have shown their ability in the design of practical filters.
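
    The central trick, approximating a Green's-function-like quantity with a radial basis function network after removing its singularity, can be sketched on a toy 1-D kernel. The real thesis works with the spatial-image series of the shielded multilayer structure rather than this kernel, and the Gaussian centres and widths below are assumptions: the network is trained on rho*G(rho), which is smooth, and the 1/rho factor is divided back in when the network replaces the Green's function.

        import numpy as np

        # Toy Green's-function-like kernel with a 1/rho singularity at the origin.
        def G(rho):
            return np.exp(-2.0 * rho) / rho

        rho_train = np.linspace(0.01, 1.0, 200)
        target = rho_train * G(rho_train)          # singularity removed: rho * G is smooth

        # Radial basis function network: fixed Gaussian centres, linear output weights.
        centres = np.linspace(0.0, 1.0, 15)
        width = 0.08
        def design(rho):
            return np.exp(-((rho[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

        Phi = design(rho_train)
        w, *_ = np.linalg.lstsq(Phi, target, rcond=None)   # least-squares output layer

        # Use the trained RBFNN in place of the Green's function (divide the singularity back in).
        rho_test = np.linspace(0.02, 0.95, 7)
        approx = design(rho_test) @ w / rho_test
        print("max relative error:", np.max(np.abs(approx - G(rho_test)) / G(rho_test)).round(4))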

  8. Learning the Relationship between the Primary Structure of HIV Envelope Glycoproteins and Neutralization Activity of Particular Antibodies by Using Artificial Neural Networks

    PubMed Central

    Buiu, Cătălin; Putz, Mihai V.; Avram, Speranta

    2016-01-01

    The dependency between the primary structure of HIV envelope glycoproteins (ENV) and the neutralization data for given antibodies is very complicated and depends on a large number of factors, such as the binding affinity of a given antibody for a given ENV protein, and the intrinsic infection kinetics of the viral strain. This paper presents a first approach to learning these dependencies using an artificial feedforward neural network which is trained to learn from experimental data. The results presented here demonstrate that the trained neural network is able to generalize on new viral strains and to predict reliable values of neutralizing activities of given antibodies against HIV-1. PMID:27727189

  9. Evolutionary image simplification for lung nodule classification with convolutional neural networks.

    PubMed

    Lückehe, Daniel; von Voigt, Gabriele

    2018-05-29

    Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the learned structures by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images based on the learned structures by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides the examples of simplified images, we analyze the run time development. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm employing a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.

  10. Force Field for Water Based on Neural Network.

    PubMed

    Wang, Hao; Yang, Weitao

    2018-05-18

    We developed a novel neural network based force field for water trained on high-level ab initio theory. The force field was built on the electrostatically embedded many-body expansion method truncated at binary interactions. The many-body expansion method is a common strategy to partition the total Hamiltonian of large systems into a hierarchy of few-body terms. Neural networks were trained to represent the electrostatically embedded one-body and two-body interactions, which require as input only one- and two-water-molecule calculations at the level of the ab initio electronic structure method CCSD/aug-cc-pVDZ embedded in the molecular mechanics water environment, making this an efficient general approach to force field construction. Structural and dynamic properties of liquid water calculated with our force field show good agreement with experimental results. We constructed two sets of neural network based force fields: non-polarizable and polarizable force fields. Simulation results show that the non-polarizable force field using fixed TIP3P charges already behaves well, since polarization effects and many-body effects are implicitly included due to the electrostatic embedding scheme. Our results demonstrate that the electrostatically embedded many-body expansion combined with neural networks provides a promising and systematic way to build the next generation of force fields at high accuracy and low computational cost, especially for large systems.
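
    The bookkeeping of a many-body expansion truncated at binary interactions can be sketched as follows. Tiny untrained networks stand in for the trained one-body and two-body energy models (in the paper the two-body term is the embedded dimer interaction energy), and the descriptors, fragment geometries and network sizes below are assumptions.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(7)

        def make_net(n_in, n_hidden=8):
            """Tiny untrained MLP standing in for a trained one- or two-body energy model."""
            return (rng.normal(scale=0.1, size=(n_in, n_hidden)), np.zeros(n_hidden),
                    rng.normal(scale=0.1, size=(n_hidden, 1)), np.zeros(1))

        def net_energy(net, x):
            W1, b1, W2, b2 = net
            return (np.tanh(x @ W1 + b1) @ W2 + b2).item()

        def descriptor(coords):
            """Crude per-fragment descriptor: sorted pairwise site distances (assumption)."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            return np.sort(d[np.triu_indices(len(coords), 1)])

        # A toy box of 5 rigid 3-site "water" molecules at random positions.
        template = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
        molecules = [template + rng.uniform(0, 6, size=3) for _ in range(5)]

        one_body = make_net(n_in=3)        # 3 intramolecular distances per monomer
        two_body = make_net(n_in=15)       # 15 pairwise distances for a dimer (6 sites)

        # Many-body expansion truncated at binary interactions:
        # E_total ~ sum_i E1(i) + sum_{i<j} dE2(i, j)
        E = sum(net_energy(one_body, descriptor(m)) for m in molecules)
        for a, b in combinations(molecules, 2):
            E += net_energy(two_body, descriptor(np.vstack([a, b])))

        print("toy total energy (arbitrary units):", round(E, 4))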

  11. Microfluidic neurite guidance to study structure-function relationships in topologically-complex population-based neural networks.

    PubMed

    Honegger, Thibault; Thielen, Moritz I; Feizi, Soheil; Sanjana, Neville E; Voldman, Joel

    2016-06-22

    The central nervous system is a dense, layered, 3D interconnected network of populations of neurons, and thus recapitulating that complexity for in vitro CNS models requires methods that can create defined topologically-complex neuronal networks. Several three-dimensional patterning approaches have been developed but none have demonstrated the ability to control the connections between populations of neurons. Here we report a method using AC electrokinetic forces that can guide, accelerate, slow down and push up neurites in un-modified collagen scaffolds. We present a means to create in vitro neural networks of arbitrary complexity by using such forces to create 3D intersections of primary neuronal populations that are plated in a 2D plane. We report for the first time in vitro basic brain motifs that have been previously observed in vivo and show that their functional network is highly decorrelated to their structure. This platform can provide building blocks to reproduce in vitro the complexity of neural circuits and provide a minimalistic environment to study the structure-function relationship of the brain circuitry.

  12. Microfluidic neurite guidance to study structure-function relationships in topologically-complex population-based neural networks

    NASA Astrophysics Data System (ADS)

    Honegger, Thibault; Thielen, Moritz I.; Feizi, Soheil; Sanjana, Neville E.; Voldman, Joel

    2016-06-01

    The central nervous system is a dense, layered, 3D interconnected network of populations of neurons, and thus recapitulating that complexity for in vitro CNS models requires methods that can create defined topologically-complex neuronal networks. Several three-dimensional patterning approaches have been developed but none have demonstrated the ability to control the connections between populations of neurons. Here we report a method using AC electrokinetic forces that can guide, accelerate, slow down and push up neurites in un-modified collagen scaffolds. We present a means to create in vitro neural networks of arbitrary complexity by using such forces to create 3D intersections of primary neuronal populations that are plated in a 2D plane. We report for the first time in vitro basic brain motifs that have been previously observed in vivo and show that their functional network is highly decorrelated to their structure. This platform can provide building blocks to reproduce in vitro the complexity of neural circuits and provide a minimalistic environment to study the structure-function relationship of the brain circuitry.

  13. An Application Development Platform for Neuromorphic Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dean, Mark; Chan, Jason; Daffron, Christopher

    2016-01-01

    Dynamic Adaptive Neural Network Arrays (DANNAs) are neuromorphic computing systems developed as a hardware-based approach to the implementation of neural networks. They feature highly adaptive and programmable structural elements, which model artificial neural networks with spiking behavior. We design them to solve problems using evolutionary optimization. In this paper, we highlight the current hardware and software implementations of DANNA, including their features, functionalities and performance. We then describe the development of an Application Development Platform (ADP) to support efficient application implementation and testing of DANNA-based solutions. We conclude with future directions.

  14. Optimization of Training Sets For Neural-Net Processing of Characteristic Patterns From Vibrating Solids

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J. (Inventor)

    2006-01-01

    An artificial neural network is disclosed that processes holography-generated characteristic patterns of vibrating structures along with finite-element models. The present invention provides a folding operation for conditioning training sets so as to optimally train feed-forward neural networks to process characteristic fringe patterns. The folding operation increases the sensitivity of the feed-forward network for detecting changes in the characteristic pattern. The folding routine manipulates input pixels so that they are scaled according to their location in an intensity range rather than their position in the characteristic pattern.

  15. ER fluid applications to vibration control devices and an adaptive neural-net controller

    NASA Astrophysics Data System (ADS)

    Morishita, Shin; Ura, Tamaki

    1993-07-01

    Four applications of electrorheological (ER) fluid to vibration control actuators and an adaptive neural-net control system suitable for the controller of ER actuators are described: a shock absorber system for automobiles, a squeeze film damper bearing for rotational machines, a dynamic damper for multidegree-of-freedom structures, and a vibration isolator. An adaptive neural-net control system composed of a forward model network for structural identification and a controller network is introduced for the control system of these ER actuators. As an example study of intelligent vibration control systems, an experiment was performed in which the ER dynamic damper was attached to a beam structure and controlled by the present neural-net controller so that the vibration in several modes of the beam was reduced with a single dynamic damper.

  16. A Wavelet Neural Network Optimal Control Model for Traffic-Flow Prediction in Intelligent Transport Systems

    NASA Astrophysics Data System (ADS)

    Huang, Darong; Bai, Xing-Rong

    Based on wavelet transform and neural network theory, a traffic-flow prediction model for use in the optimal control of intelligent transport systems is constructed. First, the scale coefficients and wavelet coefficients are extracted from the online measured raw traffic-flow data via the wavelet transform. Secondly, an artificial neural network model for traffic-flow prediction is constructed and trained, using the coefficient sequences as inputs and the raw data as outputs. Simultaneously, the operating principle of the optimal control system based on the forecasting model, the network topology and the data transmission model are designed. Finally, a simulated example shows that the technique is effective and accurate. The theoretical results indicate that the wavelet neural network prediction model and its algorithms have broad prospects for practical application.
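
    A minimal sketch of the feature-extraction step described above: a one-level Haar wavelet transform splits each window of an observed traffic-flow series into scale (approximation) and wavelet (detail) coefficients, which are then used as inputs to a trained readout that predicts the next raw flow value. For brevity a linear least-squares readout stands in for the paper's artificial neural network, and the synthetic flow series and window length are assumptions.

        import numpy as np

        rng = np.random.default_rng(8)

        def haar_level1(x):
            """One-level Haar transform: scale (approximation) and wavelet (detail) coefficients."""
            pairs = x.reshape(-1, 2)
            approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
            detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
            return np.concatenate([approx, detail])

        # Synthetic traffic flow: daily periodicity plus noise (stand-in for measured data).
        t = np.arange(3000)
        flow = 200 + 80 * np.sin(2 * np.pi * t / 288) + 10 * rng.normal(size=t.size)

        window = 16
        X = np.array([haar_level1(flow[i:i + window]) for i in range(len(flow) - window)])
        y = flow[window:]                                   # raw value right after each window

        # Linear readout fit by least squares (a stand-in for the trained ANN).
        Phi = np.column_stack([X, np.ones(len(X))])
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        pred = Phi @ w
        print("RMSE of one-step-ahead prediction:", np.sqrt(np.mean((pred - y) ** 2)).round(2))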

  17. Fuzzy-neural control of an aircraft tracking camera platform

    NASA Technical Reports Server (NTRS)

    Mcgrath, Dennis

    1994-01-01

    A fuzzy-neural control system simulation was developed for the control of a camera platform used to observe aircraft on final approach to an aircraft carrier. The fuzzy-neural approach to control combines the structure of a fuzzy knowledge base with a supervised neural network's ability to adapt and improve. The performance characteristics of this hybrid system were compared to those of a fuzzy system and a neural network system developed independently to determine if the fusion of these two technologies offers any advantage over the use of one or the other. The results of this study indicate that the fuzzy-neural approach to control offers some advantages over either fuzzy or neural control alone.

  18. A hypercube compact neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rostykus, P.L.; Somani, A.K.

    1988-09-01

    A major problem facing the implementation of neural networks is the connection problem. One popular tradeoff is to remove connections, but random disconnection severely degrades a network's capabilities. The hypercube-based Compact Neural Network (CNN) has a structured architecture which, combined with a rearrangement of the memory vectors, gives a larger input space and better degradation behaviour than a cost-equivalent network with more connections. The CNNs are based on a Hopfield network. The changes from the Hopfield net include states of -1 and +1; when a node evaluates to 0, it is not biased either positive or negative but instead retains its previous state. Here L denotes the number of PEs, N the number of memories, and t_ij the weight between nodes i and j.
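
    The update rule spelled out in the abstract (+/-1 states, Hebbian weights t_ij, and a node that evaluates to 0 keeping its previous state) can be sketched on a small associative memory; the hypercube-structured connection pattern itself is not reproduced, and the sizes and probe corruption below are assumptions.

        import numpy as np

        rng = np.random.default_rng(9)

        L, N = 16, 3                                # L processing elements, N stored memories
        memories = rng.choice([-1, 1], size=(N, L))

        # Hebbian weights t_ij between PEs i and j (no self-connections).
        T = memories.T @ memories / L
        np.fill_diagonal(T, 0.0)

        def recall(probe, n_sweeps=10):
            state = probe.copy()
            for _ in range(n_sweeps):
                for i in rng.permutation(L):        # asynchronous updates
                    h = T[i] @ state
                    if h > 0:
                        state[i] = 1
                    elif h < 0:
                        state[i] = -1
                    # h == 0: keep the previous state (no positive or negative bias)
            return state

        # Recall a stored memory from a corrupted probe (3 flipped bits).
        probe = memories[0].copy()
        probe[rng.choice(L, size=3, replace=False)] *= -1
        print("recovered stored memory:", np.array_equal(recall(probe), memories[0]))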

  19. Uncovering the neuroanatomical correlates of cognitive, affective and conative theory of mind in paediatric traumatic brain injury: a neural systems perspective.

    PubMed

    Ryan, Nicholas P; Catroppa, Cathy; Beare, Richard; Silk, Timothy J; Hearps, Stephen J; Beauchamp, Miriam H; Yeates, Keith O; Anderson, Vicki A

    2017-09-01

    Deficits in theory of mind (ToM) are common after neurological insult acquired in the first and second decade of life, however the contribution of large-scale neural networks to ToM deficits in children with brain injury is unclear. Using paediatric traumatic brain injury (TBI) as a model, this study investigated the sub-acute effect of paediatric traumatic brain injury on grey-matter volume of three large-scale, domain-general brain networks (the Default Mode Network, DMN; the Central Executive Network, CEN; and the Salience Network, SN), as well as two domain-specific neural networks implicated in social-affective processes (the Cerebro-Cerebellar Mentalizing Network, CCMN and the Mirror Neuron/Empathy Network, MNEN). We also evaluated prospective structure-function relationships between these large-scale neural networks and cognitive, affective and conative ToM. 3D T1- weighted magnetic resonance imaging sequences were acquired sub-acutely in 137 children [TBI: n = 103; typically developing (TD) children: n = 34]. All children were assessed on measures of ToM at 24-months post-injury. Children with severe TBI showed sub-acute volumetric reductions in the CCMN, SN, MNEN, CEN and DMN, as well as reduced grey-matter volumes of several hub regions of these neural networks. Volumetric reductions in the CCMN and several of its hub regions, including the cerebellum, predicted poorer cognitive ToM. In contrast, poorer affective and conative ToM were predicted by volumetric reductions in the SN and MNEN, respectively. Overall, results suggest that cognitive, affective and conative ToM may be prospectively predicted by individual differences in structure of different neural systems-the CCMN, SN and MNEN, respectively. The prospective relationship between cerebellar volume and cognitive ToM outcomes is a novel finding in our paediatric brain injury sample and suggests that the cerebellum may play a role in the neural networks important for ToM. These findings are discussed in relation to neurocognitive models of ToM. We conclude that detection of sub-acute volumetric abnormalities of large-scale neural networks and their hub regions may aid in the early identification of children at risk for chronic social-cognitive impairment. © The Author (2017). Published by Oxford University Press.

  20. Effect of synapse dilution on the memory retrieval in structured attractor neural networks

    NASA Astrophysics Data System (ADS)

    Brunel, N.

    1993-08-01

    We investigate a simple model of structured attractor neural network (ANN). In this network a module codes for the category of the stored information, while another group of neurons codes for the remaining information. The probability distribution of stabilities of the patterns and the prototypes of the categories are calculated, for two different synaptic structures. The stability of the prototypes is shown to increase when the fraction of neurons coding for the category goes down. Then the effect of synapse destruction on the retrieval is studied in two opposite situations : first analytically in sparsely connected networks, then numerically in completely connected ones. In both cases the behaviour of the structured network and that of the usual homogeneous networks are compared. When lesions increase, two transitions are shown to appear in the behaviour of the structured network when one of the patterns is presented to the network. After the first transition the network recognizes the category of the pattern but not the individual pattern. After the second transition the network recognizes nothing. These effects are similar to syndromes caused by lesions in the central visual system, namely prosopagnosia and agnosia. In both types of networks (structured or homogeneous) the stability of the prototype is greater than the stability of individual patterns, however the first transition, for completely connected networks, occurs only when the network is structured.

  1. Structure and function of complex brain networks

    PubMed Central

    Sporns, Olaf

    2013-01-01

    An increasing number of theoretical and empirical studies approach the function of the human brain from a network perspective. The analysis of brain networks is made feasible by the development of new imaging acquisition methods as well as new tools from graph theory and dynamical systems. This review surveys some of these methodological advances and summarizes recent findings on the architecture of structural and functional brain networks. Studies of the structural connectome reveal several modules or network communities that are interlinked by hub regions mediating communication processes between modules. Recent network analyses have shown that network hubs form a densely linked collective called a “rich club,” centrally positioned for attracting and dispersing signal traffic. In parallel, recordings of resting and task-evoked neural activity have revealed distinct resting-state networks that contribute to functions in distinct cognitive domains. Network methods are increasingly applied in a clinical context, and their promise for elucidating neural substrates of brain and mental disorders is discussed. PMID:24174898

  2. Neural signal registration and analysis of axons grown in microchannels

    NASA Astrophysics Data System (ADS)

    Pigareva, Y.; Malishev, E.; Gladkov, A.; Kolpakov, V.; Bukatin, A.; Mukhina, I.; Kazantsev, V.; Pimashkin, A.

    2016-08-01

    Registration of neuronal bioelectrical signals remains one of the main physical tools for studying fundamental mechanisms of signal processing in the brain. Neurons generate spiking patterns which propagate through a complex map of neural network connectivity. Extracellular recording of isolated axons grown in microchannels amplifies the signal and allows detailed study of spike propagation. In this study we used hippocampal neuronal cultures grown in microfluidic devices combined with microelectrode arrays to investigate changes of electrical activity during neural network development. We found that spiking activity appears first in the microchannels 5 days after culture plating and appears on the electrodes of the overall neural network during the next 2-3 days. We conclude that this approach provides a convenient method to study neural signal processing and the development of functional structure at the single-cell and network level of a neuronal culture.

  3. Geometric Bioinspired Networks for Recognition of 2-D and 3-D Low-Level Structures and Transformations.

    PubMed

    Bayro-Corrochano, Eduardo; Vazquez-Santacruz, Eduardo; Moya-Sanchez, Eduardo; Castillo-Munis, Efrain

    2016-10-01

    This paper presents the design of radial basis function geometric bioinspired networks and their applications. Until now, the design of neural networks has been inspired by the biological models of neural networks but mostly using vector calculus and linear algebra. However, these designs have never shown the role of geometric computing. The question is how biological neural networks handle complex geometric representations involving Lie group operations like rotations. Even though the actual artificial neural networks are biologically inspired, they are just models which cannot reproduce a plausible biological process. Until now researchers have not shown how, using these models, one can incorporate them into the processing of geometric computing. Here, for the first time in the artificial neural networks domain, we address this issue by designing a kind of geometric RBF using the geometric algebra framework. As a result, using our artificial networks, we show how geometric computing can be carried out by the artificial neural networks. Such geometric neural networks have a great potential in robot vision. This is the most important aspect of this contribution to propose artificial geometric neural networks for challenging tasks in perception and action. In our experimental analysis, we show the applicability of our geometric designs, and present interesting experiments using 2-D data of real images and 3-D screw axis data. In general, our models should be used to process different types of inputs, such as visual cues, touch (texture, elasticity, temperature), taste, and sound. One important task of a perception-action system is to fuse a variety of cues coming from the environment and relate them via a sensor-motor manifold with motor modules to carry out diverse reasoned actions.

  4. Neural-Network Quantum States, String-Bond States, and Chiral Topological States

    NASA Astrophysics Data System (ADS)

    Glasser, Ivan; Pancotti, Nicola; August, Moritz; Rodriguez, Ivan D.; Cirac, J. Ignacio

    2018-01-01

    Neural-network quantum states have recently been introduced as an Ansatz for describing the wave function of quantum many-body systems. We show that there are strong connections between neural-network quantum states in the form of restricted Boltzmann machines and some classes of tensor-network states in arbitrary dimensions. In particular, we demonstrate that short-range restricted Boltzmann machines are entangled plaquette states, while fully connected restricted Boltzmann machines are string-bond states with a nonlocal geometry and low bond dimension. These results shed light on the underlying architecture of restricted Boltzmann machines and their efficiency at representing many-body quantum states. String-bond states also provide a generic way of enhancing the power of neural-network quantum states and a natural generalization to systems with larger local Hilbert space. We compare the advantages and drawbacks of these different classes of states and present a method to combine them together. This allows us to benefit from both the entanglement structure of tensor networks and the efficiency of neural-network quantum states into a single Ansatz capable of targeting the wave function of strongly correlated systems. While it remains a challenge to describe states with chiral topological order using traditional tensor networks, we show that, because of their nonlocal geometry, neural-network quantum states and their string-bond-state extension can describe a lattice fractional quantum Hall state exactly. In addition, we provide numerical evidence that neural-network quantum states can approximate a chiral spin liquid with better accuracy than entangled plaquette states and local string-bond states. Our results demonstrate the efficiency of neural networks to describe complex quantum wave functions and pave the way towards the use of string-bond states as a tool in more traditional machine-learning applications.
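
    The restricted-Boltzmann-machine Ansatz at the heart of neural-network quantum states can be sketched directly: each spin configuration s is assigned the unnormalized amplitude psi(s) = exp(sum_i a_i s_i) * prod_j 2 cosh(b_j + sum_i W_ji s_i), with complex parameters. The sketch below just evaluates this amplitude for every configuration of a toy 6-spin system with random parameters; no variational optimization is performed, and the system and hidden-layer sizes are assumptions.

        import numpy as np

        rng = np.random.default_rng(10)

        n_visible, n_hidden = 6, 12      # 6 spins, 12 hidden units (assumed sizes)

        # Complex RBM parameters, as used for neural-network quantum states.
        a = 0.1 * (rng.normal(size=n_visible) + 1j * rng.normal(size=n_visible))
        b = 0.1 * (rng.normal(size=n_hidden) + 1j * rng.normal(size=n_hidden))
        W = 0.1 * (rng.normal(size=(n_hidden, n_visible)) + 1j * rng.normal(size=(n_hidden, n_visible)))

        def rbm_amplitude(s):
            """Unnormalized amplitude psi(s) = exp(a . s) * prod_j 2 cosh(b_j + W_j . s)."""
            return np.exp(a @ s) * np.prod(2 * np.cosh(b + W @ s))

        # Amplitudes of all 2^6 spin configurations (s_i = +/-1) and the resulting probabilities.
        configs = np.array([[1 if (k >> i) & 1 else -1 for i in range(n_visible)]
                            for k in range(2 ** n_visible)])
        psi = np.array([rbm_amplitude(s) for s in configs])
        probs = np.abs(psi) ** 2
        probs /= probs.sum()
        print("most probable configuration:", configs[probs.argmax()], "p =", probs.max().round(3))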

  5. The comparison of performance by using alternative refrigerant R152a in automobile climate system with different artificial neural network models

    NASA Astrophysics Data System (ADS)

    Kalkisim, A. T.; Hasiloglu, A. S.; Bilen, K.

    2016-04-01

    Because the refrigerant R134a, which is used in automobile air-conditioning systems and has a high global warming impact, will be phased out gradually, an alternative gas that can be used without major changes to existing air-conditioning systems is desired. The aim is to obtain intermediate performance values more easily by creating a neural network model for the case in which a fluid (R152a) with thermodynamic properties close to those of R134a and a near-zero global warming impact is used in automobile air-conditioning systems. To this end, a network structure giving the most accurate result was established by identifying which model trains best with which network structure and makes the most accurate predictions, in light of the data obtained after five different ANN models were trained with three different network structures. During training of the artificial neural networks, the Quick Propagation, Quasi-Newton, Levenberg-Marquardt and Conjugate Gradient Descent Batch Back Propagation methods, each with five inputs and one output, were trained with various network structures. Over 1500 iterations were evaluated, and the most appropriate model was identified by determining the minimum error rates. The accuracy of the selected ANN model was verified by comparison with estimates made by the multi-regression method.

  6. Deep learning for computational chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goh, Garrett B.; Hodas, Nathan O.; Vishnu, Abhinav

    The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on "deep" neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields now regularly eschew prior established models in favor of deep learning models. In this review, we provide an introductory overview of the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight their ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis and property prediction. In reviewing the performance of deep neural networks, we observed consistent outperformance of state-of-the-art non-neural-network models across disparate research topics, and deep neural network based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.

  7. A fast identification algorithm for Box-Cox transformation based radial basis function neural network.

    PubMed

    Hong, Xia

    2006-07-01

    In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced using the RBF neural network to represent the transformed system output. Initially a fixed and moderate sized RBF model base is derived based on a rank revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced using Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator. The main contribution of this letter is to explore the special structure of the proposed RBF neural network for computational efficiency by utilizing the inverse of matrix block decomposition lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
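
    The core construction, transforming the system output with the Box-Cox transform y(lambda) = (y^lambda - 1)/lambda (log y for lambda = 0) and representing the transformed output with an RBF model, can be sketched as follows. Here lambda is chosen by a crude grid search over the Box-Cox profile log-likelihood rather than the paper's Gauss-Newton maximum-likelihood procedure, the RBF centres are fixed, and the synthetic data are assumptions.

        import numpy as np

        rng = np.random.default_rng(11)

        # Synthetic positive-valued system output with skewed (multiplicative) noise.
        x = np.linspace(0.1, 4.0, 200)
        y = np.exp(0.8 * np.sin(2 * x) + 1.0) * rng.lognormal(sigma=0.15, size=x.size)

        def boxcox(y, lam):
            return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

        # RBF model with fixed Gaussian centres; only the output weights are estimated.
        centres = np.linspace(0.1, 4.0, 12)
        Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * 0.3 ** 2))
        Phi = np.column_stack([Phi, np.ones(len(x))])

        best = None
        for lam in np.linspace(-1.0, 2.0, 61):          # crude grid search over lambda
            z = boxcox(y, lam)
            w, *_ = np.linalg.lstsq(Phi, z, rcond=None)
            rss = np.sum((Phi @ w - z) ** 2)
            # Profile log-likelihood of the Box-Cox model (includes the Jacobian term).
            ll = -0.5 * len(y) * np.log(rss / len(y)) + (lam - 1.0) * np.sum(np.log(y))
            if best is None or ll > best[0]:
                best = (ll, lam, w)

        print("selected Box-Cox lambda:", round(best[1], 2))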

  8. Assessing the Liquidity of Firms: Robust Neural Network Regression as an Alternative to the Current Ratio

    NASA Astrophysics Data System (ADS)

    de Andrés, Javier; Landajo, Manuel; Lorca, Pedro; Labra, Jose; Ordóñez, Patricia

    Artificial neural networks have proven to be useful tools for solving financial analysis problems such as financial distress prediction and audit risk assessment. In this paper we focus on the performance of robust (least absolute deviation-based) neural networks on measuring liquidity of firms. The problem of learning the bivariate relationship between the components (namely, current liabilities and current assets) of the so-called current ratio is analyzed, and the predictive performance of several modelling paradigms (namely, linear and log-linear regressions, classical ratios and neural networks) is compared. An empirical analysis is conducted on a representative data base from the Spanish economy. Results indicate that classical ratio models are largely inadequate as a realistic description of the studied relationship, especially when used for predictive purposes. In a number of cases, especially when the analyzed firms are microenterprises, the linear specification is improved by considering the flexible non-linear structures provided by neural networks.
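
    The robust (least-absolute-deviation-based) fitting referred to above can be sketched with a one-hidden-layer network trained by subgradient descent on the L1 loss, relating synthetic current liabilities to outlier-contaminated current assets; all data, sizes and learning settings are assumptions, and this is not the authors' estimator.

        import numpy as np

        rng = np.random.default_rng(12)

        # Synthetic firms: current assets as a noisy, outlier-contaminated function of liabilities.
        liabilities = rng.uniform(0.1, 10.0, size=500)
        assets = 1.4 * liabilities ** 0.9 + 0.3 * rng.normal(size=500)
        assets[rng.choice(500, size=25, replace=False)] *= 5.0       # gross outliers

        x = (liabilities[:, None] - liabilities.mean()) / liabilities.std()
        y = assets

        # One-hidden-layer network trained on the least-absolute-deviation (robust) loss.
        W1 = 0.5 * rng.normal(size=(1, 10)); b1 = np.zeros(10)
        W2 = 0.5 * rng.normal(size=(10, 1)); b2 = np.array([y.mean()])
        lr = 0.01
        for _ in range(5000):
            h = np.tanh(x @ W1 + b1)
            pred = (h @ W2 + b2).ravel()
            sign = np.sign(pred - y)                 # subgradient of |pred - y|
            g2 = h.T @ sign[:, None] / len(y)
            g1 = (sign[:, None] @ W2.T) * (1 - h ** 2)
            W2 -= lr * g2;                 b2 -= lr * sign.mean()
            W1 -= lr * x.T @ g1 / len(y);  b1 -= lr * g1.mean(axis=0)

        mad = np.mean(np.abs(pred - y))
        print("mean absolute deviation of the robust fit:", round(mad, 3))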

  9. A neural network with modular hierarchical learning

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F. (Inventor); Toomarian, Nikzad (Inventor)

    1994-01-01

    This invention provides a new hierarchical approach for supervised neural learning of time dependent trajectories. The modular hierarchical methodology leads to architectures which are more structured than fully interconnected networks. The networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamic effects. The advantages include the sparsity of units and connections and the modular organization. A further advantage is that learning is much more circumscribed than in fully interconnected systems. The present invention is embodied by a neural network including a plurality of neural modules, each having a pre-established performance capability, wherein each neural module has an output outputting present results of the performance capability and an input for changing the present results of the performance capability. For pattern recognition applications, the performance capability may be an oscillation capability producing a repeating wave pattern as the present results. In the preferred embodiment, each of the plurality of neural modules includes a pre-established capability portion and a performance adjustment portion connected to control the pre-established capability portion.

  10. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor data to fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor data to fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Secondly, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.

  11. Proceedings of the Government Neural Network Applications Workshop Held at Wright-Patterson AFB, Ohio on August 24-26, 1992. Volume 1

    DTIC Science & Technology

    1992-08-01

    history trace of input u(t). (b) A common network structure makes use of the feedforward tapped delay line. For this structure the memory depth D... theories and analyses that will be used worldwide for a long time to come. The reason for this contribution has generally been the government’s need to... that emulate the neural reasoning behavior of biological neural systems (e.g. the human brain). As such, they are loosely based on biological neural

  12. Optical computing and neural networks; Proceedings of the Meeting, National Chiao Tung Univ., Hsinchu, Taiwan, Dec. 16, 17, 1992

    NASA Technical Reports Server (NTRS)

    Hsu, Ken-Yuh (Editor); Liu, Hua-Kuang (Editor)

    1992-01-01

    The present conference discusses optical neural networks, photorefractive nonlinear optics, optical pattern recognition, digital and analog processors, and holography and its applications. Attention is given to bifurcating optical information processing, neural structures in digital halftoning, an exemplar-based optical neural net classifier for color pattern recognition, volume storage in photorefractive disks, and microlaser-based compact optical neuroprocessors. Also treated are the optical implementation of a feature-enhanced optical interpattern-associative neural network model and its optical implementation, an optical pattern binary dual-rail logic gate module, a theoretical analysis for holographic associative memories, joint transform correlators, image addition and subtraction via the Talbot effect, and optical wavelet-matched filters. (No individual items are abstracted in this volume)

  13. Optical computing and neural networks; Proceedings of the Meeting, National Chiao Tung Univ., Hsinchu, Taiwan, Dec. 16, 17, 1992

    NASA Astrophysics Data System (ADS)

    Hsu, Ken-Yuh; Liu, Hua-Kuang

    The present conference discusses optical neural networks, photorefractive nonlinear optics, optical pattern recognition, digital and analog processors, and holography and its applications. Attention is given to bifurcating optical information processing, neural structures in digital halftoning, an exemplar-based optical neural net classifier for color pattern recognition, volume storage in photorefractive disks, and microlaser-based compact optical neuroprocessors. Also treated are the optical implementation of a feature-enhanced optical interpattern-associative neural network model and its optical implementation, an optical pattern binary dual-rail logic gate module, a theoretical analysis for holographic associative memories, joint transform correlators, image addition and subtraction via the Talbot effect, and optical wavelet-matched filters. (No individual items are abstracted in this volume)

  14. Markov models for fMRI correlation structure: Is brain functional connectivity small world, or decomposable into networks?

    PubMed

    Varoquaux, G; Gramfort, A; Poline, J B; Thirion, B

    2012-01-01

    Correlations in the signal observed via functional Magnetic Resonance Imaging (fMRI) are expected to reveal the interactions in the underlying neural populations through hemodynamic response. In particular, they highlight distributed sets of mutually correlated regions that correspond to brain networks related to different cognitive functions. Yet graph-theoretical studies of neural connections give a different picture: that of a highly integrated system with small-world properties, i.e. local clustering but with short pathways across the complete structure. We examine the conditional independence properties of the fMRI signal, i.e. its Markov structure, to find realistic assumptions on the connectivity structure that are required to explain the observed functional connectivity. In particular we seek a decomposition of the Markov structure into segregated functional networks using decomposable graphs: a set of strongly-connected and partially overlapping cliques. We introduce a new method to efficiently extract such cliques on a large, strongly-connected graph. We compare methods learning different graph structures from functional connectivity by testing the goodness of fit of the model they learn on new data. We find that summarizing the structure as strongly-connected networks can give a good description only for very large and overlapping networks. These results highlight that Markov models are good tools to identify the structure of brain connectivity from fMRI signals, but for this purpose they must reflect the small-world properties of the underlying neural systems. Copyright © 2012 Elsevier Ltd. All rights reserved.
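
    A minimal, related sketch (not the clique-extraction method of the paper): estimate the Markov structure of simulated "fMRI" signals as a sparse Gaussian graphical model with the graphical lasso, and read the conditional-independence graph off the precision matrix. The number of regions, timepoints, and latent "networks" are arbitrary assumptions.

```python
# Sketch: sparse precision matrix as a stand-in for the Markov structure of fMRI signals.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
n_regions, n_timepoints = 15, 300
latent = rng.normal(size=(n_timepoints, 3))                    # three underlying "networks"
loadings = rng.normal(size=(3, n_regions)) * (rng.random((3, n_regions)) > 0.5)
signals = latent @ loadings + 0.5 * rng.normal(size=(n_timepoints, n_regions))

model = GraphicalLassoCV().fit(signals)
precision = model.precision_
edges = (np.abs(precision) > 1e-3) & ~np.eye(n_regions, dtype=bool)
print("number of conditional-dependence edges:", edges.sum() // 2)
```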

  15. Bilingual Lexical Interactions in an Unsupervised Neural Network Model

    ERIC Educational Resources Information Center

    Zhao, Xiaowei; Li, Ping

    2010-01-01

    In this paper we present an unsupervised neural network model of bilingual lexical development and interaction. We focus on how the representational structures of the bilingual lexicons can emerge, develop, and interact with each other as a function of the learning history. The results show that: (1) distinct representations for the two lexicons…

  16. Deep learning and the electronic structure problem

    NASA Astrophysics Data System (ADS)

    Mills, Kyle; Spanner, Michael; Tamblyn, Isaac

    In the past decade, the fields of artificial intelligence and computer vision have progressed remarkably. Supported by the enthusiasm of large tech companies, as well as significant hardware advances and the utilization of graphical processing units to accelerate computations, deep neural networks (DNN) are gaining momentum as a robust choice for many diverse machine learning applications. We have demonstrated the ability of a DNN to solve a quantum mechanical eigenvalue equation directly, without the need to compute a wavefunction, and without knowledge of the underlying physics. We have trained a convolutional neural network to predict the total energy of an electron in a confining, 2-dimensional electrostatic potential. We numerically solved the one-electron Schrödinger equation for millions of electrostatic potentials, and used this as training data for our neural network. Four classes of potentials were assessed: the canonical cases of the harmonic oscillator and infinite well, and two types of randomly generated potentials for which no analytic solution is known. We compare the performance of the neural network and consider how these results could lead to future advances in electronic structure theory.
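
    A minimal sketch of the idea (not the authors' architecture or data): a small convolutional network regresses a single energy value from a 2-D potential sampled on a grid; the grid size, layer sizes, and toy targets below are assumptions.

```python
# Sketch: CNN regression from a gridded 2-D potential to a scalar "energy".
import torch
import torch.nn as nn

class EnergyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(16 * 8 * 8, 64),
                                  nn.ReLU(), nn.Linear(64, 1))

    def forward(self, v):                   # v: (batch, 1, 32, 32) potential on a grid
        return self.head(self.features(v))

# toy stand-in for the numerically solved Schrödinger data: random "potentials", fake energies
potentials = torch.randn(64, 1, 32, 32)
energies = potentials.mean(dim=(1, 2, 3)).unsqueeze(1)          # placeholder targets

model = EnergyCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(potentials), energies)
    loss.backward()
    opt.step()
print("training MSE:", loss.item())
```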

  17. QSRR using evolved artificial neural network for 52 common pharmaceuticals and drugs of abuse in hair from UPLC-TOF-MS.

    PubMed

    Noorizadeh, Hadi; Farmany, Abbas; Narimani, Hojat; Noorizadeh, Mehrab

    2013-05-01

    A quantitative structure-retention relationship (QSRR) study based on an artificial neural network (ANN) was carried out for the prediction of the ultra-performance liquid chromatography-Time-of-Flight mass spectrometry (UPLC-TOF-MS) retention time (RT) of a set of 52 pharmaceuticals and drugs of abuse in hair. The genetic algorithm was used as a variable selection tool. A partial least squares (PLS) method was used to select the best descriptors which were used as input neurons in neural network model. For choosing the best predictive model from among comparable models, square correlation coefficient R(2) for the whole set calculated based on leave-group-out predicted values of the training set and model-derived predicted values for the test set compounds is suggested to be a good criterion. Finally, to improve the results, structure-retention relationships were followed by a non-linear approach using artificial neural networks and consequently better results were obtained. This also demonstrates the advantages of ANN. Copyright © 2011 John Wiley & Sons, Ltd.
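
    A minimal sketch of a QSRR-style pipeline on synthetic data, not the paper's GA/PLS procedure: a simple univariate filter stands in for the descriptor selection step, and an MLP predicts retention time from the selected descriptors. The descriptor matrix and retention times are random placeholders.

```python
# Sketch: descriptor selection followed by an ANN retention-time model.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(52, 120))                  # 52 compounds x 120 hypothetical descriptors
rt = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=52)   # synthetic retention times

X_tr, X_te, y_tr, y_te = train_test_split(X, rt, test_size=0.25, random_state=0)
selector = SelectKBest(f_regression, k=8).fit(X_tr, y_tr)        # stand-in for GA/PLS selection
ann = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0)
ann.fit(selector.transform(X_tr), y_tr)
print("test R^2:", ann.score(selector.transform(X_te), y_te))
```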

  18. TopologyNet: Topology based deep convolutional and multi-task neural networks for biomolecular property predictions

    PubMed Central

    2017-01-01

    Although deep learning approaches have had tremendous success in image, video and audio processing, computer vision, and speech recognition, their applications to three-dimensional (3D) biomolecular structural data sets have been hindered by the geometric and biological complexity. To address this problem we introduce the element-specific persistent homology (ESPH) method. ESPH represents 3D complex geometry by one-dimensional (1D) topological invariants and retains important biological information via a multichannel image-like representation. This representation reveals hidden structure-function relationships in biomolecules. We further integrate ESPH and deep convolutional neural networks to construct a multichannel topological neural network (TopologyNet) for the predictions of protein-ligand binding affinities and protein stability changes upon mutation. To overcome the deep learning limitations from small and noisy training sets, we propose a multi-task multichannel topological convolutional neural network (MM-TCNN). We demonstrate that TopologyNet outperforms the latest methods in the prediction of protein-ligand binding affinities, mutation induced globular protein folding free energy changes, and mutation induced membrane protein folding free energy changes. Availability: weilab.math.msu.edu/TDL/ PMID:28749969

  19. Prediction of stock market characteristics using neural networks

    NASA Astrophysics Data System (ADS)

    Pandya, Abhijit S.; Kondo, Tadashi; Shah, Trupti U.; Gandhi, Viraf R.

    1999-03-01

    International stocks trading, currency and derivative contracts play an increasingly important role for many investors. Neural networks are playing a dominant role in predicting the trends in stock markets and in currency speculation. In most economic applications, the success rate using neural networks is limited to 70 - 80%. By means of the new approach of GMDH (Group Method of Data Handling), neural network predictions can be improved further by 10 - 15%. It was observed in our study that using GMDH for short, noisy or inaccurate data samples resulted in the best-simplified model. In the GMDH model, accuracy of prediction is higher and the structure is simpler than that of the usual full physical model. As an example, prediction of the activity on the stock exchange in New York was considered. On the basis of observations in the period of Jan '95 to July '98, several variables of the stock market (S&P 500, Small Cap, Dow Jones, etc.) were predicted. A model portfolio using various stocks (Amgen, Merck, Office Depot, etc.) was built and its performance was evaluated based on neural network forecasting of the closing prices. Comparison of results was made with various neural network models such as Multilayer Perceptrons with Back Propagation, and the GMDH neural network. Variations of GMDH were studied and analysis of their performance is reported in the paper.

  20. Study on algorithm of process neural network for soft sensing in sewage disposal system

    NASA Astrophysics Data System (ADS)

    Liu, Zaiwen; Xue, Hong; Wang, Xiaoyi; Yang, Bin; Lu, Siying

    2006-11-01

    A new method of soft sensing based on a process neural network (PNN) for a sewage disposal system is presented in the paper. The PNN is an extension of the traditional neural network in which the inputs and outputs are time-varying. An aggregation operator is introduced into the process neuron, which gives the network the ability to deal with information in the two dimensions of space and time simultaneously, so that the data processing machinery of the biological neuron is imitated better than by the traditional neuron. A process neural network with a three-layer structure, in which the hidden layer consists of process neurons and the input and output layers consist of common neurons, is discussed for soft sensing. The intelligent soft sensing based on the PNN may be used to measure the effluent BOD (Biochemical Oxygen Demand) of the sewage disposal system, and a good training result of soft sensing was obtained by the method.

  1. A neural network for the identification of measured helicopter noise

    NASA Technical Reports Server (NTRS)

    Cabell, R. H.; Fuller, C. R.; O'Brien, W. F.

    1991-01-01

    The results of a preliminary study of the components of a novel acoustic helicopter identification system are described. The identification system uses the relationship between the amplitudes of the first eight harmonics in the main rotor noise spectrum to distinguish between helicopter types. Two classification algorithms are tested; a statistically optimal Bayes classifier, and a neural network adaptive classifier. The performance of these classifiers is tested using measured noise of three helicopters. The statistical classifier can correctly identify the helicopter an average of 67 percent of the time, while the neural network is correct an average of 65 percent of the time. These results indicate the need for additional study of the envelope of harmonic amplitudes as a component of a helicopter identification system. Issues concerning the implementation of the neural network classifier, such as training time and structure of the network, are discussed.
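
    A minimal sketch of the comparison described above, on synthetic data: classify "helicopter types" from eight harmonic amplitudes with a Gaussian naive Bayes classifier (a simple stand-in for the statistically optimal Bayes classifier) and an MLP. Class means and noise levels are assumptions.

```python
# Sketch: Bayes vs. neural network classification of 8-harmonic amplitude vectors.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, types = 100, 3
X = np.vstack([rng.normal(loc=rng.uniform(0, 1, 8), scale=0.3, size=(n_per_class, 8))
               for _ in range(types)])                 # synthetic harmonic-amplitude envelopes
y = np.repeat(np.arange(types), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
print("Bayes accuracy:", GaussianNB().fit(X_tr, y_tr).score(X_te, y_te))
print("MLP accuracy:  ", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                       random_state=0).fit(X_tr, y_tr).score(X_te, y_te))
```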

  2. Use of Savitzky-Golay Filter for Performances Improvement of SHM Systems Based on Neural Networks and Distributed PZT Sensors.

    PubMed

    de Oliveira, Mario A; Araujo, Nelcileno V S; da Silva, Rodolfo N; da Silva, Tony I; Epaarachchi, Jayantha

    2018-01-08

    A considerable amount of research has focused on monitoring structural damage using Structural Health Monitoring (SHM) technologies, which have seen recent advances. However, it is important to note the challenges and unresolved problems that disqualify currently developed monitoring systems. One of the frontline SHM technologies, the Electromechanical Impedance (EMI) technique, has shown its potential to overcome remaining problems and challenges. Unfortunately, the recently developed neural network algorithms have not shown significant improvements in accuracy rate and the required processing time. In order to fill this gap in advanced neural networks used with EMI techniques, this paper proposes an enhanced and reliable strategy for improving structural damage detection via: (1) Savitzky-Golay (SG) filter, using both first and second derivatives; (2) Probabilistic Neural Network (PNN); and, (3) Simplified Fuzzy ARTMAP Network (SFAN). Those three methods were employed to analyze the EMI data experimentally obtained from an aluminum plate containing three attached PZT (Lead Zirconate Titanate) patches. In this study, the damage scenarios were simulated by attaching a small metallic nut at three different positions on the aluminum plate. We found that the proposed method achieves a hit rate of more than 83%, which is significantly higher than current state-of-the-art approaches. Furthermore, this approach results in an improvement of 93% when considering the best case scenario.
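
    A minimal sketch of the preprocessing idea: smooth EMI-like signatures with a Savitzky-Golay filter, take first and second derivatives as features, and feed them to a simple classifier (an MLP here, standing in for the PNN/SFAN models). The signature generator, damage positions, and filter settings are assumptions.

```python
# Sketch: SG-derivative features of EMI signatures fed to a neural classifier.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
freqs = np.linspace(0, 1, 400)

def signature(damage_pos):                       # hypothetical EMI signature generator
    return (np.sin(20 * freqs) + 0.4 * np.exp(-((freqs - damage_pos) ** 2) / 0.002)
            + 0.05 * rng.normal(size=freqs.size))

positions = [0.25, 0.5, 0.75]                    # three simulated damage locations
X_raw = np.array([signature(p) for p in positions for _ in range(30)])
y = np.repeat(np.arange(3), 30)

d1 = savgol_filter(X_raw, window_length=21, polyorder=3, deriv=1, axis=1)
d2 = savgol_filter(X_raw, window_length=21, polyorder=3, deriv=2, axis=1)
X = np.hstack([d1, d2])                          # SG first + second derivatives as features

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
print("training hit rate:", clf.score(X, y))
```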

  3. Use of Savitzky–Golay Filter for Performances Improvement of SHM Systems Based on Neural Networks and Distributed PZT Sensors

    PubMed Central

    Araujo, Nelcileno V. S.; da Silva, Rodolfo N.; da Silva, Tony I.; Epaarachchi, Jayantha

    2018-01-01

    A considerable amount of research has focused on monitoring structural damage using Structural Health Monitoring (SHM) technologies, which have seen recent advances. However, it is important to note the challenges and unresolved problems that disqualify currently developed monitoring systems. One of the frontline SHM technologies, the Electromechanical Impedance (EMI) technique, has shown its potential to overcome remaining problems and challenges. Unfortunately, the recently developed neural network algorithms have not shown significant improvements in accuracy rate and the required processing time. In order to fill this gap in advanced neural networks used with EMI techniques, this paper proposes an enhanced and reliable strategy for improving structural damage detection via: (1) Savitzky–Golay (SG) filter, using both first and second derivatives; (2) Probabilistic Neural Network (PNN); and, (3) Simplified Fuzzy ARTMAP Network (SFAN). Those three methods were employed to analyze the EMI data experimentally obtained from an aluminum plate containing three attached PZT (Lead Zirconate Titanate) patches. In this study, the damage scenarios were simulated by attaching a small metallic nut at three different positions on the aluminum plate. We found that the proposed method achieves a hit rate of more than 83%, which is significantly higher than current state-of-the-art approaches. Furthermore, this approach results in an improvement of 93% when considering the best case scenario. PMID:29316693

  4. Probing many-body localization with neural networks

    NASA Astrophysics Data System (ADS)

    Schindler, Frank; Regnault, Nicolas; Neupert, Titus

    2017-06-01

    We show that a simple artificial neural network trained on entanglement spectra of individual states of a many-body quantum system can be used to determine the transition between a many-body localized and a thermalizing regime. Specifically, we study the Heisenberg spin-1/2 chain in a random external field. We employ a multilayer perceptron with a single hidden layer, which is trained on labeled entanglement spectra pertaining to the fully localized and fully thermal regimes. We then apply this network to classify spectra belonging to states in the transition region. For training, we use a cost function that contains, in addition to the usual error and regularization parts, a term that favors a confident classification of the transition region states. The resulting phase diagram is in good agreement with the one obtained by more conventional methods and can be computed for small systems. In particular, the neural network outperforms conventional methods in classifying individual eigenstates pertaining to a single disorder realization. It allows us to map out the structure of these eigenstates across the transition with spatial resolution. Furthermore, we analyze the network operation using the dreaming technique to show that the neural network correctly learns by itself the power-law structure of the entanglement spectra in the many-body localized regime.
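
    A minimal sketch of the training objective described above, on random placeholder data: cross-entropy on spectra from the two limiting regimes plus a term rewarding confident (low-entropy) outputs on unlabeled transition-region spectra. The spectrum length, network size, and weight of the confidence term are assumptions.

```python
# Sketch: labeled cross-entropy + confidence (low-entropy) term on transition-region spectra.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n_feat = 64                                   # assumed length of a (sorted) entanglement spectrum
net = nn.Sequential(nn.Linear(n_feat, 32), nn.ReLU(), nn.Linear(32, 2))

x_lab = torch.randn(200, n_feat)              # placeholder fully localized / fully thermal spectra
y_lab = torch.randint(0, 2, (200,))
x_trans = torch.randn(100, n_feat)            # placeholder transition-region spectra (no labels)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam = 0.1                                     # assumed weight of the confidence term
for _ in range(200):
    opt.zero_grad()
    ce = F.cross_entropy(net(x_lab), y_lab)
    p = F.softmax(net(x_trans), dim=1)
    entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()
    (ce + lam * entropy).backward()           # low entropy = confident classification
    opt.step()
print("final loss terms:", ce.item(), entropy.item())
```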

  5. New recursive-least-squares algorithms for nonlinear active control of sound and vibration using neural networks.

    PubMed

    Bouchard, M

    2001-01-01

    In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to the development of new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce a better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.

  6. Temporal neural networks and transient analysis of complex engineering systems

    NASA Astrophysics Data System (ADS)

    Uluyol, Onder

    A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the presentation of different time scales through incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
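
    A minimal sketch of a discrete gamma memory, the kind of short-term memory structure the LOGF neuron builds on (this is the standard gamma-filter recursion, not the dissertation's full network): each tap is a leaky cascade of the previous tap with decay parameter mu. The tap count and mu below are arbitrary.

```python
# Sketch: gamma memory taps x_k(t) = (1 - mu) * x_k(t-1) + mu * x_{k-1}(t-1), with x_0 = input.
import numpy as np

def gamma_memory(x, taps=4, mu=0.5):
    """Return an array of shape (len(x), taps + 1); tap 0 is the input itself."""
    g = np.zeros((len(x), taps + 1))
    for t in range(len(x)):
        g[t, 0] = x[t]
        prev = g[t - 1] if t > 0 else np.zeros(taps + 1)
        for k in range(1, taps + 1):
            g[t, k] = (1 - mu) * prev[k] + mu * prev[k - 1]
    return g

x = np.sin(np.linspace(0, 10, 200))
taps = gamma_memory(x, taps=4, mu=0.3)
print(taps.shape)   # these tap signals would feed the spatial weights of an LOGF-style neuron
```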

  7. Damage Detection Using Holography and Interferometry

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2003-01-01

    This paper reviews classical approaches to damage detection using laser holography and interferometry. The paper then details the modern uses of electronic holography and neural-net-processed characteristic patterns to detect structural damage. The design of the neural networks and the preparation of the training sets are discussed. The use of a technique to optimize the training sets, called folding, is explained. Then a training procedure is detailed that uses the holography-measured vibration modes of the undamaged structures to impart damage-detection sensitivity to the neural networks. The inspections of an optical strain gauge mounting plate and an International Space Station cold plate are presented as examples.

  8. A model for integrating elementary neural functions into delayed-response behavior.

    PubMed

    Gisiger, Thomas; Kerszberg, Michel

    2006-04-01

    It is well established that various cortical regions can implement a wide array of neural processes, yet the mechanisms which integrate these processes into behavior-producing, brain-scale activity remain elusive. We propose that an important role in this respect might be played by executive structures controlling the traffic of information between the cortical regions involved. To illustrate this hypothesis, we present a neural network model comprising a set of interconnected structures harboring stimulus-related activity (visual representation, working memory, and planning), and a group of executive units with task-related activity patterns that manage the information flowing between them. The resulting dynamics allows the network to perform the dual task of either retaining an image during a delay (delayed-matching to sample task), or recalling from this image another one that has been associated with it during training (delayed-pair association task). The model reproduces behavioral and electrophysiological data gathered on the inferior temporal and prefrontal cortices of primates performing these same tasks. It also makes predictions on how neural activity coding for the recall of the image associated with the sample emerges and becomes prospective during the training phase. The network dynamics proves to be very stable against perturbations, and it exhibits signs of scale-invariant organization and cooperativity. The present network represents a possible neural implementation for active, top-down, prospective memory retrieval in primates. The model suggests that brain activity leading to performance of cognitive tasks might be organized in modular fashion, simple neural functions becoming integrated into more complex behavior by executive structures harbored in prefrontal cortex and/or basal ganglia.

  9. Inversion of quasi-3D DC resistivity imaging data using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Neyamadpour, Ahmad; Wan Abdullah, W. A. T.; Taib, Samsudin

    2010-02-01

    The objective of this paper is to investigate the applicability of artificial neural networks in inverting quasi-3D DC resistivity imaging data. An electrical resistivity imaging survey was carried out along seven parallel lines using a dipole-dipole array to confirm the validation of the results of an inversion using an artificial neural network technique. The model used to produce synthetic data to train the artificial neural network was a homogeneous medium of 100Ωm resistivity with an embedded anomalous body of 1000Ωm resistivity. The network was trained using 21 datasets (comprising 12159 data points) and tested on another 11 synthetic datasets (comprising 6369 data points) and on real field data. Another 24 test datasets (comprising 13896 data points) consisting of different resistivities for the background and the anomalous bodies were used in order to test the interpolation and extrapolation of network properties. Different learning paradigms were tried in the training process of the neural network, with the resilient propagation paradigm being the most efficient. The number of nodes, hidden layers, and efficient values for learning rate and momentum coefficient have been studied. Although a significant correlation between results of the neural network and the conventional robust inversion technique was found, the ANN results show more details of the subsurface structure, and the RMS misfits for the results of the neural network are less than seen with conventional methods. The interpreted results show that the trained network was able to invert quasi-3D electrical resistivity imaging data obtained by dipole-dipole configuration both rapidly and accurately.

  10. Direct Adaptive Aircraft Control Using Dynamic Cell Structure Neural Networks

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1997-01-01

    A Dynamic Cell Structure (DCS) Neural Network was developed which learns topology representing networks (TRNS) of F-15 aircraft aerodynamic stability and control derivatives. The network is integrated into a direct adaptive tracking controller. The combination produces a robust adaptive architecture capable of handling multiple accident and off- nominal flight scenarios. This paper describes the DCS network and modifications to the parameter estimation procedure. The work represents one step towards an integrated real-time reconfiguration control architecture for rapid prototyping of new aircraft designs. Performance was evaluated using three off-line benchmarks and on-line nonlinear Virtual Reality simulation. Flight control was evaluated under scenarios including differential stabilator lock, soft sensor failure, control and stability derivative variations, and air turbulence.

  11. A recurrent self-organizing neural fuzzy inference network.

    PubMed

    Juang, C F; Lin, C T

    1999-01-01

    A recurrent self-organizing neural fuzzy inference network (RSONFIN) is proposed in this paper. The RSONFIN is inherently a recurrent multilayered connectionist network for realizing the basic elements and functions of dynamic fuzzy inference, and may be considered to be constructed from a series of dynamic fuzzy rules. The temporal relations embedded in the network are built by adding some feedback connections representing the memory elements to a feedforward neural fuzzy network. Each weight as well as node in the RSONFIN has its own meaning and represents a special element in a fuzzy rule. There are no hidden nodes (i.e., no membership functions and fuzzy rules) initially in the RSONFIN. They are created on-line via concurrent structure identification (the construction of dynamic fuzzy if-then rules) and parameter identification (the tuning of the free parameters of membership functions). The structure learning together with the parameter learning forms a fast learning algorithm for building a small, yet powerful, dynamic neural fuzzy network. Two major characteristics of the RSONFIN can thus be seen: 1) the recurrent property of the RSONFIN makes it suitable for dealing with temporal problems and 2) no predetermination, like the number of hidden nodes, must be given, since the RSONFIN can find its optimal structure and parameters automatically and quickly. Moreover, to reduce the number of fuzzy rules generated, a flexible input partition method, the aligned clustering-based algorithm, is proposed. Various simulations on temporal problems are done and performance comparisons with some existing recurrent networks are also made. Efficiency of the RSONFIN is verified from these results.

  12. Neural networks and traditional time series methods: a synergistic combination in state economic forecasts.

    PubMed

    Hansen, J V; Nelson, R D

    1997-01-01

    Ever since the initial planning for the 1997 Utah legislative session, neural-network forecasting techniques have provided valuable insights for analysts forecasting tax revenues. These revenue estimates are critically important since agency budgets, support for education, and improvements to infrastructure all depend on their accuracy. Underforecasting generates windfalls that concern taxpayers, whereas overforecasting produces budget shortfalls that cause inadequately funded commitments. The pattern finding ability of neural networks gives insightful and alternative views of the seasonal and cyclical components commonly found in economic time series data. Two applications of neural networks to revenue forecasting clearly demonstrate how these models complement traditional time series techniques. In the first, preoccupation with a potential downturn in the economy distracts analysis based on traditional time series methods so that it overlooks an emerging new phenomenon in the data. In this case, neural networks identify the new pattern that then allows modification of the time series models and finally gives more accurate forecasts. In the second application, data structure found by traditional statistical tools allows analysts to provide neural networks with important information that the networks then use to create more accurate models. In summary, for the Utah revenue outlook, the insights that result from a portfolio of forecasts that includes neural networks exceeds the understanding generated from strictly statistical forecasting techniques. In this case, the synergy clearly results in the whole of the portfolio of forecasts being more accurate than the sum of the individual parts.

  13. Reservoir characterization using core, well log, and seismic data and intelligent software

    NASA Astrophysics Data System (ADS)

    Soto Becerra, Rodolfo

    We have developed intelligent software, Oilfield Intelligence (OI), as an engineering tool to improve the characterization of oil and gas reservoirs. OI integrates neural networks and multivariate statistical analysis. It is composed of five main subsystems: data input, preprocessing, architecture design, graphics design, and inference engine modules. More than 1,200 lines of programming code have been written as M-files in the MATLAB language. The degree of success of many oil and gas drilling, completion, and production activities depends upon the accuracy of the models used in a reservoir description. Neural networks have been applied for the identification of nonlinear systems in almost all scientific fields. Solving reservoir characterization problems is no exception. Neural networks have a number of attractive features that can help to extract and recognize underlying patterns, structures, and relationships among data. However, before developing a neural network model, we must solve the problem of dimensionality, such as determining dominant and irrelevant variables. We can apply principal components and factor analysis to reduce the dimensionality and help the neural networks formulate more realistic models. We validated OI by obtaining confident models in three different oil field problems: (1) a neural network in-situ stress model using lithology and gamma ray logs for the Travis Peak formation of east Texas; (2) a neural network permeability model using porosity and gamma ray logs, and a neural network pseudo-gamma ray log model using 3D seismic attributes, for the reservoir VLE 196, Lamar field, located in Block V of south-central Lake Maracaibo (Venezuela); and (3) neural network primary ultimate oil recovery (PRUR), initial waterflooding ultimate oil recovery (IWUR), and infill drilling ultimate oil recovery (IDUR) models using reservoir parameters for San Andres and Clearfork carbonate formations in west Texas. In all cases, we compared the results from the neural network models with the results from statistical regression and non-parametric models. The results show that it is possible to obtain the highest cross-correlation coefficient between predicted and actual target variables, and the lowest average absolute errors, using the integrated techniques of multivariate statistical analysis and neural networks in our intelligent software.
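
    A minimal sketch of the dimensionality-reduction-then-network idea: principal components of well-log variables feed a small neural network that predicts (log) permeability. All data below are synthetic placeholders, not the field cases described above, and the component count and layer size are assumptions.

```python
# Sketch: PCA for dimensionality reduction followed by a neural network permeability model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
logs = rng.normal(size=(300, 12))                      # 12 correlated log/attribute variables
logs[:, 6:] = logs[:, :6] + 0.1 * rng.normal(size=(300, 6))
perm = np.exp(0.8 * logs[:, 0] - 0.5 * logs[:, 1]) + 0.1 * rng.normal(size=300)
log_perm = np.log(np.clip(perm, 1e-3, None))           # predict log-permeability

model = make_pipeline(StandardScaler(), PCA(n_components=4),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(logs, log_perm)
print("R^2 on training data:", model.score(logs, log_perm))
```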

  14. A neural-network approach to robotic control

    NASA Technical Reports Server (NTRS)

    Graham, D. P. W.; Deleuterio, G. M. T.

    1993-01-01

    An artificial neural-network paradigm for the control of robotic systems is presented. The approach is based on the Cerebellar Model Articulation Controller created by James Albus and incorporates several extensions. First, recognizing the essential structure of multibody equations of motion, two parallel modules are used that directly reflect the dynamical characteristics of multibody systems. Second, the architecture of the proposed network is imbued with a self-organizational capability which improves efficiency and accuracy. Also, the networks can be arranged in hierarchical fashion with each subsequent network providing finer and finer resolution.

  15. Neural network-based preprocessing to estimate the parameters of the X-ray emission of a single-temperature thermal plasma

    NASA Astrophysics Data System (ADS)

    Ichinohe, Y.; Yamada, S.; Miyazaki, N.; Saito, S.

    2018-04-01

    We present data preprocessing based on an artificial neural network to estimate the parameters of the X-ray emission spectra of a single-temperature thermal plasma. The method finds appropriate parameters close to the global optimum. The neural network is designed to learn the parameters of the thermal plasma (temperature, abundance, normalization and redshift) of the input spectra. After training using 9000 simulated X-ray spectra, the network has grown to predict all the unknown parameters with uncertainties of about a few per cent. The performance dependence on the network structure has been studied. We applied the neural network to an actual high-resolution spectrum obtained with Hitomi. The predicted plasma parameters agree with the known best-fitting parameters of the Perseus cluster within uncertainties of ≲10 per cent. The result shows that neural networks trained by simulated data might possibly be used to extract a feature built in the data. This would reduce human-intensive preprocessing costs before detailed spectral analysis, and would help us make the best use of the large quantities of spectral data that will be available in the coming decades.

  16. Wavelets and Elman Neural Networks for monitoring environmental variables

    NASA Astrophysics Data System (ADS)

    Ciarlini, Patrizia; Maniscalco, Umberto

    2008-11-01

    An application in cultural heritage is introduced. Wavelet decomposition and Neural Networks acting as virtual sensors are jointly used to simulate physical and chemical measurements at specific locations of a monument. Virtual sensors, suitably trained and tested, can substitute for real sensors in monitoring the quality of the monument surface, whereas the real ones would have to be installed for a long time and at high cost. Applying the wavelet decomposition to the environmental data series allows the underlying temporal structure at low frequencies to be treated separately. Consequently, separate Elman Neural Networks can be trained for the high- and low-frequency components, thus improving the networks' convergence in learning time and the measurement accuracy in working time.
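
    A minimal sketch of the joint wavelet/network idea: split an environmental series into low- and high-frequency parts with a wavelet decomposition and train a separate regressor on lagged values of each part (plain MLPs here stand in for the Elman networks). The series, wavelet, level, and lag count are assumptions.

```python
# Sketch: wavelet split of a sensor series, then one network per frequency component.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(1024)
series = np.sin(2 * np.pi * t / 128) + 0.3 * rng.normal(size=t.size)   # synthetic sensor data

coeffs = pywt.wavedec(series, 'db4', level=4)
low = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], 'db4')[:t.size]
high = series - low                                   # residual high-frequency component

def lagged(x, lags=8):                                # (samples, lags) inputs -> next value
    X = np.stack([x[i:i - lags] for i in range(lags)], axis=1)
    return X, x[lags:]

for name, comp in [("low", low), ("high", high)]:
    X, y = lagged(comp)
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X, y)
    print(name, "component one-step R^2:", round(net.score(X, y), 3))
```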

  17. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security.

    PubMed

    Kang, Min-Joo; Kang, Je-Won

    2016-01-01

    A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of the in-vehicular network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicular network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and thus the sensor can identify any malicious attack to the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), therefore improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to the attack with a significantly improved detection ratio in the controller area network (CAN) bus.
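
    A minimal sketch of the classification stage only, without the DBN pre-training: a feed-forward network labels packets as normal or attack from hypothetical probability-based feature vectors. The feature length, layer sizes, and synthetic labels are assumptions.

```python
# Sketch: DNN classifier over probability-based packet feature vectors (normal vs. attack).
import torch
import torch.nn as nn

torch.manual_seed(0)
n_feat = 64                                        # assumed feature-vector length
X = torch.rand(2000, n_feat)                       # placeholder probability-based features
y = (X[:, :8].sum(dim=1) > 4).long()               # synthetic "attack" labels

dnn = nn.Sequential(nn.Linear(n_feat, 128), nn.ReLU(),
                    nn.Linear(128, 64), nn.ReLU(),
                    nn.Linear(64, 2))
opt = torch.optim.Adam(dnn.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(dnn(X), y)
    loss.backward()
    opt.step()

pred = dnn(X).argmax(dim=1)
print("training detection accuracy:", (pred == y).float().mean().item())
```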

  18. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security

    PubMed Central

    Kang, Min-Joo

    2016-01-01

    A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of the in-vehicular network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicular network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and thus the sensor can identify any malicious attack to the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), therefore improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to the attack with a significantly improved detection ratio in the controller area network (CAN) bus. PMID:27271802

  19. Disrupted Topological Patterns of Large-Scale Network in Conduct Disorder

    PubMed Central

    Jiang, Yali; Liu, Weixiang; Ming, Qingsen; Gao, Yidian; Ma, Ren; Zhang, Xiaocui; Situ, Weijun; Wang, Xiang; Yao, Shuqiao; Huang, Bingsheng

    2016-01-01

    Regional abnormalities in brain structure and function, as well as disrupted connectivity, have been found repeatedly in adolescents with conduct disorder (CD). Yet, the large-scale brain topology associated with CD is not well characterized, and little is known about the systematic neural mechanisms of CD. We employed graphic theory to investigate systematically the structural connectivity derived from cortical thickness correlation in a group of patients with CD (N = 43) and healthy controls (HCs, N = 73). Nonparametric permutation tests were applied for between-group comparisons of graphical metrics. Compared with HCs, network measures including global/local efficiency and modularity all pointed to hypo-functioning in CD, despite of preserved small-world organization in both groups. The hubs distribution is only partially overlapped with each other. These results indicate that CD is accompanied by both impaired integration and segregation patterns of brain networks, and the distribution of highly connected neural network ‘hubs’ is also distinct between groups. Such misconfiguration extends our understanding regarding how structural neural network disruptions may underlie behavioral disturbances in adolescents with CD, and potentially, implicates an aberrant cytoarchitectonic profiles in the brain of CD patients. PMID:27841320
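
    A minimal sketch of the graph-theoretical comparison: build a network by thresholding a (here random, stand-in) cortical-thickness correlation matrix across subjects and compute the metrics reported above. The subject and region counts and the 0.3 threshold are assumptions.

```python
# Sketch: structural covariance graph and its efficiency/clustering/modularity metrics.
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(0)
data = rng.normal(size=(60, 68))                        # 60 subjects x 68 cortical regions (toy)
corr = np.corrcoef(data.T)                              # region-by-region correlation across subjects
adj = (np.abs(corr) > 0.3) & ~np.eye(68, dtype=bool)    # threshold to a binary graph

G = nx.from_numpy_array(adj.astype(int))
comms = community.greedy_modularity_communities(G)
print("global efficiency: ", round(nx.global_efficiency(G), 3))
print("local efficiency:  ", round(nx.local_efficiency(G), 3))
print("average clustering:", round(nx.average_clustering(G), 3))
print("modularity:        ", round(community.modularity(G, comms), 3))
```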

  20. Architecture and biological applications of artificial neural networks: a tuberculosis perspective.

    PubMed

    Darsey, Jerry A; Griffin, William O; Joginipelli, Sravanthi; Melapu, Venkata Kiran

    2015-01-01

    Advancement of science and technology has prompted researchers to develop new intelligent systems that can solve a variety of problems such as pattern recognition, prediction, and optimization. The ability of the human brain to learn in a fashion that tolerates noise and error has attracted many researchers and provided the starting point for the development of artificial neural networks: the intelligent systems. Intelligent systems can acclimatize to the environment or data and can maximize the chances of success or improve the efficiency of a search. Due to massive parallelism with large numbers of interconnected processers and their ability to learn from the data, neural networks can solve a variety of challenging computational problems. Neural networks have the ability to derive meaning from complicated and imprecise data; they are used in detecting patterns, and trends that are too complex for humans, or other computer systems. Solutions to the toughest problems will not be found through one narrow specialization; therefore we need to combine interdisciplinary approaches to discover the solutions to a variety of problems. Many researchers in different disciplines such as medicine, bioinformatics, molecular biology, and pharmacology have successfully applied artificial neural networks. This chapter helps the reader in understanding the basics of artificial neural networks, their applications, and methodology; it also outlines the network learning process and architecture. We present a brief outline of the application of neural networks to medical diagnosis, drug discovery, gene identification, and protein structure prediction. We conclude with a summary of the results from our study on tuberculosis data using neural networks, in diagnosing active tuberculosis, and predicting chronic vs. infiltrative forms of tuberculosis.

  1. A novel neural network for variational inequalities with linear and nonlinear constraints.

    PubMed

    Gao, Xing-Bao; Liao, Li-Zhi; Qi, Liqun

    2005-11-01

    Variational inequality is a uniform approach for many important optimization and equilibrium problems. Based on the sufficient and necessary conditions of the solution, this paper presents a novel neural network model for solving variational inequalities with linear and nonlinear constraints. Three sufficient conditions are provided to ensure that the proposed network with an asymmetric mapping is stable in the sense of Lyapunov and converges to an exact solution of the original problem. Meanwhile, the proposed network with a gradient mapping is also proved to be stable in the sense of Lyapunov and to have a finite-time convergence under some mild condition by using a new energy function. Compared with the existing neural networks, the new model can be applied to solve some nonmonotone problems, has no adjustable parameter, and has lower complexity. Thus, the structure of the proposed network is very simple. Since the proposed network can be used to solve a broad class of optimization problems, it has great application potential. The validity and transient behavior of the proposed neural network are demonstrated by several numerical examples.

  2. Ads' click-through rates predicting based on gated recurrent unit neural networks

    NASA Astrophysics Data System (ADS)

    Chen, Qiaohong; Guo, Zixuan; Dong, Wen; Jin, Lingzi

    2018-05-01

    In order to improve the effect of online advertising and to increase advertising revenue, a gated recurrent unit neural network (GRU) model is used for predicting ads' click-through rates (CTR). Combining the characteristics of the gated unit structure with the time-sequence nature of the data, the BPTT algorithm is used to train the model. Furthermore, by optimizing the step-length algorithm of the gated recurrent unit neural network, the model reaches the optimal point better and faster, in fewer iterative rounds. The experiment results show that the model based on the gated recurrent unit neural network and its optimized step-length algorithm has a better effect on ads' CTR prediction, which helps advertisers, media and audience achieve a win-win and mutually beneficial situation in the Three-Side Game.
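
    A minimal sketch on synthetic data, not the paper's model or step-length optimization: a GRU reads a short sequence of impression features and a sigmoid head outputs the click probability; backpropagation through time is handled by autograd. The sequence length, feature count, and toy labels are assumptions.

```python
# Sketch: GRU-based click-through-rate (CTR) predictor trained with binary cross-entropy.
import torch
import torch.nn as nn

torch.manual_seed(0)
seq_len, n_feat = 10, 8
X = torch.randn(512, seq_len, n_feat)                         # hypothetical impression sequences
y = (X[:, -1, 0] + 0.5 * X[:, :, 1].mean(dim=1) > 0).float()  # placeholder click labels

class CTRGRU(nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(n_feat, 16, batch_first=True)
        self.out = nn.Linear(16, 1)
    def forward(self, x):
        _, h = self.gru(x)                                    # h: (1, batch, hidden)
        return torch.sigmoid(self.out(h[-1])).squeeze(1)      # predicted CTR

model = CTRGRU()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy(model(X), y)
    loss.backward()                                           # BPTT via autograd
    opt.step()
print("training log-loss:", loss.item())
```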

  3. An evaluation of Bayesian techniques for controlling model complexity and selecting inputs in a neural network for short-term load forecasting.

    PubMed

    Hippert, Henrique S; Taylor, James W

    2010-04-01

    Artificial neural networks have frequently been proposed for electricity load forecasting because of their capabilities for the nonlinear modelling of large multivariate data sets. Modelling with neural networks is not an easy task though; two of the main challenges are defining the appropriate level of model complexity, and choosing the input variables. This paper evaluates techniques for automatic neural network modelling within a Bayesian framework, as applied to six samples containing daily load and weather data for four different countries. We analyse input selection as carried out by the Bayesian 'automatic relevance determination', and the usefulness of the Bayesian 'evidence' for the selection of the best structure (in terms of number of neurones), as compared to methods based on cross-validation. Copyright 2009 Elsevier Ltd. All rights reserved.

  4. Simulation of short-term electric load using an artificial neural network

    NASA Astrophysics Data System (ADS)

    Ivanin, O. A.

    2018-01-01

    When solving the task of optimizing the operation modes and equipment composition of small energy complexes, or other tasks connected with energy planning, it is necessary to have data on the energy loads of a consumer. In practice, real load charts and detailed information about the consumer are often unavailable, so a method of simulating load charts on the basis of minimal information should be developed. The analysis of work devoted to short-term load prediction allows artificial neural networks to be chosen as the most suitable mathematical instrument for solving this problem. The article provides an overview of applied short-term load simulation methods; it describes the advantages of artificial neural networks and offers a neural network structure for simulating the electric loads of residential buildings. The results of modeling loads with the proposed method and the estimation of its error are presented.

  5. Prediction of β-turns in proteins from multiple alignment using neural network

    PubMed Central

    Kaur, Harpreet; Raghava, Gajendra Pal Singh

    2003-01-01

    A neural network-based method has been developed for the prediction of β-turns in proteins by using multiple sequence alignment. Two feed-forward back-propagation networks with a single hidden layer are used, where the first sequence-to-structure network is trained with the multiple sequence alignment in the form of PSI-BLAST–generated position-specific scoring matrices. The initial predictions from the first network and the PSIPRED-predicted secondary structure are used as input to the second structure-to-structure network to refine the predictions obtained from the first net. A significant improvement in prediction accuracy has been achieved by using evolutionary information contained in the multiple sequence alignment. The final network yields an overall prediction accuracy of 75.5% when tested by sevenfold cross-validation on a set of 426 nonhomologous protein chains. The corresponding Qpred, Qobs, and Matthews correlation coefficient values are 49.8%, 72.3%, and 0.43, respectively, and are the best among all the previously published β-turn prediction methods. The Web server BetaTPred2 (http://www.imtech.res.in/raghava/betatpred2/) has been developed based on this approach. PMID:12592033
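
    A minimal sketch of the two-stage idea on synthetic data: a first network maps a window of profile features to an initial β-turn score, and a second network refines each score from a window of first-stage scores (the PSIPRED secondary-structure inputs are omitted here). Window sizes, labels, and network sizes are assumptions.

```python
# Sketch: cascaded sequence-level network and refinement network for turn / non-turn labels.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_pos, win, n_aa = 3000, 7, 20
profiles = rng.normal(size=(n_pos, win * n_aa))             # stand-in for PSSM windows
labels = (profiles[:, :20].mean(axis=1) > 0.1).astype(int)  # placeholder turn / non-turn labels

net1 = MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000, random_state=0)
net1.fit(profiles, labels)
p1 = net1.predict_proba(profiles)[:, 1]                     # initial per-residue turn scores

# second stage: refine each score from a window of neighbouring first-stage scores
pad = np.pad(p1, 3)
ctx = np.stack([pad[i:i + n_pos] for i in range(7)], axis=1)
net2 = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
net2.fit(ctx, labels)
print("second-stage training accuracy:", round(net2.score(ctx, labels), 3))
```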

  6. Relationships between cortical myeloarchitecture and electrophysiological networks

    PubMed Central

    Hunt, Benjamin A. E.; Tewarie, Prejaas K.; Mougin, Olivier E.; Geades, Nicolas; Singh, Krish D.; Morris, Peter G.; Gowland, Penny A.; Brookes, Matthew J.

    2016-01-01

    The human brain relies upon the dynamic formation and dissolution of a hierarchy of functional networks to support ongoing cognition. However, how functional connectivities underlying such networks are supported by cortical microstructure remains poorly understood. Recent animal work has demonstrated that electrical activity promotes myelination. Inspired by this, we test a hypothesis that gray-matter myelin is related to electrophysiological connectivity. Using ultra-high field MRI and the principle of structural covariance, we derive a structural network showing how myelin density differs across cortical regions and how separate regions can exhibit similar myeloarchitecture. Building upon recent evidence that neural oscillations mediate connectivity, we use magnetoencephalography to elucidate networks that represent the major electrophysiological pathways of communication in the brain. Finally, we show that a significant relationship exists between our functional and structural networks; this relationship differs as a function of neural oscillatory frequency and becomes stronger when integrating oscillations over frequency bands. Our study sheds light on the way in which cortical microstructure supports functional networks. Further, it paves the way for future investigations of the gray-matter structure/function relationship and its breakdown in pathology. PMID:27830650

  7. Modeling a Neural Network as a Teaching Tool for the Learning of the Structure-Function Relationship

    ERIC Educational Resources Information Center

    Salinas, Dino G.; Acevedo, Cristian; Gomez, Christian R.

    2010-01-01

    The authors describe an activity they have created in which students can visualize a theoretical neural network whose states evolve according to a well-known simple law. This activity provided an uncomplicated approach to a paradigm commonly represented through complex mathematical formulation. From their observations, students learned many basic…

  8. Are Student Evaluations of Teaching Effectiveness Valid for Measuring Student Learning Outcomes in Business Related Classes? A Neural Network and Bayesian Analyses

    ERIC Educational Resources Information Center

    Galbraith, Craig S.; Merrill, Gregory B.; Kline, Doug M.

    2012-01-01

    In this study we investigate the underlying relational structure between student evaluations of teaching effectiveness (SETEs) and achievement of student learning outcomes in 116 business related courses. Utilizing traditional statistical techniques, a neural network analysis and a Bayesian data reduction and classification algorithm, we find…

  9. Application of complex discrete wavelet transform in classification of Doppler signals using complex-valued artificial neural network.

    PubMed

    Ceylan, Murat; Ceylan, Rahime; Ozbay, Yüksel; Kara, Sadik

    2008-09-01

    In biomedical signal classification, due to the huge amount of data, to compress the biomedical waveform data is vital. This paper presents two different structures formed using feature extraction algorithms to decrease size of feature set in training and test data. The proposed structures, named as wavelet transform-complex-valued artificial neural network (WT-CVANN) and complex wavelet transform-complex-valued artificial neural network (CWT-CVANN), use real and complex discrete wavelet transform for feature extraction. The aim of using wavelet transform is to compress data and to reduce training time of network without decreasing accuracy rate. In this study, the presented structures were applied to the problem of classification in carotid arterial Doppler ultrasound signals. Carotid arterial Doppler ultrasound signals were acquired from left carotid arteries of 38 patients and 40 healthy volunteers. The patient group included 22 males and 16 females with an established diagnosis of the early phase of atherosclerosis through coronary or aortofemoropopliteal (lower extremity) angiographies (mean age, 59 years; range, 48-72 years). Healthy volunteers were young non-smokers who seem to not bear any risk of atherosclerosis, including 28 males and 12 females (mean age, 23 years; range, 19-27 years). Sensitivity, specificity and average detection rate were calculated for comparison, after training and test phases of all structures finished. These parameters have demonstrated that training times of CVANN and real-valued artificial neural network (RVANN) were reduced using feature extraction algorithms without decreasing accuracy rate in accordance to our aim.

  10. An overview on development of neural network technology

    NASA Technical Reports Server (NTRS)

    Lin, Chun-Shin

    1993-01-01

    The study has been to obtain a bird's-eye view of the current neural network technology and the neural network research activities in NASA. The purpose was two fold. One was to provide a reference document for NASA researchers who want to apply neural network techniques to solve their problems. Another one was to report out survey results regarding NASA research activities and provide a view on what NASA is doing, what potential difficulty exists and what NASA can/should do. In a ten week study period, we interviewed ten neural network researchers in the Langley Research Center and sent out 36 survey forms to researchers at the Johnson Space Center, Lewis Research Center, Ames Research Center and Jet Propulsion Laboratory. We also sent out 60 similar forms to educators and corporation researchers to collect general opinions regarding this field. Twenty-eight survey forms, 11 from NASA researchers and 17 from outside, were returned. Survey results were reported in our final report. In the final report, we first provided an overview on the neural network technology. We reviewed ten neural network structures, discussed the applications in five major areas, and compared the analog, digital and hybrid electronic implementation of neural networks. In the second part, we summarized known NASA neural network research studies and reported the results of the questionnaire survey. Survey results show that most studies are still in the development and feasibility study stage. We compared the techniques, application areas, researchers' opinions on this technology, and many aspects between NASA and non-NASA groups. We also summarized their opinions on difficulties encountered. Applications are considered the top research priority by most researchers. Hardware development and learning algorithm improvement are the next. The lack of financial and management support is among the difficulties in research study. All researchers agree that the use of neural networks could result in cost saving. Fault tolerance has been claimed as one important feature of neural computing. However, the survey indicates that very few studies address this issue. Fault tolerance is important in space mission and aircraft control. We believe that it is worthy for NASA to devote more efforts into the utilization of this feature.

  11. Applications of Artificial Neural Networks in Structural Engineering with Emphasis on Continuum Models

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Liu, Youhua

    1998-01-01

    The use of continuum models for the analysis of discrete built-up complex aerospace structures is an attractive idea, especially at the conceptual and preliminary design stages. However, the diversity of available continuum models and the difficulty of using them have prevented these models from finding wide application. In this regard, Artificial Neural Networks (ANN or NN) may have great potential: these networks are universal approximators that can realize any continuous mapping and can provide general mechanisms for building models from data whose input-output relationship may be highly nonlinear. The ultimate aim of the present work is to build high-fidelity continuum models for complex aerospace structures using ANNs. As a first step, the concepts and features of ANNs are explored through the MATLAB NN Toolbox by simulating some representative mapping examples, including some problems in structural engineering. Further aspects of and lessons learned about NN training are then discussed, including the performance of Feed-Forward and Radial Basis Function NNs when dealing with noise-polluted data and the technique of cross-validation. Finally, as an example of using NNs in continuum models, a lattice structure with repeating cells is represented by a continuum beam whose properties are provided by neural networks.

  12. Modeling level of urban taxi services using neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J.; Wong, S.C.; Tong, C.O.

    1999-05-01

    This paper is concerned with the modeling of the complex demand-supply relationship in urban taxi services. A neural network model is developed, based on a taxi service situation observed in the urban area of Hong Kong. The input consists of several exogenous variables, including the number of licensed taxis, the incremental charge of the taxi fare, the average occupied taxi journey time, average disposable income, population, and the consumer price index; the output consists of a set of endogenous variables, including daily taxi passenger demand, passenger waiting time, vacant taxi headway, the average percentage of occupied taxis, taxi utilization, and average taxi waiting time. Comparisons of estimation accuracy are made between the neural network model and a simultaneous equations model. The results show that the neural network-based macro taxi model provides much more accurate information on taxi services than the simultaneous equations model does. Although the data set used for training the neural network is small, the results obtained thus far are very encouraging. The neural network model can be used as a policy tool by regulators to assist with decisions concerning restrictions on the number of taxi licenses and the fixing of the taxi fare structure, as well as a range of service quality controls.

  13. Nonlinear inversion of electrical resistivity imaging using pruning Bayesian neural networks

    NASA Astrophysics Data System (ADS)

    Jiang, Fei-Bo; Dai, Qian-Wei; Dong, Li

    2016-06-01

    Conventional artificial neural networks used to solve the electrical resistivity imaging (ERI) inversion problem suffer from overfitting and local minima. To address these problems, we propose a pruning Bayesian neural network (PBNN) nonlinear inversion method together with a sample design method based on the K-medoids clustering algorithm. In the sample design method, the training samples of the neural network are designed according to the prior information provided by the K-medoids clustering results; thus, the training process of the neural network is well guided. The proposed PBNN, based on Bayesian regularization, selects the hidden layer structure by assessing the effect of each hidden neuron on the inversion results. The hyperparameter α_k, which is based on the generalized mean, is then chosen to guide the pruning process according to the prior distribution of the training samples under small-sample conditions. The proposed algorithm is more efficient than other common adaptive regularization methods in geophysics. The inversion of synthetic and field data suggests that the proposed method suppresses noise in the neural network training stage and enhances generalization. The inversion results obtained with the proposed method are better than those of the BPNN, RBFNN, and RRBFNN inversion methods, as well as conventional least squares inversion.
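
    The pruning step described above, ranking hidden neurons by their effect on the result and removing the least useful ones, can be sketched as follows; the one-hidden-layer network representation, the validation-error criterion, and the pruning threshold are illustrative assumptions rather than the paper's actual Bayesian formulation.

        import numpy as np

        def mlp_forward(X, W1, b1, W2, b2, mask=None):
            """One-hidden-layer network; `mask` zeroes out pruned hidden units."""
            H = np.tanh(X @ W1 + b1)
            if mask is not None:
                H = H * mask
            return H @ W2 + b2

        def prune_by_effect(X_val, y_val, W1, b1, W2, b2, tol=1e-3):
            """Drop hidden units whose removal barely changes validation error."""
            n_hidden = W1.shape[1]
            base_err = np.mean((mlp_forward(X_val, W1, b1, W2, b2) - y_val) ** 2)
            keep = np.ones(n_hidden)
            for j in range(n_hidden):
                mask = keep.copy()
                mask[j] = 0.0                      # temporarily silence unit j
                err = np.mean((mlp_forward(X_val, W1, b1, W2, b2, mask) - y_val) ** 2)
                if err - base_err < tol:           # negligible effect -> prune it
                    keep[j] = 0.0
            return keep

        rng = np.random.default_rng(0)
        X_val, y_val = rng.normal(size=(50, 4)), rng.normal(size=(50, 1))
        W1, b1 = rng.normal(size=(4, 12)), np.zeros(12)
        W2, b2 = rng.normal(size=(12, 1)), np.zeros(1)
        print(prune_by_effect(X_val, y_val, W1, b1, W2, b2))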

  14. Selected Flight Test Results for Online Learning Neural Network-Based Flight Control System

    NASA Technical Reports Server (NTRS)

    Williams-Hayes, Peggy S.

    2004-01-01

    The NASA F-15 Intelligent Flight Control System project team developed a series of flight control concepts designed to demonstrate the benefits of neural network-based adaptive controllers, with the objective of developing and flight-testing control systems that use neural network technology to optimize aircraft performance under nominal conditions and stabilize the aircraft under failure conditions. This report presents flight-test results for an adaptive controller using stability and control derivative values from an online learning neural network. A dynamic cell structure neural network is used in conjunction with a real-time parameter identification algorithm to estimate aerodynamic stability and control derivative increments to the baseline aerodynamic derivatives in flight. This open-loop flight test set was performed in preparation for a future phase in which the learning neural network and parameter identification algorithm outputs would provide the flight controller with aerodynamic stability and control derivative updates in near real time. Two flight maneuvers are analyzed: a pitch frequency sweep and an automated flight-test maneuver designed to optimally excite the parameter identification algorithm in all axes. Frequency responses generated from flight data are compared to those obtained from nonlinear simulation runs. Examination of the flight data shows that adding the flight-identified aerodynamic derivative increments into the simulation improved the aircraft pitch handling qualities.

  15. Biological modelling of a computational spiking neural network with neuronal avalanches.

    PubMed

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-06-28

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'. © 2017 The Author(s).

  16. Biological modelling of a computational spiking neural network with neuronal avalanches

    NASA Astrophysics Data System (ADS)

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-05-01

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue `Mathematical methods in medicine: neuroscience, cardiology and pathology'.

  17. From neural-based object recognition toward microelectronic eyes

    NASA Technical Reports Server (NTRS)

    Sheu, Bing J.; Bang, Sa Hyun

    1994-01-01

    Engineering neural network systems are best known for their abilities to adapt to the changing characteristics of the surrounding environment by adjusting system parameter values during the learning process. Rapid advances in analog current-mode design techniques have made possible the implementation of major neural network functions in custom VLSI chips. An electrically programmable analog synapse cell with large dynamic range can be realized in a compact silicon area. New designs of the synapse cells, neurons, and analog processor are presented. A synapse cell based on Gilbert multiplier structure can perform the linear multiplication for back-propagation networks. A double differential-pair synapse cell can perform the Gaussian function for radial-basis network. The synapse cells can be biased in the strong inversion region for high-speed operation or biased in the subthreshold region for low-power operation. The voltage gain of the sigmoid-function neurons is externally adjustable which greatly facilitates the search of optimal solutions in certain networks. Various building blocks can be intelligently connected to form useful industrial applications. Efficient data communication is a key system-level design issue for large-scale networks. We also present analog neural processors based on perceptron architecture and Hopfield network for communication applications. Biologically inspired neural networks have played an important role towards the creation of powerful intelligent machines. Accuracy, limitations, and prospects of analog current-mode design of the biologically inspired vision processing chips and cellular neural network chips are key design issues.

  18. Edge-preserving image compression for magnetic-resonance images using dynamic associative neural networks (DANN)-based neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Tat C.; Kabuka, Mansur R.

    1994-05-01

    With the tremendous growth in imaging applications and the development of filmless radiology, compression techniques that can achieve high compression ratios with user-specified distortion rates have become necessary. Boundaries and edges in tissue structures are vital for the detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge-preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN) to provide high compression ratios with user-specified distortion rates in an adaptive compression system well suited to parallel implementations. Improvements to DANN-based training through the use of a variance classifier for controlling a bank of neural networks speed convergence and allow the use of higher compression ratios for 'simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images by varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.

  19. Localizing Tortoise Nests by Neural Networks.

    PubMed

    Barbuti, Roberto; Chessa, Stefano; Micheli, Alessio; Pucci, Rita

    2016-01-01

    The goal of this research is to recognize the nest-digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging and non-digging activity (specifically walking and eating). Accelerometer data were collected from devices attached to the carapaces of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purposes of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low-power devices. Given that digging is typically a long activity (up to two hours), the application of the ARS to data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.
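
    A minimal sketch of the input-delay idea: feed a short window of delayed accelerometer samples into a tiny feed-forward network and threshold the output to flag digging segments. The window length, layer sizes, and random weights below are placeholders; the trained Tortoise@ network is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)
        WINDOW = 16                      # delayed samples per axis (assumed)
        W_h = rng.normal(scale=0.1, size=(3 * WINDOW, 8))   # tiny hidden layer
        W_o = rng.normal(scale=0.1, size=8)

        def classify_segment(accel_xyz):
            """accel_xyz: (WINDOW, 3) array of delayed accelerometer samples.
            Returns a digging score in (0, 1); >0.5 is treated as digging."""
            x = accel_xyz.reshape(-1)                        # input-delay line
            h = np.tanh(x @ W_h)
            z = h @ W_o
            return 1.0 / (1.0 + np.exp(-z))

        segment = rng.normal(size=(WINDOW, 3))               # fake sensor data
        print("digging" if classify_segment(segment) > 0.5 else "non-digging")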

  20. Using Neural Networks to Improve the Performance of Radiative Transfer Modeling Used for Geometry Dependent LER Calculations

    NASA Astrophysics Data System (ADS)

    Fasnacht, Z.; Qin, W.; Haffner, D. P.; Loyola, D. G.; Joiner, J.; Krotkov, N. A.; Vasilkov, A. P.; Spurr, R. J. D.

    2017-12-01

    In order to estimate surface reflectance used in trace gas retrieval algorithms, radiative transfer models (RTM) such as the Vector Linearized Discrete Ordinate Radiative Transfer Model (VLIDORT) can be used to simulate the top of the atmosphere (TOA) radiances with advanced models of surface properties. With large volumes of satellite data, these model simulations can become computationally expensive. Look up table interpolation can improve the computational cost of the calculations, but the non-linear nature of the radiances requires a dense node structure if interpolation errors are to be minimized. In order to reduce our computational effort and improve the performance of look-up tables, neural networks can be trained to predict these radiances. We investigate the impact of using look-up table interpolation versus a neural network trained using the smart sampling technique, and show that neural networks can speed up calculations and reduce errors while using significantly less memory and RTM calls. In future work we will implement a neural network in operational processing to meet growing demands for reflectance modeling in support of high spatial resolution satellite missions.
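
    The sketch below illustrates the emulation idea: train a small multilayer perceptron on a modest number of expensive radiative transfer evaluations and then use it in place of dense look-up-table interpolation. The toy rtm function, the input variables, and the network size are assumptions for illustration; they stand in for VLIDORT runs and the smart-sampling scheme mentioned above.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def rtm(params):
            """Stand-in for an expensive radiative transfer call
            (toy inputs: solar zenith, viewing zenith, surface reflectance)."""
            sza, vza, refl = params.T
            return refl * np.cos(np.radians(sza)) * np.cos(np.radians(vza)) + 0.05

        rng = np.random.default_rng(0)
        X_train = rng.uniform([0, 0, 0.0], [80, 70, 1.0], size=(2000, 3))
        y_train = rtm(X_train)              # 2000 RTM calls instead of a dense LUT

        emulator = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                random_state=0).fit(X_train, y_train)

        X_test = rng.uniform([0, 0, 0.0], [80, 70, 1.0], size=(5, 3))
        print(np.abs(emulator.predict(X_test) - rtm(X_test)))   # emulation error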

  1. Multistability of neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays.

    PubMed

    Nie, Xiaobing; Zheng, Wei Xing

    2015-05-01

    This paper is concerned with the coexistence and dynamical behavior of multiple equilibrium points in neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays. The fixed point theorem and other analytical tools are used to develop sufficient conditions ensuring that n-dimensional discontinuous neural networks with time-varying delays can have at least 5^n equilibrium points, 3^n of which are locally stable and the others unstable. The importance of the derived results is that they reveal that discontinuous neural networks can have greater storage capacity than continuous ones. Moreover, different from existing results on the multistability of neural networks with discontinuous activation functions, the 3^n locally stable equilibrium points obtained in this paper are located not only in saturated regions but also in unsaturated regions, owing to the non-monotonic structure of the discontinuous activation functions. A numerical simulation study is conducted to illustrate and support the derived theoretical results.

  2. Learning polynomial feedforward neural networks by genetic programming and backpropagation.

    PubMed

    Nikolaev, N Y; Iba, H

    2003-01-01

    This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm, and second, further adjusting the weights of the best discovered network by a specially derived backpropagation algorithm for higher-order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNNs which considerably outperform some previous constructive polynomial network algorithms on benchmark time series processing.

  3. Distributed collaborative probabilistic design of multi-failure structure with fluid-structure interaction using fuzzy neural network of regression

    NASA Astrophysics Data System (ADS)

    Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen

    2018-05-01

    To improve the computational efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on a fuzzy neural network of regression (FR), termed DCFRM, is proposed by integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design idea behind DCFRM is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure and strain failure) was investigated with the proposed method, taking fluid-structure interaction into account. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode of the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. The comparison of methods shows that DCFRM improves the computing efficiency of probabilistic analysis for multi-failure structures while keeping acceptable computational precision. Moreover, the proposed method offers useful insight for the reliability-based design optimization of multi-failure structures and thereby also enriches the theory and methods of mechanical reliability design.

  4. Synchronization and long-time memory in neural networks with inhibitory hubs and synaptic plasticity

    NASA Astrophysics Data System (ADS)

    Bertolotti, Elena; Burioni, Raffaella; di Volo, Matteo; Vezzani, Alessandro

    2017-01-01

    We investigate the dynamical role of inhibitory and highly connected nodes (hubs) in the synchronization and input processing of leaky integrate-and-fire neural networks with short-term synaptic plasticity. We take advantage of a heterogeneous mean-field approximation to encode the role of network structure, and we tune the fraction of inhibitory neurons f_I and their connectivity level to investigate the cooperation between hub features and inhibition. We show that, depending on f_I, highly connected inhibitory nodes strongly drive the synchronization properties of the overall network through dynamical transitions from synchronous to asynchronous regimes. Furthermore, a metastable regime with long memory of external inputs emerges for a specific fraction of hub inhibitory neurons, underlining the role of inhibition and connectivity also for input processing in neural networks.

  5. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network.

    PubMed

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-12-12

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, physical sensors are limited under the operational conditions of spacecraft due to the severe environment of outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The merit of the technique is further indicated using a simply supported beam experiment, in comparison with a modal-model-based virtual sensor that uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy.
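
    A minimal PyTorch sketch of the four-layer predicting model described above (two convolutional layers, one fully connected layer, and an output layer); the channel counts, kernel size, window length, and number of measured channels are assumptions, since the paper's exact hyperparameters are not given in the abstract.

        import torch
        import torch.nn as nn

        class ResponseVirtualSensor(nn.Module):
            """Maps a window of measured responses to the unmeasured response."""
            def __init__(self, n_measured=4, window=64):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(n_measured, 16, kernel_size=5, padding=2), nn.ReLU(),
                    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                )
                self.fc = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(32 * window, 64), nn.ReLU(),   # fully connected layer
                    nn.Linear(64, 1),                        # output: one virtual sensor
                )

            def forward(self, x):            # x: (batch, n_measured, window)
                return self.fc(self.features(x))

        model = ResponseVirtualSensor()
        dummy = torch.randn(8, 4, 64)        # 8 windows of 4 measured channels
        print(model(dummy).shape)            # torch.Size([8, 1])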

  6. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network

    PubMed Central

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-01-01

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, physical sensors are limited under the operational conditions of spacecraft due to the severe environment of outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The merit of the technique is further indicated using a simply supported beam experiment, in comparison with a modal-model-based virtual sensor that uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy. PMID:29231868

  7. Neural network based chemical structure indexing.

    PubMed

    Rughooputh, S D; Rughooputh, H C

    2001-01-01

    Searches of chemical databases are presently dominated by the text-based content of papers, which can be indexed into a keyword-searchable form. Such traditional searches can prove very time-consuming and discouraging to scientists who search less frequently. We report a simple chemical indexing scheme based on the molecular structure alone. The method is based on a one-to-one correspondence between the chemical structure, presented as an image to a neural network, and a corresponding binary output. The method is direct and less cumbersome than traditional methods, and proves to be robust, elegant, and very versatile.

  8. NNvPDB: Neural Network based Protein Secondary Structure Prediction with PDB Validation.

    PubMed

    Sakthivel, Seethalakshmi; S K M, Habeeb

    2015-01-01

    The secondary structural states predicted by existing servers are not cross-validated, so information on the level of accuracy for each sequence is not reported. This is overcome by NNvPDB, which not only reports a greater Q3 but also validates every prediction against homologous PDB entries. NNvPDB is based on the concept of a Neural Network, with a new and different approach of training the network each time with five PDB structures that are similar to the query sequence. The average accuracy is 76% for helix, 71% for beta sheet, and 66% overall (helix, sheet and coil). http://bit.srmuniv.ac.in/cgi-bin/bit/cfpdb/nnsecstruct.pl.

  9. Localization and identification of structural nonlinearities using cascaded optimization and neural networks

    NASA Astrophysics Data System (ADS)

    Koyuncu, A.; Cigeroglu, E.; Özgüven, H. N.

    2017-10-01

    In this study, a new approach is proposed for the identification of structural nonlinearities by employing cascaded optimization and neural networks. The linear finite element model of the system and frequency response functions measured at arbitrary locations of the system are used in this approach. Using the finite element model, a training data set is created which appropriately spans the possible nonlinear configuration space of the system. A classification neural network trained on these data sets then localizes and determines the types of all nonlinearities associated with the nonlinear degrees of freedom in the system. A new training data set spanning the parametric space associated with the determined nonlinearities is created to facilitate parametric identification. Utilizing this data set, a feed-forward regression neural network is first trained, which parametrically identifies the classified nonlinearities. The results obtained are then further improved by carrying out an optimization which uses the network-identified values as starting points. Unlike identification methods available in the literature, the proposed approach does not require data collection from the degrees of freedom where the nonlinear elements are attached, and furthermore, it is sufficiently accurate even in the presence of measurement noise. The application of the proposed approach is demonstrated on an example system with nonlinear elements and on a real-life experimental setup with a local nonlinearity.
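
    The cascade described above (classify the nonlinearity type, regress initial parameter estimates, then refine them with an optimizer seeded at the network output) can be sketched as below. The classifier and regressor stand-ins, the cubic-stiffness residual, and the parameter names are illustrative assumptions, not the study's actual models.

        import numpy as np
        from scipy.optimize import minimize

        def classify_nonlinearity(frf_features):
            """Stand-in for the classification network: picks a nonlinearity type."""
            return "cubic_stiffness" if frf_features.mean() > 0 else "friction"

        def regress_initial_params(frf_features):
            """Stand-in for the regression network: rough parameter estimate."""
            return np.array([1.0 + 0.1 * frf_features.std()])   # e.g. k3 estimate

        def model_residual(params, frf_measured, frf_linear):
            """Mismatch between measured FRF and linear FRF plus cubic correction."""
            k3 = params[0]
            return np.sum((frf_measured - (frf_linear + k3 * frf_linear**3)) ** 2)

        rng = np.random.default_rng(0)
        frf_linear = rng.normal(size=100)
        frf_measured = frf_linear + 1.7 * frf_linear**3 + 0.01 * rng.normal(size=100)

        kind = classify_nonlinearity(frf_measured)       # stage 1: localize/classify
        x0 = regress_initial_params(frf_measured)        # stage 2: NN initial guess
        result = minimize(model_residual, x0,            # stage 3: refine by optimization
                          args=(frf_measured, frf_linear), method="Nelder-Mead")
        print(kind, result.x)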

  10. A solution for two-dimensional mazes with use of chaotic dynamics in a recurrent neural network model.

    PubMed

    Suemitsu, Yoshikazu; Nara, Shigetoshi

    2004-09-01

    Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from its position at time t to t + 1 according to a simply defined motion function calculated from the firing patterns of the neural network model at each time step t. We have embedded in our neural network model several prototype attractors that correspond to simple motions of the object oriented toward several directions in two-dimensional space. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between the embedded attractors in state space, and these dynamics enable the object to move in various directions. Switching a system parameter between a chaotic regime and an attractor regime in the state space of the neural network enables the object to move to a set target in a two-dimensional maze. Results of computer simulations show that the success rate of this method over 300 trials is higher than that of a random walk. To investigate why the proposed method gives better performance, we calculate and discuss statistical data with respect to the dynamical structure.

  11. DCS-Neural-Network Program for Aircraft Control and Testing

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    2006-01-01

    A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback-linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (derived from a relatively simple training algorithm that implements a Delaunay triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.
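
    A minimal sketch of the error-driven node insertion described above: each node accumulates approximation error, and a new node is inserted between the two nodes with the largest accumulated error. The two-dimensional toy data, the insertion schedule, and the halving of accumulated error are common conventions in growing-network implementations and are assumptions here, not the program's exact algorithm.

        import numpy as np

        rng = np.random.default_rng(0)
        nodes = rng.normal(size=(2, 2))          # start with two nodes (2-D toy space)
        acc_error = np.zeros(2)                  # accumulated error per node

        def train_step(x):
            """Move the best-matching node toward sample x and accumulate its error."""
            d = np.linalg.norm(nodes - x, axis=1)
            best = int(np.argmin(d))
            acc_error[best] += d[best] ** 2      # error drives later insertion
            nodes[best] += 0.05 * (x - nodes[best])

        def insert_node():
            """Insert a new node midway between the two highest-error nodes."""
            global nodes, acc_error
            order = np.argsort(acc_error)[::-1]
            worst, second = int(order[0]), int(order[1])
            new = 0.5 * (nodes[worst] + nodes[second])
            nodes = np.vstack([nodes, new])
            acc_error[worst] *= 0.5              # redistribute accumulated error
            acc_error[second] *= 0.5
            acc_error = np.append(acc_error, 0.5 * (acc_error[worst] + acc_error[second]))

        for step, x in enumerate(rng.normal(size=(500, 2))):
            train_step(x)
            if (step + 1) % 100 == 0:            # grow the network periodically
                insert_node()
        print(nodes.shape)                       # (7, 2): 2 initial + 5 inserted nodes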

  12. Neural Schematics as a unified formal graphical representation of large-scale Neural Network Structures.

    PubMed

    Ehrlich, Matthias; Schüffny, René

    2013-01-01

    One of the major outcomes of neuroscientific research is models of Neural Network Structures (NNSs). Descriptions of these models usually consist of a non-standardized mixture of text, figures, and other means of visual information communication in print media. However, as neuroscience is an interdisciplinary domain by nature, a standardized way of consistently representing models of NNSs is required. While generic descriptions of such models in textual form have recently been developed, a formalized way of schematically expressing them does not exist to date. Hence, in this paper we present Neural Schematics as a concept, inspired by similar approaches from other disciplines, for a generic two-dimensional representation of said structures. After introducing NNSs in general, a set of current visualizations of models of NNSs is reviewed and analyzed for what information they convey and how their elements are rendered. This analysis then allows for the definition of general items and symbols to consistently represent these models as Neural Schematics on a two-dimensional plane. We illustrate the possibilities an agreed-upon standard can yield with sample diagrams transformed into Neural Schematics and an example application for the design and modeling of large-scale NNSs.

  13. Neural activation toward erotic stimuli in homosexual and heterosexual males.

    PubMed

    Kagerer, Sabine; Klucken, Tim; Wehrum, Sina; Zimmermann, Mark; Schienle, Anne; Walter, Bertram; Vaitl, Dieter; Stark, Rudolf

    2011-11-01

    Studies investigating sexual arousal exist, yet there are diverging findings on the underlying neural mechanisms with regard to sexual orientation. Moreover, sexual arousal effects have often been confounded with general arousal effects. Hence, it is still unclear which structures underlie the sexual arousal response in homosexual and heterosexual men. Neural activity and subjective responses were investigated in order to disentangle sexual from general arousal. Considering sexual orientation, differential and conjoint neural activations were of interest. The functional magnetic resonance imaging (fMRI) study focused on the neural networks involved in the processing of sexual stimuli in 21 male participants (11 homosexual, 10 heterosexual). Both groups viewed pictures with erotic content as well as aversive and neutral stimuli. The erotic pictures were subdivided into three categories (most sexually arousing, least sexually arousing, and rest) based on the individual subjective ratings of each participant. The main outcome measures were blood oxygen level-dependent responses measured by fMRI and subjective ratings. A conjunction analysis revealed conjoint neural activation related to sexual arousal in the thalamus, hypothalamus, occipital cortex, and nucleus accumbens. Increased insula, amygdala, and anterior cingulate gyrus activation could be linked to general arousal. Group differences emerged neither when viewing the most sexually arousing pictures compared with highly arousing aversive pictures nor when compared with neutral pictures. The results suggest that a widespread neural network is activated by highly sexually arousing visual stimuli and that a partly distinct network of structures underlies sexual and general arousal effects. The processing of preferred, highly sexually arousing stimuli recruited similar structures in homosexual and heterosexual males.

  14. Incorporation of varying types of temporal data in a neural network

    NASA Technical Reports Server (NTRS)

    Cohen, M. E.; Hudson, D. L.

    1992-01-01

    Most neural network models do not specifically deal with temporal data. Handling of these variables is complicated by the different uses to which temporal data are put, depending on the application. Even within the same application, temporal variables are often used in a number of different ways. In this paper, types of temporal data are discussed, along with their implications for approximate reasoning. Methods for integrating approximate temporal reasoning into existing neural network structures are presented. These methods are illustrated in a medical application for diagnosis of graft-versus-host disease which requires the use of several types of temporal data.

  15. Recognition and classification of oscillatory patterns of electric brain activity using artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Pchelintseva, Svetlana V.; Runnova, Anastasia E.; Musatov, Vyacheslav Yu.; Hramov, Alexander E.

    2017-03-01

    In this paper we study the problem of recognizing the type of an observed object from the generated pattern and the registered EEG data. The EEG recorded while the Necker cube is displayed characterizes the corresponding state of brain activity. As the stimulus we use the bistable Necker cube image: the subject selects an interpretation of the cube, perceiving it either as a left-oriented or a right-oriented cube. To solve the recognition problem, we use artificial neural networks; in this paper the classifier is a multilayer perceptron. We examine the structure of the artificial neural network and determine the cube-recognition accuracy.

  16. Relationships between music training, speech processing, and word learning: a network perspective.

    PubMed

    Elmer, Stefan; Jäncke, Lutz

    2018-03-15

    Numerous studies have documented the behavioral advantages conferred on professional musicians and children undergoing music training in processing speech sounds varying in the spectral and temporal dimensions. These beneficial effects have previously often been associated with local functional and structural changes in the auditory cortex (AC). However, this perspective is oversimplified, in that it does not take into account the intrinsic organization of the human brain, namely, neural networks and oscillatory dynamics. Therefore, we propose a new framework for extending these previous findings to a network perspective by integrating multimodal imaging, electrophysiology, and neural oscillations. In particular, we provide concrete examples of how functional and structural connectivity can be used to model simple neural circuits exerting a modulatory influence on AC activity. In addition, we describe how such a network approach can be used for better comprehending the beneficial effects of music training on more complex speech functions, such as word learning. © 2018 New York Academy of Sciences.

  17. Hybrid multiphoton volumetric functional imaging of large-scale bioengineered neuronal networks

    NASA Astrophysics Data System (ADS)

    Dana, Hod; Marom, Anat; Paluch, Shir; Dvorkin, Roman; Brosh, Inbar; Shoham, Shy

    2014-06-01

    Planar neural networks and interfaces serve as versatile in vitro models of central nervous system physiology, but adaptations of related methods to three dimensions (3D) have met with limited success. Here, we demonstrate for the first time volumetric functional imaging in a bioengineered neural tissue growing in a transparent hydrogel with cortical cellular and synaptic densities, by introducing complementary new developments in nonlinear microscopy and neural tissue engineering. Our system uses a novel hybrid multiphoton microscope design combining a 3D scanning-line temporal-focusing subsystem and a conventional laser-scanning multiphoton microscope to provide functional and structural volumetric imaging capabilities: dense microscopic 3D sampling at tens of volumes per second of structures with mm-scale dimensions containing a network of over 1,000 developing cells with complex spontaneous activity patterns. These developments open new opportunities for large-scale neuronal interfacing and for applications of 3D engineered networks ranging from basic neuroscience to the screening of neuroactive substances.

  18. Network, cellular, and molecular mechanisms underlying long-term memory formation.

    PubMed

    Carasatorre, Mariana; Ramírez-Amaya, Víctor

    2013-01-01

    The neural network stores information through activity-dependent synaptic plasticity that occurs in populations of neurons. Persistent forms of synaptic plasticity may account for long-term memory storage, and the most salient forms are changes in the structure of synapses. Theory proposes that encoding should use a sparse code, and evidence suggests that this can be achieved through offline reactivation or by sparse initial recruitment of the network units. This idea implies that in some cases the neurons that underwent structural synaptic plasticity might be a subpopulation of those originally recruited; however, it is not yet clear whether all the neurons recruited during acquisition are the ones that underwent persistent forms of synaptic plasticity and are responsible for memory retrieval. To determine which neural units underlie long-term memory storage, we need to characterize the persistent forms of synaptic plasticity occurring in these neural ensembles, and the best hints so far are the molecular signals underlying structural modifications of the synapses. Structural synaptic plasticity can be achieved by the activity of various signal transduction pathways, including NMDA-CaMKII and ACh-MAPK. These pathways converge on the Rho family of GTPases and the consequent ERK 1/2 activation, which regulates multiple cellular functions such as protein translation, protein trafficking, and gene transcription. The most detailed explanation may come from models that allow us to determine the contribution of each piece of this fascinating puzzle that is the neuron and the neural network.

  19. KNT-artificial neural network model for flux prediction of ultrafiltration membrane producing drinking water.

    PubMed

    Oh, H K; Yu, M J; Gwon, E M; Koo, J Y; Kim, S G; Koizumi, A

    2004-01-01

    This paper describes the prediction of flux behavior in an ultrafiltration (UF) membrane system using a Kalman neuro training (KNT) network model. The experimental data were obtained by operating a pilot plant of hollow-fiber UF membranes on groundwater for 7 months. The network was trained using operating conditions such as inlet pressure and filtration duration, and feed water quality parameters including turbidity, temperature and UV254. Pre-processing of the raw data allowed the normalized input data to be used in sigmoid activation functions. The neural network architecture was structured by modifying the number of hidden layers, neurons and learning iterations. The KNT neural network structure with 3 layers and 5 neurons allowed a good prediction of permeate flux, with a correlation coefficient of 0.997 during the learning phase. The validity of the designed model was also evaluated with experimental data not used during the training phase, and the nonlinear flux behavior was accurately estimated in the testing phase with a correlation coefficient of 0.999 and a low prediction error. This good flux prediction can provide preliminary criteria for membrane design and for setting a proper cleaning cycle in membrane operation. The KNT artificial neural network is also expected to predict the variation of transmembrane pressure during filtration cycles and can be applied to the automation and control of full-scale treatment plants.
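
    A minimal sketch of the kind of small network described above: inputs (inlet pressure, filtration duration, turbidity, temperature, UV254) are normalized and passed through one sigmoid hidden layer of 5 neurons to predict permeate flux. The weight values and normalization ranges are placeholders, not the trained KNT model.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        # Assumed normalization ranges for the five inputs (illustrative only):
        # inlet pressure [bar], filtration time [min], turbidity [NTU],
        # temperature [deg C], UV254 [1/cm]
        LOW  = np.array([0.2,  0.0, 0.0,  5.0, 0.00])
        HIGH = np.array([2.0, 60.0, 5.0, 30.0, 0.10])

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(scale=0.5, size=(5, 5)), np.zeros(5)   # 5 hidden neurons
        W2, b2 = rng.normal(scale=0.5, size=5), 0.0

        def predict_flux(raw_inputs):
            """Normalize raw operating conditions and return a flux estimate."""
            x = (np.asarray(raw_inputs) - LOW) / (HIGH - LOW)
            h = sigmoid(x @ W1 + b1)
            return float(sigmoid(h @ W2 + b2))      # normalized flux in (0, 1)

        print(predict_flux([1.2, 30.0, 0.8, 18.0, 0.03]))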

  20. Collision detection in complex dynamic scenes using an LGMD-based visual neural network with feature enhancement.

    PubMed

    Yue, Shigang; Rind, F Claire

    2006-05-01

    The lobula giant movement detector (LGMD) is an identified neuron in the locust brain that responds most strongly to the images of an approaching object such as a predator. Its computational model can cope with unpredictable environments without using specific object recognition algorithms. In this paper, an LGMD-based neural network is proposed with a new feature enhancement mechanism to enhance the expanded edges of colliding objects via grouped excitation for collision detection with complex backgrounds. The isolated excitation caused by background detail will be filtered out by the new mechanism. Offline tests demonstrated the advantages of the presented LGMD-based neural network in complex backgrounds. Real time robotics experiments using the LGMD-based neural network as the only sensory system showed that the system worked reliably in a wide range of conditions; in particular, the robot was able to navigate in arenas with structured surrounds and complex backgrounds.
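
    The feature-enhancement mechanism described above, strengthening grouped (expanding-edge) excitation while filtering out isolated excitation from background detail, can be sketched as a simple neighborhood test on a frame-difference image. The 3x3 neighborhood, threshold values, and gain are illustrative assumptions, not the published LGMD model parameters.

        import numpy as np

        def enhance_grouped_excitation(prev_frame, curr_frame,
                                       excite_thresh=0.1, min_neighbors=3, gain=2.0):
            """Boost excitation that has enough excited neighbors; keep the rest weak."""
            excitation = np.abs(curr_frame - prev_frame)
            excited = (excitation > excite_thresh).astype(float)

            # Count excited 3x3 neighbors for every pixel (zero-padded borders).
            padded = np.pad(excited, 1)
            neighbors = sum(np.roll(np.roll(padded, dy, 0), dx, 1)[1:-1, 1:-1]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                            if (dy, dx) != (0, 0))

            grouped = excited * (neighbors >= min_neighbors)   # isolated excitation -> 0
            return excitation * (1.0 + (gain - 1.0) * grouped)

        rng = np.random.default_rng(0)
        f0 = rng.random((64, 64))
        f1 = f0.copy()
        f1[20:40, 20:40] += 0.5          # an expanding object edge region
        print(enhance_grouped_excitation(f0, f1).max())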

  1. A Comparison of Neural Networks and Fuzzy Logic Methods for Process Modeling

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.; Sala, Dorel M.; Berke, Laszlo

    1996-01-01

    The goal of this work was to analyze the potential of neural networks and fuzzy logic methods for developing approximate response surfaces for process modeling, that is, for mapping inputs to outputs. Structural response was chosen as an example. Each of the many methods surveyed is explained and the results are presented. Future research directions are also discussed.

  2. Neural Networks In Mining Sciences - General Overview And Some Representative Examples

    NASA Astrophysics Data System (ADS)

    Tadeusiewicz, Ryszard

    2015-12-01

    The many difficult problems that must now be addressed in the mining sciences make us search for ever newer and more efficient computer tools that can be used to solve them. Among the numerous tools of this type are the neural networks presented in this article, which, although not yet widely used in the mining sciences, are certainly worth consideration. Neural networks are a technique belonging to so-called artificial intelligence, and originate from attempts to model the structure and functioning of biological nervous systems. Initially constructed and tested exclusively out of scientific curiosity, as computer models of parts of the human brain, neural networks have become a surprisingly effective calculation tool in many areas: in technology, medicine, economics, and even the social sciences. Unfortunately, they are relatively rarely used in the mining sciences and mining technology. This article is intended to convince readers that neural networks can also be very useful in the mining sciences. It explains how modern neural networks are built, how they operate and how one can use them. The preliminary discussion presented in this paper can help the reader form an opinion on whether this is a tool with handy properties, useful to him, and what it might be useful for. Of course, the brief introduction to neural networks contained in this paper will not be enough for readers who are convinced by the arguments presented here and want to use neural networks; they will still need a considerable amount of detailed knowledge before they can independently create and build such networks and use them in practice. However, an interested reader who decides to try out the capabilities of neural networks will also find here links to references that will allow him to start exploring neural networks quickly and then work with this handy tool efficiently. This will be easy, because there are currently quite a few ready-made, easily available computer programs that allow their user to quickly and effortlessly create artificial neural networks, run them, train them and use them in practice. The key issue is how to use these networks in the mining sciences. That this is possible and desirable is shown by the convincing examples included in the second part of this study. From the very rich literature on the various applications of neural networks, we have selected several works that show how and which neural networks are used in the mining industry, and what has been achieved through their use. The review of applications will continue in a follow-up article, already submitted for publication in the journal "Archives of Mining Sciences". Studying these two articles together will provide sufficient knowledge for initial orientation in the issues considered here.

  3. Kinetic Energy of Hydrocarbons as a Function of Electron Density and Convolutional Neural Networks.

    PubMed

    Yao, Kun; Parkhill, John

    2016-03-08

    We demonstrate a convolutional neural network trained to reproduce the Kohn-Sham kinetic energy of hydrocarbons from an input electron density. The output of the network is used as a nonlocal correction to conventional local and semilocal kinetic functionals. We show that this approximation qualitatively reproduces Kohn-Sham potential energy surfaces when used with conventional exchange correlation functionals. The density which minimizes the total energy given by the functional is examined in detail. We identify several avenues to improve on this exploratory work, by reducing numerical noise and changing the structure of our functional. Finally we examine the features in the density learned by the neural network to anticipate the prospects of generalizing these models.
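
    A rough PyTorch sketch of the mapping described above: a convolutional network takes a discretized electron density and outputs a scalar nonlocal correction to a local/semilocal kinetic functional. The 3-D grid size, channel counts, and the Thomas-Fermi-like local term are assumptions for illustration; the paper's actual network and training data are not reproduced here.

        import torch
        import torch.nn as nn

        class KineticCorrectionCNN(nn.Module):
            """Electron density on a 3-D grid -> scalar kinetic-energy correction."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),                  # global pooling over the grid
                    nn.Flatten(),
                    nn.Linear(16, 1),                         # nonlocal correction
                )

            def forward(self, density):                       # density: (batch, 1, N, N, N)
                return self.net(density)

        model = KineticCorrectionCNN()
        density = torch.rand(2, 1, 24, 24, 24)                # two toy density grids
        local_term = (density ** (5.0 / 3.0)).sum(dim=(2, 3, 4))   # Thomas-Fermi-like piece
        print((local_term + model(density)).shape)            # torch.Size([2, 1])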

  4. Hybrid neural network for density limit disruption prediction and avoidance on J-TEXT tokamak

    NASA Astrophysics Data System (ADS)

    Zheng, W.; Hu, F. R.; Zhang, M.; Chen, Z. Y.; Zhao, X. Q.; Wang, X. L.; Shi, P.; Zhang, X. L.; Zhang, X. Q.; Zhou, Y. N.; Wei, Y. N.; Pan, Y.; J-TEXT team

    2018-05-01

    Increasing the plasma density is one of the key methods for achieving an efficient fusion reaction, and high-density operation is one of the hot topics in tokamak plasmas. Density limit disruptions remain an important issue for safe operation, and an effective density limit disruption prediction and avoidance system is the key to avoiding them during long-pulse steady-state operation. An artificial neural network has been developed for the prediction of density limit disruptions on the J-TEXT tokamak. The neural network has been improved from a simple multi-layer design to a hybrid two-stage structure. The first stage is a custom network which uses time series diagnostics as inputs to predict the plasma density, and the second stage is a three-layer feedforward neural network which predicts the probability of density limit disruptions. It is found that the hybrid neural network structure, combined with radiation profile information as an input, can significantly improve the prediction performance, especially the average warning time T_warn. In particular, T_warn is eight times better than that in previous work (Wang et al 2016 Plasma Phys. Control. Fusion 58 055014), increasing from 5 ms to 40 ms. The success rate for density limit disruptive shots is above 90%, while the false alarm rate for other shots is below 10%. Based on the density limit disruption prediction system and the real-time density feedback control system, an online density limit disruption avoidance system has been implemented on the J-TEXT tokamak.
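
    A minimal sketch of the two-stage structure described above: the first stage maps a window of time-series diagnostics to a density estimate, and the second, a small feedforward network, maps that estimate plus other features (for example a radiation-profile summary) to a disruption probability that is compared against an alarm threshold. The layer sizes, feature choices, and random weights are placeholders, not the J-TEXT system.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stage 1: custom network predicting plasma density from diagnostic windows.
        W_density = rng.normal(scale=0.1, size=5 * 20)     # 5 diagnostics x 20 samples

        def predict_density(diag_window):                  # (20, 5) time-series window
            return float(diag_window.reshape(-1) @ W_density)  # linear stand-in for stage 1

        # Stage 2: small feedforward net -> disruption probability.
        W1 = rng.normal(scale=0.3, size=(3, 8))
        W2 = rng.normal(scale=0.3, size=8)

        def disruption_probability(density, density_limit_proxy, radiation_peaking):
            x = np.array([density, density_limit_proxy, radiation_peaking])
            h = np.tanh(x @ W1)
            return 1.0 / (1.0 + np.exp(-(h @ W2)))

        window = rng.normal(size=(20, 5))
        n_e = predict_density(window)
        p = disruption_probability(n_e, 0.8, 1.2)
        print("ALARM" if p > 0.5 else "ok", round(p, 3))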

  5. Examples of Current and Future Uses of Neural-Net Image Processing for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2004-01-01

    Feed forward artificial neural networks are very convenient for performing correlated interpolation of pairs of complex noisy data sets as well as detecting small changes in image data. Image-to-image, image-to-variable and image-to-index applications have been tested at Glenn. Early demonstration applications are summarized including image-directed alignment of optics, tomography, flow-visualization control of wind-tunnel operations and structural-model-trained neural networks. A practical application is reviewed that employs neural-net detection of structural damage from interference fringe patterns. Both sensor-based and optics-only calibration procedures are available for this technique. These accomplishments have generated the knowledge necessary to suggest some other applications for NASA and Government programs. A tomography application is discussed to support Glenn's Icing Research tomography effort. The self-regularizing capability of a neural net is shown to predict the expected performance of the tomography geometry and to augment fast data processing. Other potential applications involve the quantum technologies. It may be possible to use a neural net as an image-to-image controller of an optical tweezers being used for diagnostics of isolated nano structures. The image-to-image transformation properties also offer the potential for simulating quantum computing. Computer resources are detailed for implementing the black box calibration features of the neural nets.

  6. Metastability and Inter-Band Frequency Modulation in Networks of Oscillating Spiking Neuron Populations

    PubMed Central

    Bhowmik, David; Shanahan, Murray

    2013-01-01

    Groups of neurons firing synchronously are hypothesized to underlie many cognitive functions such as attention, associative learning, memory, and sensory selection. Recent theories suggest that transient periods of synchronization and desynchronization provide a mechanism for dynamically integrating and forming coalitions of functionally related neural areas, and that at these times conditions are optimal for information transfer. Oscillating neural populations display a great amount of spectral complexity, with several rhythms temporally coexisting in different structures and interacting with each other. This paper explores inter-band frequency modulation between neural oscillators using models of quadratic integrate-and-fire neurons and Hodgkin-Huxley neurons. We vary the structural connectivity in a network of neural oscillators, assess the spectral complexity, and correlate the inter-band frequency modulation. We contrast this correlation against measures of metastable coalition entropy and synchrony. Our results show that oscillations in different neural populations modulate each other so as to change frequency, and that the interaction of these fluctuating frequencies in the network as a whole is able to drive different neural populations towards episodes of synchrony. Further to this, we locate an area in the connectivity space in which the system directs itself in this way so as to explore a large repertoire of synchronous coalitions. We suggest that such dynamics facilitate versatile exploration, integration, and communication between functionally related neural areas, and thereby supports sophisticated cognitive processing in the brain. PMID:23614040

  7. Prediction of hearing loss among the noise-exposed workers in a steel factory using artificial intelligence approach.

    PubMed

    Aliabadi, Mohsen; Farhadian, Maryam; Darvishi, Ebrahim

    2015-08-01

    Prediction of hearing loss in noisy workplaces is considered an important aspect of hearing conservation programs. Artificial intelligence, as a new approach, can be used to predict complex phenomena such as hearing loss. Using artificial neural networks, this study aims to present an empirical model for the prediction of the hearing loss threshold among noise-exposed workers. Two hundred and ten workers employed in a steel factory were chosen, and their occupational exposure histories were collected. To determine the hearing loss threshold, an audiometric test was carried out using a calibrated audiometer. Personal noise exposure was also measured using a noise dosimeter at the workers' workstations. Finally, data on five variables that can influence hearing loss were used for the development of the prediction model. Multilayer feed-forward neural networks with different structures were developed using MATLAB software. The network structures had one hidden layer with approximately 5 to 15 neurons. The best network, with one hidden layer and ten neurons, could accurately predict the hearing loss threshold with RMSE = 2.6 dB and R^2 = 0.89. The results also confirmed that neural networks could provide more accurate predictions than multiple regression. Since occupational hearing loss is frequently non-curable, accurate predictions can be used by occupational health experts to modify and improve noise exposure conditions.
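
    A minimal sketch of the kind of model described above: a feed-forward network with one hidden layer of ten neurons regressing the hearing threshold from five exposure-related inputs, evaluated with RMSE. The synthetic data, the specific input variables, and the scikit-learn implementation are assumptions for illustration only.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 210
        # Assumed inputs: age, exposure duration (yr), noise dose (dBA),
        # smoking (0/1), hearing protection use (0/1)
        X = np.column_stack([rng.uniform(20, 60, n), rng.uniform(1, 30, n),
                             rng.uniform(80, 100, n), rng.integers(0, 2, n),
                             rng.integers(0, 2, n)])
        y = 0.5 * X[:, 1] + 0.8 * (X[:, 2] - 80) + rng.normal(0, 3, n)   # toy threshold (dB)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                             random_state=0).fit(X_tr, y_tr)

        rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
        print(f"RMSE = {rmse:.1f} dB")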

  8. A neural-visualization IDS for honeynet data.

    PubMed

    Herrero, Álvaro; Zurutuza, Urko; Corchado, Emilio

    2012-04-01

    Neural intelligent systems can provide a visualization of network traffic for security staff, in order to reduce the widely known high false-positive rate associated with misuse-based Intrusion Detection Systems (IDSs). Unlike previous work, this study proposes unsupervised neural models that generate an intuitive visualization of the captured traffic rather than network statistics. These snapshots of network events are immensely useful for security personnel who monitor network behavior. The system is based on the use of different neural projection and unsupervised methods for the visual inspection of honeypot data, and may be seen as a complementary network security tool that sheds light on internal data structures through visual inspection of the traffic itself. Furthermore, it is intended to facilitate verification and assessment of the performance of Snort (a well-known and widely used misuse-based IDS) through the visualization of attack patterns. Empirical verification and comparison of the proposed projection methods are performed in a real domain, where two different case studies are defined and analyzed.

  9. Evolving RBF neural networks for adaptive soft-sensor design.

    PubMed

    Alexandridis, Alex

    2013-12-01

    This work presents an adaptive framework for building soft-sensors based on radial basis function (RBF) neural network models. The adaptive fuzzy means algorithm is utilized in order to evolve an RBF network, which approximates the unknown system based on input-output data from it. The methodology gradually builds the RBF network model, based on two separate levels of adaptation: On the first level, the structure of the hidden layer is modified by adding or deleting RBF centers, while on the second level, the synaptic weights are adjusted with the recursive least squares with exponential forgetting algorithm. The proposed approach is tested on two different systems, namely a simulated nonlinear DC Motor and a real industrial reactor. The results show that the produced soft-sensors can be successfully applied to model the two nonlinear systems. A comparison with two different adaptive modeling techniques, namely a dynamic evolving neural-fuzzy inference system (DENFIS) and neural networks trained with online backpropagation, highlights the advantages of the proposed methodology.
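
    The second adaptation level described above, adjusting the synaptic (output) weights with recursive least squares and exponential forgetting, can be sketched as follows; the Gaussian RBF layer with fixed centers and the forgetting-factor value are illustrative assumptions, not the adaptive fuzzy means algorithm itself.

        import numpy as np

        class RLSRBF:
            """RBF output weights updated online by RLS with exponential forgetting."""
            def __init__(self, centers, width=1.0, forgetting=0.98):
                self.centers = centers                 # fixed centers (structure level not shown)
                self.width = width
                self.lam = forgetting
                n = len(centers)
                self.w = np.zeros(n)
                self.P = 1e3 * np.eye(n)               # large initial covariance

            def _phi(self, x):
                d2 = np.sum((self.centers - x) ** 2, axis=1)
                return np.exp(-d2 / (2.0 * self.width ** 2))

            def update(self, x, y):
                phi = self._phi(x)
                k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
                self.w += k * (y - phi @ self.w)                     # weight correction
                self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
                return float(phi @ self.w)

        rng = np.random.default_rng(0)
        model = RLSRBF(centers=rng.uniform(-1, 1, size=(10, 1)))
        for _ in range(200):                           # track y = sin(3x) from streaming data
            x = rng.uniform(-1, 1, size=1)
            model.update(x, np.sin(3 * x[0]))
        print(round(model.update(np.array([0.2]), np.sin(0.6)), 3))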

  10. A novel prosodic-information synthesizer based on recurrent fuzzy neural network for the Chinese TTS system.

    PubMed

    Lin, Chin-Teng; Wu, Rui-Cheng; Chang, Jyh-Yeong; Liang, Sheng-Fu

    2004-02-01

    In this paper, a new technique for the Chinese text-to-speech (TTS) system is proposed. Our major effort focuses on the prosodic information generation. New methodologies for constructing fuzzy rules in a prosodic model simulating human pronunciation rules are developed. The proposed Recurrent Fuzzy Neural Network (RFNN) is a multilayer recurrent neural network (RNN) which integrates a Self-cOnstructing Neural Fuzzy Inference Network (SONFIN) into a recurrent connectionist structure. The RFNN can be functionally divided into two parts. The first part adopts the SONFIN as a prosodic model to explore the relationship between high-level linguistic features and prosodic information based on fuzzy inference rules. Compared with conventional neural networks, the SONFIN can always construct itself with an economical network size and a high learning speed. The second part employs a five-layer network to generate all prosodic parameters by directly using the prosodic fuzzy rules inferred from the first part as well as other important features of syllables. The TTS system combined with the proposed method can exhibit not only sandhi rules but also the other prosodic phenomena present in traditional TTS systems. Moreover, the proposed scheme can even discover some new rules about prosodic phrase structure. The performance of the proposed RFNN-based prosodic model is verified by embedding it into a Chinese TTS system with a Chinese monosyllable database based on the time-domain pitch synchronous overlap add (TD-PSOLA) method. Our experimental results show that the proposed RFNN can generate proper prosodic parameters including pitch means, pitch shapes, maximum energy levels, syllable duration, and pause duration. Some synthetic sounds are available online for demonstration.

  11. The Topographical Mapping in Drosophila Central Complex Network and Its Signal Routing

    PubMed Central

    Chang, Po-Yen; Su, Ta-Shun; Shih, Chi-Tin; Lo, Chung-Chuan

    2017-01-01

    Neural networks regulate brain functions by routing signals. Therefore, investigating the detailed organization of a neural circuit at the cellular levels is a crucial step toward understanding the neural mechanisms of brain functions. To study how a complicated neural circuit is organized, we analyzed recently published data on the neural circuit of the Drosophila central complex, a brain structure associated with a variety of functions including sensory integration and coordination of locomotion. We discovered that, except for a small number of “atypical” neuron types, the network structure formed by the identified 194 neuron types can be described by only a few simple mathematical rules. Specifically, the topological mapping formed by these neurons can be reconstructed by applying a generation matrix on a small set of initial neurons. By analyzing how information flows propagate with or without the atypical neurons, we found that while the general pattern of signal propagation in the central complex follows the simple topological mapping formed by the “typical” neurons, some atypical neurons can substantially re-route the signal pathways, implying specific roles of these neurons in sensory signal integration. The present study provides insights into the organization principle and signal integration in the central complex. PMID:28443014

  12. Forecasting Flare Activity Using Deep Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Hernandez, T.

    2017-12-01

    Current operational flare forecasting relies on human morphological analysis of active regions and the persistence of solar flare activity through time (i.e. that the Sun will continue to do what it is doing right now: flaring or remaining calm). In this talk we present the results of applying deep Convolutional Neural Networks (CNNs) to the problem of solar flare forecasting. CNNs operate by training a set of tunable spatial filters that, in combination with neural layer interconnectivity, allow CNNs to automatically identify significant spatial structures predictive for classification and regression problems. We will start by discussing the applicability and success rate of the approach, the advantages it has over non-automated forecasts, and how mining our trained neural network provides a fresh look into the mechanisms behind magnetic energy storage and release.

  13. Applications of artificial neural network in AIDS research and therapy.

    PubMed

    Sardari, S; Sardari, D

    2002-01-01

    In recent years considerable effort has been devoted to applying pattern recognition techniques to the complex task of data analysis in drug research. Artificial neural network (ANN) methodology is a modeling method with a great ability to adapt to a new situation, or control an unknown system, using data acquired in previous experiments. In this paper, a brief history of ANNs, the basic concepts behind the computing, the mathematical and algorithmic formulation of each of the techniques, and their developmental background are presented. Based on the abilities of ANNs in pattern recognition and estimation of system outputs from known inputs, the neural network can be considered a tool for molecular data analysis and interpretation. Analysis by neural networks improves classification accuracy and data quantification and reduces the number of analogues necessary for correct classification of biologically active compounds. Conformational analysis, quantifying the components in mixtures using NMR spectra, aqueous solubility prediction, and structure-activity correlation are among the reported applications of ANNs as a new modeling method. Ranging from drug design and discovery to structure and dosage form design, the potential pharmaceutical applications of the ANN methodology are significant. In the areas of clinical monitoring, molecular simulation, and the design of bioactive structures, ANNs could make the study of health and disease status possible and bring the predicted chemotherapeutic response closer to reality.

  14. Protein backbone and sidechain torsion angles predicted from NMR chemical shifts using artificial neural networks

    PubMed Central

    Shen, Yang; Bax, Ad

    2013-01-01

    A new program, TALOS-N, is introduced for predicting protein backbone torsion angles from NMR chemical shifts. The program relies far more extensively on the use of trained artificial neural networks than its predecessor, TALOS+. Validation on an independent set of proteins indicates that backbone torsion angles can be predicted for a larger, ≥ 90% fraction of the residues, with an error rate smaller than ca 3.5%, using an acceptance criterion that is nearly two-fold tighter than that used previously, and a root mean square difference between predicted and crystallographically observed (φ,ψ) torsion angles of ca 12°. TALOS-N also reports sidechain χ1 rotameric states for about 50% of the residues, and a consistency with reference structures of 89%. The program includes a neural network trained to identify secondary structure from residue sequence and chemical shifts. PMID:23728592

  15. A neural-network potential through charge equilibration for WS2: From clusters to sheets

    NASA Astrophysics Data System (ADS)

    Hafizi, Roohollah; Ghasemi, S. Alireza; Hashemifar, S. Javad; Akbarzadeh, Hadi

    2017-12-01

    In the present work, we use a machine learning method to construct a high-dimensional potential for tungsten disulfide using a charge equilibration neural-network technique. A training set of stoichiometric WS2 clusters is prepared in the framework of density functional theory. After training the neural-network potential, the reliability and transferability of the potential are verified by performing a crystal structure search on bulk phases of WS2 and by plotting energy-area curves of two different monolayers. Then, we use the potential to investigate various triangular nano-clusters and nanotubes of WS2. In the case of nano-structures, we argue that 2H atomic configurations with sulfur rich edges are thermodynamically more stable than the other investigated configurations. We also studied a number of WS2 nanotubes which revealed that 1T tubes with armchair chirality exhibit lower bending stiffness.
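
    A high-dimensional neural-network potential of this kind decomposes the total energy into atomic contributions predicted from local-environment descriptors; the charge-equilibration part is not reproduced here. The sketch below shows only the energy-summation idea, with made-up descriptors and a tiny per-element network rather than the trained WS2 potential.

    ```python
    # Sketch of the energy decomposition used by high-dimensional neural-network
    # potentials: E_total = sum over atoms of E_atom(descriptor of local environment).
    # Descriptors and network weights are synthetic; the charge-equilibration
    # scheme described in the record is not included.
    import numpy as np

    def atomic_energy(descriptor, W1, b1, w2, b2):
        """Tiny one-hidden-layer network mapping a descriptor vector to an energy."""
        h = np.tanh(W1 @ descriptor + b1)
        return float(w2 @ h + b2)

    rng = np.random.default_rng(0)
    n_desc, n_hidden = 8, 10
    params = {
        "W1": rng.normal(size=(2, n_hidden, n_desc)),  # one network per element (W, S)
        "b1": rng.normal(size=(2, n_hidden)),
        "w2": rng.normal(size=(2, n_hidden)),
        "b2": rng.normal(size=2),
    }

    # A hypothetical WS2 cluster: element index 0 = tungsten, 1 = sulfur.
    elements = np.array([0, 1, 1, 0, 1, 1])
    descriptors = rng.normal(size=(len(elements), n_desc))  # symmetry-function values

    E_total = sum(
        atomic_energy(descriptors[i], params["W1"][e], params["b1"][e],
                      params["w2"][e], params["b2"][e])
        for i, e in enumerate(elements)
    )
    print(f"Total energy (arbitrary units): {E_total:.3f}")
    ```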

  16. Baseline estimation in flame's spectra by using neural networks and robust statistics

    NASA Astrophysics Data System (ADS)

    Garces, Hugo; Arias, Luis; Rojas, Alejandro

    2014-09-01

    This work presents a baseline estimation method for flame spectra based on an artificial intelligence structure, a neural network, combining robust statistics with multivariate analysis to automatically identify measured wavelengths belonging to the continuous feature used for model adaptation, removing the restriction of having to measure the target baseline for training. The main contributions of this paper are: analyzing a flame spectra database by computing Jolliffe statistics from Principal Components Analysis to detect wavelengths not correlated with most of the measured data, which correspond to the baseline; systematically determining the optimal number of neurons in the hidden layers based on Akaike's Final Prediction Error; estimating the baseline over the full wavelength range of the sampled spectra; and training a neural network that generalizes the relation between measured and baseline spectra. The main application of this research is computing total radiation with baseline information, allowing diagnosis of the combustion process state for optimization at early stages.

  17. Intelligent-based Structural Damage Detection Model

    NASA Astrophysics Data System (ADS)

    Lee, Eric Wai Ming; Yu, Kin Fung

    2010-05-01

    This paper presents the application of a novel Artificial Neural Network (ANN) model for the diagnosis of structural damage. The ANN model, denoted as the GRNNFA, is a hybrid model combining the General Regression Neural Network Model (GRNN) and the Fuzzy ART (FA) model. It not only retains the important features of the GRNN and FA models (i.e. fast and stable network training and incremental growth of network structure) but also facilitates the removal of the noise embedded in the training samples. Structural damage alters the stiffness distribution of the structure and so changes the natural frequencies and mode shapes of the system. The measured modal parameter changes due to a particular damage are treated as patterns for that damage. The proposed GRNNFA model was trained to learn those patterns in order to detect the possible damage location of the structure. Simulated data are employed to verify and illustrate the procedures of the proposed ANN-based damage diagnosis methodology. The results of this study have demonstrated the feasibility of applying the GRNNFA model to structural damage diagnosis even when the training samples were noise-contaminated.

  18. Use of artificial neural networks to identify the origin of green macroalgae

    NASA Astrophysics Data System (ADS)

    Żbikowski, Radosław

    2011-08-01

    This study demonstrates application of artificial neural networks (ANNs) for identifying the origin of green macroalgae ( Enteromorpha sp. and Cladophora sp.) according to their concentrations of Cd, Cu, Ni, Zn, Mn, Pb, Na, Ca, K and Mg. Earlier studies confirmed that algae can be used for biomonitoring surveys of metal contaminants in coastal areas of the Southern Baltic. The same data sets were classified with the use of different structures of radial basis function (RBF) and multilayer perceptron (MLP) networks. The selected networks were able to classify the samples according to their geographical origin, i.e. Southern Baltic, Gulf of Gdańsk and Vistula Lagoon. Additionally in the case of macroalgae from the Gulf of Gdańsk, the networks enabled the discrimination of samples according to areas of contrasting levels of pollution. Hence this study shows that artificial neural networks can be a valuable tool in biomonitoring studies.

  19. Reinforced Adversarial Neural Computer for de Novo Molecular Design.

    PubMed

    Putin, Evgeny; Asadulaev, Arip; Ivanenkov, Yan; Aladinskiy, Vladimir; Sanchez-Lengeling, Benjamin; Aspuru-Guzik, Alán; Zhavoronkov, Alex

    2018-06-12

    In silico modeling is a crucial milestone in modern drug design and development. Although computer-aided approaches in this field are well-studied, the application of deep learning methods in this research area is still in its early stages. In this work, we present an original deep neural network (DNN) architecture named RANC (Reinforced Adversarial Neural Computer) for the de novo design of novel small-molecule organic structures based on the generative adversarial network (GAN) paradigm and reinforcement learning (RL). As a generator, RANC uses a differentiable neural computer (DNC), a category of neural network with increased generation capabilities due to the addition of an explicit memory bank, which can mitigate common problems found in adversarial settings. The comparative results show that RANC, trained on the SMILES string representation of the molecules, outperforms its first DNN-based counterpart, ORGANIC, on several metrics relevant to drug discovery: the number of unique structures, the number passing medicinal chemistry filters (MCFs) and the Muegge criteria, and the number with high QED scores. RANC is able to generate structures that match the distributions of the key chemical features/descriptors (e.g., MW, logP, TPSA) and the lengths of the SMILES strings in the training data set. Therefore, RANC can reasonably be regarded as a promising starting point for developing novel molecules with activity against different biological targets or pathways. In addition, this approach allows scientists to save time and covers a broad chemical space populated with novel and diverse compounds.

  20. Robust hepatic vessel segmentation using multi deep convolution network

    NASA Astrophysics Data System (ADS)

    Kitrungrotsakul, Titinunt; Han, Xian-Hua; Iwamoto, Yutaro; Foruzan, Amir Hossein; Lin, Lanfen; Chen, Yen-Wei

    2017-03-01

    Extraction of the blood vessels of an organ is a challenging task in medical image processing. It is difficult to obtain accurate vessel segmentation results even with manual labeling by humans. The difficulty of vessel segmentation lies in the complicated structure of blood vessels and their large variations, which make them hard to recognize. In this paper, we present a deep artificial neural network architecture to automatically segment the hepatic vessels from a computed tomography (CT) image. We propose a novel deep neural network (DNN) architecture for vessel segmentation from a medical CT volume, which consists of three deep convolution neural networks to extract features from different planes of the CT data. The three networks share features at the first convolution layer but separately learn their own features in the second layer. All three networks join again at the top layer. To validate the effectiveness and efficiency of our proposed method, we conduct experiments on 12 CT volumes, in which the training data are randomly generated from 5 CT volumes and the remaining 7 are used for testing. Our network yields an average Dice coefficient of 0.830, while a 3D deep convolution neural network yields around 0.7 and a multi-scale approach yields only 0.6.
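
    A hedged PyTorch sketch of the tri-planar idea described above is given below: three convolutional branches over the axial, sagittal, and coronal slices share the first convolution layer and are merged at the top. Layer sizes and the output head are illustrative choices, not the authors' architecture.

    ```python
    # Sketch of a tri-planar CNN: three 2-D slice inputs share the first
    # convolution, learn separate second-layer features, and are fused at the top.
    # Channel counts and the final head are illustrative choices, not the paper's.
    import torch
    import torch.nn as nn

    class TriPlanarNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.shared = nn.Conv2d(1, 16, kernel_size=3, padding=1)  # shared first layer
            self.branches = nn.ModuleList(
                [nn.Conv2d(16, 32, kernel_size=3, padding=1) for _ in range(3)]
            )
            self.head = nn.Sequential(                                # joint top layer
                nn.Conv2d(96, 32, kernel_size=1),
                nn.ReLU(),
                nn.Conv2d(32, 1, kernel_size=1),                      # vessel probability map
            )

        def forward(self, axial, sagittal, coronal):
            feats = []
            for plane, branch in zip((axial, sagittal, coronal), self.branches):
                feats.append(torch.relu(branch(torch.relu(self.shared(plane)))))
            return torch.sigmoid(self.head(torch.cat(feats, dim=1)))

    net = TriPlanarNet()
    patch = torch.randn(2, 1, 64, 64)        # batch of 2 synthetic 64x64 slices
    out = net(patch, patch, patch)
    print(out.shape)                         # torch.Size([2, 1, 64, 64])
    ```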

  1. fMRI Syntactic and Lexical Repetition Effects Reveal the Initial Stages of Learning a New Language.

    PubMed

    Weber, Kirsten; Christiansen, Morten H; Petersson, Karl Magnus; Indefrey, Peter; Hagoort, Peter

    2016-06-29

    When learning a new language, we build brain networks to process and represent the acquired words and syntax and integrate these with existing language representations. It is an open question whether the same or different neural mechanisms are involved in learning and processing a novel language compared with the native language(s). Here we investigated the neural repetition effects of repeating known and novel word orders while human subjects were in the early stages of learning a new language. Combining a miniature language with a syntactic priming paradigm, we examined the neural correlates of language learning on-line using functional magnetic resonance imaging. In left inferior frontal gyrus and posterior temporal cortex, the repetition of novel syntactic structures led to repetition enhancement, whereas repetition of known structures resulted in repetition suppression. Additional verb repetition led to an increase in the syntactic repetition enhancement effect in language-related brain regions. Similarly, the repetition of verbs led to repetition enhancement effects in areas related to lexical and semantic processing, an effect that continued to increase in a subset of these regions. Repetition enhancement might reflect a mechanism to build and strengthen a neural network to process novel syntactic structures and lexical items. By contrast, the observed repetition suppression points to overlapping neural mechanisms for native and new language constructions when these have sufficient structural similarities. Acquiring a second language entails learning how to interpret novel words and relations between words, and to integrate them with existing language knowledge. To investigate the brain mechanisms involved in this particularly human skill, we combined an artificial language learning task with a syntactic repetition paradigm. We show that the repetition of novel syntactic structures, as well as words in contexts, leads to repetition enhancement, whereas repetition of known structures results in repetition suppression. We thus propose that repetition enhancement might reflect a brain mechanism to build and strengthen a neural network to process novel syntactic regularities and novel words. Importantly, the results also indicate an overlap in neural mechanisms for native and new language constructions with sufficient structural similarities.

  2. Optimization of Training Sets for Neural-Net Processing of Characteristic Patterns from Vibrating Solids

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2001-01-01

    Artificial neural networks have been used for a number of years to process holography-generated characteristic patterns of vibrating structures. This technology depends critically on the selection and the conditioning of the training sets. A scaling operation called folding is discussed for conditioning training sets optimally for training feed-forward neural networks to process characteristic fringe patterns. Folding allows feed-forward nets to be trained easily to detect damage-induced vibration-displacement-distribution changes as small as 10 nm. A specific application to aerospace of neural-net processing of characteristic patterns is presented to motivate the conditioning and optimization effort.

  3. Artificial neural networks as quantum associative memory

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen; Schrock, Jonathan; Imam, Neena; Humble, Travis

    We present results related to the recall accuracy and capacity of Hopfield networks implemented on commercially available quantum annealers. The use of Hopfield networks and artificial neural networks as content-addressable memories offers robust storage and retrieval of classical information; however, implementation of these models using currently available quantum annealers faces several challenges: the limits of precision when setting synaptic weights, the effects of spurious spin-glass states and minor embedding of densely connected graphs into fixed-connectivity hardware. We consider neural networks which are less than fully-connected, and also consider neural networks which contain multiple sparsely connected clusters. We discuss the effect of weak edge dilution on the accuracy of memory recall, and discuss how the multiple clique structure affects the storage capacity. Our work focuses on storage of patterns which can be embedded into physical hardware containing n < 1000 qubits. This work was supported by the United States Department of Defense and used resources of the Computational Research and Development Programs at Oak Ridge National Laboratory under Contract No. DE-AC0500OR22725 with the U. S. Department of Energy.
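
    For readers unfamiliar with the classical model being mapped onto the annealer, a minimal Hopfield storage-and-recall sketch (Hebbian weights, asynchronous updates) follows; it runs on a CPU and says nothing about the quantum-annealing embedding itself, and all sizes are illustrative.

    ```python
    # Classical Hopfield network: Hebbian storage of binary patterns and
    # asynchronous recall from a corrupted cue. Purely illustrative; the
    # quantum-annealer embedding discussed in the record is not modeled.
    import numpy as np

    rng = np.random.default_rng(3)
    n, n_patterns = 64, 3
    patterns = rng.choice([-1, 1], size=(n_patterns, n))

    W = sum(np.outer(p, p) for p in patterns) / n   # Hebbian weight matrix
    np.fill_diagonal(W, 0.0)

    cue = patterns[0].copy()
    flip = rng.choice(n, size=10, replace=False)    # corrupt 10 of 64 bits
    cue[flip] *= -1

    state = cue.copy()
    for _ in range(5):                              # a few asynchronous sweeps
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1

    print("overlap with stored pattern:", int(state @ patterns[0]), "/", n)
    ```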

  4. Sustained Activity in Hierarchical Modular Neural Networks: Self-Organized Criticality and Oscillations

    PubMed Central

    Wang, Sheng-Jun; Hilgetag, Claus C.; Zhou, Changsong

    2010-01-01

    Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. In particular, they are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are at the state of self-organized criticality (SOC). We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. Previously, it was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We found that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and SOC, which are not present in the respective random networks. The mechanism underlying the sustained activity is that each dense module cannot sustain activity on its own, but displays SOC in the presence of weak perturbations. Therefore, the hierarchical modular networks provide the coupling among subsystems with SOC. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of critical states and the predictability and timing of oscillations for efficient information processing. PMID:21852971

  5. Development of dielectrophoresis MEMS device for PC12 cell patterning to elucidate nerve-network generation

    NASA Astrophysics Data System (ADS)

    Nakamachi, Eiji; Koga, Hirotaka; Morita, Yusuke; Yamamoto, Koji; Sakamoto, Hidetoshi

    2018-01-01

    We developed a PC12 cell trapping and patterning device by combining the dielectrophoresis (DEP) methodology and micro-electro-mechanical systems (MEMS) technology for time-lapse observation of morphological changes of the nerve network, to elucidate the mechanism of neural network generation. We succeeded in generating a neural network consisting of cell bodies, axons, and dendrites by using tetragonal and hexagonal cell patterning. Further, time-lapse observations were carried out to evaluate the axonal extension rate. The axon extended in the channel and reached the target cell body. We found that the shorter the PC12 cell distance, the shorter the axonal connection time in both tetragonal and hexagonal structures. After 48 hours of culture, the maximum success rate of network formation was 85%, in the case of the 40 μm distance tetragonal structure.

  6. Comparison of universal approximators incorporating partial monotonicity by structure.

    PubMed

    Minin, Alexey; Velikova, Marina; Lang, Bernhard; Daniels, Hennie

    2010-05-01

    Neural networks applied in control loops and safety-critical domains have to meet more requirements than just the overall best function approximation. On the one hand, a small approximation error is required; on the other hand, the smoothness and the monotonicity of selected input-output relations have to be guaranteed. Otherwise, the stability of most of the control laws is lost. In this article we compare two neural network-based approaches incorporating partial monotonicity by structure, namely the Monotonic Multi-Layer Perceptron (MONMLP) network and the Monotonic MIN-MAX (MONMM) network. We show the universal approximation capabilities of both types of network for partially monotone functions. On a number of datasets, we investigate the advantages and disadvantages of these approaches related to approximation performance, training of the model and convergence.
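
    The MIN-MAX construction guarantees monotonicity by composing linear units with sign-constrained weights through max and min operations; a hedged numpy sketch of that idea (forward pass only, no training) follows, with group sizes chosen arbitrarily and monotonicity assumed in all inputs for simplicity.

    ```python
    # Forward pass of a monotonic MIN-MAX style network: the output is the minimum
    # over groups of the maximum over linear units whose weights on the monotone
    # inputs are constrained to be non-negative. Training is omitted; shapes are
    # arbitrary illustrative choices.
    import numpy as np

    rng = np.random.default_rng(4)
    n_inputs, n_groups, units_per_group = 3, 4, 5

    # Non-negative weights enforce non-decreasing behaviour in every input.
    W = np.abs(rng.normal(size=(n_groups, units_per_group, n_inputs)))
    b = rng.normal(size=(n_groups, units_per_group))

    def monmm(x):
        group_vals = np.max(W @ x + b, axis=1)   # max over units inside each group
        return np.min(group_vals)                # min over groups

    x = np.array([0.2, -0.1, 0.5])
    x_larger = x + 0.3                           # increase every coordinate
    assert monmm(x_larger) >= monmm(x)           # monotonicity check
    print(monmm(x), monmm(x_larger))
    ```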

  7. A system of IAC neural networks as the basis for self-organization in a sociological dynamical system simulation.

    PubMed

    Duong, D V; Reilly, K D

    1995-10-01

    This sociological simulation uses the ideas of semiotics and symbolic interactionism to demonstrate how an appropriately developed associative memory in the minds of individuals on the microlevel can self-organize into macrolevel dissipative structures of societies such as racial cultural/economic classes, status symbols and fads. The associative memory used is based on an extension of the IAC neural network (the Interactive Activation and Competition network). Several IAC networks act together to form a society by virtue of their human-like properties of intuition and creativity. These properties give them the ability to create and understand signs, which lead to the macrolevel structures of society. This system is implemented in hierarchical object oriented container classes which facilitate change in deep structure. Graphs of general trends and an historical account of a simulation run of this dynamical system are presented.

  8. Self-organizing linear output map (SOLO): An artificial neural network suitable for hydrologic modeling and analysis

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Lin; Gupta, Hoshin V.; Gao, Xiaogang; Sorooshian, Soroosh; Imam, Bisher

    2002-12-01

    Artificial neural networks (ANNs) can be useful in the prediction of hydrologic variables, such as streamflow, particularly when the underlying processes have complex nonlinear interrelationships. However, conventional ANN structures suffer from network training issues that significantly limit their widespread application. This paper presents a multivariate ANN procedure entitled self-organizing linear output map (SOLO), whose structure has been designed for rapid, precise, and inexpensive estimation of network structure/parameters and system outputs. More important, SOLO provides features that facilitate insight into the underlying processes, thereby extending its usefulness beyond forecast applications as a tool for scientific investigations. These characteristics are demonstrated using a classic rainfall-runoff forecasting problem. Various aspects of model performance are evaluated in comparison with other commonly used modeling approaches, including multilayer feedforward ANNs, linear time series modeling, and conceptual rainfall-runoff modeling.
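
    SOLO couples a self-organizing classification of the input space with a linear output mapping per node; a simplified sketch of that two-stage idea is shown below, using k-means in place of the self-organizing feature map purely for brevity, with synthetic inputs standing in for rainfall-runoff data.

    ```python
    # Simplified two-stage sketch in the spirit of SOLO: partition the input space
    # into nodes, then fit a separate linear output map per node. K-means stands in
    # for the self-organizing feature map only to keep the example short.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)
    X = rng.uniform(-1, 1, size=(500, 2))            # e.g. rainfall-related inputs
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.05, size=500)

    nodes = KMeans(n_clusters=9, n_init=10, random_state=0).fit(X)
    labels = nodes.labels_
    local_models = {
        k: LinearRegression().fit(X[labels == k], y[labels == k]) for k in range(9)
    }

    def predict(x_new):
        k = nodes.predict(x_new.reshape(1, -1))[0]   # route to the nearest node
        return local_models[k].predict(x_new.reshape(1, -1))[0]

    print(predict(np.array([0.1, -0.4])))
    ```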

  9. Artificial Neural Network Based Group Contribution Method for Estimating Cetane and Octane Numbers of Hydrocarbons and Oxygenated Organic Compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kubic, William Louis; Jenkins, Rhodri W.; Moore, Cameron M.

    Chemical pathways for converting biomass into fuels produce compounds for which key physical and chemical property data are unavailable. We developed an artificial neural network based group contribution method for estimating cetane and octane numbers that captures the complex dependence of fuel properties of pure compounds on chemical structure and is statistically superior to current methods.

  10. Artificial Neural Network Based Group Contribution Method for Estimating Cetane and Octane Numbers of Hydrocarbons and Oxygenated Organic Compounds

    DOE PAGES

    Kubic, William Louis; Jenkins, Rhodri W.; Moore, Cameron M.; ...

    2017-09-28

    Chemical pathways for converting biomass into fuels produce compounds for which key physical and chemical property data are unavailable. We developed an artificial neural network based group contribution method for estimating cetane and octane numbers that captures the complex dependence of fuel properties of pure compounds on chemical structure and is statistically superior to current methods.
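
    In a group-contribution setting of this kind, the network input is a vector of functional-group counts for a molecule and the output is the property of interest. The sketch below only illustrates that input/output shape with synthetic group definitions, counts, and targets; it is not the published model or its training data.

    ```python
    # Sketch of an ANN-based group contribution model: functional-group counts in,
    # cetane number out. Group definitions, counts, and targets are synthetic;
    # this is not the published model or its training data.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(6)
    groups = ["CH3", "CH2", "CH", "OH", "C=C", "ester"]   # illustrative group set
    X = rng.integers(0, 6, size=(300, len(groups))).astype(float)
    true_contrib = np.array([3.0, 5.5, -2.0, -8.0, -4.0, 6.0])
    y = 15.0 + X @ true_contrib + rng.normal(scale=2.0, size=300)  # synthetic cetane numbers

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    model.fit(X, y)

    new_molecule = np.array([[2, 4, 1, 0, 1, 0]], dtype=float)     # hypothetical counts
    print(f"Estimated cetane number: {model.predict(new_molecule)[0]:.1f}")
    ```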

  11. An alternative approach for neural network evolution with a genetic algorithm: crossover by combinatorial optimization.

    PubMed

    García-Pedrajas, Nicolás; Ortiz-Boyer, Domingo; Hervás-Martínez, César

    2006-05-01

    In this work we present a new approach to crossover operator in the genetic evolution of neural networks. The most widely used evolutionary computation paradigm for neural network evolution is evolutionary programming. This paradigm is usually preferred due to the problems caused by the application of crossover to neural network evolution. However, crossover is the most innovative operator within the field of evolutionary computation. One of the most notorious problems with the application of crossover to neural networks is known as the permutation problem. This problem occurs due to the fact that the same network can be represented in a genetic coding by many different codifications. Our approach modifies the standard crossover operator taking into account the special features of the individuals to be mated. We present a new model for mating individuals that considers the structure of the hidden layer and redefines the crossover operator. As each hidden node represents a non-linear projection of the input variables, we approach the crossover as a problem on combinatorial optimization. We can formulate the problem as the extraction of a subset of near-optimal projections to create the hidden layer of the new network. This new approach is compared to a classical crossover in 25 real-world problems with an excellent performance. Moreover, the networks obtained are much smaller than those obtained with classical crossover operator.
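
    Read literally, the proposed crossover assembles a child's hidden layer by choosing a near-optimal subset of hidden-node projections drawn from both parents. The sketch below shows one very simplified way to do that (greedy selection by correlation with the residual plus a least-squares output refit); it is an illustrative stand-in, not the authors' combinatorial optimization procedure.

    ```python
    # Simplified illustration of crossover as hidden-unit subset selection: pool the
    # hidden units (input->hidden weight rows) of two parent networks and greedily
    # pick a subset whose activations best explain the target, then refit the
    # output layer. A stand-in for the paper's combinatorial optimization step.
    import numpy as np

    rng = np.random.default_rng(7)
    n_in, n_hidden, n_samples = 4, 6, 200
    X = rng.normal(size=(n_samples, n_in))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

    parent_a = rng.normal(size=(n_hidden, n_in))      # hidden-unit projections
    parent_b = rng.normal(size=(n_hidden, n_in))
    pool = np.vstack([parent_a, parent_b])            # candidate units from both parents

    def activations(units):
        return np.tanh(X @ units.T)

    chosen, residual = [], y.copy()
    for _ in range(n_hidden):                         # greedy subset selection
        H = activations(pool)
        scores = np.abs(H.T @ residual)
        scores[chosen] = -np.inf                      # do not pick a unit twice
        chosen.append(int(np.argmax(scores)))
        Hc = activations(pool[chosen])
        w, *_ = np.linalg.lstsq(Hc, y, rcond=None)    # refit linear output layer
        residual = y - Hc @ w

    child_hidden = pool[chosen]
    print("child hidden layer shape:", child_hidden.shape)
    ```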

  12. Efficient Cancer Detection Using Multiple Neural Networks.

    PubMed

    Shell, John; Gregory, William D

    2017-01-01

    The inspection of live excised tissue specimens to ascertain malignancy is a challenging task in dermatopathology and generally in histopathology. We introduce a portable desktop prototype device that provides highly accurate neural network classification of malignant and benign tissue. The handheld device collects 47 impedance data samples from 1 Hz to 32 MHz via tetrapolar blackened platinum electrodes. The data analysis was implemented with six different backpropagation neural networks (BNN). A data set consisting of 180 malignant and 180 benign breast tissue data files in an approved IRB study at the Aurora Medical Center, Milwaukee, WI, USA, were utilized as a neural network input. The BNN structure consisted of a multi-tiered consensus approach autonomously selecting four of six neural networks to determine a malignant or benign classification. The BNN analysis was then compared with the histology results with consistent sensitivity of 100% and a specificity of 100%. This implementation successfully relied solely on statistical variation between the benign and malignant impedance data and intricate neural network configuration. This device and BNN implementation provides a novel approach that could be a valuable tool to augment current medical practice assessment of the health of breast, squamous, and basal cell carcinoma and other excised tissue without requisite tissue specimen expertise. It has the potential to provide clinical management personnel with a fast non-invasive accurate assessment of biopsied or sectioned excised tissue in various clinical settings.
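
    The multi-tiered consensus idea (several independently trained back-propagation networks, with a selected subset voting on the final malignant/benign call) can be sketched as below; the impedance-like data, the number of networks, and the selection rule are illustrative stand-ins for the device's actual configuration.

    ```python
    # Sketch of a consensus ensemble of back-propagation classifiers: train several
    # networks on impedance-like features and take a majority vote among a selected
    # subset. Data, the number of networks, and the selection rule are illustrative.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(8)
    n_samples, n_freqs = 360, 47                   # 47 impedance samples per specimen
    X = rng.normal(size=(n_samples, n_freqs))
    y = (X[:, :5].mean(axis=1) > 0).astype(int)    # synthetic benign(0)/malignant(1) labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    nets = [
        MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=seed).fit(X_tr, y_tr)
        for seed in range(6)                       # six independently trained BNNs
    ]

    # Keep the four networks with the best training accuracy, then majority-vote.
    ranked = sorted(nets, key=lambda m: m.score(X_tr, y_tr), reverse=True)[:4]
    votes = np.stack([m.predict(X_te) for m in ranked])
    consensus = (votes.sum(axis=0) >= 2).astype(int)   # ties counted as malignant
    print("consensus accuracy:", (consensus == y_te).mean())
    ```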

  13. Efficient Cancer Detection Using Multiple Neural Networks

    PubMed Central

    Gregory, William D.

    2017-01-01

    The inspection of live excised tissue specimens to ascertain malignancy is a challenging task in dermatopathology and generally in histopathology. We introduce a portable desktop prototype device that provides highly accurate neural network classification of malignant and benign tissue. The handheld device collects 47 impedance data samples from 1 Hz to 32 MHz via tetrapolar blackened platinum electrodes. The data analysis was implemented with six different backpropagation neural networks (BNN). A data set consisting of 180 malignant and 180 benign breast tissue data files in an approved IRB study at the Aurora Medical Center, Milwaukee, WI, USA, were utilized as a neural network input. The BNN structure consisted of a multi-tiered consensus approach autonomously selecting four of six neural networks to determine a malignant or benign classification. The BNN analysis was then compared with the histology results with consistent sensitivity of 100% and a specificity of 100%. This implementation successfully relied solely on statistical variation between the benign and malignant impedance data and intricate neural network configuration. This device and BNN implementation provides a novel approach that could be a valuable tool to augment current medical practice assessment of the health of breast, squamous, and basal cell carcinoma and other excised tissue without requisite tissue specimen expertise. It has the potential to provide clinical management personnel with a fast non-invasive accurate assessment of biopsied or sectioned excised tissue in various clinical settings. PMID:29282435

  14. Neural control and transient analysis of the LCL-type resonant converter

    NASA Astrophysics Data System (ADS)

    Zouggar, S.; Nait Charif, H.; Azizi, M.

    2000-07-01

    This paper proposes a generalised inverse learning structure to control the LCL converter. A feedforward neural network is trained to act as an inverse model of the LCL converter then both are cascaded such that the composed system results in an identity mapping between desired response and the LCL output voltage. Using the large signal model, we analyse the transient output response of the controlled LCL converter in the case of large variation of the load. The simulation results show the efficiency of using neural networks to regulate the LCL converter.

  15. Real-Time Adaptive Color Segmentation by Neural Networks

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2004-01-01

    Artificial neural networks that would utilize the cascade error projection (CEP) algorithm have been proposed as a means of autonomous, real-time, adaptive color segmentation of images that change with time. In the original intended application, such a neural network would be used to analyze digitized color video images of terrain on a remote planet as viewed from an uninhabited spacecraft approaching the planet. During descent toward the surface of the planet, information on the segmentation of the images into differently colored areas would be updated adaptively in real time to capture changes in contrast, brightness, and resolution, all in an effort to identify a safe and scientifically productive landing site and provide control feedback to steer the spacecraft toward that site. Potential terrestrial applications include monitoring images of crops to detect insect invasions and monitoring of buildings and other facilities to detect intruders. The CEP algorithm is reliable and is well suited to implementation in very-large-scale integrated (VLSI) circuitry. It was chosen over other neural-network learning algorithms because it is better suited to real-time learning: It provides a self-evolving neural-network structure, requires fewer iterations to converge, and is more tolerant to low resolution (that is, fewer bits) in the quantization of neural-network synaptic weights. Consequently, a CEP neural network learns relatively quickly, and the circuitry needed to implement it is relatively simple. Like other neural networks, a CEP neural network includes an input layer, hidden units, and output units. As in other neural networks, a CEP network is presented with a succession of input training patterns, giving rise to a set of outputs that are compared with the desired outputs. Also as in other neural networks, the synaptic weights are updated iteratively in an effort to bring the outputs closer to target values. A distinctive feature of the CEP neural network and algorithm is that each update of synaptic weights takes place in conjunction with the addition of another hidden unit, which then remains in place as still other hidden units are added on subsequent iterations. For a given training pattern, the synaptic weight between (1) the inputs and the previously added hidden units and (2) the newly added hidden unit is updated by an amount proportional to the partial derivative of a quadratic error function with respect to the synaptic weight. The synaptic weight between the newly added hidden unit and each output unit is given by a more complex function that involves the errors between the outputs and their target values, the transfer functions (hyperbolic tangents) of the neural units, and the derivatives of the transfer functions.
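
    A rough sketch of the constructive aspect described above (freeze what has been learned, add one new hidden unit per iteration fed by the inputs and all previous hidden units, and refit the output layer) is given below in a cascade-correlation-like style; it illustrates the general idea only and does not reproduce the CEP weight-update formulas.

    ```python
    # Constructive-network sketch in the spirit of the description above: hidden
    # units are added one at a time, each fed by the inputs and all previously
    # added hidden units, and the output weights are refit after each addition.
    # The actual CEP weight-update formulas are not reproduced here.
    import numpy as np

    rng = np.random.default_rng(9)
    X = rng.uniform(-1, 1, size=(400, 3))            # e.g. pixel colour features
    y = np.tanh(X[:, 0] - 2 * X[:, 1] * X[:, 2])     # synthetic target

    features = X.copy()                              # grows as hidden units are added
    for step in range(8):                            # add 8 hidden units
        w_in = rng.normal(size=features.shape[1])    # weights from inputs + prior hidden units
        new_hidden = np.tanh(features @ w_in)        # the new unit stays fixed afterwards
        features = np.column_stack([features, new_hidden])
        out_w, *_ = np.linalg.lstsq(features, y, rcond=None)   # refit output layer
        err = np.mean((features @ out_w - y) ** 2)
        print(f"hidden units: {step + 1}, mse: {err:.4f}")
    ```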

  16. Aeroelasticity of morphing wings using neural networks

    NASA Astrophysics Data System (ADS)

    Natarajan, Anand

    In this dissertation, neural networks are designed to effectively model static non-linear aeroelastic problems in adaptive structures and linear dynamic aeroelastic systems with time varying stiffness. The use of adaptive materials in aircraft wings allows for the change of the contour or the configuration of a wing (morphing) in flight. The use of smart materials, to accomplish these deformations, can imply that the stiffness of the wing with a morphing contour changes as the contour changes. For a rapidly oscillating body in a fluid field, continuously adapting structural parameters may render the wing to behave as a time variant system. Even the internal spars/ribs of the aircraft wing which define the wing stiffness can be made adaptive, that is, their stiffness can be made to vary with time. The immediate effect on the structural dynamics of the wing, is that, the wing motion is governed by a differential equation with time varying coefficients. The study of this concept of a time varying torsional stiffness, made possible by the use of active materials and adaptive spars, in the dynamic aeroelastic behavior of an adaptable airfoil is performed here. Another type of aeroelastic problem of an adaptive structure that is investigated here, is the shape control of an adaptive bump situated on the leading edge of an airfoil. Such a bump is useful in achieving flow separation control for lateral directional maneuverability of the aircraft. Since actuators are being used to create this bump on the wing surface, the energy required to do so needs to be minimized. The adverse pressure drag as a result of this bump needs to be controlled so that the loss in lift over the wing is made minimal. The design of such a "spoiler bump" on the surface of the airfoil is an optimization problem of maximizing pressure drag due to flow separation while minimizing the loss in lift and energy required to deform the bump. One neural network is trained using the CFD code FLUENT to represent the aerodynamic loading over the bump. A second neural network is trained for calculating the actuator loads, bump displacement and lift, drag forces over the airfoil using the finite element solver, ANSYS and the previously trained neural network. This non-linear aeroelastic model of the deforming bump on an airfoil surface using neural networks can serve as a fore-runner for other non-linear aeroelastic problems.

  17. A Neural Network Aero Design System for Advanced Turbo-Engines

    NASA Technical Reports Server (NTRS)

    Sanz, Jose M.

    1999-01-01

    An inverse design method calculates the blade shape that produces a prescribed input pressure distribution. By controlling this input pressure distribution the aerodynamic design objectives can easily be met. Because of the intrinsic relationship between pressure distribution and airfoil physical properties, a Neural Network can be trained to choose the optimal pressure distribution that would meet a set of physical requirements. Neural network systems have been attempted in the context of direct design methods. From properties ascribed to a set of blades the neural network is trained to infer the properties of an 'interpolated' blade shape. The problem is that, especially in transonic regimes where we deal with intrinsically non linear and ill posed problems, small perturbations of the blade shape can produce very large variations of the flow parameters. It is very unlikely that, under these circumstances, a neural network will be able to find the proper solution. The unique situation in the present method is that the neural network can be trained to extract the required input pressure distribution from a database of pressure distributions while the inverse method will still compute the exact blade shape that corresponds to this 'interpolated' input pressure distribution. In other words, the interpolation process is transferred to a smoother problem, namely, finding what pressure distribution would produce the required flow conditions and, once this is done, the inverse method will compute the exact solution for this problem. The use of neural network is, in this context, highly related to the use of proper optimization techniques. The optimization is used essentially as an automation procedure to force the input pressure distributions to achieve the required aero and structural design parameters. A multilayered feed forward network with back-propagation is used to train the system for pattern association and classification.

  18. Extraction of texture features with a multiresolution neural network

    NASA Astrophysics Data System (ADS)

    Lepage, Richard; Laurendeau, Denis; Gagnon, Roger A.

    1992-09-01

    Texture is an important surface characteristic. Many industrial materials such as wood, textile, or paper are best characterized by their texture. Detection of defects occurring on such materials, or classification for quality control and matching, can be carried out through careful texture analysis. A system for the classification of pieces of wood used in the furniture industry is proposed. This paper is concerned with a neural network implementation of the feature extraction and classification components of the proposed system. Texture appears differently depending on the spatial scale at which it is observed. A complete description of a texture thus implies an analysis at several spatial scales. We propose a compact pyramidal representation of the input image for multiresolution analysis. The feature extraction system is implemented on a multilayer artificial neural network. Each level of the pyramid, which is a representation of the input image at a given spatial resolution scale, is mapped into a layer of the neural network. A full resolution texture image is input at the base of the pyramid and a representation of the texture image at multiple resolutions is generated by the feedforward pyramid structure of the neural network. The receptive field of each neuron at a given pyramid level is preprogrammed as a discrete Gaussian low-pass filter. Meaningful characteristics of the textured image must be extracted if good resolving power of the classifier is to be achieved. Local dominant orientation is the principal feature which is extracted from the textured image. Local edge orientation is computed with a Sobel mask at four orientation angles (multiples of π/4). The resulting intrinsic image, that is, the local dominant orientation image, is fed to the texture classification neural network. The classification network is a three-layer feedforward back-propagation neural network.
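
    The two ingredients named above, a Gaussian low-pass pyramid of the texture image and a local dominant-orientation map computed from Sobel responses, can be sketched as follows; the pyramid depth, filter, and orientation binning are illustrative choices, and the classification network is omitted.

    ```python
    # Sketch of the feature-extraction stage: build a small Gaussian pyramid of a
    # texture image and compute a local dominant-orientation map from Sobel
    # gradients at each level. The classification network itself is omitted.
    import numpy as np
    from scipy import ndimage

    def gaussian_pyramid(image, levels=3, sigma=1.0):
        pyramid = [image]
        for _ in range(levels - 1):
            blurred = ndimage.gaussian_filter(pyramid[-1], sigma)
            pyramid.append(blurred[::2, ::2])        # low-pass then subsample
        return pyramid

    def dominant_orientation(image, n_bins=4):
        gx = ndimage.sobel(image, axis=1)
        gy = ndimage.sobel(image, axis=0)
        angle = np.arctan2(gy, gx) % np.pi           # orientation in [0, pi)
        return np.floor(angle / (np.pi / n_bins)).astype(int)  # quantize to multiples of pi/4

    texture = np.random.default_rng(10).random((64, 64))  # stand-in for a wood texture patch
    for level, img in enumerate(gaussian_pyramid(texture)):
        orient = dominant_orientation(img)
        print(f"level {level}: image {img.shape}, orientation bins "
              f"{np.bincount(orient.ravel(), minlength=4)}")
    ```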

  19. A class of convergent neural network dynamics

    NASA Astrophysics Data System (ADS)

    Fiedler, Bernold; Gedeon, Tomáš

    1998-01-01

    We consider a class of systems of differential equations in Rn which exhibits convergent dynamics. We find a Lyapunov function and show that every bounded trajectory converges to the set of equilibria. Our result generalizes the results of Cohen and Grossberg (1983) for convergent neural networks. It replaces the symmetry assumption on the matrix of weights by the assumption on the structure of the connections in the neural network. We prove the convergence result also for a large class of Lotka-Volterra systems. These are naturally defined on the closed positive orthant. We show that there are no heteroclinic cycles on the boundary of the positive orthant for the systems in this class.
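
    For reference, the convergent systems generalized here are usually written in the Cohen-Grossberg form; the display below recalls that standard textbook form and its classical Lyapunov function under the usual symmetry assumption on the weights, in assumed standard notation rather than notation quoted from the paper.

    ```latex
    % Classical Cohen-Grossberg system and Lyapunov function (standard notation,
    % assuming symmetric weights c_{jk}); the paper above replaces the symmetry
    % assumption by a condition on the structure of the connections.
    \begin{align}
      \dot{x}_i &= a_i(x_i)\Bigl[b_i(x_i) - \sum_{j=1}^{n} c_{ij}\, d_j(x_j)\Bigr],
      \qquad i = 1,\dots,n, \\
      V(x) &= -\sum_{i=1}^{n} \int_{0}^{x_i} b_i(s)\, d_i'(s)\, ds
              \;+\; \tfrac{1}{2} \sum_{j,k=1}^{n} c_{jk}\, d_j(x_j)\, d_k(x_k).
    \end{align}
    ```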

  20. Geometry correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network

    NASA Astrophysics Data System (ADS)

    Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao

    2018-03-01

    Aiming at the disadvantages of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed. An adaptive genetic algorithm (AGA) and an RBF neural network are introduced into this algorithm. Combined with the geometry correction principle for UAV remote sensing images, the algorithm and solving steps of AGA-RBF are presented in order to realize geometry correction for UAV remote sensing. Correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network with the AGA and the LMS algorithm, respectively. Finally, experiments show that the AGA-RBF algorithm has the advantages of high correction accuracy, high running speed, and strong generalization ability.

  1. Transversal homoclinic orbits in a transiently chaotic neural network.

    PubMed

    Chen, Shyan-Shiou; Shih, Chih-Wen

    2002-09-01

    We study the existence of snap-back repellers, hence the existence of transversal homoclinic orbits in a discrete-time neural network. Chaotic behaviors for the network system in the sense of Li and Yorke or Marotto can then be concluded. The result is established by analyzing the structures of the system and allocating suitable parameters in constructing the fixed points and their pre-images for the system. The investigation provides a theoretical confirmation on the scenario of transient chaos for the system. All the parameter conditions for the theory can be examined numerically. The numerical ranges for the parameters which yield chaotic dynamics and convergent dynamics provide significant information in the annealing process in solving combinatorial optimization problems using this transiently chaotic neural network.

  2. Integration of Online Parameter Identification and Neural Network for In-Flight Adaptive Control

    NASA Technical Reports Server (NTRS)

    Hageman, Jacob J.; Smith, Mark S.; Stachowiak, Susan

    2003-01-01

    An indirect adaptive system has been constructed for robust control of an aircraft with uncertain aerodynamic characteristics. This system consists of a multilayer perceptron pre-trained neural network, online stability and control derivative identification, a dynamic cell structure online learning neural network, and a model following control system based on the stochastic optimal feedforward and feedback technique. The pre-trained neural network and model following control system have been flight-tested, but the online parameter identification and online learning neural network are new additions used for in-flight adaptation of the control system model. A description of the modification and integration of these two stand-alone software packages into the complete system in preparation for initial flight tests is presented. Open-loop results using both simulation and flight data, as well as closed-loop performance of the complete system in a nonlinear, six-degree-of-freedom, flight validated simulation, are analyzed. Results show that this online learning system, in contrast to the nonlearning system, has the ability to adapt to changes in aerodynamic characteristics in a real-time, closed-loop, piloted simulation, resulting in improved flying qualities.

  3. Probabilistic Models and Generative Neural Networks: Towards an Unified Framework for Modeling Normal and Impaired Neurocognitive Functions

    PubMed Central

    Testolin, Alberto; Zorzi, Marco

    2016-01-01

    Connectionist models can be characterized within the more general framework of probabilistic graphical models, which make it possible to efficiently describe complex statistical distributions involving a large number of interacting variables. This integration allows building more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions to investigate neuropsychological disorders within this approach. Though further efforts are required in order to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage. PMID:27468262
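
    One concrete instance of a stochastic generative neural network of the kind discussed here is the restricted Boltzmann machine; a minimal one-step contrastive-divergence (CD-1) training sketch on synthetic binary data follows, purely as an illustration of the model class rather than of the specific cognitive models reviewed.

    ```python
    # Minimal restricted Boltzmann machine trained with one-step contrastive
    # divergence (CD-1) on synthetic binary data, as a concrete example of the
    # stochastic generative networks discussed in the record.
    import numpy as np

    rng = np.random.default_rng(11)
    n_visible, n_hidden, lr = 12, 8, 0.05
    W = 0.01 * rng.normal(size=(n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    data = (rng.random((500, n_visible)) < 0.3).astype(float)   # synthetic binary patterns

    for epoch in range(20):
        for v0 in data:
            ph0 = sigmoid(v0 @ W + b_h)                # hidden probabilities given data
            h0 = (rng.random(n_hidden) < ph0).astype(float)
            pv1 = sigmoid(h0 @ W.T + b_v)              # one-step reconstruction
            v1 = (rng.random(n_visible) < pv1).astype(float)
            ph1 = sigmoid(v1 @ W + b_h)
            W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))   # CD-1 updates
            b_v += lr * (v0 - v1)
            b_h += lr * (ph0 - ph1)

    recon = sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)
    print("reconstruction error:", np.mean((data - recon) ** 2))
    ```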

  4. Sentiment analysis: a comparison of deep learning neural network algorithm with SVM and naïve Bayes for Indonesian text

    NASA Astrophysics Data System (ADS)

    Calvin Frans Mariel, Wahyu; Mariyah, Siti; Pramana, Setia

    2018-03-01

    Deep learning is a new era of machine learning techniques that essentially imitate the structure and function of the human brain. It is a development of deeper Artificial Neural Networks (ANNs) that use more than one hidden layer. A Deep Learning Neural Network has a great ability to recognize patterns from various data types such as pictures, audio, text, and many more. In this paper, the authors try to measure this algorithm's ability by applying it to text classification. The classification task herein is done by considering the sentiment content of a text, which is also called sentiment analysis. By using several combinations of text preprocessing and feature extraction techniques, we aim to compare the precise modelling results of the Deep Learning Neural Network with the other two commonly used algorithms, Naïve Bayes and the Support Vector Machine (SVM). This algorithm comparison uses Indonesian text data with balanced and unbalanced sentiment composition. Based on the experimental simulation, the Deep Learning Neural Network clearly outperforms Naïve Bayes and SVM and offers a better F1 score; the feature extraction technique that most improves the modelling result is the bigram.

  5. Identification of Stimulated Sites Using Artificial Neural Networks Based on Transcranial Magnetic Stimulation-Elicited Motor Evoked Potentials and Finger Forces

    NASA Astrophysics Data System (ADS)

    Fukuda, Hiroshi; Odagaki, Masato; Hiwaki, Osamu

    Motor evoked potentials (MEPs) elicited by transcranial magnetic stimulation (TMS) over the primary motor cortex (M1) vary in their amplitude from trial to trial. To investigate the functions of motor cortex by TMS, it is necessary to confirm the causal relationship between stimulated sites and variable MEPs. We created artificial neural networks to classify sets of variable MEP signals and finger forces into the corresponding stimulated sites. We conducted TMS at three different positions over M1 and measured MEPs of hand and forearm muscles and forces of the index finger in four subjects. We estimated the sites within motor cortex stimulated by TMS based on cortical columnar structure and nerve excitation properties. Finally, we tried to classify the various MEPs and finger forces into three groups using artificial neural networks. MEPs and finger forces varied from trial to trial, even if the stimulating coil was fixed on the subject's head. Our proposed neural network was able to identify the MEPs and finger forces with the corresponding stimulated sites in M1. We proposed the artificial neural networks to confirm the TMS-stimulated sites using various MEPs and evoked finger forces.

  6. Probabilistic Models and Generative Neural Networks: Towards an Unified Framework for Modeling Normal and Impaired Neurocognitive Functions.

    PubMed

    Testolin, Alberto; Zorzi, Marco

    2016-01-01

    Connectionist models can be characterized within the more general framework of probabilistic graphical models, which make it possible to efficiently describe complex statistical distributions involving a large number of interacting variables. This integration allows building more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions to investigate neuropsychological disorders within this approach. Though further efforts are required in order to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage.

  7. Detecting event-related changes in organizational networks using optimized neural network models.

    PubMed

    Li, Ze; Sun, Duoyong; Zhu, Renqi; Lin, Zihan

    2017-01-01

    Organizational external behavior changes are caused by the internal structure and interactions. External behaviors are also known as the behavioral events of an organization. Detecting event-related changes in organizational networks could efficiently be used to monitor the dynamics of organizational behaviors. Although many different methods have been used to detect changes in organizational networks, these methods usually ignore the correlation between the internal structure and external events. Event-related change detection considers the correlation and could be used for event recognition based on social network modeling and supervised classification. Detecting event-related changes could be effectively useful in providing early warnings and faster responses to both positive and negative organizational activities. In this study, event-related change in an organizational network was defined, and artificial neural network models were used to quantitatively determine whether and when a change occurred. To achieve a higher accuracy, Back Propagation Neural Networks (BPNNs) were optimized using Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). We showed the feasibility of the proposed method by comparing its performance with that of other methods using two cases. The results suggested that the proposed method could identify organizational events based on a correlation between the organizational networks and events. The results also suggested that the proposed method not only has a higher precision but also has a better robustness than the previously used techniques.
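
    A very small sketch of the GA-optimization idea described above (evolving the weight vector of a fixed-architecture network by selection, crossover, and mutation against a classification-accuracy fitness) is shown below; the population size, operators, and rates are arbitrary illustrative choices, the data are synthetic, and the PSO variant is not shown.

    ```python
    # Sketch of optimizing a fixed-architecture neural network's weights with a
    # genetic algorithm: fitness is classification accuracy on synthetic data.
    # Population size, operators, and rates are illustrative; PSO is not shown.
    import numpy as np

    rng = np.random.default_rng(12)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)   # synthetic "event" labels

    n_in, n_hid = 4, 6
    n_weights = n_in * n_hid + n_hid + n_hid + 1     # W1, b1, w2, b2 flattened

    def forward(w, X):
        W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
        b1 = w[n_in * n_hid:n_in * n_hid + n_hid]
        w2 = w[n_in * n_hid + n_hid:-1]
        b2 = w[-1]
        h = np.tanh(X @ W1 + b1)
        return (h @ w2 + b2 > 0).astype(int)

    def fitness(w):
        return (forward(w, X) == y).mean()

    pop = rng.normal(size=(40, n_weights))
    for gen in range(60):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-20:]]                    # keep the best half
        children = []
        for _ in range(20):
            a, b = parents[rng.integers(20)], parents[rng.integers(20)]
            mask = rng.random(n_weights) < 0.5                     # uniform crossover
            child = np.where(mask, a, b) + rng.normal(scale=0.1, size=n_weights)  # mutation
            children.append(child)
        pop = np.vstack([parents, children])

    print("best accuracy:", max(fitness(ind) for ind in pop))
    ```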

  8. Detecting event-related changes in organizational networks using optimized neural network models

    PubMed Central

    Sun, Duoyong; Zhu, Renqi; Lin, Zihan

    2017-01-01

    Organizational external behavior changes are caused by the internal structure and interactions. External behaviors are also known as the behavioral events of an organization. Detecting event-related changes in organizational networks could efficiently be used to monitor the dynamics of organizational behaviors. Although many different methods have been used to detect changes in organizational networks, these methods usually ignore the correlation between the internal structure and external events. Event-related change detection considers this correlation and could be used for event recognition based on social network modeling and supervised classification. Detecting event-related changes could be effective in providing early warnings and faster responses to both positive and negative organizational activities. In this study, event-related change in an organizational network was defined, and artificial neural network models were used to quantitatively determine whether and when a change occurred. To achieve a higher accuracy, Back Propagation Neural Networks (BPNNs) were optimized using Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). We showed the feasibility of the proposed method by comparing its performance with that of other methods using two cases. The results suggested that the proposed method could identify organizational events based on the correlation between the organizational networks and events. The results also suggested that the proposed method not only has higher precision but also greater robustness than previously used techniques. PMID:29190799

  9. Human Fetal Brain Connectome: Structural Network Development from Middle Fetal Stage to Birth

    PubMed Central

    Song, Limei; Mishra, Virendra; Ouyang, Minhui; Peng, Qinmu; Slinger, Michelle; Liu, Shuwei; Huang, Hao

    2017-01-01

    Complicated molecular and cellular processes take place in a spatiotemporally heterogeneous and precisely regulated pattern in the human fetal brain, yielding not only dramatic morphological and microstructural changes, but also macroscale connectomic transitions. As the underlying substrate of the fetal brain structural network, both dynamic neuronal migration pathways and rapidly developing fetal white matter (WM) fibers could fundamentally reshape the early fetal brain connectome. Quantifying structural connectome development can not only shed light on the brain reconfiguration in this critical yet rarely studied developmental period, but also reveal alterations of the connectome under neuropathological conditions. However, transition of the structural connectome from the mid-fetal stage to birth is not yet known. The contribution of different types of neural fibers to the structural network in the mid-fetal brain is not known, either. In this study, diffusion tensor magnetic resonance imaging (DT-MRI or DTI) of 10 fetal brain specimens at the age of 20 postmenstrual weeks (PMW), 12 in vivo brains at 35 PMW, and 12 in vivo brains at term (40 PMW) were acquired. The structural connectome of each brain was established with evenly parcellated cortical regions as network nodes and traced fiber pathways based on DTI tractography as network edges. Two groups of fibers were categorized based on the fiber terminal locations in the cerebral wall in the 20 PMW fetal brains. We found that fetal brain networks become stronger and more efficient during 20–40 PMW. Furthermore, network strength and global efficiency increase more rapidly during 20–35 PMW than during 35–40 PMW. Visualization of the whole brain fiber distribution by fiber length suggested that the network reconfiguration in this developmental period could be associated with a significant increase of major long association WM fibers. In addition, non-WM neural fibers could be a major contributor to the structural network configuration at 20 PMW and small-world network organization could exist as early as 20 PMW. These findings offer a preliminary record of the fetal brain structural connectome maturation from the middle fetal stage to birth and reveal the critical role of non-WM neural fibers in structural network configuration in the middle fetal stage. PMID:29081731

  10. Deep neural models for ICD-10 coding of death certificates and autopsy reports in free-text.

    PubMed

    Duarte, Francisco; Martins, Bruno; Pinto, Cátia Sousa; Silva, Mário J

    2018-04-01

    We address the assignment of ICD-10 codes for causes of death by analyzing free-text descriptions in death certificates, together with the associated autopsy reports and clinical bulletins, from the Portuguese Ministry of Health. We leverage a deep neural network that combines word embeddings, recurrent units, and neural attention, for the generation of intermediate representations of the textual contents. The neural network also explores the hierarchical nature of the input data, by building representations from the sequences of words within individual fields, which are then combined according to the sequences of fields that compose the inputs. Moreover, we explore innovative mechanisms for initializing the weights of the final nodes of the network, leveraging co-occurrences between classes together with the hierarchical structure of ICD-10. Experimental results attest to the contribution of the different neural network components. Our best model achieves accuracy scores over 89%, 81%, and 76%, respectively for ICD-10 chapters, blocks, and full-codes. Through examples, we also show that our method can produce interpretable results, useful for public health surveillance. Copyright © 2018 Elsevier Inc. All rights reserved.
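
    As a rough picture of such a hierarchical text model, the PyTorch sketch below encodes the words of each certificate field with a recurrent layer, encodes the resulting field vectors with a second recurrent layer, and applies neural attention over the fields before a final classification layer. All sizes (vocabulary, hidden units, number of codes) are placeholder values, and the published model's embedding initialization and ICD-10-aware output-layer initialization are not reproduced here.

    ```python
    import torch
    import torch.nn as nn

    class HierarchicalCoder(nn.Module):
        """Sketch of a hierarchical classifier: words -> field vectors -> record vector."""
        def __init__(self, vocab_size, emb_dim=64, hid=64, n_codes=200):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            self.word_gru = nn.GRU(emb_dim, hid, batch_first=True, bidirectional=True)
            self.field_gru = nn.GRU(2 * hid, hid, batch_first=True, bidirectional=True)
            self.attn = nn.Linear(2 * hid, 1)          # neural attention over fields
            self.out = nn.Linear(2 * hid, n_codes)     # logits over ICD-10 codes

        def forward(self, tokens):
            # tokens: (batch, n_fields, n_words) integer ids
            b, f, w = tokens.shape
            x = self.emb(tokens.view(b * f, w))             # (b*f, w, emb)
            _, h = self.word_gru(x)                         # h: (2, b*f, hid)
            field_repr = torch.cat([h[0], h[1]], dim=-1).view(b, f, -1)
            seq, _ = self.field_gru(field_repr)             # (b, f, 2*hid)
            weights = torch.softmax(self.attn(seq), dim=1)  # attention weights per field
            record = (weights * seq).sum(dim=1)             # (b, 2*hid)
            return self.out(record)

    model = HierarchicalCoder(vocab_size=5000)
    dummy = torch.randint(1, 5000, (2, 4, 30))   # 2 records, 4 text fields, 30 tokens each
    print(model(dummy).shape)                    # torch.Size([2, 200])
    ```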

  11. Parameter estimation in spiking neural networks: a reverse-engineering approach.

    PubMed

    Rostro-Gonzalez, H; Cessac, B; Vieville, T

    2012-04-01

    This paper presents a reverse engineering approach for parameter estimation in spiking neural networks (SNNs). We consider the deterministic evolution of a time-discretized network with spiking neurons, where synaptic transmission has delays, modeled as a neural network of the generalized integrate-and-fire type. Our approach aims at by-passing the fact that parameter estimation in SNNs with delays is a non-deterministic polynomial-time hard problem. Here, the problem is reformulated as a linear programming (LP) problem so that the solution can be obtained in polynomial time. Moreover, the LP formulation makes explicit the fact that a neural network can be reverse engineered from the observation of its spike times alone. Furthermore, we point out how the LP adjustment mechanism is local to each neuron and has the same structure as a 'Hebbian' rule. Finally, we present a generalization of this approach to the design of input-output (I/O) transformations as a practical method to 'program' a spiking network, i.e. find a set of parameters allowing us to exactly reproduce the network output, given an input. Numerical verifications and illustrations are provided.
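
    The core idea, that observed spike times impose linear threshold constraints on the incoming weights of each neuron, can be illustrated with a small linear program. The sketch below is a simplification assuming a memoryless binary network with no synaptic delays or leak (the paper's generalized integrate-and-fire model with delays is richer); it recovers weights that reproduce an observed spike raster.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    N, T, theta = 8, 60, 0.5    # neurons, observed time steps, firing threshold

    # Ground-truth weights and a simulated spike raster (no delays, for simplicity).
    W_true = rng.normal(size=(N, N))
    S = np.zeros((T, N))
    S[0] = rng.integers(0, 2, N)
    for t in range(T - 1):
        S[t + 1] = (W_true @ S[t] >= theta).astype(float)

    def estimate_row(i):
        """Recover the incoming weights of neuron i as a max-margin LP."""
        A, b = [], []
        for t in range(T - 1):
            if S[t + 1, i] == 1:                        # potential must reach threshold
                A.append(np.append(-S[t], 1.0)); b.append(-theta)
            else:                                       # potential must stay below threshold
                A.append(np.append(S[t], 1.0)); b.append(theta)
        c = np.append(np.zeros(N), -1.0)                # maximize the margin variable
        res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                      bounds=[(-10, 10)] * N + [(0, None)], method="highs")
        return res.x[:N]

    W_hat = np.vstack([estimate_row(i) for i in range(N)])

    # The recovered weights reproduce the observed raster.
    S_hat = np.zeros_like(S)
    S_hat[0] = S[0]
    for t in range(T - 1):
        S_hat[t + 1] = (W_hat @ S_hat[t] >= theta).astype(float)
    print("raster reproduced:", np.array_equal(S, S_hat))
    ```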

  12. Synchronization of heteroclinic circuits through learning in coupled neural networks

    NASA Astrophysics Data System (ADS)

    Selskii, Anton; Makarov, Valeri A.

    2016-01-01

    The synchronization of oscillatory activity in neural networks is usually implemented by coupling the state variables describing neuronal dynamics. Here we study another, complementary mechanism based on a learning process with memory. A driver network, acting as a teacher, exhibits winner-less competition (WLC) dynamics, while a driven network, a learner, tunes its internal couplings according to the oscillations observed in the teacher. We show that under appropriate training the learner can "copy" the coupling structure and thus synchronize oscillations with the teacher. The replication of the WLC dynamics occurs only for intermediate memory lengths; consequently, the learner network exhibits a phenomenon of learning resonance.

  13. Proposed health state awareness of helicopter blades using an artificial neural network strategy

    NASA Astrophysics Data System (ADS)

    Lee, Andrew; Habtour, Ed; Gadsden, S. A.

    2016-05-01

    Structural health prognostics and diagnosis strategies can be classified as either model- or signal-based. Artificial neural network strategies are popular signal-based techniques. This paper proposes the use of helicopter blades in order to study the sensitivity of an artificial neural network to structural fatigue. The experimental setup consists of a scale aluminum helicopter blade exposed to transverse vibratory excitation at the hub using a single-axis electrodynamic shaker. The intent of this study is to optimize an algorithm for processing high-dimensional data while retaining important information content in an effort to select input features and weights, as well as health parameters, for training a neural network. Data from accelerometers and piezoelectric transducers are collected from a known system designated as healthy. Structural damage will be introduced to different blades, which will then be designated as unhealthy. A variety of different tests will be performed to track the evolution and severity of the damage. A number of damage detection and diagnosis strategies will be implemented. A preliminary experiment was performed on aluminum cantilever beams, providing a simpler model for implementation and proof of concept. Future work will look at utilizing the detection information as part of a hierarchical control system in order to mitigate structural damage and fatigue. The proposed approach may eliminate massive data storage on board an aircraft by retaining relevant information only. The control system can then employ the relevant information to intelligently reconfigure adaptive maneuvers to avoid harmful regimes, thus extending the life of the aircraft.

  14. Adaptive nonlinear polynomial neural networks for control of boundary layer/structural interaction

    NASA Technical Reports Server (NTRS)

    Parker, B. Eugene, Jr.; Cellucci, Richard L.; Abbott, Dean W.; Barron, Roger L.; Jordan, Paul R., III; Poor, H. Vincent

    1993-01-01

    The acoustic pressures developed in a boundary layer can interact with an aircraft panel to induce significant vibration in the panel. Such vibration is undesirable due to the aerodynamic drag and structure-borne cabin noise that result. The overall objective of this work is to develop effective and practical feedback control strategies for actively reducing this flow-induced structural vibration. This report describes the results of initial evaluations using polynomial, neural network-based feedback control to reduce flow-induced vibration in aircraft panels due to turbulent boundary layer/structural interaction. Computer simulations are used to develop and analyze feedback control strategies to reduce vibration in a beam as a first step. The key differences between this work and that going on elsewhere are as follows: first, turbulent and transitional boundary layers represent broadband excitation and thus present a more complex stochastic control scenario than narrow-band (e.g., laminar boundary layer) excitation; second, the proposed controller structures are adaptive nonlinear infinite impulse response (IIR) polynomial neural networks, as opposed to the traditional adaptive linear finite impulse response (FIR) filters used in most studies to date. The controllers implemented in this study achieved vibration attenuation of 27 to 60 dB depending on the type of boundary layer established by laminar, turbulent, and intermittent laminar-to-turbulent transitional flows. Application of multi-input, multi-output, adaptive, nonlinear feedback control of vibration in aircraft panels based on polynomial neural networks appears to be feasible today. Plans are outlined for Phase 2 of this study, which will include extending the theoretical investigation conducted in Phase 1 and verifying the results in a series of laboratory experiments involving both beam and plate models.

  15. Neural network modeling of associative memory: Beyond the Hopfield model

    NASA Astrophysics Data System (ADS)

    Dasgupta, Chandan

    1992-07-01

    A number of neural network models, in which fixed-point and limit-cycle attractors of the underlying dynamics are used to store and associatively recall information, are described. In the first class of models, a hierarchical structure is used to store an exponentially large number of strongly correlated memories. The second class of models uses limit cycles to store and retrieve individual memories. A neurobiologically plausible network that generates low-amplitude periodic variations of activity, similar to the oscillations observed in electroencephalographic recordings, is also described. Results obtained from analytic and numerical studies of the properties of these networks are discussed.

  16. Distributed synaptic weights in a LIF neural network and learning rules

    NASA Astrophysics Data System (ADS)

    Perthame, Benoît; Salort, Delphine; Wainrib, Gilles

    2017-09-01

    Leaky integrate-and-fire (LIF) models are mean-field limits, obtained for a large number of neurons, that are used to describe neural networks. We consider inhomogeneous networks structured by a connectivity parameter (the strength of the synaptic weights), with the effect of processing the input current with different intensities. We first study the properties of the network activity depending on the distribution of synaptic weights and, in particular, its discrimination capacity. Then, we consider simple learning rules and determine the synaptic weight distributions they generate. We outline the role of noise as a selection principle and the capacity to memorize a learned signal.
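
    A minimal illustration of the setting (a direct simulation, not the paper's mean-field analysis): a population of LIF neurons, each receiving a common input current scaled by its own synaptic weight drawn from a distribution, so that the weight distribution is read out as a profile of firing rates.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, T, dt = 200, 2.0, 1e-3                        # neurons, simulated seconds, time step
    tau, v_reset, v_th = 0.02, 0.0, 1.0
    w = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # distributed synaptic weights
    I = 1.2                                          # common constant input current

    v = np.zeros(n)
    spikes = np.zeros(n)
    for _ in range(int(T / dt)):
        v += dt / tau * (-v + w * I)     # leaky integration of the weighted input
        fired = v >= v_th
        spikes += fired
        v[fired] = v_reset

    rates = spikes / T
    # Neurons with larger weights fire faster, so the weight distribution is
    # reflected in the firing-rate profile (the basis of discrimination capacity).
    print("weight/rate correlation:", np.corrcoef(w, rates)[0, 1])
    ```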

  17. Experimental Modal Analysis and Dynamic Strain Fiber Bragg Gratings for Structural Health Monitoring of Composite Aerospace Structures

    NASA Astrophysics Data System (ADS)

    Panopoulou, A.; Fransen, S.; Gomez Molinero, V.; Kostopoulos, V.

    2012-07-01

    The objective of this work is to develop a new structural health monitoring system for composite aerospace structures based on dynamic response strain measurements and experimental modal analysis techniques. Fibre Bragg Grating (FBG) optical sensors were used for monitoring the dynamic response of the composite structure. The structural dynamic behaviour has been numerically simulated and experimentally verified by means of vibration testing. The hypothesis of all vibration tests was that actual damage in composites reduces their stiffness and produces the same result as a mass increase does. Thus, damage was simulated by slightly varying the mass of the structure locally at different zones. Experimental modal analysis based on the strain responses was conducted, and the extracted strain mode shapes were the input for the damage detection expert system. A feed-forward back propagation neural network was the core of the damage detection system. The features input to the neural network consisted of the strain mode shapes extracted from the experimental modal analysis. Dedicated training and validation activities were carried out based on the experimental results. The system showed high reliability, confirmed by the ability of the neural network to recognize the size and the position of damage on the structure. The experiments were performed on a real structure, i.e. a lightweight antenna sub-reflector, manufactured and tested at EADS CASA ESPACIO. An integrated FBG sensor network, based on the advantage of multiplexing, was mounted on the structure with optimum topology. Numerical simulation of both structures was used as a support tool at all steps of the work. Potential applications for the proposed system are during extensive ground qualification tests of space structures and, during the mission, as an on-board modal analysis tool able to identify a potential failure via the FBG responses.

  18. Effect of inhibitory firing pattern on coherence resonance in random neural networks

    NASA Astrophysics Data System (ADS)

    Yu, Haitao; Zhang, Lianghao; Guo, Xinmeng; Wang, Jiang; Cao, Yibin; Liu, Jing

    2018-01-01

    The effect of inhibitory firing patterns on coherence resonance (CR) in random neuronal networks is systematically studied. Spiking and bursting are the two main types of firing pattern considered in this work. Numerical results show that, irrespective of the inhibitory firing pattern, the regularity of the network is maximized by an optimal intensity of external noise, indicating the occurrence of coherence resonance. Moreover, the firing pattern of inhibitory neurons indeed has a significant influence on coherence resonance, but the efficacy is determined by the network properties. In networks with strong coupling strength but weak inhibition, bursting neurons largely increase the amplitude of the resonance, while they can decrease the noise intensity that induces coherence resonance in networks with strong inhibition. Different temporal windows of inhibition induced by different inhibitory neurons may account for the above observations. The network structure also plays a constructive role in the coherence resonance. There exists an optimal network topology that maximizes the regularity of the neural system.

  19. Structural Covariance of the Prefrontal-Amygdala Pathways Associated with Heart Rate Variability.

    PubMed

    Wei, Luqing; Chen, Hong; Wu, Guo-Rong

    2018-01-01

    The neurovisceral integration model has shown a key role of the amygdala in neural circuits underlying heart rate variability (HRV) modulation, and suggested that reciprocal connections from amygdala to brain regions centered on the central autonomic network (CAN) are associated with HRV. To provide neuroanatomical evidence for these theoretical perspectives, the current study used covariance analysis of MRI-based gray matter volume (GMV) to map structural covariance network of the amygdala, and then determined whether the interregional structural correlations related to individual differences in HRV. The results showed that covariance patterns of the amygdala encompassed large portions of cortical (e.g., prefrontal, cingulate, and insula) and subcortical (e.g., striatum, hippocampus, and midbrain) regions, lending evidence from structural covariance analysis to the notion that the amygdala was a pivotal node in neural pathways for HRV modulation. Importantly, participants with higher resting HRV showed increased covariance of amygdala to dorsal medial prefrontal cortex and anterior cingulate cortex (dmPFC/dACC) extending into adjacent medial motor regions [i.e., pre-supplementary motor area (pre-SMA)/SMA], demonstrating structural covariance of the prefrontal-amygdala pathways implicated in HRV, and also implying that resting HRV may reflect the function of neural circuits underlying cognitive regulation of emotion as well as facilitation of adaptive behaviors to emotion. Our results, thus, provide anatomical substrates for the neurovisceral integration model that resting HRV may index an integrative neural network which effectively organizes emotional, cognitive, physiological and behavioral responses in the service of goal-directed behavior and adaptability.

  20. Augmented neural networks and problem structure-based heuristics for the bin-packing problem

    NASA Astrophysics Data System (ADS)

    Kasap, Nihat; Agarwal, Anurag

    2012-08-01

    In this article, we report on a research project in which we applied the augmented-neural-network (AugNN) approach to solving the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority-rule heuristic with the iterative search approach of neural networks to generate good solutions fast. This is the first time this approach has been applied to the BPP. We also propose a decomposition approach for solving harder BPPs, in which subproblems are solved using a combination of the AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems to which such problem structure-based heuristics can be applied. We empirically show the effectiveness of the AugNN and decomposition approaches on many benchmark problems in the literature. For the 1210 benchmark problems tested, 917 problems were solved to optimality, the average gap between the obtained solution and the upper bound over all problems was reduced to under 0.66%, and computation time averaged below 33 s per problem. We also discuss the computational complexity of our approach.
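
    The flavor of the approach can be sketched in a few lines: a priority rule (here first-fit decreasing) produces a packing, and an outer loop re-weights item priorities between passes and keeps the best packing found. The sketch below perturbs the priorities at random purely for illustration; the actual AugNN metaheuristic adjusts them with neural-network-style weight updates driven by error feedback.

    ```python
    import random

    def first_fit(items, capacity):
        """Place items in the given order; open a new bin when none fits."""
        bins = []
        for size in items:
            for b in bins:
                if sum(b) + size <= capacity:
                    b.append(size)
                    break
            else:
                bins.append([size])
        return bins

    def priority_rule_search(items, capacity, iters=200, seed=0):
        """Iterative search around a priority rule: items are ordered by size times
        a perturbable weight, loosely mimicking how AugNN adjusts priorities between
        passes (random perturbation here stands in for the NN weight updates)."""
        rng = random.Random(seed)
        weights = [1.0] * len(items)
        best = first_fit(sorted(items, reverse=True), capacity)   # FFD baseline
        for _ in range(iters):
            order = sorted(range(len(items)),
                           key=lambda i: items[i] * weights[i], reverse=True)
            bins = first_fit([items[i] for i in order], capacity)
            if len(bins) < len(best):
                best = bins
            weights = [w * (1.0 + rng.uniform(-0.1, 0.1)) for w in weights]
        return best

    gen = random.Random(1)
    items = [gen.randint(20, 70) for _ in range(40)]
    solution = priority_rule_search(items, capacity=100)
    print(len(solution), "bins for", len(items), "items")
    ```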

  1. Six networks on a universal neuromorphic computing substrate.

    PubMed

    Pfeil, Thomas; Grübl, Andreas; Jeltsch, Sebastian; Müller, Eric; Müller, Paul; Petrovici, Mihai A; Schmuker, Michael; Brüderle, Daniel; Schemmel, Johannes; Meier, Karlheinz

    2013-01-01

    In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.

  2. Six Networks on a Universal Neuromorphic Computing Substrate

    PubMed Central

    Pfeil, Thomas; Grübl, Andreas; Jeltsch, Sebastian; Müller, Eric; Müller, Paul; Petrovici, Mihai A.; Schmuker, Michael; Brüderle, Daniel; Schemmel, Johannes; Meier, Karlheinz

    2013-01-01

    In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality. PMID:23423583

  3. Dynamic decomposition of spatiotemporal neural signals

    PubMed Central

    2017-01-01

    Neural signals are characterized by rich temporal and spatiotemporal dynamics that reflect the organization of cortical networks. Theoretical research has shown how neural networks can operate at different dynamic ranges that correspond to specific types of information processing. Here we present a data analysis framework that uses a linearized model of these dynamic states in order to decompose the measured neural signal into a series of components that capture both rhythmic and non-rhythmic neural activity. The method is based on stochastic differential equations and Gaussian process regression. Through computer simulations and analysis of magnetoencephalographic data, we demonstrate the efficacy of the method in identifying meaningful modulations of oscillatory signals corrupted by structured temporal and spatiotemporal noise. These results suggest that the method is particularly suitable for the analysis and interpretation of complex temporal and spatiotemporal neural signals. PMID:28558039

  4. Pattern Storage, Bifurcations, and Groupwise Correlation Structure of an Exactly Solvable Asymmetric Neural Network Model.

    PubMed

    Fasoli, Diego; Cattani, Anna; Panzeri, Stefano

    2018-05-01

    Despite their biological plausibility, neural network models with asymmetric weights are rarely solved analytically, and closed-form solutions are available only in some limiting cases or in some mean-field approximations. We found exact analytical solutions of an asymmetric spin model of neural networks with arbitrary size without resorting to any approximation, and we comprehensively studied its dynamical and statistical properties. The network had discrete time evolution equations and binary firing rates, and it could be driven by noise with any distribution. We found analytical expressions of the conditional and stationary joint probability distributions of the membrane potentials and the firing rates. By manipulating the conditional probability distribution of the firing rates, we extend to stochastic networks the associating learning rule previously introduced by Personnaz and coworkers. The new learning rule allowed the safe storage, under the presence of noise, of point and cyclic attractors, with useful implications for content-addressable memories. Furthermore, we studied the bifurcation structure of the network dynamics in the zero-noise limit. We analytically derived examples of the codimension 1 and codimension 2 bifurcation diagrams of the network, which describe how the neuronal dynamics changes with the external stimuli. This showed that the network may undergo transitions among multistable regimes, oscillatory behavior elicited by asymmetric synaptic connections, and various forms of spontaneous symmetry breaking. We also calculated analytically groupwise correlations of neural activity in the network in the stationary regime. This revealed neuronal regimes where, statistically, the membrane potentials and the firing rates are either synchronous or asynchronous. Our results are valid for networks with any number of neurons, although our equations can be realistically solved only for small networks. For completeness, we also derived the network equations in the thermodynamic limit of infinite network size and we analytically studied their local bifurcations. All the analytical results were extensively validated by numerical simulations.

  5. Attenuation correction for brain PET imaging using deep neural network based on dixon and ZTE MR images.

    PubMed

    Gong, Kuang; Yang, Jaewon; Kim, Kyungsang; El Fakhri, Georges; Seo, Youngho; Li, Quanzheng

    2018-05-23

    Positron Emission Tomography (PET) is a functional imaging modality widely used in neuroscience studies. To obtain meaningful quantitative results from PET images, attenuation correction is necessary during image reconstruction. For PET/MR hybrid systems, PET attenuation correction is challenging, as Magnetic Resonance (MR) images do not reflect attenuation coefficients directly. To address this issue, we present deep neural network methods to derive continuous attenuation coefficients for brain PET imaging from MR images. With only Dixon MR images as the network input, the existing U-net structure was adopted, and analysis using forty patient data sets shows that it is superior to other Dixon-based methods. When both Dixon and zero echo time (ZTE) images are available, we propose a modified U-net structure, named GroupU-net, to efficiently make use of both Dixon and ZTE information through group convolution modules as the network goes deeper. Quantitative analysis based on fourteen real patient data sets demonstrates that both network approaches perform better than the standard methods, and the proposed network structure can further reduce the PET quantification error compared to the U-net structure. © 2018 Institute of Physics and Engineering in Medicine.
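
    The central architectural ingredient is the group convolution, which lets feature channels derived from the two MR sequences be filtered separately before being mixed. The PyTorch sketch below illustrates only that idea with made-up layer sizes; it is not the published GroupU-net architecture.

    ```python
    import torch
    import torch.nn as nn

    class GroupedEncoderBlock(nn.Module):
        """With groups=2, Dixon-derived and ZTE-derived channels are convolved
        separately, then mixed by a 1x1 convolution (illustrative sizes only)."""
        def __init__(self, ch_per_modality=16):
            super().__init__()
            c = 2 * ch_per_modality
            self.grouped = nn.Conv2d(c, c, kernel_size=3, padding=1, groups=2)
            self.mix = nn.Conv2d(c, c, kernel_size=1)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):            # x: (batch, 2*ch, H, W)
            return self.act(self.mix(self.act(self.grouped(x))))

    # Dixon and ZTE feature maps stacked along the channel axis.
    dixon = torch.randn(1, 16, 64, 64)
    zte = torch.randn(1, 16, 64, 64)
    block = GroupedEncoderBlock(16)
    out = block(torch.cat([dixon, zte], dim=1))
    print(out.shape)   # torch.Size([1, 32, 64, 64])
    ```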

  6. Optimized star sensors laboratory calibration method using a regularization neural network.

    PubMed

    Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen

    2018-02-10

    High-precision ground calibration is essential to ensure the performance of star sensors. However, the complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer structure neural network is designed to represent the mapping of the star vector and the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the net structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method can achieve high precision with less calibration data and without any other priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirement for large FOV star sensors.

  7. Comparing success levels of different neural network structures in extracting discriminative information from the response patterns of a temperature-modulated resistive gas sensor

    NASA Astrophysics Data System (ADS)

    Hosseini-Golgoo, S. M.; Bozorgi, H.; Saberkari, A.

    2015-06-01

    The performances of three neural networks, consisting of a multi-layer perceptron, a radial basis function network, and a neuro-fuzzy network with a local linear model tree training algorithm, in modeling and extracting discriminative features from the response patterns of a temperature-modulated resistive gas sensor are quantitatively compared. For response pattern recording, a voltage staircase containing five steps, each with a 20 s plateau, is applied to the micro-heater of the sensor when 12 different target gases, each at 11 concentration levels, are present. In each test, the hidden layer neuron weights are taken as the discriminatory feature vector of the target gas. These vectors are then mapped to a 3D feature space using linear discriminant analysis. The discriminative information content of the feature vectors is determined by calculating Fisher’s discriminant ratio, affording a quantitative comparison among the success rates achieved by the different neural network structures. The results demonstrate a superior discrimination ratio for features extracted from the local linear neuro-fuzzy and radial-basis-function networks, with recognition rates of 96.27% and 90.74%, respectively.

  8. An application of neural network for Structural Health Monitoring of an adaptive wing with an array of FBG sensors

    NASA Astrophysics Data System (ADS)

    Mieloszyk, Magdalena; Krawczuk, Marek; Skarbek, Lukasz; Ostachowicz, Wieslaw

    2011-07-01

    This paper presents an application of neural networks to determine the level of activation of shape memory alloy actuators of an adaptive wing. In this concept the shape of the wing can be controlled and altered thanks to the wing design and the use of integrated shape memory alloy actuators. The wing is assumed to be assembled from a number of wing sections whose relative positions can be controlled independently by thermal activation of the shape memory actuators. The investigated wing is equipped with an array of Fibre Bragg Grating sensors. The Fibre Bragg Grating sensors, in combination with a neural network, have been used for Structural Health Monitoring of the wing condition. The FBG sensors are a great tool for monitoring the condition of composite structures due to their immunity to electromagnetic fields as well as their small size and weight. They can be mounted onto the surface or embedded into the wing composite material without any significant influence on the wing strength. The paper concentrates on the determination of the twisting moment produced by an activated shape memory alloy actuator. This has been analysed both numerically, using the finite element method with the commercial code ABAQUS®, and experimentally, using Fibre Bragg Grating sensor measurements. The results of the analysis have then been used by a neural network to determine the twisting moments produced by each shape memory alloy actuator.

  9. Mutual connectivity analysis (MCA) using generalized radial basis function neural networks for nonlinear functional connectivity network recovery in resting-state functional MRI

    NASA Astrophysics Data System (ADS)

    D'Souza, Adora M.; Abidin, Anas Zainul; Nagarajan, Mahesh B.; Wismüller, Axel

    2016-03-01

    We investigate the applicability of a computational framework, called mutual connectivity analysis (MCA), for directed functional connectivity analysis in both synthetic and resting-state functional MRI data. This framework consists of first evaluating non-linear cross-predictability between every pair of time series prior to recovering the underlying network structure using community detection algorithms. We obtain the non-linear cross-prediction score between time series using Generalized Radial Basis Function (GRBF) neural networks. These cross-prediction scores characterize the underlying functionally connected networks within the resting brain, which can be extracted using non-metric clustering approaches, such as the Louvain method. We first test our approach on synthetic models with known directional influence and network structure. Our method is able to capture the directional relationships between time series (with an area under the ROC curve = 0.92 +/- 0.037) as well as the underlying network structure (Rand index = 0.87 +/- 0.063) with high accuracy. Furthermore, we test this method for network recovery on resting-state fMRI data, where results are compared to the motor cortex network recovered from a motor stimulation sequence, resulting in a strong agreement between the two (Dice coefficient = 0.45). We conclude that our MCA approach is effective in analyzing non-linear directed functional connectivity and in revealing underlying functional network structure in complex systems.

  10. Mutual Connectivity Analysis (MCA) Using Generalized Radial Basis Function Neural Networks for Nonlinear Functional Connectivity Network Recovery in Resting-State Functional MRI.

    PubMed

    DSouza, Adora M; Abidin, Anas Zainul; Nagarajan, Mahesh B; Wismüller, Axel

    2016-03-29

    We investigate the applicability of a computational framework, called mutual connectivity analysis (MCA), for directed functional connectivity analysis in both synthetic and resting-state functional MRI data. This framework consists of first evaluating non-linear cross-predictability between every pair of time series prior to recovering the underlying network structure using community detection algorithms. We obtain the non-linear cross-prediction score between time series using Generalized Radial Basis Function (GRBF) neural networks. These cross-prediction scores characterize the underlying functionally connected networks within the resting brain, which can be extracted using non-metric clustering approaches, such as the Louvain method. We first test our approach on synthetic models with known directional influence and network structure. Our method is able to capture the directional relationships between time series (with an area under the ROC curve = 0.92 ± 0.037) as well as the underlying network structure (Rand index = 0.87 ± 0.063) with high accuracy. Furthermore, we test this method for network recovery on resting-state fMRI data, where results are compared to the motor cortex network recovered from a motor stimulation sequence, resulting in a strong agreement between the two (Dice coefficient = 0.45). We conclude that our MCA approach is effective in analyzing non-linear directed functional connectivity and in revealing underlying functional network structure in complex systems.
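
    The first stage of the framework, scoring how well each series nonlinearly predicts every other series, can be sketched with a simple radial-basis regression as a stand-in for the GRBF networks (static regression here, rather than the temporal prediction used in MCA). The resulting score matrix would then be handed to a community-detection step such as Louvain.

    ```python
    import numpy as np

    def rbf_cross_prediction(x, y, n_centers=20, width=1.0, ridge=1e-3):
        """Predict series y from series x with a radial-basis regression and return
        the squared correlation between prediction and target (a stand-in for the
        cross-predictability score used in MCA)."""
        centers = np.linspace(x.min(), x.max(), n_centers)
        Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
        w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_centers), Phi.T @ y)
        return np.corrcoef(Phi @ w, y)[0, 1] ** 2

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8 * np.pi, 400)
    # Two nonlinearly coupled series plus one independent series.
    a = np.sin(t) + 0.1 * rng.normal(size=t.size)
    b = np.sin(t) ** 3 + 0.1 * rng.normal(size=t.size)
    c = rng.normal(size=t.size)

    series = [a, b, c]
    n = len(series)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = rbf_cross_prediction(series[i], series[j])
    print(np.round(A, 2))   # a and b cross-predict each other; c does not
    ```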

  11. Cascade process modeling with mechanism-based hierarchical neural networks.

    PubMed

    Cong, Qiumei; Yu, Wen; Chai, Tianyou

    2010-02-01

    Cascade processes, such as wastewater treatment plants, include many nonlinear sub-systems and many variables. When the number of sub-systems is large, the input-output relation between the first block and the last block cannot represent the whole process. In this paper we use two techniques to overcome this problem. First, we propose a new neural model, a hierarchical neural network, to identify the cascade process; then we use a serial structural mechanism model based on the physical equations to connect with the neural model. A stable learning algorithm and a theoretical analysis are given. Finally, this method is used to model a wastewater treatment plant. Real operational data from the wastewater treatment plant are used to illustrate the modeling approach.

  12. Predicting wettability behavior of fluorosilica coated metal surface using optimum neural network

    NASA Astrophysics Data System (ADS)

    Taghipour-Gorjikolaie, Mehran; Valipour Motlagh, Naser

    2018-02-01

    The interactions among the variables that affect surface wettability are too complex for the contact angles and sliding angles of liquid drops to be predicted directly. In this paper, in order to address this complexity, artificial neural networks were used to develop reliable models for predicting the angles of liquid drops. The experimental data are divided into training data and testing data. Using the training data, a feed-forward structure for the neural network, and particle swarm optimization for training the neural-network-based models, the optimum models were developed. The obtained results showed that the regression indices of the proposed models for the contact angles and sliding angles are 0.9874 and 0.9920, respectively. These values are close to unity, which indicates the reliable performance of the models. It can also be inferred from the results that the proposed models have more reliable performance than multi-layer perceptron and radial basis function based models.
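
    Training a small feed-forward network with particle swarm optimization instead of backpropagation is straightforward to sketch. The example below is a toy stand-in (synthetic inputs and a synthetic, standardized contact-angle-like target, not the paper's data or network size): the flattened weight vector of the network is the particle position, and the swarm minimizes the training mean squared error.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in data: two coating/surface variables -> a contact-angle-like
    # response, standardized to zero mean and unit variance before fitting.
    X = rng.uniform(-1, 1, size=(80, 2))
    y_raw = 150 + 10 * np.tanh(X[:, 0] + 0.5 * X[:, 1]) + rng.normal(0, 0.5, 80)
    y = (y_raw - y_raw.mean()) / y_raw.std()

    HIDDEN = 6
    DIM = 2 * HIDDEN + HIDDEN + HIDDEN + 1      # W1, b1, W2, b2 flattened

    def mlp(params, X):
        """Tiny feed-forward net; `params` is the flat vector of all weights."""
        W1 = params[: 2 * HIDDEN].reshape(2, HIDDEN)
        b1 = params[2 * HIDDEN : 3 * HIDDEN]
        W2 = params[3 * HIDDEN : 4 * HIDDEN]
        b2 = params[-1]
        return np.tanh(X @ W1 + b1) @ W2 + b2

    def mse(params):
        return np.mean((mlp(params, X) - y) ** 2)

    # Plain particle swarm optimization over the flattened weight vector.
    n_particles, iters = 30, 300
    pos = rng.normal(size=(n_particles, DIM))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([mse(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        r1 = rng.random((n_particles, DIM))
        r2 = rng.random((n_particles, DIM))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([mse(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    print("training MSE of the PSO-trained network:", round(mse(gbest), 4))
    ```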

  13. Kannada character recognition system using neural network

    NASA Astrophysics Data System (ADS)

    Kumar, Suresh D. S.; Kamalapuram, Srinivasa K.; Kumar, Ajay B. R.

    2013-03-01

    Handwriting recognition has been one of the active and challenging research areas in the field of pattern recognition. It has numerous applications, which include reading aids for the blind, bank cheque processing, and conversion of any handwritten document into structured text form. There is not a sufficient body of work on Indian-language character recognition, especially for the Kannada script, one of the 15 major scripts in India. In this paper an attempt is made to recognize handwritten Kannada characters using feed-forward neural networks. A handwritten Kannada character is resized to 20x30 pixels. The resized character is used for training the neural network. Once the training process is completed, the same character is given as input to the neural network with different numbers of neurons in the hidden layer, and the recognition accuracy rates for different Kannada characters have been calculated and compared. The results show that the proposed system yields good recognition accuracy rates comparable to those of other handwritten character recognition systems.

  14. Adaptively combined FIR and functional link artificial neural network equalizer for nonlinear communication channel.

    PubMed

    Zhao, Haiquan; Zhang, Jiashu

    2009-04-01

    This paper proposes a novel, computationally efficient adaptive nonlinear equalizer based on a combination of a finite impulse response (FIR) filter and a functional link artificial neural network (CFFLANN) to compensate linear and nonlinear distortions in nonlinear communication channels. This convex nonlinear combination improves the convergence speed while retaining a low steady-state error. In addition, since the CFFLANN does not need the hidden layers that exist in conventional neural-network-based equalizers, it exhibits a simpler structure than traditional neural networks (NNs) and requires less computational burden during the training mode. Moreover, an appropriate adaptation algorithm for the proposed equalizer is derived from the modified least mean square (MLMS) algorithm. Results obtained from the simulations clearly show that the proposed equalizer using the MLMS algorithm can effectively eliminate linear and nonlinear distortions of various intensities and provides better anti-jamming performance. Furthermore, comparisons of the mean squared error (MSE), the bit error rate (BER), and the effect of the eigenvalue ratio (EVR) of the input correlation matrix are presented.
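
    The two ingredients, a linear FIR branch and a hidden-layer-free functional-link branch built from a trigonometric expansion of the same taps, can be combined convexly and adapted with LMS-type rules. The sketch below uses a standard convex combination of two LMS-adapted filters on a toy nonlinear channel; it follows that general scheme rather than the exact MLMS update of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 4000
    s = rng.choice([-1.0, 1.0], size=N)                   # BPSK symbols

    # Toy nonlinear channel: linear ISI followed by a memoryless nonlinearity.
    h = np.array([0.26, 0.93, 0.26])
    lin = np.convolve(s, h, mode="same")
    r = lin + 0.2 * lin ** 3 + 0.05 * rng.normal(size=N)  # received signal

    L = 5                         # number of received samples fed to both branches
    def taps(n):
        x = np.zeros(L)
        m = min(L, n + 1)
        x[:m] = r[n - m + 1 : n + 1][::-1]
        return x

    def flann_expand(x):
        """Trigonometric functional-link expansion (no hidden layer)."""
        return np.concatenate([x, np.sin(np.pi * x), np.cos(np.pi * x)])

    w_fir = np.zeros(L)
    w_fl = np.zeros(3 * L)
    a = 0.0                                    # mixing parameter, lambda = sigmoid(a)
    mu_fir, mu_fl, mu_a = 0.01, 0.01, 0.1
    errors = []
    for n in range(N):
        x = taps(n)
        z = flann_expand(x)
        y1, y2 = w_fir @ x, w_fl @ z
        lam = 1.0 / (1.0 + np.exp(-a))
        y = lam * y1 + (1.0 - lam) * y2        # convex combination of the two filters
        d = s[n - 1] if n >= 1 else 0.0        # desired symbol (one-symbol delay)
        e = d - y
        w_fir += mu_fir * (d - y1) * x         # each branch runs its own LMS update
        w_fl += mu_fl * (d - y2) * z
        a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)
        errors.append(e ** 2)

    print("MSE first 500:", round(float(np.mean(errors[:500])), 3),
          " MSE last 500:", round(float(np.mean(errors[-500:])), 3))
    ```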

  15. Phase synchronization motion and neural coding in dynamic transmission of neural information.

    PubMed

    Wang, Rubin; Zhang, Zhikang; Qu, Jingyi; Cao, Jianting

    2011-07-01

    In order to explore the dynamic characteristics of neural coding in the transmission of neural information in the brain, a model of neural network consisting of three neuronal populations is proposed in this paper using the theory of stochastic phase dynamics. Based on the model established, the neural phase synchronization motion and neural coding under spontaneous activity and stimulation are examined, for the case of varying network structure. Our analysis shows that, under the condition of spontaneous activity, the characteristics of phase neural coding are unrelated to the number of neurons participated in neural firing within the neuronal populations. The result of numerical simulation supports the existence of sparse coding within the brain, and verifies the crucial importance of the magnitudes of the coupling coefficients in neural information processing as well as the completely different information processing capability of neural information transmission in both serial and parallel couplings. The result also testifies that under external stimulation, the bigger the number of neurons in a neuronal population, the more the stimulation influences the phase synchronization motion and neural coding evolution in other neuronal populations. We verify numerically the experimental result in neurobiology that the reduction of the coupling coefficient between neuronal populations implies the enhancement of lateral inhibition function in neural networks, with the enhancement equivalent to depressing neuronal excitability threshold. Thus, the neuronal populations tend to have a stronger reaction under the same stimulation, and more neurons get excited, leading to more neurons participating in neural coding and phase synchronization motion.

  16. A method of optimized neural network by L-M algorithm to transformer winding hot spot temperature forecasting

    NASA Astrophysics Data System (ADS)

    Wei, B. G.; Wu, X. Y.; Yao, Z. F.; Huang, H.

    2017-11-01

    Transformers are essential devices of the power system. Accurate computation of the highest temperature (HST) of a transformer’s windings is very important, as the HST is a fundamental parameter for controlling the load operation mode and it influences the lifetime of the insulation. Based on an analysis of the heat transfer processes and the thermal characteristics inside transformers, the influence of factors such as sunshine and external wind speed on oil-immersed transformers is taken into consideration. Experimental data and a neural network are used for modeling and prediction of the HST, and furthermore, investigations are conducted on the optimization of the structure and algorithms of the neural network. A comparison is made between the measured values and the values calculated using the algorithm recommended by IEC60076 and using the neural network algorithm proposed by the authors; the comparison shows that the value computed with the neural network algorithm approximates the measured value better than the value computed with the algorithm proposed by IEC60076.

  17. Comparison of RF spectrum prediction methods for dynamic spectrum access

    NASA Astrophysics Data System (ADS)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural Networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.
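
    A small simulation makes the setting concrete: spectrum occupancy is generated as an alternating renewal process with exponential busy/idle durations, discretized into sensing slots, and the next slot's state is predicted from the current one. The sketch below uses a fully observed first-order Markov predictor (a simple stand-in for the HMM baseline) against a majority-guess baseline; the MLP and RNN predictors compared in the paper are not implemented here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_occupancy(n_slots, mean_busy=5.0, mean_idle=8.0):
        """Alternating-renewal busy/idle process, discretized into sensing slots."""
        states, state = [], 0
        while len(states) < n_slots:
            dur = rng.exponential(mean_busy if state else mean_idle)
            states.extend([state] * max(1, int(round(dur))))
            state = 1 - state
        return np.array(states[:n_slots])

    occ = simulate_occupancy(5000)
    train, test = occ[:4000], occ[4000:]

    # First-order Markov predictor: predict the most likely successor of the
    # current state (the fully observed analogue of an HMM channel model).
    counts = np.zeros((2, 2))
    for cur, nxt in zip(train[:-1], train[1:]):
        counts[cur, nxt] += 1
    predict_next = counts.argmax(axis=1)

    pred = predict_next[test[:-1]]
    acc = np.mean(pred == test[1:])
    base = max(np.mean(test), 1 - np.mean(test))     # always-guess-majority baseline
    print(f"Markov accuracy: {acc:.3f}  majority baseline: {base:.3f}")
    ```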

  18. A comparison of back propagation and Generalized Regression Neural Networks performance in neutron spectrometry.

    PubMed

    Martínez-Blanco, Ma Del Rosario; Ornelas-Vargas, Gerardo; Solís-Sánchez, Luis Octavio; Castañeda-Miranada, Rodrigo; Vega-Carrillo, Héctor René; Celaya-Padilla, José M; Garza-Veloz, Idalia; Martínez-Fierro, Margarita; Ortiz-Rodríguez, José Manuel

    2016-11-01

    The process of unfolding the neutron energy spectrum has been a subject of research for many years. Monte Carlo methods, iterative methods, Bayesian theory, and the principle of maximum entropy are some of the methods used. The drawbacks associated with traditional unfolding procedures have motivated research into complementary approaches. Back Propagation Neural Networks (BPNN) have been applied with success in the neutron spectrometry and dosimetry domains; however, the structure and learning parameters are factors that strongly impact network performance. In the ANN domain, the Generalized Regression Neural Network (GRNN) is one of the simplest neural networks in terms of network architecture and learning algorithm. The learning is instantaneous, requiring no time for training. In contrast to BPNN, a GRNN is formed instantly with just a one-pass training on the development data. In the network development phase, the only hurdle is to optimize the hyper-parameter known as sigma, which governs the smoothness of the network. The aim of this work was to compare the performance of BPNN and GRNN in the solution of the neutron spectrometry problem. From the results obtained it can be observed that, despite very similar results, GRNN performs better than BPNN. Copyright © 2016 Elsevier Ltd. All rights reserved.
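
    A GRNN is essentially a normalized radial-basis (Nadaraya-Watson) estimator: "training" is a single pass that stores the examples, and the only hyper-parameter is the smoothing width sigma. The sketch below illustrates this on synthetic data, not on neutron spectrometry measurements.

    ```python
    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=0.1):
        """Generalized Regression Neural Network: kernel-weighted average of the
        stored targets; sigma governs the smoothness of the fit."""
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (2.0 * sigma ** 2))
        return (K @ y_train) / K.sum(axis=1)

    # Toy regression problem standing in for a spectrum-unfolding mapping.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(200, 3))
    y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)
    Xq = rng.uniform(0, 1, size=(50, 3))
    yq = np.sin(2 * np.pi * Xq[:, 0]) + 0.5 * Xq[:, 1]

    for sigma in (0.05, 0.1, 0.3):        # the single hyper-parameter to tune
        err = np.mean((grnn_predict(X, y, Xq, sigma) - yq) ** 2)
        print(f"sigma={sigma:.2f}  test MSE={err:.3f}")
    ```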

  19. Neurocomputing strategies in decomposition based structural design

    NASA Technical Reports Server (NTRS)

    Szewczyk, Z.; Hajela, P.

    1993-01-01

    The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable, subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.

  20. Quantitative structure-activity relationships by neural networks and inductive logic programming. II. The inhibition of dihydrofolate reductase by triazines

    NASA Astrophysics Data System (ADS)

    Hirst, Jonathan D.; King, Ross D.; Sternberg, Michael J. E.

    1994-08-01

    One of the largest available data sets for developing a quantitative structure-activity relationship (QSAR) — the inhibition of dihydrofolate reductase (DHFR) by 2,4-diamino-6,6-dimethyl-5-phenyl-dihydrotriazine derivatives — has been used for a sixfold cross-validation trial of neural networks, inductive logic programming (ILP) and linear regression. No statistically significant difference was found between the predictive capabilities of the methods. However, the representation of molecules by attributes, which is integral to the ILP approach, provides understandable rules about drug-receptor interactions.

  1. Sensorless control for permanent magnet synchronous motor using a neural network based adaptive estimator

    NASA Astrophysics Data System (ADS)

    Kwon, Chung-Jin; Kim, Sung-Joong; Han, Woo-Young; Min, Won-Kyoung

    2005-12-01

    Rotor position and speed estimation for a permanent-magnet synchronous motor (PMSM) is addressed. By measuring the phase voltages and currents of the PMSM drive, two diagonally recurrent neural network (DRNN) based observers, a neural current observer and a neural velocity observer, were developed. A DRNN, which has self-feedback of the hidden neurons, ensures that its outputs contain the whole past information of the system even if its inputs are only the present states and inputs of the system. Thus, the structure of a DRNN may be simpler than that of feedforward and fully recurrent neural networks. If the backpropagation method is used for training the DRNN, the problem of slow convergence arises. In order to reduce this problem, a recursive prediction error (RPE) based learning method for the DRNN is presented. The simulation results show that the proposed approach gives a good estimation of rotor speed and position, and RPE-based training requires a shorter computation time than backpropagation-based training.

  2. Study of parameter identification using hybrid neural-genetic algorithm in electro-hydraulic servo system

    NASA Astrophysics Data System (ADS)

    Moon, Byung-Young

    2005-12-01

    A hybrid neural-genetic multi-model parameter estimation algorithm is demonstrated. This method can be applied to structured system identification of an electro-hydraulic servo system. The algorithm consists of a recurrent incremental credit assignment (ICRA) neural network and a genetic algorithm: the ICRA neural network evaluates each member of a generation of models, and the genetic algorithm produces the new generation of models. To evaluate the proposed method, an electro-hydraulic servo system was designed and manufactured, and an experiment was carried out to assess the hybrid neural-genetic multi-model parameter estimation algorithm. As a result, the dynamic characteristics were obtained, namely the parameters (mass, damping coefficient, bulk modulus, spring coefficient) that minimize the total squared error. The results of this study can be applied to hydraulic systems in industrial fields.

  3. The relevance of network micro-structure for neural dynamics.

    PubMed

    Pernice, Volker; Deger, Moritz; Cardanobile, Stefano; Rotter, Stefan

    2013-01-01

    The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm to construct networks based on a quasi-fractal probability measure which are much more variable than commonly used network models, and which therefore promise to sample the space of recurrent networks in a more exhaustive fashion than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics like spike-train irregularity or correlations and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits.

  4. Crystal Structure Prediction via Deep Learning.

    PubMed

    Ryan, Kevin; Lengyel, Jeff; Shatruk, Michael

    2018-06-06

    We demonstrate the application of deep neural networks as a machine-learning tool for the analysis of a large collection of crystallographic data contained in the crystal structure repositories. Using input data in the form of multi-perspective atomic fingerprints, which describe coordination topology around unique crystallographic sites, we show that the neural-network model can be trained to effectively distinguish chemical elements based on the topology of their crystallographic environment. The model also identifies structurally similar atomic sites in the entire dataset of ~50000 crystal structures, essentially uncovering trends that reflect the periodic table of elements. The trained model was used to analyze templates derived from the known binary and ternary crystal structures in order to predict the likelihood to form new compounds that could be generated by placing elements into these structural templates in combinatorial fashion. Statistical analysis of predictive performance of the neural-network model, which was applied to a test set of structures never seen by the model during training, indicates its ability to predict known elemental compositions with a high likelihood of success. In ~30% of cases, the known compositions were found among top-10 most likely candidates proposed by the model. These results suggest that the approach developed in this work can be used to effectively guide the synthetic efforts in the discovery of new materials, especially in the case of systems composed of 3 or more chemical elements.

  5. An analysis of image storage systems for scalable training of deep neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Young, Steven R; Patton, Robert M

    This study presents a principled empirical evaluation of image storage systems for training deep neural networks. We employ the Caffe deep learning framework to train neural network models for three different data sets, MNIST, CIFAR-10, and ImageNet. While training the models, we evaluate five different options to retrieve training image data: (1) PNG-formatted image files on local file system; (2) pushing pixel arrays from image files into a single HDF5 file on local file system; (3) in-memory arrays to hold the pixel arrays in Python and C++; (4) loading the training data into LevelDB, a log-structured merge tree based key-value storage; and (5) loading the training data into LMDB, a B+tree based key-value storage. The experimental results quantitatively highlight the disadvantage of using normal image files on local file systems to train deep neural networks and demonstrate reliable performance with key-value storage based storage systems. When training a model on the ImageNet dataset, the image file option was more than 17 times slower than the key-value storage option. Along with measurements on training time, this study provides in-depth analysis on the cause of performance advantages/disadvantages of each back-end to train deep neural networks. We envision the provided measurements and analysis will shed light on the optimal way to architect systems for training neural networks in a scalable manner.

  6. Engineering-Aligned 3D Neural Circuit in Microfluidic Device.

    PubMed

    Bang, Seokyoung; Na, Sangcheol; Jang, Jae Myung; Kim, Jinhyun; Jeon, Noo Li

    2016-01-07

    The brain is one of the most important and complex organs in the human body. Although various neural network models have been proposed for in vitro 3D neuronal networks, it has been difficult to mimic the functional and structural complexity of the in vivo neural circuit. Here, a microfluidic model of a simplified 3D neural circuit is reported. First, the microfluidic device is filled with Matrigel and continuous flow is delivered across the device during gelation. The fluidic flow aligns the extracellular matrix (ECM) components along the flow direction. Following the alignment of ECM fibers, neurites of primary rat cortical neurons are grown into the Matrigel at an average speed of 250 μm per day and form axon bundles approximately 1500 μm in length at 6 days in vitro (DIV). Additionally, neural networks are developed from presynaptic to postsynaptic neurons at 14 DIV. The establishment of aligned 3D neural circuits is confirmed with the immunostaining of PSD-95 and synaptophysin and the observation of calcium signal transmission. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Pulmonary Nodule Classification with Deep Convolutional Neural Networks on Computed Tomography Images.

    PubMed

    Li, Wei; Cao, Peng; Zhao, Dazhe; Wang, Junbo

    2016-01-01

    Computer aided detection (CAD) systems can assist radiologists by offering a second opinion on early diagnosis of lung cancer. Classification and feature representation play critical roles in false-positive reduction (FPR) in lung nodule CAD. We design a deep convolutional neural network method for nodule classification, which has the advantages of automatically learned feature representations and strong generalization ability. A specified network structure for nodule images is proposed to solve the recognition of three types of nodules, that is, solid, semisolid, and ground glass opacity (GGO). Deep convolutional neural networks are trained on 62,492 region-of-interest (ROI) samples, including 40,772 nodules and 21,720 non-nodules from the Lung Image Database Consortium (LIDC) database. Experimental results demonstrate the effectiveness of the proposed method in terms of sensitivity and overall accuracy and that it consistently outperforms the competing methods.
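
    For concreteness, a minimal PyTorch sketch of a CNN with a three-way output (solid, semisolid, GGO) is given below; the layer sizes and the 32x32 ROI patch size are assumptions, not the network configuration of the paper.

      import torch
      import torch.nn as nn

      class NoduleCNN(nn.Module):
          """Classify a CT ROI patch as solid, semisolid, or ground glass opacity (GGO)."""
          def __init__(self, n_classes=3):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.classifier = nn.Linear(32 * 8 * 8, n_classes)

          def forward(self, x):                     # x: (batch, 1, 32, 32)
              h = self.features(x).flatten(1)
              return self.classifier(h)             # logits for the three nodule types

      logits = NoduleCNN()(torch.randn(8, 1, 32, 32))
      print(logits.shape)                           # torch.Size([8, 3])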

  8. Toward a More Robust Pruning Procedure for MLP Networks

    NASA Technical Reports Server (NTRS)

    Stepniewski, Slawomir W.; Jorgensen, Charles C.

    1998-01-01

    Choosing a proper neural network architecture is a problem of great practical importance. Smaller models mean not only simpler designs but also lower variance for parameter estimation and network prediction. The widespread utilization of neural networks in modeling also raises a human-factors issue: the procedure of building neural models should find an appropriate level of model complexity in a more or less automatic fashion, making it less prone to human subjectivity. In this paper we present a Singular Value Decomposition (SVD)-based node elimination technique and an enhanced implementation of the Optimal Brain Surgeon algorithm. Combining both methods creates a powerful pruning engine that can be used for tuning feedforward connectionist models. The performance of the proposed method is demonstrated by adjusting the structure of a multi-input multi-output model used to calibrate a six-component wind tunnel strain gage.
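
    One plausible reading of the SVD-based node elimination step, sketched below under assumptions rather than as the authors' exact procedure, is to rank-estimate the matrix of hidden-unit activations collected on training data and treat units beyond the effective rank as pruning candidates.

      import numpy as np

      def effective_hidden_units(hidden_activations, tol=1e-3):
          """Estimate how many hidden units carry independent information.

          hidden_activations: (n_samples, n_hidden) matrix of unit outputs.
          Units beyond the effective rank are candidates for elimination.
          """
          s = np.linalg.svd(hidden_activations, compute_uv=False)
          return int(np.sum(s / s[0] > tol))

      # Hypothetical activations of 30 hidden units that actually span only 20 directions
      H = np.random.randn(500, 20) @ np.random.randn(20, 30)
      print(effective_hidden_units(H))   # -> about 20, so roughly 10 units are redundant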

  9. Fitting of dynamic recurrent neural network models to sensory stimulus-response data.

    PubMed

    Doruk, R Ozgur; Zhang, Kechen

    2018-06-02

    We present a theoretical study aiming at model fitting for sensory neurons. Conventional neural network training approaches are not applicable to this problem due to the lack of continuous data. Although the stimulus can be considered as a smooth time-dependent variable, the associated response will be a set of neural spike timings (roughly the instants of successive action potential peaks) that have no amplitude information. A recurrent neural network model can be fitted to such a stimulus-response data pair by using the maximum likelihood estimation method, where the likelihood function is derived from the Poisson statistics of neural spiking. The universal approximation feature of recurrent dynamical neural network models allows us to describe the excitatory-inhibitory characteristics of an actual sensory neural network with any desired number of neurons. The stimulus data are generated by a phased cosine Fourier series having a fixed amplitude and frequency but a randomly drawn phase. Various values of amplitude, stimulus component size, and sample size are applied in order to examine the effect of the stimulus on the identification process. Results are presented in tabular and graphical forms at the end of this text. In addition, to demonstrate the success of this approach, the results are compared with those of a study involving the same model, nominal parameters, and stimulus structure, and with another study based on different models.
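
    The likelihood referred to here follows from inhomogeneous-Poisson spiking, log L = sum_k log r(t_k) - integral of r(t) dt; a minimal numpy sketch with a hypothetical rate function and spike times (not the model or data of the study) is given below.

      import numpy as np

      def poisson_log_likelihood(rate_fn, spike_times, t_grid):
          """log L = sum over spikes of log r(t_k) minus the integral of r(t) over the trial."""
          rates_at_spikes = rate_fn(np.asarray(spike_times))
          r = rate_fn(t_grid)
          integral = np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(t_grid))   # trapezoidal rule
          return np.sum(np.log(rates_at_spikes)) - integral

      # Hypothetical firing rate driven by a phased cosine stimulus, plus example spike times
      rate = lambda t: 5.0 + 4.0 * np.cos(2 * np.pi * 0.5 * t + 0.3) ** 2
      t = np.linspace(0.0, 10.0, 1001)
      spikes = [0.4, 1.1, 2.3, 2.9, 4.0, 5.6, 7.2, 8.8]
      print(poisson_log_likelihood(rate, spikes, t))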

  10. High-speed all-optical DNA local sequence alignment based on a three-dimensional artificial neural network.

    PubMed

    Maleki, Ehsan; Babashah, Hossein; Koohi, Somayyeh; Kavehvash, Zahra

    2017-07-01

    This paper presents an optical processing approach for exploring a large number of genome sequences. Specifically, we propose an optical correlator for global alignment and an extended moiré matching technique for local analysis of spatially coded DNA, whose output is fed to a novel three-dimensional artificial neural network for local DNA alignment. All-optical implementation of the proposed 3D artificial neural network is developed and its accuracy is verified in Zemax. Thanks to its parallel processing capability, the proposed structure performs local alignment of 4 million sequences of 150 base pairs in a few seconds, which is much faster than its electrical counterparts, such as the basic local alignment search tool.

  11. Neural-network-enhanced evolutionary algorithm applied to supported metal nanoparticles

    NASA Astrophysics Data System (ADS)

    Kolsbjerg, E. L.; Peterson, A. A.; Hammer, B.

    2018-05-01

    We show that approximate structural relaxation with a neural network enables orders of magnitude faster global optimization with an evolutionary algorithm in a density functional theory framework. The increased speed facilitates reliable identification of global minimum energy structures, as exemplified by our finding of a hollow Pt13 nanoparticle on an MgO support. We highlight the importance of knowing the correct structure when studying the catalytic reactivity of the different particle shapes. The computational speedup further enables screening of hundreds of different pathways in the search for optimum kinetic transitions between low-energy conformers and hence pushes the limits of the insight into thermal ensembles that can be obtained from theory.

  12. BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment.

    PubMed

    Kawahara, Jeremy; Brown, Colin J; Miller, Steven P; Booth, Brian G; Chau, Vann; Grunau, Ruth E; Zwicker, Jill G; Hamarneh, Ghassan

    2017-02-01

    We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural-network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain. Copyright © 2016 Elsevier Inc. All rights reserved.
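
    The edge-to-edge filter can be pictured as a cross-shaped operation on the connectivity matrix, combining a learned row filter and column filter for every edge; the numpy sketch below illustrates the idea for a single filter and is not the released BrainNetCNN implementation.

      import numpy as np

      def edge_to_edge(A, row_w, col_w):
          """Cross-shaped filter on an n x n connectivity matrix A.

          output[i, j] = sum_k row_w[k] * A[i, k] + sum_k col_w[k] * A[k, j],
          i.e. each edge is updated from all edges sharing one of its endpoints.
          """
          row_term = A @ row_w          # one value per row i, shared across columns
          col_term = col_w @ A          # one value per column j, shared across rows
          return row_term[:, None] + col_term[None, :]

      rng = np.random.default_rng(1)
      A = rng.random((90, 90))          # stand-in for a 90-region structural connectome
      out = edge_to_edge(A, rng.standard_normal(90), rng.standard_normal(90))
      print(out.shape)                  # (90, 90): one filtered edge map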

  13. Efficient organ localization using multi-label convolutional neural networks in thorax-abdomen CT scans

    NASA Astrophysics Data System (ADS)

    Efrain Humpire-Mamani, Gabriel; Arindra Adiyoso Setio, Arnaud; van Ginneken, Bram; Jacobs, Colin

    2018-04-01

    Automatic localization of organs and other structures in medical images is an important preprocessing step that can improve and speed up other algorithms such as organ segmentation, lesion detection, and registration. This work presents an efficient method for simultaneous localization of multiple structures in 3D thorax-abdomen CT scans. Our approach predicts the location of multiple structures using a single multi-label convolutional neural network for each orthogonal view. Each network takes extra slices around the current slice as input to provide extra context. A sigmoid layer is used to perform multi-label classification. The output of the three networks is subsequently combined to compute a 3D bounding box for each structure. We used our approach to locate 11 structures of interest. The neural network was trained and evaluated on a large set of 1884 thorax-abdomen CT scans from patients undergoing oncological workup. Reference bounding boxes were annotated by human observers. The performance of our method was evaluated by computing the wall distance to the reference bounding boxes. The bounding boxes annotated by the first human observer were used as the reference standard for the test set. Using the best configuration, we obtained an average wall distance of 3.20 ± 7.33 mm in the test set. The second human observer achieved 1.23 ± 3.39 mm. For all structures, the results were better than those reported in previously published studies. In conclusion, we proposed an efficient method for the accurate localization of multiple organs. Our method uses multiple slices as input to provide more context around the slice under analysis, and we have shown that this improves performance. This method can easily be adapted to handle more organs.

  14. Use of statistical and neural net approaches in predicting toxicity of chemicals.

    PubMed

    Basak, S C; Grunwald, G D; Gute, B D; Balasubramanian, K; Opitz, D

    2000-01-01

    Hierarchical quantitative structure-activity relationships (H-QSAR) have been developed as a new approach in constructing models for estimating physicochemical, biomedicinal, and toxicological properties of interest. This approach uses increasingly more complex molecular descriptors in a graduated approach to model building. In this study, statistical and neural network methods have been applied to the development of H-QSAR models for estimating the acute aquatic toxicity (LC50) of 69 benzene derivatives to Pimephales promelas (fathead minnow). Topostructural, topochemical, geometrical, and quantum chemical indices were used as the four levels of the hierarchical method. It is clear from both the statistical and neural network models that topostructural indices alone cannot adequately model this set of congeneric chemicals. Not surprisingly, topochemical indices greatly increase the predictive power of both statistical and neural network models. Quantum chemical indices also add significantly to the modeling of this set of acute aquatic toxicity data.

  15. The brainstem reticular formation is a small-world, not scale-free, network

    PubMed Central

    Humphries, M.D; Gurney, K; Prescott, T.J

    2005-01-01

    Recently, it has been demonstrated that several complex systems may have simple graph-theoretic characterizations as so-called ‘small-world’ and ‘scale-free’ networks. These networks have also been applied to the gross neural connectivity between primate cortical areas and the nervous system of Caenorhabditis elegans. Here, we extend this work to a specific neural circuit of the vertebrate brain—the medial reticular formation (RF) of the brainstem—and, in doing so, we have made three key contributions. First, this work constitutes the first model (and quantitative review) of this important brain structure for over three decades. Second, we have developed the first graph-theoretic analysis of vertebrate brain connectivity at the neural network level. Third, we propose simple metrics to quantitatively assess the extent to which the networks studied are small-world or scale-free. We conclude that the medial RF is configured to create small-world (implying coherent rapid-processing capabilities), but not scale-free, type networks under assumptions which are amenable to quantitative measurement. PMID:16615219
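
    Metrics of the kind proposed here can be approximated with standard graph quantities; the networkx sketch below computes a small-world index sigma = (C/C_rand)/(L/L_rand) on a stand-in graph, since the reticular-formation connectivity data are not reproduced here.

      import networkx as nx

      def small_world_index(G, n_rand=10, seed=0):
          """sigma = (C/C_rand) / (L/L_rand); values well above 1 suggest a small-world graph."""
          C, L = nx.average_clustering(G), nx.average_shortest_path_length(G)
          n, m = G.number_of_nodes(), G.number_of_edges()
          Cr = Lr = 0.0
          for i in range(n_rand):
              R = nx.gnm_random_graph(n, m, seed=seed + i)
              if not nx.is_connected(R):
                  R = R.subgraph(max(nx.connected_components(R), key=len))
              Cr += nx.average_clustering(R) / n_rand
              Lr += nx.average_shortest_path_length(R) / n_rand
          return (C / Cr) / (L / Lr)

      G = nx.connected_watts_strogatz_graph(200, 8, 0.1, seed=1)   # hypothetical network
      print(small_world_index(G))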

  16. Neural network for nonsmooth pseudoconvex optimization with general convex constraints.

    PubMed

    Bian, Wei; Ma, Litao; Qin, Sitian; Xue, Xiaoping

    2018-05-01

    In this paper, a one-layer recurrent neural network is proposed for solving a class of nonsmooth, pseudoconvex optimization problems with general convex constraints. Based on the smoothing method, we construct a new regularization function, which does not depend on any information of the feasible region. Thanks to the special structure of the regularization function, we prove the global existence, uniqueness and "slow solution" character of the state of the proposed neural network. Moreover, the state solution of the proposed network is proved to be convergent to the feasible region in finite time and to the optimal solution set of the related optimization problem subsequently. In particular, the convergence of the state to an exact optimal solution is also considered in this paper. Numerical examples with simulation results are given to show the efficiency and good characteristics of the proposed network. In addition, some preliminary theoretical analysis and application of the proposed network for a wider class of dynamic portfolio optimization are included. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Tracking the Reorganization of Module Structure in Time-Varying Weighted Brain Functional Connectivity Networks.

    PubMed

    Schmidt, Christoph; Piper, Diana; Pester, Britta; Mierau, Andreas; Witte, Herbert

    2018-05-01

    Identification of module structure in brain functional networks is a promising way to obtain novel insights into neural information processing, as modules correspond to delineated brain regions in which interactions are strongly increased. Tracking of network modules in time-varying brain functional networks is not yet commonly considered in neuroscience despite its potential for gaining an understanding of the time evolution of functional interaction patterns and associated changing degrees of functional segregation and integration. We introduce a general computational framework for extracting consensus partitions from defined time windows in sequences of weighted directed edge-complete networks and show how the temporal reorganization of the module structure can be tracked and visualized. Part of the framework is a new approach for computing edge weight thresholds for individual networks based on multiobjective optimization of module structure quality criteria as well as an approach for matching modules across time steps. By testing our framework using synthetic network sequences and applying it to brain functional networks computed from electroencephalographic recordings of healthy subjects that were exposed to a major balance perturbation, we demonstrate the framework's potential for gaining meaningful insights into dynamic brain function in the form of evolving network modules. The precise chronology of the neural processing inferred with our framework and its interpretation helps to improve the currently incomplete understanding of the cortical contribution for the compensation of such balance perturbations.

  18. Quantitative structure-activity relationships by neural networks and inductive logic programming. I. The inhibition of dihydrofolate reductase by pyrimidines

    NASA Astrophysics Data System (ADS)

    Hirst, Jonathan D.; King, Ross D.; Sternberg, Michael J. E.

    1994-08-01

    Neural networks and inductive logic programming (ILP) have been compared to linear regression for modelling the QSAR of the inhibition of E. coli dihydrofolate reductase (DHFR) by 2,4-diamino-5-(substituted benzyl)pyrimidines, and, in the subsequent paper [Hirst, J.D., King, R.D. and Sternberg, M.J.E., J. Comput.-Aided Mol. Design, 8 (1994) 421], the inhibition of rodent DHFR by 2,4-diamino-6,6-dimethyl-5-phenyl-dihydrotriazines. Cross-validation trials provide a statistically rigorous assessment of the predictive capabilities of the methods, with training and testing data selected randomly and all the methods developed using identical training data. For the ILP analysis, molecules are represented by attributes other than Hansch parameters. Neural networks and ILP perform better than linear regression using the attribute representation, but the difference is not statistically significant. The major benefit from the ILP analysis is the formulation of understandable rules relating the activity of the inhibitors to their chemical structure.

  19. A Higher-Order Neural Network Design for Improving Segmentation Performance in Medical Image Series

    NASA Astrophysics Data System (ADS)

    Selvi, Eşref; Selver, M. Alper; Güzeliş, Cüneyt; Dicle, Oǧuz

    2014-03-01

    Segmentation of anatomical structures from medical image series is an ongoing field of research. Although organs of interest are three-dimensional in nature, slice-by-slice approaches are widely used in clinical applications because of their ease of integration with the current manual segmentation scheme. To be able to use slice-by-slice techniques effectively, adjacent slice information, which represents the likelihood of a region being the structure of interest, plays a critical role. Recent studies focus on using the distance transform directly as a feature or to increase the feature values in the vicinity of the search area. This study presents a novel approach by constructing a higher order neural network, the input layer of which receives features together with their products with the distance transform. This allows higher-order interactions between features through the non-linearity introduced by the multiplication. The application of the proposed method to 9 CT datasets for segmentation of the liver shows higher performance than well-known higher-order classification neural networks.
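
    The input construction described (features alongside their products with the distance transform of an adjacent-slice mask) can be sketched as follows; the feature maps and the previous-slice liver mask are hypothetical placeholders.

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      def higher_order_inputs(feature_maps, prev_slice_mask):
          """Stack raw features with feature x distance-transform product terms.

          feature_maps: (n_features, H, W) per-pixel features of the current slice.
          prev_slice_mask: binary segmentation of the adjacent slice.
          """
          dist = distance_transform_edt(prev_slice_mask == 0)   # distance to the adjacent organ region
          dist = dist / (dist.max() + 1e-9)                     # normalize to [0, 1]
          return np.concatenate([feature_maps, feature_maps * dist[None]], axis=0)

      feats = np.random.rand(4, 128, 128)
      mask = np.zeros((128, 128))
      mask[40:80, 50:90] = 1                                    # hypothetical mask from slice k-1
      X = higher_order_inputs(feats, mask)
      print(X.shape)   # (8, 128, 128): original features plus multiplicative interaction terms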

  20. Efficient Simulation of Wing Modal Response: Application of 2nd Order Shape Sensitivities and Neural Networks

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Liu, Youhua

    2000-01-01

    At the preliminary design stage of a wing structure, an efficient simulation, one needing little computation but yielding adequately accurate results for various response quantities, is essential in the search for an optimal design in a vast design space. In the present paper, methods of using sensitivities up to second order, and the direct application of neural networks, are explored. The example problem is how to determine the natural frequencies of a wing given the shape variables of the structure. It is shown that when sensitivities cannot be obtained analytically, the finite difference approach is usually more reliable than a semi-analytical approach, provided an appropriate step size is used. The use of second-order sensitivities is shown to yield much better results than the case where only first-order sensitivities are used. When neural networks are trained to relate the wing natural frequencies to the shape variables, negligible computational effort is needed to accurately determine the natural frequencies of a new design.
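
    The use of first- and second-order shape sensitivities amounts to a quadratic Taylor approximation of each natural frequency about the baseline design, f(x0 + dx) ~ f0 + g.dx + 0.5 dx' H dx; the sketch below uses hypothetical gradient and Hessian values, not the wing model of the paper.

      import numpy as np

      def frequency_estimate(f0, grad, hess, dx):
          """Quadratic approximation f(x0 + dx) ~ f0 + grad.dx + 0.5 * dx' H dx."""
          dx = np.asarray(dx, dtype=float)
          return f0 + grad @ dx + 0.5 * dx @ hess @ dx

      # Hypothetical baseline frequency (Hz) and sensitivities w.r.t. three shape variables
      f0 = 12.4
      grad = np.array([0.8, -0.3, 0.1])
      hess = np.array([[0.05, 0.01, 0.00],
                       [0.01, -0.02, 0.00],
                       [0.00, 0.00, 0.03]])
      print(frequency_estimate(f0, grad, hess, dx=[0.2, -0.1, 0.4]))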

  1. An Investigation of the Application of Artificial Neural Networks to Adaptive Optics Imaging Systems

    DTIC Science & Technology

    1991-12-01

    Two architectures are studied: a recurrent artificial neural network and a feedforward artificial neural network, the latter being a single-layer perceptron. The recurrent artificial neural network input features are the wavefront sensor slope outputs and neighboring actuator feedback commands. The feedforward artificial neural network input ...

  2. High-Gain AlxGa1-xAs/GaAs Transistors For Neural Networks

    NASA Technical Reports Server (NTRS)

    Kim, Jae-Hoon; Lin, Steven H.

    1991-01-01

    High-gain AlxGa1-xAs/GaAs npn double heterojunction bipolar transistors were developed for use as phototransistors in optoelectronic integrated circuits, especially in artificial neural networks. The transistors perform both the photodetection and saturating-amplification functions of neurons. They are good candidates for such applications because they are structurally compatible with laser diodes and light-emitting diodes, detect light, and provide the high current gain needed to compensate for losses in holographic optical elements.

  3. Clustering of neural code words revealed by a first-order phase transition

    NASA Astrophysics Data System (ADS)

    Huang, Haiping; Toyoizumi, Taro

    2016-06-01

    A network of neurons in the central nervous system collectively represents information by its spiking activity states. Typically observed states, i.e., code words, occupy only a limited portion of the state space due to constraints imposed by network interactions. Geometrical organization of code words in the state space, critical for neural information processing, is poorly understood due to its high dimensionality. Here, we explore the organization of neural code words using retinal data by computing the entropy of code words as a function of Hamming distance from a particular reference codeword. Specifically, we report that the retinal code words in the state space are divided into multiple distinct clusters separated by entropy-gaps, and that this structure is shared with well-known associative memory networks in a recallable phase. Our analysis also elucidates a special nature of the all-silent state. The all-silent state is surrounded by the densest cluster of code words and located within a reachable distance from most code words. This code-word space structure quantitatively predicts typical deviation of a state-trajectory from its initial state. Altogether, our findings reveal a non-trivial heterogeneous structure of the code-word space that shapes information representation in a biological network.
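
    The entropy-versus-distance profile used to reveal clusters can be computed directly from binarized population states; the numpy sketch below uses a random stand-in for the retinal code words.

      import numpy as np

      def entropy_by_distance(words, reference):
          """Entropy (bits) of code words grouped by Hamming distance from a reference word."""
          d = (words != reference).sum(axis=1)
          profile = {}
          for dist in np.unique(d):
              _, counts = np.unique(words[d == dist], axis=0, return_counts=True)
              p = counts / counts.sum()
              profile[int(dist)] = float(-(p * np.log2(p)).sum())
          return profile

      words = (np.random.rand(5000, 20) < 0.15).astype(int)   # stand-in binary population states
      profile = entropy_by_distance(words, reference=np.zeros(20, dtype=int))
      print(profile)   # entropy gaps across distances would indicate clustered code words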

  4. Application of the artificial neural network in quantitative structure-gradient elution retention relationship of phenylthiocarbamyl amino acids derivatives.

    PubMed

    Tham, S Y; Agatonovic-Kustrin, S

    2002-05-15

    A quantitative structure-retention relationship (QSRR) method was used to model the reversed-phase high-performance liquid chromatography (RP-HPLC) separation of 18 selected amino acids. Retention data for phenylthiocarbamyl (PTC) amino acid derivatives were obtained using gradient elution on an ODS column with a mobile phase of varying acetonitrile and acetate buffer composition containing 0.5 ml/l of triethylamine (TEA). The molecular structure of each amino acid was encoded with 36 calculated molecular descriptors. The correlation between the molecular descriptors and the retention time of the compounds in the calibration set was established using the genetic neural network method. A genetic algorithm (GA) was used to select important molecular descriptors, and a supervised artificial neural network (ANN) was used to correlate mobile phase composition and selected descriptors with the experimentally derived retention times. Retention time values were used as the network's output, and the calculated molecular descriptors and mobile phase composition as the inputs. The best model with five input descriptors was chosen, and the significance of the selected descriptors for amino acid separation was examined. Results confirmed the dominant role of the organic modifier in such chromatographic systems, in addition to the lipophilicity (log P) and the molecular size and shape (topological indices) of the investigated solutes.

  5. Compact VLSI neural computer integrated with active pixel sensor for real-time ATR applications

    NASA Astrophysics Data System (ADS)

    Fang, Wai-Chi; Udomkesmalee, Gabriel; Alkalai, Leon

    1997-04-01

    A compact VLSI neural computer integrated with an active pixel sensor has been under development to mimic what is inherent in biological vision systems. This electronic eye-brain computer is targeted for real-time machine vision applications which require both high-bandwidth communication and high-performance computing for data sensing, synergy of multiple types of sensory information, feature extraction, target detection, target recognition, and control functions. The neural computer is based on a composite structure which combines an Annealing Cellular Neural Network (ACNN) and a Hierarchical Self-Organization Neural Network (HSONN). The ACNN architecture is a programmable and scalable multi-dimensional array of annealing neurons which are locally connected to their neighboring neurons. Meanwhile, the HSONN adopts a hierarchical structure with nonlinear basis functions. The ACNN+HSONN neural computer is effectively designed to perform programmable functions for machine vision processing at all levels with its embedded host processor. It provides a two order-of-magnitude increase in computation power over state-of-the-art microcomputer and DSP microelectronics. The feasibility of a compact current-mode VLSI design of the ACNN+HSONN neural computer is demonstrated by a 3D 16X8X9-cube neural processor chip design in a 2-micrometer CMOS technology. Integration of this neural computer as one slice of a 4'X4' multichip module into the 3D MCM based avionics architecture for NASA's New Millennium Program is also described.

  6. Scaling Properties of Dimensionality Reduction for Neural Populations and Network Models

    PubMed Central

    Cowley, Benjamin R.; Doiron, Brent; Kohn, Adam

    2016-01-01

    Recent studies have applied dimensionality reduction methods to understand how the multi-dimensional structure of neural population activity gives rise to brain function. It is unclear, however, how the results obtained from dimensionality reduction generalize to recordings with larger numbers of neurons and trials or how these results relate to the underlying network structure. We address these questions by applying factor analysis to recordings in the visual cortex of non-human primates and to spiking network models that self-generate irregular activity through a balance of excitation and inhibition. We compared the scaling trends of two key outputs of dimensionality reduction—shared dimensionality and percent shared variance—with neuron and trial count. We found that the scaling properties of networks with non-clustered and clustered connectivity differed, and that the in vivo recordings were more consistent with the clustered network. Furthermore, recordings from tens of neurons were sufficient to identify the dominant modes of shared variability that generalize to larger portions of the network. These findings can help guide the interpretation of dimensionality reduction outputs in regimes of limited neuron and trial sampling and help relate these outputs to the underlying network structure. PMID:27926936
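
    The two outputs compared here can be reproduced with an off-the-shelf factor analysis; the sketch below estimates a shared dimensionality and a mean percent shared variance from a hypothetical spike-count matrix, with the 95% criterion and factor count chosen arbitrarily for illustration.

      import numpy as np
      from sklearn.decomposition import FactorAnalysis

      def shared_stats(counts, n_factors=10):
          """counts: (n_trials, n_neurons) spike counts.

          Returns (shared dimensionality, mean percent shared variance per neuron)."""
          fa = FactorAnalysis(n_components=n_factors).fit(counts)
          L = fa.components_.T                        # (n_neurons, n_factors) loadings
          shared = np.sum(L ** 2, axis=1)             # shared variance per neuron
          pct = 100 * shared / (shared + fa.noise_variance_)
          eig = np.linalg.eigvalsh(L @ L.T)[::-1]     # spectrum of the shared covariance
          dim = int(np.sum(np.cumsum(eig) / eig.sum() < 0.95) + 1)
          return dim, pct.mean()

      X = np.random.poisson(5.0, size=(400, 60))      # stand-in for 60 neurons over 400 trials
      print(shared_stats(X))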

  7. Increasingly diverse brain dynamics in the developmental arc: using Pareto-optimization to infer a mechanism

    NASA Astrophysics Data System (ADS)

    Tang, Evelyn; Giusti, Chad; Baum, Graham; Gu, Shi; Pollock, Eli; Kahn, Ari; Roalf, David; Moore, Tyler; Ruparel, Kosha; Gur, Ruben; Gur, Raquel; Satterthwaite, Theodore; Bassett, Danielle

    Motivated by a recent demonstration that the network architecture of white matter supports emerging control of diverse neural dynamics as children mature into adults, we seek to investigate structural mechanisms that support these changes. Beginning from a network representation of diffusion imaging data, we simulate network evolution with a set of simple growth rules built on principles of network control. Notably, the optimal evolutionary trajectory displays a striking correspondence to the progression of child to adult brain, suggesting that network control is a driver of development. More generally, and in comparison to the complete set of available models, we demonstrate that all brain networks from child to adult are structured in a manner highly optimized for the control of diverse neural dynamics. Within this near-optimality, we observe differences in the predicted control mechanisms of the child and adult brains, suggesting that the white matter architecture in children has a greater potential to increasingly support brain state transitions, potentially underlying cognitive switching.

  8. Automatic Seismic-Event Classification with Convolutional Neural Networks.

    NASA Astrophysics Data System (ADS)

    Bueno Rodriguez, A.; Titos Luzón, M.; Garcia Martinez, L.; Benitez, C.; Ibáñez, J. M.

    2017-12-01

    Active volcanoes exhibit a wide range of seismic signals, providing vast amounts of unlabelled volcano-seismic data that can be analyzed through the lens of artificial intelligence. However, obtaining high-quality labelled data is time-consuming and expensive. Deep neural networks can process data in their raw form, compute high-level features and provide a better representation of the input data distribution. These systems can be deployed to classify seismic data at scale, enhance current early-warning systems and build extensive seismic catalogs. In this research, we aim to classify spectrograms from seven different seismic events registered at "Volcán de Fuego" (Colima, Mexico) during four eruptive periods. Our approach is based on convolutional neural networks (CNNs), a sub-type of deep neural networks that can exploit the grid structure of the data. Volcano-seismic signals can be mapped into a grid-like structure using the spectrogram: a representation of the temporal evolution in terms of time and frequency. Spectrograms were computed from the data using Hamming windows of 4 s length, 2.5 s overlap and 128-point FFT resolution. Results are compared to deep neural networks, random forests and SVMs. Experiments show that CNNs can exploit temporal and frequency information, attaining a classification accuracy of 93%, similar to deep networks (91%) but outperforming SVMs and random forests. These results empirically show that CNNs are powerful models for classifying a wide range of volcano-seismic signals, and achieve good generalization. Furthermore, volcano-seismic spectrograms contain useful discriminative information for the CNN, as higher layers of the network combine high-level features computed for each frequency band, helping to detect simultaneous events in time. Being at the intersection of deep learning and geophysics, this research enables future studies of how CNNs can be used in volcano monitoring to accurately detect and locate seismic events.
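
    The spectrogram front end described (4 s Hamming windows, 2.5 s overlap, 128-point FFT) can be reproduced with scipy; the 25 Hz sampling rate and the random waveform below are assumptions made so the example is self-contained.

      import numpy as np
      from scipy.signal import spectrogram

      fs = 25.0                                  # assumed (decimated) sampling rate in Hz
      x = np.random.randn(int(600 * fs))         # stand-in for a 10-minute seismic trace

      f, t, Sxx = spectrogram(
          x, fs=fs,
          window="hamming",
          nperseg=int(4.0 * fs),                 # 4-second windows
          noverlap=int(2.5 * fs),                # 2.5-second overlap
          nfft=128,                              # 128-point FFT resolution
      )
      log_spec = 10 * np.log10(Sxx + 1e-12)      # dB image of the kind fed to a CNN
      print(log_spec.shape)                      # (frequency bins, time frames)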

  9. Simulation of an array-based neural net model

    NASA Technical Reports Server (NTRS)

    Barnden, John A.

    1987-01-01

    Research in cognitive science suggests that much of cognition involves the rapid manipulation of complex data structures. However, it is very unclear how this could be realized in neural networks or connectionist systems. A core question is: how could the interconnectivity of items in an abstract-level data structure be neurally encoded? The answer appeals mainly to positional relationships between activity patterns within neural arrays, rather than directly to neural connections in the traditional way. The new method was initially devised to account for abstract symbolic data structures, but it also supports cognitively useful spatial analogue, image-like representations. As the neural model is based on massive, uniform, parallel computations over 2D arrays, the massively parallel processor is a convenient tool for simulation work, although there are complications in using the machine to the fullest advantage. An MPP Pascal simulation program for a small pilot version of the model is running.

  10. Detection, location, and quantification of structural damage by neural-net-processed moiré profilometry

    NASA Astrophysics Data System (ADS)

    Grossman, Barry G.; Gonzalez, Frank S.; Blatt, Joel H.; Hooker, Jeffery A.

    1992-03-01

    The development of efficient high speed techniques to recognize, locate, and quantify damage is vitally important for successful automated inspection systems such as ones used for the inspection of undersea pipelines. Two critical problems must be solved to achieve these goals: the reduction of nonuseful information present in the video image and automatic recognition and quantification of the extent and location of damage. Artificial neural network processed moiré profilometry appears to be a promising technique to accomplish this. Real-time video moiré techniques have been developed which clearly distinguish damaged and undamaged areas on structures, thus reducing the amount of extraneous information input into an inspection system. Artificial neural networks have demonstrated advantages for image processing, since they can learn the desired response to a given input and are inherently fast when implemented in hardware due to their parallel computing architecture. Video moiré images of pipes with dents of different depths were used to train a neural network, with the desired output being the location and severity of the damage. The system was then successfully tested with a second series of moiré images. The techniques employed and the results obtained are discussed.

  11. A chronometric functional sub-network in the thalamo-cortical system regulates the flow of neural information necessary for conscious cognitive processes.

    PubMed

    León-Domínguez, Umberto; Vela-Bueno, Antonio; Froufé-Torres, Manuel; León-Carrión, Jose

    2013-06-01

    The thalamo-cortical system has been defined as a neural network associated with consciousness. While there seems to be wide agreement that the thalamo-cortical system directly intervenes in vigilance and arousal, a divergence of opinion persists regarding its intervention in the control of other cognitive processes necessary for consciousness. In the present manuscript, we provide a review of recent scientific findings on the thalamo-cortical system and its role in the control and regulation of the flow of neural information necessary for conscious cognitive processes. We suggest that the axis formed by the medial prefrontal cortex and different thalamic nuclei (reticular nucleus, intralaminar nucleus, and midline nucleus), represents a core component for consciousness. This axis regulates different cerebral structures which allow basic cognitive processes like attention, arousal and memory to emerge. In order to produce a synchronized coherent response, neural communication between cerebral structures must have exact timing (chronometry). Thus, a chronometric functional sub-network within the thalamo-cortical system keeps us in an optimal and continuous functional state, allowing high-order cognitive processes, essential to awareness and qualia, to take place. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks

    PubMed Central

    Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan C.; van Schaik, André

    2015-01-01

    We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted or delayed pre-synaptic spike to the post-synaptic neuron in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform. PMID:26041985
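
    The adaptors described implement pair-based spike-timing rules; a software sketch of an exponential STDP update is shown below, with time constants and learning rates as hypothetical values (the hardware uses a time-multiplexed digital realization rather than this floating-point form).

      import numpy as np

      def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
          """Pair-based STDP: potentiate if the pre-spike precedes the post-spike, else depress."""
          dt = t_post - t_pre                      # ms
          if dt > 0:
              return a_plus * np.exp(-dt / tau)    # causal pairing -> LTP
          return -a_minus * np.exp(dt / tau)       # acausal pairing -> LTD

      w = 0.5
      for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (60.0, 61.0)]:   # hypothetical spike pairs
          w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)
      print(w)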

  13. ANALYSIS OF CLINICAL AND DERMOSCOPIC FEATURES FOR BASAL CELL CARCINOMA NEURAL NETWORK CLASSIFICATION

    PubMed Central

    Cheng, Beibei; Stanley, R. Joe; Stoecker, William V; Stricklin, Sherea M.; Hinton, Kristen A.; Nguyen, Thanh K.; Rader, Ryan K.; Rabinovitz, Harold S.; Oliviero, Margaret; Moss, Randy H.

    2012-01-01

    Background Basal cell carcinoma (BCC) is the most commonly diagnosed cancer in the United States. In this research, we examine four different feature categories used for diagnostic decisions, including patient personal profile (patient age, gender, etc.), general exam (lesion size and location), common dermoscopic (blue-gray ovoids, leaf-structure dirt trails, etc.), and specific dermoscopic lesion (white/pink areas, semitranslucency, etc.). Specific dermoscopic features are more restricted versions of the common dermoscopic features. Methods Combinations of the four feature categories are analyzed over a data set of 700 lesions, with 350 BCCs and 350 benign lesions, for lesion discrimination using neural network-based techniques, including Evolving Artificial Neural Networks and Evolving Artificial Neural Network Ensembles. Results Experiment results based on ten-fold cross validation for training and testing the different neural network-based techniques yielded an area under the receiver operating characteristic curve as high as 0.981 when all features were combined. The common dermoscopic lesion features generally yielded higher discrimination results than other individual feature categories. Conclusions Experimental results show that combining clinical and image information provides enhanced lesion discrimination capability over either information source separately. This research highlights the potential of data fusion as a model for the diagnostic process. PMID:22724561

  14. Advanced Aeroservoelastic Testing and Data Analysis (Les Essais Aeroservoelastiques et l’Analyse des Donnees).

    DTIC Science & Technology

    1995-11-01

    The record text is fragmentary; recoverable portions mention neural-network-based AFS concepts and neural-network-based methods for estimating the unknown parameters of a postulated state space model using i) feedforward and ii) recurrent neural networks [117-119].

  15. Neural Networks for Flight Control

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1996-01-01

    Neural networks are being developed at NASA Ames Research Center to permit real-time adaptive control of time varying nonlinear systems, enhance the fault-tolerance of mission hardware, and permit online system reconfiguration. In general, the problem of controlling time varying nonlinear systems with unknown structures has not been solved. Adaptive neural control techniques show considerable promise and are being applied to technical challenges including automated docking of spacecraft, dynamic balancing of the space station centrifuge, online reconfiguration of damaged aircraft, and reducing cost of new air and spacecraft designs. Our experiences have shown that neural network algorithms solved certain problems that conventional control methods have been unable to effectively address. These include damage mitigation in nonlinear reconfiguration flight control, early performance estimation of new aircraft designs, compensation for damaged planetary mission hardware by using redundant manipulator capability, and space sensor platform stabilization. This presentation explored these developments in the context of neural network control theory. The discussion began with an overview of why neural control has proven attractive for NASA application domains. The more important issues in control system development were then discussed with references to significant technical advances in the literature. Examples of how these methods have been applied were given, followed by projections of emerging application needs and directions.

  16. Neural networks and logical reasoning systems: a translation table.

    PubMed

    Martins, J; Mendes, R V

    2001-04-01

    A correspondence is established between the basic elements of logic reasoning systems (knowledge bases, rules, inference and queries) and the structure and dynamical evolution laws of neural networks. The correspondence is pictured as a translation dictionary which might allow one to go back and forth between symbolic and network formulations, a desirable step in learning-oriented systems and multicomputer networks. In the framework of Horn clause logics, it is found that atomic propositions with n arguments correspond to nodes with nth-order synapses, rules to synaptic intensity constraints, forward chaining to synaptic dynamics and queries either to simple node activation or to a query tensor dynamics.

  17. Organization of Anti-Phase Synchronization Pattern in Neural Networks: What are the Key Factors?

    PubMed Central

    Li, Dong; Zhou, Changsong

    2011-01-01

    Anti-phase oscillation has been widely observed in cortical neural networks. Elucidating the mechanism underlying the organization of the anti-phase pattern is of significance for better understanding more complicated pattern formations in brain networks. In dynamical systems theory, the organization of the anti-phase oscillation pattern has usually been considered to be related to the time delay in coupling. This is consistent with conduction delays in real neural networks in the brain due to the finite propagation velocity of action potentials. However, other structural factors in cortical neural networks, such as modular organization (connection density) and the coupling types (excitatory or inhibitory), could also play an important role. In this work, we investigate the anti-phase oscillation pattern organized on a two-module network of either a neuronal cell model or a neural mass model, and analyze the impact of the conduction delay times, the connection densities, and coupling types. Our results show that delay times and coupling types can play key roles in this organization. The connection densities may have an influence on the stability of an anti-phase pattern that exists due to the other factors. Furthermore, we show that anti-phase synchronization of slow oscillations can be achieved with small delay times if there is interaction between slow and fast oscillations. These results are significant for further understanding more realistic spatiotemporal dynamics of cortico-cortical communications. PMID:22232576
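
    The role of the delay in selecting anti-phase locking can be illustrated with two delay-coupled phase oscillators standing in for the two modules; the parameters below (10 Hz oscillation, delay of half a period) are hypothetical and the model is a deliberately reduced caricature of the networks studied.

      import numpy as np

      def two_module_phases(omega=2 * np.pi * 10, k=5.0, delay=0.05, T=2.0, dt=1e-4):
          """Euler integration of two delay-coupled Kuramoto oscillators."""
          n, d = int(T / dt), int(delay / dt)
          phi = np.zeros((n, 2))
          phi[:d + 1] = np.random.rand(2) * 2 * np.pi        # constant initial history
          for t in range(d, n - 1):
              coupling = np.sin(phi[t - d, ::-1] - phi[t])   # each unit sees the other's delayed phase
              phi[t + 1] = phi[t] + dt * (omega + k * coupling)
          return phi

      phi = two_module_phases()
      diff = np.angle(np.exp(1j * (phi[-1, 0] - phi[-1, 1])))
      print(diff)   # near +/- pi: with a delay of half the period, the locked state is anti-phase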

  18. Emergent latent symbol systems in recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Monner, Derek; Reggia, James A.

    2012-12-01

    Fodor and Pylyshyn [(1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2), 3-71] famously argued that neural networks cannot behave systematically short of implementing a combinatorial symbol system. A recent response from Frank et al. [(2009). Connectionist semantic systematicity. Cognition, 110(3), 358-379] claimed to have trained a neural network to behave systematically without implementing a symbol system and without any in-built predisposition towards combinatorial representations. We believe systems like theirs may in fact implement a symbol system on a deeper and more interesting level: one where the symbols are latent - not visible at the level of network structure. In order to illustrate this possibility, we demonstrate our own recurrent neural network that learns to understand sentence-level language in terms of a scene. We demonstrate our model's learned understanding by testing it on novel sentences and scenes. By paring down our model into an architecturally minimal version, we demonstrate how it supports combinatorial computation over distributed representations by using the associative memory operations of Vector Symbolic Architectures. Knowledge of the model's memory scheme gives us tools to explain its errors and construct superior future models. We show how the model designs and manipulates a latent symbol system in which the combinatorial symbols are patterns of activation distributed across the layers of a neural network, instantiating a hybrid of classical symbolic and connectionist representations that combines advantages of both.

  19. Nonlinear channel equalization for QAM signal constellation using artificial neural networks.

    PubMed

    Patra, J C; Pal, R N; Baliarsingh, R; Panda, G

    1999-01-01

    Application of artificial neural networks (ANN's) to adaptive channel equalization in a digital communication system with 4-QAM signal constellation is reported in this paper. A novel computationally efficient single layer functional link ANN (FLANN) is proposed for this purpose. This network has a simple structure in which the nonlinearity is introduced by functional expansion of the input pattern by trigonometric polynomials. Because of input pattern enhancement, the FLANN is capable of forming arbitrarily nonlinear decision boundaries and can perform complex pattern classification tasks. Considering channel equalization as a nonlinear classification problem, the FLANN has been utilized for nonlinear channel equalization. The performance of the FLANN is compared with two other ANN structures [a multilayer perceptron (MLP) and a polynomial perceptron network (PPN)] along with a conventional linear LMS-based equalizer for different linear and nonlinear channel models. The effect of eigenvalue ratio (EVR) of input correlation matrix on the equalizer performance has been studied. The comparison of computational complexity involved for the three ANN structures is also provided.
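
    The functional expansion at the heart of the FLANN can be written in a few lines; the sketch below expands each input pattern with trigonometric polynomials and trains the single layer with the LMS rule on a toy BPSK channel, a simplified stand-in for the 4-QAM setting of the paper.

      import numpy as np

      def flann_expand(x):
          """Trigonometric functional expansion of an input pattern x."""
          terms = [x]
          for n in (1, 2):
              terms += [np.sin(n * np.pi * x), np.cos(n * np.pi * x)]
          return np.concatenate(terms)

      def lms_train(X, d, lr=0.05, epochs=20):
          """Single-layer LMS training on the expanded patterns."""
          w = np.zeros(flann_expand(X[0]).size)
          for _ in range(epochs):
              for x, target in zip(X, d):
                  phi = flann_expand(x)
                  w += lr * (target - w @ phi) * phi
          return w

      rng = np.random.default_rng(0)
      s = rng.choice([-1.0, 1.0], size=500)                          # transmitted symbols
      r = s + 0.5 * np.roll(s, 1) + 0.1 * rng.standard_normal(500)   # hypothetical dispersive channel
      X = np.stack([r, np.roll(r, 1)], axis=1)                       # two-tap received window

      w = lms_train(X, s)
      pred = np.array([np.sign(w @ flann_expand(x)) for x in X])
      print("symbol accuracy:", (pred == s).mean())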

  20. Mapping soil landscape as spatial continua: The Neural Network Approach

    NASA Astrophysics Data System (ADS)

    Zhu, A.-Xing

    2000-03-01

    A neural network approach was developed to populate a soil similarity model that was designed to represent soil landscape as spatial continua for hydroecological modeling at watersheds of mesoscale size. The approach employs multilayer feed forward neural networks. The input to the network was data on a set of soil formative environmental factors; the output from the network was a set of similarity values to a set of prescribed soil classes. The network was trained using a conjugate gradient algorithm in combination with a simulated annealing technique to learn the relationships between a set of prescribed soils and their environmental factors. Once trained, the network was used to compute for every location in an area the similarity values of the soil to the set of prescribed soil classes. The similarity values were then used to produce detailed soil spatial information. The approach also included a Geographic Information System procedure for selecting representative training and testing samples and a process of determining the network internal structure. The approach was applied to soil mapping in a watershed, the Lubrecht Experimental Forest, in western Montana. The case study showed that the soil spatial information derived using the neural network approach reveals much greater spatial detail and has a higher quality than that derived from the conventional soil map. Implications of this detailed soil spatial information for hydroecological modeling at the watershed scale are also discussed.

  1. Dynamic network communication as a unifying neural basis for cognition, development, aging, and disease.

    PubMed

    Voytek, Bradley; Knight, Robert T

    2015-06-15

    Perception, cognition, and social interaction depend upon coordinated neural activity. This coordination operates within noisy, overlapping, and distributed neural networks operating at multiple timescales. These networks are built upon a structural scaffolding with intrinsic neuroplasticity that changes with development, aging, disease, and personal experience. In this article, we begin from the perspective that successful interregional communication relies upon the transient synchronization between distinct low-frequency (<80 Hz) oscillations, allowing for brief windows of communication via phase-coordinated local neuronal spiking. From this, we construct a theoretical framework for dynamic network communication, arguing that these networks reflect a balance between oscillatory coupling and local population spiking activity and that these two levels of activity interact. We theorize that when oscillatory coupling is too strong, spike timing within the local neuronal population becomes too synchronous; when oscillatory coupling is too weak, spike timing is too disorganized. Each results in specific disruptions to neural communication. These alterations in communication dynamics may underlie cognitive changes associated with healthy development and aging, in addition to neurological and psychiatric disorders. A number of neurological and psychiatric disorders-including Parkinson's disease, autism, depression, schizophrenia, and anxiety-are associated with abnormalities in oscillatory activity. Although aging, psychiatric and neurological disease, and experience differ in the biological changes to structural gray or white matter, neurotransmission, and gene expression, our framework suggests that any resultant cognitive and behavioral changes in normal or disordered states or their treatment are a product of how these physical processes affect dynamic network communication. Copyright © 2015 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  2. Development of a Neural Network Simulator for Studying the Constitutive Behavior of Structural Composite Materials

    DOE PAGES

    Na, Hyuntae; Lee, Seung-Yub; Üstündag, Ersan; ...

    2013-01-01

    This paper introduces a recent development and application of a noncommercial artificial neural network (ANN) simulator with a graphical user interface (GUI) to assist in rapid data modeling and analysis in the engineering diffraction field. The real-time network training/simulation monitoring tool has been customized for the study of the constitutive behavior of engineering materials, and it has improved the data mining and forecasting capabilities of neural networks. This software has been used to train and simulate the finite element modeling (FEM) data for a fiber composite system, both forward and inverse. The forward neural network simulation precisely reproduces FEM results several orders of magnitude faster than the original FEM. The inverse simulation is more challenging; yet, material parameters can be meaningfully determined with the aid of parameter sensitivity information. The simulator GUI also reveals that the output node size for material parameters and the input normalization method for strain data are critical training conditions in the inverse network. The successful use of ANN modeling and the simulator GUI has been validated through engineering neutron diffraction experimental data by determining constitutive laws of the real fiber composite materials via a mathematically rigorous and physically meaningful parameter search process, once the networks are successfully trained from the FEM database.

  3. Synchrony between sensory and cognitive networks is associated with subclinical variation in autistic traits

    PubMed Central

    Young, Jacob S.; Smith, David V.; Coutlee, Christopher G.; Huettel, Scott A.

    2015-01-01

    Individuals with autistic spectrum disorders exhibit distinct personality traits linked to attentional, social, and affective functions, and those traits are expressed with varying levels of severity in the neurotypical and subclinical population. Variation in autistic traits has been linked to reduced functional and structural connectivity (i.e., underconnectivity, or reduced synchrony) with neural networks modulated by attentional, social, and affective functions. Yet, it remains unclear whether reduced synchrony between these neural networks contributes to autistic traits. To investigate this issue, we used functional magnetic resonance imaging to record brain activation while neurotypical participants who varied in their subclinical scores on the Autism-Spectrum Quotient (AQ) viewed alternating blocks of social and nonsocial stimuli (i.e., images of faces and of landscape scenes). We used independent component analysis (ICA) combined with a spatiotemporal regression to quantify synchrony between neural networks. Our results indicated that decreased synchrony between the executive control network (ECN) and a face-scene network (FSN) predicted higher scores on the AQ. This relationship was not explained by individual differences in head motion, preferences for faces, or personality variables related to social cognition. Our findings build on clinical reports by demonstrating that reduced synchrony between distinct neural networks contributes to a range of subclinical autistic traits. PMID:25852527

  4. Improving protein disorder prediction by deep bidirectional long short-term memory recurrent neural networks.

    PubMed

    Hanson, Jack; Yang, Yuedong; Paliwal, Kuldip; Zhou, Yaoqi

    2017-03-01

    Capturing long-range interactions between structural but not sequence neighbors of proteins is a long-standing challenging problem in bioinformatics. Recently, long short-term memory (LSTM) networks have significantly improved the accuracy of speech and image classification problems by remembering useful past information in long sequential events. Here, we have implemented deep bidirectional LSTM recurrent neural networks for the problem of protein intrinsic disorder prediction. The new method, named SPOT-Disorder, has steadily improved over a similar method using a traditional, window-based neural network (SPINE-D) in all datasets tested, without separate training on short and long disordered regions. Independent tests on four other datasets, including the datasets from critical assessment of structure prediction (CASP) techniques and >10 000 annotated proteins from MobiDB, confirmed SPOT-Disorder as one of the best methods in disorder prediction. Moreover, initial studies indicate that the method is more accurate in predicting functional sites in disordered regions. These results highlight the usefulness of combining LSTM with deep bidirectional recurrent neural networks in capturing non-local, long-range interactions for bioinformatics applications. SPOT-disorder is available as a web server and as a standalone program at: http://sparks-lab.org/server/SPOT-disorder/index.php . j.hanson@griffith.edu.au or yuedong.yang@griffith.edu.au or yaoqi.zhou@griffith.edu.au. Supplementary data is available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
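
    The architecture described, a deep bidirectional LSTM scoring each residue for disorder, can be sketched in a few lines of PyTorch; the layer sizes, feature dimension, and absence of a training loop are assumptions for illustration and do not reflect the published SPOT-Disorder configuration.

      import torch
      import torch.nn as nn

      class DisorderBiLSTM(nn.Module):
          """Per-residue disorder probability from a sequence of residue feature vectors."""
          def __init__(self, n_features=57, hidden=128, layers=2):
              super().__init__()
              self.rnn = nn.LSTM(n_features, hidden, num_layers=layers,
                                 bidirectional=True, batch_first=True)
              self.out = nn.Linear(2 * hidden, 1)

          def forward(self, x):                  # x: (batch, seq_len, n_features)
              h, _ = self.rnn(x)                 # (batch, seq_len, 2 * hidden)
              return torch.sigmoid(self.out(h)).squeeze(-1)   # disorder probability per residue

      model = DisorderBiLSTM()
      x = torch.randn(4, 300, 57)                # 4 hypothetical proteins of length 300
      print(model(x).shape)                      # torch.Size([4, 300])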

  5. Neural networks for aircraft control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  6. Experimental evaluation of heat transfer efficiency of nanofluid in a double pipe heat exchanger and prediction of experimental results using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Maddah, Heydar; Ghasemi, Nahid

    2017-12-01

    In this study, the heat transfer efficiency of water and iron oxide nanofluid in a double pipe heat exchanger equipped with a typical twisted tape is experimentally investigated, and the impacts of the nanofluid concentration and the twisted tape on the heat transfer efficiency are also studied. Experiments were conducted under laminar and turbulent flow for Reynolds numbers in the range of 1000 to 6000 and nanofluid concentrations of 0.01, 0.02 and 0.03 wt%. In order to model and predict the heat transfer efficiency, an artificial neural network was used. The temperature of the hot fluid (nanofluid), the temperature of the cold fluid (water), the mass flow rate of the hot fluid, the mass flow rate of the cold fluid, the concentration of nanofluid and the twist ratio are the inputs to the artificial neural network, and the heat transfer efficiency is the output (target). Heat transfer efficiency in the presence of 0.03 wt% nanofluid increases by 30%, while using both the 0.03 wt% nanofluid and a twisted tape with twist ratio 2 increases the heat transfer efficiency by 60%. Implementation of various network structures with different numbers of neurons in the middle layer showed that the 1-10-6 arrangement, with a correlation coefficient of 0.99181 and a normalized root mean square error of 0.001621, is the preferred arrangement. This structure successfully predicts 72% to 97% of the variation in heat transfer efficiency based on changes in the independent variables. Overall, comparison of the predicted results with other studies, together with the statistical measures, demonstrates the efficiency of the artificial neural network.
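
    As a rough illustration of the single-hidden-layer arrangement reported above (six operating inputs, ten middle-layer neurons, one heat-transfer output), here is a sketch using scikit-learn's MLPRegressor; the synthetic data and the relationship used to generate it are placeholders, not the experimental dataset.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    # Placeholder inputs: hot/cold inlet temperatures, hot/cold mass flow rates,
    # nanofluid concentration (wt%), twist ratio
    X = rng.uniform([40, 15, 0.02, 0.02, 0.01, 2], [70, 30, 0.10, 0.10, 0.03, 6], size=(200, 6))
    y = 0.5 * X[:, 4] * 1000 + 5.0 / X[:, 5] + 0.1 * (X[:, 0] - X[:, 1]) + rng.normal(0, 0.2, 200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(10,), activation='tanh',
                       solver='lbfgs', max_iter=5000, random_state=0)
    net.fit(X_tr, y_tr)
    print("R^2 on held-out data:", net.score(X_te, y_te))
    ```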

  7. Propagating waves can explain irregular neural dynamics.

    PubMed

    Keane, Adam; Gong, Pulin

    2015-01-28

    Cortical neurons in vivo fire quite irregularly. Previous studies about the origin of such irregular neural dynamics have given rise to two major models: a balanced excitation and inhibition model, and a model of highly synchronized synaptic inputs. To elucidate the network mechanisms underlying synchronized synaptic inputs and account for irregular neural dynamics, we investigate a spatially extended, conductance-based spiking neural network model. We show that propagating wave patterns with complex dynamics emerge from the network model. These waves sweep past neurons, to which they provide highly synchronized synaptic inputs. On the other hand, these patterns only emerge from the network with balanced excitation and inhibition; our model therefore reconciles the two major models of irregular neural dynamics. We further demonstrate that the collective dynamics of propagating wave patterns provides a mechanistic explanation for a range of irregular neural dynamics, including the variability of spike timing, slow firing rate fluctuations, and correlated membrane potential fluctuations. In addition, in our model, the distributions of synaptic conductance and membrane potential are non-Gaussian, consistent with recent experimental data obtained using whole-cell recordings. Our work therefore relates the propagating waves that have been widely observed in the brain to irregular neural dynamics. These results demonstrate that neural firing activity, although appearing highly disordered at the single-neuron level, can form dynamical coherent structures, such as propagating waves at the population level. Copyright © 2015 the authors 0270-6474/15/351591-15$15.00/0.

  8. Uncovering the neuroanatomical correlates of cognitive, affective and conative theory of mind in paediatric traumatic brain injury: a neural systems perspective

    PubMed Central

    Catroppa, Cathy; Beare, Richard; Silk, Timothy J.; Hearps, Stephen J.; Beauchamp, Miriam H.; Yeates, Keith O.; Anderson, Vicki A.

    2017-01-01

    Deficits in theory of mind (ToM) are common after neurological insult acquired in the first and second decade of life; however, the contribution of large-scale neural networks to ToM deficits in children with brain injury is unclear. Using paediatric traumatic brain injury (TBI) as a model, this study investigated the sub-acute effect of paediatric traumatic brain injury on grey-matter volume of three large-scale, domain-general brain networks (the Default Mode Network, DMN; the Central Executive Network, CEN; and the Salience Network, SN), as well as two domain-specific neural networks implicated in social-affective processes (the Cerebro-Cerebellar Mentalizing Network, CCMN and the Mirror Neuron/Empathy Network, MNEN). We also evaluated prospective structure–function relationships between these large-scale neural networks and cognitive, affective and conative ToM. 3D T1-weighted magnetic resonance imaging sequences were acquired sub-acutely in 137 children [TBI: n = 103; typically developing (TD) children: n = 34]. All children were assessed on measures of ToM at 24 months post-injury. Children with severe TBI showed sub-acute volumetric reductions in the CCMN, SN, MNEN, CEN and DMN, as well as reduced grey-matter volumes of several hub regions of these neural networks. Volumetric reductions in the CCMN and several of its hub regions, including the cerebellum, predicted poorer cognitive ToM. In contrast, poorer affective and conative ToM were predicted by volumetric reductions in the SN and MNEN, respectively. Overall, results suggest that cognitive, affective and conative ToM may be prospectively predicted by individual differences in structure of different neural systems—the CCMN, SN and MNEN, respectively. The prospective relationship between cerebellar volume and cognitive ToM outcomes is a novel finding in our paediatric brain injury sample and suggests that the cerebellum may play a role in the neural networks important for ToM. These findings are discussed in relation to neurocognitive models of ToM. We conclude that detection of sub-acute volumetric abnormalities of large-scale neural networks and their hub regions may aid in the early identification of children at risk for chronic social-cognitive impairment. PMID:28505355

  9. On-line, adaptive state estimator for active noise control

    NASA Technical Reports Server (NTRS)

    Lim, Tae W.

    1994-01-01

    Dynamic characteristics of airframe structures are expected to vary as aircraft flight conditions change. Accurate knowledge of the changing dynamic characteristics is crucial to enhancing the performance of the active noise control system using feedback control. This research investigates the development of an adaptive, on-line state estimator using a neural network concept to conduct active noise control. In this research, an algorithm has been developed that can be used to estimate displacement and velocity responses at any location on the structure from a limited number of acceleration measurements and input force information. The algorithm employs band-pass filters to extract from the measurement signal the frequency contents corresponding to a desired mode. The filtered signal is then used to train a neural network which consists of a linear neuron with three weights. The structure of the neural network is kept as simple as possible to maximize the sampling frequency. The weights obtained through neural network training are then used to construct the transfer function of a mode in the z-domain and to identify modal properties of each mode. By using the identified transfer function and interpolating the mode shape obtained at sensor locations, the displacement and velocity responses are estimated with reasonable accuracy at any location on the structure. The accuracy of the response estimates depends on the number of modes incorporated in the estimates and the number of sensors employed to conduct mode shape interpolation. Computer simulation demonstrates that the algorithm is capable of adapting to the varying dynamic characteristics of structural properties. Experimental implementation of the algorithm on a DSP (digital signal processing) board for a plate structure is underway. The algorithm is expected to reach the sampling frequency range of about 10 kHz to 20 kHz which needs to be maintained for a typical active noise control application.
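
    The estimator described above — a band-pass filter isolating one structural mode, followed by a linear neuron with three weights trained on-line — can be sketched as follows with numpy and scipy. The filter band, sampling rate, LMS step size, and signal model are illustrative assumptions, not the reported implementation.

    ```python
    import numpy as np
    from scipy.signal import butter, lfilter

    fs = 2000.0                                      # sampling rate (Hz), assumed
    t = np.arange(0, 2.0, 1.0 / fs)
    accel = np.sin(2 * np.pi * 35 * t) + 0.3 * np.sin(2 * np.pi * 120 * t) \
            + 0.05 * np.random.default_rng(2).standard_normal(t.size)

    # Band-pass filter to extract the frequency content of one structural mode (~35 Hz)
    b, a = butter(4, [25, 45], btype='band', fs=fs)
    mode = lfilter(b, a, accel)

    # Linear neuron with three weights: predict y[n] from [y[n-1], y[n-2], u[n-1]]
    # (the force input u is taken as zero here for a free-response sketch)
    w = np.zeros(3)
    mu = 0.05                                        # LMS step size
    for n in range(2, mode.size):
        x = np.array([mode[n - 1], mode[n - 2], 0.0])
        e = mode[n] - w @ x                          # prediction error
        w += mu * e * x / (x @ x + 1e-9)             # normalized LMS update

    # w[0] and w[1] give the denominator of the mode's z-domain transfer function,
    # from which the modal frequency and damping can be identified.
    print("trained weights:", w)
    ```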

  10. Structural Covariance of the Prefrontal-Amygdala Pathways Associated with Heart Rate Variability

    PubMed Central

    Wei, Luqing; Chen, Hong; Wu, Guo-Rong

    2018-01-01

    The neurovisceral integration model has shown a key role of the amygdala in neural circuits underlying heart rate variability (HRV) modulation, and suggested that reciprocal connections from amygdala to brain regions centered on the central autonomic network (CAN) are associated with HRV. To provide neuroanatomical evidence for these theoretical perspectives, the current study used covariance analysis of MRI-based gray matter volume (GMV) to map structural covariance network of the amygdala, and then determined whether the interregional structural correlations related to individual differences in HRV. The results showed that covariance patterns of the amygdala encompassed large portions of cortical (e.g., prefrontal, cingulate, and insula) and subcortical (e.g., striatum, hippocampus, and midbrain) regions, lending evidence from structural covariance analysis to the notion that the amygdala was a pivotal node in neural pathways for HRV modulation. Importantly, participants with higher resting HRV showed increased covariance of amygdala to dorsal medial prefrontal cortex and anterior cingulate cortex (dmPFC/dACC) extending into adjacent medial motor regions [i.e., pre-supplementary motor area (pre-SMA)/SMA], demonstrating structural covariance of the prefrontal-amygdala pathways implicated in HRV, and also implying that resting HRV may reflect the function of neural circuits underlying cognitive regulation of emotion as well as facilitation of adaptive behaviors to emotion. Our results, thus, provide anatomical substrates for the neurovisceral integration model that resting HRV may index an integrative neural network which effectively organizes emotional, cognitive, physiological and behavioral responses in the service of goal-directed behavior and adaptability. PMID:29545744

  11. Power prediction in mobile communication systems using an optimal neural-network structure.

    PubMed

    Gao, X M; Gao, X Z; Tanskanen, J A; Ovaska, S J

    1997-01-01

    Presents a novel neural-network-based predictor for received power level prediction in direct sequence code division multiple access (DS/CDMA) systems. The predictor consists of an adaptive linear element (Adaline) followed by a multilayer perceptron (MLP). An important but difficult problem in designing such a cascade predictor is to determine the complexity of the networks. We solve this problem by using the predictive minimum description length (PMDL) principle to select the optimal numbers of input and hidden nodes. This approach results in a predictor with both good noise attenuation and excellent generalization capability. The optimized neural networks are used for predictive filtering of very noisy Rayleigh fading signals with 1.8 GHz carrier frequency. Our results show that the optimal neural predictor can provide smoothed in-phase and quadrature signals with signal-to-noise ratio (SNR) gains of about 12 and 7 dB at the urban mobile speeds of 5 and 50 km/h, respectively. The corresponding power signal SNR gains are about 11 and 5 dB. Therefore, the neural predictor is well suited for power control applications where “delayless” noise attenuation and efficient reduction of fast fading are required.
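
    A minimal sketch of the Adaline-plus-MLP cascade idea: a linear adaptive element smooths the noisy power samples and an MLP then predicts the next value from a window of Adaline outputs. The window lengths, step size, and synthetic fading-like signal are assumptions, not the PMDL-optimized structure from the paper.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    signal = np.cumsum(rng.normal(0, 0.1, 3000)) + rng.normal(0, 0.5, 3000)  # noisy power-like series

    # Stage 1: Adaline (linear combiner trained by LMS) acting as an adaptive smoother
    p, mu = 8, 0.01
    w = np.zeros(p)
    smoothed = np.zeros_like(signal)
    for n in range(p, signal.size):
        x = signal[n - p:n][::-1]
        y = w @ x
        w += mu * (signal[n] - y) * x / (x @ x + 1e-9)
        smoothed[n] = y

    # Stage 2: MLP predicts the next sample from a window of Adaline outputs
    win = 10
    X = np.array([smoothed[n - win:n] for n in range(win, smoothed.size - 1)])
    y = smoothed[win + 1:]
    mlp = MLPRegressor(hidden_layer_sizes=(12,), max_iter=2000, random_state=0).fit(X[:-200], y[:-200])
    print("test R^2:", mlp.score(X[-200:], y[-200:]))
    ```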

  12. Streaming parallel GPU acceleration of large-scale filter-based spiking neural networks.

    PubMed

    Slażyński, Leszek; Bohte, Sander

    2012-01-01

    The arrival of graphics processing (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms to fully exploit the particulars of GPUs. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given inputs, the internal state for each neuron can be updated in parallel. We show that for filter-based spiking neurons, like the Spike Response Model, the additive nature of membrane potential dynamics enables additional update parallelism. This also reduces the accumulation of numerical errors when using single precision computation, the native precision of GPUs. We further show that optimizing simulation algorithms and data structures to the GPU's architecture has a large pay-off: for example, matching iterative neural updating to the memory architecture of the GPU speeds up this simulation step by a factor of three to five. With such optimizations, we can simulate in better-than-realtime plausible spiking neural networks of up to 50 000 neurons, processing over 35 million spiking events per second.
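
    The additive membrane-potential update that makes filter-based spiking neurons easy to parallelize can be sketched in a few vectorized lines; this numpy version stands in for the GPU kernels and uses a simplified exponential spike-response kernel with assumed parameters and random connectivity.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    N = 50_000                                   # neurons
    W = rng.normal(0, 0.02, (N, 100))            # each neuron receives 100 random inputs
    pre = rng.integers(0, N, (N, 100))           # indices of presynaptic neurons

    decay = np.exp(-1.0 / 20.0)                  # exponential PSP kernel, tau = 20 ms, dt = 1 ms
    V = np.zeros(N)
    spikes = rng.random(N) < 0.02                # initial random activity

    for step in range(100):
        # Additive dynamics: every neuron's potential decays and accumulates the weighted
        # spikes of its presynaptic partners -- a single vectorized update per time step.
        V = decay * V + (W * spikes[pre]).sum(axis=1)
        spikes = V > 1.0                          # threshold crossing emits a spike
        V[spikes] = 0.0                           # reset after spiking
    print("spikes in last step:", int(spikes.sum()))
    ```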

  13. The Effects of Spaceflight on Neurocognitive Performance: Extent, Longevity, and Neural Bases

    NASA Technical Reports Server (NTRS)

    Seidler, Rachael D.; Bloomberg, Jacob; Wood, Scott; Mason, Sara; Mulavara, Ajit; Kofman, Igor; De Dios, Yiri; Gadd, Nicole; Stepanyan, Vahagn; Szecsy, Darcy

    2017-01-01

    Spaceflight effects on gait, balance, and manual motor control have been well studied, and there is some evidence for cognitive deficits. Rodent cortical motor and sensory systems show neural structural alterations with spaceflight. We found extensive changes in behavior, brain structure, and brain function following 70 days of head-down bed rest (HDBR). Specific aims: Aim 1, identify changes in brain structure, function, and network integrity as a function of spaceflight and characterize their time course; Aim 2, specify relationships between structural and functional brain changes and performance and characterize their time course.

  14. Impulsivity and the Modular Organization of Resting-State Neural Networks

    PubMed Central

    Davis, F. Caroline; Knodt, Annchen R.; Sporns, Olaf; Lahey, Benjamin B.; Zald, David H.; Brigidi, Bart D.; Hariri, Ahmad R.

    2013-01-01

    Impulsivity is a complex trait associated with a range of maladaptive behaviors, including many forms of psychopathology. Previous research has implicated multiple neural circuits and neurotransmitter systems in impulsive behavior, but the relationship between impulsivity and organization of whole-brain networks has not yet been explored. Using graph theory analyses, we characterized the relationship between impulsivity and the functional segregation (“modularity”) of the whole-brain network architecture derived from resting-state functional magnetic resonance imaging (fMRI) data. These analyses revealed remarkable differences in network organization across the impulsivity spectrum. Specifically, in highly impulsive individuals, regulatory structures including medial and lateral regions of the prefrontal cortex were isolated from subcortical structures associated with appetitive drive, whereas these brain areas clustered together within the same module in less impulsive individuals. Further exploration of the modular organization of whole-brain networks revealed novel shifts in the functional connectivity between visual, sensorimotor, cortical, and subcortical structures across the impulsivity spectrum. The current findings highlight the utility of graph theory analyses of resting-state fMRI data in furthering our understanding of the neurobiological architecture of complex behaviors. PMID:22645253
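
    A sketch of the graph-theoretic modularity analysis described above, assuming networkx and a synthetic region-by-region correlation matrix in place of real resting-state fMRI connectivity; the threshold and region count are arbitrary choices for illustration.

    ```python
    import numpy as np
    import networkx as nx
    from networkx.algorithms import community

    rng = np.random.default_rng(5)
    n_regions = 90
    ts = rng.standard_normal((200, n_regions))        # placeholder time series (TRs x regions)
    corr = np.corrcoef(ts, rowvar=False)

    # Binarize: keep only the strongest positive correlations as edges
    threshold = np.percentile(corr[np.triu_indices(n_regions, 1)], 90)
    G = nx.Graph()
    G.add_nodes_from(range(n_regions))
    for i in range(n_regions):
        for j in range(i + 1, n_regions):
            if corr[i, j] > threshold:
                G.add_edge(i, j, weight=corr[i, j])

    # Functional segregation: community partition and its modularity Q
    parts = community.greedy_modularity_communities(G)
    Q = community.modularity(G, parts)
    print(f"{len(parts)} modules, modularity Q = {Q:.3f}")
    ```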

  15. Frame prediction using recurrent convolutional encoder with residual learning

    NASA Astrophysics Data System (ADS)

    Yue, Boxuan; Liang, Jun

    2018-05-01

    Predicting future frames of a video is difficult but urgently needed in autonomous driving. Conventional methods can only predict abstract trends in a region of interest; the rise of deep learning makes frame prediction possible. In this paper, we propose a novel recurrent convolutional encoder and deconvolutional decoder structure to predict frames. We introduce residual learning in the convolutional encoder to address gradient issues: residual connections turn gradient backpropagation into an identity mapping, preserving the full gradient information and overcoming the gradient problems of Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Moreover, compared with the branches in CNNs and the gated structures in RNNs, residual learning significantly reduces training time. In the experiments, we train our networks on the UCF101 dataset and compare the predictions with state-of-the-art methods. The results show that our networks can predict frames fast and efficiently. Furthermore, our networks are applied to driving video to verify their practicality.
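
    A minimal sketch of the encoder-recurrence-decoder layout with residual (identity-mapping) connections in the convolutional encoder, assuming PyTorch; the layer sizes, the plain LSTM bottleneck, and the frame shape are illustrative choices, not the paper's exact network.

    ```python
    import torch
    import torch.nn as nn

    class ResBlock(nn.Module):
        """Convolutional block whose output is input + F(input) (identity mapping)."""
        def __init__(self, ch):
            super().__init__()
            self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(ch, ch, 3, padding=1))
        def forward(self, x):
            return torch.relu(x + self.body(x))

    class FramePredictor(nn.Module):
        def __init__(self, ch=32):
            super().__init__()
            self.enc = nn.Sequential(nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(),
                                     ResBlock(ch),
                                     nn.Conv2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(),
                                     ResBlock(ch))
            self.rnn = nn.LSTM(ch * 16 * 16, 512, batch_first=True)
            self.fc = nn.Linear(512, ch * 16 * 16)
            self.dec = nn.Sequential(nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(),
                                     nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Sigmoid())

        def forward(self, frames):                        # frames: (batch, time, 3, 64, 64)
            b, t = frames.shape[:2]
            feats = self.enc(frames.reshape(b * t, 3, 64, 64)).reshape(b, t, -1)
            h, _ = self.rnn(feats)                        # recurrence over past frames
            z = self.fc(h[:, -1]).reshape(b, -1, 16, 16)  # latent code for the next frame
            return self.dec(z)

    model = FramePredictor()
    clip = torch.rand(2, 5, 3, 64, 64)                    # 2 clips of 5 past frames
    print(model(clip).shape)                              # torch.Size([2, 3, 64, 64])
    ```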

  16. Neural network based load and price forecasting and confidence interval estimation in deregulated power markets

    NASA Astrophysics Data System (ADS)

    Zhang, Li

    With the deregulation of the electric power market in New England, an independent system operator (ISO) has been separated from the New England Power Pool (NEPOOL). The ISO provides a regional spot market, with bids on various electricity-related products and services submitted by utilities and independent power producers. A utility can bid on the spot market and buy or sell electricity via bilateral transactions. Good estimation of market clearing prices (MCP) will help utilities and independent power producers determine bidding and transaction strategies with low risks, and this is crucial for utilities to compete in the deregulated environment. MCP prediction, however, is difficult since bidding strategies used by participants are complicated and MCP is a non-stationary process. The main objective of this research is to provide efficient short-term load and MCP forecasting and corresponding confidence interval estimation methodologies. In this research, the complexity of load and MCP with other factors is investigated, and neural networks are used to model the complex relationship between input and output. With improved learning algorithm and on-line update features for load forecasting, a neural network based load forecaster was developed, and has been in daily industry use since summer 1998 with good performance. MCP is volatile because of the complexity of market behaviors. In practice, neural network based MCP predictors usually have a cascaded structure, as several key input factors need to be estimated first. In this research, the uncertainties involved in a cascaded neural network structure for MCP prediction are analyzed, and prediction distribution under the Bayesian framework is developed. A fast algorithm to evaluate the confidence intervals by using the memoryless Quasi-Newton method is also developed. The traditional back-propagation algorithm for neural network learning needs to be improved since MCP is a non-stationary process. The extended Kalman filter (EKF) can be used as an integrated adaptive learning and confidence interval estimation algorithm for neural networks, with fast convergence and small confidence intervals. However, EKF learning is computationally expensive because it involves high dimensional matrix manipulations. A modified U-D factorization within the decoupled EKF (DEKF-UD) framework is developed in this research. The computational efficiency and numerical stability are significantly improved.

  17. Neural network-based adaptive dynamic surface control for permanent magnet synchronous motors.

    PubMed

    Yu, Jinpeng; Shi, Peng; Dong, Wenjie; Chen, Bing; Lin, Chong

    2015-03-01

    This brief considers the problem of neural networks (NNs)-based adaptive dynamic surface control (DSC) for permanent magnet synchronous motors (PMSMs) with parameter uncertainties and load torque disturbance. First, NNs are used to approximate the unknown and nonlinear functions of the PMSM drive system and a novel adaptive DSC is constructed to avoid the explosion of complexity in the backstepping design. Next, under the proposed adaptive neural DSC, the number of adaptive parameters required is reduced to only one, and the designed neural controller structure is much simpler than in some existing results in the literature, which can guarantee that the tracking error converges to a small neighborhood of the origin. Then, simulations are given to illustrate the effectiveness and potential of the new design technique.

  18. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks

    PubMed Central

    Yu, Haiyang; Wu, Zhihai; Wang, Shuqin; Wang, Yunpeng; Ma, Xiaolei

    2017-01-01

    Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction. PMID:28672867
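
    A minimal sketch of the SRCN idea, assuming PyTorch: each time step's network-wide speed map (a small grid image) passes through a CNN, and an LSTM learns the temporal dynamics of the resulting feature sequence. The grid size, channel counts, and the 278-link output dimension are taken loosely from the abstract and otherwise assumed.

    ```python
    import torch
    import torch.nn as nn

    class SRCN(nn.Module):
        """CNN captures spatial dependencies of the traffic grid, LSTM captures
        temporal dynamics, and a linear head predicts the next-step link speeds."""
        def __init__(self, n_links=278):
            super().__init__()
            self.cnn = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                                     nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.lstm = nn.LSTM(32 * 8 * 8, 128, batch_first=True)
            self.head = nn.Linear(128, n_links)

        def forward(self, grids):                         # grids: (batch, time, 1, 32, 32)
            b, t = grids.shape[:2]
            f = self.cnn(grids.reshape(b * t, 1, 32, 32)).reshape(b, t, -1)
            h, _ = self.lstm(f)
            return self.head(h[:, -1])                    # speed on each link at the next step

    model = SRCN()
    history = torch.rand(4, 12, 1, 32, 32)                # 12 past speed maps per sample
    print(model(history).shape)                           # torch.Size([4, 278])
    ```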

  19. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks.

    PubMed

    Yu, Haiyang; Wu, Zhihai; Wang, Shuqin; Wang, Yunpeng; Ma, Xiaolei

    2017-06-26

    Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction.

  20. Predicting backbone Cα angles and dihedrals from protein sequences by stacked sparse auto-encoder deep neural network.

    PubMed

    Lyons, James; Dehzangi, Abdollah; Heffernan, Rhys; Sharma, Alok; Paliwal, Kuldip; Sattar, Abdul; Zhou, Yaoqi; Yang, Yuedong

    2014-10-30

    Because of the nearly constant distance between two neighbouring Cα atoms, the local backbone structure of proteins can be represented accurately by the angle between C(αi-1)-C(αi)-C(αi+1) (θ) and a dihedral angle rotated about the C(αi)-C(αi+1) bond (τ). θ and τ angles, as representatives of the structural properties of three to four amino-acid residues, offer a description of backbone conformations that is complementary to φ and ψ angles (single residue) and secondary structures (>3 residues). Here, we report the first machine-learning technique for sequence-based prediction of θ and τ angles. Predicted angles based on an independent test have a mean absolute error of 9° for θ and 34° for τ with a distribution on the θ-τ plane close to that of native values. The average root-mean-square distance of 10-residue fragment structures constructed from predicted θ and τ angles is only 1.9Å from their corresponding native structures. Predicted θ and τ angles are expected to be complementary to predicted ϕ and ψ angles and secondary structures for use in model validation and template-based as well as template-free structure prediction. The deep neural network learning technique is available as an on-line server called Structural Property prediction with Integrated DEep neuRal network (SPIDER) at http://sparks-lab.org. Copyright © 2014 Wiley Periodicals, Inc.
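
    A rough sketch of a stacked sparse auto-encoder regressor for backbone angles, assuming PyTorch: each layer is pre-trained to reconstruct its input under an L1 sparsity penalty on the hidden code, then the stack is fine-tuned to predict sin/cos of θ and τ (which sidesteps angle periodicity). All sizes, penalties, and features are illustrative, not the SPIDER settings.

    ```python
    import torch
    import torch.nn as nn

    def pretrain_sparse_layer(data, in_dim, hid_dim, l1=1e-3, epochs=50):
        """Greedy layer-wise pre-training: reconstruct the input while
        penalizing the L1 norm of the hidden code (sparsity)."""
        enc, dec = nn.Linear(in_dim, hid_dim), nn.Linear(hid_dim, in_dim)
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
        for _ in range(epochs):
            code = torch.sigmoid(enc(data))
            loss = nn.functional.mse_loss(dec(code), data) + l1 * code.abs().mean()
            opt.zero_grad(); loss.backward(); opt.step()
        return enc

    features = torch.randn(1024, 60)          # per-residue window features (placeholder)
    targets = torch.randn(1024, 4)            # sin/cos of theta and tau (placeholder)

    enc1 = pretrain_sparse_layer(features, 60, 40)
    enc2 = pretrain_sparse_layer(torch.sigmoid(enc1(features)).detach(), 40, 20)

    # Stack the pre-trained encoders and fine-tune with a regression head
    model = nn.Sequential(enc1, nn.Sigmoid(), enc2, nn.Sigmoid(), nn.Linear(20, 4))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(100):
        loss = nn.functional.mse_loss(model(features), targets)
        opt.zero_grad(); loss.backward(); opt.step()
    print("fine-tuned MSE:", loss.item())
    ```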

  1. Neural networks as a control methodology

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1990-01-01

    While conventional computers must be programmed in a logical fashion by a person who thoroughly understands the task to be performed, the motivation behind neural networks is to develop machines which can train themselves to perform tasks, using available information about desired system behavior and learning from experience. There are three goals of this fellowship program: (1) to evaluate various neural net methods and generate computer software to implement those deemed most promising on a personal computer equipped with Matlab; (2) to evaluate methods currently in the professional literature for system control using neural nets to choose those most applicable to control of flexible structures; and (3) to apply the control strategies chosen in (2) to a computer simulation of a test article, the Control Structures Interaction Suitcase Demonstrator, which is a portable system consisting of a small flexible beam driven by a torque motor and mounted on springs tuned to the first flexible mode of the beam. Results of each are discussed.

  2. Neural networks for feedback feedforward nonlinear control systems.

    PubMed

    Parisini, T; Zoppoli, R

    1994-01-01

    This paper deals with the problem of designing feedback feedforward control strategies to drive the state of a dynamic system (in general, nonlinear) so as to track any desired trajectory joining the points of given compact sets, while minimizing a certain cost function (in general, nonquadratic). Due to the generality of the problem, conventional methods are difficult to apply. Thus, an approximate solution is sought by constraining control strategies to take on the structure of multilayer feedforward neural networks. After discussing the approximation properties of neural control strategies, a particular neural architecture is presented, which is based on what has been called the "linear-structure preserving principle". The original functional problem is then reduced to a nonlinear programming one, and backpropagation is applied to derive the optimal values of the synaptic weights. Recursive equations to compute the gradient components are presented, which generalize the classical adjoint system equations of N-stage optimal control theory. Simulation results related to nonlinear nonquadratic problems show the effectiveness of the proposed method.

  3. Thalamic structures and associated cognitive functions: Relations with age and aging.

    PubMed

    Fama, Rosemary; Sullivan, Edith V

    2015-07-01

    The thalamus, with its cortical, subcortical, and cerebellar connections, is a critical node in networks supporting cognitive functions known to decline in normal aging, including component processes of memory and executive functions of attention and information processing. The macrostructure, microstructure, and neural connectivity of the thalamus change across the adult lifespan. Structural and functional magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) have demonstrated regional thalamic volume shrinkage and microstructural degradation, with anterior regions generally more compromised than posterior regions. The integrity of selective thalamic nuclei and projections declines with advancing age, particularly those in thalamofrontal, thalamoparietal, and thalamolimbic networks. This review presents studies that assess the relations between age and aging and the structure, function, and connectivity of the thalamus and associated neural networks and focuses on their relations with processes of attention, speed of information processing, and working and episodic memory. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Time Series Neural Network Model for Part-of-Speech Tagging Indonesian Language

    NASA Astrophysics Data System (ADS)

    Tanadi, Theo

    2018-03-01

    Part-of-speech tagging (POS tagging) is an important task in natural language processing. Many methods have been used for this task, including neural networks. This paper models a neural network that performs POS tagging. A time series neural network is modelled to solve the problems that a basic neural network faces when attempting POS tagging. In order to enable the neural network to take text data as input, the text data are first clustered using Brown clustering, resulting in a binary dictionary that the neural network can use. To further improve the accuracy of the neural network, other features such as the POS tag, suffix, and affix of previous words are also fed to the network.

  5. Visible rodent brain-wide networks at single-neuron resolution

    PubMed Central

    Yuan, Jing; Gong, Hui; Li, Anan; Li, Xiangning; Chen, Shangbin; Zeng, Shaoqun; Luo, Qingming

    2015-01-01

    Although great progress is being made in neuroscience, some fundamental questions, such as cell type classification, neural circuit tracing and neurovascular coupling, remain unsolved. Because of the structural features of neurons and neural circuits, answering these questions requires going beyond current neuroanatomical technology to acquire the fine morphology of neurons and vessels and to trace long-distance circuits at axonal resolution in the whole mammalian brain. Combined with fast-developing labeling techniques, emerging whole-brain optical imaging technology shows great potential for studying the structure and function of specific neurons and neural circuits. In this review, we summarize brain-wide optical tomography techniques, review the progress on visualizing brain-wide neuronal and vascular networks enabled by these novel techniques, and discuss future technical developments. PMID:26074784

  6. Global cluster synchronization in nonlinearly coupled community networks with heterogeneous coupling delays.

    PubMed

    Tseng, Jui-Pin

    2017-02-01

    This investigation establishes the global cluster synchronization of complex networks with a community structure based on an iterative approach. The units comprising the network are described by differential equations, and can be non-autonomous and involve time delays. In addition, units in the different communities can be governed by different equations. The coupling configuration of the network is rather general. The coupling terms can be non-diffusive, nonlinear, asymmetric, and with heterogeneous coupling delays. Based on this approach, both delay-dependent and delay-independent criteria for global cluster synchronization are derived. We implement the present approach for a nonlinearly coupled neural network with heterogeneous coupling delays. Two numerical examples are given to show that neural networks can behave in a variety of new collective ways under the synchronization criteria. These examples also demonstrate that neural networks remain synchronized in spite of coupling delays between neurons across different communities; however, they may lose synchrony if the coupling delays between the neurons within the same community are too large, such that the synchronization criteria are violated. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. A new neural net approach to robot 3D perception and visuo-motor coordination

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan

    1992-01-01

    A novel neural network approach to robot hand-eye coordination is presented. The approach provides a true sense of visual error servoing, redundant arm configuration control for collision avoidance, and invariant visuo-motor learning under gazing control. A 3-D perception network is introduced to represent the robot internal 3-D metric space in which visual error servoing and arm configuration control are performed. The arm kinematic network performs the bidirectional association between 3-D space arm configurations and joint angles, and enforces the legitimate arm configurations. The arm kinematic net is structured by a radial-based competitive and cooperative network with hierarchical self-organizing learning. The main goal of the present work is to demonstrate that the neural net representation of the robot 3-D perception net serves as an important intermediate functional block connecting robot eyes and arms.

  8. Implanted neural network potentials: Application to Li-Si alloys

    NASA Astrophysics Data System (ADS)

    Onat, Berk; Cubuk, Ekin D.; Malone, Brad D.; Kaxiras, Efthimios

    2018-03-01

    Modeling the behavior of materials composed of elements with different bonding and electronic structure character for large spatial and temporal scales and over a large compositional range is a challenging problem. Cases in point are amorphous alloys of Si, a prototypical covalent material, and Li, a prototypical metal, which are being considered as anodes for high-energy-density batteries. To address this challenge, we develop a methodology based on neural networks that extends the conventional training approach to incorporate pre-trained parts that capture the character of different components, into the overall network; we refer to this model as the "implanted neural network" method. We show that this approach works well for the Si-Li amorphous alloys for a wide range of compositions, giving good results for key quantities like the diffusion coefficients. The method is readily generalizable to more complicated situations that involve two or more different elements.
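
    The "implanted" idea — sub-networks pre-trained on the individual components and then frozen inside a larger network trained on the alloy — can be sketched generically in PyTorch. The module sizes and the way the pre-trained parts are combined are assumptions for illustration, not the paper's actual interatomic-potential architecture.

    ```python
    import torch
    import torch.nn as nn

    def elemental_subnet():
        """Small network meant to be pre-trained on a single element (e.g., pure Si or Li)."""
        return nn.Sequential(nn.Linear(32, 24), nn.Tanh(), nn.Linear(24, 8), nn.Tanh())

    si_net, li_net = elemental_subnet(), elemental_subnet()
    # ... pre-train si_net and li_net on single-element data here ...

    # "Implant" the pre-trained parts: freeze them inside the alloy model
    for p in list(si_net.parameters()) + list(li_net.parameters()):
        p.requires_grad = False

    class AlloyEnergy(nn.Module):
        def __init__(self):
            super().__init__()
            self.si, self.li = si_net, li_net
            self.mix = nn.Sequential(nn.Linear(16, 16), nn.Tanh(), nn.Linear(16, 1))

        def forward(self, si_desc, li_desc):          # per-atom descriptors for each species
            return self.mix(torch.cat([self.si(si_desc), self.li(li_desc)], dim=-1))

    model = AlloyEnergy()
    energy = model(torch.randn(10, 32), torch.randn(10, 32))
    # Only the freshly added mixing layers receive gradients during alloy training
    print([n for n, p in model.named_parameters() if p.requires_grad])
    ```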

  9. Speech reconstruction using a deep partially supervised neural network.

    PubMed

    McLoughlin, Ian; Li, Jingjie; Song, Yan; Sharifzadeh, Hamid R

    2017-08-01

    Statistical speech reconstruction for larynx-related dysphonia has achieved good performance using Gaussian mixture models and, more recently, restricted Boltzmann machine arrays; however, deep neural network (DNN)-based systems have been hampered by the limited amount of training data available from individual voice-loss patients. The authors propose a novel DNN structure that allows a partially supervised training approach on spectral features from smaller data sets, yielding very good results compared with the current state-of-the-art.

  10. A patch-based convolutional neural network for remote sensing image classification.

    PubMed

    Sharma, Atharva; Liu, Xiuwen; Yang, Xiaojun; Shi, Di

    2017-11-01

    Availability of accurate land cover information over large areas is essential to the global environment sustainability; digital classification using medium-resolution remote sensing data would provide an effective method to generate the required land cover information. However, low accuracy of existing per-pixel based classification methods for medium-resolution data is a fundamental limiting factor. While convolutional neural networks (CNNs) with deep layers have achieved unprecedented improvements in object recognition applications that rely on fine image structures, they cannot be applied directly to medium-resolution data due to lack of such fine structures. In this paper, considering the spatial relation of a pixel to its neighborhood, we propose a new deep patch-based CNN system tailored for medium-resolution remote sensing data. The system is designed by incorporating distinctive characteristics of medium-resolution data; in particular, the system computes patch-based samples from multidimensional top of atmosphere reflectance data. With a test site from the Florida Everglades area (with a size of 771 square kilometers), the proposed new system has outperformed pixel-based neural network, pixel-based CNN and patch-based neural network by 24.36%, 24.23% and 11.52%, respectively, in overall classification accuracy. By combining the proposed deep CNN and the huge collection of medium-resolution remote sensing data, we believe that much more accurate land cover datasets can be produced over large areas. Copyright © 2017 Elsevier Ltd. All rights reserved.
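
    A minimal sketch of the patch-based strategy, assuming PyTorch: each pixel is labeled by classifying a small multispectral patch centered on it rather than the pixel alone. The band count, patch size, and class count are placeholders, not the paper's configuration.

    ```python
    import torch
    import torch.nn as nn

    def extract_patch(image, row, col, k=5):
        """Return the k x k neighborhood (all bands) centered on a pixel,
        assuming the pixel is at least k//2 away from the image border."""
        h = k // 2
        return image[:, row - h:row + h + 1, col - h:col + h + 1]

    class PatchCNN(nn.Module):
        def __init__(self, bands=6, classes=8, k=5):
            super().__init__()
            self.net = nn.Sequential(nn.Conv2d(bands, 32, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                     nn.Flatten(), nn.Linear(64 * k * k, classes))
        def forward(self, patches):                 # patches: (batch, bands, k, k)
            return self.net(patches)

    scene = torch.rand(6, 256, 256)                 # top-of-atmosphere reflectance, 6 bands
    patches = torch.stack([extract_patch(scene, r, c) for r, c in [(10, 10), (100, 200)]])
    logits = PatchCNN()(patches)
    print(logits.shape)                             # torch.Size([2, 8])
    ```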

  11. Two's company, three (or more) is a simplex : Algebraic-topological tools for understanding higher-order structure in neural data.

    PubMed

    Giusti, Chad; Ghrist, Robert; Bassett, Danielle S

    2016-08-01

    The language of graph theory, or network science, has proven to be an exceptional tool for addressing myriad problems in neuroscience. Yet, the use of networks is predicated on a critical simplifying assumption: that the quintessential unit of interest in a brain is a dyad - two nodes (neurons or brain regions) connected by an edge. While rarely mentioned, this fundamental assumption inherently limits the types of neural structure and function that graphs can be used to model. Here, we describe a generalization of graphs that overcomes these limitations, thereby offering a broad range of new possibilities in terms of modeling and measuring neural phenomena. Specifically, we explore the use of simplicial complexes: a structure developed in the field of mathematics known as algebraic topology, of increasing applicability to real data due to a rapidly growing computational toolset. We review the underlying mathematical formalism as well as the budding literature applying simplicial complexes to neural data, from electrophysiological recordings in animal models to hemodynamic fluctuations in humans. Based on the exceptional flexibility of the tools and recent ground-breaking insights into neural function, we posit that this framework has the potential to eclipse graph theory in unraveling the fundamental mysteries of cognition.

  12. A Deep Learning Network Approach to ab initio Protein Secondary Structure Prediction

    PubMed Central

    Spencer, Matt; Eickholt, Jesse; Cheng, Jianlin

    2014-01-01

    Ab initio protein secondary structure (SS) predictions are utilized to generate tertiary structure predictions, which are increasingly demanded due to the rapid discovery of proteins. Although recent developments have slightly exceeded previous methods of SS prediction, accuracy has stagnated around 80% and many wonder if prediction cannot be advanced beyond this ceiling. Disciplines that have traditionally employed neural networks are experimenting with novel deep learning techniques in attempts to stimulate progress. Since neural networks have historically played an important role in SS prediction, we wanted to determine whether deep learning could contribute to the advancement of this field as well. We developed an SS predictor that makes use of the position-specific scoring matrix generated by PSI-BLAST and deep learning network architectures, which we call DNSS. Graphical processing units and CUDA software optimize the deep network architecture and efficiently train the deep networks. Optimal parameters for the training process were determined, and a workflow comprising three separately trained deep networks was constructed in order to make refined predictions. This deep learning network approach was used to predict SS for a fully independent test data set of 198 proteins, achieving a Q3 accuracy of 80.7% and a Sov accuracy of 74.2%. PMID:25750595

  13. A Deep Learning Network Approach to ab initio Protein Secondary Structure Prediction.

    PubMed

    Spencer, Matt; Eickholt, Jesse; Jianlin Cheng

    2015-01-01

    Ab initio protein secondary structure (SS) predictions are utilized to generate tertiary structure predictions, which are increasingly demanded due to the rapid discovery of proteins. Although recent developments have slightly exceeded previous methods of SS prediction, accuracy has stagnated around 80 percent and many wonder if prediction cannot be advanced beyond this ceiling. Disciplines that have traditionally employed neural networks are experimenting with novel deep learning techniques in attempts to stimulate progress. Since neural networks have historically played an important role in SS prediction, we wanted to determine whether deep learning could contribute to the advancement of this field as well. We developed an SS predictor that makes use of the position-specific scoring matrix generated by PSI-BLAST and deep learning network architectures, which we call DNSS. Graphical processing units and CUDA software optimize the deep network architecture and efficiently train the deep networks. Optimal parameters for the training process were determined, and a workflow comprising three separately trained deep networks was constructed in order to make refined predictions. This deep learning network approach was used to predict SS for a fully independent test dataset of 198 proteins, achieving a Q3 accuracy of 80.7 percent and a Sov accuracy of 74.2 percent.

  14. Learning Orthographic Structure With Sequential Generative Neural Networks.

    PubMed

    Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco

    2016-04-01

    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.

  15. Hydraulic and separation characteristics of an industrial gas centrifuge calculated with neural networks

    NASA Astrophysics Data System (ADS)

    Butov, Vladimir; Timchenko, Sergey; Ushakov, Ivan; Golovkov, Nikita; Poberezhnikov, Andrey

    2018-03-01

    A single gas centrifuge (GC) is generally used for the separation of binary mixtures of isotopes. Processes taking place within the centrifuge are complex and non-linear. Their characteristics can change over time with long-term operation due to wear of the GC's main structural elements. The paper is devoted to the determination of basic operation parameters of the centrifuge with the help of neural networks. We have developed a method for determining the parameters of industrial GC operation by processing statistical data. In this work, we have constructed a neural network that is capable of determining the main hydraulic and separation characteristics of the gas centrifuge, depending on the geometric dimensions of the gas centrifuge, load value, and rotor speed.

  16. Comparison of Mathematical Equation and Neural Network Modeling for Drying Kinetic of Mendong in Microwave Oven

    NASA Astrophysics Data System (ADS)

    Maulidah, Rifa'atul; Purqon, Acep

    2016-08-01

    Mendong (Fimbristylis globulosa) has potential industrial applications. We investigate a predictive model for heat and mass transfer in the drying kinetics of Mendong. We experimentally dry the Mendong using a microwave oven. In this study, we analyze three mathematical equations and a feedforward neural network (FNN) with backpropagation to describe the drying behavior of Mendong. Our results show that the artificial neural network model agrees well with the experimental data and performs better than the mathematical equation approach. The best FNN for the prediction is a 3-20-1-1 structure with the Levenberg-Marquardt training function. This drying kinetics model can potentially be applied to determine the optimal parameters during Mendong drying and to estimate and control the drying process.

  17. Abnormal resting-state connectivity of motor and cognitive networks in early manifest Huntington's disease.

    PubMed

    Wolf, R C; Sambataro, F; Vasic, N; Depping, M S; Thomann, P A; Landwehrmeyer, G B; Süssmuth, S D; Orth, M

    2014-11-01

    Functional magnetic resonance imaging (fMRI) of multiple neural networks during the brain's 'resting state' could facilitate biomarker development in patients with Huntington's disease (HD) and may provide new insights into the relationship between neural dysfunction and clinical symptoms. To date, however, very few studies have examined the functional integrity of multiple resting state networks (RSNs) in manifest HD, and even less is known about whether concomitant brain atrophy affects neural activity in patients. Using MRI, we investigated brain structure and RSN function in patients with early HD (n = 20) and healthy controls (n = 20). For resting-state fMRI data a group-independent component analysis identified spatiotemporally distinct patterns of motor and prefrontal RSNs of interest. We used voxel-based morphometry to assess regional brain atrophy, and 'biological parametric mapping' analyses to investigate the impact of atrophy on neural activity. Compared with controls, patients showed connectivity changes within distinct neural systems including lateral prefrontal, supplementary motor, thalamic, cingulate, temporal and parietal regions. In patients, supplementary motor area and cingulate cortex connectivity indices were associated with measures of motor function, whereas lateral prefrontal connectivity was associated with cognition. This study provides evidence for aberrant connectivity of RSNs associated with motor function and cognition in early manifest HD when controlling for brain atrophy. This suggests clinically relevant changes of RSN activity in the presence of HD-associated cortical and subcortical structural abnormalities.

  18. Selected Flight Test Results for Online Learning Neural Network-Based Flight Control System

    NASA Technical Reports Server (NTRS)

    Williams, Peggy S.

    2004-01-01

    The NASA F-15 Intelligent Flight Control System project team has developed a series of flight control concepts designed to demonstrate the benefits of a neural network-based adaptive controller. The objective of the team is to develop and flight-test control systems that use neural network technology to optimize the performance of the aircraft under nominal conditions as well as stabilize the aircraft under failure conditions. Failure conditions include locked or failed control surfaces as well as unforeseen damage that might occur to the aircraft in flight. This report presents flight-test results for an adaptive controller using stability and control derivative values from an online learning neural network. A dynamic cell structure neural network is used in conjunction with a real-time parameter identification algorithm to estimate aerodynamic stability and control derivative increments to the baseline aerodynamic derivatives in flight. This set of open-loop flight tests was performed in preparation for a future phase of flights in which the learning neural network and parameter identification algorithm output would provide the flight controller with aerodynamic stability and control derivative updates in near real time. Two flight maneuvers are analyzed: a pitch frequency sweep and an automated flight-test maneuver designed to optimally excite the parameter identification algorithm in all axes. Frequency responses generated from flight data are compared to those obtained from nonlinear simulation runs. An examination of flight data shows that addition of the flight-identified aerodynamic derivative increments into the simulation improved the pitch handling qualities of the aircraft.

  19. Holography as deep learning

    NASA Astrophysics Data System (ADS)

    Gan, Wen-Cong; Shu, Fu-Wen

    The quantum many-body problem, with its exponentially large number of degrees of freedom, can be reduced to a tractable computational form by neural network methods [G. Carleo and M. Troyer, Science 355 (2017) 602, arXiv:1606.02318]. The power of deep neural networks (DNNs) based on deep learning is clarified by mapping them to the renormalization group (RG), which may shed light on the holographic principle by identifying a sequence of RG transformations with the AdS geometry. In this paper, we show that any network reflecting an RG process has intrinsic hyperbolic geometry, and we discuss the structure of entanglement encoded in the graph of a DNN. We find that the entanglement structure of the DNN is of the Ryu-Takayanagi form. Based on these facts, we argue that the emergence of a holographic gravitational theory is related to the deep learning process of the quantum field theory.

  20. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning.

    PubMed

    Mizutani, Eiji; Demmel, James W

    2003-01-01

    This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we shall explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).

  1. A network application for modeling a centrifugal compressor performance map

    NASA Astrophysics Data System (ADS)

    Nikiforov, A.; Popova, D.; Soldatova, K.

    2017-08-01

    The approximation of aerodynamic performance of a centrifugal compressor stage and vaneless diffuser by neural networks is presented. Advantages, difficulties and specific features of the method are described. An example of a neural network and its structure is shown. The performances in terms of efficiency, pressure ratio and work coefficient of 39 model stages within the range of flow coefficient from 0.01 to 0.08 were modeled with mean squared error 1.5 %. In addition, the loss and friction coefficients of vaneless diffusers of relative widths 0.014-0.10 are modeled with mean squared error 2.45 %.

  2. Effects of Spike Anticipation on the Spiking Dynamics of Neural Networks

    PubMed Central

    de Santos-Sierra, Daniel; Sanchez-Jimenez, Abel; Garcia-Vellisca, Mariano A.; Navas, Adrian; Villacorta-Atienza, Jose A.

    2015-01-01

    Synchronization is one of the central phenomena involved in information processing in living systems. It is known that the nervous system requires the coordinated activity of both local and distant neural populations. Such an interplay allows different information modalities to be merged into a whole, supporting high-level mental skills such as understanding, memory, and abstraction. Though the biological processes underlying synchronization in the brain are not fully understood, a variety of mechanisms supporting different types of synchronization have been reported at both the theoretical and experimental level. One of the more intriguing of these phenomena is anticipating synchronization, which has recently been reported in a pair of unidirectionally coupled artificial neurons under simple conditions (Pyragiene and Pyragas, 2013), where the slave neuron is able to anticipate in time the behavior of the master one. In this paper, we explore the effect of spike anticipation on the information processing performed by a neural network at the functional and structural level. We show that the introduction of intermediary neurons in the network enhances spike anticipation and analyse how these variations in spike anticipation can significantly change the firing regime of the neural network according to its functional and structural properties. In addition, we show that the interspike interval (ISI), one of the main features of the neural response associated with information coding, can be closely related to the spike anticipation of each spike, and how synaptic plasticity can be modulated through that relationship. This study has been performed through numerical simulation of a coupled system of Hindmarsh–Rose neurons. PMID:26648863

  3. Effects of Spike Anticipation on the Spiking Dynamics of Neural Networks.

    PubMed

    de Santos-Sierra, Daniel; Sanchez-Jimenez, Abel; Garcia-Vellisca, Mariano A; Navas, Adrian; Villacorta-Atienza, Jose A

    2015-01-01

    Synchronization is one of the central phenomena involved in information processing in living systems. It is known that the nervous system requires the coordinated activity of both local and distant neural populations. Such an interplay allows different information modalities to be merged into a whole, supporting high-level mental skills such as understanding, memory, and abstraction. Though the biological processes underlying synchronization in the brain are not fully understood, a variety of mechanisms supporting different types of synchronization have been reported at both the theoretical and experimental level. One of the more intriguing of these phenomena is anticipating synchronization, which has recently been reported in a pair of unidirectionally coupled artificial neurons under simple conditions (Pyragiene and Pyragas, 2013), where the slave neuron is able to anticipate in time the behavior of the master one. In this paper, we explore the effect of spike anticipation on the information processing performed by a neural network at the functional and structural level. We show that the introduction of intermediary neurons in the network enhances spike anticipation and analyse how these variations in spike anticipation can significantly change the firing regime of the neural network according to its functional and structural properties. In addition, we show that the interspike interval (ISI), one of the main features of the neural response associated with information coding, can be closely related to the spike anticipation of each spike, and how synaptic plasticity can be modulated through that relationship. This study has been performed through numerical simulation of a coupled system of Hindmarsh-Rose neurons.

  4. Semantic Segmentation of Convolutional Neural Network for Supervised Classification of Multispectral Remote Sensing

    NASA Astrophysics Data System (ADS)

    Xue, L.; Liu, C.; Wu, Y.; Li, H.

    2018-04-01

    Semantic segmentation is fundamental research in remote sensing image processing. Because of the complex maritime environment, the classification of roads, vegetation, buildings and water from remote sensing imagery is a challenging task. Although neural networks have achieved excellent performance in semantic segmentation in recent years, there are few works using CNNs for ground object segmentation, and the results could be further improved. This paper uses a convolutional neural network named U-Net, whose structure has a contracting path and an expansive path to obtain high-resolution output. In the network, we added BN layers, which are more conducive to the backward pass. Moreover, after the upsampling convolutions, we add dropout layers to prevent overfitting. Together these modifications yield more precise segmentation results. To verify this network architecture, we used a Kaggle dataset. Experimental results show that U-Net achieved good performance compared with other architectures, especially on high-resolution remote sensing imagery.
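
    The sketch below shows the kind of modification described: a small U-Net-style network with batch normalization after each convolution and dropout after the upsampling convolution. The depth, channel counts and class count are assumptions for illustration, not the paper's exact architecture.

    ```python
    import torch
    import torch.nn as nn

    def conv_bn_relu(in_ch, out_ch):
        # 3x3 convolution followed by batch normalization (helps the backward
        # pass, as described in the abstract), then ReLU.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        """Two-level U-Net-style network: contracting path, expansive path,
        one skip connection, dropout after the upsampling convolution."""
        def __init__(self, in_ch=3, n_classes=5):
            super().__init__()
            self.enc1 = conv_bn_relu(in_ch, 32)
            self.enc2 = conv_bn_relu(32, 64)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
            self.drop = nn.Dropout2d(p=0.5)        # dropout to limit overfitting
            self.dec1 = conv_bn_relu(64, 32)       # 64 = 32 (skip) + 32 (upsampled)
            self.head = nn.Conv2d(32, n_classes, kernel_size=1)

        def forward(self, x):
            e1 = self.enc1(x)                      # contracting path
            e2 = self.enc2(self.pool(e1))
            u = self.drop(self.up(e2))             # expansive path + dropout
            d = self.dec1(torch.cat([u, e1], dim=1))  # skip connection
            return self.head(d)                    # per-pixel class scores

    logits = TinyUNet()(torch.randn(1, 3, 128, 128))
    print(logits.shape)  # torch.Size([1, 5, 128, 128])
    ```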

  5. Classification of conductance traces with recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Lauritzen, Kasper P.; Magyarkuti, András; Balogh, Zoltán; Halbritter, András; Solomon, Gemma C.

    2018-02-01

    We present a new automated method for structural classification of the traces obtained in break junction experiments. Using recurrent neural networks trained on the traces of minimal cross-sectional area in molecular dynamics simulations, we successfully separate the traces into two classes: point contact or nanowire. This is done without any assumptions about the expected features of each class. The trained neural network is applied to experimental break junction conductance traces, and it separates the classes as well as the previously used experimental methods. The effect of using partial conductance traces is explored, and we show that the method performs equally well using full or partial traces (as long as the trace just prior to breaking is included). When only the initial part of the trace is included, the results are still better than random chance. Finally, we show that the neural network classification method can be used to classify experimental conductance traces without using simulated results for training, but instead training the network on a few representative experimental traces. This offers a tool to recognize some characteristic motifs of the traces, which can be hard to find by simple data selection algorithms.
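
    As a rough illustration of this kind of trace classifier, the sketch below feeds a 1-D conductance trace through an LSTM and maps the final hidden state to two class logits (here labelled point contact vs. nanowire). The trace length, hidden size and training setup are assumptions, and random tensors stand in for real simulated or experimental traces.

    ```python
    import torch
    import torch.nn as nn

    class TraceClassifier(nn.Module):
        """Recurrent classifier for 1-D conductance traces:
        class 0 = point contact, class 1 = nanowire (labels assumed)."""
        def __init__(self, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.fc = nn.Linear(hidden, 2)

        def forward(self, traces):             # traces: (batch, length)
            x = traces.unsqueeze(-1)           # -> (batch, length, 1)
            _, (h_n, _) = self.lstm(x)         # final hidden state summarizes the trace
            return self.fc(h_n[-1])            # class logits

    model = TraceClassifier()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Dummy batch standing in for (simulated or experimental) traces.
    traces = torch.randn(8, 300)               # 8 traces of 300 conductance samples
    labels = torch.randint(0, 2, (8,))
    loss = loss_fn(model(traces), labels)
    loss.backward()
    opt.step()
    print(float(loss))
    ```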

  6. Protein contact prediction using patterns of correlation.

    PubMed

    Hamilton, Nicholas; Burrage, Kevin; Ragan, Mark A; Huber, Thomas

    2004-09-01

    We describe a new method for using neural networks to predict residue contact pairs in a protein. The main inputs to the neural network are a set of 25 measures of correlated mutation between all pairs of residues in two "windows" of size 5 centered on the residues of interest. While the individual pair-wise correlations are a relatively weak predictor of contact, by training the network on windows of correlation the accuracy of prediction is significantly improved. The neural network is trained on a set of 100 proteins and then tested on a disjoint set of 1033 proteins of known structure. An average predictive accuracy of 21.7% is obtained taking the best L/2 predictions for each protein, where L is the sequence length. Taking the best L/10 predictions gives an average accuracy of 30.7%. The predictor is also tested on a set of 59 proteins from the CASP5 experiment. The accuracy is found to be relatively consistent across different sequence lengths, but to vary widely according to the secondary structure. Predictive accuracy is also found to improve by using multiple sequence alignments containing many sequences to calculate the correlations. Copyright 2004 Wiley-Liss, Inc.
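
    The central feature construction, two windows of size 5 giving 25 correlated-mutation values per residue pair, can be sketched as follows. The correlation matrix here is a random placeholder, and the near-diagonal exclusion and edge handling are assumptions for illustration only.

    ```python
    import numpy as np

    def window_features(corr, i, j, w=5):
        """Return the w*w correlated-mutation values for the two windows of
        size w centred on residues i and j (the 25 main inputs per pair)."""
        h = w // 2
        rows = np.clip(np.arange(i - h, i + h + 1), 0, corr.shape[0] - 1)
        cols = np.clip(np.arange(j - h, j + h + 1), 0, corr.shape[1] - 1)
        return corr[np.ix_(rows, cols)].ravel()

    L = 120                                   # sequence length (placeholder)
    corr = np.random.rand(L, L)               # stand-in for a correlated-mutation matrix
    corr = (corr + corr.T) / 2                # symmetric, as for residue pairs

    X = np.array([window_features(corr, i, j)
                  for i in range(L) for j in range(i + 5, L)])  # skip near-diagonal pairs
    print(X.shape)  # (n_pairs, 25) feature matrix fed to the neural network
    ```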

  7. Prediction of enzyme activity with neural network models based on electronic and geometrical features of substrates.

    PubMed

    Szaleniec, Maciej

    2012-01-01

    Artificial Neural Networks (ANNs) are introduced as robust and versatile tools in quantitative structure-activity relationship (QSAR) modeling. Their application to the modeling of enzyme reactivity is discussed, along with methodological issues. Methods of input variable selection, optimization of network internal structure, data set division and model validation are discussed. The application of ANNs in the modeling of enzyme activity over the last 20 years is briefly recounted. The discussed methodology is exemplified by the case of ethylbenzene dehydrogenase (EBDH). Intelligent Problem Solver and genetic algorithms are applied for input vector selection, whereas k-means clustering is used to partition the data into training and test cases. The obtained models exhibit high correlation between the predicted and experimental values (R(2) > 0.9). Sensitivity analyses and study of the response curves are used as tools for the physicochemical interpretation of the models in terms of the EBDH reaction mechanism. Neural networks are shown to be a versatile tool for the construction of robust QSAR models that can be applied to a range of aspects important in drug design and the prediction of biological activity.
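
    One concrete ingredient mentioned above is the use of k-means clustering to split the data into training and test cases so that both sets cover the descriptor space. The sketch below illustrates one plausible way to do this (one test compound drawn per cluster); the descriptor matrix, cluster count and selection rule are assumptions, not the procedure reported for EBDH.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 7))           # 80 substrates x 7 descriptors (placeholder)

    # Cluster the descriptor space, then draw one test compound per cluster so the
    # test set spans the same chemical space as the training set.
    k = 10
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    test_idx = np.array([rng.choice(np.where(labels == c)[0]) for c in range(k)])
    train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
    print(len(train_idx), "training cases,", len(test_idx), "test cases")
    ```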

  8. Adaptive dynamical networks

    NASA Astrophysics Data System (ADS)

    Maslennikov, O. V.; Nekorkin, V. I.

    2017-10-01

    Dynamical networks are systems of active elements (nodes) interacting with each other through links. Examples are power grids, neural structures, coupled chemical oscillators, and communications networks, all of which are characterized by a networked structure and intrinsic dynamics of their interacting components. If the coupling structure of a dynamical network can change over time due to nodal dynamics, then such a system is called an adaptive dynamical network. The term ‘adaptive’ implies that the coupling topology can be rewired; the term ‘dynamical’ implies the presence of internal node and link dynamics. The main results of research on adaptive dynamical networks are reviewed. Key notions and definitions of the theory of complex networks are given, and major collective effects that emerge in adaptive dynamical networks are described.

  9. Optimization design of LED heat dissipation structure based on strip fins

    NASA Astrophysics Data System (ADS)

    Xue, Lingyun; Wan, Wenbin; Chen, Qingguang; Rao, Huanle; Xu, Ping

    2018-03-01

    To solve the heat dissipation problem of LEDs, a radiator structure based on strip fins is designed and a method to optimize the structural parameters of the strip fins is proposed in this paper. The combination of RBF neural networks and the particle swarm optimization (PSO) algorithm is used for modeling and optimization, respectively. During the experiment, 150 datasets of LED junction temperature for different values of the structural parameters (number of strip fins and fin length, width, and height) were obtained with ANSYS software. An RBF neural network is then applied to build a non-linear regression model, and the structural parameters are optimized with the particle swarm optimization algorithm using this model. The experimental results show that the lowest LED junction temperature reaches 43.88 °C when the number of hidden layer nodes in the RBF neural network is 10, the two learning factors in the particle swarm optimization algorithm are both 0.5, the inertia factor is 1 and the maximum number of iterations is 100; the corresponding number of fins is 64 in an 8*8 distribution, and the length, width and height of the fins are 4.3 mm, 4.48 mm and 55.3 mm, respectively. To check the modeling and optimization results, the LED junction temperature at the optimized structural parameters was simulated; the result is 43.592 °C, which approximately equals the optimal result. Compared with an ordinary plate-fin radiator structure, whose temperature is 56.38 °C, the proposed structure greatly enhances heat dissipation performance.
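
    The surrogate-then-optimize pipeline can be sketched as below: an RBF regression model of junction temperature over the four fin parameters is fitted, then minimized with a basic PSO loop. The inertia factor 1 and learning factors 0.5 mirror the settings quoted in the abstract, but the training data, the toy temperature formula, the RBF width and the parameter bounds are placeholders, not the ANSYS data or the paper's model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy training data: (n_fins, length, width, height) -> junction temperature.
    X = rng.uniform([16, 2.0, 2.0, 30.0], [100, 6.0, 6.0, 60.0], size=(150, 4))
    y = 70 - 0.1 * X[:, 0] - 1.5 * X[:, 3] / 10 + rng.normal(0, 0.3, 150)  # placeholder physics

    # --- RBF regression surrogate (Gaussian kernels centred on training points) ---
    def rbf(A, B, gamma=0.05):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    Xm, Xs = X.mean(0), X.std(0)
    Z = (X - Xm) / Xs
    w = np.linalg.lstsq(rbf(Z, Z) + 1e-6 * np.eye(len(Z)), y, rcond=None)[0]
    predict = lambda P: rbf((P - Xm) / Xs, Z) @ w

    # --- Particle swarm optimization of the surrogate ---
    lo, hi = np.array([16, 2.0, 2.0, 30.0]), np.array([100, 6.0, 6.0, 60.0])
    pos = rng.uniform(lo, hi, size=(30, 4)); vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), predict(pos)
    for _ in range(100):                              # max iterations = 100
        gbest = pbest[pbest_val.argmin()]
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 1.0 * vel + 0.5 * r1 * (pbest - pos) + 0.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = predict(pos)
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]

    print("predicted minimum temperature:", pbest_val.min())
    print("at fin parameters:", pbest[pbest_val.argmin()])
    ```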

  10. Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.

    PubMed

    Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui

    2017-01-01

    Underwater inherent optical properties (IOPs) are fundamental clues for many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal IOP measuring methods, but they are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measuring method using only a single underwater image with the help of a deep artificial neural network. The power of artificial neural networks has been proven in image processing and computer vision with deep learning technology. However, image-based IOP estimation is a quite different and challenging task. Unlike traditional applications such as image classification or localization, IOP estimation looks at the transparency of the water between the camera and the target objects to estimate multiple optical properties simultaneously. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOP estimation based on a single, possibly noisy, RGB image. The imaging depth information is used as an auxiliary input to help our model make better decisions.

  11. Enhancement of digital radiography image quality using a convolutional neural network.

    PubMed

    Sun, Yuewen; Li, Litao; Cong, Peng; Wang, Zhentao; Guo, Xiaojing

    2017-01-01

    Digital radiography systems are widely used for noninvasive security checks and medical imaging examinations. However, such systems are limited by lower image quality in terms of spatial resolution and signal-to-noise ratio. In this study, we explored whether the image quality acquired by a digital radiography system can be improved with a modified convolutional neural network that generates high-resolution images with reduced noise from the original low-quality images. Evaluation on a test dataset of 5 X-ray images showed that the proposed method outperformed traditional methods (bicubic interpolation and a 3D block-matching approach) by about 1.3 dB in peak signal-to-noise ratio (PSNR), while keeping processing time within one second. The experimental results demonstrated that a residual to residual (RTR) convolutional neural network remarkably improved the image quality of structural details by increasing the image resolution and reducing image noise. Thus, this study indicates that applying this RTR convolutional neural network is useful for improving image quality acquired by digital radiography systems.
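
    The general residual-learning idea behind such enhancement networks is sketched below: the CNN predicts a correction image that is added back to the degraded input, so the network only has to model the missing detail and noise. The depth, channel count and single-channel input are assumptions; this is a generic sketch, not the paper's "residual to residual" architecture.

    ```python
    import torch
    import torch.nn as nn

    class ResidualEnhancer(nn.Module):
        """Predicts a residual image that is added to the degraded input,
        so the network only has to learn the noise/detail correction."""
        def __init__(self, channels=64, depth=6):
            super().__init__()
            layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(channels, channels, 3, padding=1),
                           nn.BatchNorm2d(channels),
                           nn.ReLU(inplace=True)]
            layers += [nn.Conv2d(channels, 1, 3, padding=1)]
            self.body = nn.Sequential(*layers)

        def forward(self, x):
            return x + self.body(x)            # enhanced = input + predicted residual

    x = torch.randn(1, 1, 256, 256)            # one low-quality radiograph (placeholder)
    print(ResidualEnhancer()(x).shape)         # same spatial size, enhanced output
    ```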

  12. Inverse simulation system for manual-controlled rendezvous and docking based on artificial neural network

    NASA Astrophysics Data System (ADS)

    Zhou, Wanmeng; Wang, Hua; Tang, Guojin; Guo, Shuai

    2016-09-01

    The time-consuming experimental method for handling qualities assessment cannot meet the increasingly fast design requirements of manned space flight. As a tool for aircraft handling qualities research, the model-predictive-control structured inverse simulation (MPC-IS) has potential applications in the aerospace field to guide the astronauts' operations and evaluate the handling qualities more effectively. Therefore, this paper establishes MPC-IS for manual-controlled rendezvous and docking (RVD) and proposes a novel artificial neural network inverse simulation system (ANN-IS) to further decrease the computational cost. The novel system was obtained by replacing the inverse model of MPC-IS with an artificial neural network. The optimal neural network was trained by the genetic Levenberg-Marquardt algorithm, and finally determined by the Levenberg-Marquardt algorithm. In order to validate MPC-IS and ANN-IS, manual-controlled RVD experiments were carried out on the simulator. The comparisons between simulation results and experimental data demonstrated the validity of the two systems and the high computational efficiency of ANN-IS.

  13. Identification and control of plasma vertical position using neural network in Damavand tokamak.

    PubMed

    Rasouli, H; Rasouli, C; Koohi, A

    2013-02-01

    In this work, a nonlinear model is introduced to determine the vertical position of the plasma column in the Damavand tokamak. Using this model as a simulator, a nonlinear neural network controller has been designed. In the first stage, the electronic drive and sensory circuits of the Damavand tokamak are modified. These circuits can control the vertical position of the plasma column inside the vacuum vessel. Since the vertical position of the plasma is an unstable parameter, a direct closed-loop system identification algorithm is performed. In the second stage, a nonlinear model for the plasma vertical position is identified, based on a multilayer perceptron (MLP) neural network (NN) structure. Estimation of the simulator parameters has been performed by the back-propagation error algorithm using the Levenberg-Marquardt gradient descent optimization technique. The model is verified through simulation of the whole closed-loop system using both the simulator and the actual plant under similar conditions. In the final stage, an MLP neural network controller is designed for the simulator model. In the last step, online training is performed to tune the controller parameters. Simulation results justify the use of the NN controller for the actual plant.

  14. Automated Analysis of Planktic Foraminifers Part III: Neural Network Classification

    NASA Astrophysics Data System (ADS)

    Schiebel, R.; Bollmann, J.; Quinn, P.; Vela, M.; Schmidt, D. N.; Thierstein, H. R.

    2003-04-01

    The abundance and assemblage composition of microplankton, together with the chemical and stable isotopic composition of their shells, are among the most successful methods in paleoceanography and paleoclimatology. However, the manual collection of statistically significant numbers of unbiased, reproducible data is time consuming. Consequently, automated microfossil analysis and species recognition has been a long-standing goal in micropaleontology. We have developed a Windows based software package COGNIS for the segmentation, preprocessing, and classification of automatically acquired microfossil images (see Part II, Bollmann et al., this volume), using operator designed neural network structures. With a five-layered convolutional neural network we obtain an average recognition rate of 75 % (max. 88 %) for 6 taxa (N. dutertrei, N. pachyderma dextral, N. pachyderma sinistral, G. inflata, G. menardii/tumida, O. universa), represented by 50 images each for 20 classes (separation of spiral and umbilical views, and of sinistral and dextral forms). Our investigation indicates that neural networks hold great potential for the automated classification of planktic foraminifers and offer new perspectives in micropaleontology, paleoceanography, and paleoclimatology (see Part I, Schmidt et al., this volume).

  15. Diagonal recurrent neural network based adaptive control of nonlinear dynamical systems using Lyapunov stability criterion.

    PubMed

    Kumar, Rajesh; Srivastava, Smriti; Gupta, J R P

    2017-03-01

    In this paper, adaptive control of nonlinear dynamical systems using a diagonal recurrent neural network (DRNN) is proposed. The structure of the DRNN is a modification of the fully connected recurrent neural network (FCRNN). The presence of self-recurrent neurons in the hidden layer of the DRNN gives it the ability to capture the dynamic behaviour of the nonlinear plant under consideration (to be controlled). To ensure stability, update rules are developed using the Lyapunov stability criterion. These rules are then used for adjusting the various parameters of the DRNN. The responses of the plants obtained with the DRNN are compared with those obtained when a multi-layer feed-forward neural network (MLFFNN) is used as the controller. In example 4, the FCRNN is also investigated and compared with the DRNN and MLFFNN. Robustness of the proposed control scheme is also tested against parameter variations and disturbance signals. The proposed controller is applied to four simulation examples, including a one-link robotic manipulator and an inverted pendulum. The results obtained show the superiority of the DRNN over the MLFFNN as a controller. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Emergent spatial synaptic structure from diffusive plasticity.

    PubMed

    Sweeney, Yann; Clopath, Claudia

    2017-04-01

    Some neurotransmitters can diffuse freely across cell membranes, influencing neighbouring neurons regardless of their synaptic coupling. This provides a means of neural communication, alternative to synaptic transmission, which can influence the way in which neural networks process information. Here, we ask whether diffusive neurotransmission can also influence the structure of synaptic connectivity in a network undergoing plasticity. We propose a form of Hebbian synaptic plasticity which is mediated by a diffusive neurotransmitter. Whenever a synapse is modified at an individual neuron through our proposed mechanism, similar but smaller modifications occur in synapses connecting to neighbouring neurons. The effects of this diffusive plasticity are explored in networks of rate-based neurons. This leads to the emergence of spatial structure in the synaptic connectivity of the network. We show that this spatial structure can coexist with other forms of structure in the synaptic connectivity, such as with groups of strongly interconnected neurons that form in response to correlated external drive. Finally, we explore diffusive plasticity in a simple feedforward network model of receptive field development. We show that, as widely observed across sensory cortex, the preferred stimulus identity of neurons in our network become spatially correlated due to diffusion. Our proposed mechanism of diffusive plasticity provides an efficient mechanism for generating these spatial correlations in stimulus preference which can flexibly interact with other forms of synaptic organisation. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
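
    A minimal numerical sketch of the described mechanism follows: a Hebbian weight update at one neuron is accompanied by smaller updates at spatially neighbouring neurons, weighted by a Gaussian diffusion kernel over a 1-D ring of postsynaptic neurons. The network size, kernel width and learning rate are assumptions, and the rate-based activities are random placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 50                                   # postsynaptic neurons on a 1-D ring
    positions = np.arange(N)
    W = rng.normal(0, 0.1, size=(N, N))      # W[i, j]: input j -> neuron i

    # Gaussian diffusion kernel over distance between postsynaptic neurons.
    d = np.abs(positions[:, None] - positions[None, :])
    d = np.minimum(d, N - d)                 # ring distance
    kernel = np.exp(-d**2 / (2 * 3.0**2))    # sigma = 3 neurons (assumed)

    def diffusive_hebbian_step(W, pre, post, eta=0.01):
        """Hebbian update dW[i, j] = eta * post[i] * pre[j]; each neuron's update
        is then shared, scaled down by distance, with its neighbours."""
        local = eta * np.outer(post, pre)    # standard local Hebbian term
        return W + kernel @ local            # row i also receives neighbours' updates

    pre = rng.random(N)                      # presynaptic rates (placeholder)
    post = rng.random(N)                     # postsynaptic rates (placeholder)
    W = diffusive_hebbian_step(W, pre, post)
    print(W.shape)
    ```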

  17. Experiments on neural network architectures for fuzzy logic

    NASA Technical Reports Server (NTRS)

    Keller, James M.

    1991-01-01

    The use of fuzzy logic to model and manage uncertainty in a rule-based system places high computational demands on an inference engine. In an earlier paper, the authors introduced a trainable neural network structure for fuzzy logic. These networks can learn and extrapolate complex relationships between possibility distributions for the antecedents and consequents in the rules. Here, the power of these networks is further explored. The insensitivity of the output to noisy input distributions (which are likely if the clauses are generated from real data) is demonstrated as well as the ability of the networks to internalize multiple conjunctive clause and disjunctive clause rules. Since different rules with the same variables can be encoded in a single network, this approach to fuzzy logic inference provides a natural mechanism for rule conflict resolution.

  18. Occipital cortical thickness in very low birth weight born adolescents predicts altered neural specialization of visual semantic category related neural networks.

    PubMed

    Klaver, Peter; Latal, Beatrice; Martin, Ernst

    2015-01-01

    Very low birth weight (VLBW) premature born infants have a high risk to develop visual perceptual and learning deficits as well as widespread functional and structural brain abnormalities during infancy and childhood. Whether and how prematurity alters neural specialization within visual neural networks is still unknown. We used functional and structural brain imaging to examine the visual semantic system of VLBW born (<1250 g, gestational age 25-32 weeks) adolescents (13-15 years, n = 11, 3 males) and matched term born control participants (13-15 years, n = 11, 3 males). Neurocognitive assessment revealed no group differences except for lower scores on an adaptive visuomotor integration test. All adolescents were scanned while viewing pictures of animals and tools and scrambled versions of these pictures. Both groups demonstrated animal and tool category related neural networks. Term born adolescents showed tool category related neural activity, i.e. tool pictures elicited more activity than animal pictures, in temporal and parietal brain areas. Animal category related activity was found in the occipital, temporal and frontal cortex. VLBW born adolescents showed reduced tool category related activity in the dorsal visual stream compared with controls, specifically the left anterior intraparietal sulcus, and enhanced animal category related activity in the left middle occipital gyrus and right lingual gyrus. Lower birth weight of VLBW adolescents correlated with larger thickness of the pericalcarine gyrus in the occipital cortex and smaller surface area of the superior temporal gyrus in the lateral temporal cortex. Moreover, larger thickness of the pericalcarine gyrus and smaller surface area of the superior temporal gyrus correlated with reduced tool category related activity in the parietal cortex. Together, our data suggest that very low birth weight predicts alterations of higher order visual semantic networks, particularly in the dorsal stream. The differences in neural specialization may be associated with aberrant cortical development of areas in the visual system that develop early in childhood. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator

    PubMed Central

    Wang, Runchun M.; Thakur, Chetan S.; van Schaik, André

    2018-01-01

    This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks. PMID:29692702

  20. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator.

    PubMed

    Wang, Runchun M; Thakur, Chetan S; van Schaik, André

    2018-01-01

    This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks.
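
    For reference, the per-neuron update that such a simulator time-steps is the leaky integrate-and-fire rule sketched below in vectorized form. The membrane parameters, input statistics and population size are generic textbook choices, not the values used in the FPGA design.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, dt = 10_000, 1e-3                 # neurons, time step (s)
    tau, v_rest, v_th, v_reset = 20e-3, -70e-3, -50e-3, -65e-3
    R = 20e6                             # membrane resistance (ohm), assumed

    v = np.full(N, v_rest)
    spike_count = np.zeros(N, dtype=int)

    for step in range(1000):             # 1 s of simulated time
        I = rng.normal(1.5e-9, 0.5e-9, N)            # noisy input current (A), placeholder
        v += dt / tau * (-(v - v_rest) + R * I)      # leaky integration
        fired = v >= v_th
        v[fired] = v_reset                           # reset after a spike
        spike_count += fired

    print("mean firing rate:", spike_count.mean(), "Hz")
    ```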

  1. A novel recurrent neural network with finite-time convergence for linear programming.

    PubMed

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
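
    To illustrate the general idea of a gradient-based recurrent network for linear programming (not the finite-time-convergent dynamics of this letter, which differ), the sketch below integrates the gradient flow of a penalized energy for a tiny LP; the penalty weight, step size and problem data are assumptions.

    ```python
    import numpy as np

    # Minimize c^T x  subject to  A x = b,  x >= 0, via the gradient flow of a
    # penalized energy  E(x) = c^T x + (rho/2)(||A x - b||^2 + ||min(x, 0)||^2).
    c = np.array([1.0, 2.0, 0.0])
    A = np.array([[1.0, 1.0, 1.0]])
    b = np.array([1.0])
    rho, dt = 50.0, 1e-3

    x = np.zeros(3)
    for _ in range(20_000):
        grad = c + rho * (A.T @ (A @ x - b)) + rho * np.minimum(x, 0.0)
        x -= dt * grad                   # Euler discretization of dx/dt = -grad E(x)

    print("approximate solution:", np.round(x, 3))   # ~[0, 0, 1] up to a small penalty bias
    ```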

  2. Exploring Neural Network Models with Hierarchical Memories and Their Use in Modeling Biological Systems

    NASA Astrophysics Data System (ADS)

    Pusuluri, Sai Teja

    Energy landscapes are often used as metaphors for phenomena in biology, social sciences and finance. Different methods have been implemented in the past for the construction of energy landscapes. Neural network models based on spin glass physics provide an excellent mathematical framework for the construction of energy landscapes. This framework uses a minimal number of parameters and constructs the landscape using data from the actual phenomena. In the past neural network models were used to mimic the storage and retrieval process of memories (patterns) in the brain. With advances in the field now, these models are being used in machine learning, deep learning and modeling of complex phenomena. Most of the past literature focuses on increasing the storage capacity and stability of stored patterns in the network but does not study these models from a modeling perspective or an energy landscape perspective. This dissertation focuses on neural network models both from a modeling perspective and from an energy landscape perspective. I firstly show how the cellular interconversion phenomenon can be modeled as a transition between attractor states on an epigenetic landscape constructed using neural network models. The model allows the identification of a reaction coordinate of cellular interconversion by analyzing experimental and simulation time course data. Monte Carlo simulations of the model show that the initial phase of cellular interconversion is a Poisson process and the later phase of cellular interconversion is a deterministic process. Secondly, I explore the static features of landscapes generated using neural network models, such as sizes of basins of attraction and densities of metastable states. The simulation results show that the static landscape features are strongly dependent on the correlation strength and correlation structure between patterns. Using different hierarchical structures of the correlation between patterns affects the landscape features. These results show how the static landscape features can be controlled by adjusting the correlations between patterns. Finally, I explore the dynamical features of landscapes generated using neural network models such as the stability of minima and the transition rates between minima. The results from this project show that the stability depends on the correlations between patterns. It is also found that the transition rates between minima strongly depend on the type of bias applied and the correlation between patterns. The results from this part of the dissertation can be useful in engineering an energy landscape without even having the complete information about the associated minima of the landscape.
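
    The kind of attractor network underlying such landscapes can be illustrated with a small Hopfield-type model: patterns stored with a Hebbian rule, an explicit energy function, and asynchronous relaxation into a basin of attraction. The network size, number of patterns and noise level below are placeholders chosen only for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 200, 5
    patterns = rng.choice([-1, 1], size=(P, N))        # stored states (e.g. cell types)

    # Hebbian coupling matrix (zero self-coupling).
    J = (patterns.T @ patterns) / N
    np.fill_diagonal(J, 0.0)

    def energy(s):
        return -0.5 * s @ J @ s                         # Hopfield energy

    def relax(s, sweeps=20):
        """Asynchronous zero-temperature dynamics: descend to a local minimum."""
        s = s.copy()
        for _ in range(sweeps):
            for i in rng.permutation(N):
                s[i] = 1 if J[i] @ s >= 0 else -1
        return s

    # Start from a noisy version of pattern 0 and fall into its basin of attraction.
    noisy = patterns[0] * np.where(rng.random(N) < 0.2, -1, 1)
    final = relax(noisy)
    print("energy:", energy(final),
          "overlap with pattern 0:", (final @ patterns[0]) / N)
    ```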

  3. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim to improve the polynomial derivative term series ability to approximate complicated periodic functions, as simple low order polynomials are not able to fully make up for the complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Functional Stem Cell Integration into Neural Networks Assessed by Organotypic Slice Cultures.

    PubMed

    Forsberg, David; Thonabulsombat, Charoensri; Jäderstad, Johan; Jäderstad, Linda Maria; Olivius, Petri; Herlenius, Eric

    2017-08-14

    Re-formation or preservation of functional, electrically active neural networks has been proffered as one of the goals of stem cell-mediated neural therapeutics. A primary issue for a cell therapy approach is the formation of functional contacts between the implanted cells and the host tissue. Therefore, it is of fundamental interest to establish protocols that allow us to delineate a detailed time course of grafted stem cell survival, migration, differentiation, integration, and functional interaction with the host. One option for in vitro studies is to examine the integration of exogenous stem cells into an existing active neural network in ex vivo organotypic cultures. Organotypic cultures leave the structural integrity essentially intact while still allowing the microenvironment to be carefully controlled. This allows detailed studies over time of cellular responses and cell-cell interactions, which are not readily performed in vivo. This unit describes procedures for using organotypic slice cultures as ex vivo model systems for studying neural stem cell and embryonic stem cell engraftment and communication with CNS host tissue. © 2017 by John Wiley & Sons, Inc. Copyright © 2017 John Wiley & Sons, Inc.

  5. Neural connections foster social connections: a diffusion-weighted imaging study of social networks

    PubMed Central

    Hampton, William H.; Unger, Ashley; Von Der Heide, Rebecca J.

    2016-01-01

    Although we know the transition from childhood to adulthood is marked by important social and neural development, little is known about how social network size might affect neurocognitive development or vice versa. Neuroimaging research has identified several brain regions, such as the amygdala, as key to this affiliative behavior. However, white matter connectivity among these regions, and its behavioral correlates, remain unclear. Here we tested two hypotheses: that an amygdalocentric structural white matter network governs social affiliative behavior and that this network changes during adolescence and young adulthood. We measured social network size behaviorally, and white matter microstructure using probabilistic diffusion tensor imaging in a sample of neurologically normal adolescents and young adults. Our results suggest amygdala white matter microstructure is key to understanding individual differences in social network size, with connectivity to other social brain regions such as the orbitofrontal cortex and anterior temporal lobe predicting much variation. In addition, participant age correlated with both network size and white matter variation in this network. These findings suggest the transition to adulthood may constitute a critical period for the optimization of structural brain networks underlying affiliative behavior. PMID:26755769

  6. A hybrid linear/nonlinear training algorithm for feedforward neural networks.

    PubMed

    McLoone, S; Brown, M D; Irwin, G; Lightbody, A

    1998-01-01

    This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of nonlinear weights with singular value decomposition (SVD) computation of linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid submodels. Simulation results are presented demonstrating the superiority of the new hybrid training scheme compared to second-order gradient methods. It is particularly effective for the LMN architecture where the linear to nonlinear parameter ratio is large.
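
    The hybrid idea can be sketched for an RBF network as follows: the nonlinear parameters (centres and widths) are adjusted by gradient descent, while the linear output weights are recomputed at each step by an SVD-based least-squares solve (here via the pseudo-inverse). The toy data, number of units, learning rate and parameterization are assumptions, not the paper's algorithm in detail.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(x[:, 0]) + 0.05 * rng.normal(size=200)   # toy regression target

    M = 8                                               # hidden RBF units
    centres = rng.uniform(-3, 3, size=M)
    log_width = np.zeros(M)                             # widths optimized in log-space

    def design(x, centres, log_width):
        return np.exp(-((x - centres) ** 2) / (2 * np.exp(log_width) ** 2))

    lr = 0.05
    for step in range(500):
        Phi = design(x, centres, log_width)             # (200, M) hidden outputs
        w = np.linalg.pinv(Phi) @ y                     # linear weights by SVD-based LS
        err = Phi @ w - y
        # Gradients of 0.5*||err||^2 w.r.t. the nonlinear parameters.
        diff = x - centres
        s2 = np.exp(log_width) ** 2
        dPhi_dc = Phi * diff / s2
        dPhi_dlw = Phi * diff ** 2 / s2
        centres -= lr * (err[:, None] * dPhi_dc * w).sum(0) / len(x)
        log_width -= lr * (err[:, None] * dPhi_dlw * w).sum(0) / len(x)

    print("training MSE:", float((err ** 2).mean()))
    ```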

  7. Neural network modelling of the influence of channelopathies on reflex visual attention.

    PubMed

    Gravier, Alexandre; Quek, Chai; Duch, Włodzisław; Wahab, Abdul; Gravier-Rymaszewska, Joanna

    2016-02-01

    This paper introduces a model of Emergent Visual Attention in presence of calcium channelopathy (EVAC). By modelling channelopathy, EVAC constitutes an effort towards identifying the possible causes of autism. The network structure embodies the dual pathways model of cortical processing of visual input, with reflex attention as an emergent property of neural interactions. EVAC extends existing work by introducing attention shift in a larger-scale network and applying a phenomenological model of channelopathy. In presence of a distractor, the channelopathic network's rate of failure to shift attention is lower than the control network's, but overall, the control network exhibits a lower classification error rate. The simulation results also show differences in task-relative reaction times between control and channelopathic networks. The attention shift timings inferred from the model are consistent with studies of attention shift in autistic children.

  8. Detection of network attacks based on adaptive resonance theory

    NASA Astrophysics Data System (ADS)

    Bukhanov, D. G.; Polyakov, V. M.

    2018-05-01

    The paper considers an approach to intrusion detection systems based on a neural network of adaptive resonance theory. It suggests the structure of an intrusion detection system consisting of two types of program modules. The first module manages connections of user applications, preventing undesirable ones. The second analyzes the incoming network traffic parameters to check for potential network attacks. After attack detection, it notifies the required stations using a secure transmission channel. The paper describes an experiment on the detection and recognition of network attacks using a test data selection, and compares the obtained results with similar experiments carried out by other authors. The findings confirm the suitability of applying neural networks of adaptive resonance theory to analyze network traffic within an intrusion detection system.

  9. Automatic delineation and 3D visualization of the human ventricular system using probabilistic neural networks

    NASA Astrophysics Data System (ADS)

    Hatfield, Fraser N.; Dehmeshki, Jamshid

    1998-09-01

    Neurosurgery is an extremely specialized area of medical practice, requiring many years of training. It has been suggested that virtual reality models of the complex structures within the brain may aid in the training of neurosurgeons as well as playing an important role in the preparation for surgery. This paper focuses on the application of a probabilistic neural network to the automatic segmentation of the ventricles from magnetic resonance images of the brain, and their three dimensional visualization.
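
    A probabilistic neural network is essentially a Parzen-window classifier; the sketch below classifies per-voxel feature vectors into ventricle vs. background by comparing Gaussian kernel density estimates for each class. The two-class setup, the features and the smoothing parameter are assumptions used only to show the decision rule.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Training exemplars: rows are feature vectors, grouped by class (placeholders).
    ventricle = rng.normal([0.2, 0.1], 0.05, size=(100, 2))
    background = rng.normal([0.6, 0.5], 0.10, size=(100, 2))
    classes = [ventricle, background]

    def pnn_classify(x, classes, sigma=0.1):
        """PNN decision: Gaussian Parzen density estimate per class, pick the max."""
        scores = []
        for exemplars in classes:
            d2 = ((exemplars - x) ** 2).sum(axis=1)
            scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean())
        return int(np.argmax(scores))

    print(pnn_classify(np.array([0.25, 0.12]), classes))   # -> 0 (ventricle)
    print(pnn_classify(np.array([0.55, 0.45]), classes))   # -> 1 (background)
    ```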

  10. Classification and Prediction of RF Coupling inside A-320 and A-319 Airplanes using Feed Forward Neural Networks

    NASA Technical Reports Server (NTRS)

    Jafri, Madiha; Ely, Jay; Vahala, Linda

    2006-01-01

    Neural Network Modeling is introduced in this paper to classify and predict Interference Path Loss measurements on Airbus 319 and 320 airplanes. Interference patterns inside the aircraft are classified and predicted based on the locations of the doors, windows, aircraft structures and the communication/navigation system-of-concern. Modeled results are compared with measured data and a plan is proposed to enhance the modeling for better prediction of electromagnetic coupling problems inside aircraft.

  11. ANNS An X Window Based Version of the AFIT Neural Network Simulator

    DTIC Science & Technology

    1993-06-01

    programmer or user can view the dynamic behavior of an algorithm and its changes of learning state while the neural network paradigms or algorithms...an object as "something you can do things to. An object has state, behavior, and identity; the structure and behavior of similar objects are defined in...their common class. The terms instance and object are interchangeable" [5:516]. The behavior of an object is "characterized by the actions that it

  12. A light intensity monitoring method based on fiber Bragg grating sensing technology and BP neural network

    NASA Astrophysics Data System (ADS)

    Li, Lu-Ming; Zhu, Qian; Zhang, Zhi-Guo; Cai, Zhi-Min; Liao, Zhi-Jun; Hu, Zhen-Yan

    2017-04-01

    In this paper, a light intensity monitoring method based on FBG is proposed. The method establishes a light intensity monitoring model with a cantilever beam structure and a BP neural network algorithm, based on fiber grating sensing technology. The accuracy of the model meets the requirements of engineering projects, and it can monitor light intensity in real time. The experimental results show that the method has good stability and high sensitivity.

  13. Incidence and anatomy of gaze-evoked nystagmus in patients with cerebellar lesions.

    PubMed

    Baier, Bernhard; Dieterich, Marianne

    2011-01-25

    Disorders of gaze-holding--organized by a neural network located in the brainstem or the cerebellum--may lead to nystagmus. Based on previous animal studies it was concluded that one key player of the cerebellar part of this gaze-holding neural network is the flocculus. Up to now, in humans there are no systematic studies in patients with cerebellar lesions examining one of the most common forms of nystagmus: gaze-evoked nystagmus (GEN). The aim of our present study was to clarify which cerebellar structures are involved in the generation of GEN. Twenty-one patients with acute unilateral cerebellar stroke were analyzed by means of modern MRI-based voxel-wise lesion-behavior mapping. Our data indicate that cerebellar structures such as the vermal pyramid, the uvula, and the tonsil, but also parts of the biventer lobule and the inferior semilunar lobule, were affected in horizontal GEN. It seems that these structures are part of a gaze-holding neural integrator control system. Furthermore, GEN might present a diagnostic sign pointing toward ipsilesionally located lesions of midline and lower cerebellar structures.

  14. Trade-off between Multiple Constraints Enables Simultaneous Formation of Modules and Hubs in Neural Systems

    PubMed Central

    Chen, Yuhan; Wang, Shengjun; Hilgetag, Claus C.; Zhou, Changsong

    2013-01-01

    The formation of the complex network architecture of neural systems is subject to multiple structural and functional constraints. Two obvious but apparently contradictory constraints are low wiring cost and high processing efficiency, characterized by short overall wiring length and a small average number of processing steps, respectively. Growing evidence shows that neural networks result from a trade-off between the physical cost and the functional value of the topology. However, the relationship between these competing constraints and complex topology is not well understood quantitatively. We explored this relationship systematically by reconstructing two known neural networks, Macaque cortical connectivity and C. elegans neuronal connections, from combinatorial optimization of wiring cost and processing efficiency constraints, using a control parameter, and comparing the reconstructed networks to the real networks. We found that in both neural systems, the reconstructed networks derived from the two constraints can reveal some important relations between the spatial layout of nodes and the topological connectivity, and match several properties of the real networks. The reconstructed and real networks had a similar modular organization over a broad range of the control parameter, resulting from spatial clustering of network nodes. Hubs emerged due to the competition between the two constraints, and their positions were close to, and partly coincided with, the real hubs over a range of parameter values. The degree of nodes was correlated with the density of nodes in their spatial neighborhood in both reconstructed and real networks. Generally, the rebuilt network matched a significant portion of real links, especially short-distance ones. These findings provide clear evidence to support the hypothesis of a trade-off between multiple constraints on brain networks. The two constraints of wiring cost and processing efficiency, however, cannot explain all salient features in the real networks. The discrepancy suggests that there are further relevant factors that are not yet captured here. PMID:23505352

  15. Melanoma segmentation based on deep learning.

    PubMed

    Zhang, Xiaoqing

    2017-12-01

    Malignant melanoma is one of the most deadly forms of skin cancer, and one of the world's fastest-growing cancers. Early diagnosis and treatment are critical. In this study, a neural network structure is utilized to construct a broad and accurate basis for the diagnosis of skin cancer, thereby reducing screening errors. The technique improves the identification of normally indistinguishable lesions (such as pigment spots) versus clinically unknown lesions, and ultimately improves diagnostic accuracy. In the field of medical imaging, using neural networks for image segmentation is still relatively rare. Existing traditional machine-learning neural network algorithms cannot completely solve the problem of information loss, nor precisely delineate the boundary area. We use an improved neural network framework, described herein, to achieve effective feature learning and satisfactory segmentation of melanoma images. The architecture of the network includes multiple convolution layers, dropout layers, softmax layers, multiple filters, and activation functions. The number of training samples is increased by rotating the training set. A non-linear activation function (such as ReLU or ELU) is employed to alleviate the problem of vanishing gradients, and RMSprop/Adam optimizers are used to minimize the loss. A batch normalization layer is added between the convolution layer and the activation layer to counter vanishing and exploding gradients. Experiments, described herein, show that our improved neural network architecture achieves higher accuracy for segmentation of melanoma images compared with existing approaches.
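
    The sketch below assembles the listed ingredients in one place: convolution blocks with batch normalization between the convolution and an ELU activation, dropout, a 1x1 classification head, rotation-based augmentation, and an Adam optimizer. The layer sizes and training setup are assumptions, not the paper's architecture.

    ```python
    import torch
    import torch.nn as nn

    def block(in_ch, out_ch):
        # BN sits between the convolution and the activation (as described),
        # ELU is the nonlinearity, dropout limits overfitting.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ELU(inplace=True),
            nn.Dropout2d(0.2),
        )

    class LesionSegNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(block(3, 32), block(32, 64), block(64, 32))
            self.head = nn.Conv2d(32, 2, kernel_size=1)   # lesion vs. background

        def forward(self, x):
            return self.head(self.features(x))

    model = LesionSegNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    img = torch.randn(4, 3, 96, 96)                  # placeholder dermoscopy images
    mask = torch.randint(0, 2, (4, 96, 96))          # placeholder lesion masks
    # Simple rotation augmentation: add 90-degree rotated copies of the batch.
    img_aug = torch.cat([img, torch.rot90(img, 1, dims=(2, 3))])
    mask_aug = torch.cat([mask, torch.rot90(mask, 1, dims=(1, 2))])

    loss = loss_fn(model(img_aug), mask_aug)
    loss.backward()
    opt.step()
    print(float(loss))
    ```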

  16. Inversion of Density Interfaces Using the Pseudo-Backpropagation Neural Network Method

    NASA Astrophysics Data System (ADS)

    Chen, Xiaohong; Du, Yukun; Liu, Zhan; Zhao, Wenju; Chen, Xiaocheng

    2018-05-01

    This paper presents a new pseudo-backpropagation (BP) neural network method that can invert multiple density interfaces at one time. The new method is based on conventional forward and inverse modeling theories in addition to the conventional pseudo-BP neural network algorithm. A 3D inversion model for gravity anomalies of multi-density interfaces using the pseudo-BP neural network method is constructed after analyzing the structure and function of the artificial neural network. The corresponding iterative inverse formula of the space field is presented at the same time. Based on trials with gravity anomaly and density noise, the influence of the two kinds of noise on the inverse result is discussed, and the noise level required for the stability of the algorithm is analyzed. The effects of the initial model on reducing the ambiguity of the result and improving the precision of the inversion are discussed. The correctness and validity of the method were verified with a 3D model of three interfaces. 3D inversion was performed on the observed gravity anomaly data of the Okinawa trough using the program presented herein. The Tertiary basement and Moho depth were obtained from the inversion results, which also testifies to the adaptability of the method. This study represents a useful attempt at the inversion of gravity density interfaces.

  17. Structure, Function, and Propagation of Information across Living Two, Four, and Eight Node Degree Topologies.

    PubMed

    Alagapan, Sankaraleengam; Franca, Eric; Pan, Liangbin; Leondopulos, Stathis; Wheeler, Bruce C; DeMarse, Thomas B

    2016-01-01

    In this study, we created four network topologies composed of living cortical neurons and compared the resultant structural-functional dynamics, including the nature and quality of information transmission. Each living network was created using microstamping of adhesion-promoting molecules, and each was "designed" with a different level of convergence embedded within its structure. Networks were cultured over a grid of electrodes that permitted detailed measurements of neural activity at each node in the network. Of the topologies we tested, the "Random" networks, in which neurons connect based on their own intrinsic properties, transmitted information embedded within their spike trains with higher fidelity than any other topology. Within the patterned topologies in which we explicitly manipulated structure, the effect of convergence on fidelity was dependent on both topology and time-scale (rate vs. temporal coding). A more detailed examination using tools from network analysis revealed that these changes in fidelity were also associated with a number of other structural properties, including a node's degree, degree-degree correlations, path length, and clustering coefficients. Whereas information transmission was apparent among nodes with few connections, the greatest transmission fidelity was achieved among the few nodes possessing the highest number of connections (high-degree nodes or putative hubs). These results provide a unique view into the relationship between structure and its effect on transmission fidelity, at least within these small neural populations with defined network topology. They also highlight the potential role of tools such as microstamp printing and microelectrode array recordings to construct and record from arbitrary network topologies, providing a new direction in which to advance the study of structure-function relationships.

  18. Neural network pattern recognition of thermal-signature spectra for chemical defense

    NASA Astrophysics Data System (ADS)

    Carrieri, Arthur H.; Lim, Pascal I.

    1995-05-01

    We treat infrared patterns of absorption or emission by nerve and blister agent compounds (and simulants of this chemical group) as features for the training of neural networks to detect the compounds' liquid layers on the ground or their vapor plumes during evaporation by external heating. Training of a four-layer network architecture is composed of a backward-error-propagation algorithm and a gradient-descent paradigm. We conduct testing by feed-forwarding preprocessed spectra through the network in a scaled format consistent with the structure of the training-data-set representation. The best-performance weight matrix (spectral filter) evolved from final network training and testing with software simulation trials is electronically transferred to a set of eight artificial intelligence integrated circuits (ICs') in specific modular form (splitting of weight matrices). This form makes full use of all input-output IC nodes. This neural network computer serves an important real-time detection function when it is integrated into pre-and postprocessing data-handling units of a tactical prototype thermoluminescence sensor now under development at the Edgewood Research, Development, and Engineering Center.

  19. Linear matrix inequality approach to exponential synchronization of a class of chaotic neural networks with time-varying delays

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Cui, Bao-Tong

    2007-07-01

    In this paper, a synchronization scheme for a class of chaotic neural networks with time-varying delays is presented. This class of chaotic neural networks covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks, and bidirectional associative memory networks. The obtained criteria are expressed in terms of linear matrix inequalities, thus they can be efficiently verified. A comparison between our results and the previous results shows that our results are less restrictive.
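
    For context, a typical master-slave formulation behind this class of results is sketched below for a delayed Hopfield-type network with state x(t) driving a slave y(t) through a linear feedback controller; the matrices, delay and controller form are generic assumptions, not the specific system or criteria derived in this paper.

    ```latex
    \begin{aligned}
    \dot{x}(t) &= -C\,x(t) + A\,f(x(t)) + B\,f(x(t-\tau(t))) + J,\\
    \dot{y}(t) &= -C\,y(t) + A\,f(y(t)) + B\,f(y(t-\tau(t))) + J + u(t),\\
    u(t) &= -K\,e(t), \qquad e(t) = y(t) - x(t),\\
    \dot{e}(t) &= -(C + K)\,e(t) + A\,g(e(t)) + B\,g(e(t-\tau(t))),
    \qquad g(e(t)) := f(y(t)) - f(x(t)).
    \end{aligned}
    ```

    Exponential synchronization then amounts to showing that e(t) decays exponentially, which is typically established with a Lyapunov-Krasovskii functional whose negativity condition is written as linear matrix inequalities in the unknown matrices (e.g., P, Q and the gain K), so that feasibility can be checked with standard LMI solvers.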

  20. The Effects of Spaceflight and Head Down Tilt Bed Rest on Neurocognitive Performance: Extent, Longevity, and Neural Bases

    NASA Technical Reports Server (NTRS)

    Seidler, Rachael D.; Bloomberg, Jacob; Wood, Scott; Mulavara, Ajit; Kofman, Igor; De Dios, Yiri; Gadd, Nicole; Stepanyan, Vahagn

    2017-01-01

    Spaceflight effects on gait, balance, and manual motor control have been well studied; there is also some evidence for cognitive deficits. Rodent cortical motor and sensory systems show neural structural alterations with spaceflight. Specific Aims: Aim 1 - Identify changes in brain structure, function, and network integrity as a function of head down tilt bed rest and spaceflight, and characterize their time course. Aim 2 - Specify relationships between structural and functional brain changes and performance, and characterize their time course.

  1. A three-dimensional neural spheroid model for capillary-like network formation.

    PubMed

    Boutin, Molly E; Kramer, Liana L; Livi, Liane L; Brown, Tyler; Moore, Christopher; Hoffman-Kim, Diane

    2018-04-01

    In vitro three-dimensional neural spheroid models have an in vivo-like cell density, and have the potential to reduce animal usage and increase experimental throughput. The aim of this study was to establish a spheroid model to study the formation of capillary-like networks in a three-dimensional environment that incorporates both neuronal and glial cell types, and does not require exogenous vasculogenic growth factors. We created self-assembled, scaffold-free cellular spheroids using primary-derived postnatal rodent cortex as a cell source. The interactions between relevant neural cell types, basement membrane proteins, and endothelial cells were characterized by immunohistochemistry. Transmission electron microscopy was used to determine if endothelial network structures had lumens. Endothelial cells within cortical spheroids assembled into capillary-like networks with lumens. Networks were surrounded by basement membrane proteins, including laminin, fibronectin and collagen IV, as well as key neurovascular cell types. Existing in vitro models of the cortical neurovascular environment study monolayers of endothelial cells, either on transwell inserts or coating cellular spheroids. These models are not well suited to study vasculogenesis, a process hallmarked by endothelial cell cord formation and subsequent lumenization. The neural spheroid is a new model to study the formation of endothelial cell capillary-like structures in vitro within a high cell density three-dimensional environment that contains both neuronal and glial populations. This model can be applied to investigate vascular assembly in healthy or disease states, such as stroke, traumatic brain injury, or neurodegenerative disorders. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Electronic Neural Networks

    NASA Technical Reports Server (NTRS)

    Thakoor, Anil

    1990-01-01

    Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.

  3. A Pruning Neural Network Model in Credit Classification Analysis

    PubMed Central

    Tang, Yajiao; Ji, Junkai; Dai, Hongwei; Yu, Yang; Todo, Yuki

    2018-01-01

    Nowadays, credit classification models are widely applied because they can help financial decision-makers to handle credit classification issues. Among them, artificial neural networks (ANNs) have been widely accepted as convincing methods in the credit industry. In this paper, we propose a pruning neural network (PNN) and apply it to solve the credit classification problem using the well-known Australian and Japanese credit datasets. The model is inspired by the synaptic nonlinearity of a dendritic tree in a biological neural model, and it is trained by an error back-propagation algorithm. The model is capable of realizing a neuronal pruning function by removing superfluous synapses and useless dendrites, forming a tidy dendritic morphology at the end of learning. Furthermore, we utilize logic circuits (LCs) to simulate the dendritic structures successfully, which allows the PNN to be implemented effectively in hardware. The statistical results of our experiments have verified that the PNN obtains superior performance in comparison with other classical algorithms in terms of accuracy and computational efficiency. PMID:29606961
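
    The pruning idea in its simplest form, removing (masking) synapses whose trained weights fall below a magnitude threshold to leave a sparser connectivity, is sketched below. The weights, layer sizes and threshold are placeholders, and the paper's PNN prunes during learning via its dendritic nonlinearity rather than by this plain magnitude rule.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A trained single hidden-layer network (weights are placeholders here).
    W1 = rng.normal(0, 1, size=(14, 20))     # 14 credit features -> 20 hidden units
    W2 = rng.normal(0, 1, size=(20, 1))      # hidden units -> credit decision

    def prune(W, threshold=0.5):
        """Remove (zero out) superfluous synapses with small magnitude."""
        mask = np.abs(W) >= threshold
        return W * mask, mask

    W1_pruned, mask1 = prune(W1)
    W2_pruned, mask2 = prune(W2)
    sparsity = 1 - (mask1.sum() + mask2.sum()) / (mask1.size + mask2.size)
    print(f"fraction of synapses removed: {sparsity:.2f}")
    ```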

  4. Multivariate Statistical Inference of Lightning Occurrence, and Using Lightning Observations

    NASA Technical Reports Server (NTRS)

    Boccippio, Dennis

    2004-01-01

    Two classes of multivariate statistical inference using TRMM Lightning Imaging Sensor, Precipitation Radar, and Microwave Imager observations are studied, using nonlinear classification neural networks as inferential tools. The very large and globally representative data sample provided by TRMM allows both training and validation (without overfitting) of neural networks with many degrees of freedom. In the first study, the flashing or non-flashing condition of storm complexes is diagnosed using radar, passive microwave and/or environmental observations as neural network inputs. The diagnostic skill of these simple lightning/no-lightning classifiers can be quite high over land (above 80% Probability of Detection; below 20% False Alarm Rate). In the second, passive microwave and lightning observations are used to diagnose radar reflectivity vertical structure. A priori diagnosis of hydrometeor vertical structure is highly important for improved rainfall retrieval from either orbital radars (e.g., the future Global Precipitation Mission "mothership") or radiometers (e.g., operational SSM/I and future Global Precipitation Mission passive microwave constellation platforms); we explore the incremental benefit to such diagnosis provided by lightning observations.

  5. CNNdel: Calling Structural Variations on Low Coverage Data Based on Convolutional Neural Networks

    PubMed Central

    2017-01-01

    Many structural variation (SV) detection methods have been proposed due to the popularization of next-generation sequencing (NGS). These SV calling methods use different SV-property-dependent features; however, they all suffer from poor accuracy when running on low coverage sequences. The union of results from these tools achieves fairly high sensitivity but still produces low accuracy on low coverage sequence data; that is, they produce many false positives. In this paper, we present CNNdel, an approach for calling deletions from paired-end reads. CNNdel gathers SV candidates reported by multiple tools and then extracts features from aligned BAM files at the positions of the candidates. With labeled feature-expressed candidates as a training set, CNNdel trains convolutional neural networks (CNNs) to distinguish true candidates from false ones. Results show that CNNdel works well with NGS reads from 26 low coverage genomes of the 1000 Genomes Project. The paper demonstrates that convolutional neural networks can automatically assign the priority of SV features and reduce the false positives efficaciously. PMID:28630866

  6. Mullite ceramic membranes for industrial oily wastewater treatment: experimental and neural network modeling.

    PubMed

    Shokrkar, H; Salahi, A; Kasiri, N; Mohammadi, T

    2011-01-01

    In this paper, experimental and modeling results for the separation of oil from industrial oily wastewaters (desalter unit effluent of the Seraje, Ghom gas wells, Iran) with mullite ceramic membranes are presented. Mullite microfiltration symmetric membranes were synthesized from kaolin clay and alpha-alumina powder. The results show that the mullite ceramic membrane has high total organic carbon and chemical oxygen demand rejections (94 and 89%, respectively), a low fouling resistance (30%) and a high final permeation flux (75 L/m2 h). Also, an artificial neural network, a predictive tool for tracking the inputs and outputs of a non-linear problem, is used to model the permeation flux decline during microfiltration of oily wastewater. The aim was to predict the permeation flux as a function of feed temperature, trans-membrane pressure, cross-flow velocity, oil concentration and filtration time, using a feed-forward neural network. Finally, the numbers of hidden layers and of nodes in each layer giving minimum error were reported, leading to a 4-15 structure which showed good agreement with the experimental measurements, with an average error of less than 2%.
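
    A minimal sketch of the kind of feed-forward flux model described above, assuming NumPy and scikit-learn; the five operating-condition inputs, the toy flux-decline relation, and the 15-node hidden layer are placeholders, not the paper's measurements or final topology.

        # Toy feed-forward regression of permeation flux from operating conditions.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        # columns: temperature, trans-membrane pressure, cross-flow velocity, oil conc., time (all scaled 0..1)
        X = rng.uniform(size=(300, 5))
        # invented flux-decline law, only to give the network something nonlinear to fit
        flux = 80.0 - 30.0 * X[:, 4] + 10.0 * X[:, 2] + rng.normal(scale=2.0, size=300)

        model = MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000, random_state=0)
        model.fit(X[:250], flux[:250])
        pred = model.predict(X[250:])
        print("mean absolute error on held-out points:", round(float(np.abs(pred - flux[250:]).mean()), 2))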

  7. Prediction of octanol-water partition coefficients of organic compounds by multiple linear regression, partial least squares, and artificial neural network.

    PubMed

    Golmohammadi, Hassan

    2009-11-30

    A quantitative structure-property relationship (QSPR) study was performed to develop models that relate the structures of 141 organic compounds to their octanol-water partition coefficients (log P(o/w)). A genetic algorithm was applied as a variable selection tool. Modeling of log P(o/w) of these compounds as a function of theoretically derived descriptors was established by multiple linear regression (MLR), partial least squares (PLS), and artificial neural network (ANN). The best selected descriptors that appear in the models are: atomic charge weighted partial positively charged surface area (PPSA-3), fractional atomic charge weighted partial positive surface area (FPSA-3), minimum atomic partial charge (Qmin), molecular volume (MV), total dipole moment of the molecule (mu), maximum antibonding contribution of a molecular orbital in the molecule (MAC), and maximum free valency of a C atom in the molecule (MFV). The results showed the ability of the developed artificial neural network to predict the partition coefficients of organic compounds and revealed the superiority of ANN over the MLR and PLS models. Copyright 2009 Wiley Periodicals, Inc.
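
    The three modeling approaches can be compared side by side in a short sketch, assuming NumPy and scikit-learn; the descriptor matrix and log P values below are synthetic placeholders, not the 141-compound data set or the GA-selected descriptors.

        # Compare MLR, PLS, and a small ANN on synthetic descriptor data.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.normal(size=(141, 7))                 # seven "descriptors" (placeholder values)
        logP = X @ rng.normal(size=7) + 0.3 * np.tanh(X[:, 0] * X[:, 3]) + rng.normal(scale=0.1, size=141)
        X_tr, X_te, y_tr, y_te = train_test_split(X, logP, random_state=1)

        models = {
            "MLR": LinearRegression(),
            "PLS": PLSRegression(n_components=4),
            "ANN": MLPRegressor(hidden_layer_sizes=(10,), max_iter=10000, random_state=1),
        }
        for name, m in models.items():
            m.fit(X_tr, y_tr)
            print(name, "R^2 =", round(m.score(X_te, y_te), 3))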

  8. Developing Generic Image Search Strategies for Large Astronomical Data Sets and Archives using Convolutional Neural Networks and Transfer Learning

    NASA Astrophysics Data System (ADS)

    Peek, Joshua E. G.; Hargis, Jonathan R.; Jones, Craig K.

    2018-01-01

    Astronomical instruments produce petabytes of images every year, vastly more than can be inspected by a member of the astronomical community in search of a specific population of structures. Fortunately, the sky is mostly black, and source extraction algorithms have been developed to provide searchable catalogs of unconfused sources like stars and galaxies. These tools often fail for studies of more diffuse structures like the interstellar medium and unresolved stellar structures in nearby galaxies, leaving astronomers interested in observations of photodissociation regions, stellar clusters, and diffuse interstellar clouds without the crucial ability to search. In this work we present a new path forward for finding structures similar to an input structure in large data sets, using convolutional neural networks, transfer learning, and machine learning clustering techniques. We show applications to archival data in the Mikulski Archive for Space Telescopes (MAST).

  9. Lesion Mapping the Four-Factor Structure of Emotional Intelligence

    PubMed Central

    Operskalski, Joachim T.; Paul, Erick J.; Colom, Roberto; Barbey, Aron K.; Grafman, Jordan

    2015-01-01

    Emotional intelligence (EI) refers to an individual’s ability to process and respond to emotions, including recognizing the expression of emotions in others, using emotions to enhance thought and decision making, and regulating emotions to drive effective behaviors. Despite their importance for goal-directed social behavior, little is known about the neural mechanisms underlying specific facets of EI. Here, we report findings from a study investigating the neural bases of these specific components for EI in a sample of 130 combat veterans with penetrating traumatic brain injury. We examined the neural mechanisms underlying experiential (perceiving and using emotional information) and strategic (understanding and managing emotions) facets of EI. Factor scores were submitted to voxel-based lesion symptom mapping to elucidate their neural substrates. The results indicate that two facets of EI (perceiving and managing emotions) engage common and distinctive neural systems, with shared dependence on the social knowledge network, and selective engagement of the orbitofrontal and parietal cortex for strategic aspects of emotional information processing. The observed pattern of findings suggests that sub-facets of experiential and strategic EI can be characterized as separable but related processes that depend upon a core network of brain structures within frontal, temporal and parietal cortex. PMID:26858627

  10. Effect of Heterogeneity on Decorrelation Mechanisms in Spiking Neural Networks: A Neuromorphic-Hardware Study

    NASA Astrophysics Data System (ADS)

    Pfeil, Thomas; Jordan, Jakob; Tetzlaff, Tom; Grübl, Andreas; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz

    2016-04-01

    High-level brain function, such as memory, classification, or reasoning, can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear subthreshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with nonlinear, conductance-based synapses. Emulations of these networks on the analog neuromorphic-hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. Our results confirm that shared-input correlations are actively suppressed by inhibitory feedback also in highly heterogeneous networks exhibiting broad, heavy-tailed firing-rate distributions. In line with former studies, cell heterogeneities reduce shared-input correlations. Overall, however, correlations in the recurrent system can increase with the level of heterogeneity as a consequence of diminished effective negative feedback.

  11. Structure, function, and control of the human musculoskeletal network

    PubMed Central

    Murphy, Andrew C.; Muldoon, Sarah F.; Baker, David; Lastowka, Adam; Bennett, Brittany; Yang, Muzhi

    2018-01-01

    The human body is a complex organism, the gross mechanical properties of which are enabled by an interconnected musculoskeletal network controlled by the nervous system. The nature of musculoskeletal interconnection facilitates stability, voluntary movement, and robustness to injury. However, a fundamental understanding of this network and its control by neural systems has remained elusive. Here we address this gap in knowledge by utilizing medical databases and mathematical modeling to reveal the organizational structure, predicted function, and neural control of the musculoskeletal system. We constructed a highly simplified whole-body musculoskeletal network in which single muscles connect to multiple bones via both origin and insertion points. We demonstrated that, using this simplified model, a muscle’s role in this network could offer a theoretical prediction of the susceptibility of surrounding components to secondary injury. Finally, we illustrated that sets of muscles cluster into network communities that mimic the organization of control modules in primary motor cortex. This novel formalism for describing interactions between the muscular and skeletal systems serves as a foundation to develop and test therapeutic responses to injury, inspiring future advances in clinical treatments. PMID:29346370

  12. Classification of Weed Species Using Artificial Neural Networks Based on Color Leaf Texture Feature

    NASA Astrophysics Data System (ADS)

    Li, Zhichen; An, Qiu; Ji, Changying

    The potential impact of herbicide utilization compels people to seek new methods of weed control. Selective herbicide application is an optimal method for reducing herbicide usage while maintaining weed control. The key to selective application is discriminating weeds accurately. The HSI color co-occurrence method (CCM) texture analysis technique was used to extract four texture parameters: angular second moment (ASM), entropy (E), inertia quadrature (IQ), and inverse difference moment or local homogeneity (IDM). The weed species selected for study were Arthraxon hispidus, Digitaria sanguinalis, Petunia, Cyperus, Alternanthera philoxeroides and Corchoropsis psilocarpa. The NeuroShell2 software was used to design the structure of the neural network and to train and test it on the data. It was found that an 8-40-1 artificial neural network provided the best classification performance, achieving classification accuracies of 78%.
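
    A minimal sketch of CCM-style texture features followed by a small neural network classifier, assuming NumPy, scikit-image (0.19 or later for graycomatrix) and scikit-learn; the image patches, labels, and single-channel GLCM are placeholders rather than the HSI leaf images used in the study.

        # GLCM texture features (ASM, entropy, contrast, homogeneity) feeding a small MLP.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19
        from sklearn.neural_network import MLPClassifier

        def ccm_features(img):
            """ASM, entropy, inertia (contrast) and IDM (homogeneity) from one channel."""
            glcm = graycomatrix(img, distances=[1], angles=[0], levels=256, normed=True)
            p = glcm[:, :, 0, 0]
            entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))      # computed by hand from the GLCM
            return [graycoprops(glcm, "ASM")[0, 0], entropy,
                    graycoprops(glcm, "contrast")[0, 0], graycoprops(glcm, "homogeneity")[0, 0]]

        rng = np.random.default_rng(2)
        imgs = rng.integers(0, 256, size=(60, 32, 32), dtype=np.uint8)   # placeholder leaf patches
        labels = rng.integers(0, 2, size=60)                             # placeholder weed classes
        X = np.array([ccm_features(im) for im in imgs])

        clf = MLPClassifier(hidden_layer_sizes=(40,), max_iter=3000, random_state=2).fit(X, labels)
        print("training accuracy on the toy data:", clf.score(X, labels))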

  13. Inner and Outer Recursive Neural Networks for Chemoinformatics Applications.

    PubMed

    Urban, Gregor; Subrahmanya, Niranjan; Baldi, Pierre

    2018-02-26

    Deep learning methods applied to problems in chemoinformatics often require the use of recursive neural networks to handle data with graphical structure and variable size. We present a useful classification of recursive neural network approaches into two classes, the inner and outer approach. The inner approach uses recursion inside the underlying graph, to essentially "crawl" the edges of the graph, while the outer approach uses recursion outside the underlying graph, to aggregate information over progressively longer distances in an orthogonal direction. We illustrate the inner and outer approaches on several examples. More importantly, we provide open-source implementations [available at www.github.com/Chemoinformatics/InnerOuterRNN and cdb.ics.uci.edu] for both approaches in TensorFlow, which can be used in combination with training data to produce efficient models for predicting the physical, chemical, and biological properties of small molecules.

  14. Classification and recognition of texture collagen obtaining by multiphoton microscope with neural network analysis

    NASA Astrophysics Data System (ADS)

    Wu, Shulian; Peng, Yuanyuan; Hu, Liangjun; Zhang, Xiaoman; Li, Hui

    2016-01-01

    Second harmonic generation microscopy (SHGM) was used to monitor chronological skin aging in vivo. The collagen structures of mouse models of different ages were imaged using SHGM. Texture features (contrast, correlation and entropy) were then extracted and analysed using the grey-level co-occurrence matrix. Finally, the neural network tool of Matlab was applied to train a classifier on the collagen textures at different states during the aging process, and simulation on the mouse collagen textures was carried out. The results indicated that the classification accuracy reached 85%. The results demonstrated that the proposed approach effectively detected the target structures in the collagen texture images during chronological aging, and that the neural-network-based classification and feature extraction method is feasible for skin analysis.

  15. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks

    PubMed Central

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces the training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving a quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. Besides, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper. PMID:27044001

  16. Predicting acute aquatic toxicity of structurally diverse chemicals in fish using artificial intelligence approaches.

    PubMed

    Singh, Kunwar P; Gupta, Shikha; Rai, Premanjali

    2013-09-01

    The research aims to develop global modeling tools capable of categorizing structurally diverse chemicals in various toxicity classes according to the EEC and European Community directives, and to predict their acute toxicity in fathead minnow using a set of selected molecular descriptors. Accordingly, artificial-intelligence-based classification and regression models, such as probabilistic neural networks (PNN), generalized regression neural networks (GRNN), multilayer perceptron neural networks (MLPN), radial basis function neural networks (RBFN), support vector machines (SVM), gene expression programming (GEP), and decision trees (DT), were constructed using the experimental toxicity data. Diversity and non-linearity in the chemicals' data were tested using the Tanimoto similarity index and Brock-Dechert-Scheinkman statistics. Predictive and generalization abilities of the various models constructed here were compared using several statistical parameters. PNN and GRNN models performed relatively better than MLPN, RBFN, SVM, GEP, and DT. In both two- and four-category classifications, PNN yielded a considerably high accuracy of classification in the training (95.85 percent and 90.07 percent) and validation data (91.30 percent and 86.96 percent), respectively. GRNN rendered a high correlation between the measured and model-predicted -log LC50 values both for the training (0.929) and validation (0.910) data and low prediction errors (RMSE) of 0.52 and 0.49 for the two sets. Efficiency of the selected PNN and GRNN models in predicting the acute toxicity of new chemicals was adequately validated using external datasets of different fish species (fathead minnow, bluegill, trout, and guppy). The PNN and GRNN models showed good predictive and generalization abilities and can be used as tools for predicting toxicities of structurally diverse chemical compounds. Copyright © 2013 Elsevier Inc. All rights reserved.
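
    A GRNN is essentially Nadaraya-Watson kernel regression, which the following sketch implements in NumPy on synthetic descriptor data; the descriptors, the -log LC50 values, and the kernel width are placeholders, not the fathead minnow data or the tuned model of the study.

        # Minimal GRNN-style (kernel regression) predictor on synthetic descriptor data.
        import numpy as np

        def grnn_predict(X_train, y_train, X_query, sigma=0.5):
            # Gaussian kernel weights between each query point and every training pattern
            d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            return (w @ y_train) / w.sum(axis=1)

        rng = np.random.default_rng(3)
        X = rng.normal(size=(200, 6))                          # "molecular descriptors" (placeholder)
        neg_log_lc50 = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)

        pred = grnn_predict(X[:150], neg_log_lc50[:150], X[150:], sigma=1.0)
        rmse = float(np.sqrt(np.mean((pred - neg_log_lc50[150:]) ** 2)))
        print("GRNN RMSE on held-out data:", round(rmse, 3))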

  17. Equilibria of perceptrons for simple contingency problems.

    PubMed

    Dawson, Michael R W; Dupuis, Brian

    2012-08-01

    The contingency between cues and outcomes is fundamentally important to theories of causal reasoning and to theories of associative learning. Researchers have computed the equilibria of Rescorla-Wagner models for a variety of contingency problems, and have used these equilibria to identify situations in which the Rescorla-Wagner model is consistent, or inconsistent, with normative models of contingency. Mathematical analyses that directly compare artificial neural networks to contingency theory have not been performed, because of the assumed equivalence between the Rescorla-Wagner learning rule and the delta rule training of artificial neural networks. However, recent results indicate that this equivalence is not as straightforward as typically assumed, suggesting a strong need for mathematical accounts of how networks deal with contingency problems. One such analysis is presented here, where it is proven that the structure of the equilibrium for a simple network trained on a basic contingency problem is quite different from the structure of the equilibrium for a Rescorla-Wagner model faced with the same problem. However, these structural differences lead to functionally equivalent behavior. The implications of this result for the relationships between associative learning, contingency theory, and connectionism are discussed.
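
    A minimal sketch of a delta-rule unit trained on a simple cue-outcome contingency problem, assuming NumPy; the probabilities are arbitrary, and the linear (rather than logistic) output unit is an illustrative simplification, not the authors' exact network.

        # Delta-rule training of a single linear unit on a probabilistic cue-outcome contingency.
        import numpy as np

        rng = np.random.default_rng(4)
        p_outcome_given_cue = 0.8      # P(outcome | cue present)
        p_outcome_given_no_cue = 0.3   # P(outcome | cue absent)

        w = np.zeros(2)                # weights for the cue and for an always-on context unit
        lr = 0.02
        for _ in range(50_000):
            cue = rng.integers(0, 2)
            x = np.array([cue, 1.0])
            p = p_outcome_given_cue if cue else p_outcome_given_no_cue
            t = float(rng.random() < p)            # sampled outcome
            y = w @ x                              # linear output unit
            w += lr * (t - y) * x                  # delta rule update

        print("approximate equilibrium weights (cue, context):", np.round(w, 2))
        print("Delta-P contingency for comparison:", p_outcome_given_cue - p_outcome_given_no_cue)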

  18. Nonlinear dynamic systems identification using recurrent interval type-2 TSK fuzzy neural network - A novel structure.

    PubMed

    El-Nagar, Ahmad M

    2018-01-01

    In this study, a novel structure of a recurrent interval type-2 Takagi-Sugeno-Kang (TSK) fuzzy neural network (FNN) is introduced for nonlinear dynamic and time-varying systems identification. It combines type-2 fuzzy sets (T2FSs) and a recurrent FNN to cope with data uncertainties. The fuzzy firing strengths in the proposed structure are returned to the network input as internal variables. Interval type-2 fuzzy sets (IT2FSs) are used to describe the antecedent part of each rule, while the consequent part is of TSK type, i.e., a linear function of the internal variables and the external inputs with interval weights. All the type-2 fuzzy rules for the proposed RIT2TSKFNN are learned on-line based on structure and parameter learning, which are performed using type-2 fuzzy clustering. The antecedent and consequent parameters of the proposed RIT2TSKFNN are updated based on a Lyapunov function to achieve network stability. The obtained results indicate that our proposed network has a small root mean square error (RMSE) and a small integral of square error (ISE), with a small number of rules and a short computation time, compared with other type-2 FNNs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Region stability analysis and tracking control of memristive recurrent neural network.

    PubMed

    Bao, Gang; Zeng, Zhigang; Shen, Yanjun

    2018-02-01

    The memristor was first postulated by Leon Chua and realized by the Hewlett-Packard (HP) laboratory. Research results show that memristors can be used to emulate the synapses of neurons. This paper presents a class of recurrent neural networks with HP memristors. Firstly, simulations show that the memristive recurrent neural network has more complex dynamics than the traditional recurrent neural network. Then it is derived that an n-dimensional memristive recurrent neural network is composed of [Formula: see text] sub-neural networks which do not have a common equilibrium point. By designing a tracking controller, the memristive neural network can be made to converge to the desired sub-neural network. Finally, two numerical examples are given to verify the validity of our results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. The structure-AChE inhibitory activity relationships study in a series of pyridazine analogues.

    PubMed

    Saracoglu, M; Kandemirli, F

    2009-07-01

    The structure-activity relationships (SAR) of a class of anti-acetylcholinesterase (AChE) inhibitors (53 pyridazine derivatives) are investigated by means of the Electronic-Topological Method (ETM) followed by a Neural Networks application (ETM-NN). AChE activities of the series were measured in IC(50) units and, relative to the activity levels, the series was partitioned into classes of active and inactive compounds. Based on pharmacophores and antipharmacophores calculated by the ETM software as sub-matrices containing important spatial and electronic characteristics, a system for activity prognostication is developed. Input data for the ETM were taken from the results of conformational and quantum-mechanics calculations. To predict the activity, we used one of the most well-known neural networks, namely the feed-forward neural network (FFNN) trained with the back-propagation algorithm. The supervised learning was performed using a variant of the FFNN known as the Associative Neural Network (ASNN). The testing results revealed the high ability of the ETM to predict both the activity and inactivity of potential AChE inhibitors. Analysis of the HOMOs of the compounds containing Ph1 and APh1 has shown that the atoms with the highest values of the atomic orbital coefficients are mainly those atoms that enter into the pharmacophores. Thus, the set of pharmacophores and antipharmacophores found as the result of this study forms a basis for a system of anti-cholinesterase activity prediction.

  1. Sediment classification using neural networks: An example from the site-U1344A of IODP Expedition 323 in the Bering Sea

    NASA Astrophysics Data System (ADS)

    Ojha, Maheswar; Maiti, Saumen

    2016-03-01

    A novel approach based on the concept of the Bayesian neural network (BNN) has been implemented for classifying sediment boundaries using downhole log data obtained during Integrated Ocean Drilling Program (IODP) Expedition 323 in the Bering Sea slope region. The Bayesian framework in conjunction with a Markov Chain Monte Carlo (MCMC)/hybrid Monte Carlo (HMC) learning paradigm has been applied to constrain the lithology boundaries using density, density porosity, gamma ray, sonic P-wave velocity and electrical resistivity at Hole U1344A. We have demonstrated the effectiveness of our supervised classification methodology by comparing our findings with those of a conventional neural network and a Bayesian neural network optimized by the scaled conjugate gradient (SCG) method, and tested the robustness of the algorithm in the presence of red noise in the data. The Bayesian results based on the HMC algorithm (BNN.HMC) resolve detailed finer structures at certain depths in addition to the main lithologies such as silty clay, diatom clayey silt and sandy silt. Our method also recovers lithology information in the zone of no core recovery, at depths between 615 and 655 m wireline-log matched depth below seafloor. Our analyses demonstrate that the BNN-based approach renders a robust means for the classification of complex lithology successions at Hole U1344A, which could be very useful for other studies and for understanding oceanic crustal inhomogeneity and structural discontinuities.

  2. A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints.

    PubMed

    Liang, X B; Wang, J

    2000-01-01

    This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special problem which can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and will converge to the set of equilibria of the neural network for any initial point in the feasible bound region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory will eventually converge to the feasible region for any initial state, even outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
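
    The general flavour of such a network can be sketched with the well-known projection dynamics dx/dt = -x + P(x - alpha * grad f(x)), integrated here with forward Euler in NumPy; this follows the generic projection-network idea for bound-constrained minimization, not the exact model equations of the paper.

        # Projection-type recurrent network minimizing a strictly convex quadratic over a box.
        import numpy as np

        Q = np.array([[3.0, 0.5], [0.5, 2.0]])      # f(x) = 0.5 x'Qx + c'x
        c = np.array([-4.0, 1.0])
        lower, upper = np.array([0.0, 0.0]), np.array([1.0, 1.0])

        def grad(x):
            return Q @ x + c

        def project(x):
            # projection onto the box acts as the network's activation function
            return np.clip(x, lower, upper)

        x = np.array([5.0, -3.0])                   # initial state outside the feasible box
        alpha, dt = 0.2, 0.05
        for _ in range(2000):
            x = x + dt * (project(x - alpha * grad(x)) - x)   # dx/dt = -x + P(x - alpha * grad f(x))

        print("equilibrium state:", np.round(x, 4))
        print("projected-gradient residual (zero at an optimum):", np.round(project(x - alpha * grad(x)) - x, 6))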

  3. Salient regions detection using convolutional neural networks and color volume

    NASA Astrophysics Data System (ADS)

    Liu, Guang-Hai; Hou, Yingkun

    2018-03-01

    Convolutional neural networks are an important technique in machine learning, pattern recognition and image processing. In order to reduce the computational burden and extend the classical LeNet-5 model to the field of saliency detection, we propose a simple and novel computing model based on the LeNet-5 network. In the proposed model, hue, saturation and intensity are utilized to extract depth cues, and then we integrate the depth cues and color volume into saliency detection, following the basic structure of the feature integration theory. Experimental results show that the proposed computing model outperforms some existing state-of-the-art methods on the MSRA1000 and ECSSD datasets.

  4. A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine

    NASA Astrophysics Data System (ADS)

    Guo, T. H.; Musgrave, J.

    1992-11-01

    In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed loop controlled during main stage (65 percent - 109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct on the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fitting which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an auto associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using simulation data.
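
    A minimal sketch of the first approach described above, assuming NumPy and scikit-learn: an auto-associative style network whose outputs reproduce the sensor inputs and additionally estimate the mixture ratio. The four surrogate sensor signals and their dependence on mixture ratio are invented for illustration; real training would use simulation or test-bed firing data.

        # Auto-associative network with an extra mixture-ratio output (synthetic surrogate sensors).
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(5)
        mr = rng.uniform(5.5, 6.5, size=500)                      # "true" (unmeasured) mixture ratio
        sensors = np.column_stack([
            200.0 * mr + rng.normal(scale=5.0, size=500),         # surrogate chamber pressure
            10.0 / mr + rng.normal(scale=0.05, size=500),         # surrogate fuel volumetric flow
            30.0 + 2.0 * mr + rng.normal(scale=0.2, size=500),    # surrogate pump-exit temperature
            1.5 * mr + rng.normal(scale=0.05, size=500),          # surrogate pump-exit pressure
        ])
        targets = np.column_stack([sensors, mr])                  # reproduce sensors + estimate mixture ratio

        # standardize so the mixture-ratio output is not drowned out by large-scale sensor channels
        Xm, Xs = sensors.mean(0), sensors.std(0)
        Ym, Ys = targets.mean(0), targets.std(0)

        net = MLPRegressor(hidden_layer_sizes=(12,), max_iter=8000, random_state=5)
        net.fit((sensors[:400] - Xm) / Xs, (targets[:400] - Ym) / Ys)

        est = net.predict((sensors[400:] - Xm) / Xs)[:, -1] * Ys[-1] + Ym[-1]
        rmse = float(np.sqrt(np.mean((est - mr[400:]) ** 2)))
        print("mixture-ratio estimation RMSE on held-out data:", round(rmse, 4))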

  5. A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Guo, T. H.; Musgrave, J.

    1992-01-01

    In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed loop controlled during main stage (65 percent - 109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct on the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fitting which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an auto associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using simulation data.

  6. Ion track based tunable device as humidity sensor: a neural network approach

    NASA Astrophysics Data System (ADS)

    Sharma, Mamta; Sharma, Anuradha; Bhattacherjee, Vandana

    2013-01-01

    Artificial Neural Network (ANN) has been applied in statistical model development, adaptive control systems, pattern recognition in data mining, and decision making under uncertainty. The nonlinear dependence of any sensor output on the input physical variable has been the motivation for many researchers to attempt unconventional modeling techniques such as neural networks and other machine learning approaches. An artificial neural network (ANN) is a computational tool inspired by the network of neurons in the biological nervous system. It is a network consisting of arrays of artificial neurons linked together with different connection weights. The states of the neurons as well as the weights of the connections among them evolve according to certain learning rules. In the present work we focus on the category of sensors which respond to electrical property changes such as impedance or capacitance. Recently, sensor materials have been embedded in etched tracks due to their nanometric dimensions and high aspect ratio, which give a high surface area available for exposure to the sensing material. Various materials can be used for this purpose to probe physical (light intensity, temperature etc.), chemical (humidity, ammonia gas, alcohol etc.) or biological (germs, hormones etc.) parameters. The present work involves the application of TEMPOS structures as humidity sensors. The sample to be studied was prepared using a polymer electrolyte (PEO/NH4ClO4) with CdS nano-particles dispersed in it. In the present research we have attempted to correlate the combined effects of voltage and frequency on the impedance of humidity sensors using a neural network model; the results indicated that the mean absolute error of the ANN model was 3.95% for the training data and 4.65% for the validation data. The corresponding values for the LR model were 8.28% and 8.35%, respectively. The percentage improvement of the ANN model with respect to the linear regression (LR) model was also demonstrated. This demonstrates the suitability of neural networks for such modeling.

  7. Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.

    PubMed

    Hardy, N F; Buonomano, Dean V

    2018-02-01

    Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency-a measure of network interconnectedness-decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.

  8. Emergence of Functional Hierarchy in a Multiple Timescale Neural Network Model: A Humanoid Robot Experiment

    PubMed Central

    Yamashita, Yuichi; Tani, Jun

    2008-01-01

    It is generally thought that skilled behavior in human beings results from a functional hierarchy of the motor control system, within which reusable motor primitives are flexibly integrated into various sensori-motor sequence patterns. The underlying neural mechanisms governing the way in which continuous sensori-motor flows are segmented into primitives and the way in which series of primitives are integrated into various behavior sequences have, however, not yet been clarified. In earlier studies, this functional hierarchy has been realized through the use of explicit hierarchical structure, with local modules representing motor primitives in the lower level and a higher module representing sequences of primitives switched via additional mechanisms such as gate-selecting. When sequences contain similarities and overlap, however, a conflict arises in such earlier models between generalization and segmentation, induced by this separated modular structure. To address this issue, we propose a different type of neural network model. The current model neither makes use of separate local modules to represent primitives nor introduces explicit hierarchical structure. Rather than forcing architectural hierarchy onto the system, functional hierarchy emerges through a form of self-organization that is based on two distinct types of neurons, each with different time properties (“multiple timescales”). Through the introduction of multiple timescales, continuous sequences of behavior are segmented into reusable primitives, and the primitives, in turn, are flexibly integrated into novel sequences. In experiments, the proposed network model, coordinating the physical body of a humanoid robot through high-dimensional sensori-motor control, also successfully situated itself within a physical environment. Our results suggest that it is not only the spatial connections between neurons but also the timescales of neural activity that act as important mechanisms leading to functional hierarchy in neural systems. PMID:18989398

  9. Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control

    USGS Publications Warehouse

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.

  10. A computational framework for the detection of subcortical brain dysmaturation in neonatal MRI using 3D Convolutional Neural Networks.

    PubMed

    Ceschin, Rafael; Zahner, Alexandria; Reynolds, William; Gaesser, Jenna; Zuccoli, Giulio; Lo, Cecilia W; Gopalakrishnan, Vanathi; Panigrahy, Ashok

    2018-05-21

    Deep neural networks are increasingly being used both in supervised learning for classification tasks and in unsupervised learning to derive complex patterns from the input data. However, the successful implementation of deep neural networks using neuroimaging datasets requires adequate sample size for training and well-defined signal-intensity-based structural differentiation. There is a lack of effective automated diagnostic tools for the reliable detection of brain dysmaturation in the neonatal period, related to small sample sizes and complex undifferentiated brain structures, despite both translational research and clinical importance. Volumetric information alone is insufficient for diagnosis. In this study, we developed a computational framework for the automated classification of brain dysmaturation from neonatal MRI, by combining a specific deep neural network implementation with neonatal structural brain segmentation as a method for both clinical pattern recognition and data-driven inference into the underlying structural morphology. We implemented three-dimensional convolutional neural networks (3D-CNNs) to specifically classify dysplastic cerebelli, a subset of surface-based subcortical brain dysmaturation, in term infants born with congenital heart disease. We obtained a classification accuracy of 0.985 ± 0.0241 for subtle cerebellar dysplasia in CHD using 10-fold cross-validation. Furthermore, the hidden-layer activations and class activation maps depicted regional vulnerability of the superior surface of the cerebellum (composed mostly of the posterior lobe and the midline vermis) with regard to differentiating the dysplastic process from normal tissue. The posterior lobe and the midline vermis provide regional differentiation that is relevant not only to the clinical diagnosis of cerebellar dysplasia, but also to genetic mechanisms and neurodevelopmental outcome correlates. These findings not only contribute to the detection and classification of a subset of neonatal brain dysmaturation, but also provide insight into the pathogenesis of cerebellar dysplasia in CHD. In addition, this is one of the first examples of the application of deep learning to a neuroimaging dataset in which the hidden-layer activations revealed diagnostically and biologically relevant features about the clinical pathogenesis. The code developed for this project is open source, published under the BSD License, and designed to be generalizable to applications both within and beyond neonatal brain imaging. Copyright © 2018 Elsevier Inc. All rights reserved.
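
    A minimal 3D convolutional classifier of the kind described above, sketched in PyTorch; the volume size, channel counts, two-class output, and random input volumes are illustrative, and the paper's architecture, preprocessing, and segmentation pipeline are not reproduced.

        # Tiny 3D-CNN classifier for single-channel volumes, with one illustrative training step.
        import torch
        import torch.nn as nn

        class Small3DCNN(nn.Module):
            def __init__(self, n_classes=2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                    nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                    nn.AdaptiveAvgPool3d(1),          # global pooling keeps the head size-independent
                )
                self.classifier = nn.Linear(16, n_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        model = Small3DCNN()
        volumes = torch.randn(4, 1, 32, 32, 32)       # four dummy single-channel "MRI" volumes
        labels = torch.tensor([0, 1, 0, 1])

        logits = model(volumes)
        loss = nn.CrossEntropyLoss()(logits, labels)
        loss.backward()                                # one training step (optimizer omitted for brevity)
        print("logits shape:", tuple(logits.shape), "loss:", float(loss))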

  11. Theoretical Neuroanatomy: Analyzing the Structure, Dynamics, and Function of Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Seth, Anil K.; Edelman, Gerald M.

    The mammalian brain is an extraordinary object: its networks give rise to our conscious experiences as well as to the generation of adaptive behavior for the organism within its environment. Progress in understanding the structure, dynamics and function of the brain faces many challenges. Biological neural networks change over time, their detailed structure is difficult to elucidate, and they are highly heterogeneous both in their neuronal units and synaptic connections. In facing these challenges, graph-theoretic and information-theoretic approaches have yielded a number of useful insights and promise many more.

  12. Altered Integration of Structural Covariance Networks in Young Children With Type 1 Diabetes.

    PubMed

    Hosseini, S M Hadi; Mazaika, Paul; Mauras, Nelly; Buckingham, Bruce; Weinzimer, Stuart A; Tsalikian, Eva; White, Neil H; Reiss, Allan L

    2016-11-01

    Type 1 diabetes mellitus (T1D), one of the most frequent chronic diseases in children, is associated with glucose dysregulation that contributes to an increased risk for neurocognitive deficits. While there is a bulk of evidence regarding neurocognitive deficits in adults with T1D, little is known about how early-onset T1D affects neural networks in young children. Recent data demonstrated widespread alterations in regional gray matter and white matter associated with T1D in young children. These widespread neuroanatomical changes might impact the organization of large-scale brain networks. In the present study, we applied graph-theoretical analysis to test whether the organization of structural covariance networks in the brain for a cohort of young children with T1D (N = 141) is altered compared to healthy controls (HC; N = 69). While the networks in both groups followed a small world organization-an architecture that is simultaneously highly segregated and integrated-the T1D network showed significantly longer path length compared with HC, suggesting reduced global integration of brain networks in young children with T1D. In addition, network robustness analysis revealed that the T1D network model showed more vulnerability to neural insult compared with HC. These results suggest that early-onset T1D negatively impacts the global organization of structural covariance networks and influences the trajectory of brain development in childhood. This is the first study to examine structural covariance networks in young children with T1D. Improving glycemic control for young children with T1D might help prevent alterations in brain networks in this population. Hum Brain Mapp 37:4034-4046, 2016. © 2016 Wiley Periodicals, Inc.
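
    The clustering-coefficient and path-length measures referred to above can be computed with networkx, as in the brief sketch below; the small-world random graph is only a stand-in for a structural covariance network, and the hub-removal step is a crude illustration of a robustness probe.

        # Small-world metrics and a simple robustness probe on a placeholder graph.
        import networkx as nx

        G = nx.connected_watts_strogatz_graph(n=90, k=6, p=0.1, seed=0)   # stand-in "brain-region" network
        print("clustering coefficient     :", round(nx.average_clustering(G), 3))
        print("characteristic path length :", round(nx.average_shortest_path_length(G), 3))

        # remove the highest-degree node and recompute global efficiency as a crude vulnerability check
        hub = max(G.degree, key=lambda kv: kv[1])[0]
        G.remove_node(hub)
        print("global efficiency after hub removal:", round(nx.global_efficiency(G), 3))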

  13. Generating Seismograms with Deep Neural Networks

    NASA Astrophysics Data System (ADS)

    Krischer, L.; Fichtner, A.

    2017-12-01

    The recent surge of successful uses of deep neural networks in computer vision, speech recognition, and natural language processing, mainly enabled by the availability of fast GPUs and extremely large data sets, is starting to see many applications across all natural sciences. In seismology these are largely confined to classification and discrimination tasks. In this contribution we explore the use of deep neural networks for another class of problems: so-called generative models. Generative modelling is a branch of statistics concerned with generating new observed data samples, usually by drawing from some underlying probability distribution. Samples with specific attributes can be generated by conditioning on input variables. In this work we condition on seismic source (mechanism and location) and receiver (location) parameters to generate multi-component seismograms. The deep neural networks are trained on synthetic data calculated with Instaseis (http://instaseis.net, van Driel et al. (2015)) and waveforms from the global ShakeMovie project (http://global.shakemovie.princeton.edu, Tromp et al. (2010)). The underlying radially symmetric or smoothly three-dimensional Earth structures result in comparatively small waveform differences for similar events or at close receivers, and the networks learn to interpolate between training data samples. Of particular importance is the chosen misfit functional. Generative adversarial networks (Goodfellow et al. (2014)) implement a system in which two networks compete: the generator network creates samples and the discriminator network distinguishes these from the true training examples. Both are trained in an adversarial fashion until the discriminator can no longer distinguish between generated and real samples. We show how this can be applied to seismograms and in particular how it compares to networks trained with more conventional misfit metrics. Last but not least, we attempt to shed some light on the black-box nature of neural networks by estimating the quality and uncertainties of the generated seismograms.
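
    A minimal sketch of the adversarial setup described above, in PyTorch; the "seismograms" are short random vectors generated from a toy rule, the four-element condition vector stands in for source and receiver parameters, and the network sizes are arbitrary rather than those of the study.

        # Toy conditional GAN loop: generator vs. discriminator on fake "trace" data.
        import torch
        import torch.nn as nn

        trace_len, cond_dim, noise_dim = 64, 4, 16

        G = nn.Sequential(nn.Linear(noise_dim + cond_dim, 128), nn.ReLU(), nn.Linear(128, trace_len))
        D = nn.Sequential(nn.Linear(trace_len + cond_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

        opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
        bce = nn.BCEWithLogitsLoss()

        def fake_batch(cond):
            z = torch.randn(cond.shape[0], noise_dim)
            return G(torch.cat([z, cond], dim=1))

        for step in range(200):
            cond = torch.rand(32, cond_dim)                # stand-in source/receiver parameters
            real = torch.sin(torch.linspace(0, 6.28, trace_len)) * cond[:, :1] \
                   + 0.05 * torch.randn(32, trace_len)     # toy "real" traces conditioned on cond

            # discriminator update: real traces vs. generated traces
            d_real = D(torch.cat([real, cond], dim=1))
            d_fake = D(torch.cat([fake_batch(cond).detach(), cond], dim=1))
            loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()

            # generator update: try to make the discriminator call fakes real
            d_fake = D(torch.cat([fake_batch(cond), cond], dim=1))
            loss_g = bce(d_fake, torch.ones_like(d_fake))
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()

        print("final losses  D:", float(loss_d), " G:", float(loss_g))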

  14. GA-based fuzzy reinforcement learning for control of a magnetic bearing system.

    PubMed

    Lin, C T; Jou, C P

    2000-01-01

    This paper proposes a TD (temporal difference) and GA (genetic algorithm)-based reinforcement (TDGAR) learning method and applies it to the control of a real magnetic bearing system. The TDGAR learning scheme is a new hybrid GA, which integrates the TD prediction method and the GA to perform the reinforcement learning task. The TDGAR learning system is composed of two integrated feedforward networks. One neural network acts as a critic network to guide the learning of the other network (the action network) which determines the outputs (actions) of the TDGAR learning system. The action network can be a normal neural network or a neural fuzzy network. Using the TD prediction method, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network uses the GA to adapt itself according to the internal reinforcement signal. The key concept of the TDGAR learning scheme is to formulate the internal reinforcement signal as the fitness function for the GA such that the GA can evaluate the candidate solutions (chromosomes) regularly, even during periods without external feedback from the environment. This enables the GA to proceed to new generations regularly without waiting for the arrival of the external reinforcement signal. This can usually accelerate the GA learning since a reinforcement signal may only be available at a time long after a sequence of actions has occurred in the reinforcement learning problem. The proposed TDGAR learning system has been used to control an active magnetic bearing (AMB) system in practice. A systematic design procedure is developed to achieve successful integration of all the subsystems including magnetic suspension, mechanical structure, and controller training. The results show that the TDGAR learning scheme can successfully find a neural controller or a neural fuzzy controller for a self-designed magnetic bearing system.

  15. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    ERIC Educational Resources Information Center

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…

  16. Muscle networks: Connectivity analysis of EMG activity during postural control

    NASA Astrophysics Data System (ADS)

    Boonstra, Tjeerd W.; Danna-Dos-Santos, Alessander; Xie, Hong-Bo; Roerdink, Melvyn; Stins, John F.; Breakspear, Michael

    2015-12-01

    Understanding the mechanisms that reduce the many degrees of freedom in the musculoskeletal system remains an outstanding challenge. Muscle synergies reduce the dimensionality and hence simplify the control problem. How this is achieved is not yet known. Here we use network theory to assess the coordination between multiple muscles and to elucidate the neural implementation of muscle synergies. We performed connectivity analysis of surface EMG from ten leg muscles to extract the muscle networks while human participants were standing upright in four different conditions. We observed widespread connectivity between muscles at multiple distinct frequency bands. The network topology differed significantly between frequencies and between conditions. These findings demonstrate how muscle networks can be used to investigate the neural circuitry of motor coordination. The presence of disparate muscle networks across frequencies suggests that the neuromuscular system is organized into a multiplex network allowing for parallel and hierarchical control structures.

  17. Puzzle Pieces: Neural Structure and Function in Prader-Willi Syndrome

    PubMed Central

    Manning, Katherine E.; Holland, Anthony J.

    2015-01-01

    Prader-Willi syndrome (PWS) is a neurodevelopmental disorder of genomic imprinting, presenting with a behavioural phenotype encompassing hyperphagia, intellectual disability, social and behavioural difficulties, and propensity to psychiatric illness. Research has tended to focus on the cognitive and behavioural investigation of these features, and, with the exception of eating behaviour, the neural physiology is currently less well understood. A systematic review was undertaken to explore findings relating to neural structure and function in PWS, using search terms designed to encompass all published articles concerning both in vivo and post-mortem studies of neural structure and function in PWS. This supported the general paucity of research in this area, with many articles reporting case studies and qualitative descriptions or focusing solely on the overeating behaviour, although a number of systematic investigations were also identified. Research to date implicates a combination of subcortical and higher order structures in PWS, including those involved in processing reward, motivation, affect and higher order cognitive functions, with both anatomical and functional investigations indicating abnormalities. It appears likely that PWS involves aberrant activity across distributed neural networks. The characterisation of neural structure and function warrants both replication and further systematic study. PMID:28943631

  18. Puzzle Pieces: Neural Structure and Function in Prader-Willi Syndrome.

    PubMed

    Manning, Katherine E; Holland, Anthony J

    2015-12-17

    Prader-Willi syndrome (PWS) is a neurodevelopmental disorder of genomic imprinting, presenting with a behavioural phenotype encompassing hyperphagia, intellectual disability, social and behavioural difficulties, and propensity to psychiatric illness. Research has tended to focus on the cognitive and behavioural investigation of these features, and, with the exception of eating behaviour, the neural physiology is currently less well understood. A systematic review was undertaken to explore findings relating to neural structure and function in PWS, using search terms designed to encompass all published articles concerning both in vivo and post-mortem studies of neural structure and function in PWS. This supported the general paucity of research in this area, with many articles reporting case studies and qualitative descriptions or focusing solely on the overeating behaviour, although a number of systematic investigations were also identified. Research to date implicates a combination of subcortical and higher order structures in PWS, including those involved in processing reward, motivation, affect and higher order cognitive functions, with both anatomical and functional investigations indicating abnormalities. It appears likely that PWS involves aberrant activity across distributed neural networks. The characterisation of neural structure and function warrants both replication and further systematic study.

  19. Real-time support for high performance aircraft operation

    NASA Technical Reports Server (NTRS)

    Vidal, Jacques J.

    1989-01-01

    The feasibility of real-time processing schemes using artificial neural networks (ANNs) is investigated. A rationale for digital neural nets is presented and a general processor architecture for control applications is illustrated. Research results on ANN structures for real-time applications are given. Research results on ANN algorithms for real-time control are also shown.

  20. Stochastic architecture for Hopfield neural nets

    NASA Technical Reports Server (NTRS)

    Pavel, Sandy

    1992-01-01

    An expandable stochastic digital architecture for recurrent (Hopfield-like) neural networks is proposed. The main features and basic principles of stochastic processing are presented. The stochastic digital architecture is based on a chip with n fully interconnected neurons and a pipelined, bit-level processing structure. For large applications, a flexible way to interconnect many such chips is provided.
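
    As a rough illustration of the stochastic (bit-stream) processing principle mentioned above, the sketch below encodes values as random bit streams and multiplies them with a bitwise AND; the unipolar encoding, stream length, and values are illustrative assumptions, not details of the proposed chip.

```python
import numpy as np

# Sketch of stochastic (bit-stream) processing: a value p in [0, 1] is encoded
# as a random bit stream with P(bit = 1) = p, and multiplication reduces to a
# bitwise AND of independent streams.  Stream length and values are illustrative.
rng = np.random.default_rng(0)
stream_len = 100_000

def encode(p):
    return rng.random(stream_len) < p      # unipolar stochastic bit stream

def decode(bits):
    return bits.mean()                     # estimate of the encoded value

w, x = 0.6, 0.75
product_stream = encode(w) & encode(x)     # AND gate multiplies probabilities
print(decode(product_stream))              # ~ 0.45 = 0.6 * 0.75
```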

  1. Approach to design neural cryptography: a generalized architecture and a heuristic rule.

    PubMed

    Mu, Nankun; Liao, Xiaofeng; Huang, Tingwen

    2013-06-01

    Neural cryptography, a type of public key exchange protocol, is widely considered an effective method for sharing a common secret key between two neural networks over public channels. How to design neural cryptography remains a great challenge. In this paper, in order to address this challenge, a generalized network architecture and a significant heuristic rule are designed. The proposed generic framework, named the tree state classification machine (TSCM), extends and unifies the existing structures, i.e., the tree parity machine (TPM) and the tree committee machine (TCM). Furthermore, we carefully study the heuristic rule and find that it can improve the security of TSCM-based neural cryptography. TSCM and the heuristic rule can therefore guide the design of a large number of effective neural cryptography candidates, among which more secure instances can be found. Significantly, in light of TSCM and the heuristic rule, we further show that the designed neural cryptography outperforms TPM (the most secure model at present) in security. Finally, a series of numerical simulation experiments is provided to verify the validity and applicability of the results.
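
    The sketch below illustrates the classical tree parity machine (TPM) key exchange that TSCM generalizes; it is a minimal illustration under assumed parameters (K hidden units, N inputs per unit, weight bound L) and the standard Hebbian update, not the TSCM construction or the heuristic rule from the paper.

```python
import numpy as np

# Minimal tree parity machine (TPM) sketch.  K, N, and L are illustrative
# choices; two machines exchange outputs on public inputs and update only
# when their outputs agree, until their weights synchronize (the shared key).
K, N, L = 3, 10, 4
rng = np.random.default_rng(0)

class TPM:
    def __init__(self):
        self.w = rng.integers(-L, L + 1, size=(K, N))

    def output(self, x):
        # sigma_k: sign of each hidden unit's local field; tau: their product
        self.sigma = np.sign(np.sum(self.w * x, axis=1))
        self.sigma[self.sigma == 0] = -1
        return int(np.prod(self.sigma))

    def update(self, x, tau):
        # Hebbian update applied only to hidden units that agree with tau
        for k in range(K):
            if self.sigma[k] == tau:
                self.w[k] = np.clip(self.w[k] + tau * x[k], -L, L)

A, B = TPM(), TPM()
for step in range(10000):
    x = rng.choice([-1, 1], size=(K, N))       # public random input
    tau_a, tau_b = A.output(x), B.output(x)
    if tau_a == tau_b:                          # learn only on agreeing outputs
        A.update(x, tau_a)
        B.update(x, tau_b)
    if np.array_equal(A.w, B.w):                # synchronized weights = shared key
        print("synchronized after", step, "steps")
        break
```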

  2. Quantized Synchronization of Chaotic Neural Networks With Scheduled Output Feedback Control.

    PubMed

    Wan, Ying; Cao, Jinde; Wen, Guanghui

    In this paper, the synchronization problem of master-slave chaotic neural networks with remote sensors, quantization, and communication time delays is investigated. The communication channel between the master and slave chaotic neural networks consists of several remote sensors, each able to access only partial knowledge of the output of the master neural network. At each sampling instant, every sensor updates its own measurement, and only one sensor is scheduled to transmit its latest information to the controller side in order to update the control inputs for the slave neural network. Such a communication process and control strategy are therefore much more energy efficient than the traditional point-to-point scheme. Sufficient conditions on the output feedback control gain matrix, the allowable length of the sampling intervals, and the upper bound of the network-induced delays are derived to ensure quantized synchronization of the master-slave chaotic neural networks. Lastly, Chua's circuit system and a 4-D Hopfield neural network are simulated to validate the effectiveness of the main results.

  3. ChainMail based neural dynamics modeling of soft tissue deformation for surgical simulation.

    PubMed

    Zhang, Jinao; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2017-07-20

    Realistic and real-time modeling and simulation of soft tissue deformation is a fundamental research issue in the field of surgical simulation. In this paper, a novel cellular neural network approach is presented for modeling and simulation of soft tissue deformation by combining the neural dynamics of a cellular neural network with the ChainMail mechanism. The proposed method formulates the problem of elastic deformation in terms of cellular neural network activities to avoid the complex computation of elasticity. The local position adjustments of ChainMail are incorporated into the cellular neural network as the local connectivity of cells, through which the dynamic behaviors of soft tissue deformation are transformed into the neural dynamics of the cellular neural network. Experiments demonstrate that the proposed approach is capable of modeling the nonlinear deformation and typical mechanical behaviors of soft tissues. The proposed method not only improves ChainMail's linear deformation with the nonlinear characteristics of neural dynamics but also enables the cellular neural network to follow the principle of continuum mechanics in simulating soft tissue deformation.
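
    As a loose illustration of the ChainMail idea of propagating local position adjustments, the 1-D sketch below shifts neighbours only as far as needed to keep inter-element distances within assumed compression/stretch limits; it is not the paper's cellular-neural-network formulation.

```python
import numpy as np

# 1-D ChainMail-style sketch: when one element is displaced, neighbours are
# shifted only as far as needed to keep inter-element distances within
# [d_min, d_max].  Limits and chain length are assumed for illustration.
d_min, d_max = 0.5, 1.5          # assumed compression/stretch limits
x = np.arange(10, dtype=float)   # rest positions of 10 chain elements

def chainmail_move(x, idx, new_pos):
    x = x.copy()
    x[idx] = new_pos
    for i in range(idx + 1, len(x)):          # propagate to the right
        gap = x[i] - x[i - 1]
        if gap > d_max:   x[i] = x[i - 1] + d_max
        elif gap < d_min: x[i] = x[i - 1] + d_min
        else: break                           # constraint satisfied; stop
    for i in range(idx - 1, -1, -1):          # propagate to the left
        gap = x[i + 1] - x[i]
        if gap > d_max:   x[i] = x[i + 1] - d_max
        elif gap < d_min: x[i] = x[i + 1] - d_min
        else: break
    return x

print(chainmail_move(x, 4, 6.5))  # drag element 4 to the right
```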

  4. Prediction of the Fundamental Period of Infilled RC Frame Structures Using Artificial Neural Networks.

    PubMed

    Asteris, Panagiotis G; Tsaris, Athanasios K; Cavaleri, Liborio; Repapis, Constantinos C; Papalou, Angeliki; Di Trapani, Fabio; Karypidis, Dimitrios F

    2016-01-01

    The fundamental period is one of the most critical parameters for the seismic design of structures. There are several approaches in the literature for its estimation, which often conflict with each other, making their use questionable. Furthermore, the majority of these approaches do not take into account the presence of infill walls in the structure, despite the fact that infill walls increase the stiffness and mass of the structure, leading to significant changes in the fundamental period. In the present paper, artificial neural networks (ANNs) are used to predict the fundamental period of infilled reinforced concrete (RC) structures. For the training and validation of the ANN, a large data set is used, based on a detailed investigation of the parameters that affect the fundamental period of RC structures. The comparison of the predicted values with analytical ones indicates the potential of using ANNs for the prediction of the fundamental period of infilled RC frame structures, taking into account the crucial parameters that influence its value.
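
    The sketch below shows the general ANN regression workflow described above on synthetic stand-in data; the input features (height, span, infill ratio) and the placeholder target formula are assumptions for illustration, not the paper's data set or parameters.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Illustrative sketch only: synthetic samples whose target follows a simple
# code-style period formula stand in for the paper's data, just to show the
# ANN regression workflow for fundamental-period prediction.
rng = np.random.default_rng(1)
n = 2000
height = rng.uniform(3.0, 60.0, n)          # building height (m), assumed feature
span = rng.uniform(3.0, 8.0, n)             # bay span (m), assumed feature
infill = rng.uniform(0.0, 1.0, n)           # infill-wall ratio, assumed feature
# Placeholder target: bare-frame period reduced by the infill ratio.
T = 0.075 * height**0.75 * (1.0 - 0.4 * infill) + rng.normal(0, 0.02, n)

X = np.column_stack([height, span, infill])
X_tr, X_te, y_tr, y_te = train_test_split(X, T, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
print("R^2 on held-out samples:", round(ann.score(X_te, y_te), 3))
```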

  5. Prediction of the Fundamental Period of Infilled RC Frame Structures Using Artificial Neural Networks

    PubMed Central

    Asteris, Panagiotis G.; Tsaris, Athanasios K.; Cavaleri, Liborio; Repapis, Constantinos C.; Papalou, Angeliki; Di Trapani, Fabio; Karypidis, Dimitrios F.

    2016-01-01

    The fundamental period is one of the most critical parameters for the seismic design of structures. There are several approaches in the literature for its estimation, which often conflict with each other, making their use questionable. Furthermore, the majority of these approaches do not take into account the presence of infill walls in the structure, despite the fact that infill walls increase the stiffness and mass of the structure, leading to significant changes in the fundamental period. In the present paper, artificial neural networks (ANNs) are used to predict the fundamental period of infilled reinforced concrete (RC) structures. For the training and validation of the ANN, a large data set is used, based on a detailed investigation of the parameters that affect the fundamental period of RC structures. The comparison of the predicted values with analytical ones indicates the potential of using ANNs for the prediction of the fundamental period of infilled RC frame structures, taking into account the crucial parameters that influence its value. PMID:27066069

  6. Fuzzy logic and neural networks in artificial intelligence and pattern recognition

    NASA Astrophysics Data System (ADS)

    Sanchez, Elie

    1991-10-01

    With the use of fuzzy logic techniques, neural computing can be integrated into symbolic reasoning to solve complex real-world problems. In fact, artificial neural networks, expert systems, and fuzzy logic systems, in the context of approximate reasoning, share common features and techniques. A model of Fuzzy Connectionist Expert System is introduced, in which an artificial neural network is designed to construct the knowledge base of an expert system from training examples (this model can also be used for the specification of rules in fuzzy logic control). Two types of weights are associated with the synaptic connections in an AND-OR structure: primary linguistic weights, interpreted as labels of fuzzy sets, and secondary numerical weights. Cell activation is computed through min-max fuzzy equations of the weights. Learning consists in finding the (numerical) weights and the network topology. This feedforward network is described and first illustrated in a biomedical application (medical diagnosis assistance from inflammatory-syndrome/protein profiles). Then it is shown how this methodology can be utilized for handwritten pattern recognition (characters play the role of diagnoses): in a fuzzy neuron describing a number, for example, the linguistic weights represent fuzzy sets on cross-detecting lines and the numerical weights reflect the importance (or weakness) of connections between cross-detecting lines and characters.
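
    A minimal sketch of the min-max cell activation described above, assuming made-up membership values and numerical weights:

```python
import numpy as np

# Hedged sketch of a min-max fuzzy cell: AND (min) between each input
# membership and its connection weight, then OR (max) across connections.
# Weights and inputs here are made-up membership values.
def fuzzy_cell(memberships, weights):
    return float(np.max(np.minimum(memberships, weights)))

x = np.array([0.2, 0.9, 0.6])    # degrees of match to the linguistic labels
w = np.array([0.8, 0.7, 0.3])    # secondary numerical weights
print(fuzzy_cell(x, w))          # -> 0.7
```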

  7. Comparative Analysis of Soft Computing Models in Prediction of Bending Rigidity of Cotton Woven Fabrics

    NASA Astrophysics Data System (ADS)

    Guruprasad, R.; Behera, B. K.

    2015-10-01

    Quantitative prediction of fabric mechanical properties is an essential requirement for the design engineering of textile and apparel products. In this work, the possibility of predicting the bending rigidity of cotton woven fabrics has been explored with the application of an Artificial Neural Network (ANN) and two hybrid methodologies, namely neuro-genetic modeling and Adaptive Neuro-Fuzzy Inference System (ANFIS) modeling. For this purpose, a set of cotton woven grey fabrics was desized, scoured and relaxed. The fabrics were then conditioned and tested for bending properties. With the database thus created, a neural network model was first developed using back propagation as the learning algorithm. The second model was developed by applying a hybrid learning strategy, in which a genetic algorithm was first used to optimize the number of neurons and the connection weights of the neural network; the genetic-algorithm-optimized network structure was then further trained using the back propagation algorithm. In the third model, an ANFIS modeling approach was attempted to map the input-output data. The prediction performances of the models were compared and a sensitivity analysis was reported. The results show that the predictions by the neuro-genetic and ANFIS models were better than those of the back propagation neural network model.
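
    The sketch below illustrates the hybrid neuro-genetic strategy in a simplified form: a tiny genetic algorithm searches over the number of hidden neurons and back-propagation trains the selected network. The synthetic data and GA settings are assumptions, not the fabric data or model from the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# Hedged neuro-genetic sketch: a small GA searches over hidden-layer size,
# and back-propagation (MLPRegressor) trains each candidate.  Data are
# synthetic placeholders, not fabric measurements.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (300, 5))                     # assumed fabric parameters
y = X @ np.array([0.5, -0.2, 0.8, 0.1, 0.3]) + 0.1 * np.sin(5 * X[:, 0])

def fitness(hidden):
    model = MLPRegressor(hidden_layer_sizes=(int(hidden),), max_iter=300, random_state=0)
    return cross_val_score(model, X, y, cv=3).mean()   # mean R^2 across folds

population = list(rng.integers(2, 33, size=6))      # candidate hidden-layer sizes
for generation in range(3):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:3]                            # selection
    children = [max(2, (a + b) // 2 + rng.integers(-2, 3))   # crossover + mutation
                for a, b in zip(parents, parents[1:] + parents[:1])]
    population = parents + children

best = int(max(population, key=fitness))
final_model = MLPRegressor(hidden_layer_sizes=(best,), max_iter=1000).fit(X, y)
print("best hidden-layer size found:", best)
```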

  8. Neural networks for learning and prediction with applications to remote sensing and speech perception

    NASA Astrophysics Data System (ADS)

    Gjaja, Marin N.

    1997-11-01

    Neural networks for supervised and unsupervised learning are developed and applied to problems in remote sensing, continuous map learning, and speech perception. Adaptive Resonance Theory (ART) models are real-time neural networks for category learning, pattern recognition, and prediction. Unsupervised fuzzy ART networks synthesize fuzzy logic and neural networks, and supervised ARTMAP networks incorporate ART modules for prediction and classification. New ART and ARTMAP methods resulting from analyses of data structure, parameter specification, and category selection are developed. Architectural modifications providing flexibility for a variety of applications are also introduced and explored. A new methodology for automatic mapping from Landsat Thematic Mapper (TM) and terrain data, based on fuzzy ARTMAP, is developed. System capabilities are tested on a challenging remote sensing problem, prediction of vegetation classes in the Cleveland National Forest from spectral and terrain features. After training at the pixel level, performance is tested at the stand level, using sites not seen during training. Results are compared to those of maximum likelihood classifiers, back propagation neural networks, and K-nearest neighbor algorithms. Best performance is obtained using a hybrid system based on a convex combination of fuzzy ARTMAP and maximum likelihood predictions. This work forms the foundation for additional studies exploring fuzzy ARTMAP's capability to estimate class mixture composition for non-homogeneous sites. Exploratory simulations apply ARTMAP to the problem of learning continuous multidimensional mappings. A novel system architecture retains basic ARTMAP properties of incremental and fast learning in an on-line setting while adding components to solve this class of problems. The perceptual magnet effect is a language-specific phenomenon arising early in infant speech development that is characterized by a warping of speech sound perception. An unsupervised neural network model is proposed that embodies two principal hypotheses supported by experimental data--that sensory experience guides language-specific development of an auditory neural map and that a population vector can predict psychological phenomena based on map cell activities. Model simulations show how a nonuniform distribution of map cell firing preferences can develop from language-specific input and give rise to the magnet effect.
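
    A minimal unsupervised fuzzy ART sketch, using the standard complement coding, choice function, vigilance test, and fast learning; the parameter values and toy inputs are illustrative, not those used in the remote sensing study.

```python
import numpy as np

# Minimal fuzzy ART sketch (unsupervised category learning), assuming the
# standard choice/vigilance/fast-learning rules; parameters are illustrative.
alpha, rho, beta = 0.001, 0.75, 1.0

def complement_code(a):
    return np.concatenate([a, 1.0 - a])

def fuzzy_art(inputs):
    weights = []                                  # one weight vector per category
    labels = []
    for a in inputs:
        I = complement_code(a)
        # choice function for every existing category
        T = [np.sum(np.minimum(I, w)) / (alpha + np.sum(w)) for w in weights]
        for j in np.argsort(T)[::-1]:             # try categories by choice value
            match = np.sum(np.minimum(I, weights[j])) / np.sum(I)
            if match >= rho:                      # vigilance test passed
                weights[j] = beta * np.minimum(I, weights[j]) + (1 - beta) * weights[j]
                labels.append(int(j))
                break
        else:                                     # no category matched: create one
            weights.append(I.copy())
            labels.append(len(weights) - 1)
    return labels, weights

data = np.array([[0.1, 0.2], [0.15, 0.25], [0.9, 0.8], [0.85, 0.9]])
labels, _ = fuzzy_art(data)
print(labels)   # first two samples and last two samples form separate categories
```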

  9. Detection of bars in galaxies using a deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Abraham, Sheelu; Aniyan, A. K.; Kembhavi, Ajit K.; Philip, N. S.; Vaghmare, Kaustubh

    2018-06-01

    We present an automated method for the detection of bar structure in optical images of galaxies using a deep convolutional neural network that is easy to use and provides good accuracy. In our study, we use a sample of 9346 galaxies in the redshift range of 0.009-0.2 from the Sloan Digital Sky Survey (SDSS), which has 3864 barred galaxies, the rest being unbarred. We reach a top precision of 94 per cent in identifying bars in galaxies using the trained network. This accuracy matches the accuracy reached by human experts on the same data without additional information about the images. Since deep convolutional neural networks can be scaled to handle large volumes of data, the method is expected to have great relevance in an era where astronomy data is rapidly increasing in terms of volume, variety, volatility, and velocity along with other V's that characterize big data. With the trained model, we have constructed a catalogue of barred galaxies from SDSS and made it available online.
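
    The sketch below shows a small convolutional classifier of the kind described above for barred/unbarred galaxy cutouts; the layer sizes and the 64x64 single-band input are assumptions for illustration, not the architecture trained on the SDSS sample.

```python
import torch
import torch.nn as nn

# Hedged sketch of a small CNN for barred vs. unbarred galaxy cutouts.
# The architecture and the 64x64 single-band input are illustrative
# assumptions, not the network used in the paper.
class BarCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)        # barred vs. unbarred logits

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = BarCNN()
dummy = torch.randn(4, 1, 64, 64)                 # batch of 4 fake cutouts
logits = model(dummy)
print(logits.shape)                               # torch.Size([4, 2])
```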

  10. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    NASA Astrophysics Data System (ADS)

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood

    2015-10-01

    Artificial neural networks are efficient models in pattern recognition applications, but their performance is dependent on employing suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier based on gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering the features of speech signal that were related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on particle swarm optimisation (PSO) algorithm and its binary version, PSO and discrete firefly algorithm, and hybrid of error back-propagation and genetic algorithm that were used for optimisation. Experimental tests on Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.

  11. Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data

    PubMed Central

    2017-01-01

    In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations are coded as a set of real-number m-dimensional vectors that serve as the individuals of the PSO algorithm in the search procedure. During the search procedure, the PSO algorithm is employed to search for optimal network configurations via the particles moving in a finite search space, and the steepest gradient descent algorithm is used to train the DNN classifier with a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is performed with more epochs and the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capabilities of the PSO algorithm are exploited to determine an optimal solution that is close to the global optimum. We conducted several experiments on hand-written character and biological activity prediction datasets to show that the DNN classifiers trained with the network configurations expressed by the final solutions of the PSO algorithm, employed to construct an ensemble model and individual classifiers, outperform the random approach in terms of generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks. PMID:29236718
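
    The sketch below illustrates the PSO-plus-gradient-descent idea in a reduced form: two-dimensional particles encode (hidden units, log10 learning rate), and the fitness of each particle is the validation error after a short training run. The dataset, search ranges, and PSO constants are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Sketch of PSO over network hyperparameters under simplifying assumptions:
# particles encode (hidden units, log10 learning rate); fitness is the
# validation error after a few gradient-descent epochs.
X, y = load_digits(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X / 16.0, y, random_state=0)

def fitness(p):
    hidden, log_lr = int(round(p[0])), p[1]
    clf = MLPClassifier(hidden_layer_sizes=(max(hidden, 2),),
                        learning_rate_init=10 ** log_lr,
                        max_iter=30, random_state=0)
    clf.fit(X_tr, y_tr)                      # short run, as in the inner loop
    return 1.0 - clf.score(X_va, y_va)

rng = np.random.default_rng(0)
n_particles, w, c1, c2 = 6, 0.7, 1.5, 1.5
pos = rng.uniform([4, -4], [128, -1], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for it in range(5):                          # a handful of PSO iterations
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [4, -4], [128, -1])
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best (hidden units, log10 lr):", gbest, "error:", pbest_f.min())
```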

  12. Ground-state coding in partially connected neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1989-01-01

    Patterns over (-1,0,1) define, by their outer products, partially connected neural networks consisting of internally strongly connected, externally weakly connected subnetworks. The connectivity patterns may have highly organized structures, such as lattices and fractal trees or nests. Subpatterns over (-1,1) define the subcodes stored in the subnetworks, which agree in their common bits. It is first shown that the code words are locally stable states of the network, provided that each of the subcodes consists of mutually orthogonal words or of, at most, two words. Then it is shown that if each of the subcodes consists of two orthogonal words, the code words are the unique ground states (absolute minima) of the Hamiltonian associated with the network. The regions of attraction associated with the code words are shown to grow with the number of subnetworks sharing each of the neurons. Depending on the particular network architecture, the code sizes of partially connected networks can be vastly greater than those of fully connected ones, and their error correction capabilities can be significantly greater than those of the disconnected subnetworks. The codes associated with lattice-structured and hierarchical networks are discussed in some detail.
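
    A toy sketch of the outer-product construction described above: patterns over (-1, 0, 1) define a block-structured weight matrix whose nonzero blocks are the internally connected subnetworks, and a code word combining one sub-word per subnetwork is checked for local stability under the associated Hamiltonian. The patterns are illustrative, not the lattice or fractal codes discussed in the paper.

```python
import numpy as np

# Toy outer-product (Hebbian) storage for patterns over {-1, 0, 1}: zeros
# leave a neuron outside a subnetwork, so the nonzero blocks form internally
# connected subnetworks.  Illustrative patterns, not the paper's codes.
patterns = np.array([
    [ 1, -1,  1, -1,  0,  0,  0,  0],
    [-1, -1,  1,  1,  0,  0,  0,  0],
    [ 0,  0,  0,  0,  1,  1, -1, -1],
    [ 0,  0,  0,  0,  1, -1, -1,  1],
])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)                      # no self-connections

def energy(s):
    return -0.5 * s @ W @ s                   # Hopfield Hamiltonian

def is_locally_stable(s):
    h = W @ s                                 # local fields
    return bool(np.all(np.sign(h)[s != 0] == s[s != 0]))

# A full code word combines one sub-word from each subnetwork.
code_word = patterns[0] + patterns[2]
print(is_locally_stable(code_word), energy(code_word))
```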

  13. Structural and functional correlates for language efficiency in auditory word processing.

    PubMed

    Jung, JeYoung; Kim, Sunmi; Cho, Hyesuk; Nam, Kichun

    2017-01-01

    This study aims to provide a convergent understanding of the neural basis of auditory word processing efficiency using multimodal imaging. We investigated the structural and functional correlates of word processing efficiency in healthy individuals. We acquired two structural imaging modalities (T1-weighted imaging and diffusion tensor imaging) and functional magnetic resonance imaging (fMRI) during auditory word processing (phonological and semantic tasks). Our results showed that better phonological performance was predicted by greater thalamus activity. In contrast, better semantic performance was associated with less activation in the left posterior middle temporal gyrus (pMTG), supporting the neural efficiency hypothesis that better task performance requires less brain activation. Furthermore, our network analysis revealed that a semantic network including the left anterior temporal lobe (ATL), dorsolateral prefrontal cortex (DLPFC) and pMTG was correlated with semantic efficiency. In particular, this network operated in a neurally efficient manner during auditory word processing. Structurally, the DLPFC and cingulum contributed to word processing efficiency, and the parietal cortex also showed a significant association with word processing efficiency. Our results demonstrated that the two features of word processing efficiency, phonology and semantics, are supported by different brain regions and, importantly, that the way each region supports them differs according to the feature of word processing. Our findings suggest that word processing efficiency is achieved through the structural and functional collaboration of multiple brain regions involved in language and general cognitive function.

  14. Structural and functional correlates for language efficiency in auditory word processing

    PubMed Central

    Kim, Sunmi; Cho, Hyesuk; Nam, Kichun

    2017-01-01

    This study aims to provide a convergent understanding of the neural basis of auditory word processing efficiency using multimodal imaging. We investigated the structural and functional correlates of word processing efficiency in healthy individuals. We acquired two structural imaging modalities (T1-weighted imaging and diffusion tensor imaging) and functional magnetic resonance imaging (fMRI) during auditory word processing (phonological and semantic tasks). Our results showed that better phonological performance was predicted by greater thalamus activity. In contrast, better semantic performance was associated with less activation in the left posterior middle temporal gyrus (pMTG), supporting the neural efficiency hypothesis that better task performance requires less brain activation. Furthermore, our network analysis revealed that a semantic network including the left anterior temporal lobe (ATL), dorsolateral prefrontal cortex (DLPFC) and pMTG was correlated with semantic efficiency. In particular, this network operated in a neurally efficient manner during auditory word processing. Structurally, the DLPFC and cingulum contributed to word processing efficiency, and the parietal cortex also showed a significant association with word processing efficiency. Our results demonstrated that the two features of word processing efficiency, phonology and semantics, are supported by different brain regions and, importantly, that the way each region supports them differs according to the feature of word processing. Our findings suggest that word processing efficiency is achieved through the structural and functional collaboration of multiple brain regions involved in language and general cognitive function. PMID:28892503

  15. MUFOLD-SS: New deep inception-inside-inception networks for protein secondary structure prediction.

    PubMed

    Fang, Chao; Shang, Yi; Xu, Dong

    2018-05-01

    Protein secondary structure prediction can provide important information for protein 3D structure prediction and protein function. Deep learning offers a new opportunity to significantly improve prediction accuracy. In this article, a new deep neural network architecture, named the Deep inception-inside-inception (Deep3I) network, is proposed for protein secondary structure prediction and implemented as a software tool, MUFOLD-SS. The input to MUFOLD-SS is a carefully designed feature matrix corresponding to the primary amino acid sequence of a protein, which consists of a rich set of information derived from individual amino acids as well as the context of the protein sequence. Specifically, the feature matrix is a composition of physio-chemical properties of amino acids, the PSI-BLAST profile, and the HHBlits profile. MUFOLD-SS is composed of a sequence of nested inception modules and maps the input matrix to either eight states or three states of secondary structure. The architecture of MUFOLD-SS enables effective processing of local and global interactions between amino acids in making accurate predictions. In extensive experiments on multiple datasets, MUFOLD-SS outperformed the best existing methods and other deep neural networks significantly. MUFold-SS can be downloaded from http://dslsrv8.cs.missouri.edu/~cf797/MUFoldSS/download.html. © 2018 Wiley Periodicals, Inc.
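
    The sketch below conveys the inception-inside-inception idea for sequence features: parallel 1-D convolutions of different widths, with one branch that is itself an inception block. Channel counts, kernel widths, and the 50-feature input are assumptions, not the MUFOLD-SS configuration.

```python
import torch
import torch.nn as nn

# Rough "inception-inside-inception" sketch for sequence features: an
# inception block of parallel 1-D convolutions, nested so that one outer
# branch is itself an inception block.  Sizes are illustrative assumptions.
class Inception1D(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5)
        ])

    def forward(self, x):                        # x: (batch, channels, length)
        return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)

class Deep3IBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.outer_plain = nn.Conv1d(in_ch, out_ch, 3, padding=1)
        self.outer_nested = Inception1D(in_ch, out_ch)     # inception inside inception

    def forward(self, x):
        return torch.cat([torch.relu(self.outer_plain(x)), self.outer_nested(x)], dim=1)

net = nn.Sequential(Deep3IBlock(50, 16), nn.Conv1d(16 * 4, 8, 1))  # 8 SS states
profile = torch.randn(2, 50, 120)        # batch of 2 protein windows, length 120
print(net(profile).shape)                # torch.Size([2, 8, 120])
```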

  16. Handedness is related to neural mechanisms underlying hemispheric lateralization of face processing

    PubMed Central

    Frässle, Stefan; Krach, Sören; Paulus, Frieder Michel; Jansen, Andreas

    2016-01-01

    While the right-hemispheric lateralization of the face perception network is well established, recent evidence suggests that handedness affects the cerebral lateralization of face processing at the hierarchical level of the fusiform face area (FFA). However, the neural mechanisms underlying differential hemispheric lateralization of face perception in right- and left-handers are largely unknown. Using dynamic causal modeling (DCM) for fMRI, we aimed to unravel the putative processes that mediate handedness-related differences by investigating the effective connectivity in the bilateral core face perception network. Our results reveal an enhanced recruitment of the left FFA in left-handers compared to right-handers, as evidenced by more pronounced face-specific modulatory influences on both intra- and interhemispheric connections. As structural and physiological correlates of handedness-related differences in face processing, right- and left-handers varied with regard to their gray matter volume in the left fusiform gyrus and their pupil responses to face stimuli. Overall, these results describe how handedness is related to the lateralization of the core face perception network, and point to different neural mechanisms underlying face processing in right- and left-handers. In a wider context, this demonstrates the entanglement of structurally and functionally remote brain networks, suggesting a broader underlying process regulating brain lateralization. PMID:27250879

  17. Handedness is related to neural mechanisms underlying hemispheric lateralization of face processing

    NASA Astrophysics Data System (ADS)

    Frässle, Stefan; Krach, Sören; Paulus, Frieder Michel; Jansen, Andreas

    2016-06-01

    While the right-hemispheric lateralization of the face perception network is well established, recent evidence suggests that handedness affects the cerebral lateralization of face processing at the hierarchical level of the fusiform face area (FFA). However, the neural mechanisms underlying differential hemispheric lateralization of face perception in right- and left-handers are largely unknown. Using dynamic causal modeling (DCM) for fMRI, we aimed to unravel the putative processes that mediate handedness-related differences by investigating the effective connectivity in the bilateral core face perception network. Our results reveal an enhanced recruitment of the left FFA in left-handers compared to right-handers, as evidenced by more pronounced face-specific modulatory influences on both intra- and interhemispheric connections. As structural and physiological correlates of handedness-related differences in face processing, right- and left-handers varied with regard to their gray matter volume in the left fusiform gyrus and their pupil responses to face stimuli. Overall, these results describe how handedness is related to the lateralization of the core face perception network, and point to different neural mechanisms underlying face processing in right- and left-handers. In a wider context, this demonstrates the entanglement of structurally and functionally remote brain networks, suggesting a broader underlying process regulating brain lateralization.

  18. Fetal brain extracellular matrix boosts neuronal network formation in 3D bioengineered model of cortical brain tissue.

    PubMed

    Sood, Disha; Chwalek, Karolina; Stuntz, Emily; Pouli, Dimitra; Du, Chuang; Tang-Schomer, Min; Georgakoudi, Irene; Black, Lauren D; Kaplan, David L

    2016-01-01

    The extracellular matrix (ECM), constituting up to 20% of the organ volume, is a significant component of the brain due to its instructive role in the compartmentalization of functional microdomains in every brain structure. The composition, quantity and structure of the ECM change dramatically during the development of an organism, greatly contributing to the remarkably sophisticated architecture and function of the brain. Since the fetal brain is highly plastic, we hypothesize that fetal brain ECM may contain cues promoting neural growth and differentiation, which are highly desirable in regenerative medicine. Thus, we studied the effect of brain-derived fetal and adult ECM, complemented with matricellular proteins, on cortical neurons using an in vitro 3D bioengineered model of cortical brain tissue. The tested parameters included neuronal network density, cell viability, calcium signaling and electrophysiology. Both adult and fetal brain ECM, as well as matricellular proteins, significantly improved neural network formation as compared to a single-component collagen I matrix. Additionally, the brain ECM improved cell viability and lowered glutamate release. The fetal brain ECM induced superior neural network formation, calcium signaling and spontaneous spiking activity compared to adult brain ECM. This study highlights the difference in the neuroinductive properties of fetal and adult brain ECM and suggests that delineating the basis for this divergence may have implications for regenerative medicine.

  19. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science.

    PubMed

    Mocanu, Decebal Constantin; Mocanu, Elena; Stone, Peter; Nguyen, Phuong H; Gibescu, Madeleine; Liotta, Antonio

    2018-06-19

    Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdős-Rényi random graph) of two consecutive layers of neurons into a scale-free topology during learning. Our method replaces the fully-connected layers of artificial neural networks with sparse ones before training, quadratically reducing the number of parameters with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
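
    The sketch below shows the weight-evolution step of sparse evolutionary training for a single layer under assumed hyperparameters: an Erdős-Rényi sparse mask is created, and after each training epoch the smallest-magnitude connections are pruned and the same number regrown at random empty positions.

```python
import numpy as np

# Conceptual sketch of the sparse-evolutionary-training weight-evolution step
# for one layer: Erdos-Renyi sparse mask, then prune the weakest connections
# and regrow as many at random empty positions after each epoch.  The
# epsilon/zeta values are illustrative.
rng = np.random.default_rng(0)
n_in, n_out, epsilon, zeta = 784, 300, 11, 0.3

# Erdos-Renyi sparsity: connection probability ~ epsilon*(n_in+n_out)/(n_in*n_out)
p = epsilon * (n_in + n_out) / (n_in * n_out)
mask = rng.random((n_in, n_out)) < p
weights = np.where(mask, rng.normal(0, 0.1, (n_in, n_out)), 0.0)

def evolve_connections(weights, mask):
    """Prune the zeta fraction of weakest links, regrow as many random ones."""
    active = np.flatnonzero(mask)
    k = int(zeta * active.size)
    weakest = active[np.argsort(np.abs(weights.ravel()[active]))[:k]]
    mask.ravel()[weakest] = False
    weights.ravel()[weakest] = 0.0
    empty = np.flatnonzero(~mask.ravel())
    regrow = rng.choice(empty, size=k, replace=False)
    mask.ravel()[regrow] = True
    weights.ravel()[regrow] = rng.normal(0, 0.1, k)     # fresh random weights
    return weights, mask

# ... train the sparse layer for one epoch here (gradient steps on nonzero weights) ...
weights, mask = evolve_connections(weights, mask)
print("active connections:", int(mask.sum()), "of", n_in * n_out)
```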

  20. Quantum neural networks: Current status and prospects for development

    NASA Astrophysics Data System (ADS)

    Altaisky, M. V.; Kaputkina, N. E.; Krylov, V. A.

    2014-11-01

    The idea of quantum artificial neural networks, first formulated in [34], unites the artificial neural network concept with the quantum computation paradigm. Quantum artificial neural networks were first systematically considered in the PhD thesis of T. Menneer (1998). Based on the works of Menneer and Narayanan [42, 43], Kouda, Matsui, and Nishimura [35, 36], Altaisky [2, 68], Zhou [67], and others, quantum-inspired learning algorithms for neural networks have been developed and are now used in various training programs and computer games [29, 30]. The first practically realizable, scaled, hardware-implemented model of a quantum artificial neural network was obtained by D-Wave Systems, Inc. [33]: a quantum Hopfield network implemented on the basis of superconducting quantum interference devices (SQUIDs). In this work we analyze the possibilities and underlying principles of an alternative way to implement quantum neural networks, on the basis of quantum dots. The possibility of using quantum neural network algorithms in automated control systems, associative memory devices, and in modeling biological and social networks is also examined.
