Sample records for complex neural network

  1. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    PubMed Central

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights to the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866

  2. Pruning artificial neural networks using neural complexity measures.

    PubMed

    Jorgensen, Thomas D; Haynes, Barry P; Norlund, Charlotte C F

    2008-10-01

    This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the neural network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique proposed here helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduced dimensionality of the network.
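
    A quick way to see what the Magnitude Based Pruning baseline above does is to rank trained connections by absolute weight and zero out the smallest ones. The NumPy sketch below implements that baseline only; the information-theoretic complexity criterion of the paper would replace the ranking step, and all names here are illustrative rather than the authors' code.

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Zero out the smallest-magnitude entries of a trained weight matrix.

    weights  : 2-D array of trained connection weights
    fraction : share of connections to remove (0..1)
    """
    w = weights.copy()
    k = int(fraction * w.size)
    if k == 0:
        return w
    # magnitude of the k-th smallest weight becomes the pruning cutoff
    flat = np.abs(w).ravel()
    cutoff = np.partition(flat, k - 1)[k - 1]
    w[np.abs(w) <= cutoff] = 0.0
    return w

# illustrative use: prune roughly 30% of a random "trained" layer
rng = np.random.default_rng(0)
layer = rng.normal(size=(8, 16))
pruned = magnitude_prune(layer, 0.30)
print("remaining connections:", np.count_nonzero(pruned), "of", layer.size)
```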

  3. Network complexity as a measure of information processing across resting-state networks: evidence from the Human Connectome Project

    PubMed Central

    McDonough, Ian M.; Nashiro, Kaoru

    2014-01-01

    An emerging field of research focused on fluctuations in brain signals has provided evidence that the complexity of those signals, as measured by entropy, conveys important information about network dynamics (e.g., local and distributed processing). While much research has focused on how neural complexity differs in populations with different age groups or clinical disorders, substantially less research has focused on the basic understanding of neural complexity in populations with young and healthy brain states. The present study used resting-state fMRI data from the Human Connectome Project (Van Essen et al., 2013) to test the extent to which neural complexity in the BOLD signal, as measured by multiscale entropy, (1) would differ from random noise, (2) would differ between four major resting-state networks previously associated with higher-order cognition, and (3) would be associated with the strength and extent of functional connectivity—a complementary method of estimating information processing. We found that complexity in the BOLD signal exhibited different patterns of complexity from white, pink, and red noise and that neural complexity was differentially expressed between resting-state networks, including the default mode, cingulo-opercular, left and right frontoparietal networks. Lastly, neural complexity across all networks was negatively associated with functional connectivity at fine scales, but was positively associated with functional connectivity at coarse scales. The present study is the first to characterize neural complexity in BOLD signals at a high temporal resolution and across different networks and might help clarify the inconsistencies between neural complexity and functional connectivity, thus informing the mechanisms underlying neural complexity. PMID:24959130
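
    Multiscale entropy, the complexity measure used in this record, coarse-grains a signal at increasing scales and computes the sample entropy of each coarse-grained series. The sketch below is a minimal, unoptimized implementation under common default choices (m = 2, tolerance r = 0.15 of the standard deviation); it is illustrative and not the study's analysis pipeline.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy of a 1-D signal (tolerance given as a fraction of its SD)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def count_matches(m):
        # overlapping templates of length m, compared with the Chebyshev distance
        templates = np.array([x[i:i + m] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= tol)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=5, m=2, r=0.15):
    """Coarse-grain the signal at each scale and return the entropy curve."""
    x = np.asarray(x, dtype=float)
    mse = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)
        mse.append(sample_entropy(coarse, m, r))
    return np.array(mse)

# white noise: high entropy at fine scales that falls off with coarse-graining
print(multiscale_entropy(np.random.default_rng(1).normal(size=2000)))
```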

  4. Fast Recall for Complex-Valued Hopfield Neural Networks with Projection Rules.

    PubMed

    Kobayashi, Masaki

    2017-01-01

    Many models of neural networks have been extended to complex-valued neural networks. A complex-valued Hopfield neural network (CHNN) is a complex-valued version of a Hopfield neural network. Complex-valued neurons can represent multistates, and CHNNs are available for the storage of multilevel data, such as gray-scale images. CHNNs are often trapped in local minima, and their noise tolerance is low. Lee improved the noise tolerance of CHNNs by detecting and exiting the local minima. In the present work, we propose a new recall algorithm that eliminates the local minima. Through computer simulations, we show that our proposed recall algorithm not only accelerates recall but also improves noise tolerance.
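
    For background, a complex-valued Hopfield network with the classical projection (pseudo-inverse) learning rule can be sketched in a few lines: patterns are vectors of K-th roots of unity, the weight matrix is W = X(X^H X)^{-1} X^H, and recall repeatedly quantizes each weighted sum back onto the K allowed phases. This is a generic textbook-style sketch, not the new recall algorithm proposed in the record above.

```python
import numpy as np

K = 8  # number of states per neuron (e.g., gray levels)

def quantize(z):
    """Snap complex activations onto the K-th roots of unity."""
    phase = np.round(np.angle(z) / (2 * np.pi / K)) % K
    return np.exp(1j * 2 * np.pi * phase / K)

def projection_weights(patterns):
    """Projection (pseudo-inverse) rule: W = X (X^H X)^{-1} X^H."""
    X = np.column_stack(patterns)           # N x P matrix of stored patterns
    return X @ np.linalg.inv(X.conj().T @ X) @ X.conj().T

def recall(W, probe, steps=20):
    """Synchronous recall: repeatedly apply W and re-quantize the state."""
    state = probe.copy()
    for _ in range(steps):
        new_state = quantize(W @ state)
        if np.allclose(new_state, state):
            break
        state = new_state
    return state

rng = np.random.default_rng(0)
N, P = 64, 4
patterns = [np.exp(1j * 2 * np.pi * rng.integers(0, K, N) / K) for _ in range(P)]
W = projection_weights(patterns)

# perturb the phases of one stored pattern and try to recover it
noisy = patterns[0] * np.exp(1j * rng.normal(0, 0.3, N))
print("recovered:", np.allclose(recall(W, quantize(noisy)), patterns[0]))
```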

  5. Adaptive exponential synchronization of complex-valued Cohen-Grossberg neural networks with known and unknown parameters.

    PubMed

    Hu, Jin; Zeng, Chunna

    2017-02-01

    The complex-valued Cohen-Grossberg neural network is a special kind of complex-valued neural network. In this paper, the synchronization problem of a class of complex-valued Cohen-Grossberg neural networks with known and unknown parameters is investigated. By using Lyapunov functionals and the adaptive control method based on parameter identification, some adaptive feedback schemes are proposed to achieve synchronization exponentially between the drive and response systems. The results obtained in this paper have extended and improved some previous works on adaptive synchronization of Cohen-Grossberg neural networks. Finally, two numerical examples are given to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Nonlinearly Activated Neural Network for Solving Time-Varying Complex Sylvester Equation.

    PubMed

    Li, Shuai; Li, Yangming

    2013-10-28

    The Sylvester equation is often encountered in mathematics and control theory. For the general time-invariant Sylvester equation problem, which is defined in the domain of complex numbers, the Bartels-Stewart algorithm and its extensions are effective and widely used with an O(n³) time complexity. When applied to solving the time-varying Sylvester equation, the computation burden increases intensively with the decrease of sampling period and cannot satisfy continuous real-time calculation requirements. For the special case of the general Sylvester equation problem defined in the domain of real numbers, gradient-based recurrent neural networks are able to solve the time-varying Sylvester equation in real time, but there always exists an estimation error, whereas a recently proposed recurrent neural network by Zhang et al. [this type of neural network is called Zhang neural network (ZNN)] converges to the solution ideally. The advancements in complex-valued neural networks cast light on extending the existing real-valued ZNN for solving the time-varying real-valued Sylvester equation to its counterpart in the domain of complex numbers. In this paper, a complex-valued ZNN for solving the complex-valued Sylvester equation problem is investigated and the global convergence of the neural network is proven with the proposed nonlinear complex-valued activation functions. Moreover, a special type of activation function with a core function, called sign-bi-power function, is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing. In this case, the upper bound of the convergence time is also derived analytically. Simulations are performed to evaluate and compare the performance of the neural network with different parameters and activation functions. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed method.
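
    The ZNN design referenced above starts from the error E(t) = A(t)X(t) + X(t)B(t) - C(t) (the Sylvester form assumed here) and imposes dE/dt = -γΦ(E), which yields an implicit dynamical system for X(t). One minimal way to simulate it is to vectorize the equation with Kronecker products and integrate with forward Euler, as in the sketch below, which uses the linear activation Φ(E) = E and numerically estimated time derivatives; it is an illustrative simulation rather than the finite-time sign-bi-power design analysed in the paper.

```python
import numpy as np

# ZNN sketch for a time-varying Sylvester equation, assumed here in the form
#   A(t) X(t) + X(t) B(t) = C(t)
# Design: let E = A X + X B - C and impose dE/dt = -gamma * E (linear activation).

gamma, dt, T = 50.0, 1e-3, 2.0
n = 2
I = np.eye(n)

def A(t): return np.array([[2 + np.sin(t), 0.5j], [0.0, 2 - 1j * np.cos(t)]])
def B(t): return np.array([[1.0, 0.2], [0.1j, 1.5]])
def C(t): return np.array([[np.cos(t), 1j * np.sin(t)], [1.0, np.exp(1j * t)]])

def M(t):
    # vec(A X + X B) = (I kron A + B^T kron I) vec(X), with column-major vec
    return np.kron(I, A(t)) + np.kron(B(t).T, I)

def vec(X): return X.reshape(-1, order="F")

x = np.zeros(n * n, dtype=complex)     # vec(X(0)), arbitrary starting point
eps = 1e-6                             # step for numerical time derivatives
for k in range(int(T / dt)):
    t = k * dt
    Mdot = (M(t + eps) - M(t)) / eps
    cdot = (vec(C(t + eps)) - vec(C(t))) / eps
    err = M(t) @ x - vec(C(t))
    # implicit ZNN dynamics: M xdot = -Mdot x + cdot - gamma * err
    xdot = np.linalg.solve(M(t), -Mdot @ x + cdot - gamma * err)
    x = x + dt * xdot

X = x.reshape(n, n, order="F")
print("residual ||A X + X B - C|| at t = T:",
      np.linalg.norm(A(T) @ X + X @ B(T) - C(T)))
```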

  7. Predicting protein complex geometries with a neural network.

    PubMed

    Chae, Myong-Ho; Krull, Florian; Lorenzen, Stephan; Knapp, Ernst-Walter

    2010-03-01

    A major challenge of the protein docking problem is to define scoring functions that can distinguish near-native protein complex geometries from a large number of non-native geometries (decoys) generated with noncomplexed protein structures (unbound docking). In this study, we have constructed a neural network that employs the information from atom-pair distance distributions of a large number of decoys to predict protein complex geometries. We found that docking prediction can be significantly improved using two different types of polar hydrogen atoms. To train the neural network, 2000 near-native decoys of even distance distribution were used for each of the 185 considered protein complexes. The neural network normalizes the information from different protein complexes using an additional protein complex identity input neuron for each complex. The parameters of the neural network were determined such that they mimic a scoring funnel in the neighborhood of the native complex structure. The neural network approach avoids the reference state problem, which occurs in deriving knowledge-based energy functions for scoring. We show that a distance-dependent atom pair potential performs much better than a simple atom-pair contact potential. We have compared the performance of our scoring function with other empirical and knowledge-based scoring functions such as ZDOCK 3.0, ZRANK, ITScore-PP, EMPIRE, and RosettaDock. In spite of the simplicity of the method and its functional form, our neural network-based scoring function achieves a reasonable performance in rigid-body unbound docking of proteins. Proteins 2010. (c) 2009 Wiley-Liss, Inc.

  8. Quasi-projective synchronization of fractional-order complex-valued recurrent neural networks.

    PubMed

    Yang, Shuai; Yu, Juan; Hu, Cheng; Jiang, Haijun

    2018-08-01

    In this paper, without separating the complex-valued neural networks into two real-valued systems, the quasi-projective synchronization of fractional-order complex-valued neural networks is investigated. First, two new fractional-order inequalities are established by using the theory of complex functions, Laplace transform and Mittag-Leffler functions, which generalize traditional inequalities with the first-order derivative in the real domain. Additionally, different from hybrid control schemes given in the previous work concerning the projective synchronization, a simple and linear control strategy is designed in this paper and several criteria are derived to ensure quasi-projective synchronization of the complex-valued neural networks with fractional-order based on the established fractional-order inequalities and the theory of complex functions. Moreover, the error bounds of quasi-projective synchronization are estimated. Especially, some conditions are also presented for the Mittag-Leffler synchronization of the addressed neural networks. Finally, some numerical examples with simulations are provided to show the effectiveness of the derived theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Application of artificial neural networks to composite ply micromechanics

    NASA Technical Reports Server (NTRS)

    Brown, D. A.; Murthy, P. L. N.; Berke, L.

    1991-01-01

    Artificial neural networks can provide improved computational efficiency relative to existing methods when an algorithmic description of functional relationships is either totally unavailable or is complex in nature. For complex calculations, significant reductions in elapsed computation time are possible. The primary goal is to demonstrate the applicability of artificial neural networks to composite material characterization. As a test case, a neural network was trained to accurately predict composite hygral, thermal, and mechanical properties when provided with basic information concerning the environment, constituent materials, and component ratios used in the creation of the composite. A brief introduction on neural networks is provided along with a description of the project itself.

  10. Master-slave exponential synchronization of delayed complex-valued memristor-based neural networks via impulsive control.

    PubMed

    Li, Xiaofan; Fang, Jian-An; Li, Huiyuan

    2017-09-01

    This paper investigates master-slave exponential synchronization for a class of complex-valued memristor-based neural networks with time-varying delays via discontinuous impulsive control. Firstly, the master and slave complex-valued memristor-based neural networks with time-varying delays are translated to two real-valued memristor-based neural networks. Secondly, an impulsive control law is constructed and utilized to guarantee master-slave exponential synchronization of the neural networks. Thirdly, the master-slave synchronization problems are transformed into the stability problems of the master-slave error system. By employing linear matrix inequality (LMI) technique and constructing an appropriate Lyapunov-Krasovskii functional, some sufficient synchronization criteria are derived. Finally, a numerical simulation is provided to illustrate the effectiveness of the obtained theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. On the complexity of neural network classifiers: a comparison between shallow and deep architectures.

    PubMed

    Bianchini, Monica; Scarselli, Franco

    2014-08-01

    Recently, researchers in the artificial neural network field have focused their attention on connectionist models composed by several hidden layers. In fact, experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications, facing very complex problems, e.g., vision and human language understanding. However, the actual theoretical results supporting such a claim are still few and incomplete. In this paper, we propose a new approach to study how the depth of feedforward neural networks impacts their ability to implement high-complexity functions. First, a new measure based on topological concepts is introduced, aimed at evaluating the complexity of the function implemented by a neural network, used for classification purposes. Then, deep and shallow neural architectures with common sigmoidal activation functions are compared, by deriving upper and lower bounds on their complexity, and studying how the complexity depends on the number of hidden units and the used activation function. The obtained results seem to support the idea that deep networks actually implement functions of higher complexity, so that they are able, with the same number of resources, to address more difficult problems.

  12. Modeling fluctuations in default-mode brain network using a spiking neural network.

    PubMed

    Yamanishi, Teruya; Liu, Jian-Qin; Nishimura, Haruhiko

    2012-08-01

    Recently, numerous attempts have been made to understand the dynamic behavior of complex brain systems using neural network models. The fluctuations in blood-oxygen-level-dependent (BOLD) brain signals at less than 0.1 Hz have been observed by functional magnetic resonance imaging (fMRI) for subjects in a resting state. This phenomenon is referred to as a "default-mode brain network." In this study, we model the default-mode brain network by functionally connecting neural communities composed of spiking neurons in a complex network. Through computational simulations of the model, including transmission delays and complex connectivity, the network dynamics of the neural system and its behavior are discussed. The results show that the power spectrum of the modeled fluctuations in the neuron firing patterns is consistent with the default-mode brain network's BOLD signals when transmission delays, a characteristic property of the brain, have finite values in a given range.

  13. Some comparisons of complexity in dictionary-based and linear computational models.

    PubMed

    Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello

    2011-03-01

    Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. Complex Rotation Quantum Dynamic Neural Networks (CRQDNN) using Complex Quantum Neuron (CQN): Applications to time series prediction.

    PubMed

    Cui, Yiqian; Shi, Junyou; Wang, Zili

    2015-11-01

    Quantum Neural Network (QNN) models have attracted great attention since they introduce a new neural computing approach based on quantum entanglement. However, the existing QNN models are mainly based on real quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes deep quantum entanglement. A novel hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), is also proposed based on the CQN. CRQDNN is a three-layer model with both CQN and classical neurons. An infinite impulse response (IIR) filter is embedded in the network model to provide the memory needed to process time series inputs. The Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is applied to time series prediction. Two application studies are presented: chaotic time series prediction and electronic remaining useful life (RUL) prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Gear Fault Diagnosis Based on BP Neural Network

    NASA Astrophysics Data System (ADS)

    Huang, Yongsheng; Huang, Ruoshi

    2018-03-01

    Gear transmissions are complex and widely used across machinery fields, and their fault modes exhibit nonlinear characteristics. This paper uses a BP neural network to learn four typical gear failure modes and achieves satisfactory results. When evaluated on test data, the network's predictions agree with the actual failure modes. The results show that the BP neural network can effectively handle the complex fault states encountered in gear fault diagnosis.
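
    Training a small back-propagation classifier on a handful of fault classes is straightforward with standard tooling. The sketch below uses scikit-learn's MLPClassifier on synthetic feature vectors standing in for vibration features; the four class labels, the feature dimension, and the data are placeholders, not the paper's experiment.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FAULTS = ["normal", "tooth_break", "pitting", "wear"]   # placeholder class labels

# synthetic stand-in for extracted vibration features (e.g., spectral statistics)
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(60, 8)) for i in range(len(FAULTS))])
y = np.repeat(np.arange(len(FAULTS)), 60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# a single hidden layer trained by back-propagation
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```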

  16. Collision detection in complex dynamic scenes using an LGMD-based visual neural network with feature enhancement.

    PubMed

    Yue, Shigang; Rind, F Claire

    2006-05-01

    The lobula giant movement detector (LGMD) is an identified neuron in the locust brain that responds most strongly to the images of an approaching object such as a predator. Its computational model can cope with unpredictable environments without using specific object recognition algorithms. In this paper, an LGMD-based neural network is proposed with a new feature enhancement mechanism to enhance the expanded edges of colliding objects via grouped excitation for collision detection with complex backgrounds. The isolated excitation caused by background detail will be filtered out by the new mechanism. Offline tests demonstrated the advantages of the presented LGMD-based neural network in complex backgrounds. Real time robotics experiments using the LGMD-based neural network as the only sensory system showed that the system worked reliably in a wide range of conditions; in particular, the robot was able to navigate in arenas with structured surrounds and complex backgrounds.

  17. An artificial neural network improves prediction of observed survival in patients with laryngeal squamous carcinoma.

    PubMed

    Jones, Andrew S; Taktak, Azzam G F; Helliwell, Timothy R; Fenton, John E; Birchall, Martin A; Husband, David J; Fisher, Anthony C

    2006-06-01

    The accepted method of modelling and predicting failure/survival, Cox's proportional hazards model, is theoretically inferior to neural network derived models for analysing highly complex systems with large datasets. We performed a blinded comparison of the neural network versus Cox's model in predicting survival, utilising data from 873 treated patients with laryngeal cancer. These were divided randomly and equally into a training set and a study set, and the Cox and neural network models were applied in turn. Data were then divided into seven sets of binary covariates and the analysis repeated. Overall survival was not significantly different on Kaplan-Meier plot, or with either test model. Although the network produced qualitatively similar results to Cox's model, it was significantly more sensitive to differences in survival curves for age and N stage. We propose that neural networks are capable of prediction in systems involving complex interactions between variables and non-linearity.

  18. Boundedness and global robust stability analysis of delayed complex-valued neural networks with interval parameter uncertainties.

    PubMed

    Song, Qiankun; Yu, Qinqin; Zhao, Zhenjiang; Liu, Yurong; Alsaadi, Fuad E

    2018-07-01

    In this paper, the boundedness and robust stability for a class of delayed complex-valued neural networks with interval parameter uncertainties are investigated. By using Homomorphic mapping theorem, Lyapunov method and inequality techniques, sufficient condition to guarantee the boundedness of networks and the existence, uniqueness and global robust stability of equilibrium point is derived for the considered uncertain neural networks. The obtained robust stability criterion is expressed in complex-valued LMI, which can be calculated numerically using YALMIP with solver of SDPT3 in MATLAB. An example with simulations is supplied to show the applicability and advantages of the acquired result. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Decomposition of Rotor Hopfield Neural Networks Using Complex Numbers.

    PubMed

    Kobayashi, Masaki

    2018-04-01

    A complex-valued Hopfield neural network (CHNN) is a multistate model of a Hopfield neural network. It has the disadvantage of low noise tolerance. Meanwhile, a symmetric CHNN (SCHNN) is a modification of a CHNN that improves noise tolerance. Furthermore, a rotor Hopfield neural network (RHNN) is an extension of a CHNN. It has twice the storage capacity of CHNNs and SCHNNs, and much better noise tolerance than CHNNs, although it requires twice as many connection parameters. In this brief, we investigate the relations between CHNNs, SCHNNs, and RHNNs; an RHNN is uniquely decomposed into a CHNN and an SCHNN. In addition, the Hebbian learning rule for RHNNs is decomposed into those for CHNNs and SCHNNs.
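
    The decomposition described above has a simple linear-algebra core: a rotor connection is a 2x2 real weight matrix, and any such matrix splits uniquely into a rotation-scaling part (acting like multiplication by a complex weight w, the CHNN part) and a reflection-scaling part (acting like multiplication by a complex weight v followed by conjugation, the SCHNN-like part). The sketch below only checks this identity numerically and is not the paper's derivation.

```python
import numpy as np

def decompose(M):
    """Split a 2x2 real matrix into w*z and v*conj(z) components."""
    a = (M[0, 0] + M[1, 1]) / 2    # rotation-scaling part  ~ complex w = a + i b
    b = (M[1, 0] - M[0, 1]) / 2
    c = (M[0, 0] - M[1, 1]) / 2    # reflection-scaling part ~ complex v = c + i d
    d = (M[0, 1] + M[1, 0]) / 2
    return complex(a, b), complex(c, d)

def apply_matrix(M, z):
    """Apply the 2x2 matrix to z viewed as the real vector (Re z, Im z)."""
    x, y = M @ np.array([z.real, z.imag])
    return complex(x, y)

rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2))
w, v = decompose(M)
z = complex(0.3, -1.2)

# the matrix action equals the sum of the two complex-number actions
print(apply_matrix(M, z), "vs", w * z + v * np.conj(z))
```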

  20. Neural network applications in telecommunications

    NASA Technical Reports Server (NTRS)

    Alspector, Joshua

    1994-01-01

    Neural network capabilities include automatic and organized handling of complex information, quick adaptation to continuously changing environments, nonlinear modeling, and parallel implementation. This viewgraph presentation presents Bellcore work on applications, learning chip computational function, learning system block diagram, neural network equalization, broadband access control, calling-card fraud detection, software reliability prediction, and conclusions.

  1. A neural network simulation package in CLIPS

    NASA Technical Reports Server (NTRS)

    Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John

    1990-01-01

    The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). A very important by-product of this research has been the emergence of an integrated technique for using rule based systems in conjunction with neural networks to solve complex problems. The system provides a toolkit for integrated use of the two techniques and is also extendible to accommodate other AI techniques such as semantic networks, connectionist networks, and even Petri nets. This integrated technique can be very useful in solving complex AI problems.

  2. Minimal perceptrons for memorizing complex patterns

    NASA Astrophysics Data System (ADS)

    Pastor, Marissa; Song, Juyong; Hoang, Danh-Tai; Jo, Junghyo

    2016-11-01

    Feedforward neural networks have been investigated to understand learning and memory, as well as applied to numerous practical problems in pattern classification. It is a rule of thumb that more complex tasks require larger networks. However, the design of optimal network architectures for specific tasks is still an unsolved fundamental problem. In this study, we consider three-layered neural networks for memorizing binary patterns. We developed a new complexity measure of binary patterns, and estimated the minimal network size for memorizing them as a function of their complexity. We formulated the minimal network size for regular, random, and complex patterns. In particular, the minimal size for complex patterns, which are neither ordered nor disordered, was predicted by measuring their Hamming distances from known ordered patterns. Our predictions agree with simulations based on the back-propagation algorithm.
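
    The kind of memorization experiment described above is easy to reproduce at a small scale: train a three-layered network by back-propagation on a binary pattern, viewed here as a Boolean labelling of all n-bit inputs, and search for the smallest hidden layer that reaches zero training error. The sketch below does this crudely with scikit-learn; the pattern-complexity measure and the network sizes of the paper are not reproduced, and because the search uses a single random seed the reported minimum is only an upper bound.

```python
import itertools
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_bits = 6
X = np.array(list(itertools.product([0, 1], repeat=n_bits)))   # all 64 binary inputs
y = rng.integers(0, 2, size=len(X))                            # a random binary pattern to memorize

def memorizes(hidden_units):
    """Does a single hidden layer of this size reach zero training error?"""
    net = MLPClassifier(hidden_layer_sizes=(hidden_units,), activation="tanh",
                        solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(X, y)
    return net.score(X, y) == 1.0

# crude search for the minimal hidden layer; no restarts, so this is an upper bound
for h in range(1, 65):
    if memorizes(h):
        print("smallest hidden layer that memorized the pattern:", h)
        break
```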

  3. Small-time Scale Network Traffic Prediction Based on Complex-valued Neural Network

    NASA Astrophysics Data System (ADS)

    Yang, Bin

    2017-07-01

    Accurate models play an important role in capturing the significant characteristics of network traffic, analyzing network dynamics, and improving forecasting accuracy for system dynamics. In this study, a complex-valued neural network (CVNN) model is proposed to further improve the accuracy of small-time scale network traffic forecasting. An artificial bee colony (ABC) algorithm is proposed to optimize the complex-valued and real-valued parameters of the CVNN model. Small-time scale traffic measurement data, namely TCP traffic data, are used to test the performance of the CVNN model. Experimental results reveal that the CVNN model forecasts the small-time scale network traffic measurement data very accurately.

  4. Research on artificial neural network intrusion detection photochemistry based on the improved wavelet analysis and transformation

    NASA Astrophysics Data System (ADS)

    Li, Hong; Ding, Xue

    2017-03-01

    This paper combines wavelet analysis and wavelet transform theory with artificial neural networks, preprocessing point feature attributes for intrusion detection so that they are suitable for an improved wavelet neural network. The resulting intrusion classification model gains better adaptability and self-learning ability, greatly enhances the wavelet neural network's ability to solve the intrusion detection problem in the field, reduces storage space, helps to improve the performance of the constructed neural network, and reduces training time. Finally, simulation results on the KDDCup99 data set show that this method reduces the complexity of constructing the wavelet neural network while ensuring the accuracy of intrusion classification.

  5. Method for Constructing Composite Response Surfaces by Combining Neural Networks with Polynomial Interpolation or Estimation Techniques

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)

    2007-01-01

    A method and system for data modeling that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The invention partitions the parameters into a first set of s simple parameters, where observable data are expressible as low-order polynomials, and c complex parameters that reflect more complicated variation of the observed data. Variation of the data with the simple parameters is modeled using polynomials; and variation of the data with the complex parameters at each vertex is analyzed using a neural network. Variations with the simple parameters and with the complex parameters are expressed using a first sequence of shape functions and a second sequence of neural network functions. The first and second sequences are multiplicatively combined to form a composite response surface, dependent upon the parameter values, that can be used to identify an accurate model.

  6. Real-time biomimetic Central Pattern Generators in an FPGA for hybrid experiments

    PubMed Central

    Ambroise, Matthieu; Levi, Timothée; Joucla, Sébastien; Yvert, Blaise; Saïghi, Sylvain

    2013-01-01

    This investigation of the leech heartbeat neural network system led to the development of low-resource, real-time, biomimetic digital hardware for use in hybrid experiments. The leech heartbeat neural network is one of the simplest central pattern generators (CPGs). In biology, CPGs provide the rhythmic bursts of spikes that form the basis for all muscle contraction orders (heartbeat) and locomotion (walking, running, etc.). The leech neural network system was previously investigated and this CPG formalized in the Hodgkin–Huxley (HH) neural model, the most complex devised to date. However, the resources required for a neural model are proportional to its complexity. In response to this issue, this article describes a biomimetic implementation of a network of 240 CPGs in an FPGA (Field Programmable Gate Array), using a simple model (Izhikevich), and proposes a new synapse model: the activity-dependent depression synapse. The network implementation architecture operates on a single computation core. This digital system works in real time, requires few resources, and has the same bursting activity behavior as the complex model. The implementation of this CPG was initially validated by comparing it with a simulation of the complex model. Its activity was then matched with pharmacological data from rat spinal cord activity. This digital system opens the way for future hybrid experiments and represents an important step toward hybridization of biological tissue and artificial neural networks. This CPG network is also likely to be useful for mimicking the locomotion activity of various animals and developing hybrid experiments for neuroprosthesis development. PMID:24319408
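
    The Izhikevich model used in this implementation reduces each neuron to two state variables and can be simulated in a few lines; with the "chattering" parameter set from Izhikevich's 2003 model it produces the repeated bursts a CPG needs. The sketch below simulates a single such neuron; the activity-dependent depression synapse and the FPGA architecture of the paper are not reproduced.

```python
import numpy as np

# Izhikevich (2003): v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u),
# with reset v -> c, u -> u + d whenever v >= 30 mV.
a, b, c, d = 0.02, 0.2, -50.0, 2.0     # "chattering" parameters: repeated bursts
dt, T, I = 0.25, 1000.0, 10.0          # time step (ms), duration (ms), input current

v, u = -70.0, 0.2 * -70.0
spike_times = []
for step in range(int(T / dt)):
    if v >= 30.0:                      # spike: record the time and reset
        spike_times.append(step * dt)
        v, u = c, u + d
    dv = 0.04 * v * v + 5 * v + 140 - u + I
    du = a * (b * v - u)
    v += dt * dv
    u += dt * du

# bursts show up as clusters of short inter-spike intervals
isi = np.diff(spike_times)
print("spikes:", len(spike_times), " first inter-spike intervals (ms):", np.round(isi[:12], 1))
```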

  7. Ontology Mapping Neural Network: An Approach to Learning and Inferring Correspondences among Ontologies

    ERIC Educational Resources Information Center

    Peng, Yefei

    2010-01-01

    An ontology mapping neural network (OMNN) is proposed in order to learn and infer correspondences among ontologies. It extends the Identical Elements Neural Network (IENN)'s ability to represent and map complex relationships. The learning dynamics of simultaneous (interlaced) training of similar tasks interact at the shared connections of the…

  8. Flight control with adaptive critic neural network

    NASA Astrophysics Data System (ADS)

    Han, Dongchen

    2001-10-01

    In this dissertation, the adaptive critic neural network technique is applied to solve complex nonlinear system control problems. Based on dynamic programming, the adaptive critic neural network can embed the optimal solution into a neural network. Though trained off-line, the neural network forms a real-time feedback controller. Because of its general interpolation properties, the neurocontroller has inherent robustness. The problems solved here are an agile missile control for the U.S. Air Force and a midcourse guidance law for the U.S. Navy. In the first three papers, the neural network was used to control an air-to-air agile missile to implement a minimum-time heading-reverse in a vertical plane corresponding to the following conditions: a system without constraint, a system with control inequality constraint, and a system with state inequality constraint. While the agile missile is a one-dimensional problem, the midcourse guidance law is the first test-bed for a multi-dimensional problem. In the fourth paper, the neurocontroller is synthesized to guide a surface-to-air missile to a fixed final condition, and to a flexible final condition from a variable initial condition. In order to evaluate the adaptive critic neural network approach, the numerical solutions for these cases are also obtained by solving the two-point boundary value problem with a shooting method. All of the results showed that the adaptive critic neural network could solve complex nonlinear system control problems.

  9. ChainMail based neural dynamics modeling of soft tissue deformation for surgical simulation.

    PubMed

    Zhang, Jinao; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2017-07-20

    Realistic and real-time modeling and simulation of soft tissue deformation is a fundamental research issue in the field of surgical simulation. In this paper, a novel cellular neural network approach is presented for modeling and simulation of soft tissue deformation by combining neural dynamics of cellular neural network with ChainMail mechanism. The proposed method formulates the problem of elastic deformation into cellular neural network activities to avoid the complex computation of elasticity. The local position adjustments of ChainMail are incorporated into the cellular neural network as the local connectivity of cells, through which the dynamic behaviors of soft tissue deformation are transformed into the neural dynamics of cellular neural network. Experiments demonstrate that the proposed neural network approach is capable of modeling the soft tissues' nonlinear deformation and typical mechanical behaviors. The proposed method not only improves ChainMail's linear deformation with the nonlinear characteristics of neural dynamics but also enables the cellular neural network to follow the principle of continuum mechanics to simulate soft tissue deformation.

  10. Empirical modeling for intelligent, real-time manufacture control

    NASA Technical Reports Server (NTRS)

    Xu, Xiaoshu

    1994-01-01

    Artificial neural systems (ANS), also known as neural networks, are an attempt to develop computer systems that emulate the neural reasoning behavior of biological neural systems (e.g. the human brain). As such, they are loosely based on biological neural networks. The ANS consists of a series of nodes (neurons) and weighted connections (axons) that, when presented with a specific input pattern, can associate specific output patterns. It is essentially a highly complex, nonlinear, mathematical relationship or transform. These constructs have two significant properties that have proven useful to the authors in signal processing and process modeling: noise tolerance and complex pattern recognition. Specifically, the authors have developed a new network learning algorithm that has resulted in the successful application of ANS's to high speed signal processing and to developing models of highly complex processes. Two of the applications, the Weld Bead Geometry Control System and the Welding Penetration Monitoring System, are discussed in the body of this paper.

  11. Singularities of Three-Layered Complex-Valued Neural Networks With Split Activation Function.

    PubMed

    Kobayashi, Masaki

    2018-05-01

    There are three important concepts related to learning processes in neural networks: reducibility, nonminimality, and singularity. Although the definitions of these three concepts differ, they are equivalent in real-valued neural networks. This is also true of complex-valued neural networks (CVNNs) with hidden neurons not employing biases. The situation of CVNNs with hidden neurons employing biases, however, is very complicated. Exceptional reducibility was found, and it was shown that reducibility and nonminimality are not the same. Irreducibility consists of minimality and exceptional reducibility. The relationship between minimality and singularity has not yet been established. In this paper, we describe our surprising finding that minimality and singularity are independent. We also provide several examples based on exceptional reducibility.

  12. The effect of the neural activity on topological properties of growing neural networks.

    PubMed

    Gafarov, F M; Gafarova, V R

    2016-09-01

    The connectivity structure in cortical networks defines how information is transmitted and processed, and it is a source of the complex spatiotemporal patterns of the network's development; the creation and deletion of connections continues throughout the life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrated the neural network growth process from disconnected neurons to fully connected networks. To quantify the influence of the network's activity on its topological properties, we compared it with a random growth network that does not depend on the network's activity. Using methods from random graph theory to analyse the networks' connection structure, we show that growth in neural networks results in the formation of a well-known "small-world" network.
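
    The "small-world" property mentioned above is commonly checked by comparing a network's average clustering coefficient and characteristic path length against a size-matched random graph: a small-world network keeps high clustering while its path length stays close to the random value. The networkx sketch below illustrates that comparison on a Watts-Strogatz graph standing in for the grown network; it is not the growth model of the paper.

```python
import networkx as nx

# a connected Watts-Strogatz graph as a stand-in for the grown network
g = nx.connected_watts_strogatz_graph(n=200, k=8, p=0.1, seed=0)
rand = nx.gnm_random_graph(n=200, m=g.number_of_edges(), seed=0)
if not nx.is_connected(rand):
    # path length is only defined on a connected graph; keep the giant component
    rand = rand.subgraph(max(nx.connected_components(rand), key=len)).copy()

def summarize(name, graph):
    c = nx.average_clustering(graph)
    l = nx.average_shortest_path_length(graph)
    print(f"{name:8s} clustering = {c:.3f}   path length = {l:.3f}")
    return c, l

c_g, l_g = summarize("grown", g)
c_r, l_r = summarize("random", rand)

# small-world index sigma = (C/C_rand) / (L/L_rand); sigma >> 1 suggests small-worldness
print("sigma =", (c_g / c_r) / (l_g / l_r))
```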

  13. Brainlab: A Python Toolkit to Aid in the Design, Simulation, and Analysis of Spiking Neural Networks with the NeoCortical Simulator.

    PubMed

    Drewes, Rich; Zou, Quan; Goodman, Philip H

    2009-01-01

    Neuroscience modeling experiments often involve multiple complex neural network and cell model variants, complex input stimuli and input protocols, followed by complex data analysis. Coordinating all this complexity becomes a central difficulty for the experimenter. The Python programming language, along with its extensive library packages, has emerged as a leading "glue" tool for managing all sorts of complex programmatic tasks. This paper describes a toolkit called Brainlab, written in Python, that leverages Python's strengths for the task of managing the general complexity of neuroscience modeling experiments. Brainlab was also designed to overcome the major difficulties of working with the NCS (NeoCortical Simulator) environment in particular. Brainlab is an integrated model-building, experimentation, and data analysis environment for the powerful parallel spiking neural network simulator system NCS.

  14. Brainlab: A Python Toolkit to Aid in the Design, Simulation, and Analysis of Spiking Neural Networks with the NeoCortical Simulator

    PubMed Central

    Drewes, Rich; Zou, Quan; Goodman, Philip H.

    2008-01-01

    Neuroscience modeling experiments often involve multiple complex neural network and cell model variants, complex input stimuli and input protocols, followed by complex data analysis. Coordinating all this complexity becomes a central difficulty for the experimenter. The Python programming language, along with its extensive library packages, has emerged as a leading “glue” tool for managing all sorts of complex programmatic tasks. This paper describes a toolkit called Brainlab, written in Python, that leverages Python's strengths for the task of managing the general complexity of neuroscience modeling experiments. Brainlab was also designed to overcome the major difficulties of working with the NCS (NeoCortical Simulator) environment in particular. Brainlab is an integrated model-building, experimentation, and data analysis environment for the powerful parallel spiking neural network simulator system NCS. PMID:19506707

  15. Global exponential periodicity and stability of discrete-time complex-valued recurrent neural networks with time-delays.

    PubMed

    Hu, Jin; Wang, Jun

    2015-06-01

    In recent years, complex-valued recurrent neural networks have been developed and analysed in depth in view of their good modelling performance for some applications involving complex-valued elements. In implementing continuous-time dynamical systems for simulation or computational purposes, it is necessary to utilize a discrete-time model that is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results of several numerical examples are presented to illustrate the theoretical results, and an application to associative memory is also given. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Prediction of Aerodynamic Coefficient using Genetic Algorithm Optimized Neural Network for Sparse Data

    NASA Technical Reports Server (NTRS)

    Rajkumar, T.; Bardina, Jorge; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Wind tunnels use scale models to characterize aerodynamic coefficients. Wind tunnel testing can be slow and costly due to high personnel overhead and intensive power utilization. Although manual curve fitting can be done, it is highly efficient to use a neural network to define the complex relationship between variables. Numerical simulation of complex vehicles on the wide range of conditions required for flight simulation requires static and dynamic data. Static data at low Mach numbers and angles of attack may be obtained with simpler Euler codes. Static data for stalled vehicles, where zones of flow separation are usually present at higher angles of attack, require Navier-Stokes simulations, which are costly due to the large processing time required to attain convergence. Preliminary dynamic data may be obtained with simpler methods based on correlations and vortex methods; however, accurate prediction of the dynamic coefficients requires complex and costly numerical simulations. A reliable and fast method of predicting complex aerodynamic coefficients for flight simulation is presented using a neural network. The training data for the neural network are derived from numerical simulations and wind-tunnel experiments. The aerodynamic coefficients are modeled as functions of the flow characteristics and the control surfaces of the vehicle. The basic coefficients of lift, drag and pitching moment are expressed as functions of angles of attack and Mach number. The modeled and training aerodynamic coefficients show good agreement. This method shows excellent potential for rapid development of aerodynamic models for flight simulation. Genetic Algorithms (GA) are used to optimize a previously built Artificial Neural Network (ANN) that reliably predicts aerodynamic coefficients. Results indicate that the GA provided an efficient method of optimizing the ANN model to predict aerodynamic coefficients. The reliability of the ANN using the GA includes prediction of aerodynamic coefficients to an accuracy of 110%. In our problem, we would like to get an optimized neural network architecture and minimum data set. This has been accomplished within 500 training cycles of a neural network. After removing training pairs (outliers), the GA has produced much better results. The neural network constructed is a feedforward neural network with a back-propagation learning mechanism. The main goal has been to free the network design process from constraints of human biases, and to discover better forms of neural network architectures. The automation of the network architecture search by genetic algorithms seems to have been the best way to achieve this goal.

  17. Complex Networks in Psychological Models

    NASA Astrophysics Data System (ADS)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes, by a neurocomputational substrate. These models are examples of real world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.

  18. An adaptive Hinfinity controller design for bank-to-turn missiles using ridge Gaussian neural networks.

    PubMed

    Lin, Chuan-Kai; Wang, Sheng-De

    2004-11-01

    A new autopilot design for bank-to-turn (BTT) missiles is presented. In the design of autopilot, a ridge Gaussian neural network with local learning capability and fewer tuning parameters than Gaussian neural networks is proposed to model the controlled nonlinear systems. We prove that the proposed ridge Gaussian neural network, which can be a universal approximator, equals the expansions of rotated and scaled Gaussian functions. Although ridge Gaussian neural networks can approximate the nonlinear and complex systems accurately, the small approximation errors may affect the tracking performance significantly. Therefore, by employing the Hinfinity control theory, it is easy to attenuate the effects of the approximation errors of the ridge Gaussian neural networks to a prescribed level. Computer simulation results confirm the effectiveness of the proposed ridge Gaussian neural networks-based autopilot with Hinfinity stabilization.
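
    One reading of a ridge Gaussian unit, consistent with the expansion into rotated and scaled Gaussians mentioned above, is a Gaussian applied to a one-dimensional projection of the input, g(x) = exp(-(w·x - t)^2 / (2σ^2)). The sketch below fits the output weights of a few randomly oriented units of this form to a toy function by least squares; the unit definition and the fitting procedure are assumptions for illustration, not the adaptive Hinfinity autopilot of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_gaussian_features(X, W, t, sigma):
    """Column j is exp(-(w_j . x - t_j)^2 / (2 sigma_j^2)) for each sample x."""
    proj = X @ W.T                        # (samples, units) ridge projections
    return np.exp(-((proj - t) ** 2) / (2 * sigma ** 2))

# toy target: a smooth function of two inputs
X = rng.uniform(-1, 1, size=(400, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])

# randomly rotated and scaled ridge Gaussian units
n_units = 40
W = rng.normal(size=(n_units, 2))         # ridge directions
t = rng.uniform(-1, 1, size=n_units)      # centres along each ridge
sigma = rng.uniform(0.3, 1.0, size=n_units)

Phi = ridge_gaussian_features(X, W, t, sigma)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # output weights by least squares

pred = Phi @ coef
print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```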

  19. The Study of Learners' Preference for Visual Complexity on Small Screens of Mobile Computers Using Neural Networks

    ERIC Educational Resources Information Center

    Wang, Lan-Ting; Lee, Kun-Chou

    2014-01-01

    The vision plays an important role in educational technologies because it can produce and communicate quite important functions in teaching and learning. In this paper, learners' preference for the visual complexity on small screens of mobile computers is studied by neural networks. The visual complexity in this study is divided into five…

  20. Fuzzy and neural control

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1992-01-01

    Fuzzy logic and neural networks provide new methods for designing control systems. Fuzzy logic controllers do not require a complete analytical model of a dynamic system and can provide knowledge-based heuristic controllers for ill-defined and complex systems. Neural networks can be used for learning control. In this chapter, we discuss hybrid methods using fuzzy logic and neural networks which can start with an approximate control knowledge base and refine it through reinforcement learning.

  1. Understanding the role of speech production in reading: Evidence for a print-to-speech neural network using graphical analysis.

    PubMed

    Cummine, Jacqueline; Cribben, Ivor; Luu, Connie; Kim, Esther; Bahktiari, Reyhaneh; Georgiou, George; Boliek, Carol A

    2016-05-01

    The neural circuitry associated with language processing is complex and dynamic. Graphical models are useful for studying complex neural networks as this method provides information about unique connectivity between regions within the context of the entire network of interest. Here, the authors explored the neural networks during covert reading to determine the role of feedforward and feedback loops in covert speech production. Brain activity of skilled adult readers was assessed in real word and pseudoword reading tasks with functional MRI (fMRI). The authors provide evidence for activity coherence in the feedforward system (inferior frontal gyrus-supplementary motor area) during real word reading and in the feedback system (supramarginal gyrus-precentral gyrus) during pseudoword reading. Graphical models provided evidence of an extensive, highly connected, neural network when individuals read real words that relied on coordination of the feedforward system. In contrast, when individuals read pseudowords the authors found a limited/restricted network that relied on coordination of the feedback system. Together, these results underscore the importance of considering multiple pathways and articulatory loops during language tasks and provide evidence for a print-to-speech neural network. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. Coherence resonance in bursting neural networks

    NASA Astrophysics Data System (ADS)

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J.

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal—a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  3. Quantitative analysis of volatile organic compounds using ion mobility spectra and cascade correlation neural networks

    NASA Technical Reports Server (NTRS)

    Harrington, Peter DEB.; Zheng, Peng

    1995-01-01

    Ion Mobility Spectrometry (IMS) is a powerful technique for trace organic analysis in the gas phase. Quantitative measurements are difficult, because IMS has a limited linear range. Factors that may affect the instrument response are pressure, temperature, and humidity. Nonlinear calibration methods, such as neural networks, may be ideally suited for IMS. Neural networks have the capability of modeling complex systems. Many neural networks suffer from long training times and overfitting. Cascade correlation neural networks train at very fast rates. They also build their own topology, that is, the number of layers and the number of units in each layer. By controlling the decay parameter in training neural networks, reproducible and general models may be obtained.

  4. Newly developed double neural network concept for reliable fast plasma position control

    NASA Astrophysics Data System (ADS)

    Jeon, Young-Mu; Na, Yong-Su; Kim, Myung-Rak; Hwang, Y. S.

    2001-01-01

    Neural networks are considered as a parameter estimation tool in plasma control for next-generation tokamaks such as ITER. The neural network has been reported to be so accurate and fast for plasma equilibrium identification that it may be applied to the control of complex tokamak plasmas. For this application, the reliability of the conventional neural network needs to be improved. In this study, a new double neural network concept is developed to achieve this. The new idea has been applied to simple plasma position identification of the KSTAR tokamak as a feasibility test. The concept shows higher reliability and fault tolerance even under severe fault conditions, which may make neural networks reliably and widely applicable to plasma control in future tokamaks.

  5. Fault Analysis of Space Station DC Power Systems-Using Neural Network Adaptive Wavelets to Detect Faults

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Wang, Yanchun; Dolce, James L.

    1997-01-01

    This paper describes the application of neural network adaptive wavelets to fault diagnosis of the space station power system. The method combines the wavelet transform with a neural network by incorporating daughter wavelets into the weights. Therefore, the wavelet transform and the neural network training procedure become a single stage, which avoids the complex computation of wavelet parameters and makes the procedure more straightforward. The simulation results show that the proposed method is very efficient for the identification of fault locations.

  6. Using Neural Networks in the Mapping of Mixed Discrete/Continuous Design Spaces With Application to Structural Design

    DTIC Science & Technology

    1994-02-01

    …desired that the problem to which the design space mapping techniques were applied be easily analyzed, yet provide a design space with realistic complexity… consistent fully stressed solution. … In order to reduce the computational expense required to optimize design spaces, neural networks… employed in this study. Some of the issues involved in using neural networks to do design space mapping are how to configure the neural network, how much…

  7. Polarity-specific high-level information propagation in neural networks.

    PubMed

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals.

  8. Polarity-specific high-level information propagation in neural networks

    PubMed Central

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals. PMID:24672472

  9. Modelling and prediction for chaotic fir laser attractor using rational function neural network.

    PubMed

    Cho, S

    2001-02-01

    Many real-world systems, such as irregular ECG signals, the volatility of currency exchange rates and heated fluid reactions, exhibit the highly complex nonlinear characteristic known as chaos. These chaotic systems cannot be treated satisfactorily using linear system theory because of their high dimensionality and irregularity. This research focuses on the prediction and modelling of a chaotic FIR (Far InfraRed) laser system for which the underlying equations are not given. The paper proposes a method for predicting and modelling a chaotic FIR laser time series using a rational function neural network. Three network architectures, the TDNN (Time Delayed Neural Network), the RBF (radial basis function) network and the RF (rational function) network, are also presented. Comparisons of the networks' performance show the improvements introduced by the RF network in terms of reduced network complexity and better predictive ability.
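
    The FIR-laser data and the exact RF architecture are not reproduced here; the sketch below assumes one plausible rational-function parameterization, a ratio of two learned functions of a delay vector, trained on a logistic-map series standing in for the chaotic laser signal.

```python
import torch

# chaotic stand-in series (logistic map); the FIR-laser data are not used here
x = [0.2]
for _ in range(1200):
    x.append(4.0 * x[-1] * (1.0 - x[-1]))
x = torch.tensor(x)

d = 3                                                       # delay-embedding dimension
V = torch.stack([x[i:i + d] for i in range(len(x) - d)])    # delay vectors
y = x[d:].unsqueeze(1)                                      # next value to predict
feats = torch.cat([V, V ** 2], dim=1)                       # simple polynomial features

P = torch.nn.Linear(2 * d, 1)                               # numerator
Q = torch.nn.Linear(2 * d, 1)                               # denominator, kept >= 1 below
opt = torch.optim.Adam(list(P.parameters()) + list(Q.parameters()), lr=0.01)

for step in range(3000):
    opt.zero_grad()
    pred = P(feats) / (1.0 + torch.nn.functional.softplus(Q(feats)))
    loss = torch.mean((pred - y) ** 2)
    loss.backward()
    opt.step()
print("one-step prediction MSE:", float(loss))
```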

  10. Synchronization stability of memristor-based complex-valued neural networks with time delays.

    PubMed

    Liu, Dan; Zhu, Song; Ye, Er

    2017-12-01

    This paper focuses on the dynamical property of a class of memristor-based complex-valued neural networks (MCVNNs) with time delays. By constructing the appropriate Lyapunov functional and utilizing the inequality technique, sufficient conditions are proposed to guarantee exponential synchronization of the coupled systems based on drive-response concept. The proposed results are very easy to verify, and they also extend some previous related works on memristor-based real-valued neural networks. Meanwhile, the obtained sufficient conditions of this paper may be conducive to qualitative analysis of some complex-valued nonlinear delayed systems. A numerical example is given to demonstrate the effectiveness of our theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. The Laplacian spectrum of neural networks

    PubMed Central

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
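
    For readers who want to reproduce the basic quantity, the normalized Laplacian spectrum of a graph can be computed directly with networkx and numpy; the random modular graph below merely stands in for a connectome.

```python
import numpy as np
import networkx as nx

# toy "neural" network: 4 communities with dense intra- and sparse inter-links
G = nx.planted_partition_graph(4, 25, p_in=0.3, p_out=0.02, seed=1)
L = nx.normalized_laplacian_matrix(G).toarray()
eigvals = np.sort(np.linalg.eigvalsh(L))       # spectrum lies in [0, 2]

print("smallest eigenvalues:", np.round(eigvals[:5], 3))   # near-zero values reflect community structure
print("largest eigenvalue  :", round(eigvals[-1], 3))
```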

  12. [Measurement and performance analysis of functional neural network].

    PubMed

    Li, Shan; Liu, Xinyu; Chen, Yan; Wan, Hong

    2018-04-01

    The measurement of networks is an important research topic in resolving the information-processing mechanisms of neuronal populations using complex network theory. For the problem of quantitatively measuring functional neural networks, the relation between the measurement indexes, i.e. the clustering coefficient, the global efficiency, the characteristic path length and the transitivity, and the network topology was analyzed. Then, a spike-based functional neural network was established, and the simulation results showed that the measured network could represent the original neural connections among neurons. On the basis of this work, the coding of the functional neural network in the nidopallium caudolaterale (NCL) relating to the pigeon's motion behaviors was studied. We found that the NCL functional neural network effectively encoded the motion behaviors of the pigeon, and that there were significant differences in the four indexes among left-turning, forward motion and right-turning. Overall, the method for establishing spike-based functional neural networks is feasible and provides an effective tool for parsing the brain's information-processing mechanisms.
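
    The four indexes named above are available in networkx; the sketch below computes them on a functional network obtained by thresholding pairwise correlations of synthetic spike counts (the threshold and the surrogate data are illustrative, not the pigeon NCL recordings).

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_neurons, n_bins = 30, 500
group = np.repeat([0, 1], n_neurons // 2)                  # two functional groups
drive = rng.normal(size=(2, n_bins))
counts = rng.poisson(1.0, size=(n_neurons, n_bins)) + 3 * (drive[group] > 1.0)

C = np.corrcoef(counts)                                    # pairwise correlations
A = (C > 0.2) & ~np.eye(n_neurons, dtype=bool)             # threshold is illustrative
G = nx.from_numpy_array(A.astype(int))

print("clustering coefficient    :", round(nx.average_clustering(G), 3))
print("global efficiency         :", round(nx.global_efficiency(G), 3))
print("transitivity              :", round(nx.transitivity(G), 3))
giant = G.subgraph(max(nx.connected_components(G), key=len))
print("characteristic path length:", round(nx.average_shortest_path_length(giant), 3),
      "(largest connected component)")
```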

  13. Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks

    PubMed Central

    2018-01-01

    Much of the information the brain processes and stores is temporal in nature—a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds—we show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. PMID:29537963

  14. Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks.

    PubMed

    Goudar, Vishwa; Buonomano, Dean V

    2018-03-14

    Much of the information the brain processes and stores is temporal in nature-a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds-we show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. © 2018, Goudar et al.

  15. Firing patterns transition and desynchronization induced by time delay in neural networks

    NASA Astrophysics Data System (ADS)

    Huang, Shoufang; Zhang, Jiqian; Wang, Maosheng; Hu, Chin-Kun

    2018-06-01

    We used the Hindmarsh-Rose (HR) model (Hindmarsh and Rose, 1984) to study the effect of time delay on the transition of firing behaviors and desynchronization in neural networks. As the time delay is increased, neural networks exhibit a diversity of firing behaviors, including regular spiking or bursting and firing pattern transitions (FPTs). Meanwhile, the desynchronization of firing and unstable bursting with decreasing amplitude in the neural system are also increasingly enhanced as the time delay increases. Furthermore, we studied the effect of coupling strength and network randomness on these phenomena. Our results imply that time delays can induce transitions and desynchronization of firing behaviors in neural networks. These findings provide new insight into the role of time delay in the firing activities of neural networks, and can help to better understand the firing phenomena in complex systems of neural networks. A possible mechanism in the brain that can cause an increase of time delay is discussed.
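
    A minimal sketch of the kind of simulation involved: two Hindmarsh-Rose neurons with delayed diffusive coupling, integrated with an Euler scheme and a circular history buffer for the delay. The parameter values are the standard bursting regime; the network sizes, coupling strengths and delays studied in the paper are not reproduced.

```python
import numpy as np

# standard HR parameters in the bursting regime
a, b, c, d, s, x_rest, r, I_ext = 1.0, 3.0, 1.0, 5.0, 4.0, -1.6, 0.006, 3.0
dt, T, tau, k = 0.01, 1000.0, 20.0, 0.1        # tau: coupling delay, k: coupling strength
steps, delay_steps = int(T / dt), int(tau / dt)

n = 2
x = np.array([-1.5, -1.45])                    # slightly different initial states
y = np.zeros(n)
z = np.zeros(n)
x_hist = np.tile(x, (delay_steps + 1, 1))      # circular buffer holding x over the last tau

trace = np.empty((steps, n))
for t in range(steps):
    x_hist[t % (delay_steps + 1)] = x                          # store x(t)
    x_del = x_hist[(t - delay_steps) % (delay_steps + 1)]      # x(t - tau); initial history for t < tau
    coupling = k * (x_del[::-1] - x)                           # delayed diffusive cross-coupling
    dx = y - a * x**3 + b * x**2 - z + I_ext + coupling
    dy = c - d * x**2 - y
    dz = r * (s * (x - x_rest) - z)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    trace[t] = x

sync_err = np.mean(np.abs(trace[-20000:, 0] - trace[-20000:, 1]))
print("mean |x1 - x2| over the last 200 time units:", round(float(sync_err), 3))
```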

  16. Optimization of neural network architecture using genetic programming improves detection and modeling of gene-gene interactions in studies of human diseases

    PubMed Central

    Ritchie, Marylyn D; White, Bill C; Parker, Joel S; Hahn, Lance W; Moore, Jason H

    2003-01-01

    Background Appropriate definition of neural network architecture prior to data analysis is crucial for successful data mining. This can be challenging when the underlying model of the data is unknown. The goal of this study was to determine whether optimizing neural network architecture using genetic programming as a machine learning strategy would improve the ability of neural networks to model and detect nonlinear interactions among genes in studies of common human diseases. Results Using simulated data, we show that a genetic programming optimized neural network approach is able to model gene-gene interactions as well as a traditional back propagation neural network. Furthermore, the genetic programming optimized neural network is better than the traditional back propagation neural network approach in terms of predictive ability and power to detect gene-gene interactions when non-functional polymorphisms are present. Conclusion This study suggests that a machine learning strategy for optimizing neural network architecture may be preferable to traditional trial-and-error approaches for the identification and characterization of gene-gene interactions in common, complex human diseases. PMID:12846935
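
    As a much simplified stand-in for the genetic programming strategy, the sketch below scores a handful of randomly proposed MLP architectures by cross-validation on simulated genotype data containing an interactive (XOR-like) two-locus effect. It illustrates data-driven architecture selection only; it is not the GPNN algorithm.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, n_snps = 600, 10
X = rng.integers(0, 3, size=(n, n_snps)).astype(float)    # 0/1/2 genotype coding
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)           # interactive (XOR-like) effect of the first two loci

def score(hidden):
    clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500, random_state=0)
    return cross_val_score(clf, X, y, cv=5).mean()

candidates = [tuple(int(u) for u in rng.integers(2, 20, size=rng.integers(1, 3)))
              for _ in range(6)]                          # randomly proposed architectures
best = max(candidates, key=score)
print("best architecture:", best, "  5-fold CV accuracy:", round(score(best), 3))
```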

  17. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks

    PubMed Central

    Miconi, Thomas

    2017-01-01

    Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior. DOI: http://dx.doi.org/10.7554/eLife.20899.001 PMID:28230528

  18. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks.

    PubMed

    Miconi, Thomas

    2017-02-23

    Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.
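
    A simplified node-perturbation-style sketch in the spirit of the rule described above (exploratory noise, a Hebbian-like eligibility trace, and a single delayed reward per trial); it is not Miconi's exact plasticity rule, and with these toy sizes any improvement is slow. The structure of the update is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, trials, lr, noise = 50, 100, 300, 0.05, 0.1
W = rng.normal(0.0, 1.2 / np.sqrt(N), size=(N, N))     # near-chaotic recurrent weights
readout = rng.normal(0.0, 1.0 / np.sqrt(N), size=N)    # fixed linear readout, for simplicity
target = np.sin(np.linspace(0, 2 * np.pi, T))          # desired output trajectory
R_bar = None

for trial in range(trials):
    r = np.zeros(N)
    elig = np.zeros_like(W)
    out = np.empty(T)
    for t in range(T):
        xi = noise * rng.normal(size=N)                # exploratory perturbation of each unit
        r_prev = r
        r = np.tanh(W @ r_prev + xi)
        elig += np.outer(xi, r_prev)                   # perturbation x presynaptic activity
        out[t] = readout @ r
    R = -np.mean((out - target) ** 2)                  # delayed scalar reward at trial end
    R_bar = R if R_bar is None else 0.9 * R_bar + 0.1 * R
    W += lr * (R - R_bar) * elig                       # reward-modulated weight update
    if trial % 100 == 0:
        print(f"trial {trial:3d}   reward {R:.4f}")
```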

  19. Neurophysiological basis of creativity in healthy elderly people: a multiscale entropy approach.

    PubMed

    Ueno, Kanji; Takahashi, Tetsuya; Takahashi, Koichi; Mizukami, Kimiko; Tanaka, Yuji; Wada, Yuji

    2015-03-01

    Creativity, which presumably involves various connections within and across different neural networks, reportedly underpins the mental well-being of older adults. Multiscale entropy (MSE) can characterize the complexity inherent in EEG dynamics with multiple temporal scales. It can therefore provide useful insight into neural networks. Given that background, we sought to clarify the neurophysiological bases of creativity in healthy elderly subjects by assessing EEG complexity with MSE, with emphasis on assessment of neural networks. We recorded resting state EEG of 20 healthy elderly subjects. MSE was calculated for each subject for continuous 20-s epochs. Their relevance to individual creativity was examined concurrently with intellectual function. Higher individual creativity was linked closely to increased EEG complexity across higher temporal scales, but no significant relation was found with intellectual function (IQ score). Considering the general "loss of complexity" theory of aging, our finding of increased EEG complexity in elderly people with heightened creativity supports the idea that creativity is associated with activated neural networks. Results reported here underscore the potential usefulness of MSE analysis for characterizing the neurophysiological bases of elderly people with heightened creativity. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
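
    Multiscale entropy itself is straightforward to compute: coarse-grain the signal at successive scales and evaluate sample entropy at each. The sketch below illustrates this on synthetic noise; applying it to 20-s EEG epochs as in the study follows the same pattern (the per-scale tolerance used here is one of several conventions).

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r) with tolerance r = r_frac * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def matches(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length)])
        dist = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return (np.sum(dist <= r) - len(t)) / 2.0      # matched pairs, self-matches excluded
    B, A = matches(m), matches(m + 1)
    return np.inf if A == 0 or B == 0 else -np.log(A / B)

def multiscale_entropy(x, max_scale=8, m=2):
    """Coarse-grain by non-overlapping averaging, then compute SampEn at each scale.
    The tolerance is recomputed per scale here; conventions differ."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in range(1, max_scale + 1):
        n = (len(x) // s) * s
        coarse = x[:n].reshape(-1, s).mean(axis=1)
        out.append(sample_entropy(coarse, m=m))
    return np.array(out)

rng = np.random.default_rng(0)
white = rng.normal(size=1000)
brown = np.cumsum(rng.normal(size=1000))               # integrated noise, for contrast
print("white noise MSE curve:", np.round(multiscale_entropy(white), 2))
print("brown noise MSE curve:", np.round(multiscale_entropy(brown), 2))
```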

  20. Density-based clustering: A 'landscape view' of multi-channel neural data for inference and dynamic complexity analysis.

    PubMed

    Baglietto, Gabriel; Gigante, Guido; Del Giudice, Paolo

    2017-01-01

    Two, partially interwoven, hot topics in the analysis and statistical modeling of neural data are the development of efficient and informative representations of the time series derived from multiple neural recordings, and the extraction of information about the connectivity structure of the underlying neural network from the recorded neural activities. In the present paper we show that state-space clustering can provide an easy and effective option for reducing the dimensionality of multiple neural time series, that it can improve inference of synaptic couplings from neural activities, and that it can also allow the construction of a compact representation of the multi-dimensional dynamics, that easily lends itself to complexity measures. We apply a variant of the 'mean-shift' algorithm to perform state-space clustering, and validate it on a Hopfield network in the glassy phase, in which metastable states are largely uncorrelated from memories embedded in the synaptic matrix. In this context, we show that the neural states identified as clusters' centroids offer a parsimonious parametrization of the synaptic matrix, which allows a significant improvement in inferring the synaptic couplings from the neural activities. Moving to the more realistic case of a multi-modular spiking network, with spike-frequency adaptation inducing history-dependent effects, we propose a procedure inspired by Boltzmann learning, but extending its domain of application, to learn inter-module synaptic couplings so that the spiking network reproduces a prescribed pattern of spatial correlations; we then illustrate, in the spiking network, how clustering is effective in extracting relevant features of the network's state-space landscape. Finally, we show that the knowledge of the cluster structure allows casting the multi-dimensional neural dynamics in the form of a symbolic dynamics of transitions between clusters; as an illustration of the potential of such reduction, we define and analyze a measure of complexity of the neural time series.
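
    A small illustration of the pipeline described above, assuming sklearn's mean-shift implementation rather than the authors' own variant: multichannel activity with a few metastable states is clustered in state space, and the dynamics is then reduced to a symbolic sequence of transitions between cluster labels.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(0)
n_channels, n_bins = 8, 900
centers = rng.normal(0, 3, size=(3, n_channels))                   # three metastable states
state_seq = np.repeat(rng.integers(0, 3, size=n_bins // 30), 30)   # dwell ~30 bins per state
X = centers[state_seq] + rng.normal(0, 0.5, size=(n_bins, n_channels))

bw = estimate_bandwidth(X, quantile=0.2, random_state=0)
labels = MeanShift(bandwidth=bw).fit_predict(X)
print("clusters found:", len(set(labels)))

# symbolic dynamics: transition counts between consecutive cluster labels
k = labels.max() + 1
trans = np.zeros((k, k), dtype=int)
for a, b in zip(labels[:-1], labels[1:]):
    trans[a, b] += 1
print(trans)
```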

  1. Research on image retrieval using deep convolutional neural network combining L1 regularization and PRelu activation function

    NASA Astrophysics Data System (ADS)

    QingJie, Wei; WenBin, Wang

    2017-06-01

    In this paper, image retrieval using a deep convolutional neural network combined with L1 regularization and the PReLU activation function is studied, which improves image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but it also contains convolution operations, which makes it very suitable for processing images. Using a deep convolutional neural network is better than directly extracting visual features for image retrieval. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
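
    A minimal PyTorch sketch of the ingredients named above: PReLU activations in a small convolutional network, an explicit L1 penalty on the weights added to the loss, and a normalized embedding whose cosine similarities can be used to rank images for retrieval. The layer sizes, classification head and random data are illustrative, not the paper's network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RetrievalCNN(nn.Module):
    def __init__(self, embed_dim=64, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
        )
        self.embed = nn.Linear(32 * 8 * 8, embed_dim)    # assumes 32x32 inputs
        self.classify = nn.Linear(embed_dim, n_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)
        e = F.normalize(self.embed(h), dim=1)            # unit-norm embedding used for retrieval
        return e, self.classify(e)

model = RetrievalCNN()
x = torch.randn(8, 3, 32, 32)                            # stand-in image batch
y = torch.randint(0, 10, (8,))
emb, logits = model(x)
l1 = 1e-4 * sum(p.abs().sum() for p in model.parameters())  # L1 regularization term
loss = F.cross_entropy(logits, y) + l1
loss.backward()
sims = emb @ emb.t()                                     # cosine similarities for retrieval ranking
print(float(loss), sims.shape)
```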

  2. Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control

    USGS Publications Warehouse

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.
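
    A compact sketch of neural-network model predictive control under simple assumptions: a one-step dynamics model is fitted to excitation data from a toy nonlinear plant, and at each control step a random-shooting search over candidate input sequences picks the action whose predicted trajectory best tracks the set-point. The plant, horizon and cost are illustrative; the grinding and leaching circuits of the study are not modeled.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def plant(y, u):                                  # toy nonlinear single-input single-output plant
    return 0.8 * y + 0.4 * np.tanh(u) + 0.05 * y * u

rng = np.random.default_rng(0)
Y, U, Y_next = [], [], []                         # "historical plant data" under random excitation
y = 0.0
for _ in range(2000):
    u = rng.uniform(-2, 2)
    Y.append(y); U.append(u)
    y = plant(y, u) + 0.01 * rng.normal()
    Y_next.append(y)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(np.column_stack([Y, U]), Y_next)        # one-step neural network plant model

def mpc_step(y0, setpoint, horizon=5, n_candidates=200):
    u_seqs = rng.uniform(-2, 2, size=(n_candidates, horizon))
    y_pred = np.full(n_candidates, y0)
    cost = np.zeros(n_candidates)
    for h in range(horizon):                      # roll the learned model forward
        y_pred = model.predict(np.column_stack([y_pred, u_seqs[:, h]]))
        cost += (y_pred - setpoint) ** 2
    return u_seqs[np.argmin(cost), 0]             # apply only the first action (receding horizon)

y, setpoint = 0.0, 1.0
for t in range(15):
    u = mpc_step(y, setpoint)
    y = plant(y, u)
    print(f"t={t:2d}  u={u:+.2f}  y={y:.3f}")
```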

  3. Microfluidic neurite guidance to study structure-function relationships in topologically-complex population-based neural networks.

    PubMed

    Honegger, Thibault; Thielen, Moritz I; Feizi, Soheil; Sanjana, Neville E; Voldman, Joel

    2016-06-22

    The central nervous system is a dense, layered, 3D interconnected network of populations of neurons, and thus recapitulating that complexity for in vitro CNS models requires methods that can create defined topologically-complex neuronal networks. Several three-dimensional patterning approaches have been developed but none have demonstrated the ability to control the connections between populations of neurons. Here we report a method using AC electrokinetic forces that can guide, accelerate, slow down and push up neurites in un-modified collagen scaffolds. We present a means to create in vitro neural networks of arbitrary complexity by using such forces to create 3D intersections of primary neuronal populations that are plated in a 2D plane. We report for the first time in vitro basic brain motifs that have been previously observed in vivo and show that their functional network is highly decorrelated to their structure. This platform can provide building blocks to reproduce in vitro the complexity of neural circuits and provide a minimalistic environment to study the structure-function relationship of the brain circuitry.

  4. Microfluidic neurite guidance to study structure-function relationships in topologically-complex population-based neural networks

    NASA Astrophysics Data System (ADS)

    Honegger, Thibault; Thielen, Moritz I.; Feizi, Soheil; Sanjana, Neville E.; Voldman, Joel

    2016-06-01

    The central nervous system is a dense, layered, 3D interconnected network of populations of neurons, and thus recapitulating that complexity for in vitro CNS models requires methods that can create defined topologically-complex neuronal networks. Several three-dimensional patterning approaches have been developed but none have demonstrated the ability to control the connections between populations of neurons. Here we report a method using AC electrokinetic forces that can guide, accelerate, slow down and push up neurites in un-modified collagen scaffolds. We present a means to create in vitro neural networks of arbitrary complexity by using such forces to create 3D intersections of primary neuronal populations that are plated in a 2D plane. We report for the first time in vitro basic brain motifs that have been previously observed in vivo and show that their functional network is highly decorrelated to their structure. This platform can provide building blocks to reproduce in vitro the complexity of neural circuits and provide a minimalistic environment to study the structure-function relationship of the brain circuitry.

  6. Neural Networks for Modeling and Control of Particle Accelerators

    NASA Astrophysics Data System (ADS)

    Edelen, A. L.; Biedron, S. G.; Chase, B. E.; Edstrom, D.; Milton, S. V.; Stabile, P.

    2016-04-01

    Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Often times, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  8. Classification of 2-dimensional array patterns: assembling many small neural networks is better than using a large one.

    PubMed

    Chen, Liang; Xue, Wei; Tokuda, Naoyuki

    2010-08-01

    In many pattern classification/recognition applications of artificial neural networks, an object to be classified is represented by a fixed sized 2-dimensional array of uniform type, which corresponds to the cells of a 2-dimensional grid of the same size. A general neural network structure, called an undistricted neural network, which takes all the elements in the array as inputs could be used for problems such as these. However, a districted neural network can be used to reduce the training complexity. A districted neural network usually consists of two levels of sub-neural networks. Each of the lower level neural networks, called a regional sub-neural network, takes the elements in a region of the array as its inputs and is expected to output a temporary class label, called an individual opinion, based on the partial information of the entire array. The higher level neural network, called an assembling sub-neural network, uses the outputs (opinions) of regional sub-neural networks as inputs, and by consensus derives the label decision for the object. Each of the sub-neural networks can be trained separately and thus the training is less expensive. The regional sub-neural networks can be trained and performed in parallel and independently, therefore a high speed can be achieved. We prove theoretically in this paper, using a simple model, that a districted neural network is actually more stable than an undistricted neural network in noisy environments. We conjecture that the result is valid for all neural networks. This theory is verified by experiments involving gender classification and human face recognition. We conclude that a districted neural network is highly recommended for neural network applications in recognition or classification of 2-dimensional array patterns in highly noisy environments. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
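
    The districted arrangement is easy to prototype; the sketch below trains four regional sub-networks on the quadrants of a noisy synthetic 2-D array, feeds their opinions to an assembling sub-network, and compares the result with a single undistricted network. The array size, noise level and sklearn models are illustrative, not the gender and face experiments of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
H = W = 16
n = 1200
y = rng.integers(0, 2, size=n)
proto = rng.normal(size=(2, H, W))                       # one prototype pattern per class
X = proto[y] + 1.5 * rng.normal(size=(n, H, W))          # heavy noise
train, test = slice(0, 800), slice(800, None)

def regions(imgs):                                       # four 8x8 quadrants, flattened
    return [imgs[:, r:r + 8, c:c + 8].reshape(len(imgs), -1)
            for r in (0, 8) for c in (0, 8)]

# regional sub-networks -> opinions -> assembling sub-network
regional = [
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=i).fit(reg[train], y[train])
    for i, reg in enumerate(regions(X))
]
opinions = lambda imgs: np.column_stack(
    [net.predict_proba(reg)[:, 1] for net, reg in zip(regional, regions(imgs))])
assembler = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=9)
assembler.fit(opinions(X[train]), y[train])

undistricted = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
undistricted.fit(X[train].reshape(800, -1), y[train])

print("districted accuracy  :", assembler.score(opinions(X[test]), y[test]))
print("undistricted accuracy:", undistricted.score(X[test].reshape(-1, H * W), y[test]))
```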

  9. Weaving and neural complexity in symmetric quantum states

    NASA Astrophysics Data System (ADS)

    Susa, Cristian E.; Girolami, Davide

    2018-04-01

    We study the behaviour of two different measures of the complexity of multipartite correlation patterns, weaving and neural complexity, for symmetric quantum states. Weaving is the weighted sum of genuine multipartite correlations of any order, where the weights are proportional to the correlation order. The neural complexity, originally introduced to characterize correlation patterns in classical neural networks, is here extended to the quantum scenario. We derive closed formulas of the two quantities for GHZ states mixed with white noise.

  10. An evaluation of Bayesian techniques for controlling model complexity and selecting inputs in a neural network for short-term load forecasting.

    PubMed

    Hippert, Henrique S; Taylor, James W

    2010-04-01

    Artificial neural networks have frequently been proposed for electricity load forecasting because of their capabilities for the nonlinear modelling of large multivariate data sets. Modelling with neural networks is not an easy task though; two of the main challenges are defining the appropriate level of model complexity, and choosing the input variables. This paper evaluates techniques for automatic neural network modelling within a Bayesian framework, as applied to six samples containing daily load and weather data for four different countries. We analyse input selection as carried out by the Bayesian 'automatic relevance determination', and the usefulness of the Bayesian 'evidence' for the selection of the best structure (in terms of number of neurones), as compared to methods based on cross-validation. Copyright 2009 Elsevier Ltd. All rights reserved.

  11. Radar signal categorization using a neural network

    NASA Technical Reports Server (NTRS)

    Anderson, James A.; Gately, Michael T.; Penz, P. Andrew; Collins, Dean R.

    1991-01-01

    Neural networks were used to analyze a complex simulated radar environment which contains noisy radar pulses generated by many different emitters. The neural network used is an energy minimizing network (the BSB model) which forms energy minima - attractors in the network dynamical system - based on learned input data. The system first determines how many emitters are present (the deinterleaving problem). Pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, it is possible to make tentative identifications of them based on their observed parameters. As a test of this idea, a neural network was used to form a small data base that potentially could make emitter identifications.
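
    A small numpy sketch of the underlying BSB-style attractor recall, assuming the basic saturating update and omitting the decay and feedback terms of the full model: emitter signatures are stored by Hebbian outer products, and a noisy pulse descriptor relaxes to the nearest stored attractor.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_emitters, beta = 32, 3, 0.15
emitters = rng.choice([-1.0, 1.0], size=(n_emitters, dim))      # stored emitter signatures
W = sum(np.outer(p, p) for p in emitters) / dim                 # Hebbian outer-product storage

def recall(probe, steps=50):
    x = probe.copy()
    for _ in range(steps):
        x = np.clip(x + beta * (W @ x), -1.0, 1.0)              # saturating BSB update
    return x

noisy = emitters[0] * np.where(rng.random(dim) < 0.25, -1.0, 1.0)   # 25% of components flipped
out = recall(noisy)
print("overlap with each stored emitter:", np.round(emitters @ out / dim, 2))
```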

  12. Neural networks for vertical microcode compaction

    NASA Astrophysics Data System (ADS)

    Chu, Pong P.

    1992-09-01

    Neural networks provide an alternative way to solve complex optimization problems. Instead of performing a program of instructions sequentially as in a traditional computer, a neural network model explores many competing hypotheses simultaneously using its massively parallel net. The paper shows how to use the neural network approach to perform vertical micro-code compaction for a micro-programmed control unit. The compaction procedure includes two basic steps. The first step determines the compatibility classes and the second step selects a minimal subset to cover the control signals. Since the selection process is an NP-complete problem, finding an optimal solution is impractical. In this study, we employ a customized neural network to obtain the minimal subset. We first formalize this problem, and then define an `energy function' and map it to a two-layer fully connected neural network. The modified network has two types of neurons and can always obtain a valid solution.

  13. A multivariate extension of mutual information for growing neural networks.

    PubMed

    Ball, Kenneth R; Grant, Christopher; Mundy, William R; Shafer, Timothy J

    2017-11-01

    Recordings of neural network activity in vitro are increasingly being used to assess the development of neural network activity and the effects of drugs, chemicals and disease states on neural network function. The high-content nature of the data derived from such recordings can be used to infer effects of compounds or disease states on a variety of important neural functions, including network synchrony. Historically, synchrony of networks in vitro has been assessed either by determination of correlation coefficients (e.g. Pearson's correlation), by statistics estimated from cross-correlation histograms between pairs of active electrodes, and/or by pairwise mutual information and related measures. The present study examines the application of Normalized Multiinformation (NMI) as a scalar measure of shared information content in a multivariate network that is robust with respect to changes in network size. Theoretical simulations are designed to investigate NMI as a measure of complexity and synchrony in a developing network relative to several alternative approaches. The NMI approach is applied to these simulations and also to data collected during exposure of in vitro neural networks to neuroactive compounds during the first 12 days in vitro, and compared to other common measures, including correlation coefficients and mean firing rates of neurons. NMI is shown to be more sensitive to developmental effects than first order synchronous and nonsynchronous measures of network complexity. Finally, NMI is a scalar measure of global (rather than pairwise) mutual information in a multivariate network, and hence relies on fewer assumptions for cross-network comparisons than historical approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.
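
    Multiinformation itself is simple to compute from binarized activity, I = sum of marginal entropies minus the joint entropy; the normalization used below (dividing by the summed marginal entropies) is only an illustrative choice and may differ from the paper's NMI definition.

```python
import numpy as np

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def normalized_multiinformation(spikes):
    """spikes: (n_units, n_bins) binary array."""
    n_units = spikes.shape[0]
    marginal = sum(entropy(np.bincount(row, minlength=2)) for row in spikes)
    codes = spikes.T @ (2 ** np.arange(n_units))       # encode each time bin as an integer pattern
    joint = entropy(np.bincount(codes))
    multiinfo = marginal - joint                       # I = sum_i H(X_i) - H(X_1..X_n)
    return multiinfo / marginal if marginal > 0 else 0.0

rng = np.random.default_rng(0)
independent = (rng.random((6, 2000)) < 0.2).astype(int)
shared = (rng.random(2000) < 0.2).astype(int)
synchronous = np.tile(shared, (6, 1))                  # perfectly shared activity
print("independent units:", round(normalized_multiinformation(independent), 3))
print("synchronous units:", round(normalized_multiinformation(synchronous), 3))
```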

  14. Weaving and neural complexity in symmetric quantum states

    DOE PAGES

    Susa, Cristian E.; Girolami, Davide

    2017-12-27

    Here, we study the behaviour of two different measures of the complexity of multipartite correlation patterns, weaving and neural complexity, for symmetric quantum states. Weaving is the weighted sum of genuine multipartite correlations of any order, where the weights are proportional to the correlation order. The neural complexity, originally introduced to characterize correlation patterns in classical neural networks, is here extended to the quantum scenario. We derive closed formulas of the two quantities for GHZ states mixed with white noise.

  16. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    PubMed

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.

  17. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    PubMed Central

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with the linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and comparing the model with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices. PMID:27293423

  18. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks.

    PubMed

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with the linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and comparing the model with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices.
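
    A minimal sketch of the core predictor, assuming a plain Elman network (torch.nn.RNN) trained for one-step-ahead prediction of a synthetic price-like series; the stochastic time-effective weighting of training samples described in the paper is omitted.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# synthetic "index" series: a random walk plus a slow cycle, turned into normalized returns
t = torch.arange(1200, dtype=torch.float32)
series = torch.cumsum(0.01 * torch.randn(1200), dim=0) + 0.5 * torch.sin(t / 50.0)
returns = series[1:] - series[:-1]
returns = (returns - returns.mean()) / returns.std()

window = 20
X = torch.stack([returns[i:i + window] for i in range(len(returns) - window)]).unsqueeze(-1)
y = returns[window:]

class ElmanPredictor(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)  # Elman architecture
        self.out = nn.Linear(hidden, 1)
    def forward(self, x):
        _, h = self.rnn(x)                      # final hidden state: (1, batch, hidden)
        return self.out(h.squeeze(0)).squeeze(-1)

model = ElmanPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
split = 900
for epoch in range(200):
    opt.zero_grad()
    loss = torch.mean((model(X[:split]) - y[:split]) ** 2)
    loss.backward()
    opt.step()
with torch.no_grad():
    test_mse = torch.mean((model(X[split:]) - y[split:]) ** 2)
print("train MSE:", float(loss), "  test MSE:", float(test_mse))
```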

  19. Logarithmic r-θ mapping for hybrid optical neural network filter for multiple objects recognition within cluttered scenes

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.; Birch, Phil M.

    2009-04-01

    The window unit in the design of the complex logarithmic r-θ mapping for the hybrid optical neural network filter allows multiple objects of the same class to be detected within the input image. Additionally, the architecture of the neural network unit of the complex logarithmic r-θ mapping for the hybrid optical neural network filter becomes attractive for accommodating the recognition of multiple objects of different classes within the input image by modifying the output layer of the unit. We test the overall filter on the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The logarithmic r-θ mapping for the hybrid optical neural network filter is shown to exhibit, with a single pass over the input data, simultaneous in-plane rotation, out-of-plane rotation, scale, log r-θ map translation and shift invariance, and good clutter tolerance, correctly recognizing the different objects within the cluttered scenes. We also record additional information extracted from the cluttered scenes about the objects' relative position, scale and in-plane rotation.
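
    The invariance properties come from the log r-θ (log-polar) mapping itself, which turns rotation and scaling about the image centre into translations of the mapped image. The numpy sketch below implements a basic nearest-neighbour log-polar sampler on a synthetic scene and checks that a 2x zoom shows up as a shift along the log-radius axis; it is not the optical filter architecture.

```python
import numpy as np

def log_polar(img, n_r=64, n_theta=64):
    """Nearest-neighbour log-polar resampling about the image centre."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    rho = np.linspace(0.0, np.log(min(cy, cx)), n_r)       # log-radius axis
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr = np.exp(rho)[:, None]                              # radius = exp(log-radius)
    yy = np.clip(np.round(cy + rr * np.sin(theta)), 0, h - 1).astype(int)
    xx = np.clip(np.round(cx + rr * np.cos(theta)), 0, w - 1).astype(int)
    return img[yy, xx]                                     # (n_r, n_theta) map

img = np.zeros((128, 128))
img[40:60, 70:90] = 1.0                                    # bright off-centre square

lp = log_polar(img)
zoomed = np.kron(img, np.ones((2, 2)))[64:192, 64:192]     # crude 2x zoom about the centre
lp_zoomed = log_polar(zoomed)

# scaling shows up as a shift along the log-radius axis (rotation would shift along theta)
shift = np.argmax([np.sum(lp_zoomed * np.roll(lp, s_, axis=0)) for s_ in range(64)])
expected = np.log(2.0) / (np.log(63.5) / 63)               # log(2) expressed in log-radius bins
print("measured shift:", shift, "  expected:", round(expected, 1))
```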

  20. Implicity Defined Neural Networks for Sequence Labeling

    DTIC Science & Technology

    2017-02-13

    popularity of the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and variants such as the Gated Recurrent Unit (GRU) (Cho et al., 2014...bidirectional LSTM and other neural network architectures. Neural Networks 18(5):602–610. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term ...hidden states of the network to be coupled together, allowing potential improvement on problems with complex, long-distance dependencies. Initial

  1. Neural substrates of decision-making.

    PubMed

    Broche-Pérez, Y; Herrera Jiménez, L F; Omar-Martínez, E

    2016-06-01

    Decision-making is the process of selecting a course of action from among 2 or more alternatives by considering the potential outcomes of selecting each option and estimating its consequences in the short, medium and long term. The prefrontal cortex (PFC) has traditionally been considered the key neural structure in decision-making process. However, new studies support the hypothesis that describes a complex neural network including both cortical and subcortical structures. The aim of this review is to summarise evidence on the anatomical structures underlying the decision-making process, considering new findings that support the existence of a complex neural network that gives rise to this complex neuropsychological process. Current evidence shows that the cortical structures involved in decision-making include the orbitofrontal cortex (OFC), anterior cingulate cortex (ACC), and dorsolateral prefrontal cortex (DLPFC). This process is assisted by subcortical structures including the amygdala, thalamus, and cerebellum. Findings to date show that both cortical and subcortical brain regions contribute to the decision-making process. The neural basis of decision-making is a complex neural network of cortico-cortical and cortico-subcortical connections which includes subareas of the PFC, limbic structures, and the cerebellum. Copyright © 2014 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  2. A Decade of Neural Networks: Practical Applications and Prospects

    NASA Technical Reports Server (NTRS)

    Kemeny, Sabrina E.

    1994-01-01

    The Jet Propulsion Laboratory Neural Network Workshop, sponsored by NASA and DOD, brings together sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and application prospects. While the speed and computing power of microprocessors continue to grow at an ever-increasing pace, the demand to intelligently and adaptively deal with the complex, fuzzy, and often ill-defined world around us remains to a large extent unaddressed. Powerful, highly parallel computing paradigms such as neural networks promise to have a major impact in addressing these needs. Papers in the workshop proceedings highlight benefits of neural networks in real-world applications compared to conventional computing techniques. Topics include fault diagnosis, pattern recognition, and multiparameter optimization.

  3. Dissipativity and stability analysis of fractional-order complex-valued neural networks with time delay.

    PubMed

    Velmurugan, G; Rakkiyappan, R; Vembarasan, V; Cao, Jinde; Alsaedi, Ahmed

    2017-02-01

    The notion of dissipativity is an important dynamical property of neural networks, and the analysis of dissipativity of neural networks with time delay is becoming more and more important in this research field. In this paper, the authors establish a class of fractional-order complex-valued neural networks (FCVNNs) with time delay, and intensively study the problem of dissipativity, as well as global asymptotic stability, of the considered FCVNNs with time delay. Based on the fractional Halanay inequality and suitable Lyapunov functions, some new sufficient conditions are obtained that guarantee the dissipativity of FCVNNs with time delay. Moreover, some sufficient conditions are derived in order to ensure the global asymptotic stability of the addressed FCVNNs with time delay. Finally, two numerical simulations are presented to illustrate the value of the main results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. The Complexity of Dynamics in Small Neural Circuits

    PubMed Central

    Panzeri, Stefano

    2016-01-01

    Mean-field approximations are a powerful tool for studying large neural networks. However, they do not describe well the behavior of networks composed of a small number of neurons. In this case, major differences between the mean-field approximation and the real behavior of the network can arise. Yet, many interesting problems in neuroscience involve the study of mesoscopic networks composed of a few tens of neurons. Nonetheless, mathematical methods that correctly describe networks of small size are still rare, and this prevents us from making progress in understanding neural dynamics at these intermediate scales. Here we develop a novel systematic analysis of the dynamics of arbitrarily small networks composed of homogeneous populations of excitatory and inhibitory firing-rate neurons. We study the local bifurcations of their neural activity with an approach that is largely analytically tractable, and we numerically determine the global bifurcations. We find that for strong inhibition these networks give rise to very complex dynamics, caused by the formation of multiple branching solutions of the neural dynamics equations that emerge through spontaneous symmetry-breaking. This qualitative change of the neural dynamics is a finite-size effect of the network, which reveals qualitative and previously unexplored differences between mesoscopic cortical circuits and their mean-field approximation. The most important consequence of spontaneous symmetry-breaking is the ability of mesoscopic networks to regulate their degree of functional heterogeneity, which is thought to help reduce the detrimental effect of noise correlations on cortical information processing. PMID:27494737
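
    A generic two-population-plus-inhibition firing-rate circuit (Wilson-Cowan-type) gives a feel for the regime discussed above; the parameters below are illustrative and are not the paper's equations. With strong shared inhibition, different initial conditions settle into qualitatively different stable states, the kind of coexisting solution branches that the bifurcation analysis characterizes.

```python
import numpy as np

def f(x):                                     # sigmoidal rate function
    return 1.0 / (1.0 + np.exp(-x))

# two identical excitatory populations sharing one (fast) inhibitory population
w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 1.0
I_e, I_i = 2.0, 1.0
tau_e, tau_i, dt = 10.0, 2.0, 0.1

def simulate(E0, steps=5000):
    E = np.array(E0, dtype=float)             # rates of the two excitatory populations
    I = 0.1                                   # rate of the shared inhibitory population
    for _ in range(steps):
        net_E = w_ee * E - w_ei * I + I_e
        net_I = w_ie * E.sum() - w_ii * I + I_i
        E += dt / tau_e * (-E + f(net_E))
        I += dt / tau_i * (-I + f(net_I))
    return np.round(E, 3)

print("symmetric start  [0.3, 0.3] ->", simulate([0.3, 0.3]))
print("asymmetric start [0.9, 0.1] ->", simulate([0.9, 0.1]))
```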

  5. A study on ?-dissipative synchronisation of coupled reaction-diffusion neural networks with time-varying delays

    NASA Astrophysics Data System (ADS)

    Ali, M. Syed; Zhu, Quanxin; Pavithra, S.; Gunasekaran, N.

    2018-03-01

    This study examines the problem of dissipative synchronisation of coupled reaction-diffusion neural networks with time-varying delays. This paper proposes a complex dynamical network consisting of N linearly and diffusively coupled identical reaction-diffusion neural networks. By constructing a suitable Lyapunov-Krasovskii functional (LKF), utilisation of Jensen's inequality and reciprocally convex combination (RCC) approach, strictly ?-dissipative conditions of the addressed systems are derived. Finally, a numerical example is given to show the effectiveness of the theoretical results.

  6. The Topographical Mapping in Drosophila Central Complex Network and Its Signal Routing

    PubMed Central

    Chang, Po-Yen; Su, Ta-Shun; Shih, Chi-Tin; Lo, Chung-Chuan

    2017-01-01

    Neural networks regulate brain functions by routing signals. Therefore, investigating the detailed organization of a neural circuit at the cellular levels is a crucial step toward understanding the neural mechanisms of brain functions. To study how a complicated neural circuit is organized, we analyzed recently published data on the neural circuit of the Drosophila central complex, a brain structure associated with a variety of functions including sensory integration and coordination of locomotion. We discovered that, except for a small number of “atypical” neuron types, the network structure formed by the identified 194 neuron types can be described by only a few simple mathematical rules. Specifically, the topological mapping formed by these neurons can be reconstructed by applying a generation matrix on a small set of initial neurons. By analyzing how information flows propagate with or without the atypical neurons, we found that while the general pattern of signal propagation in the central complex follows the simple topological mapping formed by the “typical” neurons, some atypical neurons can substantially re-route the signal pathways, implying specific roles of these neurons in sensory signal integration. The present study provides insights into the organization principle and signal integration in the central complex. PMID:28443014

  7. Retinal Connectomics: Towards Complete, Accurate Networks

    PubMed Central

    Marc, Robert E.; Jones, Bryan W.; Watt, Carl B.; Anderson, James R.; Sigulinsky, Crystal; Lauritzen, Scott

    2013-01-01

    Connectomics is a strategy for mapping complex neural networks based on high-speed automated electron optical imaging, computational assembly of neural data volumes, web-based navigational tools to explore 10^12–10^15 byte (terabyte to petabyte) image volumes, and annotation and markup tools to convert images into rich networks with cellular metadata. These collections of network data and associated metadata, analyzed using tools from graph theory and classification theory, can be merged with classical systems theory, giving a more completely parameterized view of how biologic information processing systems are implemented in retina and brain. Networks have two separable features: topology and connection attributes. The first findings from connectomics strongly validate the idea that the topologies of complete retinal networks are far more complex than the simple schematics that emerged from classical anatomy. In particular, connectomics has permitted an aggressive refactoring of the retinal inner plexiform layer, demonstrating that network function cannot be simply inferred from stratification; exposing the complex geometric rules for inserting different cells into a shared network; revealing unexpected bidirectional signaling pathways between mammalian rod and cone systems; documenting selective feedforward systems, novel candidate signaling architectures, new coupling motifs, and the highly complex architecture of the mammalian AII amacrine cell. This is but the beginning, as the underlying principles of connectomics are readily transferrable to non-neural cell complexes and provide new contexts for assessing intercellular communication. PMID:24016532

  8. Feed-forward neural network model for hunger and satiety related VAS score prediction.

    PubMed

    Krishnan, Shaji; Hendriks, Henk F J; Hartvigsen, Merete L; de Graaf, Albert A

    2016-07-07

    An artificial neural network approach was chosen to model the outcome of the complex signaling pathways in the gastro-intestinal tract and other peripheral organs that eventually produce the satiety feeling in the brain upon feeding. A multilayer feed-forward neural network was trained with sets of experimental data relating concentration-time courses of plasma satiety hormones to Visual Analog Scales (VAS) scores. The network successfully predicted VAS responses from sets of satiety hormone data obtained in experiments using different food compositions. The correlation coefficients for the predicted VAS responses for test sets having i) a full set of three satiety hormones, ii) a set of only two satiety hormones, and iii) a set of only one satiety hormone were 0.96, 0.96, and 0.89, respectively. The predicted VAS responses discriminated the satiety effects of high satiating food types from less satiating food types both in orally fed and ileal infused forms. From this application of artificial neural networks, one may conclude that neural network models are very suitable to describe situations where behavior is complex and incompletely understood. However, training data sets that fit the experimental conditions need to be available.

  9. Fitness landscape complexity and the emergence of modularity in neural networks

    NASA Astrophysics Data System (ADS)

    Lowell, Jessica

    Previous research has shown that the shape of the fitness landscape can affect the evolution of modularity. We evolved neural networks to solve different tasks with different fitness landscapes, using NEAT, a popular neuroevolution algorithm that quantifies similarity between genomes in order to divide them into species. We used this speciation mechanism as a means to examine fitness landscape complexity, and to examine connections between fitness landscape complexity and the emergence of modularity.

  10. Neural dynamics based on the recognition of neural fingerprints

    PubMed Central

    Carrillo-Medina, José Luis; Latorre, Roberto

    2015-01-01

    Experimental evidence has revealed the existence of characteristic spiking features in different neural signals, e.g., individual neural signatures identifying the emitter or functional signatures characterizing specific tasks. These neural fingerprints may play a critical role in neural information processing, since they allow receptors to discriminate or contextualize incoming stimuli. This could be a powerful strategy for neural systems that greatly enhances the encoding and processing capacity of these networks. Nevertheless, the study of information processing based on the identification of specific neural fingerprints has attracted little attention. In this work, we study (i) the emerging collective dynamics of a network of neurons that communicate with each other by exchange of neural fingerprints and (ii) the influence of the network topology on the self-organizing properties within the network. Complex collective dynamics emerge in the network in the presence of stimuli. Predefined inputs, i.e., specific neural fingerprints, are detected and encoded into coexisting patterns of activity that propagate throughout the network with different spatial organization. The patterns evoked by a stimulus can survive after the stimulation is over, which provides memory mechanisms to the network. The results presented in this paper suggest that neural information processing based on neural fingerprints can be a plausible, flexible, and powerful strategy. PMID:25852531

  11. A Deep Neural Network Model for Rainfall Estimation UsingPolarimetric WSR-88DP Radar Observations

    NASA Astrophysics Data System (ADS)

    Tan, H.; Chandra, C. V.; Chen, H.

    2016-12-01

    Rainfall estimation based on radar measurements has been an important topic for a few decades. Generally, radar rainfall estimation is conducted through parametric algorithms such as the reflectivity-rainfall relation (i.e., the Z-R relation). On the other hand, neural networks have been developed for ground rainfall estimation based on radar measurements. This nonparametric method, which takes into account both radar observations and rainfall measurements from ground rain gauges, has been demonstrated successfully for rainfall rate estimation. However, neural network-based rainfall estimation is limited in practice due to the model complexity and structure, data quality, as well as differing rainfall microphysics. Recently, the deep learning approach has been introduced in pattern recognition and machine learning areas. Compared to traditional neural networks, deep learning based methodologies have a larger number of hidden layers and a more complex structure for data representation. Through a hierarchical learning process, high-level structured information and knowledge can be extracted automatically from low-level features of the data. In this paper, we introduce a novel deep neural network model for rainfall estimation based on ground polarimetric radar measurements. The model is designed to capture the complex abstractions of radar measurements at different levels using multiple layers of feature identification and extraction. The abstractions at different levels can be used independently or fused with other data sources such as satellite-based rainfall products and/or topographic data to represent the rain characteristics at a certain location. In particular, the WSR-88DP radar and rain gauge data collected in the Dallas - Fort Worth Metroplex and Florida are used extensively to train the model, and for demonstration purposes. A quantitative evaluation of the deep neural network based rainfall products will also be presented, based on an independent rain gauge network.
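
    As a point of reference for the parametric baseline mentioned above, the sketch below inverts a power-law Z-R relation to turn reflectivity into rain rate. The Marshall-Palmer coefficients (a = 200, b = 1.6) are a common textbook choice, an assumption rather than values taken from this record.

        import numpy as np

        def zr_rain_rate(dbz, a=200.0, b=1.6):
            """Invert the power-law Z-R relation Z = a * R**b.

            dbz: reflectivity in dBZ; Z is in mm^6/m^3 and R in mm/h. The default
            coefficients are the classic Marshall-Palmer values (an assumption,
            not taken from the paper)."""
            z_linear = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)   # dBZ -> linear Z
            return (z_linear / a) ** (1.0 / b)

        print(zr_rain_rate([20.0, 30.0, 40.0]))   # roughly 0.6, 2.7 and 11.5 mm/h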

  12. Predicting Slag Generation in Sub-Scale Test Motors Using a Neural Network

    NASA Technical Reports Server (NTRS)

    Wiesenberg, Brent

    1999-01-01

    Generation of slag (aluminum oxide) is an important issue for the Reusable Solid Rocket Motor (RSRM). Thiokol performed testing to quantify the relationship between raw material variations and slag generation in solid propellants by testing sub-scale motors cast with propellant containing various combinations of aluminum fuel and ammonium perchlorate (AP) oxidizer particle sizes. The test data were analyzed using statistical methods and an artificial neural network. This paper primarily addresses the neural network results with some comparisons to the statistical results. The neural network showed that the particle sizes of both the aluminum and unground AP have a measurable effect on slag generation. The neural network analysis showed that aluminum particle size is the dominant driver in slag generation, about 40% more influential than AP. The network predictions of the amount of slag produced during firing of sub-scale motors were 16% better than the predictions of a statistically derived empirical equation. Another neural network successfully characterized the slag generated during full-scale motor tests. The success is attributable to the ability of neural networks to characterize multiple complex factors including interactions that affect slag generation.

  13. Synchronization of fractional-order complex-valued neural networks with time delay.

    PubMed

    Bao, Haibo; Park, Ju H; Cao, Jinde

    2016-09-01

    This paper deals with the problem of synchronization of fractional-order complex-valued neural networks with time delays. By means of linear delay feedback control and a fractional-order inequality, sufficient conditions are obtained to guarantee the synchronization of the drive-response systems. Numerical simulations are provided to show the effectiveness of the obtained results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Exponential stabilization and synchronization for fuzzy model of memristive neural networks by periodically intermittent control.

    PubMed

    Yang, Shiju; Li, Chuandong; Huang, Tingwen

    2016-03-01

    The problem of exponential stabilization and synchronization for a fuzzy model of memristive neural networks (MNNs) is investigated by using periodically intermittent control in this paper. Based on the properties of memristors and recurrent neural networks, the model of MNNs is formulated. Some novel and useful stabilization criteria and synchronization conditions are then derived by using the Lyapunov functional and differential inequality techniques. It is worth noting that the methods used in this paper can also be applied to fuzzy models of complex networks and of general neural networks. Numerical simulations are also provided to verify the effectiveness of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure.

    PubMed

    Li, Xiumin; Small, Michael

    2012-06-01

    Neuronal avalanches are spontaneous bursts of neuronal activity whose population event sizes obey a power-law distribution with an exponent of -3/2. They have been observed in the superficial layers of cortex both in vivo and in vitro. In this paper, we analyze the information transmission of a novel self-organized neural network with an active-neuron-dominant structure. Neuronal avalanches can be observed in this network with appropriate input intensity. We find that the process of network learning via spike-timing dependent plasticity dramatically increases the complexity of the network structure, which finally self-organizes into active-neuron-dominant connectivity. Both the entropy of activity patterns and the complexity of their resulting post-synaptic inputs are maximized when the network dynamics are propagated as neuronal avalanches. This emergent topology supports highly efficient information transmission and could also be responsible for the large information capacity of this network compared with alternative archetypal networks with different neural connectivity.

  16. Pulse-firing winner-take-all networks

    NASA Technical Reports Server (NTRS)

    Meador, Jack L.

    1991-01-01

    Winner-take-all (WTA) neural networks using pulse-firing processing elements are introduced. In the pulse-firing WTA (PWTA) networks described, input and activation signal shunting is controlled by one shared lateral inhibition signal. This organization yields an O(n) area complexity that is convenient for integrated circuit implementation. Appropriately specified network parameters allow for the accurate continuous evaluation of inputs using a signal representation compatible with established pulse-firing neural network implementations.
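
    The record above describes pulse-firing units competing through one shared lateral inhibition signal, which keeps wiring at O(n). The fragment below is only a crude rate-style caricature of that idea (not the pulse-firing circuit itself): a single global inhibition level is raised until at most one unit remains active; the step size is an arbitrary choice.

        import numpy as np

        def shared_inhibition_wta(inputs, step=0.01):
            """Pick a winner by ratcheting up one shared inhibition signal.

            Every unit sees the same inhibition (a single broadcast signal, so the
            wiring grows only linearly in the number of units), and the loop stops
            once at most one unit still has positive activation."""
            inputs = np.asarray(inputs, dtype=float)
            activation = np.maximum(0.0, inputs)
            inhibition = 0.0
            while (activation > 0).sum() > 1:
                inhibition += step
                activation = np.maximum(0.0, inputs - inhibition)
            return activation

        print(shared_inhibition_wta([0.3, 0.9, 0.5, 0.7]))   # only the 0.9 unit stays above zero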

  17. Application of complex discrete wavelet transform in classification of Doppler signals using complex-valued artificial neural network.

    PubMed

    Ceylan, Murat; Ceylan, Rahime; Ozbay, Yüksel; Kara, Sadik

    2008-09-01

    In biomedical signal classification, compressing the waveform data is vital because of the huge amount of data involved. This paper presents two different structures formed using feature extraction algorithms to decrease the size of the feature set in the training and test data. The proposed structures, named the wavelet transform-complex-valued artificial neural network (WT-CVANN) and the complex wavelet transform-complex-valued artificial neural network (CWT-CVANN), use the real and complex discrete wavelet transforms for feature extraction. The aim of using the wavelet transform is to compress data and to reduce the training time of the network without decreasing the accuracy rate. In this study, the presented structures were applied to the problem of classification of carotid arterial Doppler ultrasound signals. Carotid arterial Doppler ultrasound signals were acquired from the left carotid arteries of 38 patients and 40 healthy volunteers. The patient group included 22 males and 16 females with an established diagnosis of the early phase of atherosclerosis through coronary or aortofemoropopliteal (lower extremity) angiographies (mean age, 59 years; range, 48-72 years). Healthy volunteers were young non-smokers who did not appear to bear any risk of atherosclerosis, including 28 males and 12 females (mean age, 23 years; range, 19-27 years). Sensitivity, specificity and average detection rate were calculated for comparison after the training and test phases of all structures were completed. These parameters demonstrated that the training times of the CVANN and the real-valued artificial neural network (RVANN) were reduced by using the feature extraction algorithms without decreasing the accuracy rate, in accordance with our aim.

  18. Hybrid computing using a neural network with dynamic external memory.

    PubMed

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.

  19. An evolutionary algorithm that constructs recurrent neural networks.

    PubMed

    Angeline, P J; Saunders, G M; Pollack, J B

    1994-01-01

    Standard methods for simultaneously inducing the structure and weights of recurrent neural networks limit every task to an assumed class of architectures. Such a simplification is necessary since the interactions between network structure and function are not well understood. Evolutionary computations, which include genetic algorithms and evolutionary programming, are population-based search methods that have shown promise in many similarly complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. GNARL's empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods.

  20. A novel nonlinear adaptive filter using a pipelined second-order Volterra recurrent neural network.

    PubMed

    Zhao, Haiquan; Zhang, Jiashu

    2009-12-01

    To enhance performance and overcome the heavy computational complexity of recurrent neural networks (RNN), a novel nonlinear adaptive filter based on a pipelined second-order Volterra recurrent neural network (PSOVRNN) is proposed in this paper. A modified real-time recurrent learning (RTRL) algorithm for the proposed filter is derived in detail. The PSOVRNN comprises a number of simple small-scale second-order Volterra recurrent neural network (SOVRNN) modules. In contrast to the standard RNN, the modules of a PSOVRNN can operate simultaneously in a pipelined, parallel fashion, which can lead to a significant improvement in its total computational efficiency. Moreover, since each module of the PSOVRNN is a SOVRNN in which nonlinearity is introduced by the recursive second-order Volterra (RSOV) expansion, its performance can be further improved. Computer simulations have demonstrated that the PSOVRNN performs better than the pipelined recurrent neural network (PRNN) and the RNN for nonlinear colored signal prediction and nonlinear channel equalization. However, the superiority of the PSOVRNN over the PRNN comes at the cost of increased computational complexity due to the nonlinear expansion introduced in each module.
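
    To make the second-order Volterra expansion mentioned above concrete, here is a minimal static (non-recursive) Volterra filter in plain NumPy; the recursive, adaptively trained modules in the paper go further, and the kernel values below are random placeholders.

        import numpy as np

        def volterra2_output(x, h1, h2):
            """Static second-order Volterra filter with memory M = len(h1):
            y[n] = sum_i h1[i] * x[n-i] + sum_{i,j} h2[i, j] * x[n-i] * x[n-j].
            This non-recursive version just shows the expansion that supplies the
            nonlinearity."""
            h1 = np.asarray(h1, dtype=float)
            h2 = np.asarray(h2, dtype=float)
            M = len(h1)
            xp = np.concatenate([np.zeros(M - 1), np.asarray(x, dtype=float)])
            y = np.zeros(len(xp) - M + 1)
            for n in range(len(y)):
                window = xp[n:n + M][::-1]      # [x[n], x[n-1], ..., x[n-M+1]]
                y[n] = h1 @ window + window @ h2 @ window
            return y

        rng = np.random.default_rng(5)
        M = 4
        y = volterra2_output(rng.standard_normal(12),
                             h1=rng.standard_normal(M),
                             h2=0.1 * rng.standard_normal((M, M)))
        print(np.round(y, 3))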

  1. A neural network approach to job-shop scheduling.

    PubMed

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, i.e. job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on a quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, i.e. the traveling salesman problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of both the quality of the solution and the network complexity.

  2. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule.

    PubMed

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin

    2015-11-01

    In this paper, the generation of the multi-clustered structure of a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can be self-organized into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure for the burst-based self-organized neural network (BSON) is much shorter than for the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., a higher clustering coefficient and a smaller shortest path length than the SSON network. Also, the larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which benefits information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and that the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network that self-organizes from bursting dynamics has high efficiency in information processing.

  3. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng

    In this paper, the generation of the multi-clustered structure of a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can be self-organized into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure for the burst-based self-organized neural network (BSON) is much shorter than for the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., a higher clustering coefficient and a smaller shortest path length than the SSON network. Also, the larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which benefits information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and that the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network that self-organizes from bursting dynamics has high efficiency in information processing.

  4. Neurophysiological Basis of Multi-Scale Entropy of Brain Complexity and Its Relationship With Functional Connectivity.

    PubMed

    Wang, Danny J J; Jann, Kay; Fan, Chang; Qiao, Yang; Zang, Yu-Feng; Lu, Hanbing; Yang, Yihong

    2018-01-01

    Recently, non-linear statistical measures such as multi-scale entropy (MSE) have been introduced as indices of the complexity of electrophysiology and fMRI time-series across multiple time scales. In this work, we investigated the neurophysiological underpinnings of the complexity (MSE) of electrophysiology and fMRI signals and their relations to functional connectivity (FC). MSE and FC analyses were performed on simulated data using a neural-mass-model-based brain network model with the Brain Dynamics Toolbox, on animal models with concurrent recording of fMRI and electrophysiology in conjunction with pharmacological manipulations, and on resting-state fMRI data from the Human Connectome Project. Our results show that the complexity of regional electrophysiology and fMRI signals is positively correlated with network FC. The associations between MSE and FC are dependent on the temporal scales or frequencies, with higher associations between MSE and FC at lower temporal frequencies. Our results from theoretical modeling, animal experiments and human fMRI indicate that (1) regional neural complexity and network FC may be two related aspects of the brain's information processing: the more complex the regional neural activity, the higher the FC this region has with other brain regions; (2) MSE at high and low frequencies may represent local and distributed information processing across brain regions. Based on the literature and our data, we propose that the complexity of regional neural signals may serve as an index of the brain's capacity for information processing: increased complexity may indicate greater transition or exploration between different states of brain networks, and thereby a greater propensity for information processing.
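
    For readers unfamiliar with the measure, the sketch below implements a plain-vanilla multi-scale entropy: coarse-grain the series by non-overlapping means at each scale, then compute sample entropy with a tolerance fixed from the original series. The parameter choices (m = 2, r = 0.15) are common defaults, not values taken from this study.

        import numpy as np

        def sample_entropy(x, m=2, tol=0.2):
            """Sample entropy SampEn(m, tol) of a 1-D series, with tol an absolute
            Chebyshev tolerance and the same number of templates for both lengths."""
            x = np.asarray(x, dtype=float)
            n = len(x)

            def matches(length):
                templates = np.array([x[i:i + length] for i in range(n - m)])
                count = 0
                for i in range(len(templates) - 1):
                    dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                    count += int(np.sum(dist <= tol))
                return count

            b, a = matches(m), matches(m + 1)
            return np.inf if a == 0 or b == 0 else -np.log(a / b)

        def multiscale_entropy(x, scales=(1, 2, 3, 4, 5), m=2, r=0.15):
            """Coarse-grain by non-overlapping window means at each scale, then take
            sample entropy; the tolerance is fixed from the original series' SD."""
            x = np.asarray(x, dtype=float)
            tol = r * x.std()
            curve = []
            for s in scales:
                n = len(x) // s
                coarse = x[:n * s].reshape(n, s).mean(axis=1)
                curve.append(sample_entropy(coarse, m, tol))
            return curve

        rng = np.random.default_rng(0)
        print(multiscale_entropy(rng.standard_normal(2000)))  # for white noise the curve falls with scale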

  5. Computational exploration of neuron and neural network models in neurobiology.

    PubMed

    Prinz, Astrid A

    2007-01-01

    The electrical activity of individual neurons and neuronal networks is shaped by the complex interplay of a large number of non-linear processes, including the voltage-dependent gating of ion channels and the activation of synaptic receptors. These complex dynamics make it difficult to understand how individual neuron or network parameters-such as the number of ion channels of a given type in a neuron's membrane or the strength of a particular synapse-influence neural system function. Systematic exploration of cellular or network model parameter spaces by computational brute force can overcome this difficulty and generate comprehensive data sets that contain information about neuron or network behavior for many different combinations of parameters. Searching such data sets for parameter combinations that produce functional neuron or network output provides insights into how narrowly different neural system parameters have to be tuned to produce a desired behavior. This chapter describes the construction and analysis of databases of neuron or neuronal network models and describes some of the advantages and downsides of such exploration methods.

  6. Tutorial: Neural networks and their potential application in nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uhrig, R.E.

    A neural network is a data processing system consisting of a number of simple, highly interconnected processing elements in an architecture inspired by the structure of the cerebral cortex portion of the brain. Hence, neural networks are often capable of doing things which humans or animals do well but which conventional computers often do poorly. Neural networks have emerged in the past few years as an area of unusual opportunity for research, development and application to a variety of real world problems. Indeed, neural networks exhibit characteristics and capabilities not provided by any other technology. Examples include reading Japanese Kanji characters and human handwriting, reading a typewritten manuscript aloud, compensating for alignment errors in robots, interpreting very noisy signals (e.g. electroencephalograms), modeling complex systems that cannot be modelled mathematically, and predicting whether proposed loans will be good or fail. This paper presents a brief tutorial on neural networks and describes research on the potential applications to nuclear power plants.

  7. Neural networks: Alternatives to conventional techniques for automatic docking

    NASA Technical Reports Server (NTRS)

    Vinz, Bradley L.

    1994-01-01

    Automatic docking of orbiting spacecraft is a crucial operation involving the identification of vehicle orientation as well as complex approach dynamics. The chaser spacecraft must be able to recognize the target spacecraft within a scene and achieve accurate closing maneuvers. In a video-based system, a target scene must be captured and transformed into a pattern of pixels. Successful recognition lies in the interpretation of this pattern. Due to their powerful pattern recognition capabilities, artificial neural networks offer a potential role in interpretation and automatic docking processes. Neural networks can reduce the computational time required by existing image processing and control software. In addition, neural networks are capable of recognizing and adapting to changes in their dynamic environment, enabling enhanced performance, redundancy, and fault tolerance. Most neural networks are robust to failure, capable of continued operation with a slight degradation in performance after minor failures. This paper discusses the particular automatic docking tasks neural networks can perform as viable alternatives to conventional techniques.

  8. Rod-Shaped Neural Units for Aligned 3D Neural Network Connection.

    PubMed

    Kato-Negishi, Midori; Onoe, Hiroaki; Ito, Akane; Takeuchi, Shoji

    2017-08-01

    This paper proposes neural tissue units with aligned nerve fibers (called rod-shaped neural units) that connect neural networks with aligned neurons. To make the proposed units, 3D fiber-shaped neural tissues covered with a calcium alginate hydrogel layer are prepared with a microfluidic system and are cut in an accurate and reproducible manner. These units have aligned nerve fibers inside the hydrogel layer and connectable points on both ends. By connecting the units with a poly(dimethylsiloxane) guide, 3D neural tissues can be constructed and maintained for more than two weeks of culture. In addition, neural networks can be formed between the different neural units via synaptic connections. Experimental results indicate that the proposed rod-shaped neural units are effective tools for the construction of spatially complex connections with aligned nerve fibers in vitro. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Predicting wettability behavior of fluorosilica coated metal surface using optimum neural network

    NASA Astrophysics Data System (ADS)

    Taghipour-Gorjikolaie, Mehran; Valipour Motlagh, Naser

    2018-02-01

    The interactions among the variables that affect surface wettability make it very complex to predict the contact angles and sliding angles of liquid drops. In this paper, artificial neural networks were used to develop reliable models for predicting these angles. The experimental data were divided into training data and testing data. Optimum models were developed by using the training data with a feed-forward neural network structure and particle swarm optimization for training the neural-network-based models. The obtained results showed that the regression indices of the proposed models for the contact angles and sliding angles are 0.9874 and 0.9920, respectively. As these values are close to unity, they indicate the reliable performance of the models. It can also be inferred from the results that the proposed models perform more reliably than multi-layer perceptron and radial basis function based models.

  10. Human Age Recognition by Electrocardiogram Signal Based on Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Dasgupta, Hirak

    2016-12-01

    The objective of this work is to build a neural network function approximation model to estimate human age from the electrocardiogram (ECG) signal. The input vector of the neural network consists of the Katz fractal dimension of the ECG signal, the frequencies in the QRS complex, sex (male or female, represented by a numeric constant), and the average distance between successive R-R peaks of a particular ECG signal. The QRS complex was detected with a short-time Fourier transform algorithm. Successive R peaks were detected by first cutting the signal into periods with an auto-correlation method and then finding the absolute maximum in each period. The neural network used in this problem consists of two layers, with sigmoid neurons in the input layer and a linear neuron in the output layer. The results show means of errors of -0.49, 1.03 and 0.79 years and standard deviations of errors of 1.81, 1.77 and 2.70 years during training, cross validation and testing with unknown data sets, respectively.
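
    One of the input features above, the Katz fractal dimension, is simple enough to sketch directly; the version below follows Katz's usual formula for an evenly sampled 1-D waveform, and the test signals are synthetic, not ECG data.

        import numpy as np

        def katz_fd(signal):
            """Katz fractal dimension of an evenly sampled 1-D waveform:
            FD = log10(n) / (log10(n) + log10(d / L)), with L the total curve
            length, d the maximal distance from the first sample, and n = N - 1."""
            y = np.asarray(signal, dtype=float)
            x = np.arange(len(y), dtype=float)
            L = np.hypot(np.diff(x), np.diff(y)).sum()
            d = np.hypot(x - x[0], y - y[0]).max()
            n = len(y) - 1
            return np.log10(n) / (np.log10(n) + np.log10(d / L))

        t = np.linspace(0.0, 1.0, 500)
        clean = np.sin(2 * np.pi * 5 * t)
        noisy = clean + 0.5 * np.random.default_rng(1).standard_normal(500)
        print(katz_fd(clean), katz_fd(noisy))   # the noisier trace yields the larger dimension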

  11. Efficient implementation of neural network deinterlacing

    NASA Astrophysics Data System (ADS)

    Seo, Guiwon; Choi, Hyunsoo; Lee, Chulhee

    2009-02-01

    Interlaced scanning has been widely used in most broadcasting systems. However, there are some undesirable artifacts such as jagged patterns, flickering, and line twitters. Moreover, most recent TV monitors utilize flat panel display technologies such as LCD or PDP monitors and these monitors require progressive formats. Consequently, the conversion of interlaced video into progressive video is required in many applications and a number of deinterlacing methods have been proposed. Recently deinterlacing methods based on neural network have been proposed with good results. On the other hand, with high resolution video contents such as HDTV, the amount of video data to be processed is very large. As a result, the processing time and hardware complexity become an important issue. In this paper, we propose an efficient implementation of neural network deinterlacing using polynomial approximation of the sigmoid function. Experimental results show that these approximations provide equivalent performance with a considerable reduction of complexity. This implementation of neural network deinterlacing can be efficiently incorporated in HW implementation.
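
    A minimal sketch of the kind of polynomial substitution described above: fit a low-order polynomial to the logistic sigmoid on a bounded interval and clamp inputs outside it. The interval and polynomial order here are arbitrary choices, not the ones used in the paper.

        import numpy as np

        xs = np.linspace(-6.0, 6.0, 1000)
        sigmoid = 1.0 / (1.0 + np.exp(-xs))
        coeffs = np.polyfit(xs, sigmoid, deg=5)        # 5th-order least-squares fit
        approx = np.polyval(coeffs, xs)
        print("max abs error on [-6, 6]:", float(np.max(np.abs(approx - sigmoid))))

        def poly_sigmoid(x):
            # Clamp inputs: outside the fitted interval the polynomial diverges.
            return np.polyval(coeffs, np.clip(x, -6.0, 6.0))

        print(poly_sigmoid(np.array([-10.0, 0.0, 2.0, 10.0])))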

  12. Artificial neural network in cosmic landscape

    NASA Astrophysics Data System (ADS)

    Liu, Junyu

    2017-12-01

    In this paper we propose that the artificial neural network, the basis of machine learning, is useful for generating the inflationary landscape from a cosmological point of view. Traditional numerical simulations of a global cosmic landscape typically require exponential complexity when the number of fields is large. However, a basic application of an artificial neural network could solve the problem, based on the universal approximation theorem for the multilayer perceptron. A toy model of inflation with multiple light fields is investigated numerically as an example of such an application.

  13. Blur identification by multilayer neural network based on multivalued neurons.

    PubMed

    Aizenberg, Igor; Paliy, Dmitriy V; Zurada, Jacek M; Astola, Jaakko T

    2008-05-01

    A multilayer neural network based on multivalued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific distinguishing features. Its backpropagation learning algorithm is derivative-free. The functionality of the MLMVN is superior to that of traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the target mapping make it possible to model complex problems using simpler networks. In this paper, the MLMVN is used to identify both the type and the parameters of the point spread function, whose precise identification is of crucial importance for image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones.

  14. Dynamic Neural Networks Supporting Memory Retrieval

    PubMed Central

    St. Jacques, Peggy L.; Kragel, Philip A.; Rubin, David C.

    2011-01-01

    How do separate neural networks interact to support complex cognitive processes such as remembrance of the personal past? Autobiographical memory (AM) retrieval recruits a consistent pattern of activation that potentially comprises multiple neural networks. However, it is unclear how such large-scale neural networks interact and are modulated by properties of the memory retrieval process. In the present functional MRI (fMRI) study, we combined independent component analysis (ICA) and dynamic causal modeling (DCM) to understand the neural networks supporting AM retrieval. ICA revealed four task-related components consistent with the previous literature: 1) Medial Prefrontal Cortex (PFC) Network, associated with self-referential processes, 2) Medial Temporal Lobe (MTL) Network, associated with memory, 3) Frontoparietal Network, associated with strategic search, and 4) Cingulooperculum Network, associated with goal maintenance. DCM analysis revealed that the medial PFC network drove activation within the system, consistent with the importance of this network to AM retrieval. Additionally, memory accessibility and recollection uniquely altered connectivity between these neural networks. Recollection modulated the influence of the medial PFC on the MTL network during elaboration, suggesting that greater connectivity among subsystems of the default network supports greater re-experience. In contrast, memory accessibility modulated the influence of frontoparietal and MTL networks on the medial PFC network, suggesting that ease of retrieval involves greater fluency among the multiple networks contributing to AM. These results show the integration between neural networks supporting AM retrieval and the modulation of network connectivity by behavior. PMID:21550407

  15. Modular representation of layered neural networks.

    PubMed

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.
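
    The community-detection step described above can be illustrated with a generic graph-analysis sketch: build a graph whose nodes are units and whose edges reflect connection strengths, then extract communities and score the partition with a modularity index. The synthetic planted-partition graph and the greedy algorithm below are stand-ins, not the paper's procedure.

        import networkx as nx
        from networkx.algorithms import community

        # Toy stand-in: a graph with three planted groups of 10 nodes each; in the
        # paper the nodes would be trained units and the edges their connections.
        G = nx.planted_partition_graph(3, 10, p_in=0.6, p_out=0.05, seed=1)

        communities = community.greedy_modularity_communities(G)
        Q = community.modularity(G, communities)
        print(len(communities), "communities, modularity Q =", round(Q, 3))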

  16. An Application to the Prediction of LOD Change Based on General Regression Neural Network

    NASA Astrophysics Data System (ADS)

    Zhang, X. H.; Wang, Q. J.; Zhu, J. J.; Zhang, H.

    2011-07-01

    Traditional prediction of the LOD (length of day) change was based on linear models, such as the least square model and the autoregressive technique, etc. Due to the complex non-linear features of the LOD variation, the performances of the linear model predictors are not fully satisfactory. This paper applies a non-linear neural network - general regression neural network (GRNN) model to forecast the LOD change, and the results are analyzed and compared with those obtained with the back propagation neural network and other models. The comparison shows that the performance of the GRNN model in the prediction of the LOD change is efficient and feasible.
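
    The GRNN named above is, at its core, a Gaussian-kernel weighted average of stored training targets (Specht's formulation), so a working sketch fits in a few lines. The synthetic series below merely stands in for LOD values, and the smoothing parameter sigma is an arbitrary choice.

        import numpy as np

        def grnn_predict(x_train, y_train, x_query, sigma=0.1):
            """General regression neural network (Specht): a Gaussian-kernel
            weighted average of the training targets; sigma is the only tuned
            parameter."""
            x_train = np.atleast_2d(x_train)
            x_query = np.atleast_2d(x_query)
            d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            return (w @ np.asarray(y_train)) / w.sum(axis=1)

        # Toy usage on a synthetic 1-D series standing in for LOD values (not real data).
        t = np.linspace(0.0, 1.0, 200)[:, None]
        lod = np.sin(6 * np.pi * t[:, 0]) + 0.05 * np.random.default_rng(0).standard_normal(200)
        print(grnn_predict(t[:150], lod[:150], t[150:], sigma=0.05)[:5])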

  17. Engineering-Aligned 3D Neural Circuit in Microfluidic Device.

    PubMed

    Bang, Seokyoung; Na, Sangcheol; Jang, Jae Myung; Kim, Jinhyun; Jeon, Noo Li

    2016-01-07

    The brain is one of the most important and complex organs in the human body. Although various neural network models have been proposed for in vitro 3D neuronal networks, it has been difficult to mimic the functional and structural complexity of the in vivo neural circuit. Here, a microfluidic model of a simplified 3D neural circuit is reported. First, the microfluidic device is filled with Matrigel and continuous flow is delivered across the device during gelation. The fluidic flow aligns the extracellular matrix (ECM) components along the flow direction. Following the alignment of ECM fibers, neurites of primary rat cortical neurons are grown into the Matrigel at the average speed of 250 μm d(-1) and form axon bundles approximately 1500 μm in length at 6 days in vitro (DIV). Additionally, neural networks are developed from presynaptic to postsynaptic neurons at 14 DIV. The establishment of aligned 3D neural circuits is confirmed with the immunostaining of PSD-95 and synaptophysin and the observation of calcium signal transmission. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Simulator for neural networks and action potentials.

    PubMed

    Baxter, Douglas A; Byrne, John H

    2007-01-01

    A key challenge for neuroinformatics is to devise methods for representing, accessing, and integrating vast amounts of diverse and complex data. A useful approach to represent and integrate complex data sets is to develop mathematical models [Arbib (The Handbook of Brain Theory and Neural Networks, pp. 741-745, 2003); Arbib and Grethe (Computing the Brain: A Guide to Neuroinformatics, 2001); Ascoli (Computational Neuroanatomy: Principles and Methods, 2002); Bower and Bolouri (Computational Modeling of Genetic and Biochemical Networks, 2001); Hines et al. (J. Comput. Neurosci. 17, 7-11, 2004); Shepherd et al. (Trends Neurosci. 21, 460-468, 1998); Sivakumaran et al. (Bioinformatics 19, 408-415, 2003); Smolen et al. (Neuron 26, 567-580, 2000); Vadigepalli et al. (OMICS 7, 235-252, 2003)]. Models of neural systems provide quantitative and modifiable frameworks for representing data and analyzing neural function. These models can be developed and solved using neurosimulators. One such neurosimulator is simulator for neural networks and action potentials (SNNAP) [Ziv (J. Neurophysiol. 71, 294-308, 1994)]. SNNAP is a versatile and user-friendly tool for developing and simulating models of neurons and neural networks. SNNAP simulates many features of neuronal function, including ionic currents and their modulation by intracellular ions and/or second messengers, and synaptic transmission and synaptic plasticity. SNNAP is written in Java and runs on most computers. Moreover, SNNAP provides a graphical user interface (GUI) and does not require programming skills. This chapter describes several capabilities of SNNAP and illustrates methods for simulating neurons and neural networks. SNNAP is available at http://snnap.uth.tmc.edu .

  19. Face recognition: a convolutional neural-network approach.

    PubMed

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  20. Chimera states in brain networks: Empirical neural vs. modular fractal connectivity

    NASA Astrophysics Data System (ADS)

    Chouzouris, Teresa; Omelchenko, Iryna; Zakharova, Anna; Hlinka, Jaroslav; Jiruska, Premysl; Schöll, Eckehard

    2018-04-01

    Complex spatiotemporal patterns, called chimera states, consist of coexisting coherent and incoherent domains and can be observed in networks of coupled oscillators. The interplay of synchrony and asynchrony in complex brain networks is an important aspect in studies of both the brain function and disease. We analyse the collective dynamics of FitzHugh-Nagumo neurons in complex networks motivated by its potential application to epileptology and epilepsy surgery. We compare two topologies: an empirical structural neural connectivity derived from diffusion-weighted magnetic resonance imaging and a mathematically constructed network with modular fractal connectivity. We analyse the properties of chimeras and partially synchronized states and obtain regions of their stability in the parameter planes. Furthermore, we qualitatively simulate the dynamics of epileptic seizures and study the influence of the removal of nodes on the network synchronizability, which can be useful for applications to epileptic surgery.
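
    For orientation, the FitzHugh-Nagumo dynamics named above can be simulated with a few lines of explicit Euler integration. The ring of diffusively coupled oscillators below is a deliberately simple toy topology, nothing like the empirical or modular-fractal connectivities (or the chimera analysis) of the paper; all parameter values are illustrative guesses.

        import numpy as np

        def simulate_fhn_ring(n=50, coupling=0.1, steps=20000, dt=0.01, eps=0.05, a=0.5):
            """Euler simulation of diffusively coupled FitzHugh-Nagumo oscillators
            on a ring: du/dt = u - u^3/3 - v + coupling, dv/dt = eps * (u + a)."""
            rng = np.random.default_rng(2)
            u = rng.uniform(-2, 2, n)       # activator variable
            v = rng.uniform(-1, 1, n)       # recovery variable
            for _ in range(steps):
                lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u   # nearest-neighbour coupling
                du = u - u ** 3 / 3 - v + coupling * lap
                dv = eps * (u + a)
                u, v = u + dt * du, v + dt * dv
            return u, v

        u, v = simulate_fhn_ring()
        print(np.round(u[:10], 2))          # a snapshot of the activator across ten nodes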

  1. Analysis of complex neural circuits with nonlinear multidimensional hidden state models

    PubMed Central

    Friedman, Alexander; Slocum, Joshua F.; Tyulmankov, Danil; Gibb, Leif G.; Altshuler, Alex; Ruangwises, Suthee; Shi, Qinru; Toro Arana, Sebastian E.; Beck, Dirk W.; Sholes, Jacquelyn E. C.; Graybiel, Ann M.

    2016-01-01

    A universal need in understanding complex networks is the identification of individual information channels and their mutual interactions under different conditions. In neuroscience, our premier example, networks made up of billions of nodes dynamically interact to bring about thought and action. Granger causality is a powerful tool for identifying linear interactions, but handling nonlinear interactions remains an unmet challenge. We present a nonlinear multidimensional hidden state (NMHS) approach that achieves interaction strength analysis and decoding of networks with nonlinear interactions by including latent state variables for each node in the network. We compare NMHS to Granger causality in analyzing neural circuit recordings and simulations, improvised music, and sociodemographic data. We conclude that NMHS significantly extends the scope of analyses of multidimensional, nonlinear networks, notably in coping with the complexity of the brain. PMID:27222584

  2. Artificial and Bayesian Neural Networks

    PubMed

    Korhani Kangi, Azam; Bahrampour, Abbas

    2018-02-26

    Introduction and purpose: In recent years, the use of neural networks, which require no prior assumptions, to investigate prognosis in survival data has increased. Artificial neural networks (ANN) use networks of simple, interconnected processing units, inspired by the human brain, to solve problems. Bayesian neural networks (BNN) constitute a neural-based approach to modeling and non-linearization of complex issues using special algorithms and statistical methods. Gastric cancer incidence is the first and third ranking for men and women in Iran, respectively. The aim of the present study was to assess the value of an artificial neural network and a Bayesian neural network for modeling and predicting the probability of gastric cancer patient death. Materials and Methods: In this study, we used information on 339 patients aged from 20 to 90 years old with a positive gastric cancer diagnosis, referred to Afzalipoor and Shahid Bahonar Hospitals in Kerman City from 2001 to 2015. A three-layer perceptron neural network (ANN) and a Bayesian neural network (BNN) were used for predicting the probability of mortality using the available data. To investigate differences between the models, sensitivity, specificity, accuracy and the area under receiver operating characteristic curves (AUROCs) were generated. Results: In this study, the sensitivity and specificity of the artificial neural network and Bayesian neural network models were 0.882, 0.903 and 0.954, 0.909, respectively. Prediction accuracy and the area under the ROC curve for the two models were 0.891, 0.944 and 0.935, 0.961. The age at diagnosis of gastric cancer was most important for predicting survival, followed by tumor grade, morphology, gender, smoking history, opium consumption, receiving chemotherapy, presence of metastasis, tumor stage, receiving radiotherapy, and being resident in a village. Conclusion: The findings of the present study indicated that the Bayesian neural network is preferable to an artificial neural network for predicting the survival of gastric cancer patients in Iran. Creative Commons Attribution License

  3. Neural coding in graphs of bidirectional associative memories.

    PubMed

    Bouchain, A David; Palm, Günther

    2012-01-24

    In recent years we have developed large neural network models for the realization of complex cognitive tasks in a neural network architecture that resembles the network of the cerebral cortex. We have used networks of several cortical modules that contain two populations of neurons (one excitatory, one inhibitory). The excitatory populations in these so-called "cortical networks" are organized as a graph of Bidirectional Associative Memories (BAMs), where edges of the graph correspond to BAMs connecting two neural modules and nodes of the graph correspond to excitatory populations with associative feedback connections (and inhibitory interneurons). The neural code in each of these modules consists essentially of the firing pattern of the excitatory population, where mainly it is the subset of active neurons that codes the contents to be represented. The overall activity can be used to distinguish different properties of the represented patterns, which we need to distinguish and control when performing complex tasks like language understanding with these cortical networks. The most important pattern properties or situations are: exactly fitting or matching input, incomplete information or a partially matching pattern, superposition of several patterns, conflicting information, and new information that is to be learned. We show simple simulations of these situations in one area or module and discuss how to distinguish these situations based on the overall internal activation of the module. This article is part of a Special Issue entitled "Neural Coding". Copyright © 2011 Elsevier B.V. All rights reserved.
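
    A bare-bones bidirectional associative memory, for readers who have not met the construct: the weight matrix is a Hebbian sum of outer products between associated bipolar patterns, and recall bounces activity between the two layers until it settles. With few, weakly correlated patterns this usually recovers the stored pair; the random toy patterns below are placeholders, not the cortical representations discussed in the record.

        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.choice([-1, 1], size=(2, 32))   # two bipolar patterns in module A
        Y = rng.choice([-1, 1], size=(2, 24))   # their associates in module B
        W = X.T @ Y                             # Hebbian outer-product learning

        def bipolar(h):
            return np.where(h >= 0, 1, -1)

        def bam_recall(x, iters=5):
            """Bounce activity between the two layers until it (usually) settles."""
            y = bipolar(x @ W)
            for _ in range(iters):
                x = bipolar(y @ W.T)
                y = bipolar(x @ W)
            return x, y

        noisy = X[0] * np.where(rng.random(32) < 0.1, -1, 1)   # flip roughly 10% of the bits
        x_rec, y_rec = bam_recall(noisy)
        print(np.array_equal(x_rec, X[0]), np.array_equal(y_rec, Y[0]))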

  4. International Neural Network Society Annual Meeting (1994) Held in San Diego, California on 5-9 June 1994. Volume 3.

    DTIC Science & Technology

    1994-06-09

    Excerpt from the proceedings table of contents and abstracts, including contributions such as "Competitive Neural Nets Speed Complex Fluid Flow Calculations" (T. Long, E. Hanzevack), "Neural Networks for Steam Boiler MIMO Modeling and Advisory Control", and "The Cochlear Nucleus and Primary Cortex as a Sequence of Distributed Neural Filters in Phoneme Perception" (J. Antrobus, C. Tarshish, S. ...). A further fragment describes a propulsion linear model with a fuel flow actuator modelled as a linear second-order system with position and rate limits, and a thrust vectoring actuator.

  5. Devices and circuits for nanoelectronic implementation of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Turel, Ozgur

    Biological neural networks perform complicated information processing tasks at speeds better than conventional computers based on conventional algorithms. This has inspired researchers to look into the way these networks function, and to propose artificial networks that mimic their behavior. Unfortunately, most artificial neural networks, either software or hardware, do not provide either the speed or the complexity of a human brain. Nanoelectronics, with the high density and low power dissipation it provides, may be used in developing more efficient artificial neural networks. This work consists of two major contributions in this direction. The first is the proposal of the CMOL concept, hybrid CMOS-molecular hardware [1-8]. CMOL may circumvent most of the problems posed by molecular devices, such as low yield, yet provide high active device density, ~10^12/cm^2. The second contribution is CrossNets, artificial neural networks that are based on CMOL. We showed that CrossNets, with their fault tolerance and exceptional speed (~4 to 6 orders of magnitude faster than biological neural networks), can perform any task any artificial neural network can perform. Moreover, there is hope that if their integration scale is increased to that of the human cerebral cortex (~10^10 neurons and ~10^14 synapses), they may be capable of performing more advanced tasks.

  6. Re-Evaluation of the AASHTO-Flexible Pavement Design Equation with Neural Network Modeling

    PubMed Central

    Tiğdemir, Mesut

    2014-01-01

    Here we establish that equivalent single-axle load values can be estimated using artificial neural networks without the complex design equation of the American Association of State Highway and Transportation Officials (AASHTO). More importantly, we find that the neural network model provides coefficients from which the actual load values can be obtained using the AASHTO design values. Thus, those design traffic values that might result in deterioration can be better calculated using the neural network model than with the AASHTO design equation. The artificial neural network method is used for this purpose. The existing AASHTO flexible pavement design equation does not currently predict the pavement performance of the strategic highway research program (Long Term Pavement Performance studies) test sections very accurately, and typically over-estimates the number of equivalent single axle loads needed to cause a measured loss of the present serviceability index. Here we aimed to demonstrate that the proposed neural network model can more accurately represent the load values data, compared with the performance of the AASHTO formula. It is concluded that the neural network may be an appropriate tool for the development of data-based nonparametric models of pavement performance. PMID:25397962

  7. Re-evaluation of the AASHTO-flexible pavement design equation with neural network modeling.

    PubMed

    Tiğdemir, Mesut

    2014-01-01

    Here we establish that equivalent single-axle load values can be estimated using artificial neural networks without the complex design equation of the American Association of State Highway and Transportation Officials (AASHTO). More importantly, we find that the neural network model provides coefficients from which the actual load values can be obtained using the AASHTO design values. Thus, those design traffic values that might result in deterioration can be better calculated using the neural network model than with the AASHTO design equation. The artificial neural network method is used for this purpose. The existing AASHTO flexible pavement design equation does not currently predict the pavement performance of the strategic highway research program (Long Term Pavement Performance studies) test sections very accurately, and typically over-estimates the number of equivalent single axle loads needed to cause a measured loss of the present serviceability index. Here we aimed to demonstrate that the proposed neural network model can more accurately represent the load values data, compared with the performance of the AASHTO formula. It is concluded that the neural network may be an appropriate tool for the development of data-based nonparametric models of pavement performance.

  8. Cloud Classification in Polar and Desert Regions and Smoke Classification from Biomass Burning Using a Hierarchical Neural Network

    NASA Technical Reports Server (NTRS)

    Alexander, June; Corwin, Edward; Lloyd, David; Logar, Antonette; Welch, Ronald

    1996-01-01

    This research focuses on a new neural network scene classification technique. The task is to identify scene elements in Advanced Very High Resolution Radiometer (AVHRR) data from three scene types: polar, desert and smoke from biomass burning in South America (smoke). The ultimate goal of this research is to design and implement a computer system which will identify the clouds present on a whole-Earth satellite view as a means of tracking global climate changes. Previous research has reported results for rule-based systems (Tovinkere et al. 1992, 1993), for standard back propagation (Watters et al. 1993), and for a hierarchical approach (Corwin et al. 1994) for polar data. This research uses a hierarchical neural network with don't care conditions and applies this technique to complex scenes. A hierarchical neural network consists of a switching network and a collection of leaf networks. The idea of the hierarchical neural network is that it is a simpler task to classify a certain pattern from a subset of patterns than it is to classify a pattern from the entire set. Therefore, the first task is to cluster the classes into groups. The switching, or decision network, performs an initial classification by selecting a leaf network. The leaf networks contain a reduced set of similar classes, and it is in the various leaf networks that the actual classification takes place. The grouping of classes in the various leaf networks is determined by applying an iterative clustering algorithm. Several clustering algorithms were investigated, but due to the size of the data sets, the exhaustive search algorithms were eliminated. A heuristic approach using a confusion matrix from a lightly trained neural network provided the basis for the clustering algorithm. Once the clusters have been identified, the hierarchical network can be trained. The approach of using don't care nodes results from the difficulty in generating extremely complex surfaces in order to separate one class from all of the others. This approach finds pairwise separating surfaces and forms the more complex separating surface from combinations of simpler surfaces. This technique both reduces training time and improves accuracy over the previously reported results. Accuracies of 97.47%, 95.70%, and 99.05% were achieved for the polar, desert and smoke data sets.

  9. Functional neural networks of honesty and dishonesty in children: Evidence from graph theory analysis.

    PubMed

    Ding, Xiao Pan; Wu, Si Jia; Liu, Jiangang; Fu, Genyue; Lee, Kang

    2017-09-21

    The present study examined how different brain regions interact with each other during spontaneous honest vs. dishonest communication. More specifically, we took a complex network approach based on graph theory to analyze neural response data when children are spontaneously engaged in honest or dishonest acts. Fifty-nine right-handed children between 7 and 12 years of age participated in the study. They lied or told the truth of their own volition. We found that lying decreased both the global and local efficiencies of children's functional neural network. This finding, for the first time, suggests that lying disrupts the efficiency of children's cortical network functioning. Further, it suggests that graph-theory-based network analysis is a viable approach to studying the neural development of deception.
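
    The two measures reported above, global and local efficiency, are standard graph metrics; the sketch below computes them with NetworkX on a random small-world surrogate graph (not the children's fMRI networks) and shows how removing edges typically lowers both.

        import networkx as nx

        G = nx.watts_strogatz_graph(n=90, k=6, p=0.1, seed=0)   # 90 surrogate "regions"

        print("global efficiency:", round(nx.global_efficiency(G), 3))
        print("local efficiency: ", round(nx.local_efficiency(G), 3))

        # Removing edges (e.g., dropping weaker connections) typically lowers both measures:
        G.remove_edges_from(list(G.edges())[: G.number_of_edges() // 3])
        print("after edge removal:", round(nx.global_efficiency(G), 3),
              round(nx.local_efficiency(G), 3))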

  10. Language Networks as Complex Systems

    ERIC Educational Resources Information Center

    Lee, Max Kueiming; Ou, Sheue-Jen

    2008-01-01

    Starting in the late eighties, with a growing discontent with analytical methods in science and the growing power of computers, researchers began to study complex systems such as living organisms, evolution of genes, biological systems, brain neural networks, epidemics, ecology, economy, social networks, etc. In the early nineties, the research…

  11. Reward-Modulated Hebbian Plasticity as Leverage for Partially Embodied Control in Compliant Robotics

    PubMed Central

    Burms, Jeroen; Caluwaerts, Ken; Dambre, Joni

    2015-01-01

    In embodied computation (or morphological computation), part of the complexity of motor control is offloaded to the body dynamics. We demonstrate that a simple Hebbian-like learning rule can be used to train systems with (partial) embodiment, and can be extended outside of the scope of traditional neural networks. To this end, we apply the learning rule to optimize the connection weights of recurrent neural networks with different topologies and for various tasks. We then apply this learning rule to a simulated compliant tensegrity robot by optimizing static feedback controllers that directly exploit the dynamics of the robot body. This leads to partially embodied controllers, i.e., hybrid controllers that naturally integrate the computations that are performed by the robot body into a neural network architecture. Our results demonstrate the universal applicability of reward-modulated Hebbian learning. Furthermore, they demonstrate the robustness of systems trained with the learning rule. This study strengthens our belief that compliant robots should or can be seen as computational units, instead of dumb hardware that needs a complex controller. This link between compliant robotics and neural networks is also the main reason for our search for simple universal learning rules for both neural networks and robotics. PMID:26347645
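
    A minimal numpy sketch of a reward-modulated Hebbian update of the kind described above; the learning rate, exploration noise, and running reward baseline are generic choices and not the exact rule or network used by the authors.

      import numpy as np

      rng = np.random.default_rng(1)
      n_in, n_out = 10, 3
      W = rng.normal(scale=0.1, size=(n_out, n_in))    # weights being learned
      baseline = 0.0                                   # running average of past rewards
      eta = 0.05

      def rm_hebb_step(W, baseline, x, reward):
          """One reward-modulated Hebbian update: dW ~ (reward - baseline) * post * pre."""
          noise = rng.normal(scale=0.1, size=n_out)    # exploratory perturbation of the output
          y = np.tanh(W @ x) + noise
          W = W + eta * (reward - baseline) * np.outer(y, x)
          baseline = 0.9 * baseline + 0.1 * reward
          return W, baseline, y

      # usage (illustrative): W, baseline, y = rm_hebb_step(W, baseline, np.ones(n_in), reward=1.0)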

  12. Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Ventral Stream.

    PubMed

    Güçlü, Umut; van Gerven, Marcel A J

    2015-07-08

    Converging evidence suggests that the primate ventral visual pathway encodes increasingly complex stimulus features in downstream areas. We quantitatively show that there indeed exists an explicit gradient for feature complexity in the ventral pathway of the human brain. This was achieved by mapping thousands of stimulus features of increasing complexity across the cortical sheet using a deep neural network. Our approach also revealed a fine-grained functional specialization of downstream areas of the ventral stream. Furthermore, it allowed decoding of representations from human brain activity at an unsurpassed degree of accuracy, confirming the quality of the developed approach. Stimulus features that successfully explained neural responses indicate that population receptive fields were explicitly tuned for object categorization. This provides strong support for the hypothesis that object categorization is a guiding principle in the functional organization of the primate ventral stream. Copyright © 2015 the authors.

  13. Pattern recognition neural-net by spatial mapping of biology visual field

    NASA Astrophysics Data System (ADS)

    Lin, Xin; Mori, Masahiko

    2000-05-01

    The method of spatial mapping found in the biological visual field is applied to artificial neural networks for pattern recognition. By a coordinate transform called the complex-logarithm mapping, followed by a Fourier transform, the input images are converted into scale-, rotation-, and shift-invariant patterns, and then fed into a multilayer neural network for learning and recognition. The results of a computer simulation and an optical experimental system are described.
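
    The sketch below illustrates, under simplifying assumptions, how a complex-logarithm (log-polar) resampling combined with Fourier magnitudes can yield features insensitive to shift, scale, and rotation; the grid sizes and interpolation order are illustrative and not taken from the paper.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def invariant_features(img, n_r=64, n_theta=64):
          """|FFT| removes shifts; scale/rotation become shifts on the log-polar grid,
          which a second |FFT| removes."""
          spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))        # translation-invariant
          cy, cx = np.array(spectrum.shape) / 2.0
          r_max = min(cy, cx)
          rho = np.exp(np.linspace(0, np.log(r_max), n_r))            # complex-log radial axis
          theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
          rr, tt = np.meshgrid(rho, theta, indexing="ij")
          coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
          logpolar = map_coordinates(spectrum, coords, order=1)        # resample on log-polar grid
          return np.abs(np.fft.fft2(logpolar))                         # scale/rotation-invariant

      # usage: feats = invariant_features(np.random.rand(128, 128)).ravel()  # feed to an MLP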

  14. Practical approximation method for firing-rate models of coupled neural networks with correlated inputs

    NASA Astrophysics Data System (ADS)

    Barreiro, Andrea K.; Ly, Cheng

    2017-08-01

    Rapid experimental advances now enable simultaneous electrophysiological recording of neural activity at single-cell resolution across large regions of the nervous system. Models of this neural network activity will necessarily increase in size and complexity, thus increasing the computational cost of simulating them and the challenge of analyzing them. Here we present a method to approximate the activity and firing statistics of a general firing rate network model (of the Wilson-Cowan type) subject to noisy correlated background inputs. The method requires solving a system of transcendental equations and is fast compared to Monte Carlo simulations of coupled stochastic differential equations. We implement the method with several examples of coupled neural networks and show that the results are quantitatively accurate even with moderate coupling strengths and an appreciable amount of heterogeneity in many parameters. This work should be useful for investigating how various neural attributes qualitatively affect the spiking statistics of coupled neural networks.
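
    As a toy illustration of solving the transcendental equations of a firing-rate (Wilson-Cowan-type) network, the sketch below finds a noise-free fixed point with scipy; the weights, transfer function, and external inputs are arbitrary illustrative values, and the correction terms for correlated noisy inputs derived in the paper are not included.

      import numpy as np
      from scipy.optimize import fsolve

      W = np.array([[0.8, -1.2],    # E->E, I->E   (illustrative coupling weights)
                    [1.0, -0.5]])   # E->I, I->I
      I_ext = np.array([0.6, 0.3])  # external drive to the E and I populations

      def F(u):
          return 1.0 / (1.0 + np.exp(-u))           # sigmoidal transfer function

      def fixed_point_eqs(r):
          return -r + F(W @ r + I_ext)              # dr/dt = 0  =>  r = F(W r + I_ext)

      r_star = fsolve(fixed_point_eqs, x0=np.array([0.5, 0.5]))
      print("steady-state rates:", r_star)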

  15. Robustness of a distributed neural network controller for locomotion in a hexapod robot

    NASA Technical Reports Server (NTRS)

    Chiel, Hillel J.; Beer, Randall D.; Quinn, Roger D.; Espenschied, Kenneth S.

    1992-01-01

    A distributed neural-network controller for locomotion, based on insect neurobiology, has been used to control a hexapod robot. How robust is this controller? Disabling any single sensor, effector, or central component did not prevent the robot from walking. Furthermore, statically stable gaits could be established using either sensor input or central connections. Thus, a complex interplay between central neural elements and sensor inputs is responsible for the robustness of the controller and its ability to generate a continuous range of gaits. These results suggest that biologically inspired neural-network controllers may be a robust method for robotic control.

  16. Neural network versus classical time series forecasting models

    NASA Astrophysics Data System (ADS)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

    Artificial neural networks (ANNs) have an advantage in time series forecasting because of their potential to solve complex forecasting problems: an ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network and of a classical time series forecasting method, namely the seasonal autoregressive integrated moving average (SARIMA) model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. Forecast accuracy was evaluated using mean absolute deviation, root mean square error, and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when the Box-Cox transformation was used as data preprocessing.
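
    A hedged sketch of the preprocessing-plus-ANN pipeline described above, using scipy's Box-Cox transform and a scikit-learn MLP on lagged values; the synthetic series, lag order, network size, and error metric are placeholders rather than the study's gold-price setup.

      import numpy as np
      from scipy.stats import boxcox
      from scipy.special import inv_boxcox
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(2)
      price = np.cumsum(rng.normal(0.1, 1.0, 500)) + 100.0     # stand-in for a gold price series

      y_bc, lam = boxcox(price)                                # Box-Cox preprocessing
      lags = 5
      X = np.column_stack([y_bc[i:len(y_bc) - lags + i] for i in range(lags)])
      t = y_bc[lags:]                                          # next value from previous 5

      split = int(0.8 * len(t))
      model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
      model.fit(X[:split], t[:split])

      pred = inv_boxcox(model.predict(X[split:]), lam)         # back-transform to price units
      actual = inv_boxcox(t[split:], lam)
      mape = np.mean(np.abs((actual - pred) / actual)) * 100
      print(f"MAPE: {mape:.2f}%")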

  17. Convolutional neural network for road extraction

    NASA Astrophysics Data System (ADS)

    Li, Junping; Ding, Yazhou; Feng, Fajie; Xiong, Baoyu; Cui, Weihong

    2017-11-01

    In this paper, a convolutional neural network with large input blocks and small output blocks was used to extract roads. To capture the complex road characteristics in the study area, a deep convolutional neural network, VGG19, was employed for road extraction. Based on an analysis of the characteristics of different input block sizes, output block sizes, and the resulting extraction quality, the votes of several deep convolutional neural networks were used as the final road prediction. The study image came from GF-2 panchromatic and multi-spectral fusion imagery of Yinchuan. The precision of road extraction was 91%. The experiments showed that model averaging can improve accuracy to some extent. At the same time, this paper gives some advice about the choice of input and output block sizes.
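
    The voting/model-averaging step mentioned above can be as simple as averaging per-pixel class probabilities from several trained networks; the sketch below shows only that aggregation step in numpy, with the probability maps as hypothetical inputs.

      import numpy as np

      def average_vote(prob_maps, threshold=0.5):
          """prob_maps: list of (H, W) road-probability maps from independently trained CNNs."""
          mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)   # soft vote (model averaging)
          return (mean_prob >= threshold).astype(np.uint8)           # final binary road mask

      # usage (illustrative): mask = average_vote([net_a_probs, net_b_probs, net_c_probs])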

  18. Neural complexity: A graph theoretic interpretation

    NASA Astrophysics Data System (ADS)

    Barnett, L.; Buckley, C. L.; Bullock, S.

    2011-04-01

    One of the central challenges facing modern neuroscience is to explain the ability of the nervous system to coherently integrate information across distinct functional modules in the absence of a central executive. To this end, Tononi [Proc. Natl. Acad. Sci. USA 91, 5033 (1994)] proposed a measure of neural complexity that purports to capture this property based on mutual information between complementary subsets of a system. Neural complexity, so defined, is one of a family of information theoretic metrics developed to measure the balance between the segregation and integration of a system's dynamics. One key question arising for such measures involves understanding how they are influenced by network topology. Sporns [Cereb. Cortex 10, 127 (2000)] employed numerical models in order to determine the dependence of neural complexity on the topological features of a network. However, a complete picture has yet to be established. While De Lucia [Phys. Rev. E 71, 016114 (2005)] made the first attempts at an analytical account of this relationship, their work utilized a formulation of neural complexity that, we argue, did not reflect the intuitions of the original work. In this paper we start by describing weighted connection matrices formed by applying a random continuous weight distribution to binary adjacency matrices. This allows us to derive an approximation for neural complexity in terms of the moments of the weight distribution and elementary graph motifs. In particular, we explicitly establish a dependency of neural complexity on cyclic graph motifs.
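
    For reference, one common way of writing this subset-based neural complexity (a standard textbook form, not the weighted-graph approximation derived in the paper) is

      C_N(X) = \sum_{k=1}^{\lfloor n/2 \rfloor} \left\langle I\!\left( X_j^{k} ;\, X \setminus X_j^{k} \right) \right\rangle_{j},

    where X_j^{k} ranges over the size-k subsets of the n-element system X, I(.;.) denotes mutual information between a subset and its complement, and \langle \cdot \rangle_j is the average over all such subsets.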

  19. Forecasting PM10 in metropolitan areas: Efficacy of neural networks.

    PubMed

    Fernando, H J S; Mammarella, M C; Grandoni, G; Fedele, P; Di Marco, R; Dimitrova, R; Hyde, P

    2012-04-01

    Deterministic photochemical air quality models are commonly used for regulatory management and planning of urban airsheds. These models are complex and computer intensive, and hence are prohibitively expensive for routine air quality predictions. Stochastic methods are becoming increasingly popular as an alternative; these relegate decision making to artificial intelligence based on neural networks, which are made of artificial neurons or 'nodes' capable of 'learning through training' via historic data. A neural network was used to predict particulate matter concentration at a regulatory monitoring site in Phoenix, Arizona; its development, efficacy as a predictive tool, and performance vis-à-vis a commonly used regulatory photochemical model are described in this paper. It is concluded that neural networks are much easier, quicker, and more economical to implement without compromising the accuracy of predictions. Neural networks can be used to develop rapid air quality warning systems based on a network of automated monitoring stations. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Determining geophysical properties from well log data using artificial neural networks and fuzzy inference systems

    NASA Astrophysics Data System (ADS)

    Chang, Hsien-Cheng

    Two novel synergistic systems consisting of artificial neural networks and fuzzy inference systems are developed to determine geophysical properties from well log data. These systems are employed to improve determination accuracy in carbonate rocks, which are generally more complex than siliciclastic rocks. One system, consisting of a single adaptive resonance theory (ART) neural network and three fuzzy inference systems (FISs), is used to determine the permeability category. The other system, which is composed of three ART neural networks and a single FIS, is employed to determine the lithofacies. The geophysical properties studied in this research, permeability category and lithofacies, are treated as categorical data. The permeability values are transformed into a "permeability category" to account for the effects of scale differences between core analyses and well logs, and of heterogeneity in the carbonate rocks. The ART neural networks dynamically cluster the input data sets into different groups. The FIS is used to incorporate geologic experts' knowledge, which is usually in linguistic form, into the systems. These synergistic systems thus provide viable alternative solutions to overcome the effects of heterogeneity, the uncertainties of carbonate rock depositional environments, and the scarcity of well log data. The results obtained in this research show promising improvements over backpropagation neural networks. For the permeability category, the prediction accuracies are 68.4% and 62.8% for the single-ART/multiple-FIS system and a single backpropagation neural network, respectively. For lithofacies, the prediction accuracies are 87.6%, 79%, and 62.8% for the multiple-ART/single-FIS system, a single ART neural network, and a single backpropagation neural network, respectively. The sensitivity analysis results show that the multiple-ART/single-FIS system and a single ART neural network exhibit the same matching trends in determining lithofacies. This research shows that the adaptive resonance theory neural networks enable decision-makers to clearly distinguish the importance of different pieces of data, which is useful in three-dimensional subsurface modeling. Geologic experts' knowledge can be easily applied and maintained by using the fuzzy inference systems.

  1. Neural Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Patrick I.

    2003-09-23

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals are fed into pattern recognition programs to try to identify which particles were produced, and to measure the energy and direction of these particles. Many techniques are used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-based processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit with Java Analysis Studio (JAS3), an application that allows data from any experiment to be analyzed. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]; "artificial" here means computer programs that carry out calculations during the learning process. In short, a neural network learns from representative examples. Perhaps the easiest way to describe how neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing information [2]. Each one of these cells acts as a simple processor, and it is the interaction of individual cells with one another that makes the complex abilities of the brain possible. In a neural network, the inputs to a neuron are processed by a propagation function that sums the values of all incoming data. The resulting value is then compared with a threshold and must exceed it for the neuron to produce an output. The activation function is the mathematical function a neuron uses to produce an output from its input value [8]. Figure 1 depicts this process. Neural networks usually have three layers: an input layer, a hidden layer, and an output layer; together these layers produce the final result of the network. A real-world example is a child associating the word "dog" with a picture. The child says "dog" and simultaneously looks at a picture of a dog. The input is the spoken word "dog", the hidden layer is the brain's processing, and the output is the category of the word "dog" based on the picture. This illustration describes how a neural network functions.
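
    A minimal illustration of the propagation-plus-activation step described above (a weighted sum of inputs compared against a threshold); the weights, threshold, and step activation are generic textbook choices and not part of the JAS3 tool kit.

      import numpy as np

      def neuron_output(inputs, weights, threshold=0.0):
          """Propagation function sums the weighted inputs; the activation
          (here a simple step) fires only if that sum exceeds the threshold."""
          net = np.dot(weights, inputs)           # propagation function
          return 1.0 if net > threshold else 0.0  # step activation

      # usage (illustrative): neuron_output(np.array([0.2, 0.7]), np.array([0.5, 0.9]), 0.4) -> 1.0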

  2. Analysis of structural patterns in the brain with the complex network approach

    NASA Astrophysics Data System (ADS)

    Maksimenko, Vladimir A.; Makarov, Vladimir V.; Kharchenko, Alexander A.; Pavlov, Alexey N.; Khramova, Marina V.; Koronovskii, Alexey A.; Hramov, Alexander E.

    2015-03-01

    In this paper we study mechanisms of phase synchronization in a model network of Van der Pol oscillators and in the neural network of the brain by considering macroscopic parameters of these networks. As the macroscopic characteristic of the model network we consider a summary signal produced by the oscillators. Similarly to the model simulations, we study EEG signals reflecting the macroscopic dynamics of the neural network. We show that the appearance of phase synchronization leads to an increased peak in the wavelet spectrum related to the dynamics of the synchronized oscillators. The observed correlation between the phase relations of individual elements and the macroscopic characteristics of the whole network provides a way to detect phase synchronization in neural networks in cases of normal and pathological activity.

  3. A novel neural-wavelet approach for process diagnostics and complex system modeling

    NASA Astrophysics Data System (ADS)

    Gao, Rong

    Neural networks have been effective in several engineering applications because of their learning abilities and robustness. However, certain shortcomings, such as slow convergence and local minima, are always associated with neural networks, especially when they are applied to highly nonlinear and non-stationary problems. These problems can be effectively alleviated by integrating a powerful new tool, wavelets, into conventional neural networks. The multi-resolution analysis and feature localization capabilities of the wavelet transform offer neural networks new possibilities for learning. The neural-wavelet network approach developed in this thesis enjoys a fast convergence rate with little likelihood of being caught in a local minimum. It combines the localization properties of wavelets with the learning abilities of neural networks. Two different testbeds are used to test the efficiency of the new approach. The first is magnetic flowmeter-based process diagnostics: here we extend previous work, which demonstrated that wavelet groups contain process information, to more general process diagnostics. A loop at the Applied Intelligent Systems Lab (AISL) is used for collecting and analyzing data with the neural-wavelet approach. The research is important for thermal-hydraulic processes in nuclear and other engineering fields. The neural-wavelet approach is also tested with data from the electric power grid. More specifically, it is used to perform short-term and mid-term prediction of power load demand. In addition, the feasibility of determining the type of load using the proposed neural-wavelet approach is also examined. The notion of a cross-scale product has been developed as an expedient yet reliable discriminator of loads. Theoretical issues involved in the integration of wavelets and neural networks are discussed and future work is outlined.

  4. Convergence analysis of sliding mode trajectories in multi-objective neural networks learning.

    PubMed

    Costa, Marcelo Azevedo; Braga, Antonio Padua; de Menezes, Benjamin Rodrigues

    2012-09-01

    The Pareto-optimality concept is used in this paper in order to represent a constrained set of solutions that are able to trade off the two main objective functions involved in supervised neural network learning: data-set error and network complexity. The neural network is described as a dynamic system having error and complexity as its state variables, and learning is presented as a process of controlling a learning trajectory in the resulting state space. In order to control the trajectories, sliding mode dynamics is imposed on the network. It is shown that arbitrary learning trajectories can be achieved by maintaining the sliding mode gains within their convergence intervals. Formal proofs of the convergence conditions are therefore presented. The concept of trajectory learning presented in this paper goes beyond the selection of a final state in the Pareto set, since that state can be reached through different trajectories, and states along the trajectory can be assessed individually against an additional objective function. Copyright © 2012 Elsevier Ltd. All rights reserved.
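
    In the spirit of the Pareto formulation above, the two competing objectives can be written in a generic form; the squared-error and squared-norm choices below are illustrative stand-ins for the paper's exact error and complexity functionals:

      \min_{\mathbf{w}} \; \bigl( E(\mathbf{w}),\; \Omega(\mathbf{w}) \bigr), \qquad
      E(\mathbf{w}) = \sum_{i=1}^{N} \bigl\| \mathbf{y}_i - f(\mathbf{x}_i;\mathbf{w}) \bigr\|^{2}, \qquad
      \Omega(\mathbf{w}) = \|\mathbf{w}\|^{2},

    so the Pareto set collects the weight vectors for which neither the data-set error E nor the network complexity \Omega can be decreased without increasing the other; trajectory learning then steers the pair (E, \Omega) through this error-complexity plane.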

  5. Functional complexity emerging from anatomical constraints in the brain: the significance of network modularity and rich-clubs

    NASA Astrophysics Data System (ADS)

    Zamora-López, Gorka; Chen, Yuhan; Deco, Gustavo; Kringelbach, Morten L.; Zhou, Changsong

    2016-12-01

    The large-scale structural ingredients of the brain and neural connectomes have been identified in recent years. These are, similar to the features found in many other real networks: the arrangement of brain regions into modules and the presence of highly connected regions (hubs) forming rich-clubs. Here, we examine how modules and hubs shape the collective dynamics on networks and we find that both ingredients lead to the emergence of complex dynamics. Comparing the connectomes of C. elegans, cats, macaques and humans to surrogate networks in which either modules or hubs are destroyed, we find that functional complexity always decreases in the perturbed networks. A comparison between simulated and empirically obtained resting-state functional connectivity indicates that the human brain, at rest, lies in a dynamical state that reflects the largest complexity its anatomical connectome can host. Last, we generalise the topology of neural connectomes into a new hierarchical network model that successfully combines modular organisation with rich-club forming hubs. This is achieved by centralising the cross-modular connections through a preferential attachment rule. Our network model hosts more complex dynamics than other hierarchical models widely used as benchmarks.

  6. Functional complexity emerging from anatomical constraints in the brain: the significance of network modularity and rich-clubs

    PubMed Central

    Zamora-López, Gorka; Chen, Yuhan; Deco, Gustavo; Kringelbach, Morten L.; Zhou, Changsong

    2016-01-01

    The large-scale structural ingredients of the brain and neural connectomes have been identified in recent years. These are, similar to the features found in many other real networks: the arrangement of brain regions into modules and the presence of highly connected regions (hubs) forming rich-clubs. Here, we examine how modules and hubs shape the collective dynamics on networks and we find that both ingredients lead to the emergence of complex dynamics. Comparing the connectomes of C. elegans, cats, macaques and humans to surrogate networks in which either modules or hubs are destroyed, we find that functional complexity always decreases in the perturbed networks. A comparison between simulated and empirically obtained resting-state functional connectivity indicates that the human brain, at rest, lies in a dynamical state that reflects the largest complexity its anatomical connectome can host. Last, we generalise the topology of neural connectomes into a new hierarchical network model that successfully combines modular organisation with rich-club forming hubs. This is achieved by centralising the cross-modular connections through a preferential attachment rule. Our network model hosts more complex dynamics than other hierarchical models widely used as benchmarks. PMID:27917958

  7. Bio-inspired spiking neural network for nonlinear systems control.

    PubMed

    Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M

    2018-08-01

    Spiking neural networks (SNNs) are the third generation of artificial neural networks and the closest approximation to biological neural networks. SNNs make use of temporal spike trains for inputs and outputs, allowing faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which the performance of controllers developed by conventional techniques is unsatisfactory or difficult to implement. SNN-based controllers exploit their ability for online learning and self-adaptation to evolve when transferred from simulations to the real world. The inherently binary and temporal way in which SNNs encode information facilitates their hardware implementation compared to analog neurons. Biological neural networks often require fewer neurons than other controllers based on artificial neural networks. In this work, these neuronal systems are imitated to control nonlinear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed. Particular attention has been paid to optimizing the structure and size of the neural network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to train the controller. The efficiency of the proposed network has been verified in two examples of dynamic system control. Simulations show that the proposed SNN-based control exhibits superior performance compared to other approaches based on neural networks and SNNs. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Design of neural network model-based controller in a fed-batch microbial electrolysis cell reactor for bio-hydrogen gas production

    NASA Astrophysics Data System (ADS)

    Azwar; Hussain, M. A.; Abdul-Wahab, A. K.; Zanil, M. F.; Mukhlishien

    2018-03-01

    One of the major challenges in bio-hydrogen production using a microbial electrolysis cell (MEC) is that the process is nonlinear and highly complex. This is mainly due to the presence of microbial interactions and highly complex phenomena in the system. This complexity makes the MEC system difficult to operate and control under optimal conditions. Thus, precise control is required for the MEC reactor, so that the amount of current required to produce hydrogen gas can be controlled according to the composition of the substrate in the reactor. In this work, two schemes for controlling the current and voltage of the MEC were evaluated: a PID controller and an inverse neural network (NN) controller. The comparative study was carried out under optimal conditions for the production of bio-hydrogen gas, wherein the controller output is based on the correlation of the optimal current and voltage to the MEC. Various simulation tests involving multiple set-point changes and disturbance rejection were evaluated, and the performances of both controllers are discussed. The neural-network-based controller results in a fast response time and less overshoot, while offset effects are minimal. In conclusion, the inverse neural network (NN)-based controller provides better control performance for the MEC system than the PID controller.

  9. Visual NNet: An Educational ANN's Simulation Environment Reusing Matlab Neural Networks Toolbox

    ERIC Educational Resources Information Center

    Garcia-Roselló, Emilio; González-Dacosta, Jacinto; Lado, Maria J.; Méndez, Arturo J.; Garcia Pérez-Schofield, Baltasar; Ferrer, Fátima

    2011-01-01

    Artificial Neural Networks (ANN's) are nowadays a common subject in different curricula of graduate and postgraduate studies. Due to the complex algorithms involved and the dynamic nature of ANN's, simulation software has been commonly used to teach this subject. This software has usually been developed specifically for learning purposes, because…

  10. TopologyNet: Topology based deep convolutional and multi-task neural networks for biomolecular property predictions

    PubMed Central

    2017-01-01

    Although deep learning approaches have had tremendous success in image, video and audio processing, computer vision, and speech recognition, their applications to three-dimensional (3D) biomolecular structural data sets have been hindered by the geometric and biological complexity. To address this problem we introduce the element-specific persistent homology (ESPH) method. ESPH represents 3D complex geometry by one-dimensional (1D) topological invariants and retains important biological information via a multichannel image-like representation. This representation reveals hidden structure-function relationships in biomolecules. We further integrate ESPH and deep convolutional neural networks to construct a multichannel topological neural network (TopologyNet) for the predictions of protein-ligand binding affinities and protein stability changes upon mutation. To overcome the deep learning limitations from small and noisy training sets, we propose a multi-task multichannel topological convolutional neural network (MM-TCNN). We demonstrate that TopologyNet outperforms the latest methods in the prediction of protein-ligand binding affinities, mutation induced globular protein folding free energy changes, and mutation induced membrane protein folding free energy changes. Availability: weilab.math.msu.edu/TDL/ PMID:28749969

  11. Passivity analysis of memristor-based impulsive inertial neural networks with time-varying delays.

    PubMed

    Wan, Peng; Jian, Jigui

    2018-03-01

    This paper focuses on delay-dependent passivity analysis for a class of memristive impulsive inertial neural networks with time-varying delays. By choosing a proper variable transformation, the memristive inertial neural networks can be rewritten as first-order differential equations. The memristive model presented here is regarded as a switching system rather than being treated with the theory of differential inclusions and set-valued maps. Based on matrix inequalities and the Lyapunov-Krasovskii functional method, several delay-dependent passivity conditions are obtained to ascertain the passivity of the addressed networks. In addition, the results obtained here contain, as special cases, those on the passivity of the addressed networks without impulse effects, and can also be generalized to other neural networks with more complex impulse interference. Finally, one numerical example is presented to show the validity of the obtained results. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Causal influence in neural systems: Reconciling mechanistic-reductionist and statistical perspectives. Comment on "Foundational perspectives on causality in large-scale brain networks" by M. Mannino & S.L. Bressler

    NASA Astrophysics Data System (ADS)

    Griffiths, John D.

    2015-12-01

    The modern understanding of the brain as a large, complex network of interacting elements is a natural consequence of the Neuron Doctrine [1,2] that has been bolstered in recent years by the tools and concepts of connectomics. In this abstracted, network-centric view, the essence of neural and cognitive function derives from the flows between network elements of activity and information - or, more generally, causal influence. The appropriate characterization of causality in neural systems, therefore, is a question at the very heart of systems neuroscience.

  13. Changes in the interaction of resting-state neural networks from adolescence to adulthood.

    PubMed

    Stevens, Michael C; Pearlson, Godfrey D; Calhoun, Vince D

    2009-08-01

    This study examined how the mutual interactions of functionally integrated neural networks during resting-state fMRI differed between adolescence and adulthood. Independent component analysis (ICA) was used to identify functionally connected neural networks in 100 healthy participants aged 12-30 years. Hemodynamic timecourses that represented integrated neural network activity were analyzed with tools that quantified system "causal density" estimates, which indexed the proportion of significant Granger causality relationships among system nodes. Mutual influences among networks decreased with age, likely reflecting stronger within-network connectivity and more efficient between-network influences with greater development. Supplemental tests showed that this normative age-related reduction in causal density was accompanied by fewer significant connections to and from each network, regional increases in the strength of functional integration within networks, and age-related reductions in the strength of numerous specific system interactions. The latter included paths between lateral prefrontal-parietal circuits and "default mode" networks. These results contribute to an emerging understanding that activity in widely distributed networks thought to underlie complex cognition influences activity in other networks. (c) 2009 Wiley-Liss, Inc.
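
    A sketch of the "causal density" idea described above (the proportion of significant pairwise Granger-causal relationships among network timecourses), assuming statsmodels is available; the random timecourses, lag order of 2, and 0.05 significance level are placeholders, and the study's ICA preprocessing is omitted.

      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(3)
      tc = rng.normal(size=(200, 5))          # stand-in for 5 network timecourses, 200 time points
      n = tc.shape[1]
      significant, tested = 0, 0

      for src in range(n):
          for dst in range(n):
              if src == dst:
                  continue
              # column order: does the 2nd column (src) Granger-cause the 1st (dst)?
              res = grangercausalitytests(np.column_stack([tc[:, dst], tc[:, src]]),
                                          maxlag=2, verbose=False)
              pval = res[2][0]["ssr_ftest"][1]   # p-value of the F-test at lag 2
              significant += pval < 0.05
              tested += 1

      print("causal density:", significant / tested)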

  14. Modeling neural circuits in Parkinson's disease.

    PubMed

    Psiha, Maria; Vlamos, Panayiotis

    2015-01-01

    Parkinson's disease (PD) is caused by abnormal neural activity of the basal ganglia, which are connected to the cerebral cortex at the brain surface through complex neural circuits. For a better understanding of the pathophysiological mechanisms of PD, it is important to identify the underlying PD neural circuits and to pinpoint the precise nature of the crucial aberrations in these circuits. In this paper, the general architecture of a hybrid Multilayer Perceptron (MLP) network for modeling the neural circuits in PD is presented. The main idea of the proposed approach is to divide the parkinsonian neural circuitry system into three discrete subsystems: the external stimuli subsystem, the life-threatening events subsystem, and the basal ganglia subsystem. The proposed model, which includes the key roles of the brain's neural circuits in PD, is based on both feed-back and feed-forward neural networks. Specifically, a three-layer MLP neural network with feedback in the second layer was designed. The feedback in the second layer simulates the dopaminergic modulatory effect of the substantia nigra pars compacta on the striatum.

  15. Complexities’ day-to-day dynamic evolution analysis and prediction for a Didi taxi trip network based on complex network theory

    NASA Astrophysics Data System (ADS)

    Zhang, Lin; Lu, Jian; Zhou, Jialin; Zhu, Jinqing; Li, Yunxuan; Wan, Qian

    2018-03-01

    Didi Dache is the most popular taxi order mobile app in China, providing an online taxi-hailing service. The large database obtained from this app can be used to analyze the day-to-day dynamic evolution of the complexity of the Didi taxi trip network (DTTN) at the level of complex network dynamics. First, this paper proposes data cleaning and modeling methods for expressing Nanjing's DTTN as a complex network. Second, three consecutive weeks of data are cleaned to establish 21 DTTNs based on the proposed big data processing technology. Then, multiple topology measures that characterize the day-to-day dynamic evolution of the complexity of these networks are provided. Third, these measures are calculated for the 21 DTTNs and interpreted in terms of their practical implications. They are used as a training set for a BP neural network designed to predict the evolution of DTTN complexity. Finally, the reliability of the designed BP neural network is verified by comparing its output with the actual data and with results obtained from an ARIMA method. Because network complexity measures are the basis for modeling cascading failures and conducting link prediction in complex systems, the proposed research framework not only provides a novel perspective for analyzing the DTTN at the level of aggregated system behavior, but can also be used to improve the DTTN management level.

  16. Computational neural networks in chemistry: Model free mapping devices for predicting chemical reactivity from molecular structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elrod, D.W.

    1992-01-01

    Computational neural networks (CNNs) are a computational paradigm inspired by the brain's massively parallel network of highly interconnected neurons. The power of computational neural networks derives not so much from their ability to model the brain as from their ability to learn by example and to map highly complex, nonlinear functions, without the need to explicitly specify the functional relationship. Two central questions about CNNs were investigated in the context of predicting chemical reactions: (1) the mapping properties of neural networks and (2) the representation of chemical information for use in CNNs. Chemical reactivity is here considered an example of a complex, nonlinear function of molecular structure. CNNs were trained using modifications of the back propagation learning rule to map a three-dimensional response surface similar to those typically observed in quantitative structure-activity and structure-property relationships. The computational neural network's mapping of the response surface was found to be robust to the effects of training sample size, noisy data, and intercorrelated input variables. The investigation of chemical structure representation led to the development of a molecular structure-based connection-table representation suitable for neural network training. An extension of this work led to a BE-matrix structure representation that was found to be general for several classes of reactions. The CNN prediction of chemical reactivity and regiochemistry was investigated for electrophilic aromatic substitution reactions, Markovnikov addition to alkenes, Saytzeff elimination from haloalkanes, Diels-Alder cycloaddition, and retro Diels-Alder ring opening reactions using these connectivity-matrix derived representations. The reaction predictions made by the CNNs were more accurate than those of an expert system and were comparable to predictions made by chemists.

  17. Neural signal registration and analysis of axons grown in microchannels

    NASA Astrophysics Data System (ADS)

    Pigareva, Y.; Malishev, E.; Gladkov, A.; Kolpakov, V.; Bukatin, A.; Mukhina, I.; Kazantsev, V.; Pimashkin, A.

    2016-08-01

    Registration of neuronal bioelectrical signals remains one of the main physical tools for studying fundamental mechanisms of signal processing in the brain. Neurons generate spiking patterns which propagate through a complex map of neural network connectivity. Extracellular recording of isolated axons grown in microchannels provides amplification of the signal for detailed study of spike propagation. In this study we used neuronal hippocampal cultures grown in microfluidic devices combined with microelectrode arrays to investigate changes in electrical activity during neural network development. We found that, five days in vitro after culture plating, spiking activity appears first in the microchannels, and within the next 2-3 days it appears on the electrodes of the overall neural network. We conclude that this approach provides a convenient method to study neural signal processing and the development of functional structure at the single-cell and network levels of a neuronal culture.

  18. Gradient calculations for dynamic recurrent neural networks: a survey.

    PubMed

    Pearlmutter, B A

    1995-01-01

    Surveys learning algorithms for recurrent neural networks with hidden units and puts the various techniques into a common framework. The authors discuss fixed point learning algorithms, namely recurrent backpropagation and deterministic Boltzmann machines, and nonfixed point algorithms, namely backpropagation through time, Elman's history cutoff, and Jordan's output feedback architecture. Forward propagation, an on-line technique that uses adjoint equations, and variations thereof, are also discussed. In many cases, the unified presentation leads to generalizations of various sorts. The author discusses advantages and disadvantages of temporally continuous neural networks in contrast to clocked ones, and continues with some "tricks of the trade" for training, using, and simulating continuous-time and recurrent neural networks. The author presents some simulations and, at the end, addresses issues of computational complexity and learning speed.

  19. Development of a neural network technique for KSTAR Thomson scattering diagnostics.

    PubMed

    Lee, Seung Hun; Lee, J H; Yamada, I; Park, Jae Sun

    2016-11-01

    Neural networks provide powerful approaches for dealing with nonlinear data and have been successfully applied to fusion plasma diagnostics and control systems. To control tokamak plasmas in real time, it is essential to measure the plasma parameters in situ. However, the χ2 method traditionally used in Thomson scattering diagnostics hampers real-time measurement due to the complexity of the calculations involved. In this study, we applied a neural network approach to Thomson scattering diagnostics in order to calculate the electron temperature, comparing the results to those obtained with the χ2 method. The best results were obtained for 10^3 training cycles and eight nodes in the hidden layer. Our neural network approach shows good agreement with the χ2 method and performs the calculation twenty times faster.

  20. A Red-Light Running Prevention System Based on Artificial Neural Network and Vehicle Trajectory Data

    PubMed Central

    Li, Pengfei; Li, Yan; Guo, Xiucheng

    2014-01-01

    The high frequency of red-light running and the complex driving behaviors at yellow onset at intersections cannot be explained solely by the dilemma zone and vehicle kinematics. In this paper, the authors present a red-light running prevention system based on artificial neural networks (ANNs), which approximate the complex driver behaviors during the yellow and all-red clearance intervals and serve as the basis of the system. The artificial neural network and vehicle trajectory data are applied to identify potential red-light runners. The ANN training time was acceptable and its prediction accuracy was over 80%. Lastly, a prototype red-light running prevention system with the trained ANN model is described. This new system can be directly retrofitted into existing traffic signal systems. PMID:25435870

  1. A red-light running prevention system based on artificial neural network and vehicle trajectory data.

    PubMed

    Li, Pengfei; Li, Yan; Guo, Xiucheng

    2014-01-01

    The high frequency of red-light running and the complex driving behaviors at yellow onset at intersections cannot be explained solely by the dilemma zone and vehicle kinematics. In this paper, the authors present a red-light running prevention system based on artificial neural networks (ANNs), which approximate the complex driver behaviors during the yellow and all-red clearance intervals and serve as the basis of the system. The artificial neural network and vehicle trajectory data are applied to identify potential red-light runners. The ANN training time was acceptable and its prediction accuracy was over 80%. Lastly, a prototype red-light running prevention system with the trained ANN model is described. This new system can be directly retrofitted into existing traffic signal systems.

  2. Properties of a memory network in psychology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wedemann, Roseli S.; Donangelo, Raul; Carvalho, Luis A. V. de

    We have previously described neurotic psychopathology and psychoanalytic working-through by an associative memory mechanism, based on a neural network model, where memory was modelled by a Boltzmann machine (BM). Since brain neural topology is selectively structured, we simulated known microscopic mechanisms that control synaptic properties, showing that the network self-organizes to a hierarchical, clustered structure. Here, we show some statistical mechanical properties of the complex networks which result from this self-organization. They indicate that a generalization of the BM may be necessary to model memory.

  3. Properties of a memory network in psychology

    NASA Astrophysics Data System (ADS)

    Wedemann, Roseli S.; Donangelo, Raul; de Carvalho, Luís A. V.

    2007-12-01

    We have previously described neurotic psychopathology and psychoanalytic working-through by an associative memory mechanism, based on a neural network model, where memory was modelled by a Boltzmann machine (BM). Since brain neural topology is selectively structured, we simulated known microscopic mechanisms that control synaptic properties, showing that the network self-organizes to a hierarchical, clustered structure. Here, we show some statistical mechanical properties of the complex networks which result from this self-organization. They indicate that a generalization of the BM may be necessary to model memory.

  4. Neural network application to comprehensive engine diagnostics

    NASA Technical Reports Server (NTRS)

    Marko, Kenneth A.

    1994-01-01

    We have previously reported on the use of neural networks for detection and identification of faults in complex microprocessor controlled powertrain systems. The data analyzed in those studies consisted of the full spectrum of signals passing between the engine and the real-time microprocessor controller. The specific task of the classification system was to classify system operation as nominal or abnormal and to identify the fault present. The primary concern in earlier work was the identification of faults, in sensors or actuators in the powertrain system as it was exercised over its full operating range. The use of data from a variety of sources, each contributing some potentially useful information to the classification task, is commonly referred to as sensor fusion and typifies the type of problems successfully addressed using neural networks. In this work we explore the application of neural networks to a different diagnostic problem, the diagnosis of faults in newly manufactured engines and the utility of neural networks for process control.

  5. [Neuronal and synaptic properties: fundamentals of network plasticity].

    PubMed

    Le Masson, G

    2000-02-01

    Neurons, within the nervous system, are organized into different neural networks through synaptic connections. Two fundamental components interact dynamically in these functional units. The first are the neurons themselves: far from being simple action potential generators, they are capable of complex electrical integrative properties due to the various types, numbers, distributions, and modulation of voltage-gated ionic channels. The second are the synapses, where similar complexity and plasticity are found. Identifying both cellular and synaptic intrinsic properties is necessary to understand the links between neural network behavior and physiological function, and is a useful step towards better control of neurological diseases.

  6. Modeling a Neural Network as a Teaching Tool for the Learning of the Structure-Function Relationship

    ERIC Educational Resources Information Center

    Salinas, Dino G.; Acevedo, Cristian; Gomez, Christian R.

    2010-01-01

    The authors describe an activity they have created in which students can visualize a theoretical neural network whose states evolve according to a well-known simple law. This activity provided an uncomplicated approach to a paradigm commonly represented through complex mathematical formulation. From their observations, students learned many basic…

  7. Based on BP Neural Network Stock Prediction

    ERIC Educational Resources Information Center

    Liu, Xiangwei; Ma, Xin

    2012-01-01

    The stock market has high-profit and high-risk features, and research on stock market analysis and prediction has therefore received considerable attention. The stock price trend is a complex nonlinear function, so the price has a certain predictability. This article mainly uses an improved BP neural network (BPNN) to set up a stock market prediction model, and…

  8. Application of a neural network to simulate analysis in an optimization process

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Lamarsh, William J., II

    1992-01-01

    A new experimental software package called NETS/PROSSS aimed at reducing the computing time required to solve a complex design problem is described. The software combines a neural network for simulating the analysis program with an optimization program. The neural network is applied to approximate results of a finite element analysis program to quickly obtain a near-optimal solution. Results of the NETS/PROSSS optimization process can also be used as an initial design in a normal optimization process and make it possible to converge to an optimum solution with significantly fewer iterations.

  9. Intrinsic protective mechanisms of the neuron-glia network against glioma invasion.

    PubMed

    Iwadate, Yasuo; Fukuda, Kazumasa; Matsutani, Tomoo; Saeki, Naokatsu

    2016-04-01

    Gliomas arising in the brain parenchyma infiltrate into the surrounding brain and break down established complex neuron-glia networks. However, mounting evidence suggests that initially the network microenvironment of the adult central nervous system (CNS) is innately non-permissive to glioma cell invasion. The main players are inhibitory molecules in CNS myelin, as well as proteoglycans associated with astrocytes. Neural stem cells, and neurons themselves, possess inhibitory functions against neighboring tumor cells. These mechanisms have evolved to protect the established neuron-glia network, which is necessary for brain function. Greater insight into the interaction between glioma cells and the surrounding neuron-glia network is crucial for developing new therapies for treating these devastating tumors while preserving the important and complex neural functions of patients. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Consciousness, cognition and brain networks: New perspectives.

    PubMed

    Aldana, E M; Valverde, J L; Fábregas, N

    2016-10-01

    A detailed analysis of the literature on consciousness and cognition mechanisms based on neural network theory is presented. The immune and inflammatory response to the anesthetic-surgical procedure induces modulation of neuronal plasticity by influencing higher cognitive functions. Anesthetic drugs can cause unconsciousness by producing a functional disruption of cortical and thalamocortical integration. External and internal perceptions are processed through an intricate network of neural connections involving the higher nervous activity centers, especially the cerebral cortex. This requires an integrated model, formed by neural networks and their interactions with highly specialized regions through large-scale networks, which are distributed throughout the brain collecting the information flow of these perceptions. Functional and effective connectivity between large-scale networks is essential for consciousness, unconsciousness, and cognition. This is what is called the "human connectome", or map of neural networks. Copyright © 2014 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Publicado por Elsevier España, S.L.U. All rights reserved.

  11. Using neural networks for prediction of air pollution index in industrial city

    NASA Astrophysics Data System (ADS)

    Rahman, P. A.; Panchenko, A. A.; Safarov, A. M.

    2017-10-01

    This paper is dedicated to the use of artificial neural networks for ecological prediction of the state of the atmospheric air of an industrial city, enabling prompt environmental decisions. The paper also describes the development of two types of neural-network prediction models for determining the air pollution index: a temporal model (a short-term forecast of pollutant content in the air for the coming days) and a spatial model (a forecast of the atmospheric pollution index at any point in the city). The stages of development of the neural network models are briefly reviewed and a description of their parameters is given. An assessment of the adequacy of the prediction models, based on the calculation of the correlation coefficient between the output and reference data, is also provided. Moreover, because the "neural network code" of the proposed models is difficult for ordinary users to interpret, software implementations allowing practical use of the neural network models are also offered. It is established that the obtained neural network models provide sufficiently reliable forecasts, which means they are an effective tool for analyzing and predicting the dynamics of air pollution in an industrial city. Thus, this work addresses the pressing problem of forecasting the atmospheric air pollution index in industrial cities using neural network models.

  12. On the sample complexity of learning for networks of spiking neurons with nonlinear synaptic interactions.

    PubMed

    Schmitt, Michael

    2004-09-01

    We study networks of spiking neurons that use the timing of pulses to encode information. Nonlinear interactions model the spatial groupings of synapses on the neural dendrites and describe the computations performed at local branches. Within a theoretical framework of learning we analyze the question of how many training examples these networks must receive to be able to generalize well. Bounds for this sample complexity of learning can be obtained in terms of a combinatorial parameter known as the pseudodimension. This dimension characterizes the computational richness of a neural network and is given in terms of the number of network parameters. Two types of feedforward architectures are considered: constant-depth networks and networks of unconstrained depth. We derive asymptotically tight bounds for each of these network types. Constant depth networks are shown to have an almost linear pseudodimension, whereas the pseudodimension of general networks is quadratic. Networks of spiking neurons that use temporal coding are becoming increasingly more important in practical tasks such as computer vision, speech recognition, and motor control. The question of how well these networks generalize from a given set of training examples is a central issue for their successful application as adaptive systems. The results show that, although coding and computation in these networks is quite different and in many cases more powerful, their generalization capabilities are at least as good as those of traditional neural network models.

  13. Trade-off between Multiple Constraints Enables Simultaneous Formation of Modules and Hubs in Neural Systems

    PubMed Central

    Chen, Yuhan; Wang, Shengjun; Hilgetag, Claus C.; Zhou, Changsong

    2013-01-01

    The formation of the complex network architecture of neural systems is subject to multiple structural and functional constraints. Two obvious but apparently contradictory constraints are low wiring cost and high processing efficiency, characterized by short overall wiring length and a small average number of processing steps, respectively. Growing evidence shows that neural networks result from a trade-off between the physical cost and the functional value of the topology. However, the relationship between these competing constraints and complex topology is not well understood quantitatively. We explored this relationship systematically by reconstructing two known neural networks, Macaque cortical connectivity and C. elegans neuronal connections, from combinatorial optimization of wiring cost and processing efficiency constraints, using a control parameter, and comparing the reconstructed networks to the real networks. We found that in both neural systems, the reconstructed networks derived from the two constraints can reveal some important relations between the spatial layout of nodes and the topological connectivity, and match several properties of the real networks. The reconstructed and real networks had a similar modular organization over a broad range of the control parameter, resulting from spatial clustering of network nodes. Hubs emerged due to the competition between the two constraints, and their positions were close to, and partly coincided with, the real hubs over a range of parameter values. The degree of nodes was correlated with the density of nodes in their spatial neighborhood in both reconstructed and real networks. Generally, the rebuilt network matched a significant portion of real links, especially short-distance ones. These findings provide clear evidence to support the hypothesis of a trade-off between multiple constraints on brain networks. The two constraints of wiring cost and processing efficiency, however, cannot explain all salient features of the real networks. The discrepancy suggests that there are further relevant factors that are not yet captured here. PMID:23505352

  14. Artificial neural networks as a useful tool to predict the risk level of Betula pollen in the air

    NASA Astrophysics Data System (ADS)

    Castellano-Méndez, M.; Aira, M. J.; Iglesias, I.; Jato, V.; González-Manteiga, W.

    2005-05-01

    An increasing percentage of the European population suffers from allergies to pollen. The study of the evolution of air pollen concentration supplies prior knowledge of the levels of pollen in the air, which can be useful for the prevention and treatment of allergic symptoms, and the management of medical resources. The symptoms of Betula pollinosis can be associated with certain levels of pollen in the air. The aim of this study was to predict the risk of the concentration of pollen exceeding a given level, using previous pollen and meteorological information, by applying neural network techniques. Neural networks are a widespread statistical tool useful for the study of problems associated with complex or poorly understood phenomena. The binary response variable associated with each level requires a careful selection of the neural network and the error function associated with the learning algorithm used during the training phase. The performance of the neural network with the validation set showed that the risk of the pollen level exceeding a certain threshold can be successfully forecasted using artificial neural networks. This prediction tool may be implemented to create an automatic system that forecasts the risk of suffering allergic symptoms.
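    As a rough sketch of the approach described above, a small feed-forward network with a sigmoidal output trained with the binary cross-entropy error function can model the probability that the pollen concentration exceeds a risk threshold. The predictors, threshold rule, and network size below are illustrative assumptions, not those of the study.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical predictors: previous-day pollen count, mean temperature, rainfall.
      X = rng.normal(size=(500, 3))
      y = ((X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 2]
            + rng.normal(scale=0.5, size=500)) > 0).astype(float)

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      # Single hidden layer; the sigmoidal output unit models
      # P(pollen level exceeds the risk threshold | predictors).
      W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
      W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
      lr = 0.1

      for epoch in range(2000):
          h = np.tanh(X @ W1 + b1)             # hidden activations
          p = sigmoid(h @ W2 + b2).ravel()     # predicted exceedance probability
          # Binary cross-entropy: the error function suited to a binary response.
          loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
          grad_out = ((p - y) / len(y)).reshape(-1, 1)   # dL/d(pre-sigmoid output)
          grad_h = grad_out @ W2.T * (1 - h ** 2)        # backpropagated to hidden layer
          W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(axis=0)
          W1 -= lr * X.T @ grad_h;   b1 -= lr * grad_h.sum(axis=0)

      print("final cross-entropy:", round(float(loss), 3))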

  15. Statistical downscaling of precipitation using long short-term memory recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Misra, Saptarshi; Sarkar, Sudeshna; Mitra, Pabitra

    2017-11-01

    Hydrological impacts of global climate change on regional scale are generally assessed by downscaling large-scale climatic variables, simulated by General Circulation Models (GCMs), to regional, small-scale hydrometeorological variables like precipitation, temperature, etc. In this study, we propose a new statistical downscaling model based on Recurrent Neural Network with Long Short-Term Memory which captures the spatio-temporal dependencies in local rainfall. The previous studies have used several other methods such as linear regression, quantile regression, kernel regression, beta regression, and artificial neural networks. Deep neural networks and recurrent neural networks have been shown to be highly promising in modeling complex and highly non-linear relationships between input and output variables in different domains and hence we investigated their performance in the task of statistical downscaling. We have tested this model on two datasets—one on precipitation in Mahanadi basin in India and the second on precipitation in Campbell River basin in Canada. Our autoencoder coupled long short-term memory recurrent neural network model performs the best compared to other existing methods on both the datasets with respect to temporal cross-correlation, mean squared error, and capturing the extremes.
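    A minimal sketch of an LSTM-based downscaling regressor is given below (PyTorch assumed available). It omits the autoencoder coupling used in the study, and the predictor fields and rainfall series are synthetic placeholders.

      import torch
      import torch.nn as nn

      torch.manual_seed(0)

      class DownscalingLSTM(nn.Module):
          """Map a sequence of large-scale climate predictors to local precipitation."""
          def __init__(self, n_predictors=8, hidden=32):
              super().__init__()
              self.lstm = nn.LSTM(n_predictors, hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)

          def forward(self, x):                  # x: (batch, time, n_predictors)
              out, _ = self.lstm(x)
              return self.head(out).squeeze(-1)  # (batch, time) precipitation estimate

      # Synthetic stand-in for GCM/reanalysis predictors and observed rainfall.
      x = torch.randn(64, 30, 8)                 # 64 sequences of 30 days
      y = torch.relu(x[..., 0] * 2.0 + torch.randn(64, 30) * 0.1)

      model = DownscalingLSTM()
      opt = torch.optim.Adam(model.parameters(), lr=1e-2)
      loss_fn = nn.MSELoss()

      for step in range(200):
          opt.zero_grad()
          loss = loss_fn(model(x), y)
          loss.backward()
          opt.step()

      print("training MSE:", float(loss))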

  16. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor data to fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor data to fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Secondly, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.

  17. Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot.

    PubMed

    Grinke, Eduard; Tetzlaff, Christian; Wörgötter, Florentin; Manoonpong, Poramate

    2015-01-01

    Walking animals, like insects, can effectively perform complex behaviors with little neural computing. For example, they can walk around their environment, escape from corners/deadlocks, and avoid or climb over obstacles. While performing all these behaviors, they can also adapt their movements to deal with an unknown situation. As a consequence, they successfully navigate through their complex environment. These versatile and adaptive abilities are the result of an integration of several ingredients embedded in their sensorimotor loop. Biological studies reveal that the ingredients include neural dynamics, plasticity, sensory feedback, and biomechanics. Generating such versatile and adaptive behaviors for a walking robot with many degrees of freedom (DOFs) is a challenging task. Thus, in this study, we present a bio-inspired approach to solve this task. Specifically, the approach combines neural mechanisms with plasticity, exteroceptive sensory feedback, and biomechanics. The neural mechanisms consist of adaptive neural sensory processing and modular neural locomotion control. The sensory processing is based on a small recurrent neural network consisting of two fully connected neurons. Online correlation-based learning with synaptic scaling is applied to adequately change the connections of the network. By doing so, we can effectively exploit neural dynamics (i.e., hysteresis effects and single attractors) in the network to generate different turning angles with short-term memory for a walking robot. The turning information is transmitted as descending steering signals to the neural locomotion control, which translates the signals into motor actions. As a result, the robot can walk around and adapt its turning angle for avoiding obstacles in different situations. The adaptation also enables the robot to effectively escape from sharp corners or deadlocks. Using backbone joint control embedded in the locomotion control allows the robot to climb over small obstacles. Consequently, it can successfully explore and navigate in complex environments. We first tested our approach in a physical simulation environment and then applied it to our real biomechanical walking robot AMOSII with 19 DOFs to adaptively avoid obstacles and navigate in the real world.
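    The sensory-processing ingredient described above can be sketched, in simplified form, as a two-neuron recurrent network with a Hebbian-style correlation update and a synaptic-scaling term. The specific weights, inputs, and scaling rule below are illustrative assumptions rather than the controller actually used on AMOSII.

      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      # Two fully connected sigmoidal neurons; strong self-connections give the
      # network hysteresis, while the cross connections are plastic.
      W = np.array([[3.0, -1.0],
                    [-1.0, 3.0]])
      o = np.zeros(2)                        # neuron outputs
      eta, gamma = 0.05, 0.01                # learning rate, synaptic-scaling rate

      for t in range(500):
          # Hypothetical left/right obstacle signals from exteroceptive sensors.
          s = np.array([1.0 if (t // 100) % 2 == 0 else 0.0,
                        1.0 if (t // 100) % 2 == 1 else 0.0])
          o = sigmoid(W @ o + 4.0 * s - 2.0)

          # Online correlation-based (Hebbian) update of the cross connections,
          # with a multiplicative scaling term that keeps the weights bounded.
          for i, j in [(0, 1), (1, 0)]:
              W[i, j] += eta * o[i] * o[j] - gamma * (o[i] ** 2) * W[i, j]

          steering = o[0] - o[1]             # descending steering signal in [-1, 1]

      print("final cross weights:", W[0, 1], W[1, 0])
      print("last steering command:", round(float(steering), 3))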

  19. Chaos in a neural network circuit

    NASA Astrophysics Data System (ADS)

    Kepler, Thomas B.; Datt, Sumeet; Meyer, Robert B.; Abbott, L. F.

    1990-12-01

    We have constructed a neural network circuit of four clipped, high-gain, integrating operational amplifiers coupled to each other through an array of digitally programmable resistor ladders (MDACs). In addition to fixed-point and cyclic behavior, the circuit exhibits chaotic behavior with complex strange attractors which are approached through period doubling, intermittent attractor expansion and/or quasiperiodic pathways. Couplings between the nonlinear circuit elements are controlled by a computer which can automatically search through the space of couplings for interesting phenomena. We report some initial statistical results relating the behavior of the network to properties of its coupling matrix. Through these results and further research, the circuit should help resolve fundamental issues concerning chaos in neural networks.
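    A software analogue of such a circuit can be obtained by numerically integrating four mutually coupled saturating amplifiers. The coupling values below are arbitrary, so the sketch illustrates the setup rather than reproducing the reported attractors.

      import numpy as np
      from scipy.integrate import solve_ivp

      rng = np.random.default_rng(3)

      # Programmable coupling matrix (stand-in for the MDAC resistor ladders).
      J = rng.uniform(-2.5, 2.5, size=(4, 4))
      np.fill_diagonal(J, 0.0)

      gain, tau = 20.0, 1.0                  # high amplifier gain, integration time constant

      def rhs(t, v):
          # Each unit integrates the clipped (saturating) outputs of the others.
          return (-v + J @ np.tanh(gain * v)) / tau

      sol = solve_ivp(rhs, (0.0, 200.0), rng.normal(scale=0.1, size=4), max_step=0.05)

      # Crude diagnostic: the spread of late-time values hints at fixed-point,
      # cyclic, or irregular behaviour for this particular coupling matrix.
      late = sol.y[:, sol.t > 150.0]
      print("per-unit late-time std:", np.round(late.std(axis=1), 3))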

  20. State-dependent, bidirectional modulation of neural network activity by endocannabinoids.

    PubMed

    Piet, Richard; Garenne, André; Farrugia, Fanny; Le Masson, Gwendal; Marsicano, Giovanni; Chavis, Pascale; Manzoni, Olivier J

    2011-11-16

    The endocannabinoid (eCB) system and the cannabinoid CB1 receptor (CB1R) play key roles in the modulation of brain functions. Although actions of eCBs and CB1Rs are well described at the synaptic level, little is known of their modulation of neural activity at the network level. Using microelectrode arrays, we have examined the role of CB1R activation in the modulation of the electrical activity of rat and mice cortical neural networks in vitro. We find that exogenous activation of CB1Rs expressed on glutamatergic neurons decreases the spontaneous activity of cortical neural networks. Moreover, we observe that the net effect of the CB1R antagonist AM251 inversely correlates with the initial level of activity in the network: blocking CB1Rs increases network activity when basal network activity is low, whereas it depresses spontaneous activity when its initial level is high. Our results reveal a complex role of CB1Rs in shaping spontaneous network activity, and suggest that the outcome of endogenous neuromodulation on network function might be state dependent.

  1. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead, with the exception of the communications required for a global summation across the processors (which has sub-linear runtime growth, on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory spaces of separate processors using fast adjacent-processor communications. This paper considers only the simulation of feed-forward neural networks, although the method is extendable to recurrent networks.

  2. Geometric Bioinspired Networks for Recognition of 2-D and 3-D Low-Level Structures and Transformations.

    PubMed

    Bayro-Corrochano, Eduardo; Vazquez-Santacruz, Eduardo; Moya-Sanchez, Eduardo; Castillo-Munis, Efrain

    2016-10-01

    This paper presents the design of radial basis function geometric bioinspired networks and their applications. Until now, the design of neural networks has been inspired by the biological models of neural networks but mostly using vector calculus and linear algebra. However, these designs have never shown the role of geometric computing. The question is how biological neural networks handle complex geometric representations involving Lie group operations like rotations. Even though the actual artificial neural networks are biologically inspired, they are just models which cannot reproduce a plausible biological process. Until now researchers have not shown how, using these models, one can incorporate them into the processing of geometric computing. Here, for the first time in the artificial neural networks domain, we address this issue by designing a kind of geometric RBF using the geometric algebra framework. As a result, using our artificial networks, we show how geometric computing can be carried out by the artificial neural networks. Such geometric neural networks have a great potential in robot vision. This is the most important aspect of this contribution to propose artificial geometric neural networks for challenging tasks in perception and action. In our experimental analysis, we show the applicability of our geometric designs, and present interesting experiments using 2-D data of real images and 3-D screw axis data. In general, our models should be used to process different types of inputs, such as visual cues, touch (texture, elasticity, temperature), taste, and sound. One important task of a perception-action system is to fuse a variety of cues coming from the environment and relate them via a sensor-motor manifold with motor modules to carry out diverse reasoned actions.

  3. Neural-Network Quantum States, String-Bond States, and Chiral Topological States

    NASA Astrophysics Data System (ADS)

    Glasser, Ivan; Pancotti, Nicola; August, Moritz; Rodriguez, Ivan D.; Cirac, J. Ignacio

    2018-01-01

    Neural-network quantum states have recently been introduced as an Ansatz for describing the wave function of quantum many-body systems. We show that there are strong connections between neural-network quantum states in the form of restricted Boltzmann machines and some classes of tensor-network states in arbitrary dimensions. In particular, we demonstrate that short-range restricted Boltzmann machines are entangled plaquette states, while fully connected restricted Boltzmann machines are string-bond states with a nonlocal geometry and low bond dimension. These results shed light on the underlying architecture of restricted Boltzmann machines and their efficiency at representing many-body quantum states. String-bond states also provide a generic way of enhancing the power of neural-network quantum states and a natural generalization to systems with larger local Hilbert space. We compare the advantages and drawbacks of these different classes of states and present a method to combine them together. This allows us to benefit from both the entanglement structure of tensor networks and the efficiency of neural-network quantum states into a single Ansatz capable of targeting the wave function of strongly correlated systems. While it remains a challenge to describe states with chiral topological order using traditional tensor networks, we show that, because of their nonlocal geometry, neural-network quantum states and their string-bond-state extension can describe a lattice fractional quantum Hall state exactly. In addition, we provide numerical evidence that neural-network quantum states can approximate a chiral spin liquid with better accuracy than entangled plaquette states and local string-bond states. Our results demonstrate the efficiency of neural networks to describe complex quantum wave functions and pave the way towards the use of string-bond states as a tool in more traditional machine-learning applications.
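    For reference, the restricted-Boltzmann-machine Ansatz referred to above assigns to a spin configuration σ the amplitude obtained by summing over the hidden units (this is the standard form of the Ansatz):

      \Psi(\sigma) \;=\; \sum_{\{h_j\}} e^{\sum_i a_i \sigma_i + \sum_j b_j h_j + \sum_{ij} W_{ij} h_j \sigma_i}
                   \;=\; e^{\sum_i a_i \sigma_i} \prod_j 2\cosh\!\Big(b_j + \sum_i W_{ij}\,\sigma_i\Big),

    with complex parameters a_i, b_j, and W_{ij}. Restricting each hidden unit to couple only to nearby spins gives the short-range (entangled-plaquette-like) case, while unrestricted W_{ij} gives the fully connected, nonlocal case identified above with string-bond states.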

  4. Development of a neural network technique for KSTAR Thomson scattering diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seung Hun, E-mail: leesh81@nfri.re.kr; Lee, J. H.; Yamada, I.

    Neural networks provide powerful approaches for dealing with nonlinear data and have been successfully applied to fusion plasma diagnostics and control systems. Measuring the plasma parameters in situ is essential for controlling tokamak plasmas in real time. However, the χ² method traditionally used in Thomson scattering diagnostics hampers real-time measurement due to the complexity of the calculations involved. In this study, we applied a neural network approach to Thomson scattering diagnostics in order to calculate the electron temperature, comparing the results to those obtained with the χ² method. The best results were obtained for 10³ training cycles and eight nodes in the hidden layer. Our neural network approach shows good agreement with the χ² method and performs the calculation twenty times faster.

  5. Simulation of short-term electric load using an artificial neural network

    NASA Astrophysics Data System (ADS)

    Ivanin, O. A.

    2018-01-01

    When optimizing the operation modes and equipment composition of small energy complexes, or solving other energy-planning tasks, it is necessary to have data on the consumer's energy loads. Real load charts and detailed information about the consumer are usually difficult to obtain, so a method for simulating load charts from minimal information is needed. A review of work on short-term load prediction suggests artificial neural networks as the most suitable mathematical instrument for this problem. The article provides an overview of applied short-term load simulation methods; it describes the advantages of artificial neural networks and proposes a neural network structure for simulating the electric loads of residential buildings. The results of modeling loads with the proposed method, and an estimate of its error, are presented.

  6. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    NASA Technical Reports Server (NTRS)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models that can accurately predict a numeric value or nominal classification have been developed, a general-purpose method for constructing neural network architectures has yet to emerge. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. It is often difficult to surmise which input parameters have the greatest impact on the model's prediction, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
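    A minimal sketch of the two-dimensional mapping step is given below using scikit-learn (assumed available); the data set and outcome variable are placeholders.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.manifold import TSNE

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 20))           # placeholder high-dimensional inputs
      y = (X[:, 0] + X[:, 1] > 0).astype(int)  # placeholder nominal outcome

      # Linear reduction first (fast, preserves global structure) ...
      X2_pca = PCA(n_components=2).fit_transform(X)
      # ... then a nonlinear embedding for visual inspection of class structure.
      X2_tsne = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(X)

      # Plotting X2_pca or X2_tsne coloured by y shows which inputs drive the
      # outcome and can guide which variables to keep when training the network.
      print(X2_pca.shape, X2_tsne.shape)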

  7. Temporal neural networks and transient analysis of complex engineering systems

    NASA Astrophysics Data System (ADS)

    Uluyol, Onder

    A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the presentation of different time scales through incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
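    In its standard discrete-time form, the gamma memory mentioned above is a cascade of leaky integrators with a single adaptable parameter μ (the generic gamma memory, not necessarily the exact filter used in the LOGF neuron):

      g_0(n) = x(n), \qquad
      g_k(n) = (1-\mu)\, g_k(n-1) + \mu\, g_{k-1}(n-1), \quad k = 1,\dots,K,

    so the tap outputs g_k(n) provide the short-term memory structure that each neuron feeds back locally, with μ trading memory depth (approximately K/μ) against temporal resolution.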

  8. Artificial Intelligence Procedures for Tree Taper Estimation within a Complex Vegetation Mosaic in Brazil

    PubMed Central

    Nunes, Matheus Henrique

    2016-01-01

    Tree stem form in native tropical forests is very irregular, posing a challenge to establishing taper equations that can accurately predict the diameter at any height along the stem and subsequently merchantable volume. Artificial intelligence approaches can be useful techniques in minimizing estimation errors within complex variations of vegetation. We evaluated the performance of Random Forest® regression tree and Artificial Neural Network procedures in modelling stem taper. Diameters and volume outside bark were compared to a traditional taper-based equation across a tropical Brazilian savanna, a seasonal semi-deciduous forest and a rainforest. Neural network models were found to be more accurate than the traditional taper equation. Random forest showed trends in the residuals from the diameter prediction and provided the least precise and accurate estimations for all forest types. This study provides insights into the superiority of a neural network, which provided advantages regarding the handling of local effects. PMID:27187074

  9. Artificial Intelligence Procedures for Tree Taper Estimation within a Complex Vegetation Mosaic in Brazil.

    PubMed

    Nunes, Matheus Henrique; Görgens, Eric Bastos

    2016-01-01

    Tree stem form in native tropical forests is very irregular, posing a challenge to establishing taper equations that can accurately predict the diameter at any height along the stem and subsequently merchantable volume. Artificial intelligence approaches can be useful techniques in minimizing estimation errors within complex variations of vegetation. We evaluated the performance of Random Forest® regression tree and Artificial Neural Network procedures in modelling stem taper. Diameters and volume outside bark were compared to a traditional taper-based equation across a tropical Brazilian savanna, a seasonal semi-deciduous forest and a rainforest. Neural network models were found to be more accurate than the traditional taper equation. Random forest showed trends in the residuals from the diameter prediction and provided the least precise and accurate estimations for all forest types. This study provides insights into the superiority of a neural network, which provided advantages regarding the handling of local effects.

  10. A new optimized GA-RBF neural network algorithm.

    PubMed

    Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

    When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is limited; these deficiencies easily degrade learning ability and recognition precision. To address this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses the genetic algorithm to optimize the weights and structure of the RBF network through a new hybrid encoding and simultaneous optimization scheme. Binary encoding is used for the number of hidden-layer neurons, and real encoding for the connection weights. The number of hidden-layer neurons and the connection weights are optimized simultaneously. Because the weight optimization by the genetic algorithm alone is not complete, the least mean square (LMS) algorithm is then used for further learning, yielding the final model. Tests on two standard UCI data sets show that the new algorithm improves both operating efficiency on complex problems and recognition precision, demonstrating its validity.
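    The hybrid encoding and the LMS refinement can be sketched as follows; the candidate centres, genetic operators, and data are simplified illustrative choices, not the exact scheme of the GA-RBF algorithm.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic regression task (stand-in for the UCI data used in the paper).
      X = rng.uniform(-3, 3, size=(200, 1))
      y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

      centers = X[rng.choice(len(X), 16, replace=False)]   # candidate RBF centres
      width = 1.0

      def design(n_hidden):
          d = np.linalg.norm(X[:, None, :] - centers[None, :n_hidden, :], axis=2)
          return np.exp(-(d / width) ** 2)                 # (N, n_hidden) RBF features

      def fitness(bits, weights):
          n_hidden = 1 + int("".join(map(str, bits)), 2) % 16
          yhat = design(n_hidden) @ weights[:n_hidden]
          return -np.mean((y - yhat) ** 2)                 # GA maximizes fitness

      # Hybrid chromosomes: 4 binary genes (hidden-unit count) + 16 real genes (weights).
      pop = [(rng.integers(0, 2, 4), rng.normal(scale=0.5, size=16)) for _ in range(30)]

      for gen in range(40):
          scored = sorted(pop, key=lambda c: fitness(*c), reverse=True)
          parents = scored[:10]
          children = []
          while len(children) < len(pop) - len(parents):
              (b1, w1), (b2, w2) = parents[rng.integers(10)], parents[rng.integers(10)]
              bits = np.where(rng.random(4) < 0.5, b1, b2)          # uniform crossover
              weights = 0.5 * (w1 + w2) + rng.normal(scale=0.05, size=16)
              if rng.random() < 0.2:
                  bits[rng.integers(4)] ^= 1                        # bit-flip mutation
              children.append((bits, weights))
          pop = parents + children

      best_bits, best_w = max(pop, key=lambda c: fitness(*c))
      n_hidden = 1 + int("".join(map(str, best_bits)), 2) % 16

      # LMS refinement of the output weights, since the GA optimization is not complete.
      Phi = design(n_hidden)
      w = best_w[:n_hidden].copy()
      for _ in range(200):
          for phi, target in zip(Phi, y):
              w += 0.05 * (target - phi @ w) * phi

      print("hidden units:", n_hidden,
            "final MSE:", round(float(np.mean((Phi @ w - y) ** 2)), 4))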

  11. Neural network based load and price forecasting and confidence interval estimation in deregulated power markets

    NASA Astrophysics Data System (ADS)

    Zhang, Li

    With the deregulation of the electric power market in New England, an independent system operator (ISO) has been separated from the New England Power Pool (NEPOOL). The ISO provides a regional spot market, with bids on various electricity-related products and services submitted by utilities and independent power producers. A utility can bid on the spot market and buy or sell electricity via bilateral transactions. Good estimation of market clearing prices (MCP) will help utilities and independent power producers determine bidding and transaction strategies with low risks, and this is crucial for utilities to compete in the deregulated environment. MCP prediction, however, is difficult since bidding strategies used by participants are complicated and MCP is a non-stationary process. The main objective of this research is to provide efficient short-term load and MCP forecasting and corresponding confidence interval estimation methodologies. In this research, the complexity of load and MCP with other factors is investigated, and neural networks are used to model the complex relationship between input and output. With improved learning algorithm and on-line update features for load forecasting, a neural network based load forecaster was developed, and has been in daily industry use since summer 1998 with good performance. MCP is volatile because of the complexity of market behaviors. In practice, neural network based MCP predictors usually have a cascaded structure, as several key input factors need to be estimated first. In this research, the uncertainties involved in a cascaded neural network structure for MCP prediction are analyzed, and prediction distribution under the Bayesian framework is developed. A fast algorithm to evaluate the confidence intervals by using the memoryless Quasi-Newton method is also developed. The traditional back-propagation algorithm for neural network learning needs to be improved since MCP is a non-stationary process. The extended Kalman filter (EKF) can be used as an integrated adaptive learning and confidence interval estimation algorithm for neural networks, with fast convergence and small confidence intervals. However, EKF learning is computationally expensive because it involves high dimensional matrix manipulations. A modified U-D factorization within the decoupled EKF (DEKF-UD) framework is developed in this research. The computational efficiency and numerical stability are significantly improved.

  12. Neural network explanation using inversion.

    PubMed

    Saad, Emad W; Wunsch, Donald C

    2007-01-01

    An important drawback of many artificial neural networks (ANN) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1996). A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms which attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm which relies on network inversion; i.e. calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm, that extracts rules, in the form of hyperplanes. It is able to generate rules with arbitrarily desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method, which extracts hyperplane rules from continuous or binary attribute neural networks. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented. An information theoretic treatment of rule extraction is presented. HYPINV is applied to example synthetic problems, to a real aerospace problem, and compared with similar algorithms using benchmark problems.
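    The inversion step that HYPINV builds on can be sketched by freezing a trained network and running gradient descent on its input until the output reaches a target value (PyTorch assumed; the network and target below are placeholders).

      import torch
      import torch.nn as nn

      torch.manual_seed(0)

      # A small trained classifier stands in for the ANN being explained.
      net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1), nn.Sigmoid())
      for p in net.parameters():
          p.requires_grad_(False)          # freeze weights: only the input is adjusted

      target = torch.tensor(0.5)           # inputs mapped to 0.5 lie on the decision boundary
      x = torch.zeros(1, 2, requires_grad=True)
      opt = torch.optim.SGD([x], lr=0.1)

      for step in range(500):
          opt.zero_grad()
          loss = (net(x).squeeze() - target) ** 2   # drive the output toward the target
          loss.backward()
          opt.step()

      # The recovered input is one point near the boundary; HYPINV collects such
      # points and fits hyperplane rules to them.
      print("inverted input:", x.detach().numpy(), "output:", float(net(x)))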

  13. Evolutionary Artificial Neural Network Weight Tuning to Optimize Decision Making for an Abstract Game

    DTIC Science & Technology

    2010-03-01

    separate LoA heuristic. If any of the examined heuristics produced a competitive player, then the final measurement was a success. Barring that, a... if offline training actually results in a successful player. Whereas offline learning plays many games and then trains as many networks as desired... a competitive Lines of Action player, shedding light on the difficulty of developing a neural network to model such a large and complex solution

  14. A novel neural network for variational inequalities with linear and nonlinear constraints.

    PubMed

    Gao, Xing-Bao; Liao, Li-Zhi; Qi, Liqun

    2005-11-01

    Variational inequality is a uniform approach for many important optimization and equilibrium problems. Based on the sufficient and necessary conditions of the solution, this paper presents a novel neural network model for solving variational inequalities with linear and nonlinear constraints. Three sufficient conditions are provided to ensure that the proposed network with an asymmetric mapping is stable in the sense of Lyapunov and converges to an exact solution of the original problem. Meanwhile, the proposed network with a gradient mapping is also proved to be stable in the sense of Lyapunov and to have a finite-time convergence under some mild condition by using a new energy function. Compared with the existing neural networks, the new model can be applied to solve some nonmonotone problems, has no adjustable parameter, and has lower complexity. Thus, the structure of the proposed network is very simple. Since the proposed network can be used to solve a broad class of optimization problems, it has great application potential. The validity and transient behavior of the proposed neural network are demonstrated by several numerical examples.
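    For context, a widely used neural-network dynamics for the variational inequality VI(F, Ω), i.e., finding x* in Ω such that (x − x*)ᵀ F(x*) ≥ 0 for all x in Ω, is the projection-type flow shown below; this is the generic form, not necessarily the exact model proposed in the paper:

      \frac{dx}{dt} \;=\; \lambda \left\{ P_{\Omega}\big(x - \alpha F(x)\big) - x \right\}, \qquad \lambda,\ \alpha > 0,

    where P_Ω denotes the projection onto the constraint set Ω. Equilibria of this flow coincide with solutions of the variational inequality, and Lyapunov arguments of the kind described above are used to establish stability and convergence.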

  15. Metastability and Inter-Band Frequency Modulation in Networks of Oscillating Spiking Neuron Populations

    PubMed Central

    Bhowmik, David; Shanahan, Murray

    2013-01-01

    Groups of neurons firing synchronously are hypothesized to underlie many cognitive functions such as attention, associative learning, memory, and sensory selection. Recent theories suggest that transient periods of synchronization and desynchronization provide a mechanism for dynamically integrating and forming coalitions of functionally related neural areas, and that at these times conditions are optimal for information transfer. Oscillating neural populations display a great amount of spectral complexity, with several rhythms temporally coexisting in different structures and interacting with each other. This paper explores inter-band frequency modulation between neural oscillators using models of quadratic integrate-and-fire neurons and Hodgkin-Huxley neurons. We vary the structural connectivity in a network of neural oscillators, assess the spectral complexity, and correlate the inter-band frequency modulation. We contrast this correlation against measures of metastable coalition entropy and synchrony. Our results show that oscillations in different neural populations modulate each other so as to change frequency, and that the interaction of these fluctuating frequencies in the network as a whole is able to drive different neural populations towards episodes of synchrony. Further to this, we locate an area in the connectivity space in which the system directs itself in this way so as to explore a large repertoire of synchronous coalitions. We suggest that such dynamics facilitate versatile exploration, integration, and communication between functionally related neural areas, and thereby supports sophisticated cognitive processing in the brain. PMID:23614040

  16. Genetic learning in rule-based and neural systems

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.

    1993-01-01

    The design of neural networks and fuzzy systems can involve complex, nonlinear, and ill-conditioned optimization problems. Often, traditional optimization schemes are inadequate or inapplicable for such tasks. Genetic Algorithms (GAs) are a class of optimization procedures whose mechanics are based on those of natural genetics. Mathematical arguments show how GAs bring substantial computational leverage to search problems, without requiring the mathematical characteristics often necessary for traditional optimization schemes (e.g., modality, continuity, availability of derivative information, etc.). GAs have proven effective in a variety of search tasks that arise in neural networks and fuzzy systems. This presentation begins by introducing the mechanics and theoretical underpinnings of GAs. GAs are then related to a class of rule-based machine learning systems called learning classifier systems (LCSs). An LCS implements a low-level production system that uses a GA as its primary rule discovery mechanism. This presentation illustrates how, despite its rule-based framework, an LCS can be thought of as a competitive neural network. Neural network simulator code for an LCS is presented. In this context, the GA is doing more than optimizing an objective function. It is searching for an ecology of hidden nodes with limited connectivity. The GA attempts to evolve this ecology such that effective neural network performance results. The GA is particularly well adapted to this task, given its naturally-inspired basis. The LCS/neural network analogy extends itself to other, more traditional neural networks. Conclusions to the presentation discuss the implications of using GAs in ecological search problems that arise in neural and fuzzy systems.

  17. Simbrain 3.0: A flexible, visually-oriented neural network simulator.

    PubMed

    Tosi, Zachary; Yoshimi, Jeffrey

    2016-11-01

    Simbrain 3.0 is a software package for neural network design and analysis, which emphasizes flexibility (arbitrarily complex networks can be built using a suite of basic components) and a visually rich, intuitive interface. These features support both students and professionals. Students can study all of the major classes of neural networks in a familiar graphical setting, and can easily modify simulations, experimenting with networks and immediately seeing the results of their interventions. With the 3.0 release, Simbrain supports models on the order of thousands of neurons and a million synapses. This allows the same features that support education to support research professionals, who can now use the tool to quickly design, run, and analyze the behavior of large, highly customizable simulations.

  18. Preconditioning electromyographic data for an upper extremity model using neural networks

    NASA Technical Reports Server (NTRS)

    Roberson, D. J.; Fernjallah, M.; Barr, R. E.; Gonzalez, R. V.

    1994-01-01

    A back propagation neural network has been employed to precondition the electromyographic signal (EMG) that drives a computational model of the human upper extremity. This model is used to determine the complex relationship between EMG and muscle activation, and generates an optimal muscle activation scheme that simulates the actual activation. While the experimental and model predicted results of the ballistic muscle movement are very similar, the activation function between the start and the finish is not. This neural network preconditions the signal in an attempt to more closely model the actual activation function over the entire course of the muscle movement.

  19. Neuromorphic neural interfaces: from neurophysiological inspiration to biohybrid coupling with nervous systems

    NASA Astrophysics Data System (ADS)

    Broccard, Frédéric D.; Joshi, Siddharth; Wang, Jun; Cauwenberghs, Gert

    2017-08-01

    Objective. Computation in nervous systems operates with different computational primitives, and on different hardware, than traditional digital computation and is thus subjected to different constraints from its digital counterpart regarding the use of physical resources such as time, space and energy. In an effort to better understand neural computation on a physical medium with similar spatiotemporal and energetic constraints, the field of neuromorphic engineering aims to design and implement electronic systems that emulate in very large-scale integration (VLSI) hardware the organization and functions of neural systems at multiple levels of biological organization, from individual neurons up to large circuits and networks. Mixed analog/digital neuromorphic VLSI systems are compact, consume little power and operate in real time independently of the size and complexity of the model. Approach. This article highlights the current efforts to interface neuromorphic systems with neural systems at multiple levels of biological organization, from the synaptic to the system level, and discusses the prospects for future biohybrid systems with neuromorphic circuits of greater complexity. Main results. Single silicon neurons have been interfaced successfully with invertebrate and vertebrate neural networks. This approach allowed the investigation of neural properties that are inaccessible with traditional techniques while providing a realistic biological context not achievable with traditional numerical modeling methods. At the network level, populations of neurons are envisioned to communicate bidirectionally with neuromorphic processors of hundreds or thousands of silicon neurons. Recent work on brain-machine interfaces suggests that this is feasible with current neuromorphic technology. Significance. Biohybrid interfaces between biological neurons and VLSI neuromorphic systems of varying complexity have started to emerge in the literature. Primarily intended as a computational tool for investigating fundamental questions related to neural dynamics, the sophistication of current neuromorphic systems now allows direct interfaces with large neuronal networks and circuits, resulting in potentially interesting clinical applications for neuroengineering systems, neuroprosthetics and neurorehabilitation.

  20. Hopf bifurcation of an (n + 1) -neuron bidirectional associative memory neural network model with delays.

    PubMed

    Xiao, Min; Zheng, Wei Xing; Cao, Jinde

    2013-01-01

    Recent studies on Hopf bifurcations of neural networks with delays are confined to simplified neural network models consisting of only two, three, four, five, or six neurons. It is well known that neural networks are complex and large-scale nonlinear dynamical systems, so the dynamics of delayed neural networks are very rich and complicated. Although discussing the dynamics of networks with a few neurons may help us to understand large-scale networks, there are inevitably some complicated problems that may be overlooked if simplified networks are carried over to large-scale networks. In this paper, a general delayed bidirectional associative memory neural network model with n + 1 neurons is considered. By analyzing the associated characteristic equation, the local stability of the trivial steady state is examined, and then the existence of the Hopf bifurcation at the trivial steady state is established. By applying the normal form theory and the center manifold reduction, explicit formulae are derived to determine the direction and stability of the bifurcating periodic solution. Furthermore, the paper highlights situations where the Hopf bifurcations are particularly critical, in the sense that the amplitude and the period of oscillations are very sensitive to errors due to tolerances in the implementation of neuron interconnections. It is shown that the sensitivity is crucially dependent on the delay and is also significantly influenced by the number of neurons. Numerical simulations are carried out to illustrate the main results.

  1. Improved Autoassociative Neural Networks

    NASA Technical Reports Server (NTRS)

    Hand, Charles

    2003-01-01

    Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application- specific integrated circuit would be needed. A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
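    For contrast with the nexus design described above, a conventional autoassociative update with real-valued Hebbian weights can be sketched as follows (a nexus would instead use bit weights and simple arithmetic).

      import numpy as np

      # Store two binary patterns in a fully connected autoassociative network
      # with Hebbian outer-product weights (zero diagonal).
      patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                           [1, 1, 1, 1, -1, -1, -1, -1]])
      W = sum(np.outer(p, p) for p in patterns).astype(float)
      np.fill_diagonal(W, 0.0)

      # Start from a corrupted version of the first pattern.
      state = patterns[0].copy()
      state[:2] *= -1

      for t in range(5):                       # discrete time steps
          # Each neuron's next state is the sign of the inner product of the
          # current network pattern with that neuron's weight row.
          state = np.where(W @ state >= 0, 1, -1)

      print("recovered first pattern:", np.array_equal(state, patterns[0]))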

  2. Biological neural networks as model systems for designing future parallel processing computers

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.

    1991-01-01

    One of the more interesting debates of the present day centers on whether human intelligence can be simulated by computer. The author works under the premise that neurons individually are not smart at all. Rather, they are physical units which are impinged upon continuously by other matter that influences the direction of voltage shifts across the units' membranes. It is only through the action of a great many neurons, billions in the case of the human nervous system, that intelligent behavior emerges. What is required to understand even the simplest neural system is painstaking analysis, bit by bit, of the architecture and the physiological functioning of its various parts. The biological neural networks studied, the vestibular utricular and saccular maculas of the inner ear, are among the simplest mammalian neural networks to understand and model. While there is still a long way to go to understand even this most simple neural network in sufficient detail for extrapolation to computers and robots, a start has been made. Moreover, the insights obtained and the technologies developed help advance the understanding of the more complex neural networks that underlie human intelligence.

  3. Low-complexity nonlinear adaptive filter based on a pipelined bilinear recurrent neural network.

    PubMed

    Zhao, Haiquan; Zeng, Xiangping; He, Zhengyou

    2011-09-01

    To reduce the computational complexity of the bilinear recurrent neural network (BLRNN), a novel low-complexity nonlinear adaptive filter with a pipelined bilinear recurrent neural network (PBLRNN) is presented in this paper. The PBLRNN, inheriting the modular architecture of the pipelined RNN proposed by Haykin and Li, comprises a number of BLRNN modules that are cascaded in a chained form. Each module is implemented by a small-scale BLRNN with internal dynamics. Because the modules of the PBLRNN can operate simultaneously in pipelined parallel fashion, computational efficiency improves significantly. Moreover, due to the nesting of modules, the performance of the PBLRNN can be further improved. To suit the modular architecture, a modified adaptive-amplitude real-time recurrent learning algorithm is derived using the gradient-descent approach. Extensive simulations are carried out to evaluate the performance of the PBLRNN on nonlinear system identification, nonlinear channel equalization, and chaotic time series prediction. Experimental results show that the PBLRNN provides considerably better performance compared to the single BLRNN and RNN models.

  4. The dynamical analysis of modified two-compartment neuron model and FPGA implementation

    NASA Astrophysics Data System (ADS)

    Lin, Qianjin; Wang, Jiang; Yang, Shuangming; Yi, Guosheng; Deng, Bin; Wei, Xile; Yu, Haitao

    2017-10-01

    The complexity of neural models is increasing with the investigation of larger biological neural networks, more varied ionic channels, and more detailed morphologies, and the implementation of biological neural networks is a task with huge computational complexity and power consumption. This paper presents an efficient digital design using piecewise linearization on a field-programmable gate array (FPGA) to succinctly implement a reduced two-compartment model that retains essential features of more complicated models. The design proposes an approximate neuron model composed of a set of piecewise-linear equations that can reproduce the different dynamical behaviors of a single-neuron model. The consistency of the hardware implementation is verified in terms of dynamical behaviors and bifurcation analysis, and the simulation results, including varied ion-channel characteristics, coincide with the biological neuron model with high accuracy. Hardware synthesis on the FPGA demonstrates that the proposed model has reliable performance and lower hardware resource usage than the original two-compartment model. These investigations are conducive to the scalability of biological neural networks in reconfigurable large-scale neuromorphic systems.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrada, J.J.; Osborne-Lee, I.W.; Grizzaffi, P.A.

    Expert systems are known to be useful in capturing expertise and applying knowledge to chemical engineering problems such as diagnosis, process control, process simulation, and process advisory. However, expert system applications are traditionally limited to knowledge domains that are heuristic and involve only simple mathematics. Neural networks, on the other hand, represent an emerging technology capable of rapid recognition of patterned behavior without regard to mathematical complexity. Although useful in problem identification, neural networks are not very efficient in providing in-depth solutions and typically do not promote full understanding of the problem or the reasoning behind its solutions. Hence, applications of neural networks have certain limitations. This paper explores the potential for expanding the scope of chemical engineering areas where neural networks might be utilized by incorporating expert systems and neural networks into the same application, a process called hybridization. In addition, hybrid applications are compared with those using more traditional approaches, the results of the different applications are analyzed, and the feasibility of converting the preliminary prototypes described herein into useful final products is evaluated. 12 refs., 8 figs.

  6. Development of a computational model on the neural activity patterns of a visual working memory in a hierarchical feedforward Network

    NASA Astrophysics Data System (ADS)

    An, Soyoung; Choi, Woochul; Paik, Se-Bum

    2015-11-01

    Understanding the mechanism of information processing in the human brain remains a unique challenge because the nonlinear interactions between the neurons in the network are extremely complex and because controlling every relevant parameter during an experiment is difficult. Therefore, a simulation using simplified computational models may be an effective approach. In the present study, we developed a general model of neural networks that can simulate nonlinear activity patterns in the hierarchical structure of a neural network system. To test our model, we first examined whether our simulation could match the previously-observed nonlinear features of neural activity patterns. Next, we performed a psychophysics experiment for a simple visual working memory task to evaluate whether the model could predict the performance of human subjects. Our studies show that the model is capable of reproducing the relationship between memory load and performance and may contribute, in part, to our understanding of how the structure of neural circuits can determine the nonlinear neural activity patterns in the human brain.

  7. Pharmacological Tools to Study the Role of Astrocytes in Neural Network Functions.

    PubMed

    Peña-Ortega, Fernando; Rivera-Angulo, Ana Julia; Lorea-Hernández, Jonathan Julio

    2016-01-01

    Although astrocytes and microglia do not communicate by electrical impulses, they can efficiently communicate with each other and with neurons, and thereby participate in complex neural functions requiring broad cell-to-cell communication and long-lasting regulation of brain function. Glial cells express many receptors in common with neurons and secrete gliotransmitters as well as neurotrophic and neuroinflammatory factors, which allow them to modulate synaptic transmission and neural excitability. All these properties allow glial cells to influence the activity of neuronal networks. Thus, the incorporation of glial cell function into the understanding of nervous system dynamics will provide a more accurate view of brain function. Our current knowledge of glial cell biology is providing us with experimental tools to explore their participation in neural network modulation. In this chapter, we review some of the classical, as well as some recent, pharmacological tools developed for the study of astrocytes' influence on neural function. We also provide some examples of the use of these pharmacological agents to understand the role of astrocytes in neural network function and dysfunction.

  8. Architecture and biological applications of artificial neural networks: a tuberculosis perspective.

    PubMed

    Darsey, Jerry A; Griffin, William O; Joginipelli, Sravanthi; Melapu, Venkata Kiran

    2015-01-01

    Advancement of science and technology has prompted researchers to develop new intelligent systems that can solve a variety of problems such as pattern recognition, prediction, and optimization. The ability of the human brain to learn in a fashion that tolerates noise and error has attracted many researchers and provided the starting point for the development of artificial neural networks: the intelligent systems. Intelligent systems can acclimatize to the environment or data and can maximize the chances of success or improve the efficiency of a search. Due to massive parallelism with large numbers of interconnected processors and their ability to learn from the data, neural networks can solve a variety of challenging computational problems. Neural networks have the ability to derive meaning from complicated and imprecise data; they are used to detect patterns and trends that are too complex for humans or other computer systems to notice. Solutions to the toughest problems will not be found through one narrow specialization; therefore we need to combine interdisciplinary approaches to discover the solutions to a variety of problems. Many researchers in different disciplines such as medicine, bioinformatics, molecular biology, and pharmacology have successfully applied artificial neural networks. This chapter helps the reader understand the basics of artificial neural networks, their applications, and their methodology; it also outlines the network learning process and architecture. We present a brief outline of the application of neural networks to medical diagnosis, drug discovery, gene identification, and protein structure prediction. We conclude with a summary of the results from our study using neural networks on tuberculosis data, in diagnosing active tuberculosis and predicting chronic vs. infiltrative forms of tuberculosis.

  9. Artificial Neural Network Based Group Contribution Method for Estimating Cetane and Octane Numbers of Hydrocarbons and Oxygenated Organic Compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kubic, William Louis; Jenkins, Rhodri W.; Moore, Cameron M.

    Chemical pathways for converting biomass into fuels produce compounds for which key physical and chemical property data are unavailable. We developed an artificial neural network based group contribution method for estimating cetane and octane numbers that captures the complex dependence of fuel properties of pure compounds on chemical structure and is statistically superior to current methods.

  10. Artificial Neural Network Based Group Contribution Method for Estimating Cetane and Octane Numbers of Hydrocarbons and Oxygenated Organic Compounds

    DOE PAGES

    Kubic, William Louis; Jenkins, Rhodri W.; Moore, Cameron M.; ...

    2017-09-28

    Chemical pathways for converting biomass into fuels produce compounds for which key physical and chemical property data are unavailable. We developed an artificial neural network based group contribution method for estimating cetane and octane numbers that captures the complex dependence of fuel properties of pure compounds on chemical structure and is statistically superior to current methods.
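    The group-contribution idea can be sketched as follows: each molecule is represented by counts of functional groups, and a small feed-forward network maps those counts to a cetane number. The groups, surrogate target, and network below are illustrative assumptions, not the published model.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical group-count features per molecule:
      # [-CH3, -CH2-, aromatic C, -OH, -O- (ether), C=O]  (illustrative only)
      X = rng.integers(0, 6, size=(400, 6)).astype(float)
      # Hypothetical surrogate for cetane number as a function of the group counts.
      y = 10 + 4 * X[:, 1] - 6 * X[:, 2] - 3 * X[:, 3] + 0.5 * X[:, 0] * X[:, 1] \
          + rng.normal(scale=2.0, size=400)

      # Standardize inputs and target so a plain gradient-descent fit behaves well.
      Xs = (X - X.mean(0)) / X.std(0)
      ys = (y - y.mean()) / y.std()

      # One hidden layer captures the nonlinear dependence on group counts.
      W1 = rng.normal(scale=0.3, size=(6, 12)); b1 = np.zeros(12)
      W2 = rng.normal(scale=0.3, size=(12, 1)); b2 = np.zeros(1)
      lr = 0.05

      for epoch in range(3000):
          h = np.tanh(Xs @ W1 + b1)
          pred = (h @ W2 + b2).ravel()
          err = pred - ys
          grad_out = (err / len(ys)).reshape(-1, 1)
          grad_h = grad_out @ W2.T * (1 - h ** 2)
          W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(axis=0)
          W1 -= lr * Xs.T @ grad_h;  b1 -= lr * grad_h.sum(axis=0)

      print("training RMSE (cetane units):",
            round(float(np.sqrt(np.mean(err ** 2)) * y.std()), 2))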

  11. Disordered models of acquired dyslexia

    NASA Astrophysics Data System (ADS)

    Virasoro, M. A.

    We show that certain specific correlations in the error probabilities observed in dyslexic patients, which are normally explained by introducing additional complexity into models of the reading process, are typical of any neural network system that has learned to deal with a quasiregular environment. On the other hand, we show that in neural networks the more regular behavior does not naturally become the default behavior.

  12. Stimulus Sensitivity of a Spiking Neural Network Model

    NASA Astrophysics Data System (ADS)

    Chevallier, Julien

    2018-02-01

    Some recent papers relate the criticality of complex systems to their maximal capacity for information processing. In the present paper, we consider high dimensional point processes, known as age-dependent Hawkes processes, which have been used to model spiking neural networks. Using a mean-field approximation, the response of the network to a stimulus is computed and we provide a notion of stimulus sensitivity. It appears that the maximal sensitivity is achieved in the sub-critical regime, yet close to criticality, for a range of biologically relevant parameters.

  13. A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.

    PubMed

    Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias

    2008-12-01

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule that includes passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.

  14. RBF neural network based PI pitch controller for a class of 5-MW wind turbines using particle swarm optimization algorithm.

    PubMed

    Poultangari, Iman; Shahnazi, Reza; Sheikhan, Mansour

    2012-09-01

    In order to control the pitch angle of blades in wind turbines, the proportional-integral (PI) controller is commonly employed due to its simplicity and industrial usability. Neural networks and evolutionary algorithms are tools that provide a suitable basis for determining the optimal PI gains. In this paper, a radial basis function (RBF) neural network based PI controller is proposed for collective pitch control (CPC) of a 5-MW wind turbine. In order to provide an optimal dataset to train the RBF neural network, the particle swarm optimization (PSO) evolutionary algorithm is used. The proposed method does not require knowledge of the complexities, nonlinearities and uncertainties of the system under control. The simulation results show that the proposed controller has satisfactory performance. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
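
    As an illustration of the tuning step described above, the following is a minimal sketch of particle swarm optimization searching for PI gains (Kp, Ki) that minimize a user-supplied cost; the cost function, bounds and parameter values are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def pso_tune_pi(cost, bounds, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Particle swarm search for PI gains (Kp, Ki) minimising `cost`.
    `cost` maps a gain vector to a scalar (e.g. integrated pitch-tracking error);
    `bounds` is a (2, 2) array of [min, max] per gain. Illustrative only."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, 2))   # candidate gain vectors
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, pbest_cost.min()

# Toy quadratic cost standing in for a turbine simulation run:
best_gains, best_cost = pso_tune_pi(lambda g: (g[0] - 2.0) ** 2 + (g[1] - 0.5) ** 2,
                                    np.array([[0.0, 5.0], [0.0, 2.0]]))
print(best_gains, best_cost)
```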

  15. Modeling level of urban taxi services using neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J.; Wong, S.C.; Tong, C.O.

    1999-05-01

    This paper is concerned with the modeling of the complex demand-supply relationship in urban taxi services. A neural network model is developed, based on a taxi service situation observed in the urban area of Hong Kong. The input consists of several exogenous variables including the number of licensed taxis, the incremental charge of the taxi fare, the average occupied taxi journey time, average disposable income, population and the consumer price index; the output consists of a set of endogenous variables including daily taxi passenger demand, passenger waiting time, vacant taxi headway, average percentage of occupied taxis, taxi utilization, and average taxi waiting time. Comparisons of the estimation accuracy are made between the neural network model and the simultaneous equations model. The results show that the neural network-based macro taxi model can obtain much more accurate information about the taxi services than the simultaneous equations model does. Although the data set used for training the neural network is small, the results obtained thus far are very encouraging. The neural network model can be used as a policy tool by regulators to assist with decisions concerning the restriction of the number of taxi licenses and the fixing of the taxi fare structure, as well as a range of service quality controls.

  16. Expanding the occupational health methodology: A concatenated artificial neural network approach to model the burnout process in Chinese nurses.

    PubMed

    Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming

    2016-01-01

    Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) the concatenated artificial neural network approach is feasible for modelling the burnout process, (2) sensitivity analysis is a productive method for studying the relative importance of predictor variables, and (3) the relationships among variables involved in the development of burnout and its consequences are non-linear to different degrees. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method for analysing non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.
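
    The Monte Carlo sensitivity analysis mentioned above can be sketched generically: vary one input across its range while resampling the others, and measure how much of the output variance that input explains. The sketch below assumes a generic model function and uniform input ranges; it is not the authors' procedure.

```python
import numpy as np

def first_order_sensitivity(model, lo, hi, n=10_000, seed=0):
    """Crude Monte Carlo sensitivity: for each input, condition on a grid of values
    while resampling the other inputs, and report the share of output variance
    attributable to that input (a rough first-order effect)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    base = rng.uniform(lo, hi, size=(n, lo.size))
    total_var = np.var(model(base))
    effects = []
    for j in range(lo.size):
        grid = np.linspace(lo[j], hi[j], 25)
        means = []
        for g in grid:                       # fix input j, average over the rest
            sample = rng.uniform(lo, hi, size=(n // 25, lo.size))
            sample[:, j] = g
            means.append(model(sample).mean())
        effects.append(np.var(means) / total_var)
    return np.array(effects)

# Toy stand-in for a trained burnout network: output depends mostly on input 0.
toy = lambda X: 2.0 * X[:, 0] + 0.3 * X[:, 1] ** 2
print(first_order_sensitivity(toy, [0, 0, 0], [1, 1, 1]).round(3))
```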

  17. [Application of artificial neural networks on the prediction of surface ozone concentrations].

    PubMed

    Shen, Lu-Lu; Wang, Yu-Xuan; Duan, Lei

    2011-08-01

    Ozone is an important secondary air pollutant in the lower atmosphere. In order to predict the hourly maximum ozone one day in advance based on meteorological variables for the Wanqingsha site in Guangzhou, Guangdong province, a neural network model (Multi-Layer Perceptron) and a multiple linear regression model were used and compared. Model inputs are the meteorological parameters (wind speed, wind direction, air temperature, relative humidity, barometric pressure and solar radiation) of the next day and the hourly maximum ozone concentration of the previous day. The OBS (optimal brain surgeon) method was adopted to prune the neural network, to reduce its complexity and to improve its generalization ability. We find that the pruned neural network has the capacity to predict the peak ozone, with an agreement index of 92.3%, a root mean square error of 0.0428 mg/m3, an R-square of 0.737 and a success index of threshold exceedance of 77.0% (for a threshold O3 mixing ratio of 0.20 mg/m3). When a neural classifier was added to the neural network model, the success index of threshold exceedance increased to 83.6%. Through comparison of the performance indices of the multiple linear regression model and the neural network model, we conclude that the neural network is the better choice for predicting peak ozone from meteorological forecasts, and it may be applied to the practical prediction of ozone concentrations.
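
    A minimal sketch of this kind of next-day peak-ozone regression, using a scikit-learn multi-layer perceptron on placeholder data; the feature layout and network size are illustrative assumptions, and the optimal-brain-surgeon pruning step is not shown.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

# Illustrative feature layout: next-day meteorology plus previous-day peak O3.
# Columns: wind speed, wind direction, temperature, relative humidity,
#          pressure, solar radiation, previous-day peak O3 (mg/m3).
rng = np.random.default_rng(0)
X = rng.random((500, 7))                                              # placeholder predictors
y = 0.1 * X[:, 2] + 0.05 * X[:, 6] + 0.01 * rng.standard_normal(500)  # placeholder target

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))
model.fit(X[:400], y[:400])                      # train on the earlier days
pred = model.predict(X[400:])                    # predict held-out days
rmse = np.sqrt(mean_squared_error(y[400:], pred))
print(f"RMSE on held-out days: {rmse:.4f} mg/m3")
```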

  18. Application of machine learning methods for traffic signs recognition

    NASA Astrophysics Data System (ADS)

    Filatov, D. V.; Ignatev, K. V.; Deviatkin, A. V.; Serykh, E. V.

    2018-02-01

    This paper focuses on solving a relevant and pressing safety issue on intercity roads. Two approaches were considered for solving the problem of traffic sign recognition; both involve neural networks that analyze images obtained from a camera in real time. The first approach is based on sequential image processing. At the initial stage, with the help of color filters and morphological operations (dilation and erosion), the area containing the traffic sign is located in the image; then the selected and scaled fragment of the image is analyzed using a feedforward neural network to determine the meaning of the found traffic sign. Learning of the neural network in this approach is carried out using the backpropagation method. The second approach involves convolutional neural networks at both stages, i.e. when searching for and selecting the area of the image containing the traffic sign, and when determining its meaning. Learning of the neural network in the second approach is carried out using an intersection-over-union function and a loss function. For training the neural networks and testing the proposed algorithms, a series of dash-cam videos shot under various weather and illumination conditions was used. As a result, the proposed approaches for traffic sign recognition were analyzed and compared on key indicators such as recognition rate and the complexity of the neural networks' learning process.
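
    The intersection-over-union criterion used in the second approach can be stated compactly for axis-aligned boxes (x1, y1, x2, y2); the sketch below is generic and independent of the paper's implementation.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)      # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175, i.e. about 0.143
```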

  19. Use of neural networks to model complex immunogenetic associations of disease: human leukocyte antigen impact on the progression of human immunodeficiency virus infection.

    PubMed

    Ioannidis, J P; McQueen, P G; Goedert, J J; Kaslow, R A

    1998-03-01

    Complex immunogenetic associations of disease involving a large number of gene products are difficult to evaluate with traditional statistical methods and may require complex modeling. The authors evaluated the performance of feed-forward backpropagation neural networks in predicting rapid progression to acquired immunodeficiency syndrome (AIDS) for patients with human immunodeficiency virus (HIV) infection on the basis of major histocompatibility complex variables. Networks were trained on data from patients from the Multicenter AIDS Cohort Study (n = 139) and then validated on patients from the DC Gay cohort (n = 102). The outcome of interest was rapid disease progression, defined as progression to AIDS in <6 years from seroconversion. Human leukocyte antigen (HLA) variables were selected as network inputs with multivariate regression and a previously described algorithm selecting markers with extreme point estimates for progression risk. Network performance was compared with that of logistic regression. Networks with 15 HLA inputs and a single hidden layer of five nodes achieved a sensitivity of 87.5% and specificity of 95.6% in the training set, vs. 77.0% and 76.9%, respectively, achieved by logistic regression. When validated on the DC Gay cohort, networks averaged a sensitivity of 59.1% and specificity of 74.3%, vs. 53.1% and 61.4%, respectively, for logistic regression. Neural networks offer further support to the notion that HIV disease progression may be dependent on complex interactions between different class I and class II alleles and transporters associated with antigen processing variants. The effect in the current models is of moderate magnitude, and more data as well as other host and pathogen variables may need to be considered to improve the performance of the models. Artificial intelligence methods may complement linear statistical methods for evaluating immunogenetic associations of disease.

  20. Artificial astrocytes improve neural network performance.

    PubMed

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analyses of the performance of NNs with different numbers of neurons or different architectures indicate that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  1. Artificial Astrocytes Improve Neural Network Performance

    PubMed Central

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analyses of the performance of NNs with different numbers of neurons or different architectures indicate that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  2. Design of hybrid radial basis function neural networks (HRBFNNs) realized with the aid of hybridization of fuzzy clustering method (FCM) and polynomial neural networks (PNNs).

    PubMed

    Huang, Wei; Oh, Sung-Kwun; Pedrycz, Witold

    2014-12-01

    In this study, we propose Hybrid Radial Basis Function Neural Networks (HRBFNNs) realized with the aid of a fuzzy clustering method (Fuzzy C-Means, FCM) and polynomial neural networks. Fuzzy clustering, used to form information granulation, is employed to overcome a possible curse of dimensionality, while the polynomial neural network is utilized to build local models. Furthermore, a genetic algorithm (GA) is exploited to optimize the essential design parameters of the network (including the fuzzification coefficient, the number of input polynomial fuzzy neurons (PFNs), and a collection of the specific subset of input PFNs). To reduce the dimensionality of the input space, principal component analysis (PCA) is considered as a sound preprocessing vehicle. The performance of the HRBFNNs is quantified through a series of experiments in which we use several modeling benchmarks of different levels of complexity (different numbers of input variables and different amounts of available data). A comparative analysis reveals that the proposed HRBFNNs exhibit higher accuracy in comparison to the accuracy produced by some models reported previously in the literature. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. DCS-Neural-Network Program for Aircraft Control and Testing

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    2006-01-01

    A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (derived from a relatively simple training algorithm that implements a Delaunay triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.
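
    A much-simplified sketch of the error-driven node insertion idea described above (the best-matching node accumulates error, and new nodes are inserted where accumulated error is largest); the class and update rules here are illustrative assumptions and omit the competitive Hebbian and Delaunay-neighbourhood machinery of the actual DCS program.

```python
import numpy as np

class GrowingNet:
    """Toy growing network: nodes store positions and accumulated error."""
    def __init__(self, x0, x1):
        self.pos = [np.asarray(x0, float), np.asarray(x1, float)]
        self.err = [0.0, 0.0]

    def nearest(self, x):
        d = [np.linalg.norm(p - x) for p in self.pos]
        return int(np.argmin(d)), min(d)

    def train_step(self, x, lr=0.05):
        """Move the best-matching node towards the sample; accumulate its error."""
        x = np.asarray(x, float)
        i, dist = self.nearest(x)
        self.err[i] += dist ** 2
        self.pos[i] += lr * (x - self.pos[i])

    def insert_node(self):
        """Insert a node halfway between the two highest-error nodes and damp their error."""
        i, j = np.argsort(self.err)[-2:]
        self.err[i] *= 0.5
        self.err[j] *= 0.5
        self.pos.append(0.5 * (self.pos[i] + self.pos[j]))
        self.err.append(0.5 * (self.err[i] + self.err[j]))

rng = np.random.default_rng(1)
net = GrowingNet([0.0, 0.0], [1.0, 1.0])
for step, sample in enumerate(rng.random((300, 2))):
    net.train_step(sample)
    if (step + 1) % 100 == 0:        # grow the network periodically
        net.insert_node()
print(len(net.pos), "nodes after training")
```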

  4. Application of neural networks with orthogonal activation functions in control of dynamical systems

    NASA Astrophysics Data System (ADS)

    Nikolić, Saša S.; Antić, Dragan S.; Milojković, Marko T.; Milovanović, Miroslav B.; Perić, Staniša Lj.; Mitić, Darko B.

    2016-04-01

    In this article, we present a new method for the synthesis of almost and quasi-orthogonal polynomials of arbitrary order. Filters designed on the basis of these functions are generators of generalised quasi-orthogonal signals, for which we derived and presented the necessary mathematical background. Based on the theoretical results, we designed and practically implemented a generalised first-order (k = 1) quasi-orthogonal filter and proved its quasi-orthogonality via experiments. The designed filters can be applied in many scientific areas. In this article, the generated functions were successfully implemented as activation functions in a Nonlinear Auto Regressive eXogenous (NARX) neural network. One practical application of the designed orthogonal neural network is demonstrated through the example of control of a complex non-linear technical system, a laboratory magnetic levitation system. The obtained results were compared with neural networks using standard activation functions and orthogonal functions of trigonometric shape. The proposed network demonstrated superiority over existing solutions in terms of system performance.

  5. Computational Models of Neuron-Astrocyte Interactions Lead to Improved Efficacy in the Performance of Neural Networks

    PubMed Central

    Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B.

    2012-01-01

    The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem. PMID:22649480

  6. Bankruptcy prediction based on financial ratios using Jordan Recurrent Neural Networks: a case study in Polish companies

    NASA Astrophysics Data System (ADS)

    Hardinata, Lingga; Warsito, Budi; Suparti

    2018-05-01

    The complexity of bankruptcy makes accurate bankruptcy prediction models difficult to achieve. Various prediction models have been developed to improve the accuracy of bankruptcy prediction. Machine learning has been widely used for prediction because of its adaptive capabilities. Artificial Neural Networks (ANN) are one machine learning approach that has proved able to perform inference tasks such as prediction and classification, especially in data mining. In this paper, we propose the implementation of Jordan Recurrent Neural Networks (JRNN) to classify and predict corporate bankruptcy based on financial ratios. The feedback interconnections in a JRNN enable the network to retain important information, allowing it to work more effectively. The result analysis showed that the JRNN works very well in bankruptcy prediction, with an average success rate of 81.3785%.
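
    A minimal numpy sketch of the Jordan-style recurrence, in which the previous output is fed back as context input; the dimensions (e.g. five financial ratios per time step) and random parameters are illustrative assumptions, not the authors' trained model.

```python
import numpy as np

def jordan_forward(X, W_in, W_ctx, W_out, b_h, b_o):
    """Forward pass of a Jordan RNN: the previous *output* is fed back as context."""
    n_out = W_out.shape[0]
    context = np.zeros(n_out)
    outputs = []
    for x in X:                                          # X: (time, n_in)
        h = np.tanh(W_in @ x + W_ctx @ context + b_h)    # hidden state
        y = 1.0 / (1.0 + np.exp(-(W_out @ h + b_o)))     # sigmoid: bankrupt vs healthy
        outputs.append(y)
        context = y                                      # feedback connection
    return np.array(outputs)

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, T = 5, 8, 1, 12                   # e.g. 5 financial ratios per period
X = rng.standard_normal((T, n_in))
params = (rng.standard_normal((n_hidden, n_in)) * 0.1,   # W_in
          rng.standard_normal((n_hidden, n_out)) * 0.1,  # W_ctx
          rng.standard_normal((n_out, n_hidden)) * 0.1,  # W_out
          np.zeros(n_hidden), np.zeros(n_out))           # biases
print(jordan_forward(X, *params).ravel())
```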

  7. Computational models of neuron-astrocyte interactions lead to improved efficacy in the performance of neural networks.

    PubMed

    Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B

    2012-01-01

    The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem.

  8. Restricted Complexity Framework for Nonlinear Adaptive Control in Complex Systems

    NASA Astrophysics Data System (ADS)

    Williams, Rube B.

    2004-02-01

    Control law adaptation that includes implicit or explicit adaptive state estimation, can be a fundamental underpinning for the success of intelligent control in complex systems, particularly during subsystem failures, where vital system states and parameters can be impractical or impossible to measure directly. A practical algorithm is proposed for adaptive state filtering and control in nonlinear dynamic systems when the state equations are unknown or are too complex to model analytically. The state equations and inverse plant model are approximated by using neural networks. A framework for a neural network based nonlinear dynamic inversion control law is proposed, as an extrapolation of prior developed restricted complexity methodology used to formulate the adaptive state filter. Examples of adaptive filter performance are presented for an SSME simulation with high pressure turbine failure to support extrapolations to adaptive control problems.

  9. Optimized star sensors laboratory calibration method using a regularization neural network.

    PubMed

    Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen

    2018-02-10

    High-precision ground calibration is essential to ensure the performance of star sensors. However, complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer structure neural network is designed to represent the mapping of the star vector and the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the net structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method can achieve high precision with less calibration data and without any other a priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirement for large FOV star sensors.

  10. Quick fuzzy backpropagation algorithm.

    PubMed

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithms are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared with the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, it is broadly applicable in areas such as adaptive and adaptable interactive systems and data mining.

  11. Neuromorphic device architectures with global connectivity through electrolyte gating

    NASA Astrophysics Data System (ADS)

    Gkoupidenis, Paschalis; Koutsouras, Dimitrios A.; Malliaras, George G.

    2017-05-01

    Information processing in the brain takes place in a network of neurons that are connected with each other by an immense number of synapses. At the same time, neurons are immersed in a common electrochemical environment, and global parameters such as concentrations of various hormones regulate the overall network function. This computational paradigm of global regulation, also known as homeoplasticity, has important implications for the overall behaviour of large neural ensembles and is barely addressed in neuromorphic device architectures. Here, we demonstrate the global control of an array of organic devices based on poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) that are immersed in an electrolyte, a behaviour that resembles the homeoplasticity phenomena of the neural environment. We use this effect to produce behaviour that is reminiscent of the coupling between local activity and global oscillations in biological neural networks. We further show that the electrolyte establishes complex connections between individual devices, and we leverage these connections to implement coincidence detection. These results demonstrate that electrolyte gating offers significant advantages for the realization of networks of neuromorphic devices of higher complexity and with minimal hardwired connectivity.

  12. Hexacopter trajectory control using a neural network

    NASA Astrophysics Data System (ADS)

    Artale, V.; Collotta, M.; Pau, G.; Ricciardello, A.

    2013-10-01

    Modern flight control systems are complex due to their non-linear nature. In fact, modern aerospace vehicles are expected to have non-conventional flight envelopes and thus must guarantee a high level of robustness and adaptability in order to operate in uncertain environments. Neural networks (NN) with real-time learning capability can be used for flight control in applications with manned or unmanned aerial vehicles. Indeed, using proven lower-level control algorithms with adaptive elements that exhibit long-term learning could help in achieving better adaptation performance while performing aggressive maneuvers. In this paper we present a mathematical model and a neural network for hexacopter dynamics in order to develop proper methods for stabilization and trajectory control.

  13. Active Control of Complex Systems via Dynamic (Recurrent) Neural Networks

    DTIC Science & Technology

    1992-05-30

  14. Relaxed fault-tolerant hardware implementation of neural networks in the presence of multiple transient errors.

    PubMed

    Mahdiani, Hamid Reza; Fakhraie, Sied Mehdi; Lucas, Caro

    2012-08-01

    Reliability should be identified as the most important challenge in future nano-scale very large scale integration (VLSI) implementation technologies for the development of complex integrated systems. Normally, fault tolerance (FT) in a conventional system is achieved by increasing its redundancy, which also implies higher implementation costs and lower performance that sometimes makes it even infeasible. In contrast to custom approaches, a new class of applications is categorized in this paper, which is inherently capable of absorbing some degrees of vulnerability and providing FT based on their natural properties. Neural networks are good indicators of imprecision-tolerant applications. We have also proposed a new class of FT techniques called relaxed fault-tolerant (RFT) techniques which are developed for VLSI implementation of imprecision-tolerant applications. The main advantage of RFT techniques with respect to traditional FT solutions is that they exploit inherent FT of different applications to reduce their implementation costs while improving their performance. To show the applicability as well as the efficiency of the RFT method, the experimental results for implementation of a face-recognition computationally intensive neural network and its corresponding RFT realization are presented in this paper. The results demonstrate promising higher performance of artificial neural network VLSI solutions for complex applications in faulty nano-scale implementation environments.

  15. Phase Transitions in Living Neural Networks

    NASA Astrophysics Data System (ADS)

    Williams-Garcia, Rashid Vladimir

    Our nervous systems are composed of intricate webs of interconnected neurons interacting in complex ways. These complex interactions result in a wide range of collective behaviors with implications for features of brain function, e.g., information processing. Under certain conditions, such interactions can drive neural network dynamics towards critical phase transitions, where power-law scaling is conjectured to allow optimal behavior. Recent experimental evidence is consistent with this idea and it seems plausible that healthy neural networks would tend towards optimality. This hypothesis, however, is based on two problematic assumptions, which I describe and for which I present alternatives in this thesis. First, critical transitions may vanish due to the influence of an environment, e.g., a sensory stimulus, and so living neural networks may be incapable of achieving "critical" optimality. I develop a framework known as quasicriticality, in which a relative optimality can be achieved depending on the strength of the environmental influence. Second, the power-law scaling supporting this hypothesis is based on statistical analysis of cascades of activity known as neuronal avalanches, which conflate causal and non-causal activity, thus confounding important dynamical information. In this thesis, I present a new method to unveil causal links, known as causal webs, between neuronal activations, thus allowing for experimental tests of the quasicriticality hypothesis and other practical applications.

  16. Optimization Methods for Spiking Neurons and Networks

    PubMed Central

    Russell, Alexander; Orchard, Garrick; Dong, Yi; Mihalaş, Ştefan; Niebur, Ernst; Tapson, Jonathan; Etienne-Cummings, Ralph

    2011-01-01

    Spiking neurons and spiking neural circuits are finding uses in a multitude of tasks such as robotic locomotion control, neuroprosthetics, visual sensory processing, and audition. The desired neural output is achieved through the use of complex neuron models, or by combining multiple simple neurons into a network. In either case, a means for configuring the neuron or neural circuit is required. Manual manipulation of parameters is both time consuming and non-intuitive due to the nonlinear relationship between parameters and the neuron’s output. The complexity rises even further as the neurons are networked and the systems often become mathematically intractable. In large circuits, the desired behavior and timing of action potential trains may be known but the timing of the individual action potentials is unknown and unimportant, whereas in single neuron systems the timing of individual action potentials is critical. In this paper, we automate the process of finding parameters. To configure a single neuron we derive a maximum likelihood method for configuring a neuron model, specifically the Mihalas–Niebur Neuron. Similarly, to configure neural circuits, we show how we use genetic algorithms (GAs) to configure parameters for a network of simple integrate and fire with adaptation neurons. The GA approach is demonstrated both in software simulation and hardware implementation on a reconfigurable custom very large scale integration chip. PMID:20959265

  17. Simple Algorithms for Distributed Leader Election in Anonymous Synchronous Rings and Complete Networks Inspired by Neural Development in Fruit Flies.

    PubMed

    Xu, Lei; Jeavons, Peter

    2015-11-01

    Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
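
    The flavour of the one-bit-broadcast idea can be sketched as a toy simulation in which each active node broadcasts with a small probability and silent nodes that hear a broadcast drop out; the probabilities and termination rule of the actual algorithms differ, so this is illustrative only.

```python
import random

def elect_leader(n, seed=0):
    """Toy synchronous election in a complete network with one-bit broadcasts.
    Each active node broadcasts with probability 1/(number of active nodes);
    a node that stayed silent while hearing a broadcast drops out of contention."""
    random.seed(seed)
    active = set(range(n))
    rounds = 0
    while len(active) > 1:
        rounds += 1
        p = 1.0 / len(active)
        speakers = {v for v in active if random.random() < p}
        if speakers:                  # a silent round leaves everyone active
            active = speakers         # silent nodes heard a message and drop out
    return active.pop(), rounds

leader, rounds = elect_leader(64)
print(f"node {leader} elected after {rounds} rounds")
```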

  18. Artificial Neural Network approach to develop unique Classification and Raga identification tools for Pattern Recognition in Carnatic Music

    NASA Astrophysics Data System (ADS)

    Srimani, P. K.; Parimala, Y. G.

    2011-12-01

    A unique approach based on artificial neural networks has been developed to study patterns in the ragas of Carnatic classical music. Ragas in Carnatic music, which have their roots in the Vedic period, have developed on a scientific foundation over thousands of years. However, owing to the vastness and complexity of the system, it has always been a challenge for scientists and musicologists to give an all-encompassing perspective, both qualitatively and quantitatively. Cognition, comprehension and perception of ragas in Indian classical music have long been the subject of intensive research; they remain highly intriguing, and many of their facets remain to be unravelled. This paper is an attempt to view the melakartha ragas from a cognitive perspective using an artificial neural network based approach, which has given rise to very interesting results. The 72 ragas of the melakartha system were defined through the combination of frequencies occurring in each of them. The data sets were trained using several neural networks. 100% accurate pattern recognition and classification was obtained using linear regression, TLRN, MLP and RBF networks. The performance of the different network topologies, obtained by varying various network parameters, was compared. Linear regression was found to be the best performing network.

  19. A hybrid modeling approach for option pricing

    NASA Astrophysics Data System (ADS)

    Hajizadeh, Ehsan; Seifi, Abbas

    2011-11-01

    The complexity of option pricing has led many researchers to develop sophisticated models for such purposes. The commonly used Black-Scholes model suffers from a number of limitations; one of these is the controversial assumption that the underlying probability distribution is lognormal. We propose a couple of hybrid models to reduce these limitations and enhance option pricing ability. The key input to an option pricing model is volatility. In this paper, we use three popular GARCH-type models for estimating volatility. Then, we develop two non-parametric models based on neural networks and neuro-fuzzy networks to price call options on the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and neuro-fuzzy network models outperform the Black-Scholes model. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options the neural network model performs better, while for both in-the-money and out-of-the-money options the neuro-fuzzy model provides better results.
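
    For reference, the Black-Scholes call price that the hybrid models are benchmarked against can be written down directly; the symbols (S spot, K strike, r risk-free rate, sigma volatility, T maturity) and the example numbers below are illustrative, not the paper's data.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S, K, r, sigma, T):
    """European call price under Black-Scholes (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Index-style example: spot 1200, strike 1250, 5% rate, 20% volatility, 3-month maturity.
print(round(bs_call(1200, 1250, 0.05, 0.20, 0.25), 2))
```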

  20. Overview of artificial neural networks.

    PubMed

    Zou, Jinming; Han, Yi; So, Sung-Sau

    2008-01-01

    The artificial neural network (ANN), or simply neural network, is a machine learning method evolved from the idea of simulating the human brain. The data explosion in modern drug discovery research requires sophisticated analysis methods to uncover the hidden causal relationships between single or multiple responses and a large set of properties. The ANN is one of many versatile tools to meet the demand in drug discovery modeling. Compared to a traditional regression approach, the ANN is capable of modeling complex nonlinear relationships. The ANN also has excellent fault tolerance and is fast and highly scalable with parallel processing. This chapter introduces the background of ANN development and outlines the basic concepts crucially important for understanding more sophisticated ANNs. Several commonly used learning methods and network setups are discussed briefly at the end of the chapter.

  1. Dynamics of coupled mode solitons in bursting neural networks

    NASA Astrophysics Data System (ADS)

    Nfor, N. Oma; Ghomsi, P. Guemkam; Moukam Kakmeni, F. M.

    2018-02-01

    Using an electrically coupled chain of Hindmarsh-Rose neural models, we analytically derived the nonlinearly coupled complex Ginzburg-Landau equations. This is realized by superimposing the lower and upper cutoff modes of wave propagation and by employing the multiple scale expansions in the semidiscrete approximation. We explore the modified Hirota method to analytically obtain the bright-bright pulse soliton solutions of our nonlinearly coupled equations. With these bright solitons as initial conditions of our numerical scheme, and knowing that electrical signals are the basis of information transfer in the nervous system, it is found that prior to collisions at the boundaries of the network, neural information is purely conveyed by bisolitons at the lower cutoff mode. After collision, the bisolitons are completely annihilated and neural information is now relayed by the upper cutoff mode via the propagation of plane waves. It is also shown that the linear gain of the system is inextricably linked to the complex physiological mechanisms of ion mobility, since the speeds and spatial profiles of the coupled nerve impulses vary with the gain. A linear stability analysis performed on the coupled system mainly confirms the instability of plane waves in the neural network, with a glaring example of the transition of weak plane waves into a dark soliton and then static kinks. Numerical simulations have confirmed the annihilation phenomenon subsequent to collision in neural systems. They equally showed that the symmetry breaking of the pulse solution of the system leaves static internal modes, sometimes referred to as Goldstone modes, in the network.

  2. Dynamics of coupled mode solitons in bursting neural networks.

    PubMed

    Nfor, N Oma; Ghomsi, P Guemkam; Moukam Kakmeni, F M

    2018-02-01

    Using an electrically coupled chain of Hindmarsh-Rose neural models, we analytically derived the nonlinearly coupled complex Ginzburg-Landau equations. This is realized by superimposing the lower and upper cutoff modes of wave propagation and by employing the multiple scale expansions in the semidiscrete approximation. We explore the modified Hirota method to analytically obtain the bright-bright pulse soliton solutions of our nonlinearly coupled equations. With these bright solitons as initial conditions of our numerical scheme, and knowing that electrical signals are the basis of information transfer in the nervous system, it is found that prior to collisions at the boundaries of the network, neural information is purely conveyed by bisolitons at the lower cutoff mode. After collision, the bisolitons are completely annihilated and neural information is now relayed by the upper cutoff mode via the propagation of plane waves. It is also shown that the linear gain of the system is inextricably linked to the complex physiological mechanisms of ion mobility, since the speeds and spatial profiles of the coupled nerve impulses vary with the gain. A linear stability analysis performed on the coupled system mainly confirms the instability of plane waves in the neural network, with a glaring example of the transition of weak plane waves into a dark soliton and then static kinks. Numerical simulations have confirmed the annihilation phenomenon subsequent to collision in neural systems. They equally showed that the symmetry breaking of the pulse solution of the system leaves static internal modes, sometimes referred to as Goldstone modes, in the network.

  3. The use of neural network technology to model swimming performance.

    PubMed

    Silva, António José; Costa, Aldo Manuel; Oliveira, Paulo Moura; Reis, Victor Machado; Saavedra, José; Perl, Jurgen; Rouboa, Abel; Marinho, Daniel Almeida

    2007-01-01

    The aims of this study were to identify the factors that explain performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers, to model performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons), and to assess the precision of the neural network models in predicting performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network (multilayer perceptron) with three neurons in a single hidden layer was used. The prognostic precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach to the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports. Key points: (1) the non-linear analysis resulting from the use of a feed-forward neural network allowed the development of four performance models; (2) the mean difference between the true and estimated results produced by each of the four neural network models was low; (3) the neural network tool can be a good approach to performance modeling as an alternative to standard statistical models, which presume well-defined distributions and independence among all inputs; (4) the use of neural networks for sports science applications allowed us to create very realistic models for swimming performance prediction based on previously selected criteria related to the dependent variable (performance).

  4. Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed

    NASA Astrophysics Data System (ADS)

    Arif, N.; Danoedoro, P.; Hartono

    2017-12-01

    Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, and thus it is necessary to have a model that represents reality well. Erosion models are complex because of uncertain data from different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing such as erosion data. The main difficulty in artificial neural network training is the determination of the value of each network input parameter, i.e. the number of hidden layers, the learning rate, the momentum, and the RMS. This study tested the capability of an artificial neural network application in the prediction of erosion risk with several input parameters through multiple simulations to obtain good classification results. The model was implemented in the Serang Watershed, Kulonprogo, Yogyakarta, which is one of the critical potential watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on the accuracy compared to the other parameters. A small number of iterations can produce good accuracy if the combination of other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, which occurred in the ANN 14 simulation with a combination of network input parameters of 1 HL; LR 0.01; M 0.5; RMS 0.0001, and 15000 iterations. The ANN training accuracy was not influenced by the number of channels, namely the input dataset (erosion factors) or the data dimensions; rather it was determined by changes in the network parameters.

  5. Solving the quantum many-body problem with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Carleo, Giuseppe; Troyer, Matthias

    2017-02-01

    The challenge posed by the many-body problem in quantum physics originates from the difficulty of describing the nontrivial correlations encoded in the exponential complexity of the many-body wave function. Here we demonstrate that systematic machine learning of the wave function can reduce this complexity to a tractable computational form for some notable cases of physical interest. We introduce a variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons. We demonstrate a reinforcement-learning scheme that is capable of both finding the ground state and describing the unitary time evolution of complex interacting quantum systems. Our approach achieves high accuracy in describing prototypical interacting spin models in one and two dimensions.
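
    A minimal sketch of the restricted-Boltzmann-machine form of the variational wave function used in this line of work, psi(s) = exp(sum_i a_i s_i) * prod_j 2 cosh(b_j + sum_i W_ji s_i) for spins s_i = ±1; the parameters below are random placeholders and the reinforcement-learning optimisation is not shown.

```python
import numpy as np

def rbm_amplitude(spins, a, b, W):
    """Unnormalised RBM wave-function amplitude for a spin configuration s_i = ±1."""
    theta = b + W @ spins                        # hidden-unit activations
    return np.exp(a @ spins) * np.prod(2.0 * np.cosh(theta))

rng = np.random.default_rng(0)
n_visible, n_hidden = 8, 16                      # e.g. an 8-site spin chain
a = 0.01 * rng.standard_normal(n_visible)        # visible biases
b = 0.01 * rng.standard_normal(n_hidden)         # hidden biases
W = 0.01 * rng.standard_normal((n_hidden, n_visible))

spins = rng.choice([-1.0, 1.0], size=n_visible)  # one sampled configuration
print(rbm_amplitude(spins, a, b, W))
```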

  6. A loop-based neural architecture for structured behavior encoding and decoding.

    PubMed

    Gisiger, Thomas; Boukadoum, Mounir

    2018-02-01

    We present a new type of artificial neural network that generalizes on anatomical and dynamical aspects of the mammal brain. Its main novelty lies in its topological structure which is built as an array of interacting elementary motifs shaped like loops. These loops come in various types and can implement functions such as gating, inhibitory or executive control, or encoding of task elements to name a few. Each loop features two sets of neurons and a control region, linked together by non-recurrent projections. The two neural sets do the bulk of the loop's computations while the control unit specifies the timing and the conditions under which the computations implemented by the loop are to be performed. By functionally linking many such loops together, a neural network is obtained that may perform complex cognitive computations. To demonstrate the potential offered by such a system, we present two neural network simulations. The first illustrates the structure and dynamics of a single loop implementing a simple gating mechanism. The second simulation shows how connecting four loops in series can produce neural activity patterns that are sufficient to pass a simplified delayed-response task. We also show that this network reproduces electrophysiological measurements gathered in various regions of the brain of monkeys performing similar tasks. We also demonstrate connections between this type of neural network and recurrent or long short-term memory network models, and suggest ways to generalize them for future artificial intelligence research. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Evaluation of Deep Learning Models for Predicting CO2 Flux

    NASA Astrophysics Data System (ADS)

    Halem, M.; Nguyen, P.; Frankel, D.

    2017-12-01

    Artificial neural networks have been employed to calculate surface flux measurements from station data because they are able to fit highly nonlinear relations between input and output variables without knowing the detailed relationships between the variables. However, the accuracy of neural net estimates of CO2 flux from observations of CO2 and other atmospheric variables is influenced by the architecture of the neural model, the availability of data, and the complexity of interactions between physical variables such as wind and temperature and indirect variables such as latent heat and sensible heat. We evaluate two deep learning models, feed-forward and recurrent neural network models, to learn how each responds to the physical measurements and to the time dependency of measurements of CO2 concentration, humidity, pressure, temperature, wind speed, etc. for predicting the CO2 flux. In this paper, we focus on a) building neural network models for estimating CO2 flux based on DOE tower Atmospheric Radiation Measurement data; b) evaluating the impact of the choice of surface variables and model hyper-parameters on the accuracy of surface flux predictions; c) assessing the applicability of the neural network models to estimating CO2 flux using OCO-2 satellite data; and d) studying the efficiency of GPU acceleration of neural network training using IBM Power AI deep learning software and packages on an IBM Minsky system.

  8. Modeling of the pyruvate production with Escherichia coli: comparison of mechanistic and neural networks-based models.

    PubMed

    Zelić, B; Bolf, N; Vasić-Racki, D

    2006-06-01

    Three different models: the unstructured mechanistic black-box model, the input-output neural network-based model and the externally recurrent neural network model were used to describe the pyruvate production process from glucose and acetate using the genetically modified Escherichia coli YYC202 ldhA::Kan strain. The experimental data were used from the recently described batch and fed-batch experiments [Zelić B, Study of the process development for Escherichia coli-based pyruvate production. PhD Thesis, University of Zagreb, Faculty of Chemical Engineering and Technology, Zagreb, Croatia, July 2003. (In English); Zelić et al. Bioproc Biosyst Eng 26:249-258 (2004); Zelić et al. Eng Life Sci 3:299-305 (2003); Zelić et al Biotechnol Bioeng 85:638-646 (2004)]. The neural networks were built out of the experimental data obtained in the fed-batch pyruvate production experiments with the constant glucose feed rate. The model validation was performed using the experimental results obtained from the batch and fed-batch pyruvate production experiments with the constant acetate feed rate. The dynamics of the substrate and product concentration changes were estimated using two neural network-based models for biomass and pyruvate. It was shown that neural networks could be used for the modeling of complex microbial fermentation processes, even in conditions in which mechanistic unstructured models cannot be applied.

  9. Estimating tree bole volume using artificial neural network models for four species in Turkey.

    PubMed

    Ozçelik, Ramazan; Diamantopoulou, Maria J; Brooks, John R; Wiant, Harry V

    2010-01-01

    Tree bole volumes of 89 Scots pine (Pinus sylvestris L.), 96 Brutian pine (Pinus brutia Ten.), 107 Cilicica fir (Abies cilicica Carr.) and 67 Cedar of Lebanon (Cedrus libani A. Rich.) trees were estimated using Artificial Neural Network (ANN) models. Neural networks offer a number of advantages including the ability to implicitly detect complex nonlinear relationships between input and output variables, which is very helpful in tree volume modeling. Two different neural network architectures were used, producing the Back-Propagation (BPANN) and Cascade-Correlation (CCANN) Artificial Neural Network models. In addition, tree bole volume estimates were compared to other established tree bole volume estimation techniques including the centroid method, taper equations, and existing standard volume tables. An overview of the features of ANNs and traditional methods is presented and the advantages and limitations of each are discussed. For validation purposes, actual volumes were determined by aggregating the volumes of measured short sections (average 1 meter) of the tree bole using Smalian's formula. The results reported in this research suggest that the selected cascade-correlation artificial neural network (CCANN) models are reliable for estimating the tree bole volume of the four examined tree species, since they gave unbiased results and were superior to almost all other methods in terms of error (%) expressed as the mean of the percentage errors. 2009 Elsevier Ltd. All rights reserved.
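
    The Smalian aggregation used for the reference volumes is simple to state: each section's volume is its length times the mean of its two end cross-sectional areas. The sketch below assumes end diameters in centimetres and section lengths in metres, which are illustrative units.

```python
from math import pi

def smalian_volume(diameters_cm, section_lengths_m):
    """Bole volume (m^3) from successive section end diameters via Smalian's formula:
    each section's volume is its length times the mean of its two end cross-sections."""
    areas = [pi * (d / 100.0) ** 2 / 4.0 for d in diameters_cm]   # cross-sections in m^2
    return sum(L * (a1 + a2) / 2.0
               for L, a1, a2 in zip(section_lengths_m, areas, areas[1:]))

# A short example bole measured in 1 m sections, tapering from 30 cm to 18 cm.
print(round(smalian_volume([30, 27, 24, 21, 18], [1.0, 1.0, 1.0, 1.0]), 4))
```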

  10. Artificial Neural Networks: A Novel Approach to Analysing the Nutritional Ecology of a Blowfly Species, Chrysomya megacephala

    PubMed Central

    Bianconi, André; Zuben, Cláudio J. Von; Serapião, Adriane B. de S.; Govone, José S.

    2010-01-01

    Bionomic features of blowflies may be clarified and detailed by the deployment of appropriate modelling techniques such as artificial neural networks, which are mathematical tools widely applied to the resolution of complex biological problems. The principal aim of this work was to use three well-known neural networks, namely Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), and Adaptive Neural Network-Based Fuzzy Inference System (ANFIS), to ascertain whether these tools would be able to outperform a classical statistical method (multiple linear regression) in the prediction of the number of resultant adults (survivors) of experimental populations of Chrysomya megacephala (F.) (Diptera: Calliphoridae), based on initial larval density (number of larvae), amount of available food, and duration of immature stages. The coefficient of determination (R2) derived from the RBF was the lowest in the testing subset in relation to the other neural networks, even though its R2 in the training subset exhibited virtually a maximum value. The ANFIS model permitted the achievement of the best testing performance. Hence this model was deemed to be more effective in relation to MLP and RBF for predicting the number of survivors. All three networks outperformed the multiple linear regression, indicating that neural models could be taken as feasible techniques for predicting bionomic variables concerning the nutritional dynamics of blowflies. PMID:20569135

  11. Neural network fusion capabilities for efficient implementation of tracking algorithms

    NASA Astrophysics Data System (ADS)

    Sundareshan, Malur K.; Amoozegar, Farid

    1997-03-01

    The ability to efficiently fuse information of different forms to facilitate intelligent decision making is one of the major capabilities of trained multilayer neural networks that is now being recognized. While development of innovative adaptive control algorithms for nonlinear dynamical plants that attempt to exploit these capabilities seems to be more popular, a corresponding development of nonlinear estimation algorithms using these approaches, particularly for application in target surveillance and guidance operations, has not received similar attention. We describe the capabilities and functionality of neural network algorithms for data fusion and implementation of tracking filters. To discuss details and to serve as a vehicle for quantitative performance evaluations, the illustrative case of estimating the position and velocity of surveillance targets is considered. Efficient target-tracking algorithms that can utilize data from a host of sensing modalities and are capable of reliably tracking even uncooperative targets executing fast and complex maneuvers are of interest in a number of applications. The primary motivation for employing neural networks in these applications comes from the efficiency with which more features extracted from different sensor measurements can be utilized as inputs for estimating target maneuvers. A system architecture that efficiently integrates the fusion capabilities of a trained multilayer neural net with the tracking performance of a Kalman filter is described. The innovation lies in the way the fusion of multisensor data is accomplished to facilitate improved estimation without increasing the computational complexity of the dynamical state estimator itself.

  12. Neural associative memories for the integration of language, vision and action in an autonomous agent.

    PubMed

    Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G

    2009-03-01

    Language understanding is a long-standing problem in computer science. However, the human brain is capable of processing complex languages with seemingly no difficulties. This paper shows a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities on the single word and grammatical level. The language system is embedded into a robot in order to demonstrate the correct semantical understanding of the input sentences by letting the robot perform corresponding actions. For that purpose, a simple neural action planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms.

  13. Probabilistic Models and Generative Neural Networks: Towards an Unified Framework for Modeling Normal and Impaired Neurocognitive Functions

    PubMed Central

    Testolin, Alberto; Zorzi, Marco

    2016-01-01

    Connectionist models can be characterized within the more general framework of probabilistic graphical models, which make it possible to efficiently describe complex statistical distributions involving a large number of interacting variables. This integration allows the building of more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions to investigate neuropsychological disorders within this approach. Though further efforts are required in order to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage. PMID:27468262

  14. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.

    PubMed

    Zenke, Friedemann; Ganguli, Surya

    2018-06-01

    A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
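
    The surrogate-gradient idea at the core of such rules can be illustrated in a few lines: the hard spike threshold is kept for the forward pass, but a smooth surrogate derivative of the membrane potential is used when computing weight changes. The sketch below is only a schematic of that idea, with made-up constants and errors; it is not the full SuperSpike rule.

    ```python
    # Schematic of a surrogate-gradient, three-factor weight update for spiking units.
    # Constants and error signals are illustrative assumptions.
    import numpy as np

    beta = 10.0      # surrogate steepness (assumed value)
    theta = 1.0      # firing threshold

    def spike(u):
        """Hard threshold used in the forward pass of an integrate-and-fire neuron."""
        return (u >= theta).astype(float)

    def surrogate_grad(u):
        """Fast-sigmoid style surrogate derivative used in place of the true gradient."""
        return 1.0 / (1.0 + beta * np.abs(u - theta)) ** 2

    u = np.linspace(0.0, 2.0, 5)                     # example membrane potentials
    err = np.array([0.2, -0.1, 0.05, 0.3, -0.2])     # made-up per-neuron output errors
    dw = err * surrogate_grad(u)                     # weight-change direction (third factor omitted)
    print(spike(u), dw)
    ```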

  15. Probabilistic Models and Generative Neural Networks: Towards an Unified Framework for Modeling Normal and Impaired Neurocognitive Functions.

    PubMed

    Testolin, Alberto; Zorzi, Marco

    2016-01-01

    Connectionist models can be characterized within the more general framework of probabilistic graphical models, which make it possible to efficiently describe complex statistical distributions involving a large number of interacting variables. This integration allows the building of more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions to investigate neuropsychological disorders within this approach. Though further efforts are required in order to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage.

  16. Embracing the comparative approach: how robust phylogenies and broader developmental sampling impacts the understanding of nervous system evolution.

    PubMed

    Hejnol, Andreas; Lowe, Christopher J

    2015-12-19

    Molecular biology has provided a rich dataset to develop hypotheses of nervous system evolution. The startling patterning similarities between distantly related animals during the development of their central nervous system (CNS) have resulted in the hypothesis that a CNS with a single centralized medullary cord and a partitioned brain is homologous across bilaterians. However, the ability to precisely reconstruct ancestral neural architectures from molecular genetic information requires that these gene networks specifically map with particular neural anatomies. A growing body of literature representing the development of a wider range of metazoan neural architectures demonstrates that patterning gene network complexity is maintained in animals with more modest levels of neural complexity. Furthermore, a robust phylogenetic framework that provides the basis for testing the congruence of these homology hypotheses has been lacking since the advent of the field of 'evo-devo'. Recent progress in molecular phylogenetics is refining the necessary framework to test previous homology statements that span large evolutionary distances. In this review, we describe recent advances in animal phylogeny and exemplify for two neural characters-the partitioned brain of arthropods and the ventral centralized nerve cords of annelids-a test for congruence using this framework. The sequential sister taxa at the base of Ecdysozoa and Spiralia comprise small, interstitial groups. This topology is not consistent with the hypothesis of homology of tripartitioned brain of arthropods and vertebrates as well as the ventral arthropod and rope-like ladder nervous system of annelids. There can be exquisite conservation of gene regulatory networks between distantly related groups with contrasting levels of nervous system centralization and complexity. Consequently, the utility of molecular characters to reconstruct ancestral neural organization in deep time is limited. © 2015 The Authors.

  17. Embracing the comparative approach: how robust phylogenies and broader developmental sampling impacts the understanding of nervous system evolution

    PubMed Central

    Hejnol, Andreas; Lowe, Christopher J.

    2015-01-01

    Molecular biology has provided a rich dataset to develop hypotheses of nervous system evolution. The startling patterning similarities between distantly related animals during the development of their central nervous system (CNS) have resulted in the hypothesis that a CNS with a single centralized medullary cord and a partitioned brain is homologous across bilaterians. However, the ability to precisely reconstruct ancestral neural architectures from molecular genetic information requires that these gene networks specifically map with particular neural anatomies. A growing body of literature representing the development of a wider range of metazoan neural architectures demonstrates that patterning gene network complexity is maintained in animals with more modest levels of neural complexity. Furthermore, a robust phylogenetic framework that provides the basis for testing the congruence of these homology hypotheses has been lacking since the advent of the field of ‘evo-devo’. Recent progress in molecular phylogenetics is refining the necessary framework to test previous homology statements that span large evolutionary distances. In this review, we describe recent advances in animal phylogeny and exemplify for two neural characters—the partitioned brain of arthropods and the ventral centralized nerve cords of annelids—a test for congruence using this framework. The sequential sister taxa at the base of Ecdysozoa and Spiralia comprise small, interstitial groups. This topology is not consistent with the hypothesis of homology of tripartitioned brain of arthropods and vertebrates as well as the ventral arthropod and rope-like ladder nervous system of annelids. There can be exquisite conservation of gene regulatory networks between distantly related groups with contrasting levels of nervous system centralization and complexity. Consequently, the utility of molecular characters to reconstruct ancestral neural organization in deep time is limited. PMID:26554039

  18. Dynamic decomposition of spatiotemporal neural signals

    PubMed Central

    2017-01-01

    Neural signals are characterized by rich temporal and spatiotemporal dynamics that reflect the organization of cortical networks. Theoretical research has shown how neural networks can operate at different dynamic ranges that correspond to specific types of information processing. Here we present a data analysis framework that uses a linearized model of these dynamic states in order to decompose the measured neural signal into a series of components that capture both rhythmic and non-rhythmic neural activity. The method is based on stochastic differential equations and Gaussian process regression. Through computer simulations and analysis of magnetoencephalographic data, we demonstrate the efficacy of the method in identifying meaningful modulations of oscillatory signals corrupted by structured temporal and spatiotemporal noise. These results suggest that the method is particularly suitable for the analysis and interpretation of complex temporal and spatiotemporal neural signals. PMID:28558039

  19. A real time neural net estimator of fatigue life

    NASA Technical Reports Server (NTRS)

    Troudet, T.; Merrill, W.

    1990-01-01

    A neural network architecture is proposed to estimate, in real-time, the fatigue life of mechanical components, as part of the intelligent Control System for Reusable Rocket Engines. Arbitrary component loading values were used as input to train a two hidden-layer feedforward neural net to estimate component fatigue damage. The ability of the net to learn, based on a local strain approach, the mapping between load sequence and fatigue damage has been demonstrated for a uniaxial specimen. Because of its demonstrated performance, the neural computation may be extended to complex cases where the loads are biaxial or triaxial, and the geometry of the component is complex (e.g., turbopumps blades). The generality of the approach is such that load/damage mappings can be directly extracted from experimental data without requiring any knowledge of the stress/strain profile of the component. In addition, the parallel network architecture allows real-time life calculations even for high-frequency vibrations. Owing to its distributed nature, the neural implementation will be robust and reliable, enabling its use in hostile environments such as rocket engines.

  20. Electric Power Engineering Cost Predicting Model Based on the PCA-GA-BP

    NASA Astrophysics Data System (ADS)

    Wen, Lei; Yu, Jiake; Zhao, Xin

    2017-10-01

    In this paper a hybrid prediction algorithm, the PCA-GA-BP model, is proposed. PCA is applied to reduce the correlation between the indicators in the original data and to lower the dimensionality that the BP neural network must handle in its calculations. The BP neural network is then built to estimate the cost of power transmission projects. The results show that the PCA-GA-BP algorithm can improve the prediction of electric power engineering cost.
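
    A minimal sketch of the PCA-plus-BP portion of such a pipeline is given below, assuming synthetic indicator data; the genetic-algorithm step that the paper uses to optimize the BP network is omitted here.

    ```python
    # Rough sketch of PCA followed by a BP (multilayer perceptron) regressor.
    # Features and cost values are synthetic placeholders, not engineering-cost data.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 12))                  # correlated cost indicators
    y = X[:, :3].sum(axis=1) + 0.05 * rng.normal(size=200)

    model = make_pipeline(
        PCA(n_components=5),                        # decorrelate and reduce dimensionality
        MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=1),
    )
    model.fit(X[:150], y[:150])
    print("test R^2:", model.score(X[150:], y[150:]))
    ```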

  1. Improved result on stability analysis of discrete stochastic neural networks with time delay

    NASA Astrophysics Data System (ADS)

    Wu, Zhengguang; Su, Hongye; Chu, Jian; Zhou, Wuneng

    2009-04-01

    This Letter investigates the problem of exponential stability for discrete stochastic time-delay neural networks. By defining a novel Lyapunov functional, an improved delay-dependent exponential stability criterion is established in terms of a linear matrix inequality (LMI) approach. Meanwhile, the computational complexity of the newly established stability condition is reduced because fewer variables are involved. A numerical example is given to illustrate the effectiveness and the benefits of the proposed method.

  2. Neural networks for structural design - An integrated system implementation

    NASA Technical Reports Server (NTRS)

    Berke, Laszlo; Hafez, Wassim; Pao, Yoh-Han

    1992-01-01

    The development of powerful automated procedures to aid the creative designer is becoming increasingly critical for complex design tasks. In the work described here Artificial Neural Nets are applied to acquire structural analysis and optimization domain expertise. Based on initial instructions from the user an automated procedure generates random instances of structural analysis and/or optimization 'experiences' that cover a desired domain. It extracts training patterns from the created instances, constructs and trains an appropriate network architecture and checks the accuracy of net predictions. The final product is a trained neural net that can estimate analysis and/or optimization results instantaneously.

  3. Dynamic changes in neural circuit topology following mild mechanical injury in vitro.

    PubMed

    Patel, Tapan P; Ventre, Scott C; Meaney, David F

    2012-01-01

    Despite its enormous incidence, mild traumatic brain injury is not well understood. One aspect that needs more definition is how the mechanical energy during injury affects neural circuit function. Recent developments in cellular imaging probes provide an opportunity to assess the dynamic state of neural networks with single-cell resolution. In this article, we developed imaging methods to assess the state of dissociated cortical networks exposed to mild injury. We estimated the imaging conditions needed to achieve accurate measures of network properties, and applied these methodologies to evaluate if mild mechanical injury to cortical neurons produces graded changes to either spontaneous network activity or altered network topology. We found that modest injury produced a transient increase in calcium activity that dissipated within 1 h after injury. Alternatively, moderate mechanical injury produced immediate disruption in network synchrony, loss in excitatory tone, and increased modular topology. A calcium-activated neutral protease (calpain) was a key intermediary in these changes; blocking calpain activation restored the network nearly completely to its pre-injury state. Together, these findings show a more complex change in neural circuit behavior than previously reported for mild mechanical injury, and highlight at least one important early mechanism responsible for these changes.

  4. Landslide Susceptibility Index Determination Using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Kawabata, D.; Bandibas, J.; Urai, M.

    2004-12-01

    The occurrence of landslides is the result of the interaction of complex and diverse environmental factors. Geomorphic features, rock types and geologic structure are especially important base factors of landslide occurrence. Generating a landslide susceptibility index by defining the relationship between landslide occurrence and these base factors using conventional mathematical and statistical methods is very difficult and inaccurate. This study focuses on generating a landslide susceptibility index using artificial neural networks in the Southern Japanese Alps. The training data are geomorphic parameters (e.g. altitude, slope and aspect), geologic parameters (e.g. rock type, distance from geologic boundary and geologic dip-strike angle) and landslide occurrences. An artificial neural network structure and training scheme are formulated to generate the index. Data from areas with and without landslide occurrences are used to train the network. The network is trained to output 1 when the input data are from areas with landslides and 0 when no landslide occurred. The trained network generates an output ranging from 0 to 1, reflecting the possibility of landslide occurrence based on the input data. Output values nearer to 1 mean a higher possibility of landslide occurrence. The artificial neural network model is incorporated into GIS software to generate a landslide susceptibility map.
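
    The 0/1 training scheme described above can be sketched as follows; the factor vectors and labels are synthetic stand-ins for the geomorphic and geologic parameters, and a generic scikit-learn classifier replaces the authors' network.

    ```python
    # Illustrative sketch: train on factor vectors labelled 1 (landslide) or 0 (none)
    # and read the continuous output as a susceptibility index. Inputs are synthetic.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(2)
    X = rng.normal(size=(500, 6))        # e.g. altitude, slope, aspect, rock-type code, ...
    y = (X[:, 1] + 0.5 * X[:, 3] + 0.3 * rng.normal(size=500) > 0).astype(int)

    net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=2)
    net.fit(X, y)
    susceptibility_index = net.predict_proba(X[:5])[:, 1]   # values near 1 = higher hazard
    print(susceptibility_index)
    ```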

  5. Spintronic characteristics of self-assembled neurotransmitter acetylcholine molecular complexes enable quantum information processing in neural networks and brain

    NASA Astrophysics Data System (ADS)

    Tamulis, Arvydas; Majauskaite, Kristina; Kairys, Visvaldas; Zborowski, Krzysztof; Adhikari, Kapil; Krisciukaitis, Sarunas

    2016-09-01

    Implementation of liquid state quantum information processing based on spatially localized electronic spin in the neurotransmitter stable acetylcholine (ACh) neutral molecular radical is discussed. Using DFT quantum calculations we proved that this molecule possesses stable localized electron spin, which may represent a qubit in quantum information processing. The necessary operating conditions for ACh molecule are formulated in self-assembled dimer and more complex systems. The main quantum mechanical research result of this paper is that the neurotransmitter ACh systems, which were proposed, include the use of quantum molecular spintronics arrays to control the neurotransmission in neural networks.

  6. Neural Correlates Associated with Successful Working Memory Performance in Older Adults as Revealed by Spatial ICA

    PubMed Central

    Saliasi, Emi; Geerligs, Linda; Lorist, Monicque M.; Maurits, Natasha M.

    2014-01-01

    To investigate which neural correlates are associated with successful working memory performance, fMRI was recorded in healthy younger and older adults during performance on an n-back task with varying task demands. To identify functional networks supporting working memory processes, we used independent component analysis (ICA) decomposition of the fMRI data. Compared to younger adults, older adults showed a larger neural (BOLD) response in the more complex (2-back) than in the baseline (0-back) task condition, in the ventral lateral prefrontal cortex (VLPFC) and in the right fronto-parietal network (FPN). Our results indicated that a higher BOLD response in the VLPFC was associated with increased performance accuracy in older adults, in both the baseline and the more complex task condition. This ‘BOLD-performance’ relationship suggests that the neural correlates linked with successful performance in the older adults are not uniquely related to specific working memory processes present in the complex but not in the baseline task condition. Furthermore, the selective presence of this relationship in older but not in younger adults suggests that increased neural activity in the VLPFC serves a compensatory role in the aging brain which benefits task performance in the elderly. PMID:24911016

  7. Adaptive identifier for uncertain complex nonlinear systems based on continuous neural networks.

    PubMed

    Alfaro-Ponce, Mariel; Cruz, Amadeo Argüelles; Chairez, Isaac

    2014-03-01

    This paper presents the design of a complex-valued differential neural network identifier for uncertain nonlinear systems defined in the complex domain. This design includes the construction of an adaptive algorithm to adjust the parameters included in the identifier. The algorithm is obtained based on a special class of controlled Lyapunov functions. The quality of the identification process is characterized using the practical stability framework. Indeed, the region where the identification error converges is derived by the same Lyapunov method. This zone is defined by the power of uncertainties and perturbations affecting the complex-valued uncertain dynamics. Moreover, this convergence zone is reduced to its lowest possible value using ideas related to the so-called ellipsoid methodology. Two simple but informative numerical examples are developed to show how the identifier proposed in this paper can be used to approximate uncertain nonlinear systems valued in the complex domain.

  8. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks

    PubMed Central

    Naveros, Francisco; Garrido, Jesus A.; Carrillo, Richard R.; Ros, Eduardo; Luque, Niceto R.

    2017-01-01

    Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranged from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity together with a better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications which constitute the main contribution of this study systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity. PMID:28223930
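
    The contrast between the two families, and the bi-fixed-step idea, can be loosely illustrated for a single time-driven LIF neuron: a coarse integration step is used far from threshold and a finer one near threshold, where the dynamics are stiffer. The constants and switching rule below are illustrative assumptions, not the authors' implementation.

    ```python
    # Loose illustration of time-driven LIF simulation with a "bi-fixed-step" flavour.
    # All constants are illustrative.
    import numpy as np

    tau, v_rest, v_thresh, v_reset = 20.0, -70.0, -50.0, -65.0   # ms, mV
    dt_coarse, dt_fine = 1.0, 0.1                                # ms

    def step(v, i_ext, dt):
        """One forward-Euler update of the leaky integrate-and-fire membrane equation."""
        return v + dt * (-(v - v_rest) + i_ext) / tau

    v, t, spikes = v_rest, 0.0, []
    while t < 200.0:
        dt = dt_fine if v > v_thresh - 5.0 else dt_coarse        # refine near threshold
        v = step(v, i_ext=25.0, dt=dt)
        t += dt
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    print(f"{len(spikes)} spikes in 200 ms")
    ```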

  9. From embodied mind to embodied robotics: humanities and system theoretical aspects.

    PubMed

    Mainzer, Klaus

    2009-01-01

    After an introduction (1) the article analyzes the evolution of the embodied mind (2), the innovation of embodied robotics (3), and finally discusses conclusions of embodied robotics for human responsibility (4). Considering the evolution of the embodied mind (2), we start with an introduction of complex systems and nonlinear dynamics (2.1), apply this approach to neural self-organization (2.2), distinguish degrees of complexity of the brain (2.3), explain the emergence of cognitive states by complex systems dynamics (2.4), and discuss criteria for modeling the brain as complex nonlinear system (2.5). The innovation of embodied robotics (3) is a challenge of future technology. We start with the distinction of symbolic and embodied AI (3.1) and explain embodied robots as dynamical systems (3.2). Self-organization needs self-control of technical systems (3.3). Cellular neural networks (CNN) are an example of self-organizing technical systems offering new avenues for neurobionics (3.4). In general, technical neural networks support different kinds of learning robots (3.5). Finally, embodied robotics aim at the development of cognitive and conscious robots (3.6).

  10. Do neural nets learn statistical laws behind natural language?

    PubMed

    Takahashi, Shuntaro; Tanaka-Ishii, Kumiko

    2017-01-01

    The performance of deep learning in natural language processing has been spectacular, but the reasons for this success remain unclear because of the inherent complexity of deep learning. This paper provides empirical evidence of its effectiveness and of a limitation of neural networks for language engineering. Precisely, we demonstrate that a neural language model based on long short-term memory (LSTM) effectively reproduces Zipf's law and Heaps' law, two representative statistical properties underlying natural language. We discuss the quality of reproducibility and the emergence of Zipf's law and Heaps' law as training progresses. We also point out that the neural language model has a limitation in reproducing long-range correlation, another statistical property of natural language. This understanding could provide a direction for improving the architectures of neural networks.
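
    The kind of check discussed above can be sketched by estimating the Zipf exponent of a token stream from a log-log rank-frequency fit; in the paper the stream would be text generated by the LSTM language model, whereas the toy corpus below is just a placeholder.

    ```python
    # Estimate the Zipf exponent by regressing log frequency on log rank.
    # The "corpus" here is a tiny placeholder string, not model-generated text.
    import numpy as np
    from collections import Counter

    corpus = ("the cat sat on the mat the dog sat on the rug "
              "the cat saw the dog and the dog saw the cat").split()
    freqs = np.array(sorted(Counter(corpus).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1, dtype=float)

    slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
    print(f"estimated Zipf exponent: {-slope:.2f} (natural text is typically close to 1)")
    ```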

  11. Do neural nets learn statistical laws behind natural language?

    PubMed Central

    Takahashi, Shuntaro

    2017-01-01

    The performance of deep learning in natural language processing has been spectacular, but the reasons for this success remain unclear because of the inherent complexity of deep learning. This paper provides empirical evidence of its effectiveness and of a limitation of neural networks for language engineering. Precisely, we demonstrate that a neural language model based on long short-term memory (LSTM) effectively reproduces Zipf’s law and Heaps’ law, two representative statistical properties underlying natural language. We discuss the quality of reproducibility and the emergence of Zipf’s law and Heaps’ law as training progresses. We also point out that the neural language model has a limitation in reproducing long-range correlation, another statistical property of natural language. This understanding could provide a direction for improving the architectures of neural networks. PMID:29287076

  12. Nature-Inspired Cognitive Evolution to Play MS. Pac-Man

    NASA Astrophysics Data System (ADS)

    Tan, Tse Guan; Teo, Jason; Anthony, Patricia

    Recent developments in nature-inspired computation have heightened the need for research into the three main areas of scientific, engineering and industrial applications. Some approaches have been reported to solve dynamic problems and to be very useful for improving the performance of various complex systems. So far, however, there has been little discussion of the effectiveness of applying these models to computer and video games in particular. The focus of this research is to explore the hybridization of nature-inspired computation methods for the optimization of neural network-based cognition in video games, in this case the combination of a neural network with an evolutionary algorithm. In essence, a neural network is an attempt to mimic the extremely complex human brain, that is, to build an artificial brain that is able to learn intelligently on its own. An evolutionary algorithm, on the other hand, simulates biological evolutionary processes, evolving potential solutions to a problem or task by applying genetic operators such as crossover, mutation and selection to those solutions. This paper investigates the ability of Evolution Strategies (ES) to evolve a feed-forward artificial neural network's internal parameters (i.e. weight and bias values) for automatically generating Ms. Pac-man controllers. The main objective of this game is to clear a maze of dots while avoiding the ghosts and to achieve the highest possible score. The experimental results show that an ES-based system can be successfully applied to automatically generate artificial intelligence for a complex, dynamic and highly stochastic video game environment.
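
    The general scheme, an evolution strategy searching over the weight vector of a feed-forward controller, can be sketched as follows. The fitness function is a stand-in for the game score, and the population sizes and mutation strength are assumed values, not those of the study.

    ```python
    # Minimal (mu, lambda) evolution strategy over a controller's weight vector.
    # The fitness function is a placeholder for the Ms. Pac-Man game score.
    import numpy as np

    rng = np.random.default_rng(3)
    n_weights, mu, lam, sigma = 40, 5, 20, 0.1

    def fitness(w):
        """Placeholder objective standing in for the game score of a controller."""
        return -np.sum((w - 0.5) ** 2)

    parents = rng.normal(size=(mu, n_weights))
    for generation in range(100):
        # each offspring mutates a randomly chosen parent
        offspring = parents[rng.integers(mu, size=lam)] + sigma * rng.normal(size=(lam, n_weights))
        scores = np.array([fitness(w) for w in offspring])
        parents = offspring[np.argsort(scores)[-mu:]]       # keep the mu best
    best = parents[-1]
    print("best fitness:", fitness(best))
    ```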

  13. Intelligent Foreign Particle Inspection Machine for Injection Liquid Examination Based on Modified Pulse-Coupled Neural Networks

    PubMed Central

    Ge, Ji; Wang, YaoNan; Zhou, BoWen; Zhang, Hui

    2009-01-01

    A biologically inspired spiking neural network model, called pulse-coupled neural networks (PCNN), has been applied in an automatic inspection machine to detect visible foreign particles intermingled in glucose or sodium chloride injection liquids. Proper mechanisms and improved spin/stop techniques are proposed to avoid the appearance of air bubbles, which would otherwise increase the algorithm's complexity. A modified PCNN is adopted to segment the difference images, judging the existence of foreign particles according to the continuity and smoothness properties of their moving traces. Preliminary experimental results indicate that the inspection machine can detect visible foreign particles effectively, and that the detection speed, accuracy and correct detection rate satisfy the needs of medicine preparation. PMID:22412318

  14. Computer-assisted cervical cancer screening using neural networks.

    PubMed

    Mango, L J

    1994-03-15

    A practical and effective system for the computer-assisted screening of conventionally prepared cervical smears is presented and described. Recent developments in neural network technology have made computerized analysis of the complex cellular scenes found on Pap smears possible. The PAPNET Cytological Screening System uses neural networks to automatically analyze conventional smears by locating and recognizing potentially abnormal cells. It then displays images of these objects for review and final diagnosis by qualified cytologists. The results of the studies presented indicate that the PAPNET system could be a useful tool for both the screening and rescreening of cervical smears. In addition, the system has been shown to be sensitive to some types of abnormalities which have gone undetected during manual screening.

  15. Toward a More Robust Pruning Procedure for MLP Networks

    NASA Technical Reports Server (NTRS)

    Stepniewski, Slawomir W.; Jorgensen, Charles C.

    1998-01-01

    Choosing a proper neural network architecture is a problem of great practical importance. Smaller models mean not only simpler designs but also lower variance for parameter estimation and network prediction. The widespread utilization of neural networks in modeling highlights an issue in human factors. The procedure of building neural models should find an appropriate level of model complexity in a more or less automatic fashion to make it less prone to human subjectivity. In this paper we present a Singular Value Decomposition based node elimination technique and enhanced implementation of the Optimal Brain Surgeon algorithm. Combining both methods creates a powerful pruning engine that can be used for tuning feedforward connectionist models. The performance of the proposed method is demonstrated by adjusting the structure of a multi-input multi-output model used to calibrate a six-component wind tunnel strain gage.

  16. Passivity of Directed and Undirected Complex Dynamical Networks With Adaptive Coupling Weights.

    PubMed

    Wang, Jin-Liang; Wu, Huai-Ning; Huang, Tingwen; Ren, Shun-Yan; Wu, Jigang

    2017-08-01

    A complex dynamical network consisting of N identical neural networks with reaction-diffusion terms is considered in this paper. First, several passivity definitions for the systems with different dimensions of input and output are given. By utilizing some inequality techniques, several criteria are presented, ensuring the passivity of the complex dynamical network under the designed adaptive law. Then, we discuss the relationship between the synchronization and output strict passivity of the proposed network model. Furthermore, these results are extended to the case when the topological structure of the network is undirected. Finally, two examples with numerical simulations are provided to illustrate the correctness and effectiveness of the proposed results.

  17. Implementations of back propagation algorithm in ecosystems applications

    NASA Astrophysics Data System (ADS)

    Ali, Khalda F.; Sulaiman, Riza; Elamir, Amir Mohamed

    2015-05-01

    Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is in solving problems that are too complex for conventional technologies, that is, problems that do not have algorithmic solutions or whose algorithmic solutions are too complex to be found. In general, because of their abstraction from the biological brain, ANNs are developed from concepts that evolved from late twentieth-century neuro-physiological experiments on the cells of the human brain, in order to overcome the perceived inadequacies of conventional ecological data analysis methods. ANNs have gained increasing attention in ecosystems applications because of their capacity to detect patterns in data through non-linear relationships, a characteristic that confers on them a superior predictive ability. In this research, ANNs are applied to an ecological system analysis. The neural networks use the well-known Back Propagation (BP) algorithm with the delta rule for adaptation of the system. The BP training algorithm is an effective analytical method for adapting ecosystems applications, mainly because of this same capacity to detect non-linear patterns in data. The BP algorithm uses supervised learning, which means that we provide the algorithm with examples of the inputs and outputs we want the network to compute, and then the error is calculated. The idea of the back propagation algorithm is to reduce this error until the ANN learns the training data. Training begins with random weights, and the goal is to adjust them so that the error is minimal. This research evaluated the use of artificial neural network (ANN) techniques in ecological system analysis and modeling. The experimental results demonstrate that an artificial neural network can be trained to act as an expert ecosystem analyzer for many applications in ecological fields. The pilot ecosystem analyzer shows promising ability to generalize and requires further tuning and refinement of the base neural network system for optimal performance.
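
    The training loop described above can be condensed into a from-scratch sketch: a one-hidden-layer network whose weights, initialized randomly, are adjusted by back propagation of the output error. The inputs and targets are toy stand-ins, not ecological data.

    ```python
    # From-scratch sketch of back propagation with the delta rule on a toy mapping.
    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.uniform(-1, 1, size=(200, 3))           # stand-in environmental inputs
    y = np.sin(X.sum(axis=1, keepdims=True))        # stand-in ecological response

    W1, W2 = rng.normal(scale=0.5, size=(3, 8)), rng.normal(scale=0.5, size=(8, 1))
    lr = 0.05
    for epoch in range(2000):
        h = np.tanh(X @ W1)                         # forward pass through the hidden layer
        out = h @ W2
        err = out - y                               # output error to be reduced
        # backward pass: propagate the error and apply the delta rule to both layers
        W2 -= lr * h.T @ err / len(X)
        W1 -= lr * X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
    print("final MSE:", float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)))
    ```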

  18. Emerging Frontiers of Neuroengineering: A Network Science of Brain Connectivity

    PubMed Central

    Bassett, Danielle S.; Khambhati, Ankit N.; Grafton, Scott T.

    2018-01-01

    Neuroengineering is faced with unique challenges in repairing or replacing complex neural systems that are composed of many interacting parts. These interactions form intricate patterns over large spatiotemporal scales and produce emergent behaviors that are difficult to predict from individual elements. Network science provides a particularly appropriate framework in which to study and intervene in such systems by treating neural elements (cells, volumes) as nodes in a graph and neural interactions (synapses, white matter tracts) as edges in that graph. Here, we review the emerging discipline of network neuroscience, which uses and develops tools from graph theory to better understand and manipulate neural systems from micro- to macroscales. We present examples of how human brain imaging data are being modeled with network analysis and underscore potential pitfalls. We then highlight current computational and theoretical frontiers and emphasize their utility in informing diagnosis and monitoring, brain–machine interfaces, and brain stimulation. A flexible and rapidly evolving enterprise, network neuroscience provides a set of powerful approaches and fundamental insights that are critical for the neuroengineer’s tool kit. PMID:28375650

  19. Comparison of RF spectrum prediction methods for dynamic spectrum access

    NASA Astrophysics Data System (ADS)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural Networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.

  20. Use of statistical and neural net approaches in predicting toxicity of chemicals.

    PubMed

    Basak, S C; Grunwald, G D; Gute, B D; Balasubramanian, K; Opitz, D

    2000-01-01

    Hierarchical quantitative structure-activity relationships (H-QSAR) have been developed as a new approach in constructing models for estimating physicochemical, biomedicinal, and toxicological properties of interest. This approach uses increasingly more complex molecular descriptors in a graduated approach to model building. In this study, statistical and neural network methods have been applied to the development of H-QSAR models for estimating the acute aquatic toxicity (LC50) of 69 benzene derivatives to Pimephales promelas (fathead minnow). Topostructural, topochemical, geometrical, and quantum chemical indices were used as the four levels of the hierarchical method. It is clear from both the statistical and neural network models that topostructural indices alone cannot adequately model this set of congeneric chemicals. Not surprisingly, topochemical indices greatly increase the predictive power of both statistical and neural network models. Quantum chemical indices also add significantly to the modeling of this set of acute aquatic toxicity data.

  1. A Rotational Motion Perception Neural Network Based on Asymmetric Spatiotemporal Visual Information Processing.

    PubMed

    Hu, Bin; Yue, Shigang; Zhang, Zhuhong

    All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences for each of the three motion elements. There are computational models of translation and expansion/contraction perception; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we propose a neural network that utilizes a specific spatiotemporal arrangement of asymmetric lateral-inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts: a presynaptic part and a postsynaptic part. In the presynaptic part, a number of lateral-inhibited DSNNs extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts depending on which rotation, clockwise (cw) or counter-cw (ccw), is to be perceived. Systematic experiments under various conditions and settings have been carried out and have validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.

  2. Circuit variability interacts with excitatory-inhibitory diversity of interneurons to regulate network encoding capacity.

    PubMed

    Tsai, Kuo-Ting; Hu, Chin-Kun; Li, Kuan-Wei; Hwang, Wen-Liang; Chou, Ya-Hui

    2018-05-23

    Local interneurons (LNs) in the Drosophila olfactory system exhibit neuronal diversity and variability, yet it is still unknown how these features impact information encoding capacity and reliability in a complex LN network. We employed two strategies to construct a diverse excitatory-inhibitory neural network beginning with a ring network structure and then introduced distinct types of inhibitory interneurons and circuit variability to the simulated network. The continuity of activity within the node ensemble (oscillation pattern) was used as a readout to describe the temporal dynamics of network activity. We found that inhibitory interneurons enhance the encoding capacity by protecting the network from extremely short activation periods when the network wiring complexity is very high. In addition, distinct types of interneurons have differential effects on encoding capacity and reliability. Circuit variability may enhance the encoding reliability, with or without compromising encoding capacity. Therefore, we have described how circuit variability of interneurons may interact with excitatory-inhibitory diversity to enhance the encoding capacity and distinguishability of neural networks. In this work, we evaluate the effects of different types and degrees of connection diversity on a ring model, which may simulate interneuron networks in the Drosophila olfactory system or other biological systems.

  3. Real-Time Adaptive Color Segmentation by Neural Networks

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2004-01-01

    Artificial neural networks that would utilize the cascade error projection (CEP) algorithm have been proposed as means of autonomous, real-time, adaptive color segmentation of images that change with time. In the original intended application, such a neural network would be used to analyze digitized color video images of terrain on a remote planet as viewed from an uninhabited spacecraft approaching the planet. During descent toward the surface of the planet, information on the segmentation of the images into differently colored areas would be updated adaptively in real time to capture changes in contrast, brightness, and resolution, all in an effort to identify a safe and scientifically productive landing site and provide control feedback to steer the spacecraft toward that site. Potential terrestrial applications include monitoring images of crops to detect insect invasions and monitoring of buildings and other facilities to detect intruders. The CEP algorithm is reliable and is well suited to implementation in very-large-scale integrated (VLSI) circuitry. It was chosen over other neural-network learning algorithms because it is better suited to real-time learning: It provides a self-evolving neural-network structure, requires fewer iterations to converge, and is more tolerant of low resolution (that is, fewer bits) in the quantization of neural-network synaptic weights. Consequently, a CEP neural network learns relatively quickly, and the circuitry needed to implement it is relatively simple. Like other neural networks, a CEP neural network includes an input layer, hidden units, and output units. As in other neural networks, a CEP network is presented with a succession of input training patterns, giving rise to a set of outputs that are compared with the desired outputs. Also as in other neural networks, the synaptic weights are updated iteratively in an effort to bring the outputs closer to target values. A distinctive feature of the CEP neural network and algorithm is that each update of synaptic weights takes place in conjunction with the addition of another hidden unit, which then remains in place as still other hidden units are added on subsequent iterations. For a given training pattern, the synaptic weight between (1) the inputs and the previously added hidden units and (2) the newly added hidden unit is updated by an amount proportional to the partial derivative of a quadratic error function with respect to the synaptic weight. The synaptic weight between the newly added hidden unit and each output unit is given by a more complex function that involves the errors between the outputs and their target values, the transfer functions (hyperbolic tangents) of the neural units, and the derivatives of the transfer functions.

  4. The C. elegans Connectome Consists of Homogenous Circuits with Defined Functional Roles

    PubMed Central

    Azulay, Aharon; Zaslaver, Alon

    2016-01-01

    A major goal of systems neuroscience is to decipher the structure-function relationship in neural networks. Here we study network functionality in light of the common-neighbor-rule (CNR) in which a pair of neurons is more likely to be connected the more common neighbors it shares. Focusing on the fully-mapped neural network of C. elegans worms, we establish that the CNR is an emerging property in this connectome. Moreover, sets of common neighbors form homogenous structures that appear in defined layers of the network. Simulations of signal propagation reveal their potential functional roles: signal amplification and short-term memory at the sensory/inter-neuron layer, and synchronized activity at the motoneuron layer supporting coordinated movement. A coarse-grained view of the neural network based on homogenous connected sets alone reveals a simple modular network architecture that is intuitive to understand. These findings provide a novel framework for analyzing larger, more complex, connectomes once these become available. PMID:27606684
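
    The common-neighbor-rule can be tested on any graph by grouping node pairs by their number of shared neighbors and comparing connection rates across the groups, as in the sketch below; the random toy graph stands in for the C. elegans connectome.

    ```python
    # CNR check on a toy graph: does connection probability rise with the number of
    # common neighbors? Uses networkx; the graph is random, not a connectome.
    import itertools
    import networkx as nx

    G = nx.erdos_renyi_graph(60, 0.15, seed=5)
    buckets = {}           # common-neighbor count -> list of "is this pair connected?"
    for u, v in itertools.combinations(G.nodes, 2):
        k = len(list(nx.common_neighbors(G, u, v)))
        buckets.setdefault(k, []).append(G.has_edge(u, v))

    for k in sorted(buckets):
        rate = sum(buckets[k]) / len(buckets[k])
        print(f"{k:2d} common neighbors: connection probability {rate:.2f}")
    ```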

  5. Learning Universal Computations with Spikes

    PubMed Central

    Thalmeier, Dominik; Uhlmann, Marvin; Kappen, Hilbert J.; Memmesheimer, Raoul-Martin

    2016-01-01

    Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require previous building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. Firstly, we derive constraints under which classes of spiking neural networks lend themselves to substrates of powerful general purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows to learn even difficult benchmark tasks such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them. PMID:27309381

  6. The brainstem reticular formation is a small-world, not scale-free, network

    PubMed Central

    Humphries, M.D; Gurney, K; Prescott, T.J

    2005-01-01

    Recently, it has been demonstrated that several complex systems may have simple graph-theoretic characterizations as so-called ‘small-world’ and ‘scale-free’ networks. These networks have also been applied to the gross neural connectivity between primate cortical areas and the nervous system of Caenorhabditis elegans. Here, we extend this work to a specific neural circuit of the vertebrate brain—the medial reticular formation (RF) of the brainstem—and, in doing so, we have made three key contributions. First, this work constitutes the first model (and quantitative review) of this important brain structure for over three decades. Second, we have developed the first graph-theoretic analysis of vertebrate brain connectivity at the neural network level. Third, we propose simple metrics to quantitatively assess the extent to which the networks studied are small-world or scale-free. We conclude that the medial RF is configured to create small-world (implying coherent rapid-processing capabilities), but not scale-free, type networks under assumptions which are amenable to quantitative measurement. PMID:16615219
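
    The small-world test used in such studies compares clustering and characteristic path length against a size-matched random graph, roughly sigma = (C/C_rand)/(L/L_rand), with sigma > 1 indicating small-world structure. The sketch below applies this to a toy Watts-Strogatz graph rather than reconstructed reticular formation connectivity.

    ```python
    # Small-world index on a toy network versus a size-matched random reference.
    import networkx as nx

    G = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=7)
    R = nx.gnm_random_graph(100, G.number_of_edges(), seed=7)
    R = R.subgraph(max(nx.connected_components(R), key=len))   # guard against disconnection

    C, C_rand = nx.average_clustering(G), nx.average_clustering(R)
    L, L_rand = (nx.average_shortest_path_length(G),
                 nx.average_shortest_path_length(R))
    print(f"small-world index (C/C_rand)/(L/L_rand) = {(C / C_rand) / (L / L_rand):.2f}")
    ```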

  7. A novel multi-model neuro-fuzzy-based MPPT for three-phase grid-connected photovoltaic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaouachi, Aymen; Kamel, Rashad M.; Nagasaka, Ken

    This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three multi-layered feed-forward Artificial Neural Networks (ANNs). The inputs of the network (irradiance and temperature) are classified before they are fed into the appropriate ANN for either the training or the estimation process, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single neural network-based approach, is its distinct generalization ability with regard to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural network-based multi-model machine learning scheme that defines a set of local models emulating the complex and nonlinear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations proved that the proposed MPPT method achieved the highest efficiency compared to a conventional single neural network and the Perturb and Observe (P and O) algorithm.

  8. Artificial Neural Networks for differential diagnosis of breast lesions in MR-Mammography: a systematic approach addressing the influence of network architecture on diagnostic performance using a large clinical database.

    PubMed

    Dietzel, Matthias; Baltzer, Pascal A T; Dietzel, Andreas; Zoubi, Ramy; Gröschel, Tobias; Burmeister, Hartmut P; Bogdan, Martin; Kaiser, Werner A

    2012-07-01

    Differential diagnosis of lesions in MR-Mammography (MRM) remains a complex task. The aim of this MRM study was to design and test the robustness of Artificial Neural Network architectures to predict malignancy using a large clinical database. For this IRB-approved investigation standardized protocols and study design were applied (T1w-FLASH; 0.1 mmol/kgBW Gd-DTPA; T2w-TSE; histological verification after MRM). All lesions were evaluated by two experienced (>500 MRM) radiologists in consensus. In every lesion, 18 previously published descriptors were assessed and documented in the database. An Artificial Neural Network (ANN) was developed to process this database (The MathWorks, Inc.; feed-forward architecture, resilient back-propagation algorithm). All 18 descriptors were set as input variables, whereas the histological result (malignant vs. benign) was defined as the classification variable. Initially, the ANN was optimized in terms of "Training Epochs" (TE), "Hidden Layers" (HL), "Learning Rate" (LR) and "Neurons" (N). Robustness of the ANN was addressed by repeated evaluation cycles (n: 9) with receiver operating characteristic (ROC) analysis of the results using 4-fold cross-validation. The best network architecture was identified by comparing the corresponding area under the ROC curve (AUC). Histopathology revealed 436 benign and 648 malignant lesions. Increasing the level of complexity did not increase the diagnostic accuracy of the network (P: n.s.). The optimized ANN architecture (TE: 20, HL: 1, N: 5, LR: 1.2) was accurate (mean AUC 0.888; P < 0.001) and robust (CI: 0.885-0.892; range: 0.880-0.898). The optimized neural network showed robust performance and high diagnostic accuracy for prediction of malignancy on unknown data. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
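
    The evaluation protocol described above can be sketched compactly: a feed-forward classifier with 18 inputs and one hidden layer of 5 neurons, scored by ROC AUC under 4-fold cross-validation. The sketch below uses synthetic data and scikit-learn's default optimizer in place of resilient back-propagation, so it mirrors the setup only loosely.

```python
# Hedged sketch of the described protocol: 18 input features, one hidden layer
# of 5 neurons, ROC AUC under 4-fold cross-validation. Data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 1084-lesion database (18 descriptors per lesion).
X, y = make_classification(n_samples=1084, n_features=18, n_informative=10,
                           weights=[0.4, 0.6], random_state=0)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000,
                                  random_state=0))

cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print("mean AUC: %.3f (range %.3f-%.3f)" % (aucs.mean(), aucs.min(), aucs.max()))
```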

  9. A link prediction method for heterogeneous networks based on BP neural network

    NASA Astrophysics Data System (ADS)

    Li, Ji-chao; Zhao, Dan-ling; Ge, Bing-Feng; Yang, Ke-Wei; Chen, Ying-Wu

    2018-04-01

    Most real-world systems, composed of different types of objects connected via many interconnections, can be abstracted as various complex heterogeneous networks. Link prediction for heterogeneous networks is of great significance for mining missing links and reconfiguring networks according to observed information, with considerable applications in, for example, friend and location recommendation and disease-gene candidate detection. In this paper, we put forward a novel integrated framework, called MPBP (Meta-Path feature-based BP neural network model), to predict multiple types of links in heterogeneous networks. More specifically, the concept of the meta-path is introduced, followed by the extraction of meta-path features for heterogeneous networks. Next, based on the extracted meta-path features, a supervised link prediction model is built with a three-layer BP neural network. Then, a solution algorithm for the proposed link prediction model is put forward to obtain predicted results by iteratively training the network. Last, numerical experiments on two example datasets, a gene-disease network and a combat network, are conducted to verify the effectiveness and feasibility of the proposed MPBP. The results show that MPBP performs very well and is superior to the baseline methods.
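
    A minimal sketch of the MPBP idea, under strong simplifying assumptions: meta-path instance counts between node pairs of a toy typed graph serve as features for a small feed-forward ("BP") classifier that scores candidate links. The graph, node types and the two meta-paths counted below are illustrative, not those of the paper.

```python
# Hedged sketch: count simple meta-path instances between node pairs in a typed
# (heterogeneous) graph and feed those counts to a small feed-forward classifier.
import itertools
import networkx as nx
import numpy as np
from sklearn.neural_network import MLPClassifier

G = nx.Graph()
genes = ["g1", "g2", "g3"]
diseases = ["d1", "d2"]
G.add_nodes_from(genes, kind="gene")
G.add_nodes_from(diseases, kind="disease")
G.add_edges_from([("g1", "g2"), ("g2", "g3"), ("g1", "d1"), ("g3", "d2")])

def metapath_features(g, u, v):
    """Counts of two toy meta-paths between gene u and disease v:
    gene-gene-disease and gene-gene-gene-disease."""
    ggd = sum(1 for w in g.neighbors(u)
              if g.nodes[w]["kind"] == "gene" and g.has_edge(w, v))
    gggd = sum(1 for w, x in itertools.product(list(g.neighbors(u)), repeat=2)
               if w != x and g.nodes[w]["kind"] == "gene"
               and g.nodes[x]["kind"] == "gene"
               and g.has_edge(w, x) and g.has_edge(x, v))
    return [ggd, gggd]

pairs = [(u, v) for u in genes for v in diseases]
X = np.array([metapath_features(G, u, v) for u, v in pairs])
y = np.array([int(G.has_edge(u, v)) for u, v in pairs])   # observed links as labels

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(dict(zip(pairs, clf.predict_proba(X)[:, 1].round(2))))
```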

  10. Science of the science, drug discovery and artificial neural networks.

    PubMed

    Patel, Jigneshkumar

    2013-03-01

    The drug discovery process often encounters complex problems that may be difficult to solve by human intelligence. Artificial Neural Networks (ANNs) are one of the Artificial Intelligence (AI) technologies used for solving such complex problems. ANNs are widely used for primary virtual screening of compounds, quantitative structure-activity relationship studies, receptor modeling, formulation development, pharmacokinetics and in all other processes involving complex mathematical modeling. Despite having such advanced technologies and sufficient understanding of biological systems, drug discovery is still a lengthy, expensive, difficult and inefficient process with a low rate of new successful therapeutic discoveries. In this paper, the author discusses drug discovery science and ANNs from a very basic angle, which may help the reader understand the application of ANNs to drug discovery and to improving its efficiency.

  11. A fast button surface defects detection method based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Liu, Lizhe; Cao, Danhua; Wu, Songlin; Wu, Yubin; Wei, Taoran

    2018-01-01

    Considering the complexity of button surface textures and the variety of buttons and defects, we propose a fast visual method for button surface defect detection based on a convolutional neural network (CNN). A CNN has the ability to extract the essential features by training, avoiding the design of complex feature operators adapted to different kinds of buttons, textures and defects. Firstly, we obtain the normalized button region and then use a HOG-SVM method to identify the front and back side of the button. Finally, a convolutional neural network is developed to recognize the defects. Aiming at detecting subtle defects, we propose a network structure with multiple feature channels as input. To deal with defects of different scales, we adopt a strategy of multi-scale image block detection. The experimental results show that our method is valid for a variety of buttons and able to recognize all of the defect types that occur, including dents, cracks, stains, holes, wrong paint and uneven surfaces. The detection rate exceeds 96%, which is much better than traditional methods based on SVM and on template matching. Our method reaches a speed of 5 fps on a DSP-based smart camera running at 600 MHz.

  12. Complexity Measures in Magnetoencephalography: Measuring "Disorder" in Schizophrenia

    PubMed Central

    Brookes, Matthew J.; Hall, Emma L.; Robson, Siân E.; Price, Darren; Palaniyappan, Lena; Liddle, Elizabeth B.; Liddle, Peter F.; Robinson, Stephen E.; Morris, Peter G.

    2015-01-01

    This paper details a methodology which, when applied to magnetoencephalography (MEG) data, is capable of measuring the spatio-temporal dynamics of ‘disorder’ in the human brain. Our method, which is based upon signal entropy, shows that spatially separate brain regions (or networks) generate temporally independent entropy time-courses. These time-courses are modulated by cognitive tasks, with an increase in local neural processing characterised by localised and transient increases in entropy in the neural signal. We explore the relationship between entropy and the more established time-frequency decomposition methods, which elucidate the temporal evolution of neural oscillations. We observe a direct but complex relationship between entropy and oscillatory amplitude, which suggests that these metrics are complementary. Finally, we provide a demonstration of the clinical utility of our method, using it to shed light on aberrant neurophysiological processing in schizophrenia. We demonstrate significantly increased task induced entropy change in patients (compared to controls) in multiple brain regions, including a cingulo-insula network, bilateral insula cortices and a right fronto-parietal network. These findings demonstrate potential clinical utility for our method and support a recent hypothesis that schizophrenia can be characterised by abnormalities in the salience network (a well characterised distributed network comprising bilateral insula and cingulate cortices). PMID:25886553
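
    A toy version of such an entropy time-course can be computed by sliding a window along a signal and evaluating sample entropy in each window, as sketched below; the signal, window length, embedding dimension and tolerance are illustrative choices, not the parameters used in the study.

```python
# Hedged sketch of an entropy time-course: sample entropy computed in sliding
# windows over a single synthetic MEG-like signal.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Basic sample entropy of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        # Chebyshev distance between all template pairs (i < j).
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
        iu = np.triu_indices(len(templates), k=1)
        return np.sum(d[iu] <= r)
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 40 * np.pi, 4000)) + 0.5 * rng.standard_normal(4000)

window, step = 500, 250
entropy_timecourse = [sample_entropy(signal[s:s + window])
                      for s in range(0, len(signal) - window, step)]
print(np.round(entropy_timecourse, 3))
```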

  13. On the Role of Situational Stressors in the Disruption of Global Neural Network Stability during Problem Solving.

    PubMed

    Liu, Mengting; Amey, Rachel C; Forbes, Chad E

    2017-12-01

    When individuals are placed in stressful situations, they are likely to exhibit deficits in cognitive capacity over and above situational demands. Despite this, individuals may still persevere and ultimately succeed in these situations. Little is known, however, about the neural network properties that instantiate success or failure in both neutral and stressful situations, particularly with respect to regions integral to problem-solving processes that are necessary for optimal performance on more complex tasks. In this study, we outline how hidden Markov modeling based on multivoxel pattern analysis can be used to quantify unique brain states underlying complex network interactions that yield either successful or unsuccessful problem solving in neutral or stressful situations. We provide evidence that brain network stability and the states underlying synchronous interactions in regions integral to problem-solving processes are key predictors of whether individuals succeed or fail in stressful situations. Findings also suggested that individuals utilize distinct neural patterns when successfully solving problems in stressful versus neutral situations. Overall, the findings highlight how hidden Markov modeling offers myriad possibilities for quantifying and better understanding the role of global network interactions in the problem-solving process, and how these interactions predict success or failure in different contexts.

  14. Neural network-based adaptive dynamic surface control for permanent magnet synchronous motors.

    PubMed

    Yu, Jinpeng; Shi, Peng; Dong, Wenjie; Chen, Bing; Lin, Chong

    2015-03-01

    This brief considers the problem of neural network (NN)-based adaptive dynamic surface control (DSC) for permanent magnet synchronous motors (PMSMs) with parameter uncertainties and load torque disturbance. First, NNs are used to approximate the unknown nonlinear functions of the PMSM drive system, and a novel adaptive DSC scheme is constructed to avoid the explosion of complexity in the backstepping design. Next, under the proposed adaptive neural DSC, the number of adaptive parameters required is reduced to only one, and the structure of the designed neural controller is much simpler than in some existing results in the literature, while guaranteeing that the tracking error converges to a small neighborhood of the origin. Then, simulations are given to illustrate the effectiveness and potential of the new design technique.

  15. Neural network fusion capabilities for efficient implementation of tracking algorithms

    NASA Astrophysics Data System (ADS)

    Sundareshan, Malur K.; Amoozegar, Farid

    1996-05-01

    The ability to efficiently fuse information of different forms to facilitate intelligent decision-making is one of the major capabilities of trained multilayer neural networks that has been recognized in recent times. While the development of innovative adaptive control algorithms for nonlinear dynamical plants that attempt to exploit these capabilities seems to be more popular, a corresponding development of nonlinear estimation algorithms using these approaches, particularly for application in target surveillance and guidance operations, has not received similar attention. In this paper we describe the capabilities and functionality of neural network algorithms for data fusion and implementation of nonlinear tracking filters. For a discussion of details and to serve as a vehicle for quantitative performance evaluations, the illustrative case of estimating the position and velocity of surveillance targets is considered. Efficient target tracking algorithms that can utilize data from a host of sensing modalities and are capable of reliably tracking even uncooperative targets executing fast and complex maneuvers are of interest in a number of applications. The primary motivation for employing neural networks in these applications comes from the efficiency with which features extracted from different sensor measurements can be utilized as inputs for estimating target maneuvers. Such an approach results in an overall nonlinear tracking filter which has several advantages over popular efforts at designing nonlinear estimation algorithms for tracking applications, the principal one being the reduction of mathematical and computational complexity. A system architecture that efficiently integrates the processing capabilities of a trained multilayer neural net with the tracking performance of a Kalman filter is described in this paper.

  16. New application of intelligent agents in sporadic amyotrophic lateral sclerosis identifies unexpected specific genetic background.

    PubMed

    Penco, Silvana; Buscema, Massimo; Patrosso, Maria Cristina; Marocchi, Alessandro; Grossi, Enzo

    2008-05-30

    Few genetic factors predisposing to the sporadic form of amyotrophic lateral sclerosis (ALS) have been identified, but the pathology itself seems to be a true multifactorial disease in which complex interactions between environmental and genetic susceptibility factors take place. The purpose of this study was to approach genetic data with an innovative statistical method, such as artificial neural networks, to identify a possible genetic background predisposing to the disease. A DNA multiarray panel was applied to genotype more than 60 polymorphisms within 35 genes selected from pathways of lipid and homocysteine metabolism, regulation of blood pressure, coagulation, inflammation, cellular adhesion and matrix integrity, in 54 sporadic ALS patients and 208 controls. Advanced intelligent systems based on a novel coupling of artificial neural networks and evolutionary algorithms were applied, and the results obtained were compared with those derived from the use of standard neural networks and classical statistical analysis. An unexpected discovery of a strong genetic background in sporadic ALS using a DNA multiarray panel and analytical processing of the data with advanced artificial neural networks was made. The predictive accuracy obtained with Linear Discriminant Analysis and standard Artificial Neural Networks ranged from 70% to 79% (average 75.31%) and from 69.1% to 86.2% (average 76.6%), respectively. The corresponding value obtained with the Advanced Intelligent Systems reached an average of 96.0% (range 94.4% to 97.6%). This latter approach allowed the identification of seven genetic variants essential to differentiate cases from controls: apolipoprotein E arg158cys; hepatic lipase -480 C/T; endothelial nitric oxide synthase 690 C/T and glu298asp; vitamin K-dependent coagulation factor seven arg353glu; glycoprotein Ia/IIa 873 G/A; and E-selectin ser128arg. This study provides an alternative and reliable method to approach complex diseases. Indeed, the application of a novel artificial intelligence-based method offers new insight into genetic markers of sporadic ALS, pointing out the existence of a strong genetic background.

  17. Application of Neural Networks for classification of Patau, Edwards, Down, Turner and Klinefelter Syndrome based on first trimester maternal serum screening data, ultrasonographic findings and patient demographics.

    PubMed

    Catic, Aida; Gurbeta, Lejla; Kurtovic-Kozaric, Amina; Mehmedbasic, Senad; Badnjevic, Almir

    2018-02-13

    The usage of Artificial Neural Networks (ANNs) for genome-enabled classification and establishing genome-phenotype correlations has been investigated more extensively over the past few years. The reason for this is that ANNs are good approximators of complex functions, so classification can be performed without the need for an explicitly defined input-output model. This engineering tool can be applied for optimization of existing methods for disease/syndrome classification. Cytogenetic and molecular analyses are the most frequent tests used in prenatal diagnostics for the early detection of Turner, Klinefelter, Patau, Edwards and Down syndrome. These procedures can be lengthy and repetitive, and often employ invasive techniques, so a robust automated method for classifying and reporting prenatal diagnostics would greatly help clinicians with their routine work. The database consisted of data collected from 2500 pregnant women who came to the Institute of Gynecology, Infertility and Perinatology "Mehmedbasic" for routine antenatal care between January 2000 and December 2016. During the first trimester all women were subjected to a screening test in which values of maternal serum pregnancy-associated plasma protein A (PAPP-A) and free beta human chorionic gonadotropin (β-hCG) were measured. Also, fetal nuchal translucency thickness and the presence or absence of the nasal bone were assessed using ultrasound. The architectures of linear feedforward and feedback neural networks were investigated for various training data distributions and numbers of neurons in the hidden layer. The feedback neural network architecture outperformed the feedforward architecture in predictive ability for all five prenatal aneuploidy syndrome classes. The feedforward neural network with 15 neurons in the hidden layer achieved a classification sensitivity of 92.00%, while the classification sensitivity of the feedback (Elman's) neural network was 99.00%. The average accuracy of the feedforward neural network was 89.6% and of the feedback network 98.8%. The results presented in this paper show that an expert diagnostic system based on neural networks can be efficiently used for classification of the five aneuploidy syndromes covered in this study, based on first trimester maternal serum screening data, ultrasonographic findings and patient demographics. The developed expert system proved to be simple, robust, and powerful in properly classifying prenatal aneuploidy syndromes.

  18. The Use of Neural Network Technology to Model Swimming Performance

    PubMed Central

    Silva, António José; Costa, Aldo Manuel; Oliveira, Paulo Moura; Reis, Victor Machado; Saavedra, José; Perl, Jurgen; Rouboa, Abel; Marinho, Daniel Almeida

    2007-01-01

    The aims of the present study were: to identify the factors which are able to explain performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers, to model the performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons), and to assess the precision of the neural network models in predicting performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between the preponderant variables for each gender and swim performance in the 200 meters individual medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network (multilayer perceptron) with three neurons in a single hidden layer was used. The prognostic precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach for the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports. Key points: The non-linear analysis resulting from the use of a feed-forward neural network allowed the development of four performance models. The mean difference between the true and estimated results produced by each of the four neural network models was low. The neural network tool can be a good approach to performance modeling as an alternative to standard statistical models that presume well-defined distributions and independence among all inputs. The use of neural networks in sports science allowed us to create very realistic models for swimming performance prediction based on previously selected criteria related to the dependent variable (performance). PMID:24149233

  19. Thermal non-equilibrium in porous medium adjacent to vertical plate: ANN approach

    NASA Astrophysics Data System (ADS)

    Ahmed, N. J. Salman; Ahamed, K. S. Nazim; Al-Rashed, Abdullah A. A. A.; Kamangar, Sarfaraz; Athani, Abdulgaphur

    2018-05-01

    Thermal non-equilibrium in a porous medium is a condition that refers to the temperature discrepancy between the solid matrix and the fluid of the porous medium. This type of flow is complex, requiring a complex set of partial differential equations to govern the flow behavior. The current work is undertaken to predict the thermal non-equilibrium behavior of a porous medium adjacent to a vertical plate using an artificial neural network. A set of neurons in 3 layers is trained to predict the heat transfer characteristics. It is found that the thermal non-equilibrium heat transfer behavior, in terms of the Nusselt number of the fluid as well as the solid phase, can be predicted accurately using a well-trained neural network.

  20. Mittag-Leffler synchronization of fractional neural networks with time-varying delays and reaction-diffusion terms using impulsive and linear controllers.

    PubMed

    Stamova, Ivanka; Stamov, Gani

    2017-12-01

    In this paper, we propose a fractional-order neural network system with time-varying delays and reaction-diffusion terms. We first develop a new Mittag-Leffler synchronization strategy for the controlled nodes via impulsive controllers. Using the fractional Lyapunov method, sufficient conditions are given. We also study the global Mittag-Leffler synchronization of two identical fractional impulsive reaction-diffusion neural networks using linear controllers, which was an open problem even for integer-order models. Since the Mittag-Leffler stability notion is a generalization of the exponential stability concept to fractional-order systems, our results extend and improve the exponential impulsive control theory of neural network systems with time-varying delays and reaction-diffusion terms to the fractional-order case. The fractional-order derivatives allow us to model long-term memory in the neural networks, and thus the present research provides a conceptually straightforward mathematical representation of rather complex processes. Illustrative examples are presented to show the validity of the obtained results. We show that by means of appropriate impulsive controllers we can achieve the stability goal and control the qualitative behavior of the states. An image encryption scheme is extended using fractional derivatives. Copyright © 2017 Elsevier Ltd. All rights reserved.
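
    For readers less familiar with the terminology, the standard definitions below (textbook material, not reproduced from the paper) give the one-parameter Mittag-Leffler function and a typical Mittag-Leffler stability estimate for a Caputo system of order α ∈ (0, 1); here m(·) is a suitable nonnegative locally Lipschitz function with m(0) = 0, and λ, b > 0 are constants.

```latex
E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)},
\qquad
\|x(t)\| \le \Big[ m\big(x(t_0)\big)\, E_{\alpha}\!\big(-\lambda\,(t - t_0)^{\alpha}\big) \Big]^{b}.
```

    Since E_1(z) = e^z, setting α = 1 recovers the familiar exponential stability estimate, which is the sense in which Mittag-Leffler stability generalizes exponential stability.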

  1. A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks

    PubMed Central

    Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan C.; van Schaik, André

    2015-01-01

    We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted or delayed pre-synaptic spike to the post-synaptic neuron in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform. PMID:26041985
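
    For orientation, a pair-based STDP rule of the kind such adaptor circuits implement can be written in a few lines of code; the amplitudes and time constants below are illustrative choices, not the parameters of the hardware.

```python
# Hedged sketch of pair-based STDP: the weight change depends on the time
# difference between pre- and post-synaptic spikes.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms

def stdp_dw(delta_t_ms: float) -> float:
    """Weight change for a single pre/post spike pair.
    delta_t = t_post - t_pre: positive -> potentiation, negative -> depression."""
    if delta_t_ms >= 0:
        return A_PLUS * np.exp(-delta_t_ms / TAU_PLUS)
    return -A_MINUS * np.exp(delta_t_ms / TAU_MINUS)

# Apply the rule to all pre/post spike pairs of one synapse.
pre_spikes = np.array([10.0, 50.0, 90.0])    # ms
post_spikes = np.array([12.0, 45.0, 100.0])  # ms
w = 0.5
for t_pre in pre_spikes:
    for t_post in post_spikes:
        w += stdp_dw(t_post - t_pre)
w = float(np.clip(w, 0.0, 1.0))               # keep the weight in a bounded range
print(round(w, 4))
```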

  2. A New Stochastic Technique for Painlevé Equation-I Using Neural Network Optimized with Swarm Intelligence

    PubMed Central

    Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor

    2012-01-01

    A methodology for the solution of Painlevé equation-I is presented using a computational intelligence technique based on neural networks and particle swarm optimization hybridized with an active set algorithm. The mathematical model of the equation is developed with the help of a linear combination of feed-forward artificial neural networks that defines the unsupervised error of the model. This error is minimized subject to the availability of appropriate weights of the networks. The learning of the weights is carried out using a particle swarm optimization algorithm, used as a viable tool for global search, hybridized with an active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed based on a large number of independent runs and their comprehensive statistical analysis. Comparative studies of the results obtained are made with MATHEMATICA solutions, as well as with the variational iteration method and the homotopy perturbation method. PMID:22919371
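
    For context, the first Painlevé equation and a generic neural-network formulation of the kind described (a sigmoidal trial solution whose unsupervised residual is minimized over collocation points) can be written as follows; the symbols σ, v_j, w_j, b_j, t_i and the initial conditions y_0, y'_0 are illustrative notation rather than the paper's.

```latex
% The first Painlevé equation, a sigmoidal trial solution, and a generic
% unsupervised residual of the type minimized by such methods (illustrative).
y''(t) = 6\,y(t)^{2} + t, \qquad
\hat{y}(t) = \sum_{j=1}^{N} v_{j}\,\sigma\!\left(w_{j} t + b_{j}\right),
\qquad
E(\mathbf{w}) = \sum_{i=1}^{M} \left( \hat{y}''(t_i) - 6\,\hat{y}(t_i)^{2} - t_i \right)^{2}
 + \big(\hat{y}(0) - y_0\big)^{2} + \big(\hat{y}'(0) - y'_0\big)^{2}.
```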

  3. Hydraulic and separation characteristics of an industrial gas centrifuge calculated with neural networks

    NASA Astrophysics Data System (ADS)

    Butov, Vladimir; Timchenko, Sergey; Ushakov, Ivan; Golovkov, Nikita; Poberezhnikov, Andrey

    2018-03-01

    A single gas centrifuge (GC) is generally used for the separation of binary mixtures of isotopes. The processes taking place within the centrifuge are complex and non-linear, and their characteristics can change over time during long-term operation due to wear of the main structural elements of the GC. The paper is devoted to the determination of the basic operating parameters of the centrifuge with the help of neural networks. We have developed a method for determining the parameters of industrial GC operation by processing statistical data. In this work, we have constructed a neural network that is capable of determining the main hydraulic and separation characteristics of the gas centrifuge, depending on the geometric dimensions of the centrifuge, the load value, and the rotor speed.

  4. Dense Matching Comparison Between Census and a Convolutional Neural Network Algorithm for Plant Reconstruction

    NASA Astrophysics Data System (ADS)

    Xia, Y.; Tian, J.; d'Angelo, P.; Reinartz, P.

    2018-05-01

    3D reconstruction of plants is hard to implement, as the complex leaf distribution greatly increases the difficulty of dense matching. Semi-Global Matching has been successfully applied to recover the depth information of a scene, but may perform variably when different matching cost algorithms are used. In this paper two matching cost computation algorithms, the Census transform and an algorithm using a convolutional neural network, are tested for plant reconstruction based on Semi-Global Matching. High-resolution close-range photogrammetric images from a handheld camera are used for the experiment. The disparity maps generated with the two selected matching cost methods are comparable and of acceptable quality, which shows the good performance of Census and the potential of neural networks to improve dense matching.
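
    The Census transform used as one of the matching costs is simple to sketch: each pixel is encoded by a bit string recording which neighbours in a small window are darker than the centre, and costs between two images are Hamming distances between these codes. The window size and toy images below are illustrative assumptions.

```python
# Hedged sketch of the Census transform and its Hamming matching cost.
import numpy as np

def census_transform(img: np.ndarray, window: int = 5) -> np.ndarray:
    """Return a per-pixel Census bit code (as an integer) for a grayscale image."""
    h, w = img.shape
    r = window // 2
    codes = np.zeros((h, w), dtype=np.uint64)
    padded = np.pad(img, r, mode="edge")
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            codes = (codes << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return codes

def hamming_cost(code_a: np.ndarray, code_b: np.ndarray) -> np.ndarray:
    """Matching cost between two Census-coded images (element-wise Hamming distance)."""
    x = code_a ^ code_b
    cost = np.zeros(x.shape, dtype=np.uint8)
    while np.any(x):
        cost += (x & np.uint64(1)).astype(np.uint8)
        x >>= np.uint64(1)
    return cost

rng = np.random.default_rng(0)
left = rng.integers(0, 255, (32, 32)).astype(np.uint8)
right = np.roll(left, 2, axis=1)               # fake 2-pixel disparity
print(hamming_cost(census_transform(left), census_transform(right)).mean())
```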

  5. Robust fixed-time synchronization for uncertain complex-valued neural networks with discontinuous activation functions.

    PubMed

    Ding, Xiaoshuai; Cao, Jinde; Alsaedi, Ahmed; Alsaadi, Fuad E; Hayat, Tasawar

    2017-06-01

    This paper is concerned with fixed-time synchronization for a class of complex-valued neural networks in the presence of discontinuous activation functions and parameter uncertainties. Fixed-time synchronization not only requires that the considered master-slave system achieves synchronization within a finite time segment, but also requires a uniform upper bound on such time intervals for all initial synchronization errors. To accomplish the target of fixed-time synchronization, a novel feedback control procedure is designed for the slave neural networks. By means of Filippov discontinuity theories and Lyapunov stability theories, some sufficient conditions are established for the selection of control parameters to guarantee synchronization within a fixed time, while an upper bound on the settling time is obtained as well, which can be tuned to predefined values independently of the initial conditions. Additionally, criteria for a modified controller assuring fixed-time anti-synchronization are also derived for the same system. An example is included to illustrate the proposed methodologies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Predicting the spatial distribution of soil profile in Adapazari/Turkey by artificial neural networks using CPT data

    NASA Astrophysics Data System (ADS)

    Arel, Ersin

    2012-06-01

    The infamous soils of Adapazari, Turkey, that failed extensively during the 46-s long magnitude 7.4 earthquake in 1999 have since been the subject of a research program. Boreholes, piezocone soundings and voluminous laboratory testing have enabled researchers to apply sophisticated methods to determine the soil profiles in the city using the existing database. This paper describes the use of the artificial neural network (ANN) model to predict the complex soil profiles of Adapazari, based on cone penetration test (CPT) results. More than 3236 field CPT readings have been collected from 117 soundings spread over an area of 26 km2. An attempt has been made to develop the ANN model using multilayer perceptrons trained with a feed-forward back-propagation algorithm. The results show that the ANN model is fairly accurate in predicting complex soil profiles. Soil identification using CPT test results has principally been based on the Robertson charts. Applying neural network systems using the chart offers a powerful and rapid route to reliable prediction of the soil profiles.

  7. Automatic delineation and 3D visualization of the human ventricular system using probabilistic neural networks

    NASA Astrophysics Data System (ADS)

    Hatfield, Fraser N.; Dehmeshki, Jamshid

    1998-09-01

    Neurosurgery is an extremely specialized area of medical practice, requiring many years of training. It has been suggested that virtual reality models of the complex structures within the brain may aid in the training of neurosurgeons as well as playing an important role in the preparation for surgery. This paper focuses on the application of a probabilistic neural network to the automatic segmentation of the ventricles from magnetic resonance images of the brain, and their three dimensional visualization.

  8. Learning Perfectly Secure Cryptography to Protect Communications with Adversarial Neural Cryptography

    PubMed Central

    2018-01-01

    Research in Artificial Intelligence (AI) has achieved many important breakthroughs, especially in recent years. In some cases, AI learns alone from scratch and performs human tasks faster and better than humans. With the recent advances in AI, it is natural to wonder whether Artificial Neural Networks will be used to successfully create or break cryptographic algorithms. A bibliographic review shows that the main approaches to this problem have been addressed through complex Neural Networks, but without understanding or proving the security of the generated model. This paper presents an analysis of the security of cryptographic algorithms generated by a new technique called Adversarial Neural Cryptography (ANC). Using the proposed network, we show limitations of, and directions to improve, the current approach of ANC. Training the proposed Artificial Neural Network with the improved model of ANC, we show that artificially intelligent agents can learn the unbreakable One-Time Pad (OTP) algorithm, without human knowledge, to communicate securely through an insecure communication channel. This paper shows under which conditions an AI agent can learn a secure encryption scheme. However, it also shows that, without a stronger adversary, it is more likely to obtain an insecure one. PMID:29695066

  9. Learning Perfectly Secure Cryptography to Protect Communications with Adversarial Neural Cryptography.

    PubMed

    Coutinho, Murilo; de Oliveira Albuquerque, Robson; Borges, Fábio; García Villalba, Luis Javier; Kim, Tai-Hoon

    2018-04-24

    Research in Artificial Intelligence (AI) has achieved many important breakthroughs, especially in recent years. In some cases, AI learns alone from scratch and performs human tasks faster and better than humans. With the recent advances in AI, it is natural to wonder whether Artificial Neural Networks will be used to successfully create or break cryptographic algorithms. A bibliographic review shows that the main approaches to this problem have been addressed through complex Neural Networks, but without understanding or proving the security of the generated model. This paper presents an analysis of the security of cryptographic algorithms generated by a new technique called Adversarial Neural Cryptography (ANC). Using the proposed network, we show limitations of, and directions to improve, the current approach of ANC. Training the proposed Artificial Neural Network with the improved model of ANC, we show that artificially intelligent agents can learn the unbreakable One-Time Pad (OTP) algorithm, without human knowledge, to communicate securely through an insecure communication channel. This paper shows under which conditions an AI agent can learn a secure encryption scheme. However, it also shows that, without a stronger adversary, it is more likely to obtain an insecure one.
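
    For reference, the classical One-Time Pad the agents learn to emulate is just an XOR with a random, never-reused key of the same length as the message; the snippet below shows the textbook construction, not the authors' neural implementation.

```python
# Hedged illustration of the classical One-Time Pad: XOR with a random key of
# the same length as the message, used only once.
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) == len(plaintext), "the pad must be as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"meet at dawn"
key = secrets.token_bytes(len(message))          # random pad, never reused
ciphertext = otp_encrypt(message, key)
assert otp_encrypt(ciphertext, key) == message   # XOR is its own inverse
print(ciphertext.hex())
```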

  10. Power prediction in mobile communication systems using an optimal neural-network structure.

    PubMed

    Gao, X M; Gao, X Z; Tanskanen, J A; Ovaska, S J

    1997-01-01

    This paper presents a novel neural-network-based predictor for received power level prediction in direct sequence code division multiple access (DS/CDMA) systems. The predictor consists of an adaptive linear element (Adaline) followed by a multilayer perceptron (MLP). An important but difficult problem in designing such a cascade predictor is to determine the complexity of the networks. We solve this problem by using the predictive minimum description length (PMDL) principle to select the optimal numbers of input and hidden nodes. This approach results in a predictor with both good noise attenuation and excellent generalization capability. The optimized neural networks are used for predictive filtering of very noisy Rayleigh fading signals with a 1.8 GHz carrier frequency. Our results show that the optimal neural predictor can provide smoothed in-phase and quadrature signals with signal-to-noise ratio (SNR) gains of about 12 and 7 dB at the urban mobile speeds of 5 and 50 km/h, respectively. The corresponding power signal SNR gains are about 11 and 5 dB. Therefore, the neural predictor is well suited for power control applications where "delayless" noise attenuation and efficient reduction of fast fading are required.

  11. From biological neural networks to thinking machines: Transitioning biological organizational principles to computer technology

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.

    1991-01-01

    The three-dimensional organization of the vestibular macula is under study by computer-assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, highly channeled and distributed modifying, that are connected through feedforward-feedback collaterals and a biasing subcircuit. Computer simulations demonstrate that differences in the geometry of the feedback (afferent) collaterals affect the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematic.

  12. Robust autoassociative memory with coupled networks of Kuramoto-type oscillators

    NASA Astrophysics Data System (ADS)

    Heger, Daniel; Krischer, Katharina

    2016-08-01

    Uncertain recognition success, unfavorable scaling of connection complexity, or dependence on complex external input impair the usefulness of current oscillatory neural networks for pattern recognition or restrict technical realizations to small networks. We propose a network architecture of coupled oscillators for pattern recognition which shows none of the mentioned flaws. Furthermore we illustrate the recognition process with simulation results and analyze the dynamics analytically: Possible output patterns are isolated attractors of the system. Additionally, simple criteria for recognition success are derived from a lower bound on the basins of attraction.
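
    A minimal sketch of such an oscillator-based associative memory, under strong simplifying assumptions: a binary pattern is stored in Hebbian-like couplings between Kuramoto-type phase oscillators, and a noisy initialization relaxes toward the stored phase pattern. The network size, coupling strength and integration scheme are illustrative choices, and the architecture differs from the one proposed in the paper.

```python
# Hedged sketch of pattern storage with Kuramoto-type phase oscillators.
import numpy as np

rng = np.random.default_rng(1)
pattern = np.array([1, 1, -1, 1, -1, -1, 1, -1])   # stored binary pattern
n = pattern.size
K = np.outer(pattern, pattern) / n                 # Hebbian-like coupling matrix
np.fill_diagonal(K, 0.0)

def simulate(theta0, steps=4000, dt=0.01, coupling=2.0):
    theta = theta0.copy()
    for _ in range(steps):
        # Kuramoto dynamics: d(theta_i)/dt = coupling * sum_j K_ij * sin(theta_j - theta_i)
        dtheta = coupling * np.sum(K * np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta += dt * dtheta
    return theta

# Noisy initial condition roughly aligned with the stored pattern.
theta0 = np.where(pattern > 0, 0.0, np.pi) + 0.4 * rng.standard_normal(n)
theta = simulate(theta0)
recovered = np.where(np.cos(theta - theta[0]) > 0, 1, -1) * pattern[0]
print(recovered, bool((recovered == pattern).all()))
```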

  13. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model.

    PubMed

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for efficient implementation of a Hodgkin-Huxley-based (H-H) model of a neural network on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which puts limits on the network size and on the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and to increase the network size, while keeping the network execution speed close to real time and maintaining high precision. An implementation of a two mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, such as voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to inherent properties of FPGAs, such as parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for studies on neural control of cognitive robots and systems.

  14. Fluctuation-Driven Neural Dynamics Reproduce Drosophila Locomotor Patterns

    PubMed Central

    Cruchet, Steeve; Gustafson, Kyle; Benton, Richard; Floreano, Dario

    2015-01-01

    The neural mechanisms determining the timing of even simple actions, such as when to walk or rest, are largely mysterious. One intriguing, but untested, hypothesis posits a role for ongoing activity fluctuations in neurons of central action selection circuits that drive animal behavior from moment to moment. To examine how fluctuating activity can contribute to action timing, we paired high-resolution measurements of freely walking Drosophila melanogaster with data-driven neural network modeling and dynamical systems analysis. We generated fluctuation-driven network models whose outputs—locomotor bouts—matched those measured from sensory-deprived Drosophila. From these models, we identified those that could also reproduce a second, unrelated dataset: the complex time-course of odor-evoked walking for genetically diverse Drosophila strains. Dynamical models that best reproduced both Drosophila basal and odor-evoked locomotor patterns exhibited specific characteristics. First, ongoing fluctuations were required. In a stochastic resonance-like manner, these fluctuations allowed neural activity to escape stable equilibria and to exceed a threshold for locomotion. Second, odor-induced shifts of equilibria in these models caused a depression in locomotor frequency following olfactory stimulation. Our models predict that activity fluctuations in action selection circuits cause behavioral output to more closely match sensory drive and may therefore enhance navigation in complex sensory environments. Together these data reveal how simple neural dynamics, when coupled with activity fluctuations, can give rise to complex patterns of animal behavior. PMID:26600381

  15. Introducing ab initio based neural networks for transition-rate prediction in kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Messina, Luca; Castin, Nicolas; Domain, Christophe; Olsson, Pär

    2017-02-01

    The quality of kinetic Monte Carlo (KMC) simulations of microstructure evolution in alloys relies on the parametrization of point-defect migration rates, which are complex functions of the local chemical composition and can be calculated accurately with ab initio methods. However, constructing reliable models that ensure the best possible transfer of physical information from ab initio calculations to KMC is a challenging task. This work presents an innovative approach, where the transition rates are predicted by artificial neural networks trained on a database of 2000 migration barriers, obtained with density functional theory (DFT) in place of interatomic potentials. The method is tested on copper precipitation in thermally aged iron alloys, by means of a hybrid atomistic-object KMC model. For the object part of the model, the stability and mobility properties of copper-vacancy clusters are analyzed by means of independent atomistic KMC simulations, driven by the same neural networks. The cluster diffusion coefficients and mean free paths are found to increase with size, confirming the dominant role of the coarsening of medium- and large-sized clusters in the precipitation kinetics. The evolution under thermal aging is in better agreement with experiments than that of a previous interatomic-potential model, especially concerning the experimental time scales. However, the model underestimates the solubility of copper in iron due to the excessively high solution energy predicted by the chosen DFT method. Nevertheless, this work proves the capability of neural networks to transfer complex ab initio physical properties to higher-scale models, and facilitates the extension to systems with increasing chemical complexity, setting the ground for reliable microstructure evolution simulations in a wide range of alloys and applications.
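
    The coupling between the barrier predictor and the kinetic Monte Carlo engine can be sketched as a single residence-time KMC step in which each candidate jump receives an Arrhenius rate from a predicted barrier; the placeholder predictor, attempt frequency and temperature below are illustrative assumptions, not the trained networks or parameters of the study.

```python
# Hedged sketch of one residence-time (Gillespie-style) KMC step with
# Arrhenius rates from predicted migration barriers.
import math
import random

KB_EV = 8.617e-5          # Boltzmann constant, eV/K
NU0 = 1.0e13              # attempt frequency, 1/s (typical assumption)
TEMPERATURE = 600.0       # K

def predicted_barrier_eV(local_environment) -> float:
    """Placeholder for the neural-network barrier predictor (illustrative)."""
    return 0.60 + 0.05 * sum(local_environment) / max(len(local_environment), 1)

def kmc_step(candidate_jumps):
    """Pick one jump with probability proportional to its rate; return (jump, dt)."""
    rates = [NU0 * math.exp(-predicted_barrier_eV(env) / (KB_EV * TEMPERATURE))
             for _, env in candidate_jumps]
    total = sum(rates)
    r = random.random() * total
    acc = 0.0
    for (jump, _), rate in zip(candidate_jumps, rates):
        acc += rate
        if r <= acc:
            dt = -math.log(random.random()) / total   # residence time
            return jump, dt
    return candidate_jumps[-1][0], -math.log(random.random()) / total

jumps = [("vacancy->site_A", [0, 1, 1]), ("vacancy->site_B", [1, 1, 1])]
print(kmc_step(jumps))
```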

  16. Global cluster synchronization in nonlinearly coupled community networks with heterogeneous coupling delays.

    PubMed

    Tseng, Jui-Pin

    2017-02-01

    This investigation establishes the global cluster synchronization of complex networks with a community structure based on an iterative approach. The units comprising the network are described by differential equations, and can be non-autonomous and involve time delays. In addition, units in the different communities can be governed by different equations. The coupling configuration of the network is rather general. The coupling terms can be non-diffusive, nonlinear, asymmetric, and with heterogeneous coupling delays. Based on this approach, both delay-dependent and delay-independent criteria for global cluster synchronization are derived. We implement the present approach for a nonlinearly coupled neural network with heterogeneous coupling delays. Two numerical examples are given to show that neural networks can behave in a variety of new collective ways under the synchronization criteria. These examples also demonstrate that neural networks remain synchronized in spite of coupling delays between neurons across different communities; however, they may lose synchrony if the coupling delays between the neurons within the same community are too large, such that the synchronization criteria are violated. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Experiments on neural network architectures for fuzzy logic

    NASA Technical Reports Server (NTRS)

    Keller, James M.

    1991-01-01

    The use of fuzzy logic to model and manage uncertainty in a rule-based system places high computational demands on an inference engine. In an earlier paper, the authors introduced a trainable neural network structure for fuzzy logic. These networks can learn and extrapolate complex relationships between possibility distributions for the antecedents and consequents in the rules. Here, the power of these networks is further explored. The insensitivity of the output to noisy input distributions (which are likely if the clauses are generated from real data) is demonstrated as well as the ability of the networks to internalize multiple conjunctive clause and disjunctive clause rules. Since different rules with the same variables can be encoded in a single network, this approach to fuzzy logic inference provides a natural mechanism for rule conflict resolution.

  18. Propagating waves can explain irregular neural dynamics.

    PubMed

    Keane, Adam; Gong, Pulin

    2015-01-28

    Cortical neurons in vivo fire quite irregularly. Previous studies about the origin of such irregular neural dynamics have given rise to two major models: a balanced excitation and inhibition model, and a model of highly synchronized synaptic inputs. To elucidate the network mechanisms underlying synchronized synaptic inputs and account for irregular neural dynamics, we investigate a spatially extended, conductance-based spiking neural network model. We show that propagating wave patterns with complex dynamics emerge from the network model. These waves sweep past neurons, to which they provide highly synchronized synaptic inputs. On the other hand, these patterns only emerge from the network with balanced excitation and inhibition; our model therefore reconciles the two major models of irregular neural dynamics. We further demonstrate that the collective dynamics of propagating wave patterns provides a mechanistic explanation for a range of irregular neural dynamics, including the variability of spike timing, slow firing rate fluctuations, and correlated membrane potential fluctuations. In addition, in our model, the distributions of synaptic conductance and membrane potential are non-Gaussian, consistent with recent experimental data obtained using whole-cell recordings. Our work therefore relates the propagating waves that have been widely observed in the brain to irregular neural dynamics. These results demonstrate that neural firing activity, although appearing highly disordered at the single-neuron level, can form dynamical coherent structures, such as propagating waves at the population level. Copyright © 2015 the authors 0270-6474/15/351591-15$15.00/0.

  19. Semantic Segmentation of Convolutional Neural Network for Supervised Classification of Multispectral Remote Sensing

    NASA Astrophysics Data System (ADS)

    Xue, L.; Liu, C.; Wu, Y.; Li, H.

    2018-04-01

    Semantic segmentation is a fundamental research topic in remote sensing image processing. Because of the complex maritime environment, the classification of roads, vegetation, buildings and water from remote sensing imagery is a challenging task. Although neural networks have achieved excellent performance in semantic segmentation in recent years, there are few works using CNNs for ground object segmentation, and the results could be further improved. This paper uses a convolutional neural network named U-Net, whose structure has a contracting path and an expansive path to obtain high-resolution output. In the network, we added batch normalization (BN) layers, which are more conducive to the backward pass. Moreover, after the upsampling convolutions, we add dropout layers to prevent overfitting. Together these changes yield more precise segmentation results. To verify this network architecture, we used a Kaggle dataset. Experimental results show that U-Net achieved good performance compared with other architectures, especially on high-resolution remote sensing imagery.
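
    A sketch of the modification described above, using a hypothetical Keras decoder block: batch normalization follows each convolution and dropout follows the upsampling convolution. Filter counts, the dropout rate and layer choices are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of a U-Net-style decoder block with BN after each convolution
# and dropout after the upsampling convolution.
import tensorflow as tf
from tensorflow.keras import layers

def decoder_block(x, skip, filters, dropout_rate=0.5):
    """Upsample, concatenate the encoder skip connection, then conv+BN twice."""
    x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
    x = layers.Dropout(dropout_rate)(x)            # dropout after the upsampling conv
    x = layers.Concatenate()([x, skip])
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)         # BN to ease gradient flow
        x = layers.Activation("relu")(x)
    return x

# Minimal usage example with dummy tensors of compatible shapes.
bottom = layers.Input(shape=(16, 16, 128))
skip = layers.Input(shape=(32, 32, 64))
out = decoder_block(bottom, skip, filters=64)
model = tf.keras.Model([bottom, skip], out)
model.summary()
```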

  20. "Scientific roots" of dualism in neuroscience.

    PubMed

    Arshavsky, Yuri I

    2006-07-01

    Although the dualistic concept is unpopular among neuroscientists involved in experimental studies of the brain, neurophysiological literature is full of covert dualistic statements on the possibility of understanding neural mechanisms of human consciousness. Particularly, the covert dualistic attitude is exhibited in the unwillingness to discuss neural mechanisms of consciousness, leaving the problem of consciousness to psychologists and philosophers. This covert dualism seems to be rooted in the main paradigm of neuroscience that suggests that cognitive functions, such as language production and comprehension, face recognition, declarative memory, emotions, etc., are performed by neural networks consisting of simple elements. I argue that neural networks of any complexity consisting of neurons whose function is limited to the generation of electrical potentials and the transmission of signals to other neurons are hardly capable of producing human mental activity, including consciousness. Based on results obtained in physiological, morphological, clinical, and genetic studies of cognitive functions (mainly linguistic ones), I advocate the hypothesis that the performance of cognitive functions is based on complex cooperative activity of "complex" neurons that are carriers of "elementary cognition." The uniqueness of human cognitive functions, which has a genetic basis, is determined by the specificity of genes expressed by these "complex" neurons. The main goal of the review is to show that the identification of the genes implicated in cognitive functions and the understanding of a functional role of their products is a possible way to overcome covert dualism in neuroscience.

  1. Computational Models and Emergent Properties of Respiratory Neural Networks

    PubMed Central

    Lindsey, Bruce G.; Rybak, Ilya A.; Smith, Jeffrey C.

    2012-01-01

    Computational models of the neural control system for breathing in mammals provide a theoretical and computational framework bringing together experimental data obtained from different animal preparations under various experimental conditions. Many of these models were developed in parallel and iteratively with experimental studies and provided predictions guiding new experiments. This data-driven modeling approach has advanced our understanding of respiratory network architecture and neural mechanisms underlying generation of the respiratory rhythm and pattern, including their functional reorganization under different physiological conditions. Models reviewed here vary in neurobiological details and computational complexity and span multiple spatiotemporal scales of respiratory control mechanisms. Recent models describe interacting populations of respiratory neurons spatially distributed within the Bötzinger and pre-Bötzinger complexes and rostral ventrolateral medulla that contain core circuits of the respiratory central pattern generator (CPG). Network interactions within these circuits along with intrinsic rhythmogenic properties of neurons form a hierarchy of multiple rhythm generation mechanisms. The functional expression of these mechanisms is controlled by input drives from other brainstem components, including the retrotrapezoid nucleus and pons, which regulate the dynamic behavior of the core circuitry. The emerging view is that the brainstem respiratory network has rhythmogenic capabilities at multiple levels of circuit organization. This allows flexible, state-dependent expression of different neural pattern-generation mechanisms under various physiological conditions, enabling a wide repertoire of respiratory behaviors. Some models consider control of the respiratory CPG by pulmonary feedback and network reconfiguration during defensive behaviors such as cough. Future directions in modeling of the respiratory CPG are considered. PMID:23687564

  2. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    PubMed

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.

  3. Goal-seeking neural net for recall and recognition

    NASA Astrophysics Data System (ADS)

    Omidvar, Omid M.

    1990-07-01

    Neural networks have been used to mimic cognitive processes which take place in animal brains. The learning capability inherent in neural networks makes them suitable candidates for adaptive tasks such as recall and recognition. The synaptic reinforcements create a proper condition for adaptation, which results in memorization, formation of perception, and higher-order information processing activities. In this research a model of a goal-seeking neural network is studied and the operation of the network with regard to recall and recognition is analyzed. In these analyses recall is defined as retrieval of stored information where little or no matching is involved. On the other hand, recognition is recall with matching; therefore it involves memorizing a piece of information with complete presentation. This research takes the generalized view of reinforcement in which all signals are potential reinforcers. The neuronal response is considered to be the source of the reinforcement. This local approach to adaptation leads to the goal-seeking nature of the neurons as network components. In the proposed model all synaptic strengths are reinforced in parallel, while the reinforcement among the layers is done in a distributed fashion and in pipeline mode from the last layer inward. A model of a complex neuron with a varying threshold is developed to account for the inhibitory and excitatory behavior of real neurons. A goal-seeking model of a neural network is presented. This network is utilized to perform recall and recognition tasks, and the performance of the model with regard to the assigned tasks is presented.

  4. Identification of the connections in biologically inspired neural networks

    NASA Technical Reports Server (NTRS)

    Demuth, H.; Leung, K.; Beale, M.; Hicklin, J.

    1990-01-01

    We developed an identification method to find the strength of the connections between neurons from their behavior in small biologically-inspired artificial neural networks. That is, given the network external inputs and the temporal firing pattern of the neurons, we can calculate a solution for the strengths of the connections between neurons and the initial neuron activations if a solution exists. The method determines directly if there is a solution to a particular neural network problem. No training of the network is required. It should be noted that this is a first pass at the solution of a difficult problem. The neuron and network models chosen are related to biology but do not contain all of its complexities, some of which we hope to add to the model in future work. A variety of new results have been obtained. First, the method has been tailored to produce connection weight matrix solutions for networks with important features of biological neural (bioneural) networks. Second, a computationally efficient method of finding a robust central solution has been developed. This latter method also enables us to find the most consistent solution in the presence of noisy data. Prospects of applying our method to identify bioneural network connections are exciting because such connections are almost impossible to measure in the laboratory. Knowledge of such connections would facilitate an understanding of bioneural networks and would allow the construction of the electronic counterparts of bioneural networks on very large scale integrated (VLSI) circuits.

  5. Fault detection and isolation for complex system

    NASA Astrophysics Data System (ADS)

    Jing, Chan Shi; Bayuaji, Luhur; Samad, R.; Mustafa, M.; Abdullah, N. R. H.; Zain, Z. M.; Pebrianti, Dwi

    2017-07-01

    Fault Detection and Isolation (FDI) is a method to monitor, identify, and pinpoint the type and location of a system fault in a complex multiple-input multiple-output (MIMO) non-linear system. A two-wheel robot is used as the complex system in this study. The aim of the research is to design and construct an FDI algorithm. The proposed method for fault identification uses a hybrid technique that combines a Kalman filter and an Artificial Neural Network (ANN). The Kalman filter is able to recognize the data from the system's sensors and indicate faults in the sensor readings. Error prediction is based on the fault magnitude and the time of fault occurrence. Additionally, the ANN is used to determine the type of fault and isolate the fault in the system.
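
    As a hedged sketch of this kind of pipeline (not the authors' implementation, with synthetic sensor data standing in for the two-wheel robot), a scalar Kalman filter can supply innovations for fault detection while a small neural network classifies the fault type from residual windows; the noise levels, thresholds, and labels below are assumptions.

    ```python
    # Detection via Kalman-filter innovations, isolation via an ANN on residual windows.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def kalman_residuals(z, q=1e-3, r=1e-2):
        """Run a 1-D Kalman filter (random-walk state model) and return innovations."""
        x, p = z[0], 1.0
        res = []
        for zk in z[1:]:
            p = p + q                        # predict step
            k = p / (p + r)                  # Kalman gain
            innov = zk - x                   # innovation: measurement minus prediction
            x = x + k * innov                # update state estimate
            p = (1.0 - k) * p
            res.append(innov)
        return np.array(res)

    rng = np.random.default_rng(0)
    t = np.arange(500)
    healthy = np.sin(0.05 * t) + 0.05 * rng.standard_normal(t.size)
    faulty = healthy.copy()
    faulty[300:] += 0.4 * rng.standard_normal(t.size - 300)   # injected noisy sensor fault

    # Detection: residuals well above the healthy baseline flag the fault.
    res = kalman_residuals(faulty)
    threshold = 4 * np.std(kalman_residuals(healthy))
    print("samples flagged:", int(np.sum(np.abs(res) > threshold)))

    # Isolation: residual windows labelled by fault type train a small ANN classifier.
    def windows(sig, label, w=20):
        X = np.array([sig[i:i + w] for i in range(0, len(sig) - w, w)])
        return X, np.full(len(X), label)

    X0, y0 = windows(kalman_residuals(healthy), 0)   # label 0: no fault
    X1, y1 = windows(res[300:], 1)                   # label 1: noisy-sensor fault
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(np.vstack([X0, X1]), np.concatenate([y0, y1]))
    ```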

  6. Machine learning phases of matter

    NASA Astrophysics Data System (ADS)

    Carrasquilla, Juan; Melko, Roger G.

    2017-02-01

    Condensed-matter physics is the study of the collective behaviour of infinitely complex assemblies of electrons, nuclei, magnetic moments, atoms or qubits. This complexity is reflected in the size of the state space, which grows exponentially with the number of particles, reminiscent of the `curse of dimensionality' commonly encountered in machine learning. Despite this curse, the machine learning community has developed techniques with remarkable abilities to recognize, classify, and characterize complex sets of data. Here, we show that modern machine learning architectures, such as fully connected and convolutional neural networks, can identify phases and phase transitions in a variety of condensed-matter Hamiltonians. Readily programmable through modern software libraries, neural networks can be trained to detect multiple types of order parameter, as well as highly non-trivial states with no conventional order, directly from raw state configurations sampled with Monte Carlo.
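
    For illustration only, the toy below trains a small fully connected network to separate crude ordered and disordered Ising-like spin configurations; the synthetic samples stand in for the Monte Carlo configurations used in the paper, and the lattice size and flip rate are arbitrary assumptions.

    ```python
    # A fully connected classifier learns an ordered-vs-disordered phase label from raw spins.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    L = 16                                                   # linear lattice size

    def disordered(n):                                       # high-temperature-like samples
        return rng.choice([-1, 1], size=(n, L * L))

    def ordered(n):                                          # low-temperature-like samples
        base = rng.choice([-1, 1], size=(n, 1)) * np.ones((n, L * L))
        flips = rng.random((n, L * L)) < 0.05                # sparse thermal flips
        return np.where(flips, -base, base)

    X = np.vstack([disordered(500), ordered(500)])
    y = np.array([0] * 500 + [1] * 500)                      # 0 = disordered, 1 = ordered
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    clf.fit(X, y)
    print(clf.score(X, y))                                   # near-perfect separation on this toy data
    ```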

  7. Antenna analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern shaping. The interesting thing about D-C synthesis is that the side lobes have the same amplitude. Five-element arrays were used. Again, 41 pattern samples were used for the input. Nine actual D-C patterns ranging from -10 dB to -30 dB side lobe levels were used to train the network. A comparison between simulated and actual D-C techniques for a pattern with -22 dB side lobe level is shown. The goal for this research was to evaluate the performance of neural network computing with antennas. Future applications will employ the backpropagation training algorithm to drastically reduce the computational complexity involved in performing EM compensation for surface errors in large space reflector antennas.
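
    A sketch under stated assumptions (random excitations and a simple uniform-linear-array factor in place of the paper's Woodward-Lawson training patterns): a backpropagation network is fit to map 41 pattern samples to the 40 excitation values (20 real, 20 imaginary) of a 20-element array.

    ```python
    # Pattern samples in, array excitations out: the same input/output dimensions as the paper.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n_train, n_samples, n_elems = 27, 41, 20
    excitations = rng.standard_normal((n_train, 2 * n_elems))        # [Re | Im] per training array

    # Array factor sampled at 41 angles acts as the "desired pattern" input.
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_samples)
    k_d = np.pi                                                      # half-wavelength element spacing
    steering = np.exp(1j * k_d * np.outer(np.sin(theta), np.arange(n_elems)))
    complex_w = excitations[:, :n_elems] + 1j * excitations[:, n_elems:]
    patterns = np.abs(complex_w @ steering.T)                        # shape (27, 41)

    net = MLPRegressor(hidden_layer_sizes=(40,), max_iter=5000, random_state=0)
    net.fit(patterns, excitations)                                   # pattern samples -> excitations
    ```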

  8. Antenna analysis using neural networks

    NASA Astrophysics Data System (ADS)

    Smith, William T.

    1992-09-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary).

  9. A neural network approach for image reconstruction in electron magnetic resonance tomography.

    PubMed

    Durairaj, D Christopher; Krishna, Murali C; Murugesan, Ramachandran

    2007-10-01

    An object-oriented, artificial neural network (ANN) based, application system for reconstruction of two-dimensional spatial images in electron magnetic resonance (EMR) tomography is presented. The standard back propagation algorithm is utilized to train a three-layer sigmoidal feed-forward, supervised, ANN to perform the image reconstruction. The network learns the relationship between the 'ideal' images that are reconstructed using the filtered back projection (FBP) technique and the corresponding projection data (sinograms). The input layer of the network is provided with a training set that contains projection data from various phantoms as well as in vivo objects, acquired from an EMR imager. Twenty-five different network configurations are investigated to test the generalization ability of the network. The trained ANN then reconstructs two-dimensional temporal spatial images that present the distribution of free radicals in biological systems. Image reconstruction by the trained neural network shows better time complexity than conventional iterative reconstruction algorithms such as the multiplicative algebraic reconstruction technique (MART). The network is further explored for image reconstruction from 'noisy' EMR data and the results show better performance than the FBP method. The network is also tested for its ability to reconstruct from a limited-angle EMR data set.

  10. Learning free energy landscapes using artificial neural networks.

    PubMed

    Sidky, Hythem; Whitmer, Jonathan K

    2018-03-14

    Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.

  11. Nonlinear channel equalization for QAM signal constellation using artificial neural networks.

    PubMed

    Patra, J C; Pal, R N; Baliarsingh, R; Panda, G

    1999-01-01

    Application of artificial neural networks (ANN's) to adaptive channel equalization in a digital communication system with 4-QAM signal constellation is reported in this paper. A novel computationally efficient single layer functional link ANN (FLANN) is proposed for this purpose. This network has a simple structure in which the nonlinearity is introduced by functional expansion of the input pattern by trigonometric polynomials. Because of input pattern enhancement, the FLANN is capable of forming arbitrarily nonlinear decision boundaries and can perform complex pattern classification tasks. Considering channel equalization as a nonlinear classification problem, the FLANN has been utilized for nonlinear channel equalization. The performance of the FLANN is compared with two other ANN structures [a multilayer perceptron (MLP) and a polynomial perceptron network (PPN)] along with a conventional linear LMS-based equalizer for different linear and nonlinear channel models. The effect of eigenvalue ratio (EVR) of input correlation matrix on the equalizer performance has been studied. The comparison of computational complexity involved for the three ANN structures is also provided.
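
    The functional expansion is simple enough to sketch. The toy below is an assumption-laden simplification (a real-valued nonlinear channel and binary symbols rather than the paper's 4-QAM constellation): each tap vector is expanded with trigonometric polynomials and a single layer of weights is adapted with LMS.

    ```python
    # FLANN-style equalizer: trigonometric functional expansion + single-layer LMS adaptation.
    import numpy as np

    def trig_expand(x, order=2):
        """Functional expansion of a tap vector with sine/cosine terms."""
        feats = [x]
        for k in range(1, order + 1):
            feats.append(np.sin(k * np.pi * x))
            feats.append(np.cos(k * np.pi * x))
        return np.concatenate(feats)

    rng = np.random.default_rng(0)
    symbols = rng.choice([-1.0, 1.0], size=5000)             # binary training symbols for simplicity
    channel = np.convolve(symbols, [1.0, 0.5], mode="same")  # linear intersymbol interference
    received = channel + 0.2 * channel ** 2 + 0.05 * rng.standard_normal(symbols.size)  # nonlinearity + noise

    taps, mu = 3, 0.01
    w = np.zeros(taps * 5)                                   # weights over the expanded features
    for n in range(taps, symbols.size):
        x = trig_expand(received[n - taps:n])                # expanded input pattern
        y = w @ x                                            # single-layer equalizer output
        e = symbols[n - 1] - y                               # error w.r.t. the delayed training symbol
        w += mu * e * x                                      # LMS weight update
    ```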

  12. Learning free energy landscapes using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Sidky, Hythem; Whitmer, Jonathan K.

    2018-03-01

    Existing adaptive bias techniques, which seek to estimate free energies and physical properties from molecular simulations, are limited by their reliance on fixed kernels or basis sets which hinder their ability to efficiently conform to varied free energy landscapes. Further, user-specified parameters are in general non-intuitive yet significantly affect the convergence rate and accuracy of the free energy estimate. Here we propose a novel method, wherein artificial neural networks (ANNs) are used to develop an adaptive biasing potential which learns free energy landscapes. We demonstrate that this method is capable of rapidly adapting to complex free energy landscapes and is not prone to boundary or oscillation problems. The method is made robust to hyperparameters and overfitting through Bayesian regularization which penalizes network weights and auto-regulates the number of effective parameters in the network. ANN sampling represents a promising innovative approach which can resolve complex free energy landscapes in less time than conventional approaches while requiring minimal user input.

  13. Implementing Signature Neural Networks with Spiking Neurons

    PubMed Central

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community, and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm—i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data—to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence of inhibitory connections. These parameters also modulate the memory capabilities of the network. The dynamical modes observed in the different informational dimensions in a given moment are independent and they only depend on the parameters shaping the information processing in this dimension. In view of these results, we argue that plasticity mechanisms inside individual cells and multicoding strategies can provide additional computational properties to spiking neural networks, which could enhance their capacity and performance in a wide variety of real-world tasks. PMID:28066221

  14. Implementing Signature Neural Networks with Spiking Neurons.

    PubMed

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community, and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm (i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data) to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence of inhibitory connections. These parameters also modulate the memory capabilities of the network. The dynamical modes observed in the different informational dimensions in a given moment are independent and they only depend on the parameters shaping the information processing in this dimension. In view of these results, we argue that plasticity mechanisms inside individual cells and multicoding strategies can provide additional computational properties to spiking neural networks, which could enhance their capacity and performance in a wide variety of real-world tasks.

  15. Variable synaptic strengths controls the firing rate distribution in feedforward neural networks.

    PubMed

    Ly, Cheng; Marsat, Gary

    2018-02-01

    Heterogeneity of firing rate statistics is known to have severe consequences on neural coding. Recent experimental recordings in weakly electric fish indicate that the distribution-width of superficial pyramidal cell firing rates (trial- and time-averaged) in the electrosensory lateral line lobe (ELL) depends on the stimulus, and also that network inputs can mediate changes in the firing rate distribution across the population. We previously developed theoretical methods to understand how two attributes (synaptic and intrinsic heterogeneity) interact and alter the firing rate distribution in a population of integrate-and-fire neurons with random recurrent coupling. Inspired by our experimental data, we extend these theoretical results to a delayed feedforward spiking network that qualitatively captures the changes in firing rate heterogeneity observed in in vivo recordings. We demonstrate how heterogeneous neural attributes alter firing rate heterogeneity, accounting for the effect with various sensory stimuli. The model predicts how the strength of the effective network connectivity is related to intrinsic heterogeneity in such delayed feedforward networks: the strength of the feedforward input is positively correlated with excitability (threshold value for spiking) when firing rate heterogeneity is low and negatively correlated with excitability when firing rate heterogeneity is high. We also show how our theory can be used to predict effective neural architecture. We demonstrate that neural attributes do not interact in a simple manner but rather in a complex stimulus-dependent fashion to control neural heterogeneity and discuss how it can ultimately shape population codes.
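
    A toy sketch (not the authors' model) of a delayed feedforward population of leaky integrate-and-fire neurons with heterogeneous thresholds and feedforward strengths, used to look at how the firing-rate distribution spreads across the population; all parameter values are arbitrary assumptions.

    ```python
    # Delayed feedforward drive onto a heterogeneous leaky integrate-and-fire population.
    import numpy as np

    rng = np.random.default_rng(0)
    N, T, dt = 200, 5.0, 1e-3                        # neurons, simulated seconds, time step
    tau, v_reset = 0.02, 0.0
    theta = 1.0 + 0.2 * rng.standard_normal(N)       # heterogeneous spike thresholds (excitability)
    g_ff = 0.8 + 0.3 * rng.standard_normal(N)        # heterogeneous feedforward strengths
    delay_steps = int(0.04 / dt)                     # 40 ms feedforward delay

    steps = int(T / dt)
    stim = 1.5 * (1 + 0.5 * np.sin(2 * np.pi * 2 * np.arange(steps) * dt))  # shared sensory drive
    v = np.zeros(N)
    spike_count = np.zeros(N)
    for t in range(steps):
        s = stim[t - delay_steps] if t >= delay_steps else 0.0      # delayed feedforward input
        I = g_ff * s + 0.05 * rng.standard_normal(N) / np.sqrt(dt)  # drive plus white noise
        v += dt / tau * (-v + I)                     # leaky integration
        fired = v >= theta
        spike_count += fired
        v[fired] = v_reset                           # reset after a spike

    rates = spike_count / T
    print(rates.mean(), rates.std())                 # mean and width of the firing-rate distribution
    ```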

  16. Fractal Patterns of Neural Activity Exist within the Suprachiasmatic Nucleus and Require Extrinsic Network Interactions

    PubMed Central

    Hu, Kun; Meijer, Johanna H.; Shea, Steven A.; vanderLeest, Henk Tjebbe; Pittman-Polletta, Benjamin; Houben, Thijs; van Oosterhout, Floor; Deboer, Tom; Scheer, Frank A. J. L.

    2012-01-01

    The mammalian central circadian pacemaker (the suprachiasmatic nucleus, SCN) contains thousands of neurons that are coupled through a complex network of interactions. In addition to the established role of the SCN in generating rhythms of ∼24 hours in many physiological functions, the SCN was recently shown to be necessary for normal self-similar/fractal organization of motor activity and heart rate over a wide range of time scales—from minutes to 24 hours. To test whether the neural network within the SCN is sufficient to generate such fractal patterns, we studied multi-unit neural activity of in vivo and in vitro SCNs in rodents. In vivo SCN-neural activity exhibited fractal patterns that are virtually identical in mice and rats and are similar to those in motor activity at time scales from minutes up to 10 hours. In addition, these patterns remained unchanged when the main afferent signal to the SCN, namely light, was removed. However, the fractal patterns of SCN-neural activity are not autonomous within the SCN as these patterns completely broke down in the isolated in vitro SCN despite persistence of circadian rhythmicity. Thus, SCN-neural activity is fractal in the intact organism and these fractal patterns require network interactions between the SCN and extra-SCN nodes. Such a fractal control network could underlie the fractal regulation observed in many physiological functions that involve the SCN, including motor control and heart rate regulation. PMID:23185285

  17. Hybrid expert system for decision supporting in the medical area: complexity and cognitive computing.

    PubMed

    Brasil, L M; de Azevedo, F M; Barreto, J M

    2001-09-01

    This paper proposes a hybrid expert system (HES) to minimise some complexity problems pervasive in artificial intelligence, such as: the knowledge elicitation process, known as the bottleneck of expert systems; the choice of model for knowledge representation to encode human reasoning; the number of neurons in the hidden layer and the topology used in the connectionist approach; and the difficulty of obtaining an explanation of how the network arrived at a conclusion. Two algorithms applied to the development of the HES are also suggested. One of them is used to train the fuzzy neural network and the other to obtain explanations of how the fuzzy neural network reached a conclusion. To overcome these difficulties, cognitive computing was integrated into the developed system. A case study (an epileptic crisis) is presented with the problem definition and simulations. Results are also discussed.

  18. Potential implementation of reservoir computing models based on magnetic skyrmions

    NASA Astrophysics Data System (ADS)

    Bourianoff, George; Pinna, Daniele; Sitte, Matthias; Everschor-Sitte, Karin

    2018-05-01

    Reservoir Computing is a type of recurrent neural network commonly used for recognizing and predicting spatio-temporal events, relying on a complex hierarchy of nested feedback loops to generate a memory functionality. The Reservoir Computing paradigm does not require any knowledge of the reservoir topology or node weights for training purposes and can therefore utilize naturally existing networks formed by a wide variety of physical processes. Most efforts to implement reservoir computing prior to this have focused on utilizing memristor techniques to implement recurrent neural networks. This paper examines the potential of magnetic skyrmion fabrics and the complex current patterns which form in them as an attractive physical instantiation for Reservoir Computing. We argue that their nonlinear dynamical interplay resulting from anisotropic magnetoresistance and spin-torque effects allows for effective and energy-efficient nonlinear processing of spatio-temporal events with the aim of event recognition and prediction.

  19. Human activity recognition based on feature selection in smart home using back-propagation algorithm.

    PubMed

    Fang, Hongqing; He, Lei; Si, Hao; Liu, Peng; Xie, Xiaolei

    2014-09-01

    In this paper, the back-propagation (BP) algorithm is used to train a feedforward neural network for human activity recognition in smart home environments, and an inter-class distance method for feature selection of observed motion sensor events is discussed and tested. The human activity recognition performance of the BP-trained neural network is then evaluated and compared with other probabilistic algorithms: the Naïve Bayes (NB) classifier and the Hidden Markov Model (HMM). The results show that different feature datasets yield different activity recognition accuracy. The selection of unsuitable feature datasets increases the computational complexity and degrades the activity recognition accuracy. Furthermore, the neural network using the BP algorithm has relatively better human activity recognition performance than the NB classifier and the HMM. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
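
    A brief sketch of the two-stage idea on synthetic data (the features, class structure, and scoring rule are assumptions, not the paper's sensor dataset): features are ranked by a simple inter-class distance criterion and the selected subset trains a backpropagation network.

    ```python
    # Inter-class distance feature selection followed by a BP-trained feedforward classifier.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n_per_class, n_feat, n_informative = 200, 30, 6
    X0 = rng.standard_normal((n_per_class, n_feat))
    X1 = rng.standard_normal((n_per_class, n_feat))
    X1[:, :n_informative] += 1.5                      # only a few features carry class information
    X = np.vstack([X0, X1])
    y = np.array([0] * n_per_class + [1] * n_per_class)

    # Inter-class distance per feature: between-class mean gap over pooled spread.
    gap = np.abs(X[y == 0].mean(0) - X[y == 1].mean(0))
    spread = X[y == 0].std(0) + X[y == 1].std(0)
    score = gap / spread
    selected = np.argsort(score)[::-1][:n_informative]   # keep the top-ranked features

    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    clf.fit(X[:, selected], y)
    print(clf.score(X[:, selected], y))
    ```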

  20. A robust cloud registration method based on redundant data reduction using backpropagation neural network and shift window

    NASA Astrophysics Data System (ADS)

    Xin, Meiting; Li, Bing; Yan, Xiao; Chen, Lei; Wei, Xiang

    2018-02-01

    A robust coarse-to-fine registration method based on the backpropagation (BP) neural network and shift window technology is proposed in this study. Specifically, there are three steps: coarse alignment between the model data and measured data, data simplification based on the BP neural network and point reservation in the contour region of point clouds, and fine registration with the reweighted iterative closest point algorithm. In the process of rough alignment, the initial rotation matrix and the translation vector between the two datasets are obtained. After performing subsequent simplification operations, the number of points can be reduced greatly. Therefore, the time and space complexity of the accurate registration can be significantly reduced. The experimental results show that the proposed method improves the computational efficiency without loss of accuracy.

  1. Systems Engineering Design Via Experimental Operation Research: Complex Organizational Metric for Programmatic Risk Environments (COMPRE)

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    1999-01-01

    Unique and innovative graph theory, neural network, organizational modeling, and genetic algorithms are applied to the design and evolution of programmatic and organizational architectures. Graph theory representations of programs and organizations increase modeling capabilities and flexibility, while illuminating preferable programmatic/organizational design features. Treating programs and organizations as neural networks results in better system synthesis, and more robust data modeling. Organizational modeling using covariance structures enhances the determination of organizational risk factors. Genetic algorithms improve programmatic evolution characteristics, while shedding light on rulebase requirements for achieving specified technological readiness levels, given budget and schedule resources. This program of research improves the robustness and verifiability of systems synthesis tools, including the Complex Organizational Metric for Programmatic Risk Environments (COMPRE).

  2. Applications of Artificial Neural Networks in Structural Engineering with Emphasis on Continuum Models

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Liu, Youhua

    1998-01-01

    The use of continuum models for the analysis of discrete built-up complex aerospace structures is an attractive idea, especially at the conceptual and preliminary design stages. However, the diversity of available continuum models and their hard-to-use qualities have prevented them from finding wide application. In this regard, Artificial Neural Networks (ANN or NN) may have great potential, as these networks are universal approximators that can realize any continuous mapping, and can provide general mechanisms for building models from data whose input-output relationship can be highly nonlinear. The ultimate aim of the present work is to be able to build high-fidelity continuum models for complex aerospace structures using ANNs. As a first step, the concepts and features of ANN are familiarized through the MATLAB NN Toolbox by simulating some representative mapping examples, including some problems in structural engineering. Then some further aspects and lessons learned about the NN training are discussed, including the performance of Feed-Forward and Radial Basis Function NNs when dealing with noise-polluted data and the technique of cross-validation. Finally, as an example of using NN in continuum models, a lattice structure with repeating cells is represented by a continuum beam whose properties are provided by neural networks.

  3. Optogenetic stimulation of multiwell MEA plates for neural and cardiac applications

    NASA Astrophysics Data System (ADS)

    Clements, Isaac P.; Millard, Daniel C.; Nicolini, Anthony M.; Preyer, Amanda J.; Grier, Robert; Heckerling, Andrew; Blum, Richard A.; Tyler, Phillip; McSweeney, K. M.; Lu, Yi-Fan; Hall, Diana; Ross, James D.

    2016-03-01

    Microelectrode array (MEA) technology enables advanced drug screening and "disease-in-a-dish" modeling by measuring the electrical activity of cultured networks of neural or cardiac cells. Recent developments in human stem cell technologies, advancements in genetic models, and regulatory initiatives for drug screening have increased the demand for MEA-based assays. In response, Axion Biosystems previously developed a multiwell MEA platform, providing up to 96 MEA culture wells arrayed into a standard microplate format. Multiwell MEA-based assays would be further enhanced by optogenetic stimulation, which enables selective excitation and inhibition of targeted cell types. This capability for selective control over cell culture states would allow finer pacing and probing of cell networks for more reliable and complete characterization of complex network dynamics. Here we describe a system for independent optogenetic stimulation of each well of a 48-well MEA plate. The system enables finely graded control of light delivery during simultaneous recording of network activity in each well. Using human induced pluripotent stem cell (hiPSC) derived cardiomyocytes and rodent primary neuronal cultures, we demonstrate high channel-count light-based excitation and suppression in several proof-of-concept experimental models. Our findings demonstrate advantages of combining multiwell optical stimulation and MEA recording for applications including cardiac safety screening, neural toxicity assessment, and advanced characterization of complex neuronal diseases.

  4. Inversion of 2-D DC resistivity data using rapid optimization and minimal complexity neural network

    NASA Astrophysics Data System (ADS)

    Singh, U. K.; Tiwari, R. K.; Singh, S. B.

    2010-02-01

    The backpropagation (BP) artificial neural network (ANN) optimization technique based on the steepest descent algorithm is known for its poor performance and does not ensure global convergence. Nonlinear and complex DC resistivity data require an efficient ANN model and more intensive optimization procedures for better results and interpretations. Improvements in the computational ANN modeling process are described with the goals of enhancing the optimization process and reducing ANN model complexity. Well-established optimization methods, such as the radial basis algorithm (RBA) and the Levenberg-Marquardt algorithm (LMA), have frequently been used to deal with complexity and nonlinearity in such complex geophysical records. Here we examined the efficiency of trained LMA and RBA networks using 2-D synthetic resistivity data and then applied them to actual field vertical electrical resistivity sounding (VES) data collected from the Puga Valley, Jammu and Kashmir, India. The resulting ANN resistivity reconstructions are compared with the results of existing inversion approaches and are in good agreement. The depths and resistivity structures obtained by the ANN methods also correlate well with the known drilling results and geologic boundaries. The application of the above ANN algorithms proves to be robust and could be used for fast estimation of resistive structures for other complex earth models as well.

  5. Forecasting daily source air quality using multivariate statistical analysis and radial basis function networks.

    PubMed

    Sun, Gang; Hoff, Steven J; Zelle, Brian C; Nelson, Minda A

    2008-12-01

    It is vital to forecast gas and particle matter concentrations and emission rates (GPCER) from livestock production facilities to assess the impact of airborne pollutants on human health, ecological environment, and global warming. Modeling source air quality is a complex process because of abundant nonlinear interactions between GPCER and other factors. The objective of this study was to introduce statistical methods and a radial basis function (RBF) neural network to predict daily source air quality in Iowa swine deep-pit finishing buildings. The results show that four variables (outdoor and indoor temperature, animal units, and ventilation rates) were identified as relatively important model inputs using statistical methods. It can be further demonstrated that only two factors, the environment factor and the animal factor, were capable of explaining more than 94% of the total variability after performing principal component analysis. The introduction of fewer uncorrelated variables to the neural network would reduce the model structure complexity, minimize computation cost, and eliminate model overfitting problems. The RBF network predictions were in good agreement with the actual measurements, with values of the correlation coefficient between 0.741 and 0.995 and very low values of systemic performance indexes for all the models. The good results indicated the RBF network could be trained to model these highly nonlinear relationships. Thus, the RBF neural network technology combined with multivariate statistical methods is a promising tool for air pollutant emissions modeling.
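
    A minimal sketch of that two-step pipeline, assuming synthetic data in place of the GPCER measurements: PCA compresses the correlated inputs into two factors, and a small radial basis function network (k-means centres, Gaussian units, linear least-squares readout) performs the regression.

    ```python
    # PCA for factor extraction, then an RBF network for the regression step.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n = 300
    raw = rng.standard_normal((n, 4))                            # stand-ins for temperature, animal units, ventilation
    raw[:, 1] = 0.8 * raw[:, 0] + 0.2 * rng.standard_normal(n)   # correlated inputs
    target = np.sin(raw[:, 0]) + 0.5 * raw[:, 2] + 0.1 * rng.standard_normal(n)

    X = PCA(n_components=2).fit_transform(raw)                   # two uncorrelated factors

    # RBF network: Gaussian hidden units centred by k-means, linear output weights.
    centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
    dists = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
    width = dists.mean()
    H = np.exp(-dists ** 2 / (2 * width ** 2))                   # hidden-layer activations
    w, *_ = np.linalg.lstsq(H, target, rcond=None)               # linear readout weights
    pred = H @ w
    print(np.corrcoef(pred, target)[0, 1])                       # correlation of fit vs. target
    ```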

  6. Individual node's contribution to the mesoscale of complex networks

    NASA Astrophysics Data System (ADS)

    Klimm, Florian; Borge-Holthoefer, Javier; Wessel, Niels; Kurths, Jürgen; Zamora-López, Gorka

    2014-12-01

    The analysis of complex networks is devoted to the statistical characterization of the topology of graphs at different scales of organization in order to understand their functionality. While the modular structure of networks has become an essential element to better apprehend their complexity, the efforts to characterize the mesoscale of networks have focused on the identification of the modules rather than describing the mesoscale in an informative manner. Here we propose a framework to characterize the position each node takes within the modular configuration of complex networks and to evaluate its function accordingly. For illustration, we apply this framework to a set of synthetic networks, empirical neural networks, and the transcriptional regulatory network of Mycobacterium tuberculosis. We find that the architectures of both neuronal and transcriptional networks are optimized for the processing of multisensory information, with the coexistence of well-defined modules of specialized components and the presence of hubs conveying information to and from the distinct functional domains.

  7. Deep learning based hand gesture recognition in complex scenes

    NASA Astrophysics Data System (ADS)

    Ni, Zihan; Sang, Nong; Tan, Cheng

    2018-03-01

    Recently, region-based convolutional neural networks (R-CNNs) have achieved significant success in the field of object detection, but their accuracy is limited for small and similar objects, such as gestures. To solve this problem, we present an online hard example testing (OHET) technique to evaluate the confidence of the R-CNNs' outputs and regard outputs with low confidence as hard examples. In this paper, we propose a cascade of networks to recognize gestures. First, we use the region-based fully convolutional network (R-FCN), which is capable of detecting small objects, to detect the gestures, and then use OHET to select the hard examples. To enhance the accuracy of gesture recognition, we re-classify the hard examples with a VGG-19 classification network to obtain the final output of the gesture recognition system. In comparative experiments with other methods, the cascaded networks combined with OHET reached state-of-the-art results of 99.3% mAP on small and similar gestures in complex scenes.

  8. Impulsivity and the Modular Organization of Resting-State Neural Networks

    PubMed Central

    Davis, F. Caroline; Knodt, Annchen R.; Sporns, Olaf; Lahey, Benjamin B.; Zald, David H.; Brigidi, Bart D.; Hariri, Ahmad R.

    2013-01-01

    Impulsivity is a complex trait associated with a range of maladaptive behaviors, including many forms of psychopathology. Previous research has implicated multiple neural circuits and neurotransmitter systems in impulsive behavior, but the relationship between impulsivity and organization of whole-brain networks has not yet been explored. Using graph theory analyses, we characterized the relationship between impulsivity and the functional segregation (“modularity”) of the whole-brain network architecture derived from resting-state functional magnetic resonance imaging (fMRI) data. These analyses revealed remarkable differences in network organization across the impulsivity spectrum. Specifically, in highly impulsive individuals, regulatory structures including medial and lateral regions of the prefrontal cortex were isolated from subcortical structures associated with appetitive drive, whereas these brain areas clustered together within the same module in less impulsive individuals. Further exploration of the modular organization of whole-brain networks revealed novel shifts in the functional connectivity between visual, sensorimotor, cortical, and subcortical structures across the impulsivity spectrum. The current findings highlight the utility of graph theory analyses of resting-state fMRI data in furthering our understanding of the neurobiological architecture of complex behaviors. PMID:22645253

  9. Resource constrained design of artificial neural networks using comparator neural network

    NASA Technical Reports Server (NTRS)

    Wah, Benjamin W.; Karnik, Tanay S.

    1992-01-01

    We present a systematic design method executed under resource constraints for automating the design of artificial neural networks using the back error propagation algorithm. Our system aims at finding the best possible configuration for solving the given application with proper tradeoff between the training time and the network complexity. The design of such a system is hampered by three related problems. First, there are infinitely many possible network configurations, each may take an exceedingly long time to train; hence, it is impossible to enumerate and train all of them to completion within fixed time, space, and resource constraints. Second, expert knowledge on predicting good network configurations is heuristic in nature and is application dependent, rendering it difficult to characterize fully in the design process. A learning procedure that refines this knowledge based on examples on training neural networks for various applications is, therefore, essential. Third, the objective of the network to be designed is ill-defined, as it is based on a subjective tradeoff between the training time and the network cost. A design process that proposes alternate configurations under different cost-performance tradeoff is important. We have developed a Design System which schedules the available time, divided into quanta, for testing alternative network configurations. Its goal is to select/generate and test alternative network configurations in each quantum, and find the best network when time is expended. Since time is limited, a dynamic schedule that determines the network configuration to be tested in each quantum is developed. The schedule is based on relative comparison of predicted training times of alternative network configurations using comparator network paradigm. The comparator network has been trained to compare training times for a large variety of traces of TSSE-versus-time collected during back-propagation learning of various applications.

  10. Algorithm for predicting the evolution of series of dynamics of complex systems in solving information problems

    NASA Astrophysics Data System (ADS)

    Kasatkina, T. I.; Dushkin, A. V.; Pavlov, V. A.; Shatovkin, R. R.

    2018-03-01

    Neural network methods have recently been applied in the development of information systems and software for predicting series of dynamics. They are more flexible than existing analogues and are capable of taking the nonlinearities of a series into account. In this paper, we propose a modified algorithm for predicting series of dynamics, based on prediction by the multilayer perceptron method, which includes a method for training neural networks and an approach to describing and presenting the input data. To construct the neural network, the values of the series at its extremum points and the corresponding time values, arranged using the sliding window method, are used as input data. The proposed algorithm can act as an independent approach to predicting series of dynamics or as one part of a forecasting system. The efficiency of predicting the evolution of the series for short-term one-step and long-term multi-step forecasts is compared between the classical multilayer perceptron method and the modified algorithm using synthetic and real data. The result of this modification is a reduction of the iterative error that arises when previously predicted values are fed back as inputs to the neural network, as well as an increase in the accuracy of the network's iterative predictions.
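
    A rough sketch of the input construction described above (the modified algorithm's specifics are not reproduced; the series, window length, and network size are assumptions): local extrema and their time stamps are extracted, and sliding windows over these (value, time) pairs feed a multilayer perceptron.

    ```python
    # Extremum-based sliding windows as inputs to an MLP forecaster.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    t = np.arange(0, 60, 0.1)
    series = np.sin(t) + 0.3 * np.sin(3.1 * t) + 0.05 * rng.standard_normal(t.size)

    # Indices of local extrema (sign change of the first difference).
    d = np.diff(series)
    ext = np.where(np.sign(d[1:]) != np.sign(d[:-1]))[0] + 1
    pairs = np.column_stack([series[ext], t[ext]])           # (value, time) at each extremum

    w = 4                                                    # sliding-window length
    X = np.array([pairs[i:i + w].ravel() for i in range(len(pairs) - w)])
    y = np.array([pairs[i + w, 0] for i in range(len(pairs) - w)])   # next extremum value

    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    net.fit(X[:-10], y[:-10])                                # train on all but the last windows
    print(net.predict(X[-10:]))                              # check on the held-out windows
    ```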

  11. Neural network modeling of the kinetics of SO2 removal by fly ash-based sorbent.

    PubMed

    Raymond-Ooi, E H; Lee, K T; Mohamed, A R; Chu, K H

    2006-01-01

    The mechanistic modeling of the sulfation reaction between fly ash-based sorbent and SO2 is a challenging task due to a variety of reasons, including the complexity of the reaction itself and the inability to measure some of the key parameters of the reaction. In this work, the possibility of modeling the sulfation reaction kinetics using a purely data-driven neural network was investigated. Experiments on SO2 removal by a sorbent prepared from coal fly ash/CaO/CaSO4 were conducted using a fixed bed reactor to generate a database to train and validate the neural network model. Extensive SO2 removal data points were obtained by varying three process variables, namely, SO2 inlet concentration (500-2000 mg/L), reaction temperature (60-80 °C), and relative humidity (50-70%), as a function of reaction time (0-60 min). Modeling results show that the neural network can provide excellent fits to the SO2 removal data after considerable training and can be successfully used to predict the extent of SO2 removal as a function of time even when the process variables are outside the training domain. From a modeling standpoint, the suitably trained and validated neural network with excellent interpolation and extrapolation properties could have immediate practical benefits in the absence of a theoretical model.
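
    For illustration, a synthetic surrogate of the sulfation data (an assumption, not the measured fixed-bed results) can show the input-output structure of such a model: a feedforward network maps inlet concentration, temperature, relative humidity, and reaction time to the extent of SO2 removal.

    ```python
    # A feedforward regression network over the four process variables used in the study.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 500
    conc = rng.uniform(500, 2000, n)          # SO2 inlet concentration, mg/L
    temp = rng.uniform(60, 80, n)             # reaction temperature, deg C
    rh = rng.uniform(50, 70, n)               # relative humidity, %
    time = rng.uniform(0, 60, n)              # reaction time, min
    removal = (1 - np.exp(-time / 20)) * (0.4 + 0.004 * rh + 0.002 * (80 - temp)) \
              * (2000 / conc) ** 0.2 + 0.02 * rng.standard_normal(n)   # synthetic surrogate

    X = StandardScaler().fit_transform(np.column_stack([conc, temp, rh, time]))
    net = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0)
    net.fit(X, removal)
    print(net.score(X, removal))              # R^2 of the fitted surrogate
    ```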

  12. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model

    PubMed Central

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for the efficient implementation of a Hodgkin-Huxley-based (H-H) model of a neural network on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which puts limits on the network size and on the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and to increase the network size while keeping the network execution speed close to real time with high precision. Implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, such as parallelism and re-configurability, our approach makes the FPGA-based system a suitable candidate for studies of the neural control of cognitive robots and systems as well. PMID:25484854

  13. Generalized Recurrent Neural Network accommodating Dynamic Causal Modeling for functional MRI analysis.

    PubMed

    Wang, Yuan; Wang, Yao; Lui, Yvonne W

    2018-05-18

    Dynamic Causal Modeling (DCM) is an advanced biophysical model which explicitly describes the entire process from experimental stimuli to functional magnetic resonance imaging (fMRI) signals via neural activity and cerebral hemodynamics. To conduct a DCM study, one needs to represent the experimental stimuli as a compact vector-valued function of time, which is hard in complex tasks such as book reading and natural movie watching. Deep learning provides the state-of-the-art signal representation solution, encoding complex signals into compact dense vectors while preserving the essence of the original signals. There is growing interest in using Recurrent Neural Networks (RNNs), a major family of deep learning techniques, in fMRI modeling. However, the generic RNNs used in existing studies work as black boxes, making the interpretation of results in a neuroscience context difficult and obscure. In this paper, we propose a new biophysically interpretable RNN built on DCM, DCM-RNN. We generalize the vanilla RNN and show that DCM can be cast faithfully as a special form of the generalized RNN. DCM-RNN uses back propagation for parameter estimation. We believe DCM-RNN is a promising tool for neuroscience. It can fit seamlessly into classical DCM studies. We demonstrate face validity of DCM-RNN in two principal applications of DCM: causal brain architecture hypotheses testing and effective connectivity estimation. We also demonstrate construct validity of DCM-RNN in an attention-visual experiment. Moreover, DCM-RNN enables end-to-end training of DCM and representation learning deep neural networks, extending DCM studies to complex tasks. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Classification of volcanic ash particles using a convolutional neural network and probability.

    PubMed

    Shoji, Daigo; Noguchi, Rina; Otsuki, Shizuka; Hino, Hideitsu

    2018-05-25

    Analyses of volcanic ash are typically performed either by qualitatively classifying ash particles by eye or by quantitatively parameterizing its shape and texture. While complex shapes can be classified through qualitative analyses, the results are subjective due to the difficulty of categorizing complex shapes into a single class. Although quantitative analyses are objective, selection of shape parameters is required. Here, we applied a convolutional neural network (CNN) for the classification of volcanic ash. First, we defined four basal particle shapes (blocky, vesicular, elongated, rounded) generated by different eruption mechanisms (e.g., brittle fragmentation), and then trained the CNN using particles composed of only one basal shape. The CNN could recognize the basal shapes with over 90% accuracy. Using the trained network, we classified ash particles composed of multiple basal shapes based on the output of the network, which can be interpreted as a mixing ratio of the four basal shapes. Clustering of samples by the averaged probabilities and the intensity is consistent with the eruption type. The mixing ratio output by the CNN can be used to quantitatively classify complex shapes in nature without categorizing forcibly and without the need for shape parameters, which may lead to a new taxonomy.
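
    A structure-only sketch of the probability-averaging step (untrained weights, hypothetical class order and image size; not the authors' trained CNN): per-particle softmax outputs over the four basal shapes are averaged over a sample and read as a mixing ratio.

    ```python
    # Per-particle class probabilities from a small CNN, averaged into a sample-level mixing ratio.
    import torch
    import torch.nn as nn

    classes = ["blocky", "vesicular", "elongated", "rounded"]
    model = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(16 * 16 * 16, len(classes)),
    )

    particles = torch.randn(32, 1, 64, 64)              # a batch of ash-particle images (random here)
    with torch.no_grad():
        probs = torch.softmax(model(particles), dim=1)  # per-particle class probabilities
    mixing_ratio = probs.mean(dim=0)                    # averaged over the sample
    print(dict(zip(classes, mixing_ratio.tolist())))
    ```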

  15. Local community detection as pattern restoration by attractor dynamics of recurrent neural networks.

    PubMed

    Okamoto, Hiroshi

    2016-08-01

    Densely connected parts in networks are referred to as "communities". Community structure is a hallmark of a variety of real-world networks. Individual communities in networks form functional modules of complex systems described by networks. Therefore, finding communities in networks is essential to approaching and understanding complex systems described by networks. In fact, network science has made a great deal of effort to develop effective and efficient methods for detecting communities in networks. Here we put forward a type of community detection, which has been little examined so far but will be practically useful. Suppose that we are given a set of source nodes that includes some (but not all) of "true" members of a particular community; suppose also that the set includes some nodes that are not the members of this community (i.e., "false" members of the community). We propose to detect the community from this "imperfect" and "inaccurate" set of source nodes using attractor dynamics of recurrent neural networks. Community detection by the proposed method can be viewed as restoration of the original pattern from a deteriorated pattern, which is analogous to cue-triggered recall of short-term memory in the brain. We demonstrate the effectiveness of the proposed method using synthetic networks and real social networks for which correct communities are known. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
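
    The idea can be sketched with a toy planted-partition graph (the update rule below is a simple thresholded attractor dynamic with global inhibition and clamped sources, an assumption rather than the paper's exact network): starting from an imperfect source set, the dynamics relax to an attractor whose active units are read out as the community.

    ```python
    # Community restoration from an imperfect seed set via recurrent threshold dynamics.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 60                                          # two planted communities of 30 nodes each
    A = (rng.random((n, n)) < 0.02).astype(float)   # sparse background edges
    A[:30, :30] = rng.random((30, 30)) < 0.5        # dense block: the "true" community
    A[30:, 30:] = rng.random((30, 30)) < 0.5
    A = np.triu(A, 1)
    A = A + A.T                                     # undirected, no self-loops

    seeds = np.array([0, 3, 7, 11, 45])             # imperfect source set: 4 true + 1 false member
    x = np.zeros(n)
    x[seeds] = 1.0

    for _ in range(20):                             # recurrent dynamics with global inhibition
        h = A @ x
        x = (h > h.mean()).astype(float)            # units above the mean input switch on
        x[seeds] = 1.0                              # source nodes stay clamped
    print(np.where(x == 1)[0])                      # recovered community: mostly nodes 0..29 (plus node 45)
    ```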

  16. Neural network submodel as an abstraction tool: relating network performance to combat outcome

    NASA Astrophysics Data System (ADS)

    Jablunovsky, Greg; Dorman, Clark; Yaworsky, Paul S.

    2000-06-01

    Simulation of Command and Control (C2) networks has historically emphasized individual system performance with little architectural context or credible linkage to `bottom- line' measures of combat outcomes. Renewed interest in modeling C2 effects and relationships stems from emerging network intensive operational concepts. This demands improved methods to span the analytical hierarchy between C2 system performance models and theater-level models. Neural network technology offers a modeling approach that can abstract the essential behavior of higher resolution C2 models within a campaign simulation. The proposed methodology uses off-line learning of the relationships between network state and campaign-impacting performance of a complex C2 architecture and then approximation of that performance as a time-varying parameter in an aggregated simulation. Ultimately, this abstraction tool offers an increased fidelity of C2 system simulation that captures dynamic network dependencies within a campaign context.

  17. Magnetoencephalographic imaging of deep corticostriatal network activity during a rewards paradigm.

    PubMed

    Kanal, Eliezer Y; Sun, Mingui; Ozkurt, Tolga E; Jia, Wenyan; Sclabassi, Robert

    2009-01-01

    The human rewards network is a complex system spanning both cortical and subcortical regions. While much is known about the functions of the various components of the network, research on the behavior of the network as a whole has been stymied due to an inability to detect signals at a high enough temporal resolution from both superficial and deep network components simultaneously. In this paper, we describe the application of magnetoencephalographic imaging (MEG) combined with advanced signal processing techniques to this problem. Using data collected while subjects performed a rewards-related gambling paradigm demonstrated to activate the rewards network, we were able to identify neural signals which correspond to deep network activity. We also show that this signal was not observable prior to filtration. These results suggest that MEG imaging may be a viable tool for the detection of deep neural activity.

  18. Delay-dependent dynamical analysis of complex-valued memristive neural networks: Continuous-time and discrete-time cases.

    PubMed

    Wang, Jinling; Jiang, Haijun; Ma, Tianlong; Hu, Cheng

    2018-05-01

    This paper considers the delay-dependent stability of memristive complex-valued neural networks (MCVNNs). A novel linear mapping function is presented to transform the complex-valued system into a real-valued system. Under this mapping function, both continuous-time and discrete-time MCVNNs are analyzed in this paper. Firstly, when activation functions are continuous but not Lipschitz continuous, an extended matrix inequality is proved to ensure the stability of continuous-time MCVNNs. Furthermore, if activation functions are discontinuous, a discontinuous adaptive controller is designed to achieve stability by applying Lyapunov-Krasovskii functionals. Secondly, in contrast to the techniques used for continuous-time MCVNNs, the Halanay-type inequality and the comparison principle are used for the first time to explore the dynamical behaviors of discrete-time MCVNNs. Finally, the effectiveness of the theoretical results is illustrated through numerical examples. Copyright © 2018 Elsevier Ltd. All rights reserved.
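
    As a short numerical aside, the generic complex-to-real rewriting that such analyses rely on can be checked directly (the paper's specific mapping function may differ): a complex-valued linear transformation is equivalent to a real block matrix acting on the stacked real and imaginary parts.

    ```python
    # Check that W z (complex) equals the real block system [[A, -B], [B, A]] acting on [Re(z); Im(z)].
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    z = rng.standard_normal(n) + 1j * rng.standard_normal(n)

    A, B = W.real, W.imag
    M = np.block([[A, -B], [B, A]])                 # real-valued equivalent system matrix
    v = np.concatenate([z.real, z.imag])            # stacked real and imaginary state

    lhs = W @ z
    rhs = M @ v
    assert np.allclose(lhs.real, rhs[:n]) and np.allclose(lhs.imag, rhs[n:])
    ```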

  19. An Intelligent Gear Fault Diagnosis Methodology Using a Complex Wavelet Enhanced Convolutional Neural Network.

    PubMed

    Sun, Weifang; Yao, Bin; Zeng, Nianyin; Chen, Binqiang; He, Yuchao; Cao, Xincheng; He, Wangpeng

    2017-07-12

    As a typical example of large and complex mechanical systems, rotating machinery is prone to diversified sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction originates in gear transmission chains. Although fault signatures can be collected via vibration signals, they are always submerged in overwhelming interfering content. Therefore, identifying the critical fault's characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale signal's features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise a fault feature from the multiscale signal features. The experimental results for gear fault recognition show the feasibility and effectiveness of the proposed method, especially for weak gear fault features.

  20. Mdm2 mediates FMRP- and Gp1 mGluR-dependent protein translation and neural network activity.

    PubMed

    Liu, Dai-Chi; Seimetz, Joseph; Lee, Kwan Young; Kalsotra, Auinash; Chung, Hee Jung; Lu, Hua; Tsai, Nien-Pei

    2017-10-15

    Activating Group 1 (Gp1) metabotropic glutamate receptors (mGluRs), including mGluR1 and mGluR5, elicits translation-dependent neural plasticity mechanisms that are crucial to animal behavior and circuit development. Dysregulated Gp1 mGluR signaling has been observed in numerous neurological and psychiatric disorders. However, the molecular pathways underlying Gp1 mGluR-dependent plasticity mechanisms are complex and have been elusive. In this study, we identified a novel mechanism through which Gp1 mGluR mediates protein translation and neural plasticity. Using a multi-electrode array (MEA) recording system, we showed that activating Gp1 mGluR elevates neural network activity, as demonstrated by increased spontaneous spike frequency and burst activity. Importantly, we validated that elevating neural network activity requires protein translation and is dependent on fragile X mental retardation protein (FMRP), the protein that is deficient in the most common inherited form of mental retardation and autism, fragile X syndrome (FXS). In an effort to determine the mechanism by which FMRP mediates protein translation and neural network activity, we demonstrated that a ubiquitin E3 ligase, murine double minute-2 (Mdm2), is required for Gp1 mGluR-induced translation and neural network activity. Our data showed that Mdm2 acts as a translation suppressor, and FMRP is required for its ubiquitination and down-regulation upon Gp1 mGluR activation. These data revealed a novel mechanism by which Gp1 mGluR and FMRP mediate protein translation and neural network activity, potentially through de-repressing Mdm2. Our results also introduce an alternative way for understanding altered protein translation and brain circuit excitability associated with Gp1 mGluR in neurological diseases such as FXS. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Toward Rigorous Parameterization of Underconstrained Neural Network Models Through Interactive Visualization and Steering of Connectivity Generation

    PubMed Central

    Nowke, Christian; Diaz-Pier, Sandra; Weyers, Benjamin; Hentschel, Bernd; Morrison, Abigail; Kuhlen, Torsten W.; Peyser, Alexander

    2018-01-01

    Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases: the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable an interactive exploration of parameter spaces, a better understanding of neural network models, and a means to grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, owing to interactive visualization while the simulation is being computed. PMID:29937723

  2. Self-organization of network dynamics into local quantized states.

    PubMed

    Nicolaides, Christos; Juanes, Ruben; Cueto-Felgueroso, Luis

    2016-02-17

    Self-organization and pattern formation in network-organized systems emerges from the collective activation and interaction of many interconnected units. A striking feature of these non-equilibrium structures is that they are often localized and robust: only a small subset of the nodes, or cell assembly, is activated. Understanding the role of cell assemblies as basic functional units in neural networks and socio-technical systems emerges as a fundamental challenge in network theory. A key open question is how these elementary building blocks emerge, and how they operate, linking structure and function in complex networks. Here we show that a network analogue of the Swift-Hohenberg continuum model, a minimal-ingredients model of nodal activation and interaction within a complex network, is able to produce a complex suite of localized patterns. Hence, the spontaneous formation of robust operational cell assemblies in complex networks can be explained as the result of self-organization, even in the absence of synaptic reinforcements.

  3. Assessing the effect of quantitative and qualitative predictors on gastric cancer individuals survival using hierarchical artificial neural network models.

    PubMed

    Amiri, Zohreh; Mohammad, Kazem; Mahmoudi, Mahmood; Parsaeian, Mahbubeh; Zeraati, Hojjat

    2013-01-01

    There are numerous unanswered questions in the application of artificial neural network models to the analysis of survival data. In most studies, independent variables have been treated as qualitative dichotomous variables, and the behavior of these models with discrete and continuous quantitative, ordinal, or multinomial categorical predictors is not well understood in comparison to conventional models. This study was designed and conducted to examine the application of these models for determining the survival of gastric cancer patients, in comparison to the Cox proportional hazards model. We studied the postoperative survival of 330 gastric cancer patients who underwent surgery at a surgical unit of the Iran Cancer Institute over a five-year period. Covariates of age, gender, history of substance abuse, cancer site, type of pathology, presence of metastasis, stage, and number of complementary treatments were entered into the models, and survival probabilities were calculated at 6, 12, 18, 24, 36, 48, and 60 months using the Cox proportional hazards and neural network models. We estimated the coefficients of the Cox model and the weights of the neural networks (with 3, 5, and 7 nodes in the hidden layer) in the training group, and used them to derive predictions in the study group. Predictions from these two methods were compared with those of the Kaplan-Meier product limit estimator as the gold standard, using the Friedman and Kruskal-Wallis tests. For the Cox proportional hazards model and the neural network with three hidden nodes, the ratios of standard errors to the Kaplan-Meier method were 1.1593 and 1.0071, respectively; this revealed a significant difference between Cox and Kaplan-Meier (P < 0.05), no significant difference between Cox and the neural network or between the neural network and the standard (Kaplan-Meier), and better accuracy for the neural network with 3 hidden nodes. Survival probabilities were also calculated using the three neural network models with 3, 5, and 7 hidden nodes; none of their predictions differed significantly from the Kaplan-Meier results, and the predictions became more comparable toward the last months (fifth year). However, we observed better accuracy with the neural network with 5 hidden nodes. Comparing the Cox proportional hazards model and a neural network with 3 hidden nodes, we found enhanced accuracy with the neural network model. Neural networks can provide more accurate predictions of survival probabilities than the Cox proportional hazards model, especially now that advances in computer science have eliminated limitations associated with complex computations. Adding too many hidden-layer nodes is not recommended, because sample-size-related effects can reduce accuracy. We recommend increasing the number of nodes only as long as accuracy continues to improve (i.e., the mean standard error decreases), and stopping once this trend reverses.
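
    A minimal sketch of the kind of comparison described above, assuming the lifelines and scikit-learn libraries: a Cox model and a neural network with 3 hidden nodes are fitted, and the network's predicted survival past fixed horizons is compared to the Kaplan-Meier estimate. The data frame, column names, and covariates below are synthetic placeholders, not the study's data.

    ```python
    # Hypothetical sketch: Cox regression, a 3-hidden-node neural network, and the
    # Kaplan-Meier estimator on synthetic survival data (placeholders only).
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter, KaplanMeierFitter
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n = 330
    df = pd.DataFrame({
        "age": rng.normal(60, 10, n),
        "stage": rng.integers(1, 5, n),
        "metastasis": rng.integers(0, 2, n),
        "time": rng.exponential(30, n),      # follow-up time in months
        "event": rng.integers(0, 2, n),      # 1 = death observed, 0 = censored
    })

    # Cox model on all remaining columns; Kaplan-Meier as the reference curve.
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    km = KaplanMeierFitter().fit(df["time"], event_observed=df["event"])

    X = df[["age", "stage", "metastasis"]].to_numpy()
    for horizon in (6, 12, 24, 60):
        # Only patients whose status at the horizon is known are usable labels.
        known = ((df["time"] > horizon) | (df["event"] == 1)).to_numpy()
        y = (df["time"] > horizon).astype(int).to_numpy()
        clf = MLPClassifier(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
        clf.fit(X[known], y[known])
        print(horizon, "ANN mean S(t):", clf.predict_proba(X[known])[:, 1].mean(),
              "KM S(t):", float(km.predict(horizon)))
    ```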

  4. Continuous monitoring of the lunar or Martian subsurface using on-board pattern recognition and neural processing of Rover geophysical data

    NASA Technical Reports Server (NTRS)

    Mcgill, J. W.; Glass, C. E.; Sternberg, B. K.

    1990-01-01

    The ultimate goal is to create an extraterrestrial unmanned system for subsurface mapping and exploration. Neural networks are to be used to recognize anomalies in the profiles that correspond to potentially exploitable subsurface features. Because ground penetrating radar (GPR) and seismic techniques produce records of a similar form, the preliminary research focus on GPR systems will be directly applicable to seismic systems once such systems can be designed for continuous operation. The original GPR profile may be very complex due to the electrical behavior of the background, targets, and antennas, much as the seismic record is made complex by multiple reflections, ghosting, and ringing. Because the format of GPR data is similar to that of seismic data, seismic processing software may be applied to GPR data to help enhance it. A neural network may then be trained to identify anomalies more accurately from the processed record than from the original record.

  5. Attraction Basins as Gauges of Robustness against Boundary Conditions in Biological Complex Systems

    PubMed Central

    Demongeot, Jacques; Goles, Eric; Morvan, Michel; Noual, Mathilde; Sené, Sylvain

    2010-01-01

    One fundamental concept in the context of biological systems, on which research has flourished in the past decade, is the apparent robustness of these systems, i.e., their ability to resist perturbations or constraints induced by external or boundary elements such as electromagnetic fields acting on neural networks, micro-RNAs acting on genetic networks, and even hormone flows acting on both neural and genetic networks. Recent studies have shown the importance of addressing the question of the environmental robustness of biological networks such as neural and genetic networks. In some cases, external regulatory elements can be given a relevant formal representation by assimilating them to, or modeling them by, boundary conditions. This article presents a generic mathematical approach to understanding the influence of boundary elements on the dynamics of regulation networks, considering their attraction basins as gauges of their robustness. The application of this method to a real genetic regulation network points out a mathematical explanation of a biological phenomenon that had previously only been observed experimentally, namely the necessity of the presence of gibberellin for the flower of the plant Arabidopsis thaliana to develop normally. PMID:20700525

  6. An adaptable neural-network model for recursive nonlinear traffic prediction and modeling of MPEG video sources.

    PubMed

    Doulamis, A D; Doulamis, N D; Kollias, S D

    2003-01-01

    Multimedia services, and especially digital video, are expected to be the major traffic component transmitted over communication networks [such as internet protocol (IP)-based networks]. For this reason, traffic characterization and modeling of such services are required for efficient network operation. The generated models can be used as traffic rate predictors during the network operation phase (online traffic modeling), or as video generators for estimating network resources during the network design phase (offline traffic modeling). In this paper, an adaptable neural-network architecture is proposed covering both cases. The scheme is based on an efficient recursive weight estimation algorithm, which adapts the network response to current conditions. In particular, the algorithm updates the network weights so that 1) the network output, after the adaptation, is approximately equal to current bit rates (current traffic statistics) and 2) only a minimal degradation of the previously obtained network knowledge is incurred. It can be shown that the proposed adaptable neural-network architecture simulates a recursive nonlinear autoregressive model (RNAR), similar to the notation used in the linear case. The algorithm presents low computational complexity and high efficiency in tracking traffic rates, in contrast to conventional retraining schemes. Furthermore, for the problem of offline traffic modeling, a novel correlation mechanism is proposed for capturing the burstiness of the actual MPEG video traffic. The performance of the model is evaluated using several real-life MPEG-coded video sources of long duration and compared with other linear/nonlinear techniques used for both cases. The results indicate that the proposed adaptable neural-network architecture presents better performance than the other examined techniques.

  7. Efficient Transmission of Subthreshold Signals in Complex Networks of Spiking Neurons

    PubMed Central

    Torres, Joaquin J.; Elices, Irene; Marro, J.

    2015-01-01

    We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of underlying noise. Assuming Hebbian weights for maximal synaptic conductances, which naturally balance the network with excitatory and inhibitory synapses, and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system. These include a memory phase where populations of neurons remain synchronized, an oscillatory phase where transitions between different synchronized populations of neurons appear, and an asynchronous or noisy phase. When a weak stimulus is applied to each neuron and the level of noise in the medium is increased, we found efficient transmission of such stimuli around the transition and critical points separating the different phases, for well-defined levels of stochasticity in the system. We proved that this intriguing phenomenon is quite robust, as it occurs in different situations including several types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, namely diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity, and complex network topologies, makes it very likely that it could also occur in actual neural systems, as recent psychophysical experiments suggest. PMID:25799449

  8. Two's company, three (or more) is a simplex : Algebraic-topological tools for understanding higher-order structure in neural data.

    PubMed

    Giusti, Chad; Ghrist, Robert; Bassett, Danielle S

    2016-08-01

    The language of graph theory, or network science, has proven to be an exceptional tool for addressing myriad problems in neuroscience. Yet, the use of networks is predicated on a critical simplifying assumption: that the quintessential unit of interest in a brain is a dyad - two nodes (neurons or brain regions) connected by an edge. While rarely mentioned, this fundamental assumption inherently limits the types of neural structure and function that graphs can be used to model. Here, we describe a generalization of graphs that overcomes these limitations, thereby offering a broad range of new possibilities in terms of modeling and measuring neural phenomena. Specifically, we explore the use of simplicial complexes: a structure developed in the field of mathematics known as algebraic topology, of increasing applicability to real data due to a rapidly growing computational toolset. We review the underlying mathematical formalism as well as the budding literature applying simplicial complexes to neural data, from electrophysiological recordings in animal models to hemodynamic fluctuations in humans. Based on the exceptional flexibility of the tools and recent ground-breaking insights into neural function, we posit that this framework has the potential to eclipse graph theory in unraveling the fundamental mysteries of cognition.

  9. Relationships between music training, speech processing, and word learning: a network perspective.

    PubMed

    Elmer, Stefan; Jäncke, Lutz

    2018-03-15

    Numerous studies have documented the behavioral advantages conferred on professional musicians and children undergoing music training in processing speech sounds varying in the spectral and temporal dimensions. These beneficial effects have previously often been associated with local functional and structural changes in the auditory cortex (AC). However, this perspective is oversimplified, in that it does not take into account the intrinsic organization of the human brain, namely, neural networks and oscillatory dynamics. Therefore, we propose a new framework for extending these previous findings to a network perspective by integrating multimodal imaging, electrophysiology, and neural oscillations. In particular, we provide concrete examples of how functional and structural connectivity can be used to model simple neural circuits exerting a modulatory influence on AC activity. In addition, we describe how such a network approach can be used for better comprehending the beneficial effects of music training on more complex speech functions, such as word learning. © 2018 New York Academy of Sciences.

  10. Hybrid multiphoton volumetric functional imaging of large-scale bioengineered neuronal networks

    NASA Astrophysics Data System (ADS)

    Dana, Hod; Marom, Anat; Paluch, Shir; Dvorkin, Roman; Brosh, Inbar; Shoham, Shy

    2014-06-01

    Planar neural networks and interfaces serve as versatile in vitro models of central nervous system physiology, but adaptations of related methods to three dimensions (3D) have met with limited success. Here, we demonstrate for the first time volumetric functional imaging in a bioengineered neural tissue growing in a transparent hydrogel with cortical cellular and synaptic densities, by introducing complementary new developments in nonlinear microscopy and neural tissue engineering. Our system uses a novel hybrid multiphoton microscope design combining a 3D scanning-line temporal-focusing subsystem and a conventional laser-scanning multiphoton microscope to provide functional and structural volumetric imaging capabilities: dense microscopic 3D sampling at tens of volumes per second of structures with mm-scale dimensions containing a network of over 1,000 developing cells with complex spontaneous activity patterns. These developments open new opportunities for large-scale neuronal interfacing and for applications of 3D engineered networks ranging from basic neuroscience to the screening of neuroactive substances.

  11. Automatic sleep stage classification of single-channel EEG by using complex-valued convolutional neural network.

    PubMed

    Zhang, Junming; Wu, Yan

    2018-03-28

    Many systems have been developed for automatic sleep stage classification. However, nearly all models are based on handcrafted features. Because the feature space is large, feature selection has to be used, and designing handcrafted features is a difficult and time-consuming task that requires the domain knowledge of experienced experts. Results vary when different sets of features are chosen to identify sleep stages. Additionally, there may be important features for sleep stage classification that we are simply unaware of. Therefore, a new sleep stage classification system based on a complex-valued convolutional neural network (CCNN) is proposed in this study. Unlike existing sleep stage methods, our method can automatically extract features from raw electroencephalography data and then classify sleep stages based on the learned features. We also prove that the decision boundaries for the real and imaginary parts of a complex-valued convolutional neuron intersect orthogonally. The classification performance of handcrafted features is compared with that of features learned by the CCNN. Experimental results show that the proposed method is comparable to existing methods, and that the CCNN obtains better classification performance and considerably faster convergence than a real-valued convolutional neural network. The experimental results also show that the proposed method is a useful decision-support tool for automatic sleep stage classification.
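
    The core building block of such a network can be sketched as a complex-valued convolution implemented with two real-valued convolutions, following (W_r + iW_i)(x_r + ix_i) = (W_r x_r - W_i x_i) + i(W_r x_i + W_i x_r). The PyTorch sketch below illustrates that operation only; it is not the authors' CCNN architecture, and the layer sizes are assumptions.

    ```python
    # Sketch of a complex-valued convolution built from real-valued layers.
    import torch
    import torch.nn as nn

    class ComplexConv1d(nn.Module):
        def __init__(self, in_ch, out_ch, kernel_size, **kw):
            super().__init__()
            self.conv_r = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)
            self.conv_i = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)

        def forward(self, x_r, x_i):
            y_r = self.conv_r(x_r) - self.conv_i(x_i)   # real part
            y_i = self.conv_r(x_i) + self.conv_i(x_r)   # imaginary part
            return y_r, y_i

    # Toy usage on a single-channel EEG epoch (real input, zero imaginary part).
    epoch = torch.randn(1, 1, 3000)                     # batch, channel, samples
    layer = ComplexConv1d(1, 8, kernel_size=64, padding=32)
    real, imag = layer(epoch, torch.zeros_like(epoch))
    magnitude = torch.sqrt(real ** 2 + imag ** 2)       # a common real-valued readout
    ```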

  12. Medical image processing using neural networks based on multivalued and universal binary neurons

    NASA Astrophysics Data System (ADS)

    Aizenberg, Igor N.; Aizenberg, Naum N.; Gotko, Eugen S.; Sochka, Vladimir A.

    1998-06-01

    Cellular Neural Networks (CNNs) have become an effective means of solving different kinds of image processing problems. CNNs based on multi-valued neurons (CNN-MVN) and CNNs based on universal binary neurons (CNN-UBN) are specific kinds of CNN. MVNs and UBNs are neurons with complex-valued weights and complex internal arithmetic. Their main feature is the ability to implement an arbitrary mapping between inputs and output described by the MVN, and an arbitrary (not only threshold) Boolean function in the case of the UBN. A great advantage of the CNN is the possibility of implementing any linear, and many nonlinear, filters in the spatial domain. In addition to noise removal, CNNs make it possible to implement filters that amplify high and medium frequencies. Such filters are well suited to the enhancement problem and to the problem of extracting details against a complex background. Thus, a CNN makes it possible to organize the entire processing chain, from filtering to extraction of the important details. The organization of this process for medical image processing is considered in the paper, with particular attention to the processing of X-ray and ultrasound images corresponding to different oncological (or closely related) pathologies. Additionally, we consider a new neural network structure for the problem of differential diagnosis of breast cancer.

  13. Neural control of magnetic suspension systems

    NASA Technical Reports Server (NTRS)

    Gray, W. Steven

    1993-01-01

    The purpose of this research program is to design, build and test (in cooperation with NASA personnel from the NASA Langley Research Center) neural controllers for two different small air-gap magnetic suspension systems. The general objective of the program is to study neural network architectures for the purpose of control in an experimental setting and to demonstrate the feasibility of the concept. The specific objectives of the research program are: (1) to demonstrate through simulation and experimentation the feasibility of using neural controllers to stabilize a nonlinear magnetic suspension system; (2) to investigate through simulation and experimentation the performance of neural controller designs under various types of parametric and nonparametric uncertainty; (3) to investigate through simulation and experimentation various types of neural architectures for real-time control with respect to performance and complexity; and (4) to benchmark in an experimental setting the performance of neural controllers against other types of existing linear and nonlinear compensator designs. To date, the first one-dimensional, small air-gap magnetic suspension system has been built, tested and delivered to the NASA Langley Research Center. The device is currently being stabilized with a digital linear phase-lead controller. The neural controller hardware is under construction. Two different neural network paradigms are under consideration, one based on hidden-layer feedforward networks trained via back-propagation and one based on Gaussian radial basis functions trained by analytical methods related to stability conditions. Some advanced nonlinear control algorithms using feedback linearization and sliding mode control are being evaluated in simulation studies.

  14. Signal processing and neural network toolbox and its application to failure diagnosis and prognosis

    NASA Astrophysics Data System (ADS)

    Tu, Fang; Wen, Fang; Willett, Peter K.; Pattipati, Krishna R.; Jordan, Eric H.

    2001-07-01

    Many systems are composed of components equipped with self-testing capability; however, if the system is complex, involves feedback, and the self-testing itself may occasionally be faulty, tracing faults to a single or multiple causes is difficult. Moreover, many sensors are incapable of reliable decision-making on their own. In such cases, a signal processing front-end that can match inference needs will be very helpful. This work is concerned with providing an object-oriented simulation environment for signal processing and neural network-based fault diagnosis and prognosis. In the toolbox, we implemented a wide range of spectral and statistical manipulation methods such as filters, harmonic analyzers, transient detectors, and multi-resolution decomposition to extract features for failure events from data collected by sensors. We then evaluated multiple learning paradigms for general classification, diagnosis and prognosis. The network models evaluated include Restricted Coulomb Energy (RCE) Neural Network, Learning Vector Quantization (LVQ), Decision Trees (C4.5), Fuzzy Adaptive Resonance Theory (FuzzyArtmap), Linear Discriminant Rule (LDR), Quadratic Discriminant Rule (QDR), Radial Basis Functions (RBF), Multiple Layer Perceptrons (MLP) and Single Layer Perceptrons (SLP). Validation techniques, such as N-fold cross-validation and bootstrap techniques, are employed for evaluating the robustness of the network models. The trained networks are evaluated on test data on the basis of percent error rates obtained via cross-validation, time efficiency, and generalization ability to unseen faults. Finally, the use of neural networks for the prediction of residual life of turbine blades with thermal barrier coatings is described and the results are shown. The neural network toolbox has also been applied to fault diagnosis in mixed-signal circuits.

  15. Multi-objective evolutionary optimization for constructing neural networks for virtual reality visual data mining: application to geophysical prospecting.

    PubMed

    Valdés, Julio J; Barton, Alan J

    2007-05-01

    A method for the construction of virtual reality spaces for visual data mining using multi-objective optimization with genetic algorithms on nonlinear discriminant (NDA) neural networks is presented. Two neural network layers (the output and the last hidden) are used for the construction of simultaneous solutions for: (i) a supervised classification of data patterns and (ii) an unsupervised similarity structure preservation between the original data matrix and its image in the new space. A set of spaces are constructed from selected solutions along the Pareto front. This strategy represents a conceptual improvement over spaces computed by single-objective optimization. In addition, genetic programming (in particular gene expression programming) is used for finding analytic representations of the complex mappings generating the spaces (a composition of NDA and orthogonal principal components). The presented approach is domain independent and is illustrated via application to the geophysical prospecting of caves.

  16. Genetic attack on neural cryptography.

    PubMed

    Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka; Kanter, Ido

    2006-03-01

    Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.
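
    For context, neural cryptography of this kind is usually formulated with tree parity machines (TPMs) synchronized by mutual Hebbian learning. The sketch below illustrates that synchronization step under assumed parameters K, N, and synaptic depth L; it is an illustrative setting for the attacks discussed above, not the genetic attack algorithm itself.

    ```python
    # Minimal sketch of tree parity machine synchronization with the Hebbian rule,
    # the standard setting assumed for attacks on neural cryptography.
    import numpy as np

    K, N, L = 3, 100, 5            # hidden units, inputs per unit, synaptic depth
    rng = np.random.default_rng(1)

    def tpm_output(w, x):
        sigma = np.sign(np.sum(w * x, axis=1))
        sigma[sigma == 0] = -1                     # break ties
        return sigma, int(np.prod(sigma))

    def hebbian_update(w, x, sigma, tau):
        for k in range(K):
            if sigma[k] == tau:                    # update only agreeing units
                w[k] = np.clip(w[k] + sigma[k] * x[k], -L, L)

    w_a = rng.integers(-L, L + 1, size=(K, N))     # party A's secret weights
    w_b = rng.integers(-L, L + 1, size=(K, N))     # party B's secret weights

    steps = 0
    while not np.array_equal(w_a, w_b):
        x = rng.choice([-1, 1], size=(K, N))       # public random inputs
        s_a, tau_a = tpm_output(w_a, x)
        s_b, tau_b = tpm_output(w_b, x)
        if tau_a == tau_b:                         # only the outputs are exchanged
            hebbian_update(w_a, x, s_a, tau_a)
            hebbian_update(w_b, x, s_b, tau_b)
        steps += 1

    print("synchronized after", steps, "exchanged outputs")
    ```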

  17. Flexible timing by temporal scaling of cortical responses

    PubMed Central

    Wang, Jing; Narain, Devika; Hosseini, Eghbal A.; Jazayeri, Mehrdad

    2017-01-01

    Musicians can perform at different tempos, speakers can control the cadence of their speech, and children can flexibly vary their temporal expectations of events. To understand the neural basis of such flexibility, we recorded from the medial frontal cortex of nonhuman primates trained to produce different time intervals with different effectors. Neural responses were heterogeneous, nonlinear and complex, and exhibited a remarkable form of temporal invariance: firing rate profiles were temporally scaled to match the produced intervals. Recording from downstream neurons in the caudate and thalamic neurons projecting to the medial frontal cortex indicated that this phenomenon originates within cortical networks. Recurrent neural network models trained to perform the task revealed that temporal scaling emerges from nonlinearities in the network and degree of scaling is controlled by the strength of external input. These findings demonstrate a simple and general mechanism for conferring temporal flexibility upon sensorimotor and cognitive functions. PMID:29203897

  18. Enabling large-scale viscoelastic calculations via neural network acceleration

    NASA Astrophysics Data System (ADS)

    Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.

    2017-12-01

    One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity is the computational cost of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
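
    The surrogate idea can be sketched generically: sample an expensive forward calculation, train a small regression network on the samples, and reuse it as a fast approximation. In the sketch below the "expensive" function is a synthetic stand-in, not a viscoelastic solver, and all sizes are assumptions.

    ```python
    # Generic surrogate-model sketch: an MLP learns a cheap approximation of an
    # expensive forward calculation (the costly function here is a stand-in).
    import time
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def expensive_response(params):
        """Stand-in for a costly forward calculation (illustrative only)."""
        time.sleep(0.001)                       # pretend each evaluation is slow
        x, y = params
        return np.exp(-0.1 * x) * np.sin(y) + 0.05 * x * y

    rng = np.random.default_rng(0)
    X_train = rng.uniform(0, 10, size=(2000, 2))
    y_train = np.array([expensive_response(p) for p in X_train])

    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                             random_state=0).fit(X_train, y_train)

    X_test = rng.uniform(0, 10, size=(5, 2))
    print("surrogate:", surrogate.predict(X_test))
    print("reference:", [expensive_response(p) for p in X_test])
    ```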

  19. Movement decoupling control for two-axis fast steering mirror

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Qiao, Yongming; Lv, Tao

    2017-02-01

    A two-axis fast steering mirror based on flexure hinges and piezoelectric actuators is a complex system with time-varying, uncertain, and strongly coupled dynamics, and it is extremely difficult to achieve high-precision decoupling control with the traditional PID control method. A feedback-error-learning method was used to establish an inverse hysteresis model of the piezoceramic actuator, based on an inner-product dynamic neural network capable of representing its nonlinear and non-smooth behavior. To improve actuator precision, a control method was proposed based on the piezoceramic inverse model and adaptive control with two dynamic neural networks. The experimental results indicate that, with the proposed two-neural-network adaptive movement decoupling control algorithm, the static relative error is reduced from 4.44% to 0.30% and the static coupling is reduced from 12.71% to 0.60%, while the dynamic relative error is reduced from 13.92% to 2.85% and the dynamic coupling is reduced from 2.63% to 1.17%.

  20. Genetic attack on neural cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka

    2006-03-15

    Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.

  1. Genetic attack on neural cryptography

    NASA Astrophysics Data System (ADS)

    Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka; Kanter, Ido

    2006-03-01

    Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.

  2. Deep Convolutional Neural Networks for Classifying Body Constitution Based on Face Image.

    PubMed

    Huan, Er-Yang; Wen, Gui-Hua; Zhang, Shi-Jun; Li, Dan-Yang; Hu, Yang; Chang, Tian-Yuan; Wang, Qing; Huang, Bing-Lin

    2017-01-01

    Body constitution classification is the basis and core content of traditional Chinese medicine constitution research. Its aim is to extract the relevant laws from the complex phenomenon of constitution and ultimately to build a constitution classification system. Traditional identification methods, such as questionnaires, have the disadvantages of inefficiency and low accuracy. This paper proposes a body constitution recognition algorithm based on a deep convolutional neural network, which can classify individual constitution types from face images. The proposed model first uses the convolutional neural network to extract features from the face image and then combines the extracted features with color features. Finally, the fused features are input to a Softmax classifier to obtain the classification result. Comparison experiments show that the proposed algorithm achieves an accuracy of 65.29% for constitution classification, a performance accepted by Chinese medicine practitioners.
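
    The fusion step described above can be sketched as follows: features from a small CNN are concatenated with a color-feature vector and passed to a linear layer trained with a softmax loss. The architecture, feature dimensions, and nine-class output below are assumptions, not the paper's specification.

    ```python
    # Illustrative sketch of CNN-feature / color-feature fusion before a softmax
    # classifier. Dimensions and class count are assumptions.
    import torch
    import torch.nn as nn

    class ConstitutionClassifier(nn.Module):
        def __init__(self, n_color_feats=48, n_classes=9):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),                          # -> 32-dim image feature
            )
            self.classifier = nn.Linear(32 + n_color_feats, n_classes)

        def forward(self, image, color_feats):
            fused = torch.cat([self.cnn(image), color_feats], dim=1)
            return self.classifier(fused)              # softmax is applied in the loss

    # Toy usage: one 224x224 face image plus a 48-bin color histogram.
    image = torch.randn(1, 3, 224, 224)
    color_hist = torch.rand(1, 48)
    logits = ConstitutionClassifier()(image, color_hist)
    probs = torch.softmax(logits, dim=1)
    ```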

  3. An Artificial Neural Network Evaluation of Tuberculosis Using Genetic and Physiological Patient Data

    NASA Astrophysics Data System (ADS)

    Griffin, William O.; Hanna, Josh; Razorilova, Svetlana; Kitaev, Mikhael; Alisherov, Avtandiil; Darsey, Jerry A.; Tarasenko, Olga

    2010-04-01

    When doctors see more cases of patients with tell-tale symptoms of a disease, it is hoped that they will be able to recognize an infection and administer treatment appropriately, thereby speeding up recovery for sick patients. We hope that our studies can aid in the detection of tuberculosis (TB) by using a computer model called an artificial neural network. Our model looks at patients with and without TB. The data that the neural network examined comprised the following: patients' age, gender, place of birth, blood type, Rhesus (Rh) factor, and genes of the human leukocyte antigen (HLA) system (9q34.1) present in the Major Histocompatibility Complex. With the availability of genetic data and good research, we hope to give doctors an advantage in the detection of tuberculosis. We try to mimic the doctor's experience with a computer test, which learns from patient data the factors that contribute to TB.

  4. Approximating quantum many-body wave functions using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Cai, Zi; Liu, Jinguo

    2018-01-01

    In this paper, we demonstrate the expressibility of artificial neural networks (ANNs) in quantum many-body physics by showing that a feed-forward neural network with a small number of hidden layers can be trained to approximate with high precision the ground states of some notable quantum many-body systems. We consider one-dimensional free bosons and fermions, spinless fermions on a square lattice away from half-filling, as well as frustrated quantum magnetism with a rapidly oscillating ground-state characteristic function. In the latter case, an ANN with a standard architecture fails, while one with a slightly modified architecture successfully learns the frustration-induced complex sign rule in the ground state and approximates the ground states with high precision. As an example of the practical use of our method, we also perform the variational method to explore the ground state of an antiferromagnetic J1-J2 Heisenberg model.
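
    A toy version of this variational idea, for a 4-spin transverse-field Ising chain rather than the models studied in the paper: a small feed-forward network assigns a real amplitude to each basis configuration, and gradient descent minimizes the Rayleigh quotient. All sizes and hyperparameters are illustrative.

    ```python
    # Toy variational sketch: a feed-forward network parameterizes the amplitudes of
    # a 4-spin transverse-field Ising ground state (illustrative, not the paper's models).
    import itertools
    import numpy as np
    import torch
    import torch.nn as nn

    N, h = 4, 1.0
    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])
    I2 = np.eye(2)

    def kron_site(op, site):
        mats = [op if i == site else I2 for i in range(N)]
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    H = sum(-kron_site(sz, i) @ kron_site(sz, (i + 1) % N) for i in range(N))
    H += sum(-h * kron_site(sx, i) for i in range(N))
    H_t = torch.tensor(H, dtype=torch.float64)

    # All 2^N spin configurations as +/-1 network inputs.
    configs = torch.tensor(list(itertools.product([1., -1.], repeat=N)),
                           dtype=torch.float64)

    net = nn.Sequential(nn.Linear(N, 16), nn.Tanh(), nn.Linear(16, 1)).double()
    opt = torch.optim.Adam(net.parameters(), lr=0.02)

    for step in range(2000):
        psi = net(configs).squeeze(1)                  # unnormalized real amplitudes
        energy = psi @ H_t @ psi / (psi @ psi)         # Rayleigh quotient
        opt.zero_grad()
        energy.backward()
        opt.step()

    print("variational energy:", float(energy))
    print("exact ground state:", float(np.linalg.eigvalsh(H).min()))
    ```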

  5. Inferring general relations between network characteristics from specific network ensembles.

    PubMed

    Cardanobile, Stefano; Pernice, Volker; Deger, Moritz; Rotter, Stefan

    2012-01-01

    Different network models have been suggested for the topology underlying complex interactions in natural systems. These models are aimed at replicating specific statistical features encountered in real-world networks. However, it is rarely considered to which degree the results obtained for one particular network class can be extrapolated to real-world networks. We address this issue by comparing different classical and more recently developed network models with respect to their ability to generate networks with large structural variability. In particular, we consider the statistical constraints which the respective construction scheme imposes on the generated networks. After having identified the most variable networks, we address the issue of which constraints are common to all network classes and are thus suitable candidates for being generic statistical laws of complex networks. In fact, we find that generic, not model-related dependencies between different network characteristics do exist. This makes it possible to infer global features from local ones using regression models trained on networks with high generalization power. Our results confirm and extend previous findings regarding the synchronization properties of neural networks. Our method seems especially relevant for large networks, which are difficult to map completely, like the neural networks in the brain. The structure of such large networks cannot be fully sampled with the present technology. Our approach provides a method to estimate global properties of under-sampled networks in good approximation. Finally, we demonstrate on three different data sets (C. elegans neuronal network, R. prowazekii metabolic network, and a network of synonyms extracted from Roget's Thesaurus) that real-world networks have statistical relations compatible with those obtained using regression models.

  6. Modeling Belt-Servomechanism by Chebyshev Functional Recurrent Neuro-Fuzzy Network

    NASA Astrophysics Data System (ADS)

    Huang, Yuan-Ruey; Kang, Yuan; Chu, Ming-Hui; Chang, Yeon-Pun

    A novel Chebyshev functional recurrent neuro-fuzzy (CFRNF) network is developed from a combination of the Takagi-Sugeno-Kang (TSK) fuzzy model and the Chebyshev recurrent neural network (CRNN). The CFRNF network can emulate the nonlinear dynamics of a servomechanism system. The system nonlinearity is addressed by enhancing the input dimensions of the consequent parts of the fuzzy rules through the functional expansion of Chebyshev polynomials. The back-propagation algorithm is used to adjust the parameters of the antecedent membership functions as well as those of the consequent functions. To verify the performance of the proposed CFRNF, an experiment on a belt servomechanism is presented in this paper. The adaptive neural fuzzy inference system (ANFIS) and recurrent neural network (RNN) identification methods are also studied for modeling the belt servomechanism. The analysis and comparison results indicate that the CFRNF makes identification of complex nonlinear dynamic systems easier, and the identification results for the belt servomechanism verify that the accuracy and convergence of the CFRNF are superior to those of ANFIS and RNN.

  7. Fuzzy logic and neural networks in artificial intelligence and pattern recognition

    NASA Astrophysics Data System (ADS)

    Sanchez, Elie

    1991-10-01

    With the use of fuzzy logic techniques, neural computing can be integrated in symbolic reasoning to solve complex real world problems. In fact, artificial neural networks, expert systems, and fuzzy logic systems, in the context of approximate reasoning, share common features and techniques. A model of Fuzzy Connectionist Expert System is introduced, in which an artificial neural network is designed to construct the knowledge base of an expert system from training examples (this model can also be used for specifications of rules in fuzzy logic control). Two types of weights are associated with the synaptic connections in an AND-OR structure: primary linguistic weights, interpreted as labels of fuzzy sets, and secondary numerical weights. Cell activation is computed through min-max fuzzy equations of the weights. Learning consists of finding the (numerical) weights and the network topology. This feedforward network is described and first illustrated in a biomedical application (medical diagnosis assistance from inflammatory-syndromes/proteins profiles). Then, it is shown how this methodology can be utilized for handwritten pattern recognition (characters play the role of diagnoses): in a fuzzy neuron describing a number for example, the linguistic weights represent fuzzy sets on cross-detecting lines and the numerical weights reflect the importance (or weakness) of connections between cross-detecting lines and characters.

  8. Applications of neural networks in training science.

    PubMed

    Pfeiffer, Mark; Hohmann, Andreas

    2012-04-01

    Training science views itself as an integrated and applied science, developing practical measures founded on scientific method. Therefore, it demands consideration of a wide spectrum of approaches and methods. Especially in the field of competitive sports, research questions are usually located in complex environments, so that mainly field studies are drawn upon to obtain broad external validity. Here, the interrelations between different variables or variable sets are mostly of a nonlinear character. In these cases, methods like neural networks, e.g., the pattern recognizing methods of Self-Organizing Kohonen Feature Maps or similar instruments to identify interactions might be successfully applied to analyze data. Following on from a classification of data analysis methods in training-science research, the aim of the contribution is to give examples of varied sports in which network approaches can be effectually used in training science. First, two examples are given in which neural networks are employed for pattern recognition. While one investigation deals with the detection of sporting talent in swimming, the other is located in game sports research, identifying tactical patterns in team handball. The third and last example shows how an artificial neural network can be used to predict competitive performance in swimming. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Correcting wave predictions with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Makarynskyy, O.; Makarynska, D.

    2003-04-01

    Predictions of wind waves at different lead times are necessary for a wide range of coastal and open-ocean activities. Numerical wave models, which usually provide this information, are based on deterministic equations that do not entirely account for the complexity and uncertainty of the wave generation and dissipation processes. An attempt to improve short-term forecasts of wave parameters using artificial neural networks is reported. In recent years, artificial neural networks have been used in a number of coastal engineering applications due to their ability to approximate nonlinear mathematical behavior without a priori knowledge of the interrelations among the elements within a system. Common multilayer feed-forward networks, with nonlinear transfer functions in the hidden layers, were developed and employed to forecast the wave characteristics over one-hour intervals from one up to 24 hours ahead, and to correct these predictions. Three non-overlapping data sets of wave characteristics, all from a buoy moored roughly 60 miles west of the Aran Islands off the west coast of Ireland, were used to train and validate the neural nets involved. The networks were trained with the error back-propagation algorithm. Time series plots and scatterplots of the wave characteristics, as well as tables of statistics, show an improvement in the results achieved due to the correction procedure employed.
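
    The correction step can be sketched as follows: a feed-forward network is trained on past (forecast, observation) pairs to predict the forecast residual, which is then added back to new forecasts. The data below are synthetic, and the feature choice (the forecast plus three lagged observations) is an assumption.

    ```python
    # Sketch of forecast correction with a feed-forward network: learn the residual
    # between a numerical forecast and the observation, then add it back. Synthetic data.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    t = np.arange(2000)
    observed = 2.0 + np.sin(2 * np.pi * t / 120) + 0.3 * rng.standard_normal(t.size)
    forecast = observed + 0.4 * np.sin(2 * np.pi * t / 60) + 0.2   # biased model output

    lags = 3
    X = np.column_stack([forecast[lags:]] +
                        [observed[lags - k:-k] for k in range(1, lags + 1)])
    y = (observed - forecast)[lags:]                                # residual target

    split = 1500
    net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
    net.fit(X[:split], y[:split])

    corrected = forecast[lags:][split:] + net.predict(X[split:])
    rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
    print("raw forecast RMSE:", rmse(forecast[lags:][split:], observed[lags:][split:]))
    print("corrected RMSE:  ", rmse(corrected, observed[lags:][split:]))
    ```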

  10. A generalized locomotion CPG architecture based on oscillatory building blocks.

    PubMed

    Yang, Zhijun; França, Felipe M G

    2003-07-01

    Neural oscillation is one of the most extensively investigated topics of artificial neural networks. Scientific approaches to the functionalities of both natural and artificial intelligences are strongly related to mechanisms underlying oscillatory activities. This paper concerns itself with the assumption of the existence of central pattern generators (CPGs), which are the plausible neural architectures with oscillatory capabilities, and presents a discrete and generalized approach to the functionality of locomotor CPGs of legged animals. Based on scheduling by multiple edge reversal (SMER), a primitive and deterministic distributed algorithm, it is shown how oscillatory building block (OBB) modules can be created and, hence, how OBB-based networks can be formulated as asymmetric Hopfield-like neural networks for the generation of complex coordinated rhythmic patterns observed among pairs of biological motor neurons working during different gait patterns. It is also shown that the resulting Hopfield-like network possesses the property of reproducing the whole spectrum of different gaits intrinsic to the target locomotor CPGs. Although the new approach is not restricted to the understanding of the neurolocomotor system of any particular animal, hexapodal and quadrupedal gait patterns are chosen as illustrations given the wide interest expressed by the ongoing research in the area.

  11. Convolutional Neural Network for Histopathological Analysis of Osteosarcoma.

    PubMed

    Mishra, Rashika; Daescu, Ovidiu; Leavey, Patrick; Rakheja, Dinesh; Sengupta, Anita

    2018-03-01

    Pathologists often deal with high complexity and sometimes disagreement over osteosarcoma tumor classification due to cellular heterogeneity in the dataset. Segmentation and classification of histology tissue in H&E stained tumor image datasets is a challenging task because of intra-class variations, inter-class similarity, crowded context, and noisy data. In recent years, deep learning approaches have led to encouraging results in breast cancer and prostate cancer analysis. In this article, we propose the convolutional neural network (CNN) as a tool to improve the efficiency and accuracy of osteosarcoma tumor classification into tumor classes (viable tumor, necrosis) versus nontumor. The proposed CNN architecture contains eight learned layers: three sets of stacked two convolutional layers interspersed with max pooling layers for feature extraction, and two fully connected layers, with data augmentation strategies to boost performance. The use of a neural network results in a higher average accuracy of 92% for the classification. We compare the proposed architecture with three existing and proven CNN architectures for image classification: AlexNet, LeNet, and VGGNet. We also provide a pipeline to calculate the percentage of necrosis in a given whole slide image. We conclude that the use of neural networks can assure both high accuracy and efficiency in osteosarcoma classification.
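
    A sketch of a network with the layout described above (three blocks of two stacked convolutional layers, each followed by max pooling, then two fully connected layers); channel counts, tile size, and dropout are assumptions rather than the paper's exact settings.

    ```python
    # Sketch of the described layout: 3 x (two stacked conv layers + max pooling),
    # then two fully connected layers. Sizes are assumptions.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )

    class OsteosarcomaCNN(nn.Module):
        def __init__(self, n_classes=3):            # viable tumor, necrosis, nontumor
            super().__init__()
            self.features = nn.Sequential(
                conv_block(3, 32), conv_block(32, 64), conv_block(64, 128),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(128 * 16 * 16, 256), nn.ReLU(), nn.Dropout(0.5),
                nn.Linear(256, n_classes),
            )
        def forward(self, x):
            return self.classifier(self.features(x))

    tile = torch.randn(1, 3, 128, 128)               # one H&E image tile
    logits = OsteosarcomaCNN()(tile)
    ```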

  12. Objective assessment of MPEG-2 video quality

    NASA Astrophysics Data System (ADS)

    Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano

    2002-07-01

    The increasing use of video compression standards in broadcast television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption about the complexity of the model. The neural network processes an instantaneous set of input values and yields an associated estimate of perceived quality. Therefore, the neural-network approach turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to their assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, since avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations of actual scoring curves for real test videos.

  13. Neural network application to aircraft control system design

    NASA Technical Reports Server (NTRS)

    Troudet, Terry; Garg, Sanjay; Merrill, Walter C.

    1991-01-01

    The feasibility of using artificial neural networks as control systems for modern, complex aerospace vehicles is investigated via an example aircraft control design study. The problem considered is that of designing a controller for an integrated airframe/propulsion longitudinal dynamics model of a modern fighter aircraft to provide independent control of pitch rate and airspeed responses to pilot command inputs. An explicit model following controller using H infinity control design techniques is first designed to gain insight into the control problem as well as to provide a baseline for evaluation of the neurocontroller. Using the model of the desired dynamics as a command generator, a multilayer feedforward neural network is trained to control the vehicle model within the physical limitations of the actuator dynamics. This is achieved by minimizing an objective function which is a weighted sum of tracking errors and control input commands and rates. To gain insight in the neurocontrol, linearized representations of the nonlinear neurocontroller are analyzed along a commanded trajectory. Linear robustness analysis tools are then applied to the linearized neurocontroller models and to the baseline H infinity based controller. Future areas of research are identified to enhance the practical applicability of neural networks to flight control design.
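
    A generic form of the training objective described above (symbols are illustrative, not taken from the report) is

    \[
    J(\theta) \;=\; \sum_{k}\Big[\, e_k^{\top} Q\, e_k \;+\; u_k^{\top} R\, u_k \;+\; \Delta u_k^{\top} S\, \Delta u_k \,\Big],
    \]

    where $\theta$ denotes the network weights, $e_k$ is the tracking error between the commanded-model response and the vehicle response at step $k$, $u_k$ is the control command, $\Delta u_k = u_k - u_{k-1}$ approximates the control rate, and $Q$, $R$, and $S$ are weighting matrices penalizing tracking error, control effort, and control rate, respectively.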

  14. Neural network application to aircraft control system design

    NASA Technical Reports Server (NTRS)

    Troudet, Terry; Garg, Sanjay; Merrill, Walter C.

    1991-01-01

    The feasibility of using artificial neural networks as control systems for modern, complex aerospace vehicles is investigated via an example aircraft control design study. The problem considered is that of designing a controller for an integrated airframe/propulsion longitudinal dynamics model of a modern fighter aircraft to provide independent control of pitch rate and airspeed responses to pilot command inputs. An explicit model following controller using H infinity control design techniques is first designed to gain insight into the control problem as well as to provide a baseline for evaluation of the neurocontroller. Using the model of the desired dynamics as a command generator, a multilayer feedforward neural network is trained to control the vehicle model within the physical limitations of the actuator dynamics. This is achieved by minimizing an objective function which is a weighted sum of tracking errors and control input commands and rates. To gain insight in the neurocontrol, linearized representations of the nonlinear neurocontroller are analyzed along a commanded trajectory. Linear robustness analysis tools are then applied to the linearized neurocontroller models and to the baseline H infinity based controller. Future areas of research are identified to enhance the practical applicability of neural networks to flight control design.

  15. Role of Network Science in the Study of Anesthetic State Transitions.

    PubMed

    Lee, UnCheol; Mashour, George A

    2018-04-23

    The heterogeneity of molecular mechanisms, target neural circuits, and neurophysiologic effects of general anesthetics makes it difficult to develop a reliable and drug-invariant index of general anesthesia. No single brain region or mechanism has been identified as the neural correlate of consciousness, suggesting that consciousness might emerge through complex interactions of spatially and temporally distributed brain functions. The goal of this review article is to introduce the basic concepts of networks and explain why the application of network science to general anesthesia could be a pathway to discover a fundamental mechanism of anesthetic-induced unconsciousness. This article reviews data suggesting that reduced network efficiency, constrained network repertoires, and changes in cortical dynamics create inhospitable conditions for information processing and transfer, which lead to unconsciousness. This review proposes that network science is not just a useful tool but a necessary theoretical framework and method to uncover common principles of anesthetic-induced unconsciousness.

  16. Self-organization of network dynamics into local quantized states

    DOE PAGES

    Nicolaides, Christos; Juanes, Ruben; Cueto-Felgueroso, Luis

    2016-02-17

    Self-organization and pattern formation in network-organized systems emerges from the collective activation and interaction of many interconnected units. A striking feature of these non-equilibrium structures is that they are often localized and robust: only a small subset of the nodes, or cell assembly, is activated. Understanding the role of cell assemblies as basic functional units in neural networks and socio-technical systems emerges as a fundamental challenge in network theory. A key open question is how these elementary building blocks emerge, and how they operate, linking structure and function in complex networks. Here we show that a network analogue of the Swift-Hohenberg continuum model—a minimal-ingredients model of nodal activation and interaction within a complex network—is able to produce a complex suite of localized patterns. Thus, the spontaneous formation of robust operational cell assemblies in complex networks can be explained as the result of self-organization, even in the absence of synaptic reinforcements.

  17. Self-organization of network dynamics into local quantized states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicolaides, Christos; Juanes, Ruben; Cueto-Felgueroso, Luis

    Self-organization and pattern formation in network-organized systems emerges from the collective activation and interaction of many interconnected units. A striking feature of these non-equilibrium structures is that they are often localized and robust: only a small subset of the nodes, or cell assembly, is activated. Understanding the role of cell assemblies as basic functional units in neural networks and socio-technical systems emerges as a fundamental challenge in network theory. A key open question is how these elementary building blocks emerge, and how they operate, linking structure and function in complex networks. Here we show that a network analogue of the Swift-Hohenberg continuum model—a minimal-ingredients model of nodal activation and interaction within a complex network—is able to produce a complex suite of localized patterns. Thus, the spontaneous formation of robust operational cell assemblies in complex networks can be explained as the result of self-organization, even in the absence of synaptic reinforcements.

  18. Structure and function of complex brain networks

    PubMed Central

    Sporns, Olaf

    2013-01-01

    An increasing number of theoretical and empirical studies approach the function of the human brain from a network perspective. The analysis of brain networks is made feasible by the development of new imaging acquisition methods as well as new tools from graph theory and dynamical systems. This review surveys some of these methodological advances and summarizes recent findings on the architecture of structural and functional brain networks. Studies of the structural connectome reveal several modules or network communities that are interlinked by hub regions mediating communication processes between modules. Recent network analyses have shown that network hubs form a densely linked collective called a “rich club,” centrally positioned for attracting and dispersing signal traffic. In parallel, recordings of resting and task-evoked neural activity have revealed distinct resting-state networks that contribute to functions in distinct cognitive domains. Network methods are increasingly applied in a clinical context, and their promise for elucidating neural substrates of brain and mental disorders is discussed. PMID:24174898

  19. Novel maximum-margin training algorithms for supervised neural networks.

    PubMed

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors while avoiding the complexity of solving a constrained optimization problem, as is usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexities of O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden-layer output in which the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stopping criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by MICI, MMGDX, and Levenberg-Marquardt (LM), respectively. The resulting neural network was named assembled neural network (ASNN). Benchmark data sets of real-world problems have been used in experiments that enable a comparison with other state-of-the-art classifiers. The results provide evidence of the effectiveness of our methods regarding accuracy, AUC, and balanced error rate.
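
    As a rough illustration of the margin-driven idea (not the paper's exact MMGDX objective), the sketch below backpropagates a hinge-style margin loss through a one-hidden-layer MLP on synthetic two-class data; the data, weights, and hyperparameters are invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # toy two-class data with labels in {-1, +1}: two Gaussian blobs
    X = np.vstack([rng.normal(-1.0, 0.7, size=(100, 2)),
                   rng.normal(+1.0, 0.7, size=(100, 2))])
    y = np.hstack([-np.ones(100), np.ones(100)])

    n_hidden, lr, lam = 8, 0.05, 1e-3
    W1 = rng.normal(scale=0.5, size=(2, n_hidden)); b1 = np.zeros(n_hidden)
    w2 = rng.normal(scale=0.5, size=n_hidden);      b2 = 0.0

    for epoch in range(500):
        h = np.tanh(X @ W1 + b1)            # hidden-layer representation
        s = h @ w2 + b2                     # output-layer hyperplane score
        margin = 1.0 - y * s
        viol = margin > 0                   # samples inside the margin
        # hinge loss + L2 on output weights (a larger margin ~ a smaller ||w2||)
        loss = margin[viol].sum() / len(y) + lam * (w2 @ w2)

        # backpropagate the margin-based objective through both layers
        ds = np.where(viol, -y, 0.0) / len(y)           # dL/ds
        gw2 = h.T @ ds + 2 * lam * w2
        gb2 = ds.sum()
        dh = np.outer(ds, w2) * (1.0 - h ** 2)          # through the tanh hidden layer
        gW1 = X.T @ dh
        gb1 = dh.sum(axis=0)

        W1 -= lr * gW1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2

    pred = np.sign(np.tanh(X @ W1 + b1) @ w2 + b2)
    print("training accuracy:", (pred == y).mean())
    ```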

  20. On the use of multi-agent systems for the monitoring of industrial systems

    NASA Astrophysics Data System (ADS)

    Rezki, Nafissa; Kazar, Okba; Mouss, Leila Hayet; Kahloul, Laid; Rezki, Djamil

    2016-03-01

    The objective of the current paper is to present an intelligent system for complex process monitoring, based on artificial intelligence technologies. This system aims to carry out successfully all of the complex process monitoring tasks: detection, diagnosis, identification and reconfiguration. For this purpose, the development of a multi-agent system that combines multiple intelligent techniques, such as multivariate control charts, neural networks, Bayesian networks and expert systems, has become a necessity. The proposed system is evaluated in the monitoring of the complex Tennessee Eastman process.

  1. Network modulation during complex syntactic processing

    PubMed Central

    den Ouden, Dirk-Bart; Saur, Dorothee; Mader, Wolfgang; Schelter, Björn; Lukic, Sladjana; Wali, Eisha; Timmer, Jens; Thompson, Cynthia K.

    2011-01-01

    Complex sentence processing is supported by a left-lateralized neural network including inferior frontal cortex and posterior superior temporal cortex. This study investigates the pattern of connectivity and information flow within this network. We used fMRI BOLD data derived from 12 healthy participants reported in an earlier study (Thompson, C. K., Den Ouden, D. B., Bonakdarpour, B., Garibaldi, K., & Parrish, T. B. (2010b). Neural plasticity and treatment-induced recovery of sentence processing in agrammatism. Neuropsychologia, 48(11), 3211-3227) to identify activation peaks associated with object-cleft over syntactically less complex subject-cleft processing. Directed Partial Correlation Analysis was conducted on time series extracted from participant-specific activation peaks and showed evidence of functional connectivity between four regions, linearly between premotor cortex, inferior frontal gyrus, posterior superior temporal sulcus and anterior middle temporal gyrus. This pattern served as the basis for Dynamic Causal Modeling of networks with a driving input to posterior superior temporal cortex, which likely supports thematic role assignment, and networks with a driving input to inferior frontal cortex, a core region associated with syntactic computation. The optimal model was determined through both frequentist and Bayesian model selection and turned out to reflect a network with a primary drive from inferior frontal cortex and modulation of the connection between inferior frontal and posterior superior temporal cortex by complex sentence processing. The winning model also showed a substantive role for a feedback mechanism from posterior superior temporal cortex back to inferior frontal cortex. We suggest that complex syntactic processing is driven by word-order analysis, supported by inferior frontal cortex, in an interactive relation with posterior superior temporal cortex, which supports verb argument structure processing. PMID:21820518

  2. Synaptogenesis Is Modulated by Heparan Sulfate in Caenorhabditis elegans

    PubMed Central

    Lázaro-Peña, María I.; Díaz-Balzac, Carlos A.; Bülow, Hannes E.; Emmons, Scott W.

    2018-01-01

    The nervous system regulates complex behaviors through a network of neurons interconnected by synapses. How specific synaptic connections are genetically determined is still unclear. Male mating is the most complex behavior in Caenorhabditis elegans. It is composed of sequential steps that are governed by > 3000 chemical connections. Here, we show that heparan sulfates (HS) play a role in the formation and function of the male neural network. HS, sulfated in position 3 by the HS modification enzyme HST-3.1/HS 3-O-sulfotransferase and attached to the HS proteoglycan glypicans LON-2/glypican and GPN-1/glypican, functions cell-autonomously and nonautonomously for response to hermaphrodite contact during mating. Loss of 3-O sulfation resulted in the presynaptic accumulation of RAB-3, a molecule that localizes to synaptic vesicles, and disrupted the formation of synapses in a component of the mating circuits. We also show that the neural cell adhesion protein NRX-1/neurexin promotes and the neural cell adhesion protein NLG-1/neuroligin inhibits the formation of the same set of synapses in a parallel pathway. Thus, neural cell adhesion proteins and extracellular matrix components act together in the formation of synaptic connections. PMID:29559501

  3. Meteorological, environmental remote sensing and neural network analysis of the epidemiology of malaria transmission in Thailand.

    PubMed

    Kiang, Richard; Adimi, Farida; Soika, Valerii; Nigro, Joseph; Singhasivanon, Pratap; Sirichaisinthop, Jeeraphat; Leemingsawat, Somjai; Apiwathnasorn, Chamnarn; Looareesuwan, Sornchai

    2006-11-01

    In many malarious regions malaria transmission roughly coincides with rainy seasons, which provide for more abundant larval habitats. In addition to precipitation, other meteorological and environmental factors may also influence malaria transmission. These factors can be remotely sensed using earth observing environmental satellites and estimated with seasonal climate forecasts. The use of remote sensing as an early warning tool for malaria epidemics has been broadly studied in recent years, especially for Africa, where the majority of the world's malaria occurs. Although the Greater Mekong Subregion (GMS), which includes Thailand and the surrounding countries, is an epicenter of multidrug-resistant falciparum malaria, the meteorological and environmental factors affecting malaria transmission in the GMS have not been examined in detail. In this study, the parasitological data used consisted of the monthly malaria epidemiology data at the provincial level compiled by the Thai Ministry of Public Health. Precipitation, temperature, relative humidity, and vegetation index obtained from both climate time series and satellite measurements were used as independent variables to model malaria. We used neural network methods, an artificial-intelligence technique, to model the dependency of malaria transmission on these variables. The average training accuracy of the neural network analysis for three provinces (Kanchanaburi, Mae Hong Son, and Tak), which are among the provinces most endemic for malaria, is 72.8%, and the average testing accuracy is 62.9%, based on the 1994-1999 data. A more complex neural network architecture resulted in higher training accuracy but also lower testing accuracy. Taking into account the uncertainty regarding reported malaria cases, we divided the malaria cases into bands (classes) to compute training accuracy. Using the same neural network architecture on the 19 most endemic provinces for years 1994 to 2000, the mean training accuracy weighted by provincial malaria cases was 73%. Prediction of malaria cases for 2001 using neural networks trained for 1994-2000 gave a weighted accuracy of 53%. Because there was a significant decrease (31%) in the number of malaria cases in the 19 provinces from 2000 to 2001, the networks overestimated malaria transmission. The decrease in transmission was not due to climatic or environmental changes. Thailand is a country with long borders. Migrant populations from the neighboring countries enlarge the human malaria reservoir because these populations have more limited access to health care. This issue further complicates the modeling of malaria based on meteorological and environmental variables alone. In spite of the relatively low resolution of the data and the impact of migrant populations, we have uncovered a reasonably clear dependency of malaria on meteorological and environmental remote sensing variables. When other contextual determinants do not vary significantly, using neural network analysis along with remote sensing variables to predict malaria endemicity should be feasible.

  4. A feasibility study for long-path multiple detection using a neural network

    NASA Technical Reports Server (NTRS)

    Feuerbacher, G. A.; Moebes, T. A.

    1994-01-01

    Least-squares inverse filters have found widespread use in the deconvolution of seismograms and the removal of multiples. The use of least-squares prediction filters with prediction distances greater than unity leads to the method of predictive deconvolution, which can be used for the removal of long-path multiples. The predictive technique allows one to control the length of the desired output wavelet by control of the predictive distance, and hence to specify the desired degree of resolution. Events which are periodic within given repetition ranges can be attenuated selectively. The method is thus effective in the suppression of rather complex reverberation patterns. A back-propagation (BP) neural network is constructed to perform the detection of first arrivals of the multiples and therefore aid in the more accurate determination of the predictive distance of the multiples. The neural detector is applied to synthetic reflection coefficients and synthetic seismic traces. The processing results show that the neural detector is accurate and should lead to an automated fast method for determining predictive distances across vast amounts of data such as seismic field records. The neural network system used in this study was the NASA Software Technology Branch's NETS system.

  5. A New Neural Network Approach Including First-Guess for Retrieval of Atmospheric Water Vapor, Cloud Liquid Water Path, Surface Temperature and Emissivities Over Land From Satellite Microwave Observations

    NASA Technical Reports Server (NTRS)

    Aires, F.; Prigent, C.; Rossow, W. B.; Rothstein, M.; Hansen, James E. (Technical Monitor)

    2000-01-01

    The analysis of microwave observations over land to determine atmospheric and surface parameters is still limited due to the complexity of the inverse problem. Neural network techniques have already proved successful as the basis of efficient retrieval methods for non-linear cases; however, first-guess estimates, which are used in variational methods to avoid problems of solution non-uniqueness or other forms of solution irregularity, have up to now not been used with neural network methods. In this study, a neural network approach is developed that uses a first-guess. Conceptual bridges are established between the neural network and variational methods. The new neural method retrieves the surface skin temperature, the integrated water vapor content, the cloud liquid water path and the microwave surface emissivities between 19 and 85 GHz over land from SSM/I observations. The retrieval, in parallel, of all these quantities improves the results for consistency reasons. A data base to train the neural network is calculated with a radiative transfer model and a global collection of coincident surface and atmospheric parameters extracted from the National Center for Environmental Prediction reanalysis, from the International Satellite Cloud Climatology Project data and from microwave emissivity atlases previously calculated. The results of the neural network inversion are very encouraging. The r.m.s. error of the surface temperature retrieval over the globe is 1.3 K in clear sky conditions and 1.6 K in cloudy scenes. Water vapor is retrieved with a r.m.s. error of 3.8 kg/sq m in clear conditions and 4.9 kg/sq m in cloudy situations. The r.m.s. error in cloud liquid water path is 0.08 kg/sq m. The surface emissivities are retrieved with an accuracy of better than 0.008 in clear conditions and 0.010 in cloudy conditions. Microwave land surface temperature retrieval presents a very attractive complement to the infrared estimates in cloudy areas: a time record of land surface temperature will be produced.
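
    The distinguishing idea is to feed the network a first-guess of the state alongside the observations, so that it refines the guess rather than inverting from scratch. A minimal sketch of how such an input vector could be assembled is given below; the channel count, units, and first-guess values are invented for illustration and are not those of the study.

    ```python
    import numpy as np

    def build_retrieval_input(tb_obs, first_guess):
        """Concatenate SSM/I-like brightness temperatures with a first-guess state
        vector, so the trained network refines the guess instead of inverting from
        scratch. Channel count and state layout are illustrative only."""
        return np.concatenate([tb_obs, first_guess])

    # hypothetical example: 7 channels, state = [skin temperature, water vapor, cloud LWP]
    tb_obs = np.array([250.1, 247.3, 262.8, 255.0, 268.4, 270.2, 265.9])   # K
    first_guess = np.array([288.0, 25.0, 0.05])   # K, kg/m^2, kg/m^2 (e.g. from reanalysis)
    x = build_retrieval_input(tb_obs, first_guess)
    print(x.shape)   # (10,) -- the vector fed to the trained retrieval network
    ```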

  6. Natural language acquisition in large scale neural semantic networks

    NASA Astrophysics Data System (ADS)

    Ealey, Douglas

    This thesis puts forward the view that a purely signal-based approach to natural language processing is both plausible and desirable. By questioning the veracity of symbolic representations of meaning, it argues for a unified, non-symbolic model of knowledge representation that is both biologically plausible and, potentially, highly efficient. Processes to generate a grounded, neural form of this model, dubbed the semantic filter, are discussed. The combined effects of local neural organisation, coincident with perceptual maturation, are used to hypothesise its nature. This theoretical model is then validated in light of a number of fundamental neurological constraints and milestones. The mechanisms of semantic and episodic development that the model predicts are then used to explain linguistic properties, such as propositions and verbs, syntax and scripting. To mimic the growth of locally densely connected structures upon an unbounded neural substrate, a system is developed that can grow arbitrarily large, data-dependent structures composed of individual self-organising neural networks. The maturational nature of the data used results in a structure in which the perception of concepts is refined by the networks, but demarcated by subsequent structure. As a consequence, the overall structure shows significant memory and computational benefits, as predicted by the cognitive and neural models. Furthermore, the localised nature of the neural architecture also avoids the increasing error sensitivity and redundancy of traditional systems as the training domain grows. The semantic and episodic filters have been demonstrated to perform as well as, or better than, more specialist networks, whilst using significantly larger vocabularies, more complex sentence forms and more natural corpora.

  7. Membership generation using multilayer neural network

    NASA Technical Reports Server (NTRS)

    Kim, Jaeseok

    1992-01-01

    There has been intensive research in neural network applications to pattern recognition problems. Particularly, the back-propagation network has attracted many researchers because of its outstanding performance in pattern recognition applications. In this section, we describe a new method to generate membership functions from training data using a multilayer neural network. The basic idea behind the approach is as follows. The output values of a sigmoid activation function of a neuron bear remarkable resemblance to membership values. Therefore, we can regard the sigmoid activation values as the membership values in fuzzy set theory. Thus, in order to generate class membership values, we first train a suitable multilayer network using a training algorithm such as the back-propagation algorithm. After the training procedure converges, the resulting network can be treated as a membership generation network, where the inputs are feature values and the outputs are membership values in the different classes. This method allows fairly complex membership functions to be generated because the network is highly nonlinear in general. Also, it is to be noted that the membership functions are generated from a classification point of view. For pattern recognition applications, this is highly desirable, although the membership values may not be indicative of the degree of typicality of a feature value in a particular class.
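
    A minimal sketch of the idea follows: after training, the sigmoid outputs of the multilayer network are simply read off as class membership values. The weights below are random stand-ins for a trained network, and the layer sizes are arbitrary.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def memberships(x, W1, b1, W2, b2):
        """Forward pass of a (notionally trained) multilayer network; each sigmoid
        output neuron is read as the membership value of x in the corresponding class."""
        h = sigmoid(x @ W1 + b1)
        return sigmoid(h @ W2 + b2)

    # random stand-ins for trained weights: 2 features, 5 hidden neurons, 3 classes
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 5)), np.zeros(5)
    W2, b2 = rng.normal(size=(5, 3)), np.zeros(3)

    x = np.array([0.4, -1.2])              # a feature vector
    mu = memberships(x, W1, b1, W2, b2)
    print(mu)                              # three values in (0, 1), read as graded class memberships
    ```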

  8. Predicting musically induced emotions from physiological inputs: linear and neural network models.

    PubMed

    Russo, Frank A; Vempala, Naresh N; Sandstrom, Gillian M

    2013-01-01

    Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of "felt" emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants-heart rate (HR), respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a non-linear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The non-linear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the non-linear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.

  9. Prediction of hearing loss among the noise-exposed workers in a steel factory using artificial intelligence approach.

    PubMed

    Aliabadi, Mohsen; Farhadian, Maryam; Darvishi, Ebrahim

    2015-08-01

    Prediction of hearing loss in noisy workplaces is considered to be an important aspect of a hearing conservation program. Artificial intelligence, as a new approach, can be used to predict complex phenomena such as hearing loss. Using artificial neural networks, this study aims to present an empirical model for the prediction of the hearing loss threshold among noise-exposed workers. Two hundred and ten workers employed in a steel factory were chosen, and their occupational exposure histories were collected. To determine the hearing loss threshold, an audiometric test was carried out using a calibrated audiometer. The personal noise exposure was also measured using a noise dosimeter at the workers' workstations. Finally, the data obtained on five variables that can influence hearing loss were used to develop the prediction model. Multilayer feed-forward neural networks with different structures were developed using MATLAB software. The network structures had one hidden layer containing approximately 5 to 15 neurons. The best-performing network, with one hidden layer of ten neurons, could accurately predict the hearing loss threshold with RMSE = 2.6 dB and R(2) = 0.89. The results also confirmed that neural networks could provide more accurate predictions than multiple regressions. Since occupational hearing loss is frequently non-curable, results of accurate prediction can be used by occupational health experts to modify and improve noise exposure conditions.
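
    The two figures of merit quoted above, RMSE and R(2), are straightforward to reproduce; the sketch below shows their computation on a few invented threshold values, which are not data from the study.

    ```python
    import numpy as np

    def rmse(y_true, y_pred):
        return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

    def r_squared(y_true, y_pred):
        ss_res = np.sum((y_true - y_pred) ** 2)
        ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
        return float(1.0 - ss_res / ss_tot)

    # hypothetical hearing-threshold values (dB) and predictions from an MLP
    y_true = np.array([22.0, 35.0, 18.0, 41.0, 29.0, 33.0])
    y_pred = np.array([24.5, 33.0, 20.0, 38.5, 31.0, 35.5])
    print(rmse(y_true, y_pred), r_squared(y_true, y_pred))
    ```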

  10. Inference in the brain: Statistics flowing in redundant population codes

    PubMed Central

    Pitkow, Xaq; Angelaki, Dora E

    2017-01-01

    It is widely believed that the brain performs approximate probabilistic inference to estimate causal variables in the world from ambiguous sensory data. To understand these computations, we need to analyze how information is represented and transformed by the actions of nonlinear recurrent neural networks. We propose that these probabilistic computations function by a message-passing algorithm operating at the level of redundant neural populations. To explain this framework, we review its underlying concepts, including graphical models, sufficient statistics, and message-passing, and then describe how these concepts could be implemented by recurrently connected probabilistic population codes. The relevant information flow in these networks will be most interpretable at the population level, particularly for redundant neural codes. We therefore outline a general approach to identify the essential features of a neural message-passing algorithm. Finally, we argue that to reveal the most important aspects of these neural computations, we must study large-scale activity patterns during moderately complex, naturalistic behaviors. PMID:28595050

  11. Using an Artificial Neural Bypass to Restore Cortical Control of Rhythmic Movements in a Human with Quadriplegia

    NASA Astrophysics Data System (ADS)

    Sharma, Gaurav; Friedenberg, David A.; Annetta, Nicholas; Glenn, Bradley; Bockbrader, Marcie; Majstorovic, Connor; Domas, Stephanie; Mysiw, W. Jerry; Rezai, Ali; Bouton, Chad

    2016-09-01

    Neuroprosthetic technology has been used to restore cortical control of discrete (non-rhythmic) hand movements in a paralyzed person. However, cortical control of rhythmic movements, which originate in the brain but are coordinated by Central Pattern Generator (CPG) neural networks in the spinal cord, has not been demonstrated previously. Here we demonstrate an artificial neural bypass technology that decodes cortical activity and emulates spinal cord CPG function, allowing volitional rhythmic hand movement. The technology uses a combination of signals recorded from the brain, machine-learning algorithms to decode the signals, a numerical model of a CPG network, and a neuromuscular electrical stimulation system to evoke rhythmic movements. Using the neural bypass, a quadriplegic participant was able to initiate, sustain, and switch between rhythmic and discrete finger movements, using his thoughts alone. These results have implications for advancing neuroprosthetic technology to restore complex movements in people living with paralysis.

  12. Fuzzy Adaptive Control for Intelligent Autonomous Space Exploration Problems

    NASA Technical Reports Server (NTRS)

    Esogbue, Augustine O.

    1998-01-01

    The principal objective of the research reported here is the re-design, analysis and optimization of our newly developed neural network fuzzy adaptive controller model for complex processes capable of learning fuzzy control rules using process data and improving its control through on-line adaption. The learned improvement is according to a performance objective function that provides evaluative feedback; this performance objective is broadly defined to meet long-range goals over time. Although fuzzy control had proven effective for complex, nonlinear, imprecisely-defined processes for which standard models and controls are either inefficient, impractical or cannot be derived, the state of the art prior to our work showed that procedures for deriving fuzzy control, however, were mostly ad hoc heuristics. The learning ability of neural networks was exploited to systematically derive fuzzy control and permit on-line adaption and in the process optimize control. The operation of neural networks integrates very naturally with fuzzy logic. The neural networks which were designed and tested using simulation software and simulated data, followed by realistic industrial data were reconfigured for application on several platforms as well as for the employment of improved algorithms. The statistical procedures of the learning process were investigated and evaluated with standard statistical procedures (such as ANOVA, graphical analysis of residuals, etc.). The computational advantage of dynamic programming-like methods of optimal control was used to permit on-line fuzzy adaptive control. Tests for the consistency, completeness and interaction of the control rules were applied. Comparisons to other methods and controllers were made so as to identify the major advantages of the resulting controller model. Several specific modifications and extensions were made to the original controller. Additional modifications and explorations have been proposed for further study. Some of these are in progress in our laboratory while others await additional support. All of these enhancements will improve the attractiveness of the controller as an effective tool for the on line control of an array of complex process environments.

  13. Predicting outcomes in patients with perforated gastroduodenal ulcers: artificial neural network modelling indicates a highly complex disease.

    PubMed

    Søreide, K; Thorsen, K; Søreide, J A

    2015-02-01

    Mortality prediction models for patients with perforated peptic ulcer (PPU) have not yielded consistent or highly accurate results. Given the complex nature of this disease, which has many non-linear associations with outcomes, we explored artificial neural networks (ANNs) to predict the complex interactions between the risk factors of PPU and death among patients with this condition. ANN modelling using a standard feed-forward, back-propagation neural network with three layers (i.e., an input layer, a hidden layer and an output layer) was used to predict the 30-day mortality of consecutive patients from a population-based cohort undergoing surgery for PPU. A receiver-operating characteristic (ROC) analysis was used to assess model accuracy. Of the 172 patients, 168 had their data included in the model; the data of 117 (70%) were used for the training set, and the data of 51 (30%) were used for the test set. The accuracy, as evaluated by area under the ROC curve (AUC), was best for an inclusive, multifactorial ANN model (AUC 0.90, 95% CI 0.85-0.95; p < 0.001). This model outperformed standard predictive scores, including Boey and PULP. The importance of each variable decreased as the number of factors included in the ANN model increased. The prediction of death was most accurate when using an ANN model with several univariate influences on the outcome. This finding demonstrates that PPU is a highly complex disease for which clinical prognoses are likely difficult. The incorporation of computerised learning systems might enhance clinical judgments to improve decision making and outcome prediction.
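
    The reported accuracy measure, area under the ROC curve, can be computed directly from predicted risks with the rank-sum formulation sketched below; the patient labels and scores are invented for illustration, and ties in the scores are ignored for brevity.

    ```python
    import numpy as np

    def auc(labels, scores):
        """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation.
        labels: 0/1 outcomes (e.g. 30-day mortality); scores: predicted risk.
        Tied scores are not handled here, for brevity."""
        order = np.argsort(scores)
        ranks = np.empty(len(scores), dtype=float)
        ranks[order] = np.arange(1, len(scores) + 1)
        pos = labels == 1
        n_pos, n_neg = pos.sum(), (~pos).sum()
        return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

    # hypothetical predicted risks for eight patients
    labels = np.array([0, 0, 1, 0, 1, 1, 0, 1])
    scores = np.array([0.10, 0.25, 0.80, 0.35, 0.55, 0.90, 0.20, 0.60])
    print(auc(labels, scores))   # 1.0 here, since every death outranks every survivor
    ```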

  14. Time dependent neural network models for detecting changes of state in complex processes: applications in earth sciences and astronomy.

    PubMed

    Valdés, Julio J; Bonham-Carter, Graeme

    2006-03-01

    A computational intelligence approach is used to explore the problem of detecting internal state changes in time-dependent processes described by heterogeneous, multivariate time series with imprecise data and missing values. Such processes are approximated by collections of time-dependent non-linear autoregressive models represented by a special kind of neuro-fuzzy neural network. Grid and high-throughput computing model-mining procedures based on neuro-fuzzy networks and genetic algorithms generate (i) collections of models composed of sets of time-lag terms from the time series, and (ii) prediction functions represented by neuro-fuzzy networks. The composition of the models and their prediction capabilities allows the identification of changes in the internal structure of the process. These changes are associated with the alternation of steady and transient states, zones with abnormal behavior, instability, and other situations. This approach is general, and its sensitivity for detecting subtle changes of state is revealed by simulation experiments. Its potential in the study of complex processes in earth sciences and astrophysics is illustrated with applications using paleoclimate and solar data.

  15. An Investigation of the Application of Artificial Neural Networks to Adaptive Optics Imaging Systems

    DTIC Science & Technology

    1991-12-01

    …neural network, and the feedforward neural network studied is the single-layer perceptron artificial neural network. The recurrent artificial neural network input features are the wavefront sensor slope outputs and neighboring actuator feedback commands. The feedforward artificial neural network input…

  16. Distributed recurrent neural forward models with synaptic adaptation and CPG-based control for complex behaviors of walking robots

    PubMed Central

    Dasgupta, Sakyasingha; Goldschmidt, Dennis; Wörgötter, Florentin; Manoonpong, Poramate

    2015-01-01

    Walking animals, like stick insects, cockroaches or ants, demonstrate a fascinating range of locomotive abilities and complex behaviors. The locomotive behaviors can consist of a variety of walking patterns along with adaptation that allow the animals to deal with changes in environmental conditions, like uneven terrains, gaps, obstacles etc. Biological study has revealed that such complex behaviors are a result of a combination of biomechanics and neural mechanism thus representing the true nature of embodied interactions. While the biomechanics helps maintain flexibility and sustain a variety of movements, the neural mechanisms generate movements while making appropriate predictions crucial for achieving adaptation. Such predictions or planning ahead can be achieved by way of internal models that are grounded in the overall behavior of the animal. Inspired by these findings, we present here, an artificial bio-inspired walking system which effectively combines biomechanics (in terms of the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator based control for generating basic rhythmic patterns and coordinated movements, (2) distributed (at each leg) recurrent neural network based adaptive forward models with efference copies as internal models for sensory predictions and instantaneous state estimations, and (3) searching and elevation control for adapting the movement of an individual leg to deal with different environmental conditions. Using simulations we show that this bio-inspired approach with adaptive internal models allows the walking robot to perform complex locomotive behaviors as observed in insects, including walking on undulated terrains, crossing large gaps, leg damage adaptations, as well as climbing over high obstacles. Furthermore, we demonstrate that the newly developed recurrent network based approach to online forward models outperforms the adaptive neuron forward models, which have hitherto been the state of the art, to model a subset of similar walking behaviors in walking robots. PMID:26441629

  17. Prediction of Weld Penetration in FCAW of HSLA steel using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Asl, Y. Dadgar; Mostafa, N. B.; Panahizadeh R., V.; Seyedkashi, S. M. H.

    2011-01-01

    Flux-cored arc welding (FCAW) is a semiautomatic or automatic arc welding process that requires a continuously-fed consumable tubular electrode containing a flux. The main FCAW process parameters affecting the depth of penetration are welding current, arc voltage, nozzle-to-work distance, torch angle and welding speed. Shallow depth of penetration may contribute to failure of a welded structure since penetration determines the stress-carrying capacity of a welded joint. To avoid such occurrences, the welding process parameters influencing weld penetration must be properly selected to obtain an acceptable weld penetration and hence a high-quality joint. Artificial neural networks (ANN), also called neural networks (NN), are computational models used to express complex non-linear relationships between input and output data. In this paper, an artificial neural network (ANN) method is used to predict the effects of welding current, arc voltage, nozzle-to-work distance, torch angle and welding speed on weld penetration depth in gas-shielded FCAW of a grade of high-strength low-alloy steel. 32 experimental runs were carried out using the bead-on-plate welding technique. Weld penetrations were measured, and on the basis of these 32 sets of experimental data, a feed-forward back-propagation neural network was created. 28 sets of the experiments were used as the training data and the remaining 4 sets were used for the testing phase of the network. The ANN has one hidden layer with eight neurons and was trained over 840 iterations. The comparison between the experimental results and ANN results showed that the trained network could predict the effects of the FCAW process parameters on weld penetration adequately.
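
    A minimal sketch of the kind of network described (five inputs, one hidden layer of eight neurons, a 28/4 train/test split and 840 training iterations) is shown below on synthetic data; the input ranges, target relationship and learning rate are invented and do not correspond to the actual welding measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # synthetic stand-ins for (current, voltage, nozzle distance, torch angle, speed)
    X = rng.uniform(0.0, 1.0, size=(32, 5))              # 32 runs, normalized to [0, 1]
    true_w = np.array([1.5, 0.8, -0.6, 0.2, -1.0])
    y = X @ true_w + 0.05 * rng.normal(size=32)          # synthetic penetration depth

    # 28 training runs / 4 test runs, mirroring the study's split
    X_tr, y_tr, X_te, y_te = X[:28], y[:28], X[28:], y[28:]

    n_hidden, lr = 8, 0.05
    W1 = rng.normal(scale=0.3, size=(5, n_hidden)); b1 = np.zeros(n_hidden)
    w2 = rng.normal(scale=0.3, size=n_hidden);      b2 = 0.0

    for it in range(840):                                # 840 iterations, as reported
        h = np.tanh(X_tr @ W1 + b1)
        pred = h @ w2 + b2
        err = pred - y_tr                                # dMSE/dpred (up to a constant)
        gw2 = h.T @ err / len(y_tr); gb2 = err.mean()
        dh = np.outer(err, w2) * (1 - h ** 2) / len(y_tr)
        gW1 = X_tr.T @ dh;           gb1 = dh.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2

    test_pred = np.tanh(X_te @ W1 + b1) @ w2 + b2
    print("test RMSE:", np.sqrt(np.mean((test_pred - y_te) ** 2)))
    ```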

  18. A Novel Experimental and Analytical Approach to the Multimodal Neural Decoding of Intent During Social Interaction in Freely-behaving Human Infants.

    PubMed

    Cruz-Garza, Jesus G; Hernandez, Zachery R; Tse, Teresa; Caducoy, Eunice; Abibullaev, Berdakh; Contreras-Vidal, Jose L

    2015-10-04

    Understanding typical and atypical development remains one of the fundamental questions in developmental human neuroscience. Traditionally, experimental paradigms and analysis tools have been limited to constrained laboratory tasks and contexts due to technical limitations imposed by the available set of measuring and analysis techniques and the age of the subjects. These limitations severely limit the study of developmental neural dynamics and associated neural networks engaged in cognition, perception and action in infants performing "in action and in context". This protocol presents a novel approach to study infants and young children as they freely organize their own behavior, and its consequences in a complex, partly unpredictable and highly dynamic environment. The proposed methodology integrates synchronized high-density active scalp electroencephalography (EEG), inertial measurement units (IMUs), video recording and behavioral analysis to capture brain activity and movement non-invasively in freely-behaving infants. This setup allows for the study of neural network dynamics in the developing brain, in action and context, as these networks are recruited during goal-oriented, exploration and social interaction tasks.

  19. Self-learning Monte Carlo with deep neural networks

    NASA Astrophysics Data System (ADS)

    Shen, Huitao; Liu, Junwei; Fu, Liang

    2018-05-01

    The self-learning Monte Carlo (SLMC) method is a general algorithm to speed up MC simulations. Its efficiency has been demonstrated in various systems by introducing an effective model to propose global moves in the configuration space. In this paper, we show that deep neural networks can be naturally incorporated into SLMC, and without any prior knowledge can learn the original model accurately and efficiently. Demonstrated in quantum impurity models, we reduce the complexity for a local update from O(β²) in the Hirsch-Fye algorithm to O(β ln β), which is a significant speedup, especially for systems at low temperatures.
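
    The quoted complexity reduction is easy to make concrete: the ratio of O(β²) to O(β ln β) work grows as β/ln β, so the gain is largest at low temperatures (large β). The sketch below simply evaluates this ratio for a few illustrative β values.

    ```python
    import numpy as np

    # speedup of a local update: O(beta^2) (Hirsch-Fye) vs O(beta * ln(beta)) (SLMC)
    for beta in [10, 100, 1000]:                           # illustrative inverse temperatures
        ratio = (beta ** 2) / (beta * np.log(beta))        # equals beta / ln(beta)
        print(beta, round(float(ratio), 1))
    ```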

  20. ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation.

    PubMed

    Hohman, Fred; Hodas, Nathan; Chau, Duen Horng

    2017-05-01

    Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as "black-boxes" due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user's data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.

  1. The Technology of Suppressing Harmonics with Complex Neural Network is Applied to Microgrid

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Li, Zhan-Ying; Wang, Yan-ping; Li, Yang; Zong, Ke-yong

    2018-03-01

    According to the characteristics of harmonics in a microgrid, a new CANN controller, which combines BP and RBF neural networks, is proposed to control an APF to detect and suppress harmonics. This controller has the function of current prediction. In Matlab/Simulink simulations, this design shortens the delay time by nearly 0.02 s (one power supply current cycle) in comparison with the traditional controller based on the ip-iq method. The new controller also has higher compensation accuracy and better dynamic tracking characteristics; it can greatly suppress harmonics and improve power quality.

  2. Workplace injuries, safety climate and behaviors: application of an artificial neural network.

    PubMed

    Abubakar, A Mohammed; Karadal, Himmet; Bayighomog, Steven W; Merdan, Ethem

    2018-05-09

    This article proposes and tests a model for the interaction effect of the organizational safety climate and behaviors on workplace injuries. Using artificial neural network and survey data from 306 metal casting industry employees in central Anatolia, we found that an organizational safety climate mitigates workplace injuries, and safety behaviors enforce the strength of the negative impact of the safety climate on workplace injuries. The results suggest a complex relationship between the organizational safety climate, safety behavior and workplace injuries. Theoretical and practical implications are discussed in light of decreasing workplace injuries in the Anatolian metal casting industry.

  3. Neural network-based feature point descriptors for registration of optical and SAR images

    NASA Astrophysics Data System (ADS)

    Abulkhanov, Dmitry; Konovalenko, Ivan; Nikolaev, Dmitry; Savchik, Alexey; Shvets, Evgeny; Sidorchuk, Dmitry

    2018-04-01

    Registration of images of different nature is an important technique used in image fusion, change detection, efficient information representation and other problems of computer vision. Solving this task using feature-based approaches is usually more complex than registration of several optical images because traditional feature descriptors (SIFT, SURF, etc.) perform poorly when images have different nature. In this paper we consider the problem of registration of SAR and optical images. We train neural network to build feature point descriptors and use RANSAC algorithm to align found matches. Experimental results are presented that confirm the method's effectiveness.

  4. Artificial Neural Networks: an overview and their use in the analysis of the AMPHORA-3 dataset.

    PubMed

    Buscema, Paolo Massimo; Massini, Giulia; Maurelli, Guido

    2014-10-01

    The Artificial Adaptive Systems (AAS) are theories with which generative algebras are able to create artificial models simulating natural phenomena. Artificial Neural Networks (ANNs) are the most widespread and best-known learning system models among the AAS. This article provides an overview of ANNs, noting their advantages and limitations for analyzing dynamic, complex, non-linear, multidimensional processes. An example of a specific ANN application to alcohol consumption in Spain during 1961-2006, as part of the EU AMPHORA-3 project, is presented. The study's limitations are noted, and future research needs involving ANN methodologies are suggested.

  5. Functional recognition imaging using artificial neural networks: applications to rapid cellular identification via broadband electromechanical response

    NASA Astrophysics Data System (ADS)

    Nikiforov, M. P.; Reukov, V. V.; Thompson, G. L.; Vertegel, A. A.; Guo, S.; Kalinin, S. V.; Jesse, S.

    2009-10-01

    Functional recognition imaging in scanning probe microscopy (SPM) using artificial neural network identification is demonstrated. This approach utilizes statistical analysis of complex SPM responses at a single spatial location to identify the target behavior, which is reminiscent of associative thinking in the human brain, obviating the need for analytical models. We demonstrate, as an example of recognition imaging, rapid identification of cellular organisms using the difference in electromechanical activity over a broad frequency range. Single-pixel identification of model Micrococcus lysodeikticus and Pseudomonas fluorescens bacteria is achieved, demonstrating the viability of the method.

  6. Artificial neural networks application for modeling of friction stir welding effects on mechanical properties of 7075-T6 aluminum alloy

    NASA Astrophysics Data System (ADS)

    Maleki, E.

    2015-12-01

    Friction stir welding (FSW) is a relatively new solid-state joining technique that is widely adopted in manufacturing and industry to join different metallic alloys that are hard to weld by conventional fusion welding. Friction stir welding is a very complex process comprising several highly coupled physical phenomena. The complex geometry of some kinds of joints makes it difficult to develop an overall system of governing equations for the theoretical analysis of the behavior of friction stir welded joints. Weld quality is predominantly affected by the effective welding parameters, and the experiments are often time-consuming and costly. On the other hand, employing artificial intelligence (AI) systems such as artificial neural networks (ANNs) is an attractive and efficient approach to solving such science and engineering problems. In the present study, the modeling of the effective FSW parameters by ANNs is investigated. To train the networks, experimental test results on thirty AA-7075-T6 specimens are considered, and the networks are developed based on the back-propagation (BP) algorithm. ANN testing is carried out using different experimental data that are not used during network training. In this paper, the rotational speed of the tool, welding speed, axial force, shoulder diameter, pin diameter and tool hardness are regarded as inputs of the ANNs. Yield strength, tensile strength, notch-tensile strength and hardness of the welding zone are gathered as outputs of the neural networks. According to the obtained results, the predicted values for the hardness of the welding zone, yield strength, tensile strength and notch-tensile strength have the lowest mean relative error (MRE), respectively. Comparison of the predicted and experimental results confirms that the networks are adjusted carefully and that the ANN can be used for modeling the effective FSW parameters.

  7. Advanced Aeroservoelastic Testing and Data Analysis (Les Essais Aeroservoelastiques et l’Analyse des Donnees).

    DTIC Science & Technology

    1995-11-01

    …neural network-based AFS concepts… parameter estimation programs… 8.6 Neural Network Based Methods… unknown parameters of the postulated state space model… artificial neural network… i) Feed-Forward Neural Network and ii) Recurrent Neural Network [117-119]…

  8. The quadriceps muscle of knee joint modelling Using Hybrid Particle Swarm Optimization-Neural Network (PSO-NN)

    NASA Astrophysics Data System (ADS)

    Kamaruddin, Saadi Bin Ahmad; Marponga Tolos, Siti; Hee, Pah Chin; Ghani, Nor Azura Md; Ramli, Norazan Mohamed; Nasir, Noorhamizah Binti Mohamed; Ksm Kader, Babul Salam Bin; Saiful Huq, Mohammad

    2017-03-01

    Neural networks have long been known for their ability to handle complex nonlinear systems without an analytical model and to learn refined nonlinear associations. Theoretically, the best-known algorithm for training such a network is the backpropagation (BP) algorithm, which relies on the minimization of the mean square error (MSE). However, this algorithm is not fully efficient in the presence of outliers, which usually exist in dynamic data. This paper presents the modelling of the quadriceps muscle by using artificial intelligence techniques, namely combined backpropagation neural network nonlinear autoregressive (BPNN-NAR) and backpropagation neural network nonlinear autoregressive moving average (BPNN-NARMA) models, based on functional electrical stimulation (FES). We adopted a particle swarm optimization (PSO) approach to enhance the performance of the backpropagation algorithm. In this research, a series of experiments using FES was conducted, and the data obtained were used to develop the quadriceps muscle model: 934 training, 200 testing and 200 validation data sets were used in the development of the muscle model. It was found that both BPNN-NAR and BPNN-NARMA performed well in modelling this type of data. In conclusion, the neural network time series models performed reasonably efficiently for non-linear modelling, such as the active properties of the quadriceps muscle, with a single output, namely muscle force.
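
    The PSO-enhanced training idea can be sketched as follows: instead of (or in addition to) backpropagation, a particle swarm searches the weight space of a small network to minimize the mean square error. The network size, data, and PSO constants below are illustrative choices, not those of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # synthetic stimulation-to-force data standing in for the FES measurements
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(2.5 * X[:, 0]) + 0.05 * rng.normal(size=200)

    n_hidden = 6
    n_params = 1 * n_hidden + n_hidden + n_hidden + 1     # W1, b1, w2, b2

    def unpack(p):
        W1 = p[:n_hidden].reshape(1, n_hidden)
        b1 = p[n_hidden:2 * n_hidden]
        w2 = p[2 * n_hidden:3 * n_hidden]
        b2 = p[-1]
        return W1, b1, w2, b2

    def mse(p):
        W1, b1, w2, b2 = unpack(p)
        pred = np.tanh(X @ W1 + b1) @ w2 + b2
        return np.mean((pred - y) ** 2)

    # particle swarm optimization over the network weights
    n_particles, iters = 30, 300
    pos = rng.normal(scale=0.5, size=(n_particles, n_params))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([mse(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    w, c1, c2 = 0.7, 1.5, 1.5                             # inertia and acceleration (typical values)
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([mse(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()

    print("best MSE found by PSO:", pbest_val.min())
    ```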

  9. Algorithm for Training a Recurrent Multilayer Perceptron

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Rais, Omar T.; Menon, Sunil K.; Atiya, Amir F.

    2004-01-01

    An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for optimal performance in predicting the behavior of a complex, dynamic, and noisy system multiple time steps into the future. [An RMLP is a computational neural network with self-feedback and cross-talk (both delayed by one time step) among neurons in hidden layers]. Like other neural-network-training algorithms, this algorithm adjusts network biases and synaptic-connection weights according to a gradient-descent rule. The distinguishing feature of this algorithm is a combination of global feedback (the use of predictions as well as the current output value in computing the gradient at each time step) and recursiveness. The recursive aspect of the algorithm lies in the inclusion of the gradient of predictions at each time step with respect to the predictions at the preceding time step; this recursion enables the RMLP to learn the dynamics. It has been conjectured that carrying the recursion to even earlier time steps would enable the RMLP to represent a noisier, more complex system.
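
    The structure being trained can be sketched as follows: a hidden layer with delayed self-feedback is driven by measured inputs and then run free, feeding each prediction back as the next input to forecast several steps ahead. The sketch below shows only this forward rollout with placeholder weights; the paper's gradient-descent training rule itself is not reproduced here.

    ```python
    import numpy as np

    def rmlp_rollout(u_seq, W_in, W_rec, W_out, b_h, b_o, n_future):
        """Forward pass of a recurrent multilayer perceptron whose hidden layer has
        self-feedback delayed by one step, followed by an n_future-step rollout in
        which each prediction is fed back as the next input."""
        h = np.zeros(b_h.size)
        y = 0.0
        # teacher-driven phase: run the network along the measured inputs
        for u in u_seq:
            h = np.tanh(W_in * u + W_rec @ h + b_h)   # cross-talk + delayed self-feedback
            y = float(W_out @ h + b_o)
        # free-running phase: feed predictions back to forecast several steps ahead
        preds = []
        for _ in range(n_future):
            h = np.tanh(W_in * y + W_rec @ h + b_h)
            y = float(W_out @ h + b_o)
            preds.append(y)
        return preds

    rng = np.random.default_rng(0)
    n_hidden = 5
    W_in = rng.normal(scale=0.5, size=n_hidden)        # placeholder (untrained) weights
    W_rec = rng.normal(scale=0.3, size=(n_hidden, n_hidden))
    W_out = rng.normal(scale=0.5, size=n_hidden)
    b_h, b_o = np.zeros(n_hidden), 0.0

    u_seq = np.sin(0.3 * np.arange(20))                # a measured scalar signal
    print(rmlp_rollout(u_seq, W_in, W_rec, W_out, b_h, b_o, n_future=5))
    ```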

  10. Neural networks for aircraft control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  11. Neural network prediction of carbonate lithofacies from well logs, Big Bow and Sand Arroyo Creek fields, Southwest Kansas

    USGS Publications Warehouse

    Qi, L.; Carr, T.R.

    2006-01-01

    In the Hugoton Embayment of southwestern Kansas, St. Louis Limestone reservoirs have relatively low recovery efficiencies, attributed to the heterogeneous nature of the oolitic deposits. This study establishes quantitative relationships between digital well logs and core description data, and applies these relationships in a probabilistic sense to predict lithofacies in 90 uncored wells across the Big Bow and Sand Arroyo Creek fields. In 10 wells, a single hidden-layer neural network based on digital well logs and core-described lithofacies of the limestone depositional texture was used to train and establish a non-linear relationship between lithofacies assignments from detailed core descriptions and selected log curves. Neural network models were optimized by selecting six predictor variables and automated cross-validation with neural network parameters, and then used to predict lithofacies on the whole data set of 2023 half-foot intervals from the 10 cored wells with the selected network size of 35 and a damping parameter of 0.01. Predicted lithofacies compared to actual lithofacies display absolute accuracies of 70.37-90.82%. Incorporating adjoining (within-one) lithofacies improves accuracy slightly (93.72%). Digital logs from uncored wells were batch processed to predict lithofacies and probabilities related to each lithofacies at half-foot resolution corresponding to log units. The results were used to construct interpolated cross-sections, and useful depositional patterns of St. Louis lithofacies were illustrated, e.g., the concentration of oolitic deposits (including lithofacies 5 and 6) along local highs and the relative dominance of quartz-rich carbonate grainstone (lithofacies 1) in zones A and B of the St. Louis Limestone. Neural network techniques are applicable to other complex reservoirs, in which facies geometry and distribution are the key factors controlling heterogeneity and distribution of rock properties. Future work involves extension of the neural network to predict reservoir properties, and construction of three-dimensional geo-models.
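
    The two accuracy figures quoted (absolute and "within-one" lithofacies) can be computed as sketched below; the facies codes are invented for illustration and are not the study's core descriptions.

    ```python
    import numpy as np

    def facies_accuracy(actual, predicted):
        """Absolute accuracy and 'within-one' accuracy, which also credits a
        prediction that lands in an adjoining lithofacies code."""
        actual, predicted = np.asarray(actual), np.asarray(predicted)
        exact = np.mean(actual == predicted)
        within_one = np.mean(np.abs(actual - predicted) <= 1)
        return exact, within_one

    # hypothetical half-foot intervals: core-described vs. network-predicted facies codes
    actual    = [1, 2, 2, 5, 6, 6, 3, 4, 5, 1]
    predicted = [1, 2, 3, 5, 5, 6, 3, 4, 6, 2]
    print(facies_accuracy(actual, predicted))   # (0.6, 1.0)
    ```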

  12. Gamma oscillations in a nonlinear regime: a minimal model approach using heterogeneous integrate-and-fire networks.

    PubMed

    Bathellier, Brice; Carleton, Alan; Gerstner, Wulfram

    2008-12-01

    Fast oscillations and in particular gamma-band oscillation (20-80 Hz) are commonly observed during brain function and are at the center of several neural processing theories. In many cases, mathematical analysis of fast oscillations in neural networks has been focused on the transition between irregular and oscillatory firing viewed as an instability of the asynchronous activity. But in fact, brain slice experiments as well as detailed simulations of biological neural networks have produced a large corpus of results concerning the properties of fully developed oscillations that are far from this transition point. We propose here a mathematical approach to deal with nonlinear oscillations in a network of heterogeneous or noisy integrate-and-fire neurons connected by strong inhibition. This approach involves limited mathematical complexity and gives a good sense of the oscillation mechanism, making it an interesting tool to understand fast rhythmic activity in simulated or biological neural networks. A surprising result of our approach is that under some conditions, a change of the strength of inhibition only weakly influences the period of the oscillation. This is in contrast to standard theoretical and experimental models of interneuron network gamma oscillations (ING), where frequency tightly depends on inhibition strength, but it is similar to observations made in some in vitro preparations in the hippocampus and the olfactory bulb and in some detailed network models. This result is explained by the phenomenon of suppression that is known to occur in strongly coupled oscillating inhibitory networks but had not yet been related to the behavior of oscillation frequency.
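
    The regime discussed, strong shared inhibition acting on heterogeneous integrate-and-fire neurons, can be sketched with a few lines of simulation. The parameters below are illustrative and untuned, so the sketch only aims to show the basic burst-and-silence mechanism, not to reproduce the paper's analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # small network of heterogeneous leaky integrate-and-fire neurons coupled by
    # strong shared inhibition (all parameters are illustrative, not fitted)
    N, T, dt = 200, 200.0, 0.1             # neurons, duration (ms), time step (ms)
    tau_m, tau_s = 10.0, 5.0               # membrane and inhibitory synapse time constants (ms)
    v_th, v_reset, g_inh = 1.0, 0.0, 3.0   # threshold, reset, inhibition strength
    drive = rng.uniform(1.2, 1.6, size=N)  # heterogeneous suprathreshold drive

    v = rng.uniform(0.0, 1.0, size=N)      # random initial voltages
    s = 0.0                                # shared inhibitory synaptic variable
    steps = int(T / dt)
    spike_count = np.zeros(steps, dtype=int)

    for t in range(steps):
        v += dt / tau_m * (-v + drive - g_inh * s)
        fired = v >= v_th
        v[fired] = v_reset
        spike_count[t] = fired.sum()
        s += -dt / tau_s * s + fired.sum() / N   # spikes recruit shared inhibition

    # population spikes per 5 ms bin; alternating high/low counts suggest a network rhythm
    bins = spike_count.reshape(-1, int(5.0 / dt)).sum(axis=1)
    print(bins)
    ```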

  13. A hybrid framework for reservoir characterization using fuzzy ranking and an artificial neural network

    NASA Astrophysics Data System (ADS)

    Wang, Baijie; Wang, Xin; Chen, Zhangxin

    2013-08-01

    Reservoir characterization refers to the process of quantitatively assigning reservoir properties using all available field data. Artificial neural networks (ANN) have recently been introduced to solve reservoir characterization problems dealing with the complex underlying relationships inherent in well log data. Despite the utility of ANNs, the current limitation is that most existing applications simply focus on directly implementing existing ANN models instead of improving/customizing them to fit the specific reservoir characterization tasks at hand. In this paper, we propose a novel intelligent framework that integrates fuzzy ranking (FR) and multilayer perceptron (MLP) neural networks for reservoir characterization. FR can automatically identify a minimum subset of well log data as neural inputs, and the MLP is trained to learn the complex correlations from the selected well log data to a target reservoir property. FR guarantees the selection of the optimal subset of representative data from the overall well log data set for the characterization of a specific reservoir property, which implicitly improves the modeling and prediction accuracy of the MLP. In addition, a growing number of industrial agencies are implementing geographic information systems (GIS) in field data management, and we have designed the GFAR (GIS-based FR ANN Reservoir characterization) solution, which integrates the proposed framework into a GIS to provide an efficient characterization solution. Three separate petroleum wells from southwestern Alberta, Canada, were used in the presented case study of reservoir porosity characterization. Our experiments demonstrate that our method can generate reliable results.
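
    A minimal sketch of the two-stage idea follows: rank candidate log curves, keep a small subset, and train an MLP to map the selected logs to a target property (porosity here). Mutual information is used as a simple stand-in for the paper's fuzzy-ranking step, and the file and column names are assumed for illustration.

```python
# Hedged sketch: rank candidate logs, select a subset, then fit an MLP regressor.
import pandas as pd
from sklearn.feature_selection import mutual_info_regression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("well_logs.csv")                                  # assumed file
candidates = ["GR", "RHOB", "NPHI", "DT", "PE", "RT", "SP", "CALI"]  # assumed log curves

# Rank candidate inputs (stand-in for fuzzy ranking) and keep the top four.
scores = mutual_info_regression(data[candidates], data["porosity"])
selected = [c for _, c in sorted(zip(scores, candidates), reverse=True)][:4]

mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))
mlp.fit(data[selected], data["porosity"])
print("selected logs:", selected, " R^2:", mlp.score(data[selected], data["porosity"]))
```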

  14. ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing

    PubMed Central

    Rusakov, Dmitri A.; Savtchenko, Leonid P.

    2017-01-01

    Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programming skills. Here we put forward a newly developed simulation environment ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License (MIT). PMID:28362877

  15. Natural lecithin promotes neural network complexity and activity

    PubMed Central

    Latifi, Shahrzad; Tamayol, Ali; Habibey, Rouhollah; Sabzevari, Reza; Kahn, Cyril; Geny, David; Eftekharpour, Eftekhar; Annabi, Nasim; Blau, Axel; Linder, Michel; Arab-Tehrany, Elmira

    2016-01-01

    Phospholipids in the brain cell membranes contain different polyunsaturated fatty acids (PUFAs), which are critical to nervous system function and structure. In particular, brain function critically depends on the uptake of the so-called “essential” fatty acids such as omega-3 (n-3) and omega-6 (n-6) PUFAs that cannot be readily synthesized by the human body. We extracted natural lecithin rich in various PUFAs from a marine source and transformed it into nanoliposomes. These nanoliposomes increased neurite outgrowth, network complexity and neural activity of cortical rat neurons in vitro. We also observed an upregulation of synapsin I (SYN1), which supports the positive role of lecithin in synaptogenesis, synaptic development and maturation. These findings suggest that lecithin nanoliposomes enhance neuronal development, which may have an impact on devising new lecithin delivery strategies for therapeutic applications. PMID:27228907

  16. Natural lecithin promotes neural network complexity and activity.

    PubMed

    Latifi, Shahrzad; Tamayol, Ali; Habibey, Rouhollah; Sabzevari, Reza; Kahn, Cyril; Geny, David; Eftekharpour, Eftekhar; Annabi, Nasim; Blau, Axel; Linder, Michel; Arab-Tehrany, Elmira

    2016-05-27

    Phospholipids in the brain cell membranes contain different polyunsaturated fatty acids (PUFAs), which are critical to nervous system function and structure. In particular, brain function critically depends on the uptake of the so-called "essential" fatty acids such as omega-3 (n-3) and omega-6 (n-6) PUFAs that cannot be readily synthesized by the human body. We extracted natural lecithin rich in various PUFAs from a marine source and transformed it into nanoliposomes. These nanoliposomes increased neurite outgrowth, network complexity and neural activity of cortical rat neurons in vitro. We also observed an upregulation of synapsin I (SYN1), which supports the positive role of lecithin in synaptogenesis, synaptic development and maturation. These findings suggest that lecithin nanoliposomes enhance neuronal development, which may have an impact on devising new lecithin delivery strategies for therapeutic applications.

  17. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
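
    The sketch below shows the generic pattern the abstract describes: a genetic algorithm evolves binary masks over candidate inputs, scoring each mask by the cross-validated error of a small network trained on the masked inputs. It is a toy on synthetic data, not the SSME study's code.

```python
# Hedged sketch: GA-based input selection for a neural network function approximator.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    # Score a binary input mask by cross-validated R^2 of a small network.
    if mask.sum() == 0:
        return -np.inf
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    return cross_val_score(net, X[:, mask.astype(bool)], y, cv=3).mean()

def ga_select(X, y, pop_size=12, generations=8, p_mut=0.1):
    n = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])             # one-point crossover
            flip = rng.random(n) < p_mut                           # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind, X, y) for ind in pop])].astype(bool)

# Synthetic example: only the first three columns carry signal.
X = rng.normal(size=(200, 10))
y = X[:, 0] + 2 * X[:, 1] - X[:, 2] + 0.1 * rng.normal(size=200)
print("selected inputs:", np.flatnonzero(ga_select(X, y)))
```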

  18. Quantum Associative Neural Network with Nonlinear Search Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Rigui; Wang, Huian; Wu, Qian; Shi, Yang

    2012-03-01

    Based on an analysis of the properties of quantum linear superposition, and to overcome the complexity of the existing quantum associative memory proposed by Ventura, a new storage method for multiple patterns is proposed in this paper by constructing the quantum array with binary decision diagrams. In addition, the adoption of the nonlinear search algorithm increases the pattern-recalling speed of this multiple-pattern model to a time complexity of O(log2 2^(n-t)) = O(n - t), where n is the number of quantum bits and t is the quantum information of the t quantum bits. Results of case analysis show that the associative neural network model proposed in this paper, based on quantum learning, improves on other researchers' counterparts in terms of avoiding additional qubits or extraordinary initial operators, storing patterns, and improving the recalling speed.

  19. The characteristic patterns of neuronal avalanches in mice under anesthesia and at rest: An investigation using constrained artificial neural networks

    PubMed Central

    Knöpfel, Thomas; Leech, Robert

    2018-01-01

    Local perturbations within complex dynamical systems can trigger cascade-like events that spread across significant portions of the system. Cascades of this type have been observed across a broad range of scales in the brain. Studies of these cascades, known as neuronal avalanches, usually report the statistics of large numbers of avalanches, without probing the characteristic patterns produced by the avalanches themselves. This is partly due to limitations in the extent or spatiotemporal resolution of commonly used neuroimaging techniques. In this study, we overcome these limitations by using optical voltage (genetically encoded voltage indicators) imaging. This allows us to record cortical activity in vivo across an entire cortical hemisphere, at both high spatial (~30um) and temporal (~20ms) resolution in mice that are either in an anesthetized or awake state. We then use artificial neural networks to identify the characteristic patterns created by neuronal avalanches in our data. The avalanches in the anesthetized cortex are most accurately classified by an artificial neural network architecture that simultaneously connects spatial and temporal information. This is in contrast with the awake cortex, in which avalanches are most accurately classified by an architecture that treats spatial and temporal information separately, due to the increased levels of spatiotemporal complexity. This is in keeping with reports of higher levels of spatiotemporal complexity in the awake brain coinciding with features of a dynamical system operating close to criticality. PMID:29795654

  20. Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model

    NASA Astrophysics Data System (ADS)

    Mejer Hansen, Thomas

    2017-04-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. However, in practice these methods can be associated with huge computational costs that limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically, such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival travel time inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic travel time picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
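
    A compact sketch of the core idea follows: an expensive forward model is replaced by a neural-network surrogate inside a Metropolis sampler, and the surrogate's residual covariance is added to the data covariance so the modeling error is accounted for during inversion. The two-parameter "physics" here is a toy stand-in for the cross-hole GPR forward problem.

```python
# Hedged sketch: Metropolis sampling with a neural-network surrogate forward model
# whose modeling error is folded into the likelihood covariance.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def forward_slow(m):
    # Toy stand-in for an expensive forward simulation.
    return np.array([np.sin(m[0]) + m[1] ** 2, m[0] * m[1], np.cos(m[1])])

# Train the surrogate on prior samples and estimate its error covariance.
M_train = rng.uniform(-2, 2, size=(2000, 2))
D_train = np.array([forward_slow(m) for m in M_train])
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
surrogate.fit(M_train, D_train)
C_model = np.cov((D_train - surrogate.predict(M_train)).T)   # modeling-error covariance

m_true = np.array([0.7, -1.2])
C_data = 0.01 * np.eye(3)
d_obs = forward_slow(m_true) + rng.multivariate_normal(np.zeros(3), C_data)
C_total_inv = np.linalg.inv(C_data + C_model)                # data + modeling error

def log_like(m):
    r = d_obs - surrogate.predict(m[None, :])[0]
    return -0.5 * r @ C_total_inv @ r

# Plain Metropolis sampling over a uniform prior box.
m = rng.uniform(-2, 2, 2)
ll = log_like(m)
samples = []
for _ in range(20000):
    prop = m + 0.1 * rng.normal(size=2)
    if np.all(np.abs(prop) <= 2):
        ll_prop = log_like(prop)
        if np.log(rng.random()) < ll_prop - ll:
            m, ll = prop, ll_prop
    samples.append(m.copy())
print("posterior mean:", np.mean(samples[5000:], axis=0), "true:", m_true)
```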

  1. Understanding the Implications of Neural Population Activity on Behavior

    NASA Astrophysics Data System (ADS)

    Briguglio, John

    Learning how neural activity in the brain leads to the behavior we exhibit is one of the fundamental questions in Neuroscience. In this dissertation, several lines of work are presented that use principles of neural coding to understand behavior. In one line of work, we formulate the efficient coding hypothesis in a non-traditional manner in order to test human perceptual sensitivity to complex visual textures. We find a striking agreement between how variable a particular texture signal is and how sensitive humans are to its presence. This reveals that the efficient coding hypothesis is still a guiding principle for neural organization beyond the sensory periphery, and that the nature of cortical constraints differs from the peripheral counterpart. In another line of work, we relate frequency discrimination acuity to neural responses from auditory cortex in mice. It has been previously observed that optogenetic manipulation of auditory cortex, in addition to changing neural responses, evokes changes in behavioral frequency discrimination. We are able to account for changes in frequency discrimination acuity on an individual basis by examining the Fisher information from the neural population with and without optogenetic manipulation. In the third line of work, we address the question of what a neural population should encode given that its inputs are responses from another group of neurons. Drawing inspiration from techniques in machine learning, we train Deep Belief Networks on fake retinal data and show the emergence of Gabor-like filters, reminiscent of responses in primary visual cortex. In the last line of work, we model the state of a cortical excitatory-inhibitory network during complex adaptive stimuli. Using a rate model with Wilson-Cowan dynamics, we demonstrate that simple non-linearities in the signal transferred from inhibitory to excitatory neurons can account for real neural recordings taken from auditory cortex. This work establishes and tests a variety of hypotheses that will be useful in helping to understand the relationship between neural activity and behavior as recorded neural populations continue to grow.

  2. Time Series Neural Network Model for Part-of-Speech Tagging Indonesian Language

    NASA Astrophysics Data System (ADS)

    Tanadi, Theo

    2018-03-01

    Part-of-speech tagging (POS tagging) is an important task in natural language processing. Many methods have been used for this task, including neural networks. This paper models a neural network that attempts to do POS tagging. A time-series neural network is modelled to solve the problems that a basic neural network faces when attempting to do POS tagging. In order to enable the neural network to accept text input, the text data is first clustered using Brown Clustering, resulting in a binary dictionary that the neural network can use. To further improve the accuracy of the neural network, other features such as the POS tag, suffix, and affix of previous words are also fed to the neural network.
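
    The toy sketch below illustrates the feature idea only: each word is represented by a binary cluster code (a made-up stand-in for Brown-clustering output) plus a one-hot of the previous tag and a simple suffix flag, and an MLP predicts the current tag. The tiny corpus, codes, and tag set are purely illustrative.

```python
# Toy sketch: cluster-code + previous-tag features feeding an MLP tagger.
import numpy as np
from sklearn.neural_network import MLPClassifier

cluster_code = {"saya": "0001", "makan": "0110", "nasi": "0011",
                "dia": "0001", "minum": "0110", "teh": "0011"}   # pretend Brown clusters
tags = {"PRON": 0, "VERB": 1, "NOUN": 2}
corpus = [("saya", "PRON"), ("makan", "VERB"), ("nasi", "NOUN"),
          ("dia", "PRON"), ("minum", "VERB"), ("teh", "NOUN")]

def featurize(word, prev_tag):
    bits = [int(b) for b in cluster_code[word]]     # binary cluster code
    prev = [0, 0, 0, 0]                             # one-hot over 3 tags + sentence start
    prev[prev_tag] = 1
    suffix_flag = [1 if word.endswith("an") else 0] # crude suffix feature
    return bits + prev + suffix_flag

X, y, prev = [], [], 3                              # 3 = sentence-start marker
for word, tag in corpus:
    X.append(featurize(word, prev))
    y.append(tags[tag])
    prev = tags[tag]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
clf.fit(np.array(X), np.array(y))
print(clf.predict([featurize("dia", 3)]))           # expect the PRON class
```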

  3. The Neural Border: Induction, Specification and Maturation of the territory that generates Neural Crest cells.

    PubMed

    Pla, Patrick; Monsoro-Burq, Anne H

    2018-05-28

    The neural crest is induced at the edge between the neural plate and the nonneural ectoderm, in an area called the neural (plate) border, during gastrulation and neurulation. In recent years, many studies have explored how this domain is patterned, and how the neural crest is induced within this territory, which also contributes to the prospective dorsal neural tube, the dorsalmost nonneural ectoderm, as well as placode derivatives in the anterior area. This review highlights the tissue interactions, the cell-cell signaling and the molecular mechanisms involved in this dynamic spatiotemporal patterning, resulting in the induction of the premigratory neural crest. Collectively, these studies allow the construction of a complex neural border and early neural crest gene regulatory network, composed mostly of transcriptional regulations but also, more recently, including novel signaling interactions. Copyright © 2018. Published by Elsevier Inc.

  4. A convolutional neural network neutrino event classifier

    DOE PAGES

    Aurisano, A.; Radovic, A.; Rocco, D.; ...

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.
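
    For readers unfamiliar with the building blocks, the sketch below is a deliberately tiny CNN classifier in the spirit of (but much smaller than) CVN: 2D detector "images" pass through convolutional layers and a fully connected head that outputs interaction-class scores. Input shape, class count, and the random data are illustrative assumptions.

```python
# Hedged sketch: a minimal CNN classifier for detector-like 2D hit maps (PyTorch).
import torch
import torch.nn as nn

class TinyCVN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 25 * 25, n_classes))

    def forward(self, x):                       # x: (batch, 1, 100, 100) hit maps
        return self.head(self.features(x))

model = TinyCVN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random data standing in for calorimeter images.
x = torch.randn(8, 1, 100, 100)
y = torch.randint(0, 4, (8,))
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```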

  5. A convolutional neural network neutrino event classifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aurisano, A.; Radovic, A.; Rocco, D.

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  6. Classification of time-of-flight secondary ion mass spectrometry spectra from complex Cu-Fe sulphides by principal component analysis and artificial neural networks.

    PubMed

    Kalegowda, Yogesh; Harmer, Sarah L

    2013-01-08

    Artificial neural network (ANN) and hybrid principal component analysis-artificial neural network (PCA-ANN) classifiers have been successfully implemented for classification of static time-of-flight secondary ion mass spectrometry (ToF-SIMS) mass spectra collected from complex Cu-Fe sulphides (chalcopyrite, bornite, chalcocite and pyrite) at different flotation conditions. ANNs are very good pattern classifiers because of: their ability to learn and generalise patterns that are not linearly separable; their fault and noise tolerance capability; and high parallelism. In the first approach, fragments from the whole ToF-SIMS spectrum were used as input to the ANN; the model yielded high overall correct classification rates of 100% for feed samples, 88% for conditioned feed samples and 91% for Eh modified samples. In the second approach, the hybrid pattern classifier PCA-ANN was integrated. PCA is a very effective multivariate data analysis tool applied to enhance species features and reduce data dimensionality. Principal component (PC) scores, which accounted for 95% of the raw spectral data variance, were used as input to the ANN; the model yielded high overall correct classification rates of 88% for conditioned feed samples and 95% for Eh modified samples. Copyright © 2012 Elsevier B.V. All rights reserved.
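
    A minimal sketch of the hybrid PCA-ANN pipeline follows: standardize the spectra, keep the principal components explaining 95% of the variance, and classify with a small neural network. The data files and labels are assumptions for illustration.

```python
# Hedged sketch: PCA (95% variance) feeding an MLP classifier for mass spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

spectra = np.load("tof_sims_spectra.npy")      # (n_samples, n_mass_channels), assumed file
labels = np.load("mineral_labels.npy")         # e.g. chalcopyrite / bornite / ..., assumed file

pca_ann = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),                    # keep PCs covering 95% of the variance
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=3000, random_state=0),
)
print("cross-validated accuracy:", cross_val_score(pca_ann, spectra, labels, cv=5).mean())
```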

  7. A program for the Bayesian Neural Network in the ROOT framework

    NASA Astrophysics Data System (ADS)

    Zhong, Jiahang; Huang, Run-Sheng; Lee, Shih-Chang

    2011-12-01

    We present a Bayesian Neural Network algorithm implemented in the TMVA package (Hoecker et al., 2007 [1]), within the ROOT framework (Brun and Rademakers, 1997 [2]). Compared to the conventional use of a neural network as a discriminator, this new implementation has advantages as a non-parametric regression tool, particularly for fitting probabilities. It provides functionalities including cost function selection, complexity control and uncertainty estimation. An example of such an application in High Energy Physics is shown. The algorithm is available with ROOT releases later than 5.29. Program summary: Program title: TMVA-BNN; Catalogue identifier: AEJX_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJX_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: BSD license; No. of lines in distributed program, including test data, etc.: 5094; No. of bytes in distributed program, including test data, etc.: 1,320,987; Distribution format: tar.gz; Programming language: C++; Computer: any computer system or cluster with a C++ compiler and a UNIX-like operating system; Operating system: most UNIX/Linux systems (the application programs were thoroughly tested under Fedora and Scientific Linux CERN); Classification: 11.9; External routines: ROOT package version 5.29 or higher (http://root.cern.ch); Nature of problem: non-parametric fitting of multivariate distributions; Solution method: an implementation of a neural network following the Bayesian statistical interpretation, using the Laplace approximation for the Bayesian marginalizations and providing automatic complexity control and uncertainty estimation; Running time: time consumption for training depends substantially on the size of the input sample, the NN topology, the number of training iterations, etc. For the example in this manuscript, about 7 min was used on a PC/Linux with 2.0 GHz processors.

  8. Plasticity of brain wave network interactions and evolution across physiologic states

    PubMed Central

    Liu, Kang K. L.; Bartsch, Ronny P.; Lin, Aijing; Mantegna, Rosario N.; Ivanov, Plamen Ch.

    2015-01-01

    Neural plasticity transcends a range of spatio-temporal scales and serves as the basis of various brain activities and physiologic functions. At the microscopic level, it enables the emergence of brain waves with complex temporal dynamics. At the macroscopic level, presence and dominance of specific brain waves is associated with important brain functions. The role of neural plasticity at different levels in generating distinct brain rhythms, and how brain rhythms communicate with each other across brain areas to generate physiologic states and functions, remains poorly understood. Here we perform an empirical exploration of neural plasticity at the level of brain wave network interactions representing dynamical communications within and between different brain areas in the frequency domain. We introduce the concept of time delay stability (TDS) to quantify coordinated bursts in the activity of brain waves, and we employ a system-wide Network Physiology integrative approach to probe the network of coordinated brain wave activations and its evolution across physiologic states. We find an association between network structure and physiologic states. We uncover a hierarchical reorganization in the brain wave networks in response to changes in physiologic state, indicating new aspects of neural plasticity at the integrated level. Globally, we find that the entire brain network undergoes a pronounced transition from low connectivity in Deep Sleep and REM to high connectivity in Light Sleep and Wake. In contrast, we find that locally, different brain areas exhibit different network dynamics of brain wave interactions to achieve differentiation in function during different sleep stages. Moreover, our analyses indicate that plasticity also emerges in frequency-specific networks, which represent interactions across brain locations mediated through a specific frequency band. Comparing frequency-specific networks within the same physiologic state, we find very different degrees of network connectivity and link strength, while at the same time each frequency-specific network is characterized by a different signature pattern of sleep-stage stratification, reflecting a remarkable flexibility in response to change in physiologic state. These new aspects of neural plasticity demonstrate that in addition to dominant brain waves, the network of brain wave interactions is a previously unrecognized hallmark of physiologic state and function. PMID:26578891

  9. Drug release control and system understanding of sucrose esters matrix tablets by artificial neural networks.

    PubMed

    Chansanroj, Krisanin; Petrović, Jelena; Ibrić, Svetlana; Betz, Gabriele

    2011-10-09

    Artificial neural networks (ANNs) were applied for system understanding and prediction of drug release properties from direct compacted matrix tablets using sucrose esters (SEs) as matrix-forming agents for controlled release of a highly water soluble drug, metoprolol tartrate. Complexity of the system was presented through the effects of SE concentration and tablet porosity at various hydrophilic-lipophilic balance (HLB) values of SEs ranging from 0 to 16. Both effects contributed to release behaviors, especially in the system containing hydrophilic SEs where swelling phenomena occurred. A self-organizing map neural network (SOM) was applied to visualize the interrelations among the variables, and multilayer perceptron neural networks (MLPs) were employed to generalize the system and predict the drug release properties based on the HLB value and concentration of SEs and the tablet properties, i.e., tablet porosity, volume and tensile strength. Accurate prediction was obtained after systematically optimizing network performance based on the learning algorithm of the MLP. Drug release was mainly attributed to the effects of SEs, tablet volume and tensile strength in multi-dimensional interrelation, whereas tablet porosity had only a small impact. The ability to generalize the system and accurately predict the drug release properties proves the validity of SOM and MLPs for the formulation modeling of direct compacted matrix tablets containing controlled release agents of different material properties. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Supervised Learning Based on Temporal Coding in Spiking Neural Networks.

    PubMed

    Mostafa, Hesham

    2017-08-01

    Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the spike generation hard nonlinearity and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry directly to the training of such spiking networks as we show when training on the permutation invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information.

  11. PSF estimation for defocus blurred image based on quantum back-propagation neural network

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Zhang, Yan; Shao, Xiao-guang; Liu, Ying-hui; Ni, Guoqiang

    2010-11-01

    Images obtained by an aberration-free system are defocus-blurred due to motion in depth and/or zooming. The precondition for restoring the degraded image is to estimate the point spread function (PSF) of the imaging system as precisely as possible. However, it is difficult to identify an analytic model of the PSF precisely due to the complexity of the degradation process. Inspired by the similarity between the quantum process and the imaging process in the fields of probability and statistics, a reformed multilayer quantum neural network (QNN) is proposed to estimate the PSF of a defocus-blurred image. Different from a conventional artificial neural network (ANN), an improved quantum neuron model is used in the hidden layer, which introduces a 2-bit controlled-NOT quantum gate to control the output and adopts two texture and edge features as the input vectors. The supervised back-propagation learning rule is adopted to train the network based on training sets from historical images. Test results show that this method achieves high precision and strong generalization ability.

  12. Unsupervised sputum color image segmentation for lung cancer diagnosis based on a Hopfield neural network

    NASA Astrophysics Data System (ADS)

    Sammouda, Rachid; Niki, Noboru; Nishitani, Hiroshi; Nakamura, S.; Mori, Shinichiro

    1997-04-01

    The paper presents a method for automatic segmentation of sputum cells in color images, to develop an efficient algorithm for lung cancer diagnosis based on a Hopfield neural network. We formulate the segmentation problem as the minimization of an energy function constructed with two terms: a cost term defined as a sum of squared errors, and a second term of temporary noise added to the network as an excitation to escape certain local minima and thereby come closer to the global minimum. To increase the accuracy in segmenting the regions of interest, a preclassification technique is used to extract the sputum cell regions within the color image and remove those of the debris cells. The extracted regions are then given, together with the raw image, as input to the Hopfield neural network to make a crisp segmentation by assigning each pixel a label such as background, cytoplasm, or nucleus. The proposed technique has yielded correct segmentation of complex scenes of sputum prepared by an ordinary manual staining method in most of the tested images selected from our database containing thousands of sputum color images.

  13. Physical Realization of a Supervised Learning System Built with Organic Memristive Synapses

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Pu; Bennett, Christopher H.; Cabaret, Théo; Vodenicarevic, Damir; Chabi, Djaafar; Querlioz, Damien; Jousselme, Bruno; Derycke, Vincent; Klein, Jacques-Olivier

    2016-09-01

    Multiple modern applications of electronics call for inexpensive chips that can perform complex operations on natural data with limited energy. A vision for accomplishing this is implementing hardware neural networks, which fuse computation and memory, with low cost organic electronics. A challenge, however, is the implementation of synapses (analog memories) composed of such materials. In this work, we introduce robust, rapidly programmable, nonvolatile organic memristive nanodevices based on electrografted redox complexes that implement synapses thanks to a wide range of accessible intermediate conductivity states. We demonstrate experimentally an elementary neural network, capable of learning functions, which combines four pairs of organic memristors as synapses and conventional electronics as neurons. Our architecture is highly resilient to issues caused by imperfect devices. It tolerates inter-device variability, and an adaptable learning rule offers immunity against asymmetries in device switching. Highly compliant with conventional fabrication processes, the system can be extended to larger computing systems capable of complex cognitive tasks, as demonstrated in complementary simulations.

  14. Neural network-based brain tissue segmentation in MR images using extracted features from intraframe coding in H.264

    NASA Astrophysics Data System (ADS)

    Jafari, Mehdi; Kasaei, Shohreh

    2012-01-01

    Automatic brain tissue segmentation is a crucial task in the analysis of medical images for diagnosis and treatment. This paper presents a new algorithm to segment different brain tissues, such as white matter (WM), gray matter (GM), cerebral spinal fluid (CSF), background (BKG), and tumor tissues. The proposed technique uses modified intraframe coding from H.264/AVC for feature extraction. Extracted features are then fed to an artificial back-propagation neural network (BPN) classifier to assign each block to its appropriate class. Since the newest coding standard, H.264/AVC, has the highest compression ratio, it decreases the dimension of the extracted features and thus yields a more accurate classifier with low computational complexity. The performance of the BPN classifier is evaluated in terms of classification accuracy and computational complexity. The results show that the proposed technique is more robust and effective, with low computational complexity, compared to other recent works.

  15. Neural network-based brain tissue segmentation in MR images using extracted features from intraframe coding in H.264

    NASA Astrophysics Data System (ADS)

    Jafari, Mehdi; Kasaei, Shohreh

    2011-12-01

    Automatic brain tissue segmentation is a crucial task in the analysis of medical images for diagnosis and treatment. This paper presents a new algorithm to segment different brain tissues, such as white matter (WM), gray matter (GM), cerebral spinal fluid (CSF), background (BKG), and tumor tissues. The proposed technique uses modified intraframe coding from H.264/AVC for feature extraction. Extracted features are then fed to an artificial back-propagation neural network (BPN) classifier to assign each block to its appropriate class. Since the newest coding standard, H.264/AVC, has the highest compression ratio, it decreases the dimension of the extracted features and thus yields a more accurate classifier with low computational complexity. The performance of the BPN classifier is evaluated in terms of classification accuracy and computational complexity. The results show that the proposed technique is more robust and effective, with low computational complexity, compared to other recent works.

  16. An Intelligent Gear Fault Diagnosis Methodology Using a Complex Wavelet Enhanced Convolutional Neural Network

    PubMed Central

    Sun, Weifang; Yao, Bin; Zeng, Nianyin; He, Yuchao; Cao, Xincheng; He, Wangpeng

    2017-01-01

    As a typical example of large and complex mechanical systems, rotating machinery is prone to diversified sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction is generated in gear transmission chains. Although they can be collected via vibration signals, the fault signatures are always submerged in overwhelming interfering contents. Therefore, identifying the critical fault’s characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault’s characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale signal’s features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise a fault feature from the multiscale signal features. The experiment results of the recognition for gear faults show the feasibility and effectiveness of the proposed method, especially in the gear’s weak fault features. PMID:28773148

  17. Gas Chromatography Data Classification Based on Complex Coefficients of an Autoregressive Model

    DOE PAGES

    Zhao, Weixiang; Morgan, Joshua T.; Davis, Cristina E.

    2008-01-01

    This paper introduces autoregressive (AR) modeling as a novel method to classify outputs from gas chromatography (GC). The inverse Fourier transformation was applied to the original sensor data, and then an AR model was applied to the transformed data to generate complex AR model coefficients. This series of coefficients effectively contains a compressed version of all of the information in the original GC signal output. We applied this method to chromatograms resulting from proliferating bacteria species grown in culture. Three types of neural networks were used to classify the AR coefficients: backward propagating neural network (BPNN), radial basis function-principal component analysis (RBF-PCA) approach, and radial basis function-partial least squares regression (RBF-PLSR) approach. This exploratory study demonstrates the feasibility of using complex root coefficient patterns to distinguish various classes of experimental data, such as those from the different bacteria species. This cognition approach also proved to be robust and potentially useful for freeing us from time alignment of GC signals.
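
    The sketch below illustrates the feature pipeline on synthetic signals: inverse-transform each spectrum, fit an AR(p) model by least squares, take the complex roots of the AR characteristic polynomial as a compressed fingerprint, and feed their real/imaginary parts to a neural-network classifier. The AR order, the synthetic "species", and the classifier settings are all illustrative.

```python
# Hedged sketch: complex AR-root features from (inverse-FFT'd) signals, then an MLP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def ar_root_features(signal, order=6):
    x = np.real(np.fft.ifft(signal))                 # back to the "time" domain
    x = (x - x.mean()) / x.std()
    # Least-squares AR fit: x[t] ~ sum_k a_k * x[t-k]
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    roots = np.roots(np.concatenate([[1.0], -a]))    # complex roots of the AR polynomial
    roots = roots[np.argsort(-np.abs(roots))]        # fixed ordering for a stable feature vector
    return np.concatenate([roots.real, roots.imag])

def make_signal(freq):
    # Two synthetic "species" differ only in their dominant oscillation frequency.
    t = np.arange(512)
    return np.fft.fft(np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=512))

X = np.array([ar_root_features(make_signal(f)) for f in ([0.05] * 40 + [0.12] * 40)])
y = np.array([0] * 40 + [1] * 40)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
print("training accuracy:", clf.fit(X, y).score(X, y))
```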

  18. Physical Realization of a Supervised Learning System Built with Organic Memristive Synapses.

    PubMed

    Lin, Yu-Pu; Bennett, Christopher H; Cabaret, Théo; Vodenicarevic, Damir; Chabi, Djaafar; Querlioz, Damien; Jousselme, Bruno; Derycke, Vincent; Klein, Jacques-Olivier

    2016-09-07

    Multiple modern applications of electronics call for inexpensive chips that can perform complex operations on natural data with limited energy. A vision for accomplishing this is implementing hardware neural networks, which fuse computation and memory, with low cost organic electronics. A challenge, however, is the implementation of synapses (analog memories) composed of such materials. In this work, we introduce robust, fastly programmable, nonvolatile organic memristive nanodevices based on electrografted redox complexes that implement synapses thanks to a wide range of accessible intermediate conductivity states. We demonstrate experimentally an elementary neural network, capable of learning functions, which combines four pairs of organic memristors as synapses and conventional electronics as neurons. Our architecture is highly resilient to issues caused by imperfect devices. It tolerates inter-device variability and an adaptable learning rule offers immunity against asymmetries in device switching. Highly compliant with conventional fabrication processes, the system can be extended to larger computing systems capable of complex cognitive tasks, as demonstrated in complementary simulations.

  19. Stability analysis and synchronization in discrete-time complex networks with delayed coupling

    NASA Astrophysics Data System (ADS)

    Cheng, Ranran; Peng, Mingshu; Yu, Weibin; Sun, Bo; Yu, Jinchen

    2013-12-01

    A new network of coupled maps is proposed in which the connections between units involve no delays but the intra-neural communication does, whereas in the work of Atay et al. [Phys. Rev. Lett. 92, 144101 (2004)], the focus is on information processing delayed by the inter-neural communication. We show that the synchronization of the network depends not only on the intrinsic dynamical features and inter-connection topology (characterized by the spectrum of the graph Laplacian) but also on the delays and the coupling strength. There are two main findings: (i) the more neighbours a unit has, the more easily the network synchronizes; (ii) odd delays are easier to synchronize than even ones. In addition, compared with those discussed by Atay et al. [Phys. Rev. Lett. 92, 144101 (2004)], our model has a better synchronizability for regular networks and small-world variants.
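
    The toy sketch below loosely mirrors the setup: discrete-time logistic maps on a ring in which each unit's own (intra-unit) feedback is delayed while the coupling to neighbours is instantaneous, with a simple synchronization error tracked over time. The map, ring size, coupling strength, and delay are illustrative, not the cited model's parameters.

```python
# Toy sketch: coupled logistic maps on a ring with delayed self-feedback.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: 4.0 * x * (1.0 - x)          # chaotic logistic map
N, k, eps, tau, T = 50, 4, 0.4, 3, 2000    # units, neighbours per side, coupling, delay, steps

# Ring adjacency: each unit coupled to k nearest neighbours on each side (row-normalized).
A = np.zeros((N, N))
for i in range(N):
    for d in range(1, k + 1):
        A[i, (i + d) % N] = A[i, (i - d) % N] = 1.0
A /= A.sum(axis=1, keepdims=True)

history = [rng.random(N) for _ in range(tau + 1)]   # state buffer implementing the delay
sync_err = []
for _ in range(T):
    current = history[-1]                           # x(t)
    delayed = history[0]                            # x(t - tau)
    x_new = (1 - eps) * f(delayed) + eps * A @ f(current)
    history.append(x_new)
    history.pop(0)
    sync_err.append(np.std(x_new))                  # 0 when all units coincide

print("late-time synchronization error:", np.mean(sync_err[-200:]))
```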

  20. Common neural correlates of intertemporal choices and intelligence in adolescents.

    PubMed

    Ripke, Stephan; Hübner, Thomas; Mennigen, Eva; Müller, Kathrin U; Li, Shu-Chen; Smolka, Michael N

    2015-02-01

    Converging behavioral evidence indicates that temporal discounting, measured by intertemporal choice tasks, is inversely related to intelligence. At the neural level, the parieto-frontal network is pivotal for complex, higher-order cognitive processes. Relatedly, underrecruitment of the pFC during a working memory task has been found to be associated with steeper temporal discounting. Furthermore, this network has also been shown to be related to the consistency of intertemporal choices. Here we report an fMRI study that directly investigated the association of neural correlates of intertemporal choice behavior with intelligence in an adolescent sample (n = 206; age 13.7-15.5 years). After identifying brain regions where the BOLD response during intertemporal choice was correlated with individual differences in intelligence, we further tested whether BOLD responses in these areas would mediate the associations between intelligence, the discounting rate, and choice consistency. We found positive correlations between BOLD response in a value-independent decision network (i.e., dorsolateral pFC, precuneus, and occipital areas) and intelligence. Furthermore, BOLD response in a value-dependent decision network (i.e., perigenual ACC, inferior frontal gyrus, ventromedial pFC, ventral striatum) was positively correlated with intelligence. The mediation analysis revealed that BOLD responses in the value-independent network mediated the association between intelligence and choice consistency, whereas BOLD responses in the value-dependent network mediated the association between intelligence and the discounting rate. In summary, our findings provide evidence for common neural correlates of intertemporal choice and intelligence, possibly linked by valuation as well as executive functions.

  1. Accelerating Chemical Discovery with Machine Learning: Simulated Evolution of Spin Crossover Complexes with an Artificial Neural Network.

    PubMed

    Janet, Jon Paul; Chan, Lydia; Kulik, Heather J

    2018-03-01

    Machine learning (ML) has emerged as a powerful complement to simulation for materials discovery by reducing time for evaluation of energies and properties at accuracy competitive with first-principles methods. We use genetic algorithm (GA) optimization to discover unconventional spin-crossover complexes in combination with efficient scoring from an artificial neural network (ANN) that predicts spin-state splitting of inorganic complexes. We explore a compound space of over 5600 candidate materials derived from eight metal/oxidation state combinations and a 32-ligand pool. We introduce a strategy for error-aware ML-driven discovery by limiting how far the GA travels away from the nearest ANN training points while maximizing property (i.e., spin-splitting) fitness, leading to discovery of 80% of the leads from full chemical space enumeration. Over a 51-complex subset, average unsigned errors (4.5 kcal/mol) are close to the ANN's baseline 3 kcal/mol error. By obtaining leads from the trained ANN within seconds rather than days from a DFT-driven GA, this strategy demonstrates the power of ML for accelerating inorganic material discovery.
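
    A schematic sketch of the "error-aware" search idea follows: a genetic algorithm maximizes a surrogate model's predicted property but discards candidates whose distance to the nearest training point exceeds a trust radius, keeping the search where the surrogate is reliable. The 2-D descriptors, surrogate, and radius are toy assumptions, not the paper's ANN or ligand space.

```python
# Hedged sketch: GA search over a surrogate with a distance-to-training-data constraint.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy training set: 2-D "descriptors" with a known property, used to fit the surrogate.
X_train = rng.uniform(-1, 1, size=(300, 2))
y_train = np.sin(3 * X_train[:, 0]) + X_train[:, 1] ** 2
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=4000, random_state=0)
surrogate.fit(X_train, y_train)

TRUST_RADIUS = 0.15                            # illustrative "how far from training data" limit

def fitness(candidates):
    dists = np.min(np.linalg.norm(candidates[:, None, :] - X_train[None, :, :], axis=2), axis=1)
    preds = surrogate.predict(candidates)
    preds[dists > TRUST_RADIUS] = -np.inf      # discard leads too far from the training set
    return preds

pop = rng.uniform(-1.5, 1.5, size=(40, 2))
for _ in range(30):
    scores = fitness(pop)
    parents = pop[np.argsort(scores)[-20:]]                         # keep the fittest half
    children = parents[rng.integers(20, size=20)] + 0.05 * rng.normal(size=(20, 2))
    pop = np.vstack([parents, children])

best = pop[np.argmax(fitness(pop))]
print("best candidate:", best, "predicted property:", surrogate.predict(best[None])[0])
```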

  2. Weather forecasting based on hybrid neural model

    NASA Astrophysics Data System (ADS)

    Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.

    2017-11-01

    Making deductions and predictions about climate has been a challenge throughout mankind's history. Accurate meteorological guidance helps to foresee and handle problems well in time. Different strategies have been investigated using various machine learning techniques in reported forecasting systems. The current research investigates climate as a major challenge for machine information mining and deduction. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model ensures more precise forecasting, given the specialized nature of climate-forecasting frameworks. The study concentrates on data representing Saudi Arabian weather forecasting. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative moistness, precipitation, normal wind speed, high wind speed and average cloudiness. The output layer is composed of two neurons representing rainy and dry weather. Moreover, a trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. Correlation coefficient, RMSE and scatter index are the standard yardsticks adopted for forecast accuracy measurement. Individually, MLP forecasting results are better than those of the RBF; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.

  3. High-performance reconfigurable hardware architecture for restricted Boltzmann machines.

    PubMed

    Ly, Daniel Le; Chow, Paul

    2010-11-01

    Despite the popularity and success of neural networks in research, the number of resulting commercial or industrial applications has been limited. A primary cause for this lack of adoption is that neural networks are usually implemented as software running on general-purpose processors. Hence, a hardware implementation that can exploit the inherent parallelism in neural networks is desired. This paper investigates how the restricted Boltzmann machine (RBM), which is a popular type of neural network, can be mapped to a high-performance hardware architecture on field-programmable gate array (FPGA) platforms. The proposed modular framework is designed to reduce the time complexity of the computations through heavily customized hardware engines. A method to partition large RBMs into smaller congruent components is also presented, allowing the distribution of one RBM across multiple FPGA resources. The framework is tested on a platform of four Xilinx Virtex II-Pro XC2VP70 FPGAs running at 100 MHz through a variety of different configurations. The maximum performance was obtained by instantiating an RBM of 256 × 256 nodes distributed across four FPGAs, which resulted in a computational speed of 3.13 billion connection-updates-per-second and a speedup of 145-fold over an optimized C program running on a 2.8-GHz Intel processor.
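
    For context, the sketch below spells out (in plain NumPy) one contrastive-divergence (CD-1) update for a small binary RBM, i.e., the inner computation that the FPGA architecture parallelizes; the layer sizes and learning rate are illustrative.

```python
# Hedged sketch: a single CD-1 update for a small binary restricted Boltzmann machine.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 64, 32, 0.05
W = 0.01 * rng.normal(size=(n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    # Up pass: sample hidden units given the data vector.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Down-up pass: reconstruct visibles, then recompute hidden probabilities.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    # Gradient estimates from the difference of data and reconstruction statistics.
    return np.outer(v0, p_h0) - np.outer(v1, p_h1), v0 - v1, p_h0 - p_h1

v = (rng.random(n_visible) < 0.5).astype(float)      # one random binary "training" vector
dW, db_v, db_h = cd1_step(v)
W += lr * dW
b_v += lr * db_v
b_h += lr * db_h
print("update norm:", np.linalg.norm(dW))
```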

  4. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices

    PubMed Central

    Gokmen, Tayfun; Onen, Murat; Haensch, Wilfried

    2017-01-01

    In a previous work we have detailed the requirements for obtaining maximal deep learning performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices. Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays significantly affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits and that can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures and therefore enables applicability of the RPU approach to a large class of neural network architectures. PMID:29066942

  5. A hybrid artificial neural network as a software sensor for optimal control of a wastewater treatment process.

    PubMed

    Choi, D J; Park, H

    2001-11-01

    For control and automation of biological treatment processes, lack of reliable on-line sensors to measure water quality parameters is one of the most important problems to overcome. Many parameters cannot be measured directly with on-line sensors. The accuracy of existing hardware sensors is also not sufficient, and maintenance problems such as electrode fouling often cause trouble. This paper deals with the development of software sensor techniques that estimate the target water quality parameter from other parameters using the correlation between water quality parameters. We focus our attention on the preprocessing of noisy data and the selection of the model best suited to the situation. Problems of existing approaches are also discussed. We propose a hybrid neural network as a software sensor for inferring wastewater quality parameters. Multivariate regression, artificial neural networks (ANN), and a hybrid technique that combines principal component analysis as a preprocessing stage are applied to data from industrial wastewater processes. The hybrid ANN technique shows an enhancement of prediction capability and reduces the overfitting problem of neural networks. The result shows that the hybrid ANN technique can be used to extract information from noisy data and to describe the nonlinearity of complex wastewater treatment processes.

  6. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices.

    PubMed

    Gokmen, Tayfun; Onen, Murat; Haensch, Wilfried

    2017-01-01

    In a previous work we have detailed the requirements for obtaining maximal deep learning performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices. Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays significantly affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits and that can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures and therefore enables applicability of the RPU approach to a large class of neural network architectures.

  7. Comparative Study on Prediction Effects of Short Fatigue Crack Propagation Rate by Two Different Calculation Methods

    NASA Astrophysics Data System (ADS)

    Yang, Bing; Liao, Zhen; Qin, Yahang; Wu, Yayun; Liang, Sai; Xiao, Shoune; Yang, Guangwu; Zhu, Tao

    2017-05-01

    To describe the complicated nonlinear process of the fatigue short crack evolution behavior, especially the change of the crack propagation rate, two different calculation methods are applied. The dominant effective short fatigue crack propagation rates are calculated based on the replica fatigue short crack test with nine smooth funnel-shaped specimens and the observation of the replica films according to the effective short fatigue cracks principle. Due to the fast decay and the nonlinear approximation ability of wavelet analysis, the self-learning ability of neural networks, and the macroscopic searching and global optimization of genetic algorithms, the genetic wavelet neural network can reflect the implicit complex nonlinear relationship when considering multiple influencing factors synthetically. The effective short fatigue cracks and the dominant effective short fatigue crack are simulated and compared by the Genetic Wavelet Neural Network. The simulation results show that the Genetic Wavelet Neural Network is a rational and practical method for studying the evolution behavior of the fatigue short crack propagation rate. Meanwhile, a traditional data fitting method for a short crack growth model is also utilized for fitting the test data. It is reasonable and applicable for predicting the growth rate. Finally, the reason for the difference between the prediction effects of these two methods is interpreted.

  8. Scaling of counter-current imbibition recovery curves using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Jafari, Iman; Masihi, Mohsen; Nasiri Zarandi, Masoud

    2018-06-01

    Scaling of imbibition curves is of great importance in the characterization and simulation of oil production from naturally fractured reservoirs. Different parameters such as matrix porosity and permeability, oil and water viscosities, matrix dimensions, and oil/water interfacial tensions have an effect on the imbibition process. Studies on scaling imbibition curves under different assumptions have resulted in various scaling equations. In this work, using an artificial neural network (ANN) method, a novel technique is presented for scaling imbibition recovery curves, which can be used for scaling both experimental and field-scale imbibition cases. The imbibition recovery curves for training and testing the neural network were gathered through the simulation of different scenarios using a commercial reservoir simulator. In this ANN-based method, six parameters were assumed to have an effect on the imbibition process and were considered as the inputs for training the network. Using the ‘Bayesian regularization’ training algorithm, the network was trained and tested. Training and testing phases showed superior results in comparison with the other scaling methods. It is concluded that the new technique is useful for scaling imbibition recovery curves, especially for complex cases for which the common scaling methods are not designed.

  9. From Molecular Circuit Dysfunction to Disease: Case Studies in Epilepsy, Traumatic Brain Injury, and Alzheimer’s Disease

    PubMed Central

    Dulla, Chris G.; Coulter, Douglas A.; Ziburkus, Jokubas

    2015-01-01

    Complex circuitry with feed-forward and feed-back systems regulate neuronal activity throughout the brain. Cell biological, electrical, and neurotransmitter systems enable neural networks to process and drive the entire spectrum of cognitive, behavioral, and motor functions. Simultaneous orchestration of distinct cells and interconnected neural circuits relies on hundreds, if not thousands, of unique molecular interactions. Even single molecule dysfunctions can be disrupting to neural circuit activity, leading to neurological pathology. Here, we sample our current understanding of how molecular aberrations lead to disruptions in networks using three neurological pathologies as exemplars: epilepsy, traumatic brain injury (TBI), and Alzheimer’s disease (AD). Epilepsy provides a window into how total destabilization of network balance can occur. TBI is an abrupt physical disruption that manifests in both acute and chronic neurological deficits. Last, in AD progressive cell loss leads to devastating cognitive consequences. Interestingly, all three of these neurological diseases are interrelated. The goal of this review, therefore, is to identify molecular changes that may lead to network dysfunction, elaborate on how altered network activity and circuit structure can contribute to neurological disease, and suggest common threads that may lie at the heart of molecular circuit dysfunction. PMID:25948650

  10. From Molecular Circuit Dysfunction to Disease: Case Studies in Epilepsy, Traumatic Brain Injury, and Alzheimer's Disease.

    PubMed

    Dulla, Chris G; Coulter, Douglas A; Ziburkus, Jokubas

    2016-06-01

    Complex circuitry with feed-forward and feed-back systems regulate neuronal activity throughout the brain. Cell biological, electrical, and neurotransmitter systems enable neural networks to process and drive the entire spectrum of cognitive, behavioral, and motor functions. Simultaneous orchestration of distinct cells and interconnected neural circuits relies on hundreds, if not thousands, of unique molecular interactions. Even single molecule dysfunctions can be disrupting to neural circuit activity, leading to neurological pathology. Here, we sample our current understanding of how molecular aberrations lead to disruptions in networks using three neurological pathologies as exemplars: epilepsy, traumatic brain injury (TBI), and Alzheimer's disease (AD). Epilepsy provides a window into how total destabilization of network balance can occur. TBI is an abrupt physical disruption that manifests in both acute and chronic neurological deficits. Last, in AD progressive cell loss leads to devastating cognitive consequences. Interestingly, all three of these neurological diseases are interrelated. The goal of this review, therefore, is to identify molecular changes that may lead to network dysfunction, elaborate on how altered network activity and circuit structure can contribute to neurological disease, and suggest common threads that may lie at the heart of molecular circuit dysfunction. © The Author(s) 2015.

  11. Neural Network Prediction of Failure of Damaged Composite Pressure Vessels from Strain Field Data Acquired by a Computer Vision Method

    NASA Technical Reports Server (NTRS)

    Russell, Samuel S.; Lansing, Matthew D.

    1997-01-01

    This effort used a novel method of acquiring strains called Sub-pixel Digital Video Image Correlation (SDVIC) on impact-damaged Kevlar/epoxy filament wound pressure vessels during a proof test. To predict the burst pressure, the hoop strain field distribution around the impact location from three vessels was used to train a neural network. The network was then tested on additional pressure vessels. Several variations on the network were tried. The best results were obtained using a single hidden layer. SDVIC is a full-field, non-contact computer vision technique which provides in-plane deformation and strain data over a load differential. This method was used to determine hoop and axial displacements, hoop and axial linear strains, the in-plane shear strains and rotations in the regions surrounding impact sites in filament wound pressure vessels (FWPV) during proof loading by internal pressurization. The relationship between these deformation measurement values and the remaining life of the pressure vessels, however, requires a complex theoretical model or numerical simulation. Both of these techniques are time consuming and complicated. Previous results using neural network methods had been successful in predicting the burst pressure for graphite/epoxy pressure vessels based upon acoustic emission (AE) measurements in similar tests. The neural network associates the character of the AE amplitude distribution, which depends upon the extent of impact damage, with the burst pressure. Similarly, higher amounts of impact damage are theorized to cause a higher amount of strain concentration in the damage-affected zone at a given pressure and result in lower burst pressures. This relationship suggests that a neural network might be able to find an empirical relationship between the SDVIC strain field data and the burst pressure, analogous to the AE method, with greater speed and simplicity than theoretical or finite element modeling. The process of testing SDVIC neural network analysis and some encouraging preliminary results are presented in this paper. Details are given concerning the processing of SDVIC output data such that it may be used as back propagation neural network (BPNN) input data. The software written to perform this processing and the BPNN algorithm are also discussed. It will be shown that, with limited training, test results indicate an average error in burst pressure prediction of approximately six percent.
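
    As a sketch of the back-propagation setup described above, the code below trains a single-hidden-layer regressor by hand on placeholder data; the 16-element feature vector (imagined as a downsampled hoop-strain map around the impact site), the layer sizes and the learning rate are assumptions, not the paper's preprocessing.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.random((30, 16))       # placeholder: 16 strain-field features per vessel
      y = rng.random((30, 1))        # placeholder: normalized burst pressure

      # Single hidden layer, as reported to work best; tanh hidden units, linear output.
      W1, b1 = rng.normal(0, 0.1, (16, 8)), np.zeros(8)
      W2, b2 = rng.normal(0, 0.1, (8, 1)), np.zeros(1)

      lr = 0.05
      for _ in range(2000):
          h = np.tanh(X @ W1 + b1)               # forward pass
          y_hat = h @ W2 + b2
          err = y_hat - y                        # mean-squared-error residuals
          dW2 = h.T @ err / len(X)
          db2 = err.mean(axis=0)
          dh = (err @ W2.T) * (1 - h ** 2)       # back-propagate through tanh
          dW1 = X.T @ dh / len(X)
          db1 = dh.mean(axis=0)
          W2 -= lr * dW2; b2 -= lr * db2
          W1 -= lr * dW1; b1 -= lr * db1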

  12. Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.

    PubMed

    Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve

    2011-11-01

    Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Complex networks with large numbers of labelable attractors

    NASA Astrophysics Data System (ADS)

    Mi, Yuanyuan; Zhang, Lisheng; Huang, Xiaodong; Qian, Yu; Hu, Gang; Liao, Xuhong

    2011-09-01

    Information storage in many functional subsystems of the brain is regarded by theoretical neuroscientists as related to attractors of neural networks. The number of attractors is large, and each attractor can easily be temporarily represented or suppressed by a corresponding external stimulus. In this letter, we discover that complex networks consisting of excitable nodes show a similar coexistence of large numbers of oscillatory attractors, most of which can be labeled by a few nodes. According to a simple labeling rule, different attractors can be identified and the number of labelable attractors can be predicted from an analysis of the network topology. With the cues of the labeling association, these attractors can be conveniently retrieved or suppressed on purpose.

  14. Structurally Dynamic Spin Market Networks

    NASA Astrophysics Data System (ADS)

    Horváth, Denis; Kuscsik, Zoltán

    An agent-based model of stock price dynamics on a directed evolving complex network is suggested and studied by direct simulation. The stationary regime is maintained as a result of the balance between extremal dynamics, adaptivity of strategic variables and reconnection rules. The inherent structure of the node agent "brain" is modeled by a recursive neural network with local and global inputs and feedback connections. For specific parametric combinations, the complex network displays the small-world phenomenon combined with scale-free behavior. The identification of a local leader (a network hub, an agent whose strategies are frequently adapted by its neighbors) is carried out by a repeated random-walk process through the network. The simulations show empirically relevant dynamics of price returns and volatility clustering. Additional emerging aspects of the stylized market statistics are Zipfian distributions of fitness.

  15. A neural network technique for remeshing of bone microstructure.

    PubMed

    Fischer, Anath; Holdstein, Yaron

    2012-01-01

    Today, there is major interest within the biomedical community in developing accurate noninvasive means for the evaluation of bone microstructure and bone quality. Recent improvements in 3D imaging technology, among them development of micro-CT and micro-MRI scanners, allow in-vivo 3D high-resolution scanning and reconstruction of large specimens or even whole bone models. Thus, the tendency today is to evaluate bone features using 3D assessment techniques rather than traditional 2D methods. For this purpose, high-quality meshing methods are required. However, the 3D meshes produced from current commercial systems usually are of low quality with respect to analysis and rapid prototyping. 3D model reconstruction of bone is difficult due to the complexity of bone microstructure. The small bone features lead to a great deal of neighborhood ambiguity near each vertex. The relatively new neural network method for mesh reconstruction has the potential to create or remesh 3D models accurately and quickly. A neural network (NN), which resembles an artificial intelligence (AI) algorithm, is a set of interconnected neurons, where each neuron is capable of making an autonomous arithmetic calculation. Moreover, each neuron is affected by its surrounding neurons through the structure of the network. This paper proposes an extension of the growing neural gas (GNG) neural network technique for remeshing a triangular manifold mesh that represents bone microstructure. This method has the advantage of reconstructing the surface of a genus-n freeform object without a priori knowledge regarding the original object, its topology, or its shape.

  16. Adaptive Filtering Using Recurrent Neural Networks

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
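
    A bare-bones sketch of the recurrent structure such a filter relies on is given below: an Elman-style network whose hidden state plays the role of the estimated process state and whose output approximates the measurement. The sizes and random weights are placeholders, and no training loop is shown.

      import numpy as np

      rng = np.random.default_rng(0)
      n_u, n_h, n_y = 2, 8, 1                          # assumed input/state/output sizes
      Wu = rng.normal(0, 0.1, (n_h, n_u))              # input-to-state weights
      Wh = rng.normal(0, 0.1, (n_h, n_h))              # recurrent state weights
      Wy = rng.normal(0, 0.1, (n_y, n_h))              # state-to-output weights

      def filter_step(h, u):
          """One recurrence: update the internal state estimate, emit an output estimate."""
          h_new = np.tanh(Wh @ h + Wu @ u)
          return h_new, Wy @ h_new

      # Run the (untrained) filter over a placeholder measured input sequence.
      h = np.zeros(n_h)
      for u in rng.normal(size=(50, n_u)):
          h, y_hat = filter_step(h, u)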

  17. A model for integrating elementary neural functions into delayed-response behavior.

    PubMed

    Gisiger, Thomas; Kerszberg, Michel

    2006-04-01

    It is well established that various cortical regions can implement a wide array of neural processes, yet the mechanisms which integrate these processes into behavior-producing, brain-scale activity remain elusive. We propose that an important role in this respect might be played by executive structures controlling the traffic of information between the cortical regions involved. To illustrate this hypothesis, we present a neural network model comprising a set of interconnected structures harboring stimulus-related activity (visual representation, working memory, and planning), and a group of executive units with task-related activity patterns that manage the information flowing between them. The resulting dynamics allows the network to perform the dual task of either retaining an image during a delay (delayed-matching to sample task), or recalling from this image another one that has been associated with it during training (delayed-pair association task). The model reproduces behavioral and electrophysiological data gathered on the inferior temporal and prefrontal cortices of primates performing these same tasks. It also makes predictions on how neural activity coding for the recall of the image associated with the sample emerges and becomes prospective during the training phase. The network dynamics proves to be very stable against perturbations, and it exhibits signs of scale-invariant organization and cooperativity. The present network represents a possible neural implementation for active, top-down, prospective memory retrieval in primates. The model suggests that brain activity leading to performance of cognitive tasks might be organized in modular fashion, simple neural functions becoming integrated into more complex behavior by executive structures harbored in prefrontal cortex and/or basal ganglia.

  18. Selection of neural network structure for system error correction of electro-optical tracker system with horizontal gimbal

    NASA Astrophysics Data System (ADS)

    Liu, Xing-fa; Cen, Ming

    2007-12-01

    The neural network system error correction method is more precise than the least-squares and the spherical harmonic function system error correction methods. The accuracy of the neural network system error correction method is mainly determined by the architecture of the network. Analysis and simulation show that both the BP neural network and the RBF neural network system error correction methods achieve high correction accuracy; considering training speed and network scale, the RBF network correction method is preferable to the BP network correction method when only a small training sample set is available.

  19. Optimization behavior of brainstem respiratory neurons. A cerebral neural network model.

    PubMed

    Poon, C S

    1991-01-01

    A recent model of respiratory control suggested that the steady-state respiratory responses to CO2 and exercise may be governed by an optimal control law in the brainstem respiratory neurons. It was not certain, however, whether such complex optimization behavior could be accomplished by a realistic biological neural network. To test this hypothesis, we developed a hybrid computer-neural model in which the dynamics of the lung, brain and other tissue compartments were simulated on a digital computer. Mimicking the "controller" was a human subject who pedalled on a bicycle with varying speed (analog of ventilatory output) with a view to minimize an analog signal of the total cost of breathing (chemical and mechanical) which was computed interactively and displayed on an oscilloscope. In this manner, the visuomotor cortex served as a proxy (homolog) of the brainstem respiratory neurons in the model. Results in 4 subjects showed a linear steady-state ventilatory CO2 response to arterial PCO2 during simulated CO2 inhalation and a nearly isocapnic steady-state response during simulated exercise. Thus, neural optimization is a plausible mechanism for respiratory control during exercise and can be achieved by a neural network with cognitive computational ability without the need for an exercise stimulus.

  20. A novel recurrent neural network with finite-time convergence for linear programming.

    PubMed

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
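
    The sketch below integrates a generic gradient-flow network for a toy linear program with forward Euler; it is a simple penalty-based illustration of this class of dynamics, not the authors' finite-time-convergent model, and the problem data are made up.

      import numpy as np

      # LP: minimize c^T x  subject to  A x = b,  x >= 0 (toy data).
      c = np.array([1.0, 2.0, 0.0])
      A = np.array([[1.0, 1.0, 1.0]])
      b = np.array([1.0])

      x = np.zeros(3)
      k_eq, k_pos, dt = 50.0, 50.0, 1e-3        # penalty gains and Euler step
      for _ in range(20000):
          grad = c + k_eq * A.T @ (A @ x - b) + k_pos * np.minimum(x, 0.0)
          x -= dt * grad                         # state evolves down the penalized gradient
      print(x)   # approaches the vertex (0, 0, 1) up to a small penalty-induced error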

  1. Deep convolutional neural network based antenna selection in multiple-input multiple-output system

    NASA Astrophysics Data System (ADS)

    Cai, Jiaxin; Li, Yan; Hu, Ying

    2018-03-01

    Antenna selection in wireless communication systems has attracted increasing attention due to the challenge of keeping a balance between communication performance and computational complexity in large-scale Multiple-Input Multiple-Output antenna systems. Recently, deep learning based methods have achieved promising performance for large-scale data processing and analysis in many application fields. This paper is the first attempt to introduce the deep learning technique into the field of Multiple-Input Multiple-Output antenna selection in wireless communications. First, the label of the attenuation-coefficient channel matrix is generated by minimizing the key performance indicator of the training antenna systems. Then, a deep convolutional neural network that explicitly exploits the massive latent cues of the attenuation coefficients is learned on the training antenna systems. Finally, we use the trained deep convolutional neural network to classify the channel matrix labels of test antennas and select the optimal antenna subset. Simulation results demonstrate that our method can achieve better performance than the state-of-the-art baselines for data-driven wireless antenna selection.
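
    One plausible shape for such a classifier is sketched below: a small convolutional network that takes the real and imaginary parts of a channel matrix as a two-channel image and scores a fixed set of candidate antenna subsets. The 8x8 matrix size, the 16 subsets, and the layer widths are assumptions, not the paper's architecture.

      import torch
      import torch.nn as nn

      class AntennaSelector(nn.Module):
          """Maps a (real, imag) channel-matrix 'image' to scores over candidate subsets."""
          def __init__(self, n_subsets=16):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1),
              )
              self.classifier = nn.Linear(32, n_subsets)

          def forward(self, h):                        # h: (batch, 2, n_rx, n_tx)
              return self.classifier(self.features(h).flatten(1))

      # Placeholder batch of four 8x8 channel matrices -> (4, 16) subset scores.
      logits = AntennaSelector()(torch.randn(4, 2, 8, 8))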

  2. Predicting The Type Of Pregnancy Using Flexible Discriminate Analysis And Artificial Neural Networks: A Comparison Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hooman, A.; Mohammadzadeh, M

    Some medical and epidemiological surveys have been designed to predict a nominal response variable with several levels. With regard to the type of pregnancy there are four possible states: wanted, unwanted by wife, unwanted by husband and unwanted by couple. In this paper, we have predicted the type of pregnancy, as well as the factors influencing it, using three different models and comparing them. Regarding the type of pregnancy with several levels, we developed a multinomial logistic regression, a neural network and a flexible discriminant analysis based on the data and compared their results using two statistical indices: the area under the ROC curve and the kappa coefficient. Based on these two indices, flexible discriminant analysis proved to be a better fit for prediction on these data than the other methods. When the relations among variables are complex, one can use flexible discriminant analysis instead of multinomial logistic regression and neural networks to predict nominal response variables with several levels in order to gain more accurate predictions.
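
    The comparison protocol can be sketched as below on synthetic stand-in data, scoring each model with the same two indices (multi-class ROC AUC and Cohen's kappa); scikit-learn has no flexible discriminant analysis, so only the multinomial logistic and neural network baselines appear here, and none of the numbers relate to the survey data.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import cohen_kappa_score, roc_auc_score
      from sklearn.model_selection import train_test_split

      # Synthetic stand-in: 4 classes (wanted / unwanted by wife / by husband / by couple).
      X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                                 n_classes=4, n_clusters_per_class=1, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      models = {
          "multinomial logistic": LogisticRegression(max_iter=2000),
          "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
      }
      for name, m in models.items():
          m.fit(X_tr, y_tr)
          proba = m.predict_proba(X_te)
          print(name,
                "AUC:", round(roc_auc_score(y_te, proba, multi_class="ovr"), 3),
                "kappa:", round(cohen_kappa_score(y_te, m.predict(X_te)), 3))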

  3. AN ARTIFICIAL NEURAL NETWORK EVALUATION OF TUBERCULOSIS USING GENETIC AND PHYSIOLOGICAL PATIENT DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffin, William O.; Darsey, Jerry A.; Hanna, Josh

    When doctors see more cases of patients with tell-tale symptoms of a disease, it is hoped that they will be able to recognize an infection and administer treatment appropriately, thereby speeding up recovery for sick patients. We hope that our studies can aid in the detection of tuberculosis by using a computer model called an artificial neural network. Our model looks at patients with and without tuberculosis (TB). The data that the neural network examined came from the following: patients' age, gender, place of birth, blood type, Rhesus (Rh) factor, and genes of the human leukocyte antigen (HLA) system (9q34.1) present in the Major Histocompatibility Complex. With the availability of genetic data and good research, we hope to give doctors an advantage in the detection of tuberculosis. We try to mimic the doctor's experience with a computer test, which will learn from patient data the factors that contribute to TB.

  4. Artificial Neural Network Based Mission Planning Mechanism for Spacecraft

    NASA Astrophysics Data System (ADS)

    Li, Zhaoyu; Xu, Rui; Cui, Pingyuan; Zhu, Shengying

    2018-04-01

    The ability to plan and react quickly in dynamic space environments is central to the intelligent behavior of spacecraft. Many planners have been used for space and robotic applications, but it is difficult to encode domain knowledge and directly use existing techniques such as heuristics to improve the performance of the application systems. Therefore, regarding planning as an advanced control problem, this paper first proposes an autonomous mission planning and action selection mechanism that uses a multilayer perceptron neural network to select actions during the planning process and improve efficiency. To demonstrate its availability and effectiveness, we use autonomous mission planning problems for spacecraft, a sophisticated system with complex subsystems and constraints, as an example. Simulation results show that artificial neural networks (ANNs) are usable for planning problems. Compared with the existing planning method in EUROPA, the mechanism using ANNs is more efficient and can guarantee stable performance. Therefore, the mechanism proposed in this paper is more suitable for planning problems of spacecraft that require real-time operation and stability.

  5. Face recognition via Gabor and convolutional neural network

    NASA Astrophysics Data System (ADS)

    Lu, Tongwei; Wu, Menglu; Lu, Tao

    2018-04-01

    In recent years, the powerful feature learning and classification ability of convolutional neural networks has attracted wide attention. Compared with deep learning, traditional machine learning algorithms have an interpretability that deep learning does not. Thus, in this paper, we propose a method that extracts features with a traditional algorithm and uses them as the input of a convolutional neural network. In order to reduce the complexity of the network, the Gabor wavelet kernel is used to extract features at different positions, frequencies and directions of the target image; it is sensitive to image edges and provides good direction and scale selection. The responses extracted from eight directions at one scale serve as the input to the proposed network. The network has the advantages of weight sharing and local connectivity, and the texture features of the input image reduce the influence of facial expression, gesture and illumination. At the same time, we introduce a layer that combines the results of pooling and convolution to extract deeper features. The network is trained with the open-source Caffe framework, which is beneficial for feature extraction. The experimental results show that the proposed network structure effectively overcomes the barrier of illumination and has good robustness, as well as being more accurate and faster than the traditional algorithm.
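
    A short sketch of the Gabor front end is given below: responses at one scale and eight orientations are computed with scikit-image and stacked as the multi-channel input a CNN would consume. The image, the frequency of 0.2 and the 64x64 size are placeholders, not the paper's settings.

      import numpy as np
      from skimage.filters import gabor

      image = np.random.rand(64, 64)                   # placeholder face crop
      thetas = [k * np.pi / 8 for k in range(8)]       # eight orientations at one scale
      # Keep the real part of each Gabor response and stack the eight responses
      # into the multi-channel array a CNN would take as input.
      channels = [gabor(image, frequency=0.2, theta=t)[0] for t in thetas]
      gabor_stack = np.stack(channels)                 # shape (8, 64, 64)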

  6. Multi-voxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    PubMed Central

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.; Munhall, Kevin G.; Cusack, Rhodri; Johnsrude, Ingrid S.

    2013-01-01

    The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multi-voxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was employed to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared to during passive listening. One network of regions appears to encode an ‘error signal’ irrespective of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a fronto-temporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Taken together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems. PMID:23467350

  7. Cooperation of Deterministic Dynamics and Random Noise in Production of Complex Syntactical Avian Song Sequences: A Neural Network Model

    PubMed Central

    Yamashita, Yuichi; Okumura, Tetsu; Okanoya, Kazuo; Tani, Jun

    2011-01-01

    How the brain learns and generates temporal sequences is a fundamental issue in neuroscience. The production of birdsongs, a process which involves complex learned sequences, provides researchers with an excellent biological model for this topic. The Bengalese finch in particular learns a highly complex song with syntactical structure. The nucleus HVC (HVC), a premotor nucleus within the avian song system, plays a key role in generating the temporal structures of their songs. From lesion studies, the nucleus interfacialis (NIf) projecting to the HVC is considered one of the essential regions that contribute to the complexity of their songs. However, the types of interaction between the HVC and the NIf that can produce complex syntactical songs remain unclear. In order to investigate the function of interactions between the HVC and NIf, we have proposed a neural network model based on previous biological evidence. The HVC is modeled by a recurrent neural network (RNN) that learns to generate temporal patterns of songs. The NIf is modeled as a mechanism that provides auditory feedback to the HVC and generates random noise that feeds into the HVC. The model showed that complex syntactical songs can be replicated by simple interactions between deterministic dynamics of the RNN and random noise. In the current study, the plausibility of the model is tested by the comparison between the changes in the songs of actual birds induced by pharmacological inhibition of the NIf and the changes in the songs produced by the model resulting from modification of parameters representing NIf functions. The efficacy of the model demonstrates that the changes of songs induced by pharmacological inhibition of the NIf can be interpreted as a trade-off between the effects of noise and the effects of feedback on the dynamics of the RNN of the HVC. These facts suggest that the current model provides a convincing hypothesis for the functional role of NIf–HVC interaction. PMID:21559065

  8. Modular, Hierarchical Learning By Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    Modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks more structured than neural networks in which all neurons fully interconnected. These networks utilize general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, sparsity of modular units and connections, and fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning streamlined by imitating some aspects of biological neural networks.

  9. Orthogonal projection approach and continuous wavelet transform-feed forward neural networks for simultaneous spectrophotometric determination of some heavy metals in diet samples.

    PubMed

    Abbasi Tarighat, Maryam

    2016-02-01

    Simultaneous spectrophotometric determination of a mixture of overlapped complexes of Fe(3+), Mn(2+), Cu(2+), and Zn(2+) ions with 2-(3-hydroxy-1-phenyl-but-2-enylideneamino) pyridine-3-ol (HPEP) by an orthogonal projection approach-feed forward neural network (OPA-FFNN) and a continuous wavelet transform-feed forward neural network (CWT-FFNN) is discussed. Complexation of the ions with HPEP was studied at varying reagent concentration, pH and color-formation time to ensure completion of the complexation reactions. It was found that the reactions were complete at 5.0 × 10(-4) mol L(-1) of HPEP, pH 9.5 and 10 min after mixing. The spectral data were analyzed using partial response plots, and the identified non-linearity was modeled using FFNN. The number of OPA-FFNN and CWT-FFNN inputs was reduced using the dissimilarity pure spectra from OPA and selected wavelet coefficients. Once the pure dissimilarity plots and optimal wavelet coefficients were selected, different ANN models were employed for the calculation of the final calibration models. The performance of the two approaches was tested with regard to root mean square errors of prediction (RMSE %), using synthetic solutions. Under the working conditions, the proposed methods were successfully applied to the simultaneous determination of the metal ions in different vegetable and foodstuff samples. The results show that OPA-FFNN and CWT-FFNN were effective in simultaneously determining Fe(3+), Mn(2+), Cu(2+), and Zn(2+) concentrations. The concentrations of the metal ions in the samples were also determined by flame atomic absorption spectrometry (FAAS), and the amounts obtained by the proposed methods were in good agreement with those obtained by FAAS. Copyright © 2015 Elsevier Ltd. All rights reserved.
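
    The CWT-FFNN idea can be sketched as below: each spectrum is compressed into a handful of wavelet coefficients (using PyWavelets), which then feed a feed-forward regressor for the four concentrations. The wavelet, scales, coefficient subsampling and all data are assumptions for illustration, not the paper's calibration.

      import numpy as np
      import pywt
      from sklearn.neural_network import MLPRegressor

      spectra = np.random.rand(40, 200)                # placeholder mixture spectra
      conc = np.random.rand(40, 4)                     # placeholder Fe/Mn/Cu/Zn concentrations

      def cwt_features(s, scales=(2, 4, 8, 16)):
          """Compress one spectrum into a sparse set of Morlet CWT coefficients."""
          coeffs, _ = pywt.cwt(s, scales, "morl")      # shape (n_scales, n_wavelengths)
          return coeffs[:, ::20].ravel()               # keep every 20th coefficient

      X = np.array([cwt_features(s) for s in spectra])
      model = MLPRegressor(hidden_layer_sizes=(15,), max_iter=5000, random_state=0)
      model.fit(X, conc)                               # multi-output calibration model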

  10. Two-Dimensional Optoelectronic Graphene Nanoprobes for Neural Network

    NASA Astrophysics Data System (ADS)

    Hong, Tu; Kitko, Kristina; Wang, Rui; Zhang, Qi; Xu, Yaqiong

    2014-03-01

    The brain is the most complex network created by nature, with billions of neurons connected by trillions of synapses through sophisticated wiring patterns and countless modulatory mechanisms. Current methods to study neuronal processes, whether by electrophysiology or optical imaging, have significant limitations in throughput and sensitivity. Here, we use graphene, a monolayer of carbon atoms, as a two-dimensional nanoprobe for neural networks. Scanning photocurrent measurements are applied to detect the local integration of electrical and chemical signals in mammalian neurons. Such an interface between a nanoscale electronic device and a biological system provides not only ultra-high sensitivity but also sub-millisecond temporal resolution, owing to the high carrier mobility of graphene.

  11. ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohman, Frederick M.; Hodas, Nathan O.; Chau, Duen Horng

    Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as “black-boxes” due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user’s data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.

  12. Identification of serial number on bank card using recurrent neural network

    NASA Astrophysics Data System (ADS)

    Liu, Li; Huang, Linlin; Xue, Jian

    2018-04-01

    Identification of the serial number on a bank card has many applications. Due to different number printing modes, complex backgrounds, shape distortion, etc., it is quite challenging to achieve high identification accuracy. In this paper, we propose a method using the Normalization-Cooperated Gradient Feature (NCGF) and a Recurrent Neural Network (RNN) based on Long Short-Term Memory (LSTM) for serial number identification. The NCGF maps the gradient direction elements of the original image to direction planes, so that the RNN with direction planes as input can recognize numbers more accurately. Taking advantage of NCGF and RNN, we achieve 90% digit string recognition accuracy.
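
    A compact sketch of the sequence model is shown below: per-column, 8-bin gradient-direction features (a rough stand-in for NCGF direction planes) feed a bidirectional LSTM that emits per-step class scores suitable for a CTC-style loss. The feature size, hidden width and class count are assumptions, not the paper's configuration.

      import torch
      import torch.nn as nn

      class SerialNumberRNN(nn.Module):
          """Bidirectional LSTM over per-column direction features; per-step class scores."""
          def __init__(self, n_dirs=8, hidden=64, n_classes=11):    # 10 digits + blank
              super().__init__()
              self.lstm = nn.LSTM(n_dirs, hidden, batch_first=True, bidirectional=True)
              self.out = nn.Linear(2 * hidden, n_classes)

          def forward(self, x):                        # x: (batch, seq_len, n_dirs)
              h, _ = self.lstm(x)
              return self.out(h)                       # (batch, seq_len, n_classes)

      scores = SerialNumberRNN()(torch.randn(2, 120, 8))   # placeholder feature sequences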

  13. A Neural Network Design for the Estimation of Nonlinear Behavior of a Magnetically-Excited Piezoelectric Harvester

    NASA Astrophysics Data System (ADS)

    Çelik, Emre; Uzun, Yunus; Kurt, Erol; Öztürk, Nihat; Topaloğlu, Nurettin

    2018-01-01

    An application of an artificial neural network (ANN) has been implemented in this article to model the nonlinear relationship between the harvested electrical power of a recently developed piezoelectric pendulum and its resistive load R_L and magnetic excitation frequency f. Prediction of the harvested power over a wide range is a difficult task, because the power increases dramatically when f approaches the natural frequency f_0 of the system. The neural model of the system is designed on the basis of a standard multi-layer network with a back-propagation learning algorithm. Input data (input patterns) presented to the network and the corresponding output data (output patterns) describing the desired network output were carefully collected from experiments under several conditions in order to train the developed network accurately. Results indicate that the designed ANN is an effective means of predicting the harvested power of the piezoelectric harvester as a function of R_L and f, with a root mean square error of 6.65 × 10^-3 for training and 1.40 for different test conditions. Using the proposed approach, the harvested power can be estimated reasonably well without tackling the difficulty of experimental studies or the complexity of analytical formulas representing the system.
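
    As a toy version of this surface-fitting task, the sketch below regresses a synthetic resonance-like power surface on log load resistance and frequency and reports the training RMSE; the data, ranges and network sizes are invented for illustration only.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.metrics import mean_squared_error

      # Placeholder measurements: harvested power over random load/frequency pairs
      # (a synthetic resonance-like surface, not the experimental data).
      rng = np.random.default_rng(0)
      RL = rng.uniform(1e3, 1e6, 300)                  # ohms (assumed range)
      f = rng.uniform(5.0, 15.0, 300)                  # Hz (assumed range)
      P = 1.0 / (1.0 + (f - 10.0) ** 2) * np.log10(RL)   # toy power surface

      X = np.column_stack([np.log10(RL), f])
      model = MLPRegressor(hidden_layer_sizes=(12, 12), max_iter=10000, random_state=0)
      model.fit(X, P)
      rmse = mean_squared_error(P, model.predict(X)) ** 0.5
      print(f"training RMSE: {rmse:.4f}")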

  14. Neural network configuration and efficiency underlies individual differences in spatial orientation ability.

    PubMed

    Arnold, Aiden E G F; Protzner, Andrea B; Bray, Signe; Levy, Richard M; Iaria, Giuseppe

    2014-02-01

    Spatial orientation is a complex cognitive process requiring the integration of information processed in a distributed system of brain regions. Current models on the neural basis of spatial orientation are based primarily on the functional role of single brain regions, with limited understanding of how interaction among these brain regions relates to behavior. In this study, we investigated two sources of variability in the neural networks that support spatial orientation--network configuration and efficiency--and assessed whether variability in these topological properties relates to individual differences in orientation accuracy. Participants with higher accuracy were shown to express greater activity in the right supramarginal gyrus, the right precentral cortex, and the left hippocampus, over and above a core network engaged by the whole group. Additionally, high-performing individuals had increased levels of global efficiency within a resting-state network composed of brain regions engaged during orientation and increased levels of node centrality in the right supramarginal gyrus, the right primary motor cortex, and the left hippocampus. These results indicate that individual differences in the configuration of task-related networks and their efficiency measured at rest relate to the ability to spatially orient. Our findings advance systems neuroscience models of orientation and navigation by providing insight into the role of functional integration in shaping orientation behavior.

  15. Neural network modeling for surgical decisions on traumatic brain injury patients.

    PubMed

    Li, Y C; Liu, L; Chiu, W T; Jian, W S

    2000-01-01

    Computerized medical decision support systems have been a major research topic in recent years. Intelligent computer programs have been implemented to aid physicians and other medical professionals in making difficult medical decisions. This report compares three different mathematical models for building a traumatic brain injury (TBI) medical decision support system (MDSS). These models were developed based on a large TBI patient database. This MDSS accepts a set of patient data, such as the types of skull fracture, Glasgow Coma Scale (GCS), and episodes of convulsion, and returns the chance that a neurosurgeon would recommend open-skull surgery for this patient. The three mathematical models described in this report include a logistic regression model, a multi-layer perceptron (MLP) neural network and a radial-basis-function (RBF) neural network. Of the 12,640 patients selected from the database, a randomly drawn 9,480 cases were used as the training group to develop and train our models. The other 3,160 cases formed the validation group, which we used to evaluate the performance of these models. We used sensitivity, specificity, the area under the receiver-operating characteristic (ROC) curve and calibration curves as indicators of how accurate these models are in predicting a neurosurgeon's decision on open-skull surgery. The results showed that, assuming equal importance of sensitivity and specificity, the logistic regression model had a (sensitivity, specificity) of (73%, 68%), compared to (80%, 80%) for the RBF model and (88%, 80%) for the MLP model. The resultant areas under the ROC curve for logistic regression, RBF and MLP neural networks were 0.761, 0.880 and 0.897, respectively (P < 0.05). Among these models, the logistic regression has noticeably poorer calibration. This study demonstrated the feasibility of applying neural networks as the mechanism for TBI decision support systems based on clinical databases. The results also suggest that neural networks may be a better solution for complex, non-linear medical decision support systems than conventional statistical techniques such as logistic regression.
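
    The evaluation described above can be sketched on synthetic binary data as below, reporting sensitivity, specificity and ROC AUC for a logistic regression and an MLP; an RBF network is omitted because scikit-learn has no direct equivalent, and none of the numbers relate to the clinical study.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import confusion_matrix, roc_auc_score
      from sklearn.model_selection import train_test_split

      # Synthetic binary stand-in for the surgery / no-surgery decision.
      X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      for name, clf in {
          "logistic regression": LogisticRegression(max_iter=2000),
          "MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
      }.items():
          clf.fit(X_tr, y_tr)
          tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
          sens, spec = tp / (tp + fn), tn / (tn + fp)
          auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
          print(f"{name}: sensitivity={sens:.2f} specificity={spec:.2f} AUC={auc:.3f}")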

  16. Gamma Spectroscopy by Artificial Neural Network Coupled with MCNP

    NASA Astrophysics Data System (ADS)

    Sahiner, Huseyin

    While neutron activation analysis is widely used in many areas, the sensitivity of the analysis depends on how it is conducted. Even though the technique carries error, its sensitivity, compared to chemical analysis, is in the parts-per-million or sometimes parts-per-billion range. Because of this sensitivity, neutron activation analysis becomes important when analyzing bio-samples. Artificial neural networks are an attractive technique for complex systems. Although there are neural network applications for spectral analysis, training on simulated data in order to analyze experimental data had not been attempted. This study offers an improvement in spectral analysis and an optimization of the neural network for this purpose. The work considers five elements that are regarded as trace elements for bio-samples; however, the system is not limited to five elements. The only limitation of the study comes from data library availability in MCNP. A perceptron network was employed to identify five elements from gamma spectra. In the quantitative analysis, better results were obtained when the neural fitting tool in MATLAB was used. As the training function, the Levenberg-Marquardt algorithm was used with 23 neurons in the hidden layer and 259 gamma spectra as input. Because the study focuses on five elements, five input neurons representing the peak counts of five isotopes were used. Five output neurons revealed the mass information of these elements from irradiated kidney stones. Results showing a maximum error of 17.9% in APA, 24.9% in UA, 28.2% in COM and 27.9% in STRU type stones demonstrate the success of the neural network approach in analyzing gamma spectra. This high error was attributed to Zn, which has a very long decay half-life compared to the other elements. The simulations and experiments were made under a specific experimental setup (3 hours irradiation, 96 hours decay time, 8 hours counting time). Nevertheless, the approach can be generalized to different setups.

  17. Linear matrix inequality approach to exponential synchronization of a class of chaotic neural networks with time-varying delays

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Cui, Bao-Tong

    2007-07-01

    In this paper, a synchronization scheme for a class of chaotic neural networks with time-varying delays is presented. This class of chaotic neural networks covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks, and bidirectional associative memory networks. The obtained criteria are expressed in terms of linear matrix inequalities, thus they can be efficiently verified. A comparison between our results and the previous results shows that our results are less restrictive.

  18. Revealing networks from dynamics: an introduction

    NASA Astrophysics Data System (ADS)

    Timme, Marc; Casadiego, Jose

    2014-08-01

    What can we learn from the collective dynamics of a complex network about its interaction topology? Taking the perspective from nonlinear dynamics, we briefly review recent progress on how to infer structural connectivity (direct interactions) from accessing the dynamics of the units. Potential applications range from interaction networks in physics, to chemical and metabolic reactions, protein and gene regulatory networks as well as neural circuits in biology and electric power grids or wireless sensor networks in engineering. Moreover, we briefly mention some standard ways of inferring effective or functional connectivity.

  19. Electronic Neural Networks

    NASA Technical Reports Server (NTRS)

    Thakoor, Anil

    1990-01-01

    Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.

  20. The neural network to determine the mechanical properties of the steels

    NASA Astrophysics Data System (ADS)

    Yemelyanov, Vitaliy; Yemelyanova, Nataliya; Safonova, Marina; Nedelkin, Aleksey

    2018-04-01

    The authors describe the neural network structure and software designed and developed to determine the mechanical properties of steels. The neural network is developed to refine the estimated values of the steel properties. The results of simulations of the developed neural network are shown, and the authors note its low standard error. Specialized software has been developed to realize the proposed neural network.
