Sample records for static neural network

  1. A Feasibility Study of Synthesizing Substructures Modeled with Computational Neural Networks

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Housner, Jerrold M.; Szewczyk, Z. Peter

    1998-01-01

    This paper investigates the feasibility of synthesizing substructures modeled with computational neural networks. Substructures are modeled individually with computational neural networks and the response of the assembled structure is predicted by synthesizing the neural networks. A superposition approach is applied to synthesize models for statically determinate substructures while an interface displacement collocation approach is used to synthesize statically indeterminate substructure models. Beam and plate substructures along with components of a complicated Next Generation Space Telescope (NGST) model are used in this feasibility study. In this paper, the limitations and difficulties of synthesizing substructures modeled with neural networks are also discussed.

  2. Artificial neural network prediction of aircraft aeroelastic behavior

    NASA Astrophysics Data System (ADS)

    Pesonen, Urpo Juhani

    An Artificial Neural Network that predicts aeroelastic behavior of aircraft is presented. The neural net was designed to predict the shape of a flexible wing in static flight conditions using results from a structural analysis and an aerodynamic analysis performed with traditional computational tools. To generate reliable training and testing data for the network, an aeroelastic analysis code using these tools as components was designed and validated. To demonstrate the advantages and reliability of Artificial Neural Networks, a network was also designed and trained to predict airfoil maximum lift at low Reynolds numbers where wind tunnel data was used for the training. Finally, a neural net was designed and trained to predict the static aeroelastic behavior of a wing without the need to iterate between the structural and aerodynamic solvers.

  3. Static and transient performance prediction for CFB boilers using a Bayesian-Gaussian Neural Network

    NASA Astrophysics Data System (ADS)

    Ye, Haiwen; Ni, Weidou

    1997-06-01

    A Bayesian-Gaussian Neural Network (BGNN) is put forward in this paper to predict the static and transient performance of Circulating Fluidized Bed (CFB) boilers. Its advantages over Back-Propagation Neural Networks (BPNNs), namely easier determination of topology, a simpler and faster training process, and a self-organizing ability, make this network more practical for on-line performance prediction of complicated processes. Simulation shows that this network is comparable to the BPNNs in predicting the performance of CFB boilers. Good and practical on-line performance predictions are essential for operation guidance and model predictive control of CFB boilers, which are under research by the authors.

  4. Analysis of Pull-In Instability of Geometrically Nonlinear Microbeam Using Radial Basis Artificial Neural Network Based on Couple Stress Theory

    PubMed Central

    Heidari, Mohammad; Heidari, Ali; Homaei, Hadi

    2014-01-01

    The static pull-in instability of beam-type microelectromechanical systems (MEMS) is theoretically investigated. Two engineering cases, a cantilever and a double cantilever microbeam, are considered. Considering the midplane stretching as the source of the nonlinearity in the beam behavior, a nonlinear size-dependent Euler-Bernoulli beam model is used based on a modified couple stress theory, capable of capturing the size effect. By selecting a range of geometric parameters such as beam lengths, width, thickness, gaps, and size effect, we identify the static pull-in instability voltage. A MAPLE package is employed to solve the nonlinear differential governing equations to obtain the static pull-in instability voltage of microbeams. A radial basis function artificial neural network with two functions has been used for modeling the static pull-in instability of the microcantilever beam. The network has four inputs, the length, width, gap, and ratio of height to scale parameter of the beam, as the independent process variables, and the output is the static pull-in voltage of the microbeam. Numerical data were employed for training the network, and the capabilities of the model were verified in predicting the pull-in instability behavior. The output obtained from the neural network model is compared with numerical results, and the relative error has been calculated. Based on this verification error, it is shown that the radial basis function neural network has an average error of 4.55% in predicting the pull-in voltage of the cantilever microbeam. Further analysis of the pull-in instability of the beam under different input conditions has been investigated, and comparison of the modeling results with the numerical ones shows good agreement, which also proves the feasibility and effectiveness of the adopted approach. The results reveal significant influences of the size effect and geometric parameters on the static pull-in instability voltage of MEMS. PMID:24860602
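
    As a rough illustration of the kind of model the abstract describes (four geometric inputs mapped to a static pull-in voltage by a radial basis function network), here is a minimal numpy sketch; the training data, number of centers, and kernel width are placeholders, not values from the paper.

```python
# Minimal RBF regressor sketch: four geometric inputs -> pull-in voltage.
# All data below are placeholders, not values from the paper.
import numpy as np

def rbf_design_matrix(X, centers, width):
    """Gaussian RBF activations for every sample/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf(X, y, n_centers=10, width=1.0, ridge=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    Phi = rbf_design_matrix(X, centers, width)
    # Regularized least squares for the output weights.
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_centers), Phi.T @ y)
    return centers, width, w

def predict_rbf(X, centers, width, w):
    return rbf_design_matrix(X, centers, width) @ w

# Hypothetical inputs: [length, width, gap, height/length-scale ratio]
X = np.random.default_rng(1).uniform(0.5, 2.0, size=(50, 4))
y = 10.0 + 3.0 * X[:, 0] - 2.0 * X[:, 2]          # placeholder "pull-in voltage"
centers, width, w = train_rbf(X, y)
print(predict_rbf(X[:5], centers, width, w))
```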

  5. Optimization of matrix tablets controlled drug release using Elman dynamic neural networks and decision trees.

    PubMed

    Petrović, Jelena; Ibrić, Svetlana; Betz, Gabriele; Đurić, Zorica

    2012-05-30

    The main objective of the study was to develop artificial intelligence methods for optimization of drug release from matrix tablets regardless of the matrix type. Static and dynamic artificial neural networks of the same topology were developed to model dissolution profiles of different matrix tablet types (hydrophilic/lipid) using the formulation composition, the compression force used for tableting, and the tablets' porosity and tensile strength as input data. The potential application of decision trees in discovering knowledge from experimental data was also investigated. Polyethylene oxide polymer and glyceryl palmitostearate were used as matrix-forming materials for the hydrophilic and lipid matrix tablets, respectively, whereas the selected model drugs were diclofenac sodium and caffeine. Matrix tablets were prepared by the direct compression method and tested for in vitro dissolution profiles. Optimization of the static and dynamic neural networks used for modeling drug release was performed using Monte Carlo simulations or a genetic algorithm optimizer. Decision trees were constructed following discretization of the data. Calculated difference (f(1)) and similarity (f(2)) factors for predicted and experimentally obtained dissolution profiles of the test matrix tablet formulations indicate that Elman dynamic neural networks, as well as decision trees, are capable of accurate predictions of both hydrophilic and lipid matrix tablet dissolution profiles. Elman neural networks were compared to the most frequently used static network, the multilayer perceptron, and the superiority of the Elman networks was demonstrated. The developed methods allow a simple yet very precise way of predicting drug release for both hydrophilic and lipid matrix tablets with controlled drug release. Copyright © 2012 Elsevier B.V. All rights reserved.
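
    The difference (f1) and similarity (f2) factors cited above are the standard dissolution-profile comparison metrics; the sketch below computes them for made-up reference and predicted profiles (percent dissolved at matched time points).

```python
# Standard f1 (difference) and f2 (similarity) factors for dissolution profiles.
# The profiles below are placeholders, not data from the study.
import numpy as np

def f1_difference(reference, test):
    reference, test = np.asarray(reference, float), np.asarray(test, float)
    return 100.0 * np.abs(reference - test).sum() / reference.sum()

def f2_similarity(reference, test):
    reference, test = np.asarray(reference, float), np.asarray(test, float)
    mse = ((reference - test) ** 2).mean()
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + mse))

ref = [12, 28, 45, 60, 72, 85, 92]   # placeholder reference profile (%)
tst = [10, 25, 43, 58, 70, 83, 90]   # placeholder predicted profile (%)
print(f1_difference(ref, tst), f2_similarity(ref, tst))
# Profiles are usually judged similar when f1 < 15 and f2 > 50.
```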

  6. Prediction of Aerodynamic Coefficients using Genetic Algorithm Optimized Neural Network for Sparse Data

    NASA Technical Reports Server (NTRS)

    Rajkumar, T.; Bardina, Jorge; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Wind tunnels use scale models to characterize aerodynamic coefficients. Wind tunnel testing can be slow and costly due to high personnel overhead and intensive power utilization. Although manual curve fitting can be done, it is highly efficient to use a neural network to define the complex relationship between variables. Numerical simulation of complex vehicles on the wide range of conditions required for flight simulation requires static and dynamic data. Static data at low Mach numbers and angles of attack may be obtained with simpler Euler codes. Static data of stalled vehicles, where zones of flow separation are usually present at higher angles of attack, require Navier-Stokes simulations, which are costly due to the large processing time required to attain convergence. Preliminary dynamic data may be obtained with simpler methods based on correlations and vortex methods; however, accurate prediction of the dynamic coefficients requires complex and costly numerical simulations. A reliable and fast method of predicting complex aerodynamic coefficients for flight simulation is presented using a neural network. The training data for the neural network are derived from numerical simulations and wind-tunnel experiments. The aerodynamic coefficients are modeled as functions of the flow characteristics and the control surfaces of the vehicle. The basic coefficients of lift, drag and pitching moment are expressed as functions of angles of attack and Mach number. The modeled and training aerodynamic coefficients show good agreement. This method shows excellent potential for rapid development of aerodynamic models for flight simulation. Genetic Algorithms (GA) are used to optimize a previously built Artificial Neural Network (ANN) that reliably predicts aerodynamic coefficients. Results indicate that the GA provided an efficient method of optimizing the ANN model to predict aerodynamic coefficients. The reliability of the ANN using the GA includes prediction of aerodynamic coefficients to an accuracy of 110%. In our problem, we would like to obtain an optimized neural network architecture and a minimum data set. This has been accomplished within 500 training cycles of the neural network. After removing training pairs (outliers), the GA produced much better results. The neural network constructed is a feed-forward neural network with a back-propagation learning mechanism. The main goal has been to free the network design process from constraints of human biases, and to discover better forms of neural network architectures. The automation of the network architecture search by genetic algorithms seems to have been the best way to achieve this goal.
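
    A hedged sketch of the genetic-algorithm-over-architecture idea: each genome encodes a hidden-layer width, and fitness is the validation error of a small one-hidden-layer net fitted to synthetic (angle of attack, Mach) -> lift data. The quick least-squares fit of the output weights stands in for the paper's back-propagation training, and all data and parameters are illustrative assumptions.

```python
# GA search over hidden-layer width for a small feed-forward net.
# Synthetic aerodynamic data; least-squares output fit stands in for backprop.
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.uniform(-5, 15, 200); mach = rng.uniform(0.2, 0.8, 200)
cl = 0.1 * alpha * (1 - 0.3 * mach) + 0.02 * rng.standard_normal(200)
X = np.column_stack([alpha, mach]); y = cl
tr, va = slice(0, 150), slice(150, 200)

def fitness(hidden):
    W = rng.standard_normal((2, hidden)); b = rng.standard_normal(hidden)
    H_tr = np.tanh(X[tr] @ W + b)
    w = np.linalg.lstsq(H_tr, y[tr], rcond=None)[0]
    pred = np.tanh(X[va] @ W + b) @ w
    return -np.mean((pred - y[va]) ** 2)          # higher is better

pop = rng.integers(2, 32, size=10)                # initial architectures
for _ in range(20):                               # a few GA generations
    scores = np.array([fitness(h) for h in pop])
    parents = pop[np.argsort(scores)][-5:]        # keep the best half
    children = [(a + b) // 2 for a, b in zip(parents, np.roll(parents, 1))]
    children = [max(2, c + rng.integers(-2, 3)) for c in children]  # mutate
    pop = np.concatenate([parents, children])
print("selected hidden width:", pop[np.argmax([fitness(h) for h in pop])])
```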

  7. Optimal mapping of neural-network learning on message-passing multicomputers

    NASA Technical Reports Server (NTRS)

    Chu, Lon-Chan; Wah, Benjamin W.

    1992-01-01

    A minimization of learning-algorithm completion time is sought in the present optimal-mapping study of the learning process in multilayer feed-forward artificial neural networks (ANNs) for message-passing multicomputers. A novel approximation algorithm for mappings of this kind is derived from observations of the dominance of a parallel ANN algorithm over its communication time. Attention is given to both static and dynamic mapping schemes for systems with static and dynamic background workloads, as well as to experimental results obtained for simulated mappings on multicomputers with dynamic background workloads.

  8. Synchronization criteria for generalized reaction-diffusion neural networks via periodically intermittent control.

    PubMed

    Gan, Qintao; Lv, Tianshi; Fu, Zhenhua

    2016-04-01

    In this paper, the synchronization problem for a class of generalized neural networks with time-varying delays and reaction-diffusion terms is investigated concerning Neumann boundary conditions in terms of p-norm. The proposed generalized neural networks model includes reaction-diffusion local field neural networks and reaction-diffusion static neural networks as its special cases. By establishing a new inequality, some simple and useful conditions are obtained analytically to guarantee the global exponential synchronization of the addressed neural networks under the periodically intermittent control. According to the theoretical results, the influences of diffusion coefficients, diffusion space, and control rate on synchronization are analyzed. Finally, the feasibility and effectiveness of the proposed methods are shown by simulation examples, and by choosing different diffusion coefficients, diffusion spaces, and control rates, different controlled synchronization states can be obtained.

  9. Static sign language recognition using 1D descriptors and neural networks

    NASA Astrophysics Data System (ADS)

    Solís, José F.; Toxqui, Carina; Padilla, Alfonso; Santiago, César

    2012-10-01

    A framework for static sign language recognition using artificial neural networks and descriptors that represent 2D images as 1D data is presented in this work. The 1D descriptors were computed by two methods: the first consists of a correlation rotational operator and the second is based on contour analysis of the hand shape. One of the main problems in sign language recognition is segmentation; most papers rely on specially colored gloves or backgrounds for hand shape analysis. In order to avoid the use of gloves or special clothing, a thermal imaging camera was used to capture the images. Static signs for the digits 1 to 9 of American Sign Language were captured, and a multilayer perceptron reached 100% recognition with cross-validation.
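
    The abstract's second descriptor is based on contour analysis of the hand shape; one common 1D shape signature (assumed here, not necessarily the paper's exact operator) is the centroid-distance signature, resampled to a fixed length so a multilayer perceptron can consume it.

```python
# Centroid-distance signature: a 1-D descriptor of a segmented hand contour.
# The "hand" contour here is a toy shape, not a thermal image segmentation.
import numpy as np

def centroid_distance_signature(contour_xy, n_bins=64):
    """contour_xy: (N, 2) array of contour points of a segmented shape."""
    c = contour_xy.mean(axis=0)
    d = np.linalg.norm(contour_xy - c, axis=1)
    theta = np.arctan2(contour_xy[:, 1] - c[1], contour_xy[:, 0] - c[0])
    order = np.argsort(theta)
    # Resample to a fixed-length 1-D descriptor and normalize scale.
    resampled = np.interp(np.linspace(0, 1, n_bins),
                          np.linspace(0, 1, len(d)), d[order])
    return resampled / resampled.max()

# Toy contour: a noisy circle stands in for a segmented hand shape.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.column_stack([np.cos(t), np.sin(t)]) * (1 + 0.1 * np.sin(5 * t))[:, None]
print(centroid_distance_signature(contour)[:8])
```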

  10. Optimal exponential synchronization of general chaotic delayed neural networks: an LMI approach.

    PubMed

    Liu, Meiqin

    2009-09-01

    This paper investigates the optimal exponential synchronization problem of general chaotic neural networks with or without time delays by virtue of Lyapunov-Krasovskii stability theory and the linear matrix inequality (LMI) technique. This general model, which is the interconnection of a linear delayed dynamic system and a bounded static nonlinear operator, covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks (CNNs), bidirectional associative memory (BAM) networks, and recurrent multilayer perceptrons (RMLPs) with or without delays. Using the drive-response concept, time-delay feedback controllers are designed to synchronize two identical chaotic neural networks as quickly as possible. The control design equations are shown to be a generalized eigenvalue problem (GEVP) which can be easily solved by various convex optimization algorithms to determine the optimal control law and the optimal exponential synchronization rate. Detailed comparisons with existing results are made and numerical simulations are carried out to demonstrate the effectiveness of the established synchronization laws.

  11. Robustness of a distributed neural network controller for locomotion in a hexapod robot

    NASA Technical Reports Server (NTRS)

    Chiel, Hillel J.; Beer, Randall D.; Quinn, Roger D.; Espenschied, Kenneth S.

    1992-01-01

    A distributed neural-network controller for locomotion, based on insect neurobiology, has been used to control a hexapod robot. How robust is this controller? Disabling any single sensor, effector, or central component did not prevent the robot from walking. Furthermore, statically stable gaits could be established using either sensor input or central connections. Thus, a complex interplay between central neural elements and sensor inputs is responsible for the robustness of the controller and its ability to generate a continuous range of gaits. These results suggest that biologically inspired neural-network controllers may be a robust method for robotic control.

  12. The super-Turing computational power of plastic recurrent neural networks.

    PubMed

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is not constrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the same super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  13. Adaptive Neural Networks for Automatic Negotiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakas, D. P.; Vlachos, D. S.; Simos, T. E.

    The use of fuzzy logic and fuzzy neural networks has been found effective for modelling the uncertain relations between the parameters of a negotiation procedure. The problem with these configurations is that they are static, that is, any new knowledge from theory or experiment leads to the construction of entirely new models. To overcome this difficulty, we apply an adaptive neural topology in this work to model the negotiation process. Finally, a simple simulation is carried out to test the new method.

  14. Simulating and Synthesizing Substructures Using Neural Network and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Liu, Youhua; Kapania, Rakesh K.; VanLandingham, Hugh F.

    1997-01-01

    The feasibility of simulating and synthesizing substructures by computational neural network models is illustrated by investigating a statically indeterminate beam, using both a 1-D and a 2-D plane stress modelling. The beam can be decomposed into two cantilevers with free-end loads. By training neural networks to simulate the cantilever responses to different loads, the original beam problem can be solved as a match-up between two subsystems under compatible interface conditions. The genetic algorithms are successfully used to solve the match-up problem. Simulated results are found in good agreement with the analytical or FEM solutions.

  15. Nonlinear neural control with power systems applications

    NASA Astrophysics Data System (ADS)

    Chen, Dingguo

    1998-12-01

    Extensive studies have been undertaken on the transient stability of large interconnected power systems with flexible ac transmission systems (FACTS) devices installed. A variety of control methodologies have been proposed to stabilize the postfault system, which would otherwise eventually lose stability without a proper control. Generally speaking, regular transient stability is well understood, but the mechanism of load-driven voltage instability or voltage collapse is not. The interaction of generator dynamics and load dynamics makes synthesis of stabilizing controllers even more challenging. There is currently increasing interest in the research of neural networks as identifiers and controllers for dealing with dynamic time-varying nonlinear systems. This study focuses on the development of novel artificial neural network architectures for identification and control, with application to dynamic electric power systems, so that the stability of the interconnected power systems, following large disturbances and/or with the inclusion of uncertain loads, can be greatly enhanced and stable operation is guaranteed. The latitudinal neural network architecture is proposed for the purpose of system identification. It may be used for identification of nonlinear static/dynamic loads, which can be further used for static/dynamic voltage stability analysis. The properties associated with this architecture are investigated. A neural network methodology is proposed for dealing with load modeling and voltage stability analysis. Based on the neural network models of loads, voltage stability analysis evolves, and modal analysis is performed. Simulation results are also provided. The transient stability problem is studied with consideration of load effects. A hierarchical neural control scheme is developed. A trajectory-following policy is used so that the hierarchical neural controller performs almost as well for non-nominal cases as it does for the nominal cases. An adaptive hierarchical neural control scheme is also proposed to deal with the time-varying nature of loads. Further, adaptive neural control, which is based on on-line updating of the weights and biases of the neural networks, is studied. Simulations performed on faulted power systems with unknown loads suggest that the proposed adaptive hierarchical neural control schemes should be useful for practical power applications.

  16. Static facial expression recognition with convolution neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Chen, Zhong; Ouyang, Chao; Zhang, Yifei

    2018-03-01

    Facial expression recognition is a currently active research topic in the fields of computer vision, pattern recognition and artificial intelligence. In this paper, we have developed a convolutional neural network (CNN) for classifying human emotions from static facial expressions into one of seven facial emotion categories. We pre-train our CNN model on the combined FER2013 dataset formed by its train, validation and test sets and fine-tune on the extended Cohn-Kanade database. In order to reduce overfitting of the models, we utilized different techniques including dropout and batch normalization in addition to data augmentation. According to the experimental results, our CNN model has excellent classification performance and robustness for facial expression recognition.
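
    A minimal Keras sketch of a CNN of the kind described (seven emotion classes, dropout and batch normalization); the layer sizes are illustrative assumptions rather than the paper's architecture, and FER2013 images are assumed to be 48x48 grayscale.

```python
# Illustrative CNN for 7-class facial expression recognition (not the paper's net).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),            # assumed 48x48 grayscale faces
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),      # seven facial emotion categories
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```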

  17. Exploring Neural Network Models with Hierarchical Memories and Their Use in Modeling Biological Systems

    NASA Astrophysics Data System (ADS)

    Pusuluri, Sai Teja

    Energy landscapes are often used as metaphors for phenomena in biology, social sciences and finance. Different methods have been implemented in the past for the construction of energy landscapes. Neural network models based on spin glass physics provide an excellent mathematical framework for the construction of energy landscapes. This framework uses a minimal number of parameters and constructs the landscape using data from the actual phenomena. In the past neural network models were used to mimic the storage and retrieval process of memories (patterns) in the brain. With advances in the field now, these models are being used in machine learning, deep learning and modeling of complex phenomena. Most of the past literature focuses on increasing the storage capacity and stability of stored patterns in the network but does not study these models from a modeling perspective or an energy landscape perspective. This dissertation focuses on neural network models both from a modeling perspective and from an energy landscape perspective. I firstly show how the cellular interconversion phenomenon can be modeled as a transition between attractor states on an epigenetic landscape constructed using neural network models. The model allows the identification of a reaction coordinate of cellular interconversion by analyzing experimental and simulation time course data. Monte Carlo simulations of the model show that the initial phase of cellular interconversion is a Poisson process and the later phase of cellular interconversion is a deterministic process. Secondly, I explore the static features of landscapes generated using neural network models, such as sizes of basins of attraction and densities of metastable states. The simulation results show that the static landscape features are strongly dependent on the correlation strength and correlation structure between patterns. Using different hierarchical structures of the correlation between patterns affects the landscape features. These results show how the static landscape features can be controlled by adjusting the correlations between patterns. Finally, I explore the dynamical features of landscapes generated using neural network models such as the stability of minima and the transition rates between minima. The results from this project show that the stability depends on the correlations between patterns. It is also found that the transition rates between minima strongly depend on the type of bias applied and the correlation between patterns. The results from this part of the dissertation can be useful in engineering an energy landscape without even having the complete information about the associated minima of the landscape.

  18. Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network

    NASA Technical Reports Server (NTRS)

    Yao, Weigang; Liou, Meng-Sing

    2012-01-01

    The present study demonstrates the efficacy of a recurrent artificial neural network to provide a high fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for application to problems of a dynamic nature. The recurrent neural network method [1] is applied to construct a reduced-order model resulting from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis.

  19. Application of artificial neural networks to the design optimization of aerospace structural components

    NASA Technical Reports Server (NTRS)

    Berke, Laszlo; Patnaik, Surya N.; Murthy, Pappu L. N.

    1993-01-01

    The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated by using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network with the code NETS. Optimum designs for new design conditions were predicted by using the trained network. Neural net prediction of optimum designs was found to be satisfactory for most of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.

  20. Optimum Design of Aerospace Structural Components Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Berke, L.; Patnaik, S. N.; Murthy, P. L. N.

    1993-01-01

    The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires a trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network using the code NETS. Optimum designs for new design conditions were predicted using the trained network. Neural net prediction of optimum designs was found to be satisfactory for the majority of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.

  1. Functional expansion representations of artificial neural networks

    NASA Technical Reports Server (NTRS)

    Gray, W. Steven

    1992-01-01

    In the past few years, significant interest has developed in using artificial neural networks to model and control nonlinear dynamical systems. While there exist many proposed schemes for accomplishing this and a wealth of supporting empirical results, most approaches to date tend to be ad hoc in nature and rely mainly on heuristic justifications. The purpose of this project was to further develop some analytical tools for representing nonlinear discrete-time input-output systems, which when applied to neural networks would give insight on architecture selection, pruning strategies, and learning algorithms. A long-term goal is to determine in what sense, if any, a neural network can be used as a universal approximator for nonlinear input-output maps with memory (i.e., realized by a dynamical system). This property is well known for the case of static or memoryless input-output maps. The general architecture under consideration in this project was a single-input, single-output recurrent feedforward network.

  2. Structure-function clustering in multiplex brain networks

    NASA Astrophysics Data System (ADS)

    Crofts, J. J.; Forrester, M.; O'Dea, R. D.

    2016-10-01

    A key question in neuroscience is to understand how a rich functional repertoire of brain activity arises within relatively static networks of structurally connected neural populations: elucidating the subtle interactions between evoked “functional connectivity” and the underlying “structural connectivity” has the potential to address this. These structural-functional networks (and neural networks more generally) are more naturally described using a multilayer or multiplex network approach, in favour of standard single-layer network analyses that are more typically applied to such systems. In this letter, we address such issues by exploring important structure-function relations in the Macaque cortical network by modelling it as a duplex network that comprises an anatomical layer, describing the known (macro-scale) network topology of the Macaque monkey, and a functional layer derived from simulated neural activity. We investigate and characterize correlations between structural and functional layers, as system parameters controlling simulated neural activity are varied, by employing recently described multiplex network measures. Moreover, we propose a novel measure of multiplex structure-function clustering which allows us to investigate the emergence of functional connections that are distinct from the underlying cortical structure, and to highlight the dependence of multiplex structure on the neural dynamical regime.

  3. Dynamics of coupled mode solitons in bursting neural networks

    NASA Astrophysics Data System (ADS)

    Nfor, N. Oma; Ghomsi, P. Guemkam; Moukam Kakmeni, F. M.

    2018-02-01

    Using an electrically coupled chain of Hindmarsh-Rose neural models, we analytically derived the nonlinearly coupled complex Ginzburg-Landau equations. This is realized by superimposing the lower and upper cutoff modes of wave propagation and by employing the multiple scale expansions in the semidiscrete approximation. We explore the modified Hirota method to analytically obtain the bright-bright pulse soliton solutions of our nonlinearly coupled equations. With these bright solitons as initial conditions of our numerical scheme, and knowing that electrical signals are the basis of information transfer in the nervous system, it is found that prior to collisions at the boundaries of the network, neural information is purely conveyed by bisolitons at lower cutoff mode. After collision, the bisolitons are completely annihilated and neural information is now relayed by the upper cutoff mode via the propagation of plane waves. It is also shown that the linear gain of the system is inextricably linked to the complex physiological mechanisms of ion mobility, since the speeds and spatial profiles of the coupled nerve impulses vary with the gain. A linear stability analysis performed on the coupled system mainly confirms the instability of plane waves in the neural network, with a glaring example of the transition of weak plane waves into a dark soliton and then static kinks. Numerical simulations have confirmed the annihilation phenomenon subsequent to collision in neural systems. They equally showed that the symmetry breaking of the pulse solution of the system leaves in the network static internal modes, sometimes referred to as Goldstone modes.

  4. Dynamics of coupled mode solitons in bursting neural networks.

    PubMed

    Nfor, N Oma; Ghomsi, P Guemkam; Moukam Kakmeni, F M

    2018-02-01

    Using an electrically coupled chain of Hindmarsh-Rose neural models, we analytically derived the nonlinearly coupled complex Ginzburg-Landau equations. This is realized by superimposing the lower and upper cutoff modes of wave propagation and by employing the multiple scale expansions in the semidiscrete approximation. We explore the modified Hirota method to analytically obtain the bright-bright pulse soliton solutions of our nonlinearly coupled equations. With these bright solitons as initial conditions of our numerical scheme, and knowing that electrical signals are the basis of information transfer in the nervous system, it is found that prior to collisions at the boundaries of the network, neural information is purely conveyed by bisolitons at lower cutoff mode. After collision, the bisolitons are completely annihilated and neural information is now relayed by the upper cutoff mode via the propagation of plane waves. It is also shown that the linear gain of the system is inextricably linked to the complex physiological mechanisms of ion mobility, since the speeds and spatial profiles of the coupled nerve impulses vary with the gain. A linear stability analysis performed on the coupled system mainly confirms the instability of plane waves in the neural network, with a glaring example of the transition of weak plane waves into a dark soliton and then static kinks. Numerical simulations have confirmed the annihilation phenomenon subsequent to collision in neural systems. They equally showed that the symmetry breaking of the pulse solution of the system leaves in the network static internal modes, sometimes referred to as Goldstone modes.

  5. Prototype-Incorporated Emotional Neural Network.

    PubMed

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-08-15

    Artificial neural networks (ANNs) aim to simulate biological neural activities. Interestingly, many "engineering" prospects in ANN have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been the subject of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, the prototype-learning theory uses prototypes (representative examples); usually, one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as the "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two real-life challenging tasks, namely, static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbor.

  6. An on-line BCI for control of hand grasp sequence and holding using adaptive probabilistic neural network.

    PubMed

    Hazrati, Mehrnaz Kh; Erfanian, Abbas

    2008-01-01

    This paper presents a new EEG-based Brain-Computer Interface (BCI) for on-line control of the sequence of hand grasping and holding in a virtual reality environment. The goal of this research is to develop an interaction technique that will allow the BCI to be effective in real-world scenarios for hand grasp control. Moreover, for consistency of the man-machine interface, it is desirable that the intended movement be what the subject imagines. For this purpose, we developed an on-line BCI based on the classification of EEG associated with imagination of the movement of hand grasping and the resting state. A classifier based on a probabilistic neural network (PNN) was introduced for classifying the EEG. The PNN is a feedforward neural network that realizes the Bayes decision discriminant function by estimating the probability density function using mixtures of Gaussian kernels. Two types of classification schemes were considered here for on-line hand control: adaptive and static. In contrast to static classification, the adaptive classifier was continuously updated on-line during recording. The experimental evaluation on six subjects on different days demonstrated that by using the static scheme, a classification accuracy as high as the rate obtained by the adaptive scheme can be achieved. In the best case, an average classification accuracy of 93.0% and 85.8% was obtained using the adaptive and static schemes, respectively. The results obtained from more than 1500 trials on six subjects showed that an interactive virtual reality environment can be used as an effective tool for subject training in BCI.
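
    A minimal sketch of the probabilistic neural network (PNN) classifier described above: each class score is a Parzen estimate built from Gaussian kernels centered on that class's training examples. The EEG feature vectors, labels, and kernel width below are placeholders.

```python
# PNN classifier: Parzen density estimate per class with Gaussian kernels.
# Feature vectors stand in for EEG features; they are synthetic placeholders.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=1.0):
    preds = []
    classes = np.unique(y_train)
    for x in X_test:
        scores = []
        for c in classes:
            Xc = X_train[y_train == c]
            d2 = ((Xc - x) ** 2).sum(axis=1)
            scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean())
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

rng = np.random.default_rng(0)
X_imagery = rng.normal(1.0, 1.0, size=(40, 8))   # "grasp imagery" features
X_rest = rng.normal(-1.0, 1.0, size=(40, 8))     # "resting state" features
X = np.vstack([X_imagery, X_rest])
y = np.array([1] * 40 + [0] * 40)
print(pnn_predict(X, y, X[:5], sigma=1.5))
```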

  7. Dynamic security contingency screening and ranking using neural networks.

    PubMed

    Mansour, Y; Vaahedi, E; El-Sharkawi, M A

    1997-01-01

    This paper summarizes BC Hydro's experience in applying neural networks to dynamic security contingency screening and ranking. The idea is to use the information on the prevailing operating condition and directly provide contingency screening and ranking using a trained neural network. To train the two neural networks for the large-scale systems of BC Hydro and Hydro Quebec, a total of 1691 detailed transient stability simulations were conducted, 1158 for the BC Hydro system and 533 for the Hydro Quebec system. The simulation program was equipped with an energy margin calculation module (second kick) to measure the energy margin in each run. The first set of results showed poor performance for the neural networks in assessing dynamic security. However, a number of corrective measures improved the results significantly. These corrective measures included: 1) the effectiveness of the output; 2) the number of outputs; 3) the type of features (static versus dynamic); 4) the number of features; 5) system partitioning; and 6) the ratio of training samples to features. The final results obtained using the large-scale systems of BC Hydro and Hydro Quebec demonstrate good potential for neural networks in dynamic security contingency screening and ranking.

  8. Temporal neural networks and transient analysis of complex engineering systems

    NASA Astrophysics Data System (ADS)

    Uluyol, Onder

    A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the presentation of different time scales through incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
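
    A sketch of the short-term "gamma memory" the LOGF neuron builds on: a cascade of leaky taps g_k(n) = (1 - mu) g_k(n-1) + mu g_{k-1}(n-1), with g_0 the raw input. The tap order and mu below are arbitrary illustrative choices.

```python
# Digital gamma memory: a cascade of first-order leaky taps over a signal.
import numpy as np

def gamma_memory(signal, order=3, mu=0.5):
    taps = np.zeros(order + 1)
    history = []
    for u in signal:
        new = np.empty_like(taps)
        new[0] = u                               # g_0(n) is the raw input
        for k in range(1, order + 1):
            new[k] = (1 - mu) * taps[k] + mu * taps[k - 1]
        taps = new
        history.append(taps.copy())
    return np.array(history)                     # shape (len(signal), order + 1)

t = np.linspace(0, 4 * np.pi, 100)
print(gamma_memory(np.sin(t)).shape)
```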

  9. Mindfulness and dynamic functional neural connectivity in children and adolescents.

    PubMed

    Marusak, Hilary A; Elrahal, Farrah; Peters, Craig A; Kundu, Prantik; Lombardo, Michael V; Calhoun, Vince D; Goldberg, Elimelech K; Cohen, Cindy; Taub, Jeffrey W; Rabinak, Christine A

    2018-01-15

    Interventions that promote mindfulness consistently show salutary effects on cognition and emotional wellbeing in adults, and more recently, in children and adolescents. However, we lack understanding of the neurobiological mechanisms underlying mindfulness in youth that should allow for more judicious application of these interventions in clinical and educational settings. Using multi-echo multi-band fMRI, we examined dynamic (i.e., time-varying) and conventional static resting-state connectivity between core neurocognitive networks (i.e., salience/emotion, default mode, central executive) in 42 children and adolescents (ages 6-17). We found that trait mindfulness in youth relates to dynamic but not static resting-state connectivity. Specifically, more mindful youth transitioned more between brain states over the course of the scan, spent overall less time in a certain connectivity state, and showed a state-specific reduction in connectivity between salience/emotion and central executive networks. The number of state transitions mediated the link between higher mindfulness and lower anxiety, providing new insights into potential neural mechanisms underlying benefits of mindfulness on psychological health in youth. Our results provide new evidence that mindfulness in youth relates to functional neural dynamics and interactions between neurocognitive networks, over time. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Communications and control for electric power systems: Power flow classification for static security assessment

    NASA Technical Reports Server (NTRS)

    Niebur, D.; Germond, A.

    1993-01-01

    This report investigates the classification of power system states using an artificial neural network model, Kohonen's self-organizing feature map. The ultimate goal of this classification is to assess power system static security in real-time. Kohonen's self-organizing feature map is an unsupervised neural network which maps N-dimensional input vectors to an array of M neurons. After learning, the synaptic weight vectors exhibit a topological organization which represents the relationship between the vectors of the training set. This learning is unsupervised, which means that the number and size of the classes are not specified beforehand. In the application developed in this report, the input vectors used as the training set are generated by off-line load-flow simulations. The learning algorithm and the results of the organization are discussed.
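
    A minimal self-organizing feature map in the spirit of the report: N-dimensional load-flow feature vectors are mapped onto a small grid of neurons whose weight vectors self-organize without supervision. The feature dimension, grid size, and training data are placeholders.

```python
# Kohonen self-organizing map trained on placeholder load-flow feature vectors.
import numpy as np

def train_som(X, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.uniform(X.min(), X.max(), size=(rows, cols, X.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    n_steps = epochs * len(X)
    step = 0
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            lr = lr0 * (1 - step / n_steps)
            sigma = sigma0 * (1 - step / n_steps) + 0.5
            # Best-matching unit and Gaussian neighbourhood update.
            d = ((W - x) ** 2).sum(axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
            h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            W += lr * h * (x - W)
            step += 1
    return W

X = np.random.default_rng(1).normal(size=(300, 5))  # placeholder feature vectors
print(train_som(X).shape)                            # (6, 6, 5) weight map
```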

  11. Distributed neural control of a hexapod walking vehicle

    NASA Technical Reports Server (NTRS)

    Beer, R. D.; Sterling, L. S.; Quinn, R. D.; Chiel, H. J.; Ritzmann, R.

    1989-01-01

    There has been a long-standing interest in the design of controllers for multilegged vehicles. The approach here is to apply distributed control to this problem, rather than using parallel computing of a centralized algorithm. The researchers describe a distributed neural network controller for hexapod locomotion which is based on the neural control of locomotion in insects. The model considers simplified kinematics with two degrees of freedom per leg, but it includes the static stability constraint. Through simulation, it is demonstrated that this controller can generate a continuous range of statically stable gaits at different speeds by varying a single control parameter. In addition, the controller is extremely robust and can continue to function even after several of its elements have been disabled. The researchers are building a small hexapod robot whose locomotion will be controlled by this network. They intend to extend their model to the dynamic control of legs with more than two degrees of freedom by using data on the control of multisegmented insect legs. Another immediate application of this neural control approach is also exhibited in biology: the escape reflex. Advanced robots are being equipped with tactile sensing and machine vision, so the sensory inputs to the robot controller are vast and complex. Neural networks are ideal for a lower-level safety reflex controller because of their extremely fast response time. The combination of robotics, computer modeling, and neurobiology has been remarkably fruitful, and is likely to lead to deeper insights into the problems of real-time sensorimotor control.

  12. System and Method for Modeling the Flow Performance Features of an Object

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles (Inventor); Ross, James (Inventor)

    1997-01-01

    The method and apparatus include a neural network for generating a model of an object in a wind tunnel from performance data on the object. The network is trained from test input signals (e.g., leading edge flap position, trailing edge flap position, angle of attack, and other geometric configurations, and power settings) and test output signals (e.g., lift, drag, pitching moment, or other performance features). In one embodiment, the neural network training method employs a modified Levenberg-Marquardt optimization technique. The model can be generated 'real time' as wind tunnel testing proceeds. Once trained, the model is used to estimate performance features associated with the aircraft given geometric configuration and/or power setting input. The invention can also be applied in other similar static flow modeling applications in aerodynamics, hydrodynamics, fluid dynamics, and other such disciplines, for example, the static testing of cars, sails, foils, propellers, keels, rudders, turbines, fins, and the like in a wind tunnel, water trough, or other flowing medium.

  13. Memory and pattern storage in neural networks with activity dependent synapses

    NASA Astrophysics Data System (ADS)

    Mejias, J. F.; Torres, J. J.

    2009-01-01

    We present recently obtained results on the influence of the interplay between several activity-dependent synaptic mechanisms, such as short-term depression and facilitation, on the maximum memory storage capacity in an attractor neural network [1]. In contrast with the case of synaptic depression, which drastically reduces the capacity of the network to store and retrieve activity patterns [2], synaptic facilitation is able to enhance the memory capacity in different situations. In particular, we find that a convenient balance between depression and facilitation can enhance the memory capacity, reaching maximal values similar to those obtained with static synapses, that is, without activity-dependent processes. We also argue, employing simple arguments, that this level of balance is compatible with experimental data recorded from some cortical areas, where depression and facilitation may play an important role for both memory-oriented tasks and information processing. We conclude that depressing synapses with a certain level of facilitation make it possible to recover the good retrieval properties of networks with static synapses while maintaining the nonlinear properties of dynamic synapses, which are convenient for information processing and coding.
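
    A sketch of an activity-dependent synapse with depression and facilitation; a Tsodyks-Markram-style per-spike update is assumed here, with illustrative parameter values. The variable u tracks facilitation, x tracks the fraction of available resources, and the transmitted strength per spike is u*x.

```python
# Assumed Tsodyks-Markram-style dynamic synapse; parameters are illustrative.
import numpy as np

def dynamic_synapse(spike_times, U=0.2, tau_f=600.0, tau_d=300.0):
    u, x, last_t = U, 1.0, None
    efficacies = []
    for t in spike_times:                            # spike times in ms
        if last_t is not None:
            dt = t - last_t
            u = U + (u - U) * np.exp(-dt / tau_f)       # facilitation decays to U
            x = 1.0 + (x - 1.0) * np.exp(-dt / tau_d)   # resources recover to 1
        u = u + U * (1.0 - u)                        # spike-driven facilitation
        efficacies.append(u * x)                     # effective synaptic strength
        x = x * (1.0 - u)                            # spike-driven depression
        last_t = t
    return np.array(efficacies)

print(dynamic_synapse(np.arange(0, 500, 50)))        # regular 20 Hz spike train
```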

  14. Fuzzy wavelet plus a quantum neural network as a design base for power system stability enhancement.

    PubMed

    Ganjefar, Soheil; Tofighi, Morteza; Karami, Hamidreza

    2015-11-01

    In this study, we introduce an indirect adaptive fuzzy wavelet neural controller (IAFWNC) as a power system stabilizer to damp inter-area modes of oscillations in a multi-machine power system. Quantum computing is an efficient method for improving the computational efficiency of neural networks, so we developed an identifier based on a quantum neural network (QNN) to train the IAFWNC in the proposed scheme. All of the controller parameters are tuned online based on the Lyapunov stability theory to guarantee the closed-loop stability. A two-machine, two-area power system equipped with a static synchronous series compensator as a series flexible ac transmission system was used to demonstrate the effectiveness of the proposed controller. The simulation and experimental results demonstrated that the proposed IAFWNC scheme can achieve favorable control performance. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks

    PubMed Central

    Yu, Haiyang; Wu, Zhihai; Wang, Shuqin; Wang, Yunpeng; Ma, Xiaolei

    2017-01-01

    Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction. PMID:28672867
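
    A hedged Keras sketch of the CNN-plus-LSTM idea behind the SRCNs: each time step is a grid "image" of network-wide speeds, a shared CNN extracts spatial features, and an LSTM models their temporal evolution. The grid size and sequence length are assumptions; only the 278-link output follows the abstract.

```python
# Illustrative CNN + LSTM stack for grid-image traffic sequences (not the paper's SRCN).
import tensorflow as tf
from tensorflow.keras import layers, models

seq_len, height, width = 10, 32, 32            # assumed grid representation
model = models.Sequential([
    layers.Input(shape=(seq_len, height, width, 1)),
    layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu", padding="same")),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Flatten()),
    layers.LSTM(64),                            # temporal dynamics across time steps
    layers.Dense(278),                          # predicted speed on 278 links
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```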

  16. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks.

    PubMed

    Yu, Haiyang; Wu, Zhihai; Wang, Shuqin; Wang, Yunpeng; Ma, Xiaolei

    2017-06-26

    Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction.

  17. Neural Correlates of Racial Ingroup Bias in Observing Computer-Animated Social Encounters.

    PubMed

    Katsumi, Yuta; Dolcos, Sanda

    2017-01-01

    Despite evidence for the role of group membership in the neural correlates of social cognition, the mechanisms associated with processing non-verbal behaviors displayed by racially ingroup vs. outgroup members remain unclear. Here, 20 Caucasian participants underwent fMRI recording while observing social encounters with ingroup and outgroup characters displaying dynamic and static non-verbal behaviors. Dynamic behaviors included approach and avoidance behaviors, preceded or not by a handshake; both dynamic and static behaviors were followed by participants' ratings. Behaviorally, participants showed bias toward their ingroup members, demonstrated by faster/slower reaction times for evaluating ingroup static/approach behaviors, respectively. At the neural level, despite overall similar responses in the action observation network to ingroup and outgroup encounters, the medial prefrontal cortex showed dissociable activation, possibly reflecting spontaneous processing of ingroup static behaviors and positive evaluations of ingroup approach behaviors. The anterior cingulate and superior frontal cortices also showed sensitivity to race, reflected in coordinated and reduced activation for observing ingroup static behaviors. Finally, the posterior superior temporal sulcus showed uniquely increased activity to observing ingroup handshakes. These findings shed light on the mechanisms of racial ingroup bias in observing social encounters, and have implications for understanding factors related to successful interactions with individuals from diverse backgrounds.

  18. Neural Correlates of Racial Ingroup Bias in Observing Computer-Animated Social Encounters

    PubMed Central

    Katsumi, Yuta; Dolcos, Sanda

    2018-01-01

    Despite evidence for the role of group membership in the neural correlates of social cognition, the mechanisms associated with processing non-verbal behaviors displayed by racially ingroup vs. outgroup members remain unclear. Here, 20 Caucasian participants underwent fMRI recording while observing social encounters with ingroup and outgroup characters displaying dynamic and static non-verbal behaviors. Dynamic behaviors included approach and avoidance behaviors, preceded or not by a handshake; both dynamic and static behaviors were followed by participants’ ratings. Behaviorally, participants showed bias toward their ingroup members, demonstrated by faster/slower reaction times for evaluating ingroup static/approach behaviors, respectively. At the neural level, despite overall similar responses in the action observation network to ingroup and outgroup encounters, the medial prefrontal cortex showed dissociable activation, possibly reflecting spontaneous processing of ingroup static behaviors and positive evaluations of ingroup approach behaviors. The anterior cingulate and superior frontal cortices also showed sensitivity to race, reflected in coordinated and reduced activation for observing ingroup static behaviors. Finally, the posterior superior temporal sulcus showed uniquely increased activity to observing ingroup handshakes. These findings shed light on the mechanisms of racial ingroup bias in observing social encounters, and have implications for understanding factors related to successful interactions with individuals from diverse backgrounds. PMID:29354042

  19. Emergent Auditory Feature Tuning in a Real-Time Neuromorphic VLSI System.

    PubMed

    Sheik, Sadique; Coath, Martin; Indiveri, Giacomo; Denham, Susan L; Wennekers, Thomas; Chicca, Elisabetta

    2012-01-01

    Many sounds of ecological importance, such as communication calls, are characterized by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamo-cortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectro-temporal patterns. The availability of hardware on which the model can be implemented makes this a significant step toward the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems.

  20. Emergent Auditory Feature Tuning in a Real-Time Neuromorphic VLSI System

    PubMed Central

    Sheik, Sadique; Coath, Martin; Indiveri, Giacomo; Denham, Susan L.; Wennekers, Thomas; Chicca, Elisabetta

    2011-01-01

    Many sounds of ecological importance, such as communication calls, are characterized by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamo-cortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectro-temporal patterns. The availability of hardware on which the model can be implemented makes this a significant step toward the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems. PMID:22347163

  1. Load reduction test method of similarity theory and BP neural networks of large cranes

    NASA Astrophysics Data System (ADS)

    Yang, Ruigang; Duan, Zhibin; Lu, Yi; Wang, Lei; Xu, Gening

    2016-01-01

    Static load tests are an important means of supervising and verifying a crane's lift capacity. Due to space restrictions, however, testing large bridge cranes is difficult and potentially dangerous. To solve the loading problems of large-tonnage cranes during testing, an equivalency test is proposed based on similarity theory and BP neural networks. The maximum stress and displacement of a large bridge crane are measured under small loads, and a neural network is trained on stress and displacement data from a structurally similar crane, collected from a physics simulation progressively loaded up to the static test load within the material's working range. The maximum stress and displacement of a crane under the static test load can then be predicted from the relationship among stress, displacement, and load. By measuring the stress and displacement produced by small-tonnage weights, the response to large loads can be predicted, including the maximum test load of 1.25 times the rated capacity. Experimental study shows that the load reduction test method can reflect the lift capacity of large bridge cranes. The load-shedding predictive analysis of test data from the Sanxia 1200 t bridge crane indicates that, at 1.25 times the rated lifting capacity, the error between predicted and actual displacement is zero. The method addresses the difficulty of obtaining lift capacities and the risk of testing accidents when loads of 1.25 times the rated weight are applied to large-tonnage cranes.
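
    As a rough illustration of the prediction step described above, the sketch below fits a small BP-style network to hypothetical small-load stress/displacement measurements and extrapolates to the 1.25 x rated-capacity test load. The data values and the scikit-learn model are illustrative assumptions, not the paper's similarity-theory formulation.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical small-load measurements: load as a fraction of rated
    # capacity vs. measured stress and displacement (synthetic values).
    load = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])[:, None]
    stress = 120.0 * load.ravel() + 3.0 * load.ravel() ** 2   # MPa, synthetic
    disp = 8.0 * load.ravel() + 0.5 * load.ravel() ** 2       # mm, synthetic

    # Small BP-style network mapping load -> (stress, displacement).
    model = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                         max_iter=5000, random_state=0)
    model.fit(load, np.column_stack([stress, disp]))

    # Extrapolate to the static test load of 1.25 x rated capacity.
    pred_stress, pred_disp = model.predict([[1.25]])[0]
    ```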

  2. A bio-inspired system for spatio-temporal recognition in static and video imagery

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Moore, Christopher K.; Chelian, Suhas

    2007-04-01

    This paper presents a bio-inspired method for spatio-temporal recognition in static and video imagery. It builds upon and extends our previous work on a bio-inspired Visual Attention and object Recognition System (VARS). The VARS approach locates and recognizes objects in a single frame. This work presents two extensions of VARS. The first extension is a Scene Recognition Engine (SCE) that learns to recognize spatial relationships between objects that compose a particular scene category in static imagery. This could be used for recognizing the category of a scene, e.g., office vs. kitchen scene. The second extension is the Event Recognition Engine (ERE) that recognizes spatio-temporal sequences or events in sequences. This extension uses a working memory model to recognize events and behaviors in video imagery by maintaining and recognizing ordered spatio-temporal sequences. The working memory model is based on an ARTSTORE neural network that combines an ART-based neural network with a cascade of sustained temporal order recurrent (STORE) neural networks. A series of Default ARTMAP classifiers ascribe event labels to these sequences. Our preliminary studies have shown that this extension is robust to variations in an object's motion profile. We evaluated the performance of the SCE and ERE on real datasets. The SCE module was tested on a visual scene classification task using the LabelMe dataset. The ERE was tested on real world video footage of vehicles and pedestrians in a street scene. Our system is able to recognize the events in this footage involving vehicles and pedestrians.

  3. Neural Network Burst Pressure Prediction in Composite Overwrapped Pressure Vessels

    NASA Technical Reports Server (NTRS)

    Hill, Eric v. K.; Dion, Seth-Andrew T.; Karl, Justin O.; Spivey, Nicholas S.; Walker, James L., II

    2007-01-01

    Acoustic emission data were collected during the hydroburst testing of eleven 15 inch diameter filament wound composite overwrapped pressure vessels. A neural network burst pressure prediction was generated from the resulting AE amplitude data. The bottles shared commonality of graphite fiber, epoxy resin, and cure time. Individual bottles varied by cure mode (rotisserie versus static oven curing), types of inflicted damage, temperature of the pressurant, and pressurization scheme. Three categorical variables were selected to represent undamaged bottles, impact damaged bottles, and bottles with lacerated hoop fibers. This categorization along with the removal of the AE data from the disbonding noise between the aluminum liner and the composite overwrap allowed the prediction of burst pressures in all three sets of bottles using a single backpropagation neural network. Here the worst case error was 3.38 percent.

  4. An optimized BP neural network based on genetic algorithm for static decoupling of a six-axis force/torque sensor

    NASA Astrophysics Data System (ADS)

    Fu, Liyue; Song, Aiguo

    2018-02-01

    In order to improve the measurement precision of a six-axis force/torque sensor for robots, a BP decoupling algorithm optimized by a genetic algorithm (the GA-BP algorithm) is proposed in this paper. The weights and thresholds of a BP neural network with a 6-10-6 topology are optimized by the GA to decouple the six-axis force/torque sensor. Compared with traditional decoupling approaches, namely computing the pseudo-inverse of the calibration matrix and the classical BP algorithm, the decoupling results validate the good performance of the GA-BP algorithm and show reduced coupling errors.
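
    The following sketch illustrates the general idea of letting a genetic algorithm search the weights and thresholds of a 6-10-6 feedforward network against synthetic calibration data. It is a minimal stand-in: the paper's GA-BP method also refines the result with backpropagation training (omitted here), and the data, fitness function, and GA settings are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic calibration data: 6 raw bridge voltages -> 6 force/torque components.
    V = rng.normal(size=(200, 6))
    coupling = np.eye(6) + 0.1 * rng.normal(size=(6, 6))
    F = V @ coupling                      # stand-in for the decoupled targets

    N_IN, N_HID, N_OUT = 6, 10, 6         # the 6-10-6 topology named in the paper
    N_W = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT

    def forward(w, x):
        """Feedforward pass with a tanh hidden layer; w is a flat weight vector."""
        i = 0
        W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
        b1 = w[i:i + N_HID]; i += N_HID
        W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
        b2 = w[i:]
        return np.tanh(x @ W1 + b1) @ W2 + b2

    def fitness(w):
        return -np.mean((forward(w, V) - F) ** 2)   # GA maximises fitness

    # Minimal GA: truncation selection, blend crossover, Gaussian mutation.
    pop = rng.normal(scale=0.5, size=(60, N_W))
    for gen in range(200):
        scores = np.array([fitness(w) for w in pop])
        elite = pop[np.argsort(scores)[-20:]]                 # keep best 20
        parents = elite[rng.integers(0, 20, size=(40, 2))]
        children = parents.mean(axis=1) + 0.05 * rng.normal(size=(40, N_W))
        pop = np.vstack([elite, children])

    best = pop[np.argmax([fitness(w) for w in pop])]          # seed for BP refinement
    ```

    In the GA-BP scheme, a weight vector like `best` would then serve as the initial point for conventional backpropagation training rather than being used directly.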

  5. Reward-Modulated Hebbian Plasticity as Leverage for Partially Embodied Control in Compliant Robotics

    PubMed Central

    Burms, Jeroen; Caluwaerts, Ken; Dambre, Joni

    2015-01-01

    In embodied computation (or morphological computation), part of the complexity of motor control is offloaded to the body dynamics. We demonstrate that a simple Hebbian-like learning rule can be used to train systems with (partial) embodiment, and can be extended outside of the scope of traditional neural networks. To this end, we apply the learning rule to optimize the connection weights of recurrent neural networks with different topologies and for various tasks. We then apply this learning rule to a simulated compliant tensegrity robot by optimizing static feedback controllers that directly exploit the dynamics of the robot body. This leads to partially embodied controllers, i.e., hybrid controllers that naturally integrate the computations that are performed by the robot body into a neural network architecture. Our results demonstrate the universal applicability of reward-modulated Hebbian learning. Furthermore, they demonstrate the robustness of systems trained with the learning rule. This study strengthens our belief that compliant robots should or can be seen as computational units, instead of dumb hardware that needs a complex controller. This link between compliant robotics and neural networks is also the main reason for our search for simple universal learning rules for both neural networks and robotics. PMID:26347645
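
    A minimal sketch of a reward-modulated Hebbian update of the kind referred to above: exploration noise perturbs the output, and the correlation between the input and the perturbation is reinforced in proportion to the reward gained. The rule form, learning rate, and toy task are assumptions, not the paper's exact formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def rm_hebbian_step(w, x, reward_fn, lr=0.05, noise_std=0.1):
        """One reward-modulated Hebbian update (sketch).

        The noise-free output w @ x is perturbed by exploration noise; if the
        perturbed behaviour earns more reward than the baseline, the
        input/perturbation correlation is reinforced.
        """
        y0 = w @ x
        noise = noise_std * rng.normal(size=y0.shape)
        advantage = reward_fn(y0 + noise) - reward_fn(y0)
        return w + lr * advantage * np.outer(noise, x)

    # Toy usage: drive a single linear output toward a target value of 0.8.
    w = np.zeros((1, 3))
    x = np.array([1.0, 0.5, -0.2])
    reward = lambda y: -np.sum((y - 0.8) ** 2)
    for _ in range(500):
        w = rm_hebbian_step(w, x, reward)
    ```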

  6. Models of Innate Neural Attractors and Their Applications for Neural Information Processing

    PubMed Central

    Solovyeva, Ksenia P.; Karandashev, Iakov M.; Zhavoronkov, Alex; Dunin-Barkowski, Witali L.

    2016-01-01

    In this work we reveal and explore a new class of attractor neural networks, based on inborn connections provided by model molecular markers, the molecular marker based attractor neural networks (MMBANN). Each set of markers has a metric, which is used to make connections between neurons containing the markers. We have explored conditions for the existence of attractor states, critical relations between their parameters and the spectrum of single neuron models, which can implement the MMBANN. In addition, we describe functional models (a perceptron and a SOM) that obtain significant advantages over the traditional implementations of these models when built on MMBANN. In particular, an MMBANN-based perceptron gains orders of magnitude in specificity in terms of error probability, an MMBANN-based SOM acquires real neurophysiological meaning, and the number of possible grandma cells increases 1000-fold. MMBANN have sets of attractor states, which can serve as finite grids for representation of variables in computations. These grids may show dimensions of d = 0, 1, 2,…. We work with static and dynamic attractor neural networks of the dimensions d = 0 and 1. We also argue that the number of dimensions which can be represented by attractors of activities of neural networks with the number of elements N = 10^4 does not exceed 8. PMID:26778977

  7. Neuronify: An Educational Simulator for Neural Circuits.

    PubMed

    Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Våvang Solbrå, Andreas; Tennøe, Simen; Hafreager, Anders; Malthe-Sørenssen, Anders; Fyhn, Marianne; Hafting, Torkel; Einevoll, Gaute T

    2017-01-01

    Educational software (apps) can improve science education by providing an interactive way of learning about complicated topics that are hard to explain with text and static illustrations. However, few educational apps are available for simulation of neural networks. Here, we describe an educational app, Neuronify, allowing the user to easily create and explore neural networks in a plug-and-play simulation environment. The user can pick network elements with adjustable parameters from a menu, i.e., synaptically connected neurons modelled as integrate-and-fire neurons and various stimulators (current sources, spike generators, visual, and touch) and recording devices (voltmeter, spike detector, and loudspeaker). We aim to provide a low entry point to simulation-based neuroscience by allowing students with no programming experience to create and simulate neural networks. To facilitate the use of Neuronify in teaching, a set of premade common network motifs is provided, performing functions such as input summation, gain control by inhibition, and detection of direction of stimulus movement. Neuronify is developed in C++ and QML using the cross-platform application framework Qt and runs on smart phones (Android, iOS) and tablet computers as well as personal computers (Windows, Mac, Linux).
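
    For readers unfamiliar with the neuron model mentioned above, a leaky integrate-and-fire neuron can be simulated in a few lines; the parameter values below are generic textbook choices, not Neuronify's defaults.

    ```python
    import numpy as np

    # Leaky integrate-and-fire neuron driven by a constant input current.
    dt, T = 1e-4, 0.5                      # time step and duration (s)
    tau_m, v_rest, v_thresh, v_reset = 0.02, -70e-3, -50e-3, -70e-3
    r_m, i_ext = 1e8, 2.5e-10              # membrane resistance (ohm), input (A)

    t = np.arange(0.0, T, dt)
    v = np.full_like(t, v_rest)
    spikes = []
    for k in range(1, len(t)):
        dv = (-(v[k - 1] - v_rest) + r_m * i_ext) / tau_m
        v[k] = v[k - 1] + dt * dv
        if v[k] >= v_thresh:               # emit a spike and reset
            spikes.append(t[k])
            v[k] = v_reset
    ```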

  8. Neuronify: An Educational Simulator for Neural Circuits

    PubMed Central

    Hafreager, Anders; Malthe-Sørenssen, Anders; Fyhn, Marianne

    2017-01-01

    Educational software (apps) can improve science education by providing an interactive way of learning about complicated topics that are hard to explain with text and static illustrations. However, few educational apps are available for simulation of neural networks. Here, we describe an educational app, Neuronify, allowing the user to easily create and explore neural networks in a plug-and-play simulation environment. The user can pick network elements with adjustable parameters from a menu, i.e., synaptically connected neurons modelled as integrate-and-fire neurons and various stimulators (current sources, spike generators, visual, and touch) and recording devices (voltmeter, spike detector, and loudspeaker). We aim to provide a low entry point to simulation-based neuroscience by allowing students with no programming experience to create and simulate neural networks. To facilitate the use of Neuronify in teaching, a set of premade common network motifs is provided, performing functions such as input summation, gain control by inhibition, and detection of direction of stimulus movement. Neuronify is developed in C++ and QML using the cross-platform application framework Qt and runs on smart phones (Android, iOS) and tablet computers as well as personal computers (Windows, Mac, Linux). PMID:28321440

  9. Theta phase precession and phase selectivity: a cognitive device description of neural coding

    NASA Astrophysics Data System (ADS)

    Zalay, Osbert C.; Bardakjian, Berj L.

    2009-06-01

    Information in neural systems is carried by way of phase and rate codes. Neuronal signals are processed through transformative biophysical mechanisms at the cellular and network levels. Neural coding transformations can be represented mathematically in a device called the cognitive rhythm generator (CRG). Incoming signals to the CRG are parsed through a bank of neuronal modes that orchestrate proportional, integrative and derivative transformations associated with neural coding. Mode outputs are then mixed through static nonlinearities to encode (spatio) temporal phase relationships. The static nonlinear outputs feed and modulate a ring device (limit cycle) encoding output dynamics. Small coupled CRG networks were created to investigate coding functionality associated with neuronal phase preference and theta precession in the hippocampus. Phase selectivity was found to be dependent on mode shape and polarity, while phase precession was a product of modal mixing (i.e. changes in the relative contribution or amplitude of mode outputs resulted in shifting phase preference). Nonlinear system identification was implemented to help validate the model and explain response characteristics associated with modal mixing; in particular, principal dynamic modes experimentally derived from a hippocampal neuron were inserted into a CRG and the neuron's dynamic response was successfully cloned. From our results, small CRG networks possessing disynaptic feedforward inhibition in combination with feedforward excitation exhibited frequency-dependent inhibitory-to-excitatory and excitatory-to-inhibitory transitions that were similar to transitions seen in a single CRG with quadratic modal mixing. This suggests nonlinear modal mixing to be a coding manifestation of the effect of network connectivity in shaping system dynamic behavior. We hypothesize that circuits containing disynaptic feedforward inhibition in the nervous system may be candidates for interpreting upstream rate codes to guide downstream processes such as phase precession, because of their demonstrated frequency-selective properties.

  10. Movement decoupling control for two-axis fast steering mirror

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Qiao, Yongming; Lv, Tao

    2017-02-01

    A two-axis fast steering mirror based on flexure hinges and piezoelectric actuators is a complex system with time-varying, uncertain, and strongly coupled dynamics. It is extremely difficult to achieve high-precision decoupling control with the traditional PID control method. Using the feedback error learning method, an inverse hysteresis model of the piezo-ceramic actuator was established, based on an inner-product dynamic neural network suited to the actuator's nonlinear and non-smooth behavior. To improve the actuator's precision, a control method was proposed that combines this piezo-ceramic inverse model with adaptive control based on two dynamic neural networks. The experimental results indicate that, with the proposed two-neural-network adaptive movement decoupling control algorithm, the static relative error is reduced from 4.44% to 0.30% and the coupling degree from 12.71% to 0.60%, while the dynamic relative error is reduced from 13.92% to 2.85% and the coupling degree from 2.63% to 1.17%.

  11. Dynamic gesture recognition using neural networks: a fundament for advanced interaction construction

    NASA Astrophysics Data System (ADS)

    Boehm, Klaus; Broll, Wolfgang; Sokolewicz, Michael A.

    1994-04-01

    Interaction in virtual reality environments is still a challenging task. Static hand posture recognition is currently the most common and widely used method for interaction using glove input devices. In order to improve the naturalness of interaction, and thereby decrease the user-interface learning time, there is a need to be able to recognize dynamic gestures. In this paper we describe our approach to overcoming the difficulties of dynamic gesture recognition (DGR) using neural networks. Backpropagation neural networks have already proven themselves to be appropriate and efficient for posture recognition. However, the extensive amount of data involved in DGR requires a different approach. Because of features such as topology preservation and automatic learning, Kohonen Feature Maps are particularly suitable for the reduction of the high dimensional data space that is the result of a dynamic gesture, and are thus implemented for this task.

  12. Estimating spatio-temporal dynamics of stream total phosphate concentration by soft computing techniques.

    PubMed

    Chang, Fi-John; Chen, Pin-An; Chang, Li-Chiu; Tsai, Yu-Hsuan

    2016-08-15

    This study attempts to model the spatio-temporal dynamics of total phosphate (TP) concentrations along a river for effective hydro-environmental management. We propose a systematic modeling scheme (SMS), which is an ingenious modeling process equipped with a dynamic neural network and three refined statistical methods, for reliably predicting the TP concentrations along a river simultaneously. Two different types of artificial neural network (a static BPNN and a dynamic NARX network) are constructed to model the dynamic system. The Dahan River in Taiwan is used as a study case, where ten-year seasonal water quality data collected at seven monitoring stations along the river are used for model training and validation. Results demonstrate that the NARX network can suitably capture the important dynamic features and remarkably outperforms the BPNN model, and the SMS can effectively identify key input factors, suitably overcome data scarcity, significantly increase model reliability, satisfactorily estimate site-specific TP concentration at seven monitoring stations simultaneously, and adequately reconstruct seasonal TP data into a monthly scale. The proposed SMS can reliably model the dynamic spatio-temporal water pollution variation in a river system for missing, hazardous or costly data of interest. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Mexican sign language recognition using normalized moments and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Solís-V., J.-Francisco; Toxqui-Quitl, Carina; Martínez-Martínez, David; H.-G., Margarita

    2014-09-01

    This work presents a framework designed for Mexican Sign Language (MSL) recognition. A data set of 24 static signs from the MSL was recorded in 5 different versions; this MSL dataset was captured with a digital camera under incoherent light conditions. Digital image processing was used to segment the hand gestures; a uniform background was selected to avoid the use of gloved hands or special markers. Feature extraction was performed by calculating normalized geometric moments of gray-scaled signs, and an Artificial Neural Network then performed the recognition using 10-fold cross validation in Weka; the best result achieved a recognition rate of 95.83%.
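
    A sketch of the feature-extraction step: scale-normalised central moments computed from a binary hand mask, which would then be fed to the classifier. The moment orders and the toy mask are assumptions; the paper's exact feature set may differ.

    ```python
    import numpy as np

    def normalized_central_moments(img, orders=((2, 0), (0, 2), (1, 1), (3, 0), (0, 3))):
        """Scale-invariant normalised central moments of a binary mask.

        Uses the standard definition eta_pq = mu_pq / mu_00 ** (1 + (p + q) / 2).
        """
        ys, xs = np.nonzero(img)
        m00 = len(xs)                       # zeroth moment = foreground area
        xbar, ybar = xs.mean(), ys.mean()
        feats = []
        for p, q in orders:
            mu = np.sum((xs - xbar) ** p * (ys - ybar) ** q)
            feats.append(mu / m00 ** (1 + (p + q) / 2))
        return np.array(feats)

    # Example: a crude binary "hand" mask standing in for a segmented gesture.
    mask = np.zeros((64, 64), dtype=np.uint8)
    mask[10:50, 20:40] = 1
    features = normalized_central_moments(mask)   # input vector for the ANN
    ```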

  14. Oscillations and chaos in neural networks: an exactly solvable model.

    PubMed Central

    Wang, L P; Pichler, E E; Ross, J

    1990-01-01

    We consider a randomly diluted higher-order network with noise, consisting of McCulloch-Pitts neurons that interact by Hebbian-type connections. For this model, exact dynamical equations are derived and solved for both parallel and random sequential updating algorithms. For parallel dynamics, we find a rich spectrum of different behaviors including static retrieving and oscillatory and chaotic phenomena in different parts of the parameter space. The bifurcation parameters include first- and second-order neuronal interaction coefficients and a rescaled noise level, which represents the combined effects of the random synaptic dilution, interference between stored patterns, and additional background noise. We show that a marked difference in terms of the occurrence of oscillations or chaos exists between neural networks with parallel and random sequential dynamics. Images PMID:2251287

  15. Neural Network Based Modeling and Analysis of LP Control Surface Allocation

    NASA Technical Reports Server (NTRS)

    Langari, Reza; Krishnakumar, Kalmanje; Gundy-Burlet, Karen

    2003-01-01

    This paper presents an approach to interpretive modeling of LP based control allocation in intelligent flight control. The emphasis is placed on a nonlinear interpretation of the LP allocation process as a static map to support analytical study of the resulting closed loop system, albeit in approximate form. The approach makes use of a bi-layer neural network to capture the essential functioning of the LP allocation process. It is further shown via Lyapunov based analysis that under certain relatively mild conditions the resulting closed loop system is stable. Some preliminary conclusions from a study at Ames are stated and directions for further research are given at the conclusion of the paper.

  16. Optimizing a neural network for detection of moving vehicles in video

    NASA Astrophysics Data System (ADS)

    Fischer, Noëlle M.; Kruithof, Maarten C.; Bouma, Henri

    2017-10-01

    In the field of security and defense, it is extremely important to reliably detect moving objects, such as cars, ships, drones and missiles. Detection and analysis of moving objects in cameras near borders could be helpful to reduce illicit trading, drug trafficking, irregular border crossing, trafficking in human beings and smuggling. Many recent benchmarks have shown that convolutional neural networks are performing well in the detection of objects in images. Most deep-learning research effort focuses on classification or detection on single images. However, the detection of dynamic changes (e.g., moving objects, actions and events) in streaming video is extremely relevant for surveillance and forensic applications. In this paper, we combine an end-to-end feedforward neural network for static detection with a recurrent Long Short-Term Memory (LSTM) network for multi-frame analysis. We present a practical guide with special attention to the selection of the optimizer and batch size. The end-to-end network is able to localize and recognize the vehicles in video from traffic cameras. We show an efficient way to collect relevant in-domain data for training with minimal manual labor. Our results show that the combination with LSTM improves performance for the detection of moving vehicles.

  17. Noradrenergic modulation of neural erotic stimulus perception.

    PubMed

    Graf, Heiko; Wiegers, Maike; Metzger, Coraline Danielle; Walter, Martin; Grön, Georg; Abler, Birgit

    2017-09-01

    We recently investigated neuromodulatory effects of the noradrenergic agent reboxetine and the dopamine receptor affine amisulpride in healthy subjects on dynamic erotic stimulus processing. Whereas amisulpride left sexual functions and neural activations unimpaired, we observed detrimental activations under reboxetine within the caudate nucleus corresponding to motivational components of sexual behavior. However, broadly impaired subjective sexual functioning under reboxetine suggested effects on further neural components. We now investigated the same sample under these two agents with static erotic picture stimulation as an alternative stimulus presentation mode to potentially observe further neural treatment effects of reboxetine. 19 healthy males were investigated under reboxetine, amisulpride and placebo for 7 days each within a double-blind cross-over design. During fMRI, static erotic pictures were presented with preceding anticipation periods. Subjective sexual functions were assessed by a self-reported questionnaire. Neural activations were attenuated within the caudate nucleus, putamen, ventral striatum, the pregenual and anterior midcingulate cortex and in the orbitofrontal cortex under reboxetine. Subjectively diminished sexual arousal under reboxetine was correlated with attenuated neural reactivity within the posterior insula. Again, amisulpride left neural activations along with subjective sexual functioning unimpaired. Neither reboxetine nor amisulpride altered differential neural activations during anticipation of erotic stimuli. Our results verified detrimental effects of noradrenergic agents on neural motivational but also emotional and autonomic components of sexual behavior. Considering the overlap of neural network alterations with those evoked by serotonergic agents, our results suggest similar neuromodulatory effects of serotonergic and noradrenergic agents on common neural pathways relevant for sexual behavior. Copyright © 2017 Elsevier B.V. and ECNP. All rights reserved.

  18. Tracing co-regulatory network dynamics in noisy, single-cell transcriptome trajectories.

    PubMed

    Cordero, Pablo; Stuart, Joshua M

    2017-01-01

    The availability of gene expression data at the single cell level makes it possible to probe the molecular underpinnings of complex biological processes such as differentiation and oncogenesis. Promising new methods have emerged for reconstructing a progression 'trajectory' from static single-cell transcriptome measurements. However, it remains unclear how to adequately model the appreciable level of noise in these data to elucidate gene regulatory network rewiring. Here, we present a framework called Single Cell Inference of MorphIng Trajectories and their Associated Regulation (SCIMITAR) that infers progressions from static single-cell transcriptomes by employing a continuous parametrization of Gaussian mixtures in high-dimensional curves. SCIMITAR yields rich models from the data that highlight genes with expression and co-expression patterns that are associated with the inferred progression. Further, SCIMITAR extracts regulatory states from the implicated trajectory-evolving co-expression networks. We benchmark the method on simulated data to show that it yields accurate cell ordering and gene network inferences. Applied to the interpretation of a single-cell human fetal neuron dataset, SCIMITAR finds progression-associated genes in cornerstone neural differentiation pathways missed by standard differential expression tests. Finally, by leveraging the rewiring of gene-gene co-expression relations across the progression, the method reveals the rise and fall of co-regulatory states and trajectory-dependent gene modules. These analyses implicate new transcription factors in neural differentiation including putative co-factors for the multi-functional NFAT pathway.

  19. Efficient shortest-path-tree computation in network routing based on pulse-coupled neural networks.

    PubMed

    Qu, Hong; Yi, Zhang; Yang, Simon X

    2013-06-01

    Shortest path tree (SPT) computation is a critical issue for routers using link-state routing protocols, such as the most commonly used open shortest path first and intermediate system to intermediate system. Each router needs to recompute a new SPT rooted from itself whenever a change happens in the link state. Most commercial routers do this computation by deleting the current SPT and building a new one using static algorithms such as the Dijkstra algorithm at the beginning. Such recomputation of an entire SPT is inefficient, which may consume a considerable amount of CPU time and result in a time delay in the network. Some dynamic updating methods using the information in the updated SPT have been proposed in recent years. However, there are still many limitations in those dynamic algorithms. In this paper, a new modified model of pulse-coupled neural networks (M-PCNNs) is proposed for the SPT computation. It is rigorously proved that the proposed model is capable of solving some optimization problems, such as the SPT. A static algorithm is proposed based on the M-PCNNs to compute the SPT efficiently for large-scale problems. In addition, a dynamic algorithm that makes use of the structure of the previously computed SPT is proposed, which significantly improves the efficiency of the algorithm. Simulation results demonstrate the effective and efficient performance of the proposed approach.
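
    For reference, the classical static recomputation that the paper takes as its baseline is a Dijkstra-style shortest-path-tree build from the root router, sketched below; the M-PCNN and dynamic-update algorithms proposed in the paper are not reproduced here, and the toy topology is an assumption.

    ```python
    import heapq

    def shortest_path_tree(graph, root):
        """Dijkstra-style SPT from `root`; returns parent links and distances.

        `graph` maps node -> list of (neighbour, positive link cost).
        """
        dist = {root: 0.0}
        parent = {root: None}
        heap = [(0.0, root)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                      # stale heap entry
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], parent[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        return parent, dist

    # Toy topology: router A recomputes its SPT over link-state costs.
    net = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)]}
    tree, cost = shortest_path_tree(net, "A")
    ```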

  20. Decoupling control of vehicle chassis system based on neural network inverse system

    NASA Astrophysics Data System (ADS)

    Wang, Chunyan; Zhao, Wanzhong; Luan, Zhongkai; Gao, Qi; Deng, Ke

    2018-06-01

    Steering and suspension are two important subsystems affecting the handling stability and riding comfort of the chassis system. In order to avoid the interference and coupling of the control channels between active front steering (AFS) and active suspension subsystems (ASS), this paper presents a composite decoupling control method, which consists of a neural network inverse system and a robust controller. The neural network inverse system is composed of a static neural network with several integrators and state feedback of the original chassis system to approach the inverse system of the nonlinear systems. The existence of the inverse system for the chassis system is proved by the reversibility derivation of Interactor algorithm. The robust controller is based on the internal model control (IMC), which is designed to improve the robustness and anti-interference of the decoupled system by adding a pre-compensation controller to the pseudo linear system. The results of the simulation and vehicle test show that the proposed decoupling controller has excellent decoupling performance, which can transform the multivariable system into a number of single input and single output systems, and eliminate the mutual influence and interference. Furthermore, it has satisfactory tracking capability and robust performance, which can improve the comprehensive performance of the chassis system.

  1. Synaptic dynamics regulation in response to high frequency stimulation in neuronal networks

    NASA Astrophysics Data System (ADS)

    Su, Fei; Wang, Jiang; Li, Huiyan; Wei, Xile; Yu, Haitao; Deng, Bin

    2018-02-01

    High frequency stimulation (HFS) has confirmed its ability to modulate pathological neural activities. However, its detailed mechanism is unclear. This study aims to explore the effects of HFS on neuronal network dynamics. First, the two-neuron FitzHugh-Nagumo (FHN) networks with static coupling strength and the small-world FHN networks with spike-time-dependent plasticity (STDP) modulated synaptic coupling strength are constructed. Then, the multi-scale method is used to transform the network models into equivalent averaged models, where the HFS intensity is modeled as the ratio between stimulation amplitude and frequency. Results show that in static two-neuron networks, there is still synaptic current projected to the postsynaptic neuron even if the presynaptic neuron is blocked by the HFS. In the small-world networks, the effects of the STDP adjusting rate parameter on the inactivation ratio and synchrony degree increase with the increase of HFS intensity. However, only when the HFS intensity becomes very large can the STDP time window parameter affect the inactivation ratio and synchrony index. Both simulation and numerical analysis demonstrate that the effects of HFS on neuronal network dynamics are realized through the adjustment of synaptic variable and conductance.
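
    A minimal sketch of the kind of two-neuron FitzHugh-Nagumo network with static coupling strength mentioned above, integrated with a simple Euler scheme. Parameter values are generic textbook choices, and the HFS input, STDP, and multi-scale averaging are omitted.

    ```python
    import numpy as np

    def fhn_pair(T=200.0, dt=0.01, I_ext=0.5, g=0.1, a=0.7, b=0.8, eps=0.08):
        """Two FitzHugh-Nagumo neurons with a static (constant) coupling strength g."""
        n = int(T / dt)
        v = np.zeros((n, 2)); w = np.zeros((n, 2))
        v[0] = [0.1, -0.1]
        for k in range(1, n):
            coupling = g * (v[k - 1, ::-1] - v[k - 1])       # mutual diffusive coupling
            dv = v[k - 1] - v[k - 1] ** 3 / 3 - w[k - 1] + I_ext + coupling
            dw = eps * (v[k - 1] + a - b * w[k - 1])
            v[k] = v[k - 1] + dt * dv
            w[k] = w[k - 1] + dt * dw
        return v, w

    v, w = fhn_pair()
    ```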

  2. Detection of Foreign Matter in Transfusion Solution Based on Gaussian Background Modeling and an Optimized BP Neural Network

    PubMed Central

    Zhou, Fuqiang; Su, Zhen; Chai, Xinghua; Chen, Lipeng

    2014-01-01

    This paper proposes a new method to detect and identify foreign matter mixed in a plastic bottle filled with transfusion solution. A spin-stop mechanism and mixed illumination style are applied to obtain high contrast images between moving foreign matter and a static transfusion background. The Gaussian mixture model is used to model the complex background of the transfusion image and to extract moving objects. A set of features of moving objects are extracted and selected by the ReliefF algorithm, and optimal feature vectors are fed into the back propagation (BP) neural network to distinguish between foreign matter and bubbles. The mind evolutionary algorithm (MEA) is applied to optimize the connection weights and thresholds of the BP neural network to obtain a higher classification accuracy and faster convergence rate. Experimental results show that the proposed method can effectively detect visible foreign matter in 250-mL transfusion bottles. The misdetection rate and false alarm rate are low, and the detection accuracy and detection speed are satisfactory. PMID:25347581
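
    A compact sketch of the background-modelling step using OpenCV's Gaussian-mixture background subtractor, which is a standard implementation of the idea rather than the paper's own code; the video path, parameters, and morphological clean-up are assumptions, and the downstream ReliefF feature selection and MEA-optimised BP classification are not shown.

    ```python
    import cv2

    cap = cv2.VideoCapture("transfusion.avi")        # placeholder path
    mog = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                             detectShadows=False)
    moving_objects = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = mog.apply(frame)                       # moving pixels vs. static bottle
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Each region's shape/motion features would be fed to the BP classifier
        # (foreign matter vs. bubble); here we only collect the bounding boxes.
        moving_objects.append([cv2.boundingRect(c) for c in contours])
    cap.release()
    ```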

  3. Forecasting of palm oil price in Malaysia using linear and nonlinear methods

    NASA Astrophysics Data System (ADS)

    Nor, Abu Hassan Shaari Md; Sarmidi, Tamat; Hosseinidoust, Ehsan

    2014-09-01

    The first question that comes to mind is: "How can we predict the palm oil price accurately?" Authorities, policy makers, and economists have been asking this question for a long time. First, in recent years Malaysia has shown a comparative advantage in palm oil production and has become the world's top producer and exporter. Second, the palm oil price plays a significant role in the government budget and represents an important source of income for Malaysia, which can potentially influence the magnitude of monetary policies and ultimately affect inflation. Third, knowledge of future trends helps in planning and decision making and supports more precise fiscal and monetary policy. Daily data on palm oil prices, together with ARIMA models, neural networks, and fuzzy logic systems, are employed in this paper. Empirical findings indicate that the dynamic NARX neural network and the hybrid ANFIS system provide higher accuracy than the ARIMA model and the static neural network for forecasting the palm oil price in Malaysia.

  4. Deep Learning for Real-Time Capable Object Detection and Localization on Mobile Platforms

    NASA Astrophysics Data System (ADS)

    Particke, F.; Kolbenschlag, R.; Hiller, M.; Patiño-Studencki, L.; Thielecke, J.

    2017-10-01

    Industry 4.0 is one of the most formative terms in current times. Research focuses in particular on smart, autonomous mobile platforms, which greatly reduce workloads and optimize production processes. In order to interact with humans, the platforms need an in-depth knowledge of the environment. Hence, it is required to detect a variety of static and non-static objects. The goal of this paper is to propose an accurate and real-time capable object detection and localization approach for use on mobile platforms. A method is introduced to use the powerful detection capabilities of a neural network for the localization of objects. Therefore, detection information of a neural network is combined with depth information from an RGB-D camera, which is mounted on a mobile platform. As the detection network, YOLO Version 2 (YOLOv2) is used on a mobile robot. In order to find the detected object in the depth image, the bounding boxes predicted by YOLOv2 are mapped to the corresponding regions in the depth image. This provides a powerful and extremely fast approach for establishing a real-time-capable Object Locator. In the evaluation part, the localization approach turns out to be very accurate. Nevertheless, it is dependent on the detected object itself and some additional parameters, which are analysed in this paper.
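
    The localization idea can be sketched as follows: the detector's bounding box is mapped onto the aligned depth image, a robust depth is taken from that region, and the box centre is back-projected through the camera intrinsics. The intrinsics, the median-depth choice, and the synthetic depth image are assumptions, not the paper's calibration or exact procedure.

    ```python
    import numpy as np

    def localize_detection(bbox, depth_img, fx, fy, cx, cy):
        """Estimate a detected object's 3-D position from an aligned depth image.

        `bbox` is (x, y, w, h) in pixels as predicted by the detector; the median
        depth inside the box is back-projected through pinhole intrinsics.
        """
        x, y, w, h = bbox
        patch = depth_img[y:y + h, x:x + w]
        valid = patch[patch > 0]                     # ignore missing depth pixels
        if valid.size == 0:
            return None
        z = float(np.median(valid))                  # metres
        u, v = x + w / 2.0, y + h / 2.0              # box centre in pixels
        return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

    depth = np.full((480, 640), 2.0)                 # synthetic 2 m plane
    pos = localize_detection((300, 200, 80, 120), depth,
                             fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    ```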

  5. Understanding emotion with brain networks.

    PubMed

    Pessoa, Luiz

    2018-02-01

    Emotional processing appears to be interlocked with perception, cognition, motivation, and action. These interactions are supported by the brain's large-scale non-modular anatomical and functional architectures. An important component of this organization involves characterizing the brain in terms of networks. Two aspects of brain networks are discussed: brain networks should be considered as inherently overlapping (not disjoint) and dynamic (not static). Recent work on multivariate pattern analysis shows that affective dimensions can be detected in the activity of distributed neural systems that span cortical and subcortical regions. More broadly, the paper considers how we should think of causation in complex systems like the brain, so as to inform the relationship between emotion and other mental aspects, such as cognition.

  6. Displacement prediction of Baijiabao landslide based on empirical mode decomposition and long short-term memory neural network in Three Gorges area, China

    NASA Astrophysics Data System (ADS)

    Xu, Shiluo; Niu, Ruiqing

    2018-02-01

    Every year, landslides pose huge threats to thousands of people in China, especially those in the Three Gorges area. It is thus necessary to establish an early warning system to help prevent property damage and save people's lives. Most of the landslide displacement prediction models that have been proposed are static models. However, landslides are dynamic systems. In this paper, the total accumulative displacement of the Baijiabao landslide is divided into trend and periodic components using empirical mode decomposition. The trend component is predicted using an S-curve estimation, and the total periodic component is predicted using a long short-term memory neural network (LSTM). LSTM is a dynamic model that can remember historical information and apply it to the current output. Six triggering factors are chosen to predict the periodic term using the Pearson cross-correlation coefficient and mutual information. These factors include the cumulative precipitation during the previous month, the cumulative precipitation during a two-month period, the reservoir level during the current month, the change in the reservoir level during the previous month, the cumulative increment of the reservoir level during the current month, and the cumulative displacement during the previous month. When using one-step-ahead prediction, LSTM yields a root mean squared error (RMSE) value of 6.112 mm, while the support vector machine for regression (SVR) and the back-propagation neural network (BP) yield values of 10.686 mm and 8.237 mm, respectively. Meanwhile, the Elman network (Elman) yields an RMSE value of 6.579 mm. In addition, when using multi-step-ahead prediction, LSTM obtains an RMSE value of 8.648 mm, while SVR, BP and the Elman network obtain RMSE values of 13.418 mm, 13.014 mm, and 13.370 mm, respectively. The predicted results indicate that, to some extent, the dynamic model (LSTM) achieves results that are more accurate than those of the static models (i.e., SVR and BP). LSTM even displays better performance than the Elman network, which is also a dynamic method.
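
    The sketch below shows one-step-ahead prediction of a periodic displacement component with a small LSTM, assuming the trend/periodic decomposition (EMD and S-curve fit) has already been carried out; the series, window length, and network size are placeholders, not the paper's data or configuration.

    ```python
    import numpy as np
    import tensorflow as tf

    # Placeholder periodic displacement component (mm), e.g. obtained from EMD.
    periodic = np.sin(np.linspace(0, 12 * np.pi, 120)).astype("float32")

    def make_windows(series, lookback=6):
        """Build (samples, lookback, 1) inputs and next-step targets."""
        X, y = [], []
        for i in range(len(series) - lookback):
            X.append(series[i:i + lookback])
            y.append(series[i + lookback])
        return np.array(X)[..., None], np.array(y)

    X, y = make_windows(periodic)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(X.shape[1], 1)),
        tf.keras.layers.LSTM(16),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=50, verbose=0)

    # One-step-ahead prediction from the last observed window.
    next_step = model.predict(X[-1:], verbose=0)
    ```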

  7. Memristive neural network for on-line learning and tracking with brain-inspired spike timing dependent plasticity.

    PubMed

    Pedretti, G; Milo, V; Ambrogio, S; Carboni, R; Bianchi, S; Calderoni, A; Ramaswamy, N; Spinelli, A S; Ielmini, D

    2017-07-13

    Brain-inspired computation can revolutionize information technology by introducing machines capable of recognizing patterns (images, speech, video) and interacting with the external world in a cognitive, humanlike way. Achieving this goal requires first to gain a detailed understanding of the brain operation, and second to identify a scalable microelectronic technology capable of reproducing some of the inherent functions of the human brain, such as the high synaptic connectivity (~10^4) and the peculiar time-dependent synaptic plasticity. Here we demonstrate unsupervised learning and tracking in a spiking neural network with memristive synapses, where synaptic weights are updated via brain-inspired spike timing dependent plasticity (STDP). The synaptic conductance is updated by the local time-dependent superposition of pre- and post-synaptic spikes within a hybrid one-transistor/one-resistor (1T1R) memristive synapse. Only 2 synaptic states, namely the low resistance state (LRS) and the high resistance state (HRS), are sufficient to learn and recognize patterns. Unsupervised learning of a static pattern and tracking of a dynamic pattern of up to 4 × 4 pixels are demonstrated, paving the way for intelligent hardware technology with up-scaled memristive neural networks.
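
    As a software analogue of the synaptic update described above, a pair-based STDP rule can be written in a few lines; the time constants and amplitudes are generic assumptions, and the actual 1T1R device switches between only two conductance states rather than a continuous weight.

    ```python
    import numpy as np

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                    tau_plus=20e-3, tau_minus=20e-3, w_min=0.0, w_max=1.0):
        """Pair-based STDP: potentiate if the pre spike precedes the post spike,
        depress otherwise. Times are in seconds, weight is clipped to [w_min, w_max]."""
        dt = t_post - t_pre
        if dt >= 0:
            w += a_plus * np.exp(-dt / tau_plus)
        else:
            w -= a_minus * np.exp(dt / tau_minus)
        return float(np.clip(w, w_min, w_max))

    w = 0.5
    w = stdp_update(w, t_pre=0.010, t_post=0.015)   # pre before post -> potentiation
    w = stdp_update(w, t_pre=0.030, t_post=0.022)   # post before pre -> depression
    ```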

  8. Prolongation of SMAP to Spatiotemporally Seamless Coverage of Continental U.S. Using a Deep Learning Neural Network

    NASA Astrophysics Data System (ADS)

    Fang, Kuai; Shen, Chaopeng; Kifer, Daniel; Yang, Xiao

    2017-11-01

    The Soil Moisture Active Passive (SMAP) mission has delivered valuable sensing of surface soil moisture since 2015. However, it has a short time span and irregular revisit schedules. Utilizing a state-of-the-art time series deep learning neural network, Long Short-Term Memory (LSTM), we created a system that predicts the SMAP level-3 moisture product with atmospheric forcings, model-simulated moisture, and static physiographic attributes as inputs. The system removes most of the bias with model simulations and improves predicted moisture climatology, achieving small test root-mean-square errors (<0.035) and high correlation coefficients (>0.87) for over 75% of the Continental United States, including the forested southeast. As the first application of LSTM in hydrology, we show the proposed network avoids overfitting and is robust for both temporal and spatial extrapolation tests. LSTM generalizes well across regions with distinct climates and environmental settings. With high fidelity to SMAP, LSTM shows great potential for hindcasting, data assimilation, and weather forecasting.

  9. Coupling Matched Molecular Pairs with Machine Learning for Virtual Compound Optimization.

    PubMed

    Turk, Samo; Merget, Benjamin; Rippmann, Friedrich; Fulle, Simone

    2017-12-26

    Matched molecular pair (MMP) analyses are widely used in compound optimization projects to gain insights into structure-activity relationships (SAR). The analysis is traditionally done via statistical methods but can also be employed together with machine learning (ML) approaches to extrapolate to novel compounds. The MMP/ML method introduced here combines a fragment-based MMP implementation with different machine learning methods to obtain automated SAR decomposition and prediction. To test the prediction capabilities and model transferability, two different compound optimization scenarios were designed: (1) "new fragments", which occurs when exploring new fragments for a defined compound series, and (2) "new static core and transformations", which resembles, for instance, the identification of a new compound series. Very good results were achieved by all employed machine learning methods, especially for the new fragments case, but overall deep neural network models performed best, allowing reliable predictions also for the new static core and transformations scenario, where comprehensive SAR knowledge of the compound series is missing. Furthermore, we show that models trained on all available data have a higher generalizability compared to models trained on focused series and can extend beyond the chemical space covered in the training data. Thus, coupling MMP with deep neural networks provides a promising approach to make high quality predictions on various data sets and in different compound optimization scenarios.

  10. A Fast Neural Network Approach to Predict Lung Tumor Motion during Respiration for Radiation Therapy Applications

    PubMed Central

    Slama, Matous; Benes, Peter M.; Bila, Jiri

    2015-01-01

    During radiotherapy treatment for thoracic and abdomen cancers, for example, lung cancers, respiratory motion moves the target tumor and thus badly affects the accuracy of radiation dose delivery into the target. A real-time image-guided technique can be used to monitor such lung tumor motion for accurate dose delivery, but the system latency of up to several hundred milliseconds for repositioning the radiation beam also affects the accuracy. In order to compensate for the latency, a neural network prediction technique with real-time retraining can be used. We have investigated real-time prediction of 3D time series of lung tumor motion on a classical linear model, a perceptron model, and on a class of higher-order neural network model that has more attractive attributes regarding its optimization convergence and computational efficiency. The implemented static feed-forward neural architectures are compared when using gradient descent adaptation and primarily the Levenberg-Marquardt batch algorithm, as two of the most common and most comprehensible learning algorithms. The proposed technique resulted in fast real-time retraining, so the total computational time on a PC platform was equal to or even less than the real treatment time. For a one-second prediction horizon, the proposed techniques achieved accuracy of less than one millimeter of 3D mean absolute error in one hundred seconds of total treatment time. PMID:25893194

  11. A fast neural network approach to predict lung tumor motion during respiration for radiation therapy applications.

    PubMed

    Bukovsky, Ivo; Homma, Noriyasu; Ichiji, Kei; Cejnek, Matous; Slama, Matous; Benes, Peter M; Bila, Jiri

    2015-01-01

    During radiotherapy treatment for thoracic and abdomen cancers, for example, lung cancers, respiratory motion moves the target tumor and thus badly affects the accuracy of radiation dose delivery into the target. A real-time image-guided technique can be used to monitor such lung tumor motion for accurate dose delivery, but the system latency of up to several hundred milliseconds for repositioning the radiation beam also affects the accuracy. In order to compensate for the latency, a neural network prediction technique with real-time retraining can be used. We have investigated real-time prediction of 3D time series of lung tumor motion on a classical linear model, a perceptron model, and on a class of higher-order neural network model that has more attractive attributes regarding its optimization convergence and computational efficiency. The implemented static feed-forward neural architectures are compared when using gradient descent adaptation and primarily the Levenberg-Marquardt batch algorithm, as two of the most common and most comprehensible learning algorithms. The proposed technique resulted in fast real-time retraining, so the total computational time on a PC platform was equal to or even less than the real treatment time. For a one-second prediction horizon, the proposed techniques achieved accuracy of less than one millimeter of 3D mean absolute error in one hundred seconds of total treatment time.

  12. Invariant recognition drives neural representations of action sequences

    PubMed Central

    Poggio, Tomaso

    2017-01-01

    Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864

  13. Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders.

    PubMed

    Sato, Wataru; Toichi, Motomi; Uono, Shota; Kochiyama, Takanori

    2012-08-13

    Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD. We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI). Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex-MTG-IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group. These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD.

  14. A continuous-time neural model for sequential action.

    PubMed

    Kachergis, George; Wyatte, Dean; O'Reilly, Randall C; de Kleijn, Roy; Hommel, Bernhard

    2014-11-05

    Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to represent hierarchy via temporal context, but thus far lack goal-orientedness. We present a biologically motivated model of the latter class that, because it is situated in the Leabra neural architecture, affords an opportunity to include both unsupervised and goal-directed learning mechanisms. Moreover, we embed this neurocomputational model in the theoretical framework of the theory of event coding (TEC), which posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but actions are also used to generate perceptions (i.e. intended effects). We propose a neural model that implements TEC to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional feedforward discrete-time neural network models, which use static percepts to generate static outputs, our biological model accepts continuous-time inputs and likewise generates non-stationary outputs, making short-timescale dynamic predictions. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  15. The Perception of Dynamic and Static Facial Expressions of Happiness and Disgust Investigated by ERPs and fMRI Constrained Source Analysis

    PubMed Central

    Trautmann-Lengsfeld, Sina Alexa; Domínguez-Borràs, Judith; Escera, Carles; Herrmann, Manfred; Fehr, Thorsten

    2013-01-01

    A recent functional magnetic resonance imaging (fMRI) study by our group demonstrated that dynamic emotional faces are more accurately recognized and evoked more widespread patterns of hemodynamic brain responses than static emotional faces. Based on this experimental design, the present study aimed at investigating the spatio-temporal processing of static and dynamic emotional facial expressions in 19 healthy women by means of multi-channel electroencephalography (EEG), event-related potentials (ERP) and fMRI-constrained regional source analyses. ERP analysis showed an increased amplitude of the LPP (late posterior positivity) over centro-parietal regions for static facial expressions of disgust compared to neutral faces. In addition, the LPP was more widespread and temporally prolonged for dynamic compared to static faces of disgust and happiness. fMRI constrained source analysis on static emotional face stimuli indicated the spatio-temporal modulation of predominantly posterior regional brain activation related to the visual processing stream for both emotional valences when compared to the neutral condition in the fusiform gyrus. The spatio-temporal processing of dynamic stimuli yielded enhanced source activity for emotional compared to neutral conditions in temporal (e.g., fusiform gyrus), and frontal regions (e.g., ventromedial prefrontal cortex, medial and inferior frontal cortex) in early and again in later time windows. The present data support the view that dynamic facial displays trigger more information reflected in complex neural networks, in particular because of their changing features potentially triggering sustained activation related to a continuing evaluation of those faces. A combined fMRI and EEG approach thus provides an advanced insight to the spatio-temporal characteristics of emotional face processing, by also revealing additional neural generators, not identifiable by the only use of an fMRI approach. PMID:23818974

  16. UAV Trajectory Modeling Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Xue, Min

    2017-01-01

    Large numbers of small Unmanned Aerial Vehicles (sUAVs) are projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, which is at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrains, restricted areas, various winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and massive sUAV operations in this low altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing infrastructure and developing policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insights into future UTM traffic operations through simulations, where an accurate trajectory model plays an extremely important role. On the other hand, as in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important considering the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown, as they are usually kept by manufacturers as part of their intellectual property. That brings challenges to trajectory modeling for sUAVs. How can a vehicle's trajectory be modeled when its control system is unknown? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses at numerous conditions. Once fully trained, given the current vehicle states, winds, and desired future trajectory, the neural network should be able to predict the vehicle's future states at the next time step. A complete 4-D trajectory is then generated step by step using the trained neural network. Experiments in this work show that the neural network can approximate the sUAV's model and predict the trajectory accurately.
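
    The step-by-step trajectory generation described above can be sketched as a simple rollout loop: a trained one-step model maps the current state, wind, and desired next waypoint to the state at the next time step. The toy model, state layout, and numbers below are assumptions standing in for the trained network.

    ```python
    import numpy as np

    def rollout(model, state0, winds, waypoints, dt=1.0):
        """Generate a trajectory step by step with a trained one-step model.

        `model(state, wind, target)` is assumed to return the state at t + dt;
        here it stands in for a network trained on recorded vehicle responses.
        """
        states = [np.asarray(state0, dtype=float)]
        for wind, target in zip(winds, waypoints):
            nxt = model(states[-1], wind, target)
            states.append(np.asarray(nxt, dtype=float))
        return np.stack(states)

    # Toy stand-in for the trained network: drift toward the target plus wind.
    toy_model = lambda s, w, tgt: s + 0.3 * (np.asarray(tgt) - s) + 0.1 * np.asarray(w)

    traj = rollout(toy_model,
                   state0=[0.0, 0.0, 30.0],             # x, y, altitude (m)
                   winds=[[1.0, 0.0, 0.0]] * 10,        # constant wind vector
                   waypoints=[[50.0, 20.0, 30.0]] * 10) # desired positions
    ```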

  17. Classification of time-of-flight secondary ion mass spectrometry spectra from complex Cu-Fe sulphides by principal component analysis and artificial neural networks.

    PubMed

    Kalegowda, Yogesh; Harmer, Sarah L

    2013-01-08

    An artificial neural network (ANN) classifier and a hybrid principal component analysis-artificial neural network (PCA-ANN) classifier have been successfully implemented for the classification of static time-of-flight secondary ion mass spectrometry (ToF-SIMS) mass spectra collected from complex Cu-Fe sulphides (chalcopyrite, bornite, chalcocite and pyrite) under different flotation conditions. ANNs are very good pattern classifiers because of their ability to learn and generalise patterns that are not linearly separable, their fault and noise tolerance, and their high parallelism. In the first approach, fragments from the whole ToF-SIMS spectrum were used as input to the ANN; the model yielded high overall correct classification rates of 100% for feed samples, 88% for conditioned feed samples and 91% for Eh modified samples. In the second approach, the hybrid pattern classifier PCA-ANN was applied. PCA is a very effective multivariate data analysis tool used to enhance species features and reduce data dimensionality. Principal component (PC) scores, which accounted for 95% of the raw spectral data variance, were used as input to the ANN; the model yielded high overall correct classification rates of 88% for conditioned feed samples and 95% for Eh modified samples. Copyright © 2012 Elsevier B.V. All rights reserved.
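
    A minimal sketch of the second (PCA-ANN) approach, assuming scikit-learn and synthetic stand-in spectra: PCA retains the components explaining 95% of the variance and the PC scores feed a small feed-forward classifier.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import train_test_split

      # Hypothetical stand-in for ToF-SIMS spectra: rows are spectra (fragment
      # intensities), labels are mineral/conditioning classes.
      rng = np.random.default_rng(1)
      X = rng.random((200, 500))
      y = rng.integers(0, 4, size=200)

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      # PCA keeps enough components to explain 95% of the spectral variance;
      # the PC scores are then fed to a small feed-forward ANN classifier.
      model = make_pipeline(
          PCA(n_components=0.95),
          MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000, random_state=0),
      )
      model.fit(X_train, y_train)
      print("correct classification rate:", model.score(X_test, y_test))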

  18. Diminished neural network dynamics after moderate and severe traumatic brain injury.

    PubMed

    Gilbert, Nicholas; Bernier, Rachel A; Calhoun, Vincent D; Brenner, Einat; Grossner, Emily; Rajtmajer, Sarah M; Hillary, Frank G

    2018-01-01

    Over the past decade there has been increasing enthusiasm in the cognitive neurosciences around using network science to understand the system-level changes associated with brain disorders. A growing literature has used whole-brain fMRI analysis to examine changes in the brain's subnetworks following traumatic brain injury (TBI). Much of the network modeling in this literature has focused on static network mapping, which provides a window into gross inter-nodal relationships, but is insensitive to more subtle fluctuations in network dynamics, which may be an important predictor of neural network plasticity. In this study, we examine dynamic connectivity with a focus on state-level connectivity (states) and evaluate the reliability of dynamic network states over the course of two runs of intermittent task and resting data. The goal was to examine the dynamic properties of neural networks engaged periodically with task stimulation in order to determine: 1) the reliability of inter-nodal and network-level characteristics over time and 2) the transitions between distinct network states after traumatic brain injury. To do so, we enrolled 23 individuals with moderate and severe TBI at least 1 year post injury and 19 age- and education-matched healthy adults, and examined them using functional MRI, dynamic connectivity modeling, and graph theory. The results reveal several distinct network "states" that were reliably evident when comparing runs; the overall frequency of dynamic network states is highly reproducible (r-values>0.8) for both samples. Analysis of movement between states revealed fewer state transitions in the TBI sample and, in a few cases, brain injury resulted in the appearance of states not exhibited by the healthy control (HC) sample. Overall, the findings presented here demonstrate the reliability of observable dynamic mental states during periods of on-task performance and support emerging evidence that brain injury may result in diminished network dynamics.
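
    The sketch below illustrates the general sliding-window "state" idea on synthetic node time-series; it is not the study's pipeline (which used group ICA, dynamic connectivity modeling and graph theory), and the window length, step and number of states are assumptions.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(2)
      ts = rng.normal(size=(300, 20))          # 300 volumes x 20 nodes (synthetic)

      win, step = 30, 5
      windows = []
      for start in range(0, ts.shape[0] - win + 1, step):
          c = np.corrcoef(ts[start:start + win].T)         # windowed connectivity matrix
          windows.append(c[np.triu_indices_from(c, k=1)])  # vectorise the upper triangle
      windows = np.array(windows)

      # Cluster windowed connectivity patterns into discrete network states.
      states = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(windows)

      # State frequency (the reliability metric compared across runs) and the
      # number of transitions between states over the run.
      freq = np.bincount(states, minlength=4) / len(states)
      transitions = int(np.sum(states[1:] != states[:-1]))
      print("state frequencies:", freq, "transitions:", transitions)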

  19. Diminished neural network dynamics after moderate and severe traumatic brain injury

    PubMed Central

    Gilbert, Nicholas; Bernier, Rachel A.; Calhoun, Vincent D.; Brenner, Einat; Grossner, Emily; Rajtmajer, Sarah M.

    2018-01-01

    Over the past decade there has been increasing enthusiasm in the cognitive neurosciences around using network science to understand the system-level changes associated with brain disorders. A growing literature has used whole-brain fMRI analysis to examine changes in the brain’s subnetworks following traumatic brain injury (TBI). Much of network modeling in this literature has focused on static network mapping, which provides a window into gross inter-nodal relationships, but is insensitive to more subtle fluctuations in network dynamics, which may be an important predictor of neural network plasticity. In this study, we examine the dynamic connectivity with focus on state-level connectivity (state) and evaluate the reliability of dynamic network states over the course of two runs of intermittent task and resting data. The goal was to examine the dynamic properties of neural networks engaged periodically with task stimulation in order to determine: 1) the reliability of inter-nodal and network-level characteristics over time and 2) the transitions between distinct network states after traumatic brain injury. To do so, we enrolled 23 individuals with moderate and severe TBI at least 1-year post injury and 19 age- and education-matched healthy adults using functional MRI methods, dynamic connectivity modeling, and graph theory. The results reveal several distinct network “states” that were reliably evident when comparing runs; the overall frequency of dynamic network states are highly reproducible (r-values>0.8) for both samples. Analysis of movement between states resulted in fewer state transitions in the TBI sample and, in a few cases, brain injury resulted in the appearance of states not exhibited by the healthy control (HC) sample. Overall, the findings presented here demonstrate the reliability of observable dynamic mental states during periods of on-task performance and support emerging evidence that brain injury may result in diminished network dynamics. PMID:29883447

  20. Structure design and characteristic analysis of micro-nano probe based on six dimensional micro-force measuring principle

    NASA Astrophysics Data System (ADS)

    Yang, Hong-tao; Cai, Chun-mei; Fang, Chuan-zhi; Wu, Tian-feng

    2013-10-01

    In order to develop a micro-nano probe with an error self-correcting function and good structural rigidity, a new micro-nano probe system was developed based on the six-dimensional micro-force measuring principle. The structure and working principle of the probe are introduced in detail. A static nonlinear decoupling method based on a BP neural network was established to remove the dimensional coupling present in the force measurements in each direction. The optimal parameters of the BP neural network were selected and decoupling simulation experiments were carried out. The maximum coupling rate of the probe after decoupling is 0.039% in the X direction, 0.025% in the Y direction and 0.027% in the Z direction. The static measurement sensitivity of the probe reaches 10.76 με/mN in the Z direction and 14.55 με/mN in the X and Y directions. Modal analysis and harmonic response analysis of the probe under a three-dimensional harmonic load were performed using the finite element method. The natural frequencies under different vibration modes were obtained and the working frequency of the probe was determined to be higher than 10000 Hz. Transient response analysis of the probe indicates that its response time reaches 0.4 ms. These results show that the developed probe meets the triggering requirements of a micro-nano probe. The three-dimensional measuring force can be measured precisely by the developed probe, which can be used to predict and correct the force deformation error and the touch error of the measuring ball and measuring rod.
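
    A hedged sketch of the static decoupling step: a back-propagation-trained MLP maps coupled readings to the applied forces in X, Y and Z. The calibration data and coupling matrix below are synthetic stand-ins, not the probe's actual calibration.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      # Synthetic calibration data: applied forces and the coupled readings
      # they would produce through a (made-up) cross-coupling matrix.
      rng = np.random.default_rng(3)
      true_forces = rng.uniform(-1.0, 1.0, size=(500, 3))          # applied F_x, F_y, F_z
      coupling = np.array([[1.00, 0.08, 0.05],
                           [0.06, 1.00, 0.07],
                           [0.04, 0.09, 1.00]])
      readings = true_forces @ coupling.T + 0.01 * rng.normal(size=(500, 3))

      X_train, X_test, y_train, y_test = train_test_split(readings, true_forces,
                                                          random_state=0)
      # The MLP (trained by back-propagation) learns the inverse mapping,
      # i.e. the static decoupling from readings to forces.
      decoupler = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                               random_state=0).fit(X_train, y_train)

      residual = decoupler.predict(X_test) - y_test
      coupling_rate = np.abs(residual).max(axis=0) / np.abs(y_test).max(axis=0)
      print("residual coupling per axis (fraction):", coupling_rate)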

  1. An automated technique for potential differentiation of ovarian mature teratomas from other benign tumours using neural networks classification of 2D ultrasound static images: a pilot study

    NASA Astrophysics Data System (ADS)

    Al-karawi, Dhurgham; Sayasneh, A.; Al-Assam, Hisham; Jassim, Sabah; Page, N.; Timmerman, D.; Bourne, T.; Du, Hongbo

    2017-05-01

    Ovarian cysts are a common pathology in women of all age groups. It is estimated that 5-10% of women have a surgical intervention to remove an ovarian cyst in their lifetime. Given this frequency rate, characterization of ovarian masses is essential for optimal management of patients. Patients with benign ovarian masses can be managed conservatively if they are asymptomatic. Mature teratomas are common benign ovarian cysts that occur, in most cases, in premenopausal women. These ovarian cysts can contain different types of human tissue including bone, cartilage, fat, hair, or other tissue. If they are causing no symptoms, they can be harmless and may not require surgery. Subjective assessment by ultrasound examiners has a high diagnostic accuracy when distinguishing mature teratomas from other types of tumours. The aim of this study is to develop a computerised technique with the potential to characterise mature teratomas and distinguish them from other types of benign ovarian tumours. Local Binary Pattern (LBP) analysis was applied to extract texture features that are specific in distinguishing teratomas. A neural network (NN) was then used as a classifier for recognising mature teratomas. A pilot sample set of 130 B-mode static ovarian ultrasound images (41 mature teratoma tumours and 89 other benign tumours) was used to test the effectiveness of the proposed technique. Test results show an average accuracy rate of 99.4% with a sensitivity of 100%, specificity of 98.8% and positive predictive value of 98.9%. This study demonstrates that the NN and LBP techniques can accurately classify static 2D B-mode ultrasound images of benign ovarian masses into mature teratomas and other types of benign tumours.
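
    A minimal sketch of the LBP-plus-neural-network idea, assuming scikit-image and scikit-learn, with synthetic stand-in patches in place of the B-mode ultrasound images; the LBP parameters and network size are assumptions.

      import numpy as np
      from skimage.feature import local_binary_pattern
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(4)
      images = rng.random((130, 64, 64))            # 130 grey-scale image patches (stand-ins)
      labels = rng.integers(0, 2, size=130)         # 1 = teratoma, 0 = other benign

      P, R = 8, 1                                   # uniform LBP, 8 neighbours, radius 1
      def lbp_histogram(img):
          """Texture descriptor: normalised histogram of uniform LBP codes."""
          codes = local_binary_pattern(img, P, R, method="uniform")
          hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
          return hist

      X = np.array([lbp_histogram(im) for im in images])
      X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

      # Small feed-forward neural network on the LBP texture features.
      clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=3000, random_state=0)
      clf.fit(X_train, y_train)
      print("accuracy on held-out patches:", clf.score(X_test, y_test))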

  2. Static hand gesture recognition from a video

    NASA Astrophysics Data System (ADS)

    Rokade, Rajeshree S.; Doye, Dharmpal

    2011-10-01

    A sign language (also signed language) is a language which, instead of acoustically conveyed sound patterns, uses visually transmitted sign patterns to convey meaning, simultaneously combining hand shapes, orientation and movement of the hands. Sign languages commonly develop in deaf communities, which can include interpreters, friends and families of deaf people as well as people who are deaf or hard of hearing themselves. In this paper, we propose a novel system for recognition of static hand gestures from a video, based on a Kohonen neural network. We propose an algorithm to separate out key frames, which contain correct gestures, from a video sequence. We segment hand images from complex and non-uniform backgrounds. Features are extracted by applying the Kohonen network to the key frames, and recognition is then performed.
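
    A minimal Kohonen self-organising map sketch in plain NumPy, not the authors' implementation: key-frame feature vectors (synthetic here) are mapped onto a small grid of prototype units, and the best-matching unit can serve as a gesture code.

      import numpy as np

      rng = np.random.default_rng(5)
      features = rng.random((200, 64))              # stand-in key-frame feature vectors
      grid_h, grid_w, dim = 5, 5, features.shape[1]
      weights = rng.random((grid_h * grid_w, dim))  # prototype vector per map unit

      def bmu(x):
          """Index of the best-matching (closest) unit for input x."""
          return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

      coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)])
      for epoch in range(20):
          lr = 0.5 * np.exp(-epoch / 10)            # decaying learning rate
          sigma = 2.0 * np.exp(-epoch / 10)         # shrinking neighbourhood radius
          for x in features:
              winner = bmu(x)
              d = np.linalg.norm(coords - coords[winner], axis=1)
              h = np.exp(-(d ** 2) / (2 * sigma ** 2))     # neighbourhood function
              weights += lr * h[:, None] * (x - weights)   # pull units toward x

      print("gesture code of first key frame:", bmu(features[0]))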

  3. An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics

    NASA Technical Reports Server (NTRS)

    Baluja, Shumeet

    1995-01-01

    This report is a repository of the results obtained from a large-scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes which are commonly explored in the genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, bin packing, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. The algorithms tested and the encodings of the problems are described in detail for reproducibility.

  4. Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders

    PubMed Central

    2012-01-01

    Background Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD. We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI). Results Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex–MTG–IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group. Conclusions These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD. PMID:22889284

  5. Aeroelasticity of morphing wings using neural networks

    NASA Astrophysics Data System (ADS)

    Natarajan, Anand

    In this dissertation, neural networks are designed to effectively model static non-linear aeroelastic problems in adaptive structures and linear dynamic aeroelastic systems with time varying stiffness. The use of adaptive materials in aircraft wings allows for the change of the contour or the configuration of a wing (morphing) in flight. The use of smart materials, to accomplish these deformations, can imply that the stiffness of the wing with a morphing contour changes as the contour changes. For a rapidly oscillating body in a fluid field, continuously adapting structural parameters may render the wing to behave as a time variant system. Even the internal spars/ribs of the aircraft wing which define the wing stiffness can be made adaptive, that is, their stiffness can be made to vary with time. The immediate effect on the structural dynamics of the wing, is that, the wing motion is governed by a differential equation with time varying coefficients. The study of this concept of a time varying torsional stiffness, made possible by the use of active materials and adaptive spars, in the dynamic aeroelastic behavior of an adaptable airfoil is performed here. Another type of aeroelastic problem of an adaptive structure that is investigated here, is the shape control of an adaptive bump situated on the leading edge of an airfoil. Such a bump is useful in achieving flow separation control for lateral directional maneuverability of the aircraft. Since actuators are being used to create this bump on the wing surface, the energy required to do so needs to be minimized. The adverse pressure drag as a result of this bump needs to be controlled so that the loss in lift over the wing is made minimal. The design of such a "spoiler bump" on the surface of the airfoil is an optimization problem of maximizing pressure drag due to flow separation while minimizing the loss in lift and energy required to deform the bump. One neural network is trained using the CFD code FLUENT to represent the aerodynamic loading over the bump. A second neural network is trained for calculating the actuator loads, bump displacement and lift, drag forces over the airfoil using the finite element solver, ANSYS and the previously trained neural network. This non-linear aeroelastic model of the deforming bump on an airfoil surface using neural networks can serve as a fore-runner for other non-linear aeroelastic problems.

  6. Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation

    NASA Technical Reports Server (NTRS)

    Rakoczy, John M.; Herren, Kenneth A.

    2008-01-01

    A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.

  7. Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation

    NASA Technical Reports Server (NTRS)

    Rakoczy, John; Herren, Kenneth

    2007-01-01

    A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.

  8. Forecasting financial asset processes: stochastic dynamics via learning neural networks.

    PubMed

    Giebel, S; Rainer, M

    2010-01-01

    Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component in their process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from inherent limitations, due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, often performed without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks in order to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time-dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The back propagation in training the previous weights is limited to a certain memory length (in the examples we consider 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking the next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.

  9. Enhanced and diminished visuo-spatial information processing in autism depends on stimulus complexity.

    PubMed

    Bertone, Armando; Mottron, Laurent; Jelenic, Patricia; Faubert, Jocelyn

    2005-10-01

    Visuo-perceptual processing in autism is characterized by intact or enhanced performance on static spatial tasks and inferior performance on dynamic tasks, suggesting a deficit of dorsal visual stream processing in autism. However, previous findings by Bertone et al. indicate that neuro-integrative mechanisms used to detect complex motion, rather than motion perception per se, may be impaired in autism. We present here the first demonstration of concurrent enhanced and decreased performance in autism on the same visuo-spatial static task, wherein the only factor dichotomizing performance was the neural complexity required to discriminate grating orientation. The ability of persons with autism was found to be superior for identifying the orientation of simple, luminance-defined (or first-order) gratings but inferior for complex, texture-defined (or second-order) gratings. Using a flicker contrast sensitivity task, we demonstrated that this finding is probably not due to abnormal information processing at a sub-cortical level (magnocellular and parvocellular functioning). Together, these findings are interpreted as a clear indication of altered low-level perceptual information processing in autism, and confirm that the deficits and assets observed in autistic visual perception are contingent on the complexity of the neural network required to process a given type of visual stimulus. We suggest that atypical neural connectivity, resulting in enhanced lateral inhibition, may account for both enhanced and decreased low-level information processing in autism.

  10. Comparison of neural network applications for channel assignment in cellular TDMA networks and dynamically sectored PCS networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    1997-04-01

    The use of artificial neural networks (NNs) to address the channel assignment problem (CAP) for cellular time-division multiple access and code-division multiple access networks has previously been investigated by this author and many others. The investigations to date have been based on a hexagonal cell structure established by omnidirectional antennas at the base stations. No account was taken of the use of spatial isolation enabled by directional antennas to reduce interference between mobiles. Any reduction in interference translates into increased capacity and consequently alters the performance of the NNs. Previous studies have sought to improve the performance of Hopfield-Tank network algorithms and self-organizing feature map algorithms applied primarily to static channel assignment (SCA) for cellular networks that handle uniformly distributed, stationary traffic in each cell for a single type of service. The resulting algorithms minimize energy functions representing interference constraints and ad hoc conditions that promote convergence to optimal solutions. While the structures of the derived neural network algorithms (NNAs) offer the potential advantages of inherent parallelism and adaptability to changing system conditions, this potential has yet to be fulfilled for the CAP in emerging mobile networks. The next-generation communication infrastructures must accommodate dynamic operating conditions. Macrocell topologies are being refined to microcells and picocells that can be dynamically sectored by adaptively controlled, directional antennas and programmable transceivers. These networks must support the time-varying demands for personal communication services (PCS) that simultaneously carry voice, data and video and, thus, require new dynamic channel assignment (DCA) algorithms. This paper examines the impact of dynamic cell sectoring and geometric conditioning on NNAs developed for SCA in omnicell networks with stationary traffic to improve the metrics of convergence rate and call blocking. Genetic algorithms (GAs) are also considered in PCS networks as a means to overcome the known weakness of Hopfield NNAs in determining global minima. The resulting GAs for DCA in PCS networks are compared to improved DCA algorithms based on Hopfield NNs for stationary cellular networks. Algorithm performance is compared on the basis of rate of convergence, blocking probability, analytic complexity, and parametric sensitivity to transient traffic demands and channel interference.

  11. Luminance- and Texture-Defined Information Processing in School-Aged Children with Autism

    PubMed Central

    Rivest, Jessica B.; Jemel, Boutheina; Bertone, Armando; McKerral, Michelle; Mottron, Laurent

    2013-01-01

    According to the complexity-specific hypothesis, the efficacy with which individuals with autism spectrum disorder (ASD) process visual information varies according to the extensiveness of the neural network required to process stimuli. Specifically, adults with ASD are less sensitive to texture-defined (or second-order) information, which necessitates the implication of several cortical visual areas. Conversely, the sensitivity to simple, luminance-defined (or first-order) information, which mainly relies on primary visual cortex (V1) activity, has been found to be either superior (static material) or intact (dynamic material) in ASD. It is currently unknown if these autistic perceptual alterations are present in childhood. In the present study, behavioural (threshold) and electrophysiological measures were obtained for static luminance- and texture-defined gratings presented to school-aged children with ASD and compared to those of typically developing children. Our behavioural and electrophysiological (P140) results indicate that luminance processing is likely unremarkable in autistic children. With respect to texture processing, there was no significant threshold difference between groups. However, unlike typical children, autistic children did not show reliable enhancements of brain activity (N230 and P340) in response to texture-defined gratings relative to luminance-defined gratings. This suggests reduced efficiency of neuro-integrative mechanisms operating at a perceptual level in autism. These results are in line with the idea that visual atypicalities mediated by intermediate-scale neural networks emerge before or during the school-age period in autism. PMID:24205355

  12. Luminance- and texture-defined information processing in school-aged children with autism.

    PubMed

    Rivest, Jessica B; Jemel, Boutheina; Bertone, Armando; McKerral, Michelle; Mottron, Laurent

    2013-01-01

    According to the complexity-specific hypothesis, the efficacy with which individuals with autism spectrum disorder (ASD) process visual information varies according to the extensiveness of the neural network required to process stimuli. Specifically, adults with ASD are less sensitive to texture-defined (or second-order) information, which necessitates the implication of several cortical visual areas. Conversely, the sensitivity to simple, luminance-defined (or first-order) information, which mainly relies on primary visual cortex (V1) activity, has been found to be either superior (static material) or intact (dynamic material) in ASD. It is currently unknown if these autistic perceptual alterations are present in childhood. In the present study, behavioural (threshold) and electrophysiological measures were obtained for static luminance- and texture-defined gratings presented to school-aged children with ASD and compared to those of typically developing children. Our behavioural and electrophysiological (P140) results indicate that luminance processing is likely unremarkable in autistic children. With respect to texture processing, there was no significant threshold difference between groups. However, unlike typical children, autistic children did not show reliable enhancements of brain activity (N230 and P340) in response to texture-defined gratings relative to luminance-defined gratings. This suggests reduced efficiency of neuro-integrative mechanisms operating at a perceptual level in autism. These results are in line with the idea that visual atypicalities mediated by intermediate-scale neural networks emerge before or during the school-age period in autism.

  13. Prediction of Biological Motion Perception Performance from Intrinsic Brain Network Regional Efficiency

    PubMed Central

    Wang, Zengjian; Zhang, Delong; Liang, Bishan; Chang, Song; Pan, Jinghua; Huang, Ruiwang; Liu, Ming

    2016-01-01

    Biological motion perception (BMP) refers to the ability to perceive the moving form of a human figure from a limited amount of stimuli, such as from a few point lights located on the joints of a moving body. BMP is commonplace and important, but there is great inter-individual variability in this ability. This study used multiple regression model analysis to explore the association between BMP performance and intrinsic brain activity, in order to investigate the neural substrates underlying inter-individual variability of BMP performance. The resting-state functional magnetic resonance imaging (rs-fMRI) and BMP performance data were collected from 24 healthy participants, for whom intrinsic brain networks were constructed, and a graph-based network efficiency metric was measured. Then, a multiple linear regression model was used to explore the association between network regional efficiency and BMP performance. We found that the local and global network efficiency of many regions was significantly correlated with BMP performance. Further analysis showed that the local efficiency rather than global efficiency could be used to explain most of the BMP inter-individual variability, and the regions involved were predominately located in the Default Mode Network (DMN). Additionally, discrimination analysis showed that the local efficiency of certain regions such as the thalamus could be used to classify BMP performance across participants. Notably, the association pattern between network nodal efficiency and BMP was different from the association pattern of static directional/gender information perception. Overall, these findings show that intrinsic brain network efficiency may be considered a neural factor that explains BMP inter-individual variability. PMID:27853427

  14. Automatic classification of transiently evoked otoacoustic emissions using an artificial neural network.

    PubMed

    Buller, G; Lutman, M E

    1998-08-01

    The increasing use of transiently evoked otoacoustic emissions (TEOAE) in large neonatal hearing screening programmes makes a standardized method of response classification desirable. Until now methods have been either subjective or based on arbitrary response characteristics. This study takes an expert system approach to standardize the subjective judgements of an experienced scorer. The method that is developed comprises three stages. First, it transforms TEOAEs from waveforms in the time domain into a simplified parameter set. Second, the parameter set is classified by an artificial neural network that has been taught on a large database of TEOAE waveforms and corresponding expert scores. Third, additional fuzzy logic rules automatically detect probable artefacts in the waveforms and synchronized spontaneous emission components. In this way, the knowledge of the experienced scorer is encapsulated in the expert system software and thereafter can be accessed by non-experts. Teaching and evaluation of the neural network were based on TEOAEs from a database totalling 2190 neonatal hearing screening tests. The database was divided into learning and test groups with 820 and 1370 waveforms respectively. From each recorded waveform a set of 12 parameters was calculated, representing static and dynamic signal properties. The artificial network was taught with parameter sets of only the learning group. Reproduction of the human scorer classification by the neural net in the learning group showed a sensitivity for detecting screen fails of 99.3% (299 from 301 failed results on subjective scoring) and a specificity for detecting screen passes of 81.1% (421 of 519 pass results). To quantify the post hoc performance of the net (generalization), the test group was then presented to the network input. Sensitivity was 99.4% (474 from 477) and specificity was 87.3% (780 from 893). To check the efficiency of the classification method, a second learning group was selected out of the previous test group, and the previous learning group was used as the test group. Repeating the learning and test procedures yielded 99.3% sensitivity and 80.7% specificity for reproduction, and 99.4% sensitivity and 86.7% specificity for generalization. In all respects, performance was better than for a previously optimized method based simply on cross-correlation between replicate non-linear waveforms. It is concluded that classification methods based on neural networks show promise for application to large neonatal screening programmes utilizing TEOAEs.

  15. Neural dynamics of motion perception: direction fields, apertures, and resonant grouping.

    PubMed

    Grossberg, S; Mingolla, E

    1993-03-01

    A neural network model of global motion segmentation by visual cortex is described. Called the motion boundary contour system (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyze how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The motion BCS describes how preprocessing of motion signals by a motion oriented contrast (MOC) filter is joined to long-range cooperative grouping mechanisms in a motion cooperative-competitive (MOCC) loop to control phenomena such as motion capture. The motion BCS is computed in parallel with the static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the motion BCS and the static BCS, specialized to process motion directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions about microscopic computational differences of the parallel cortical streams V1-->MT and V1-->V2-->MT are made--notably, the magnocellular thick stripe and parvocellular interstripe streams. It is shown how the motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions of contrast. Interactions of model simple cells, complex cells, hyper-complex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions.

  16. Detection of white matter lesion regions in MRI using SLIC0 and convolutional neural network.

    PubMed

    Diniz, Pedro Henrique Bandeira; Valente, Thales Levi Azevedo; Diniz, João Otávio Bandeira; Silva, Aristófanes Corrêa; Gattass, Marcelo; Ventura, Nina; Muniz, Bernardo Carvalho; Gasparetto, Emerson Leandro

    2018-04-19

    White matter lesions are non-static brain lesions that have a prevalence rate of up to 98% in the elderly population. Because they may be associated with several brain diseases, it is important that they are detected as soon as possible. Magnetic Resonance Imaging (MRI) provides three-dimensional data with the possibility to detect and emphasize contrast differences in soft tissues, providing rich information about human soft tissue anatomy. However, the amount of data provided by these images is far too great for manual analysis/interpretation, representing a difficult and time-consuming task for specialists. This work presents a computational methodology capable of detecting regions of white matter lesions of the brain in FLAIR MRI. The techniques highlighted in this methodology are SLIC0 clustering for candidate segmentation and convolutional neural networks for candidate classification. The proposed methodology consists of four steps: (1) image acquisition, (2) image preprocessing, (3) candidate segmentation and (4) candidate classification. The methodology was applied to 91 magnetic resonance images provided by DASA, and achieved an accuracy of 98.73%, specificity of 98.77% and sensitivity of 78.79% with 0.005 false positives, without any false positive reduction technique, in the detection of white matter lesion regions. The feasibility of analysing brain MRI using SLIC0 and convolutional neural network techniques to detect white matter lesion regions is thus demonstrated. Copyright © 2018. Published by Elsevier B.V.
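
    A sketch of the candidate-segmentation stage only, assuming a recent scikit-image: SLIC0 superpixels are computed on a stand-in slice and bright superpixels are kept as lesion candidates for the CNN stage; the thresholds are illustrative.

      import numpy as np
      from skimage.segmentation import slic

      rng = np.random.default_rng(6)
      flair_slice = rng.random((128, 128))              # stand-in for a FLAIR slice

      # SLIC0 variant: the compactness is adapted per superpixel.
      segments = slic(flair_slice, n_segments=200, compactness=0.1,
                      slic_zero=True, channel_axis=None)

      candidates = []
      for label in np.unique(segments):
          mask = segments == label
          if flair_slice[mask].mean() > 0.55:           # crude intensity filter
              candidates.append(label)                  # candidate for the CNN stage
      print(f"{len(candidates)} candidate superpixels out of {len(np.unique(segments))}")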

  17. Establishing Malware Attribution and Binary Provenance Using Multicompilation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramshaw, M. J.

    2017-07-28

    Malware is a serious problem for computer systems and costs businesses and customers billions of dollars a year in addition to compromising their private information. Detecting malware is particularly difficult because malware source code can be compiled in many different ways and generate many different digital signatures, which causes problems for most anti-malware programs that rely on static signature detection. Our project uses a convolutional neural network to identify malware programs but these require large amounts of data to be effective. Towards that end, we gather thousands of source code files from publicly available programming contest sites and compile them with several different compilers and flags. Building upon current research, we then transform these binary files into image representations and use them to train a long-term recurrent convolutional neural network that will eventually be used to identify how a malware binary was compiled. This information will include the compiler, version of the compiler and the options used in compilation, information which can be critical in determining where a malware program came from and even who authored it.
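
    A hedged sketch of the binary-to-image transformation mentioned above: the raw bytes of a compiled program become a fixed-width grey-scale image. The image width is an arbitrary choice and the file path is hypothetical.

      import numpy as np

      def binary_to_image(path, width=256):
          """Reshape the raw bytes of a compiled binary into a grey-scale
          image (one pixel per byte), padding the last row with zeros."""
          data = np.fromfile(path, dtype=np.uint8)
          height = int(np.ceil(data.size / width))
          padded = np.zeros(height * width, dtype=np.uint8)
          padded[:data.size] = data
          return padded.reshape(height, width)

      # image = binary_to_image("a.out")   # hypothetical compiled sample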

  18. Prolongation of SMAP to Spatio-temporally Seamless Coverage of Continental US Using a Deep Learning Neural Network

    NASA Astrophysics Data System (ADS)

    Fang, K.; Shen, C.; Kifer, D.; Yang, X.

    2017-12-01

    The Soil Moisture Active Passive (SMAP) mission has delivered high-quality and valuable sensing of surface soil moisture since 2015. However, its short time span, coarse resolution, and irregular revisit schedule have limited its use. Utilizing a state-of-the-art deep-in-time neural network, Long Short-Term Memory (LSTM), we created a system that predicts SMAP level-3 soil moisture data using climate forcing, model-simulated moisture, and static physical attributes as inputs. The system removes most of the bias with model simulations and also improves predicted moisture climatology, achieving a testing accuracy of 0.025 to 0.03 in most parts of Continental United States (CONUS). As the first application of LSTM in hydrology, we show that it is more robust than simpler methods in either temporal or spatial extrapolation tests. We also discuss roles of different predictors, the effectiveness of regularization algorithms and impacts of training strategies. With high fidelity to SMAP products, our data can aid various applications including data assimilation, weather forecasting, and soil moisture hindcasting.
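
    A minimal PyTorch sketch of the idea: an LSTM reads daily forcing plus static attributes and predicts surface soil moisture for each day. The feature count, hidden size and training loop are illustrative, not the configuration used for the CONUS runs.

      import torch
      import torch.nn as nn

      class SoilMoistureLSTM(nn.Module):
          def __init__(self, n_features=10, hidden=64):
              super().__init__()
              self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)

          def forward(self, x):                  # x: (batch, days, features)
              out, _ = self.lstm(x)
              return self.head(out).squeeze(-1)  # soil moisture for every day

      model = SoilMoistureLSTM()
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()

      x = torch.randn(32, 365, 10)               # synthetic forcing/attribute series
      y = torch.rand(32, 365)                    # synthetic soil-moisture targets
      for _ in range(5):                         # a few illustrative training steps
          opt.zero_grad()
          loss = loss_fn(model(x), y)
          loss.backward()
          opt.step()
      print("training RMSE:", loss.sqrt().item())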

  19. A novel optical fiber displacement sensor of wider measurement range based on neural network

    NASA Astrophysics Data System (ADS)

    Guo, Yuan; Dai, Xue Feng; Wang, Yu Tian

    2006-02-01

    By studying on the output characteristics of random type optical fiber sensor and semicircular type optical fiber sensor, the ratio of the two output signals was used as the output signal of the whole system. Then the measurement range was enlarged, the linearity was improved, and the errors of reflective and absorbent changing of target surface are automatically compensated. Meantime, an optical fiber sensor model of correcting static error based on BP artificial neural network(ANN) is set up. So the intrinsic errors such as effects of fluctuations in the light, circuit excursion, the intensity losses in the fiber lines and the additional losses in the receiving fiber caused by bends are eliminated. By discussing in theory and experiment, the error of nonlinear is 2.9%, the measuring range reaches to 5-6mm and the relative accuracy is 2%.And this sensor has such characteristics as no electromagnetic interference, simple construction, high sensitivity, good accuracy and stability. Also the multi-point sensor system can be used to on-line and non-touch monitor in working locales.

  20. Synaptic convergence regulates synchronization-dependent spike transfer in feedforward neural networks.

    PubMed

    Sailamul, Pachaya; Jang, Jaeson; Paik, Se-Bum

    2017-12-01

    Correlated neural activities such as synchronizations can significantly alter the characteristics of spike transfer between neural layers. However, it is not clear how this synchronization-dependent spike transfer can be affected by the structure of convergent feedforward wiring. To address this question, we implemented computer simulations of model neural networks: a source and a target layer connected with different types of convergent wiring rules. In the Gaussian-Gaussian (GG) model, both the connection probability and the strength are given as Gaussian distribution as a function of spatial distance. In the Uniform-Constant (UC) and Uniform-Exponential (UE) models, the connection probability density is a uniform constant within a certain range, but the connection strength is set as a constant value or an exponentially decaying function, respectively. Then we examined how the spike transfer function is modulated under these conditions, while static or synchronized input patterns were introduced to simulate different levels of feedforward spike synchronization. We observed that the synchronization-dependent modulation of the transfer function appeared noticeably different for each convergence condition. The modulation of the spike transfer function was largest in the UC model, and smallest in the UE model. Our analysis showed that this difference was induced by the different spike weight distributions that was generated from convergent synapses in each model. Our results suggest that, the structure of the feedforward convergence is a crucial factor for correlation-dependent spike control, thus must be considered important to understand the mechanism of information transfer in the brain.
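
    The sketch below generates synapses for one target neuron under the three wiring rules (GG, UC and UE) to show how they produce different synaptic weight distributions; the distance distribution and parameter values are assumptions, not the simulation settings of the study.

      import numpy as np

      rng = np.random.default_rng(7)
      dist = np.abs(rng.uniform(-1.0, 1.0, size=1000))   # source-target distances
      sigma, radius = 0.3, 0.5

      def connect(prob, strength):
          """Sample synapses: keep a source with probability `prob` and weight it
          by `strength`; a zero weight means no connection."""
          mask = rng.random(dist.size) < prob
          return strength * mask

      gg = connect(np.exp(-dist**2 / (2 * sigma**2)),    # Gaussian probability
                   np.exp(-dist**2 / (2 * sigma**2)))    # Gaussian strength
      uc = connect((dist < radius) * 0.5,                # uniform probability
                   np.ones_like(dist))                   # constant strength
      ue = connect((dist < radius) * 0.5,                # uniform probability
                   np.exp(-dist / sigma))                # exponentially decaying strength

      for name, w in [("GG", gg), ("UC", uc), ("UE", ue)]:
          w = w[w > 0]
          print(name, "synapses:", w.size, "weight CV:", w.std() / w.mean())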

  1. A framework for fast probabilistic centroid-moment-tensor determination—inversion of regional static displacement measurements

    NASA Astrophysics Data System (ADS)

    Käufl, Paul; Valentine, Andrew P.; O'Toole, Thomas B.; Trampert, Jeannot

    2014-03-01

    The determination of earthquake source parameters is an important task in seismology. For many applications, it is also valuable to understand the uncertainties associated with these determinations, and this is particularly true in the context of earthquake early warning (EEW) and hazard mitigation. In this paper, we develop a framework for probabilistic moment tensor point source inversions in near real time. Our methodology allows us to find an approximation to p(m|d), the conditional probability of source models (m) given observations (d). This is obtained by smoothly interpolating a set of random prior samples, using Mixture Density Networks (MDNs)-a class of neural networks which output the parameters of a Gaussian mixture model. By combining multiple networks as `committees', we are able to obtain a significant improvement in performance over that of a single MDN. Once a committee has been constructed, new observations can be inverted within milliseconds on a standard desktop computer. The method is therefore well suited for use in situations such as EEW, where inversions must be performed routinely and rapidly for a fixed station geometry. To demonstrate the method, we invert regional static GPS displacement data for the 2010 MW 7.2 El Mayor Cucapah earthquake in Baja California to obtain estimates of magnitude, centroid location and depth and focal mechanism. We investigate the extent to which we can constrain moment tensor point sources with static displacement observations under realistic conditions. Our inversion results agree well with published point source solutions for this event, once the uncertainty bounds of each are taken into account.
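
    A compact sketch of a Mixture Density Network head for a single source parameter, assuming PyTorch: the network outputs mixture weights, means and standard deviations approximating p(m|d), and is trained with the mixture negative log-likelihood. Dimensions are illustrative and the committee averaging is omitted.

      import torch
      import torch.nn as nn

      class MDN(nn.Module):
          def __init__(self, n_obs=30, hidden=64, K=5):
              super().__init__()
              self.body = nn.Sequential(nn.Linear(n_obs, hidden), nn.Tanh())
              self.pi = nn.Linear(hidden, K)          # mixture weights (logits)
              self.mu = nn.Linear(hidden, K)          # component means
              self.log_sigma = nn.Linear(hidden, K)   # component log std deviations

          def forward(self, d):
              h = self.body(d)
              return torch.log_softmax(self.pi(h), dim=-1), self.mu(h), self.log_sigma(h)

      def mdn_nll(log_pi, mu, log_sigma, m):
          """Negative log-likelihood of target m under the predicted mixture."""
          dist = torch.distributions.Normal(mu, log_sigma.exp())
          log_prob = dist.log_prob(m.unsqueeze(-1)) + log_pi     # per-component term
          return -torch.logsumexp(log_prob, dim=-1).mean()

      net = MDN()
      d = torch.randn(16, 30)          # synthetic static-displacement observations
      m = torch.randn(16)              # synthetic source parameter (e.g. magnitude)
      loss = mdn_nll(*net(d), m)
      loss.backward()                  # train with any gradient-based optimiser
      print("NLL:", loss.item())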

  2. An algorithm for generating modular hierarchical neural network classifiers: a step toward larger scale applications

    NASA Astrophysics Data System (ADS)

    Roverso, Davide

    2003-08-01

    Many-class learning is the problem of training a classifier to discriminate among a large number of target classes. Together with the problem of dealing with high-dimensional patterns (i.e. a high-dimensional input space), the many-class problem (i.e. a high-dimensional output space) is a major obstacle to be faced when scaling up classifier systems and algorithms from small pilot applications to large full-scale applications. The Autonomous Recursive Task Decomposition (ARTD) algorithm is here proposed as a solution to the problem of many-class learning. Example applications of ARTD to neural classifier training are also presented. In these examples, improvements in training time are shown to range from 4-fold to more than 30-fold in pattern classification tasks of both static and dynamic character.

  3. An Investigation of the Application of Artificial Neural Networks to Adaptive Optics Imaging Systems

    DTIC Science & Technology

    1991-12-01

    neural network and the feedforward neural network studied is the single layer perceptron artificial neural network . The recurrent artificial neural network input...features are the wavefront sensor slope outputs and neighboring actuator feedback commands. The feedforward artificial neural network input

  4. Classification of 2-dimensional array patterns: assembling many small neural networks is better than using a large one.

    PubMed

    Chen, Liang; Xue, Wei; Tokuda, Naoyuki

    2010-08-01

    In many pattern classification/recognition applications of artificial neural networks, an object to be classified is represented by a fixed sized 2-dimensional array of uniform type, which corresponds to the cells of a 2-dimensional grid of the same size. A general neural network structure, called an undistricted neural network, which takes all the elements in the array as inputs could be used for problems such as these. However, a districted neural network can be used to reduce the training complexity. A districted neural network usually consists of two levels of sub-neural networks. Each of the lower level neural networks, called a regional sub-neural network, takes the elements in a region of the array as its inputs and is expected to output a temporary class label, called an individual opinion, based on the partial information of the entire array. The higher level neural network, called an assembling sub-neural network, uses the outputs (opinions) of regional sub-neural networks as inputs, and by consensus derives the label decision for the object. Each of the sub-neural networks can be trained separately and thus the training is less expensive. The regional sub-neural networks can be trained and performed in parallel and independently, therefore a high speed can be achieved. We prove theoretically in this paper, using a simple model, that a districted neural network is actually more stable than an undistricted neural network in noisy environments. We conjecture that the result is valid for all neural networks. This theory is verified by experiments involving gender classification and human face recognition. We conclude that a districted neural network is highly recommended for neural network applications in recognition or classification of 2-dimensional array patterns in highly noisy environments. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
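
    A simplified sketch of the districted scheme using scikit-learn on synthetic 16x16 arrays: four regional sub-networks each see one quadrant, and an assembling sub-network combines their class-probability "opinions". In practice the assembler would be trained on held-out regional outputs rather than the same training data.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(8)
      X = rng.random((400, 16, 16))                  # synthetic 2-D array patterns
      y = rng.integers(0, 2, size=400)
      train, test = slice(0, 300), slice(300, 400)

      quadrants = [X[:, :8, :8], X[:, :8, 8:], X[:, 8:, :8], X[:, 8:, 8:]]
      regional = []                                  # the regional sub-neural networks
      opinions = []
      for q in quadrants:
          flat = q.reshape(len(X), -1)
          net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                              random_state=0).fit(flat[train], y[train])
          regional.append(net)
          opinions.append(net.predict_proba(flat))   # each region's "opinion"

      meta_in = np.hstack(opinions)                  # opinions become assembler features
      assembler = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                random_state=0).fit(meta_in[train], y[train])
      print("districted accuracy:", assembler.score(meta_in[test], y[test]))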

  5. Advanced Aeroservoelastic Testing and Data Analysis (Les Essais Aeroservoelastiques et l’Analyse des Donnees).

    DTIC Science & Technology

    1995-11-01

    Neural network-based AFS concepts are discussed, along with neural network-based methods for estimating the unknown parameters of a postulated state space model, using i) feed-forward neural networks and ii) recurrent neural networks [117-119].

  6. Direct heuristic dynamic programming for damping oscillations in a large power system.

    PubMed

    Lu, Chao; Si, Jennie; Xie, Xiaorong

    2008-08-01

    This paper applies a neural-network-based approximate dynamic programming method, namely, the direct heuristic dynamic programming (direct HDP), to a large power system stability control problem. The direct HDP is a learning- and approximation-based approach to addressing nonlinear coordinated control under uncertainty. One of the major design parameters, the controller learning objective function, is formulated to directly account for network-wide low-frequency oscillation with the presence of nonlinearity, uncertainty, and coupling effect among system components. Results include a novel learning control structure based on the direct HDP with applications to two power system problems. The first case involves static var compensator supplementary damping control, which is used to provide a comprehensive evaluation of the learning control performance. The second case aims at addressing a difficult complex system challenge by providing a new solution to a large interconnected power network oscillation damping control problem that frequently occurs in the China Southern Power Grid.

  7. Neural networks for aircraft control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  8. Time Series Neural Network Model for Part-of-Speech Tagging Indonesian Language

    NASA Astrophysics Data System (ADS)

    Tanadi, Theo

    2018-03-01

    Part-of-speech tagging (POS tagging) is an important part of natural language processing. Many methods have been used for this task, including neural networks. This paper models a neural network that performs POS tagging. A time series neural network is modelled to solve the problems that a basic neural network faces when attempting POS tagging. In order to enable the neural network to take text data as input, the text data are first clustered using Brown Clustering, resulting in a binary dictionary that the neural network can use. To further improve the accuracy of the neural network, other features such as the POS tags, suffixes, and affixes of previous words are also fed to the neural network.

  9. Selection of neural network structure for system error correction of electro-optical tracker system with horizontal gimbal

    NASA Astrophysics Data System (ADS)

    Liu, Xing-fa; Cen, Ming

    2007-12-01

    The neural network system error correction method is more precise than the least squares and spherical harmonics function system error correction methods. The accuracy of the neural network method is mainly determined by the structure of the neural network. Analysis and simulation prove that both the BP neural network and the RBF neural network system error correction methods have high correction accuracy; when the training sample set is small, it is better to use the RBF network method than the BP network method, considering training speed and network scale.

  10. A novel recurrent neural network with finite-time convergence for linear programming.

    PubMed

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.

  11. Modular, Hierarchical Learning By Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    A modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks that are more structured than networks in which all neurons are fully interconnected. These networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, the sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.

  12. Disrupted global metastability and static and dynamic brain connectivity across individuals in the Alzheimer’s disease continuum

    NASA Astrophysics Data System (ADS)

    Córdova-Palomera, Aldo; Kaufmann, Tobias; Persson, Karin; Alnæs, Dag; Doan, Nhat Trung; Moberget, Torgeir; Lund, Martina Jonette; Barca, Maria Lage; Engvig, Andreas; Brækhus, Anne; Engedal, Knut; Andreassen, Ole A.; Selbæk, Geir; Westlye, Lars T.

    2017-01-01

    As findings on the neuropathological and behavioral components of Alzheimer’s disease (AD) continue to accrue, converging evidence suggests that macroscale brain functional disruptions may mediate their association. Recent developments in theoretical neuroscience indicate that instantaneous patterns of brain connectivity and metastability may be a key mechanism in neural communication underlying cognitive performance. However, the potential significance of these patterns across the AD spectrum remains virtually unexplored. We assessed the clinical sensitivity of static and dynamic functional brain disruptions across the AD spectrum using resting-state fMRI in a sample consisting of AD patients (n = 80) and subjects with either mild (n = 44) or subjective (n = 26) cognitive impairment (MCI, SCI). Spatial maps constituting the nodes in the functional brain network and their associated time-series were estimated using spatial group independent component analysis and dual regression, and whole-brain oscillatory activity was analyzed both globally (metastability) and locally (static and dynamic connectivity). Instantaneous phase metrics showed functional coupling alterations in AD compared to MCI and SCI, both static (putamen, dorsal and default-mode) and dynamic (temporal, frontal-superior and default-mode), along with decreased global metastability. The results suggest that the brains of AD patients display altered oscillatory patterns, in agreement with theoretical premises on cognitive dynamics.
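
    The global metastability measure referred to above is commonly computed as the variability of the Kuramoto order parameter over time; a short sketch on synthetic node time-series, assuming SciPy, is given below.

      import numpy as np
      from scipy.signal import hilbert

      rng = np.random.default_rng(9)
      ts = rng.normal(size=(50, 200))                   # 50 nodes x 200 time points (synthetic)

      phases = np.angle(hilbert(ts, axis=1))            # instantaneous phase per node
      order = np.abs(np.exp(1j * phases).mean(axis=0))  # Kuramoto order parameter R(t)
      metastability = order.std()                       # variability of global synchrony
      print("mean synchrony:", order.mean(), "metastability:", metastability)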

  13. Linear matrix inequality approach to exponential synchronization of a class of chaotic neural networks with time-varying delays

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Cui, Bao-Tong

    2007-07-01

    In this paper, a synchronization scheme for a class of chaotic neural networks with time-varying delays is presented. This class of chaotic neural networks covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks, and bidirectional associative memory networks. The obtained criteria are expressed in terms of linear matrix inequalities, so they can be verified efficiently. A comparison between our results and previous results shows that ours are less restrictive.

  14. Next-generation probes, particles, and proteins for neural interfacing

    PubMed Central

    Rivnay, Jonathan; Wang, Huiliang; Fenno, Lief; Deisseroth, Karl; Malliaras, George G.

    2017-01-01

    Bidirectional interfacing with the nervous system enables neuroscience research, diagnosis, and therapy. This two-way communication allows us to monitor the state of the brain and its composite networks and cells as well as to influence them to treat disease or repair/restore sensory or motor function. To provide the most stable and effective interface, the tools of the trade must bridge the soft, ion-rich, and evolving nature of neural tissue with the largely rigid, static realm of microelectronics and medical instruments that allow for readout, analysis, and/or control. In this Review, we describe how the understanding of neural signaling and material-tissue interactions has fueled the expansion of the available tool set. New probe architectures and materials, nanoparticles, dyes, and designer genetically encoded proteins push the limits of recording and stimulation lifetime, localization, and specificity, blurring the boundary between living tissue and engineered tools. Understanding these approaches, their modality, and the role of cross-disciplinary development will support new neurotherapies and prostheses and provide neuroscientists and neurologists with unprecedented access to the brain. PMID:28630894

  15. Diagnosing Autism Spectrum Disorder from Brain Resting-State Functional Connectivity Patterns Using a Deep Neural Network with a Novel Feature Selection Method.

    PubMed

    Guo, Xinyu; Dominick, Kelli C; Minai, Ali A; Li, Hailong; Erickson, Craig A; Lu, Long J

    2017-01-01

    The whole-brain functional connectivity (FC) pattern obtained from resting-state functional magnetic resonance imaging data is commonly applied to study neuropsychiatric conditions such as autism spectrum disorder (ASD) by using different machine learning models. Recent studies indicate that both hyper- and hypo-aberrant ASD-associated FCs were widely distributed throughout the entire brain rather than only in some specific brain regions. Deep neural networks (DNN) with multiple hidden layers have shown the ability to systematically extract lower-to-higher level information from high dimensional data across a series of neural hidden layers, significantly improving classification accuracy for such data. In this study, a DNN with a novel feature selection method (DNN-FS) is developed for the high dimensional whole-brain resting-state FC pattern classification of ASD patients vs. typical development (TD) controls. The feature selection method is able to help the DNN generate low dimensional high-quality representations of the whole-brain FC patterns by selecting features with high discriminating power from multiple trained sparse auto-encoders. For the comparison, a DNN without the feature selection method (DNN-woFS) is developed, and both of them are tested with different architectures (i.e., with different numbers of hidden layers/nodes). Results show that the best classification accuracy of 86.36% is generated by the DNN-FS approach with 3 hidden layers and 150 hidden nodes (3/150). Remarkably, DNN-FS outperforms DNN-woFS for all architectures studied. The most significant accuracy improvement was 9.09% with the 3/150 architecture. The method also outperforms other feature selection methods, e.g., the two-sample t-test and elastic net. In addition to improving the classification accuracy, a Fisher's score-based biomarker identification method based on the DNN is also developed, and used to identify 32 FCs related to ASD. These FCs come from or cross different pre-defined brain networks including the default-mode, cingulo-opercular, frontal-parietal, and cerebellum. Thirteen of them are statistically significant between the ASD and TD groups (two-sample t-test, p < 0.05) while 19 of them are not. The relationship between the statistically significant FCs and the corresponding ASD behavior symptoms is discussed based on the literature and clinicians' expert knowledge. Meanwhile, the potential reason for obtaining 19 FCs which are not statistically significant is also provided.
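
    The Fisher score mentioned for biomarker identification can be illustrated in a few lines, as sketched below: synthetic FC features from two groups are ranked by the standard score (mu1 - mu2)^2 / (var1 + var2). The data, group sizes and planted effects are invented, and the full DNN-FS pipeline with sparse auto-encoders is not reproduced.

        # Minimal sketch of Fisher-score feature ranking for functional-connectivity (FC)
        # features between two groups (e.g. ASD vs. TD). Synthetic data for illustration;
        # the paper's full DNN-FS pipeline (sparse auto-encoders + deep network) is not shown.
        import numpy as np

        def fisher_scores(X_group1, X_group2):
            """Return one Fisher score per feature: (mu1 - mu2)^2 / (var1 + var2)."""
            m1, m2 = X_group1.mean(axis=0), X_group2.mean(axis=0)
            v1, v2 = X_group1.var(axis=0), X_group2.var(axis=0)
            return (m1 - m2) ** 2 / (v1 + v2 + 1e-12)

        if __name__ == "__main__":
            rng = np.random.default_rng(42)
            n_features = 1000                      # e.g. vectorized upper-triangular FC matrix
            asd = rng.standard_normal((40, n_features))
            td = rng.standard_normal((45, n_features))
            asd[:, :5] += 0.8                      # plant a few discriminative FCs
            scores = fisher_scores(asd, td)
            top = np.argsort(scores)[::-1][:10]
            print("top-ranked FC features:", top)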

  16. Resting state cerebral blood flow with arterial spin labeling MRI in developing human brains.

    PubMed

    Liu, Feng; Duan, Yunsuo; Peterson, Bradley S; Asllani, Iris; Zelaya, Fernando; Lythgoe, David; Kangarlu, Alayar

    2018-07-01

    The development of brain circuits is coupled with changes in neurovascular coupling, which refers to the close relationship between neural activity and cerebral blood flow (CBF). Studying the characteristics of CBF during the resting state in the developing brain can be a complementary way to understand the functional connectivity of the developing brain. Arterial spin labeling (ASL), as a noninvasive MR technique, is particularly attractive for studying cerebral perfusion in children and even newborns. We collected pulsed ASL data in the resting state for 47 healthy subjects ranging from young children to adolescents (aged 6 to 20 years). In addition to studying the developmental change of static CBF maps during the resting state, we also analyzed the CBF time series to reveal the dynamic characteristics of CBF in differing age groups. We used seed-based correlation analysis to examine the temporal relationship of the CBF time series between the selected ROIs and other brain regions. We have shown the developmental patterns in both static CBF maps and dynamic characteristics of CBF. While higher CBF of the default mode network (DMN) in all age groups supports that the DMN is the prominent active network during the resting state, the CBF connectivity patterns of some typical resting state networks show distinct patterns of metabolic activity during the resting state in the developing brain. Copyright © 2018 European Paediatric Neurology Society. All rights reserved.

  17. Electronic Neural Networks

    NASA Technical Reports Server (NTRS)

    Thakoor, Anil

    1990-01-01

    Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.

  18. The neural network to determine the mechanical properties of the steels

    NASA Astrophysics Data System (ADS)

    Yemelyanov, Vitaliy; Yemelyanova, Nataliya; Safonova, Marina; Nedelkin, Aleksey

    2018-04-01

    The authors describe the neural network structure and software designed and developed to determine the mechanical properties of steels. The neural network is developed to refine the values of the steel properties. The results of simulations of the developed neural network are shown. The authors note the low standard error of the proposed neural network. To implement the proposed neural network, specialized software has been developed.

  19. Hydrologic Record Extension of Water-Level Data in the Everglades Depth Estimation Network (EDEN) Using Artificial Neural Network Models, 2000-2006

    USGS Publications Warehouse

    Conrads, Paul; Roehl, Edwin A.

    2007-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level gaging stations, ground-elevation models, and water-surface models designed to provide scientists, engineers, and water-resource managers with current (2000-present) water-depth information for the entire freshwater portion of the greater Everglades. The U.S. Geological Survey Greater Everglades Priority Ecosystem Science provides support for EDEN and the goal of providing quality assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. To increase the accuracy of the water-surface models, 25 real-time water-level gaging stations were added to the network of 253 established water-level gaging stations. To incorporate the data from the newly added stations to the 7-year EDEN database in the greater Everglades, the short-term water-level records (generally less than 1 year) needed to be simulated back in time (hindcasted) to be concurrent with data from the established gaging stations in the database. A three-step modeling approach using artificial neural network models was used to estimate the water levels at the new stations. The artificial neural network models used static variables that represent the gaging station location and percent vegetation in addition to dynamic variables that represent water-level data from the established EDEN gaging stations. The final step of the modeling approach was to simulate the computed error of the initial estimate to increase the accuracy of the final water-level estimate. The three-step modeling approach for estimating water levels at the new EDEN gaging stations produced satisfactory results. The coefficients of determination (R2) for 21 of the 25 estimates were greater than 0.95, and all of the estimates (25 of 25) were greater than 0.82. The model estimates showed good agreement with the measured data. For some new EDEN stations with limited measured data, the record extension (hindcasts) included periods beyond the range of the data used to train the artificial neural network models. The comparison of the hindcasts with long-term water-level data proximal to the new EDEN gaging stations indicated that the water-level estimates were reasonable. The percent model error (root mean square error divided by the range of the measured data) was less than 6 percent, and for the majority of stations (20 of 25), the percent model error was less than 1 percent.
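
    The two accuracy measures quoted in the report, the coefficient of determination and the percent model error (root mean square error divided by the range of the measured data), can be computed as in the sketch below; the water-level series used there are placeholders, not EDEN data.

        # Sketch of the two fit statistics quoted in the report: the coefficient of
        # determination (R^2) and the percent model error, defined there as the root
        # mean square error divided by the range of the measured data.
        import numpy as np

        def r_squared(measured, estimated):
            ss_res = np.sum((measured - estimated) ** 2)
            ss_tot = np.sum((measured - measured.mean()) ** 2)
            return 1.0 - ss_res / ss_tot

        def percent_model_error(measured, estimated):
            rmse = np.sqrt(np.mean((measured - estimated) ** 2))
            return 100.0 * rmse / (measured.max() - measured.min())

        if __name__ == "__main__":
            # Placeholder water-level series (feet); real use would compare an ANN
            # hindcast against the short measured record at a new gaging station.
            measured = np.array([9.8, 10.1, 10.4, 10.9, 11.2, 10.7, 10.3])
            estimated = np.array([9.9, 10.0, 10.5, 10.8, 11.1, 10.8, 10.2])
            print("R^2 =", round(r_squared(measured, estimated), 3))
            print("percent model error =", round(percent_model_error(measured, estimated), 2), "%")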

  20. Efficient Analysis of Complex Structures

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.

    2000-01-01

    The various accomplishments achieved during this project are: (1) A survey of Neural Network (NN) applications using the MATLAB NN Toolbox in structural engineering, especially for equivalent continuum models (Appendix A). (2) Application of NN and GAs to simulate and synthesize substructures: 1-D and 2-D beam problems (Appendix B). (3) Development of an equivalent plate-model analysis method (EPA) for static and vibration analysis of general trapezoidal built-up wing structures composed of skins, spars and ribs; calculation of a variety of test cases and comparison with measurements or FEA results (Appendix C). (4) Basic work on using second-order sensitivities to simulate wing modal response, a discussion of sensitivity evaluation approaches, and some results (Appendix D). (5) Establishing a general methodology for simulating the modal responses by direct application of NN and by sensitivity techniques, in a design space composed of a number of design points; comparison is made through examples using these two methods (Appendix E). (6) Establishing a general methodology for efficient analysis of complex wing structures by indirect application of NN: the NN-aided Equivalent Plate Analysis; training of the neural networks for this purpose in several cases of design spaces, which is applicable to the actual design of complex wings (Appendix F).

  1. Crack propagation analysis using acoustic emission sensors for structural health monitoring systems.

    PubMed

    Kral, Zachary; Horn, Walter; Steck, James

    2013-01-01

    Aerospace systems are expected to remain in service well beyond their designed life. Consequently, maintenance is an important issue. A novel method of implementing artificial neural networks and acoustic emission sensors to form a structural health monitoring (SHM) system for aerospace inspection routines was the focus of this research. Simple structural elements, consisting of flat aluminum plates of AL 2024-T3, were subjected to increasing static tensile loading. As the loading increased, designed cracks extended in length, releasing strain waves in the process. Strain wave signals, measured by acoustic emission sensors, were further analyzed in post-processing by artificial neural networks (ANN). Several experiments were performed to determine the severity and location of the crack extensions in the structure. ANNs were trained on a portion of the data acquired by the sensors and the ANNs were then validated with the remaining data. The combination of a system of acoustic emission sensors, and an ANN could determine crack extension accurately. The difference between predicted and actual crack extensions was determined to be between 0.004 in. and 0.015 in. with 95% confidence. These ANNs, coupled with acoustic emission sensors, showed promise for the creation of an SHM system for aerospace systems.

  2. Region stability analysis and tracking control of memristive recurrent neural network.

    PubMed

    Bao, Gang; Zeng, Zhigang; Shen, Yanjun

    2018-02-01

    The memristor was first postulated by Leon Chua and realized by the Hewlett-Packard (HP) laboratory. Research results show that memristors can be used to simulate the synapses of neurons. This paper presents a class of recurrent neural networks with HP memristors. Firstly, simulations show that the memristive recurrent neural network has more compound dynamics than the traditional recurrent neural network. Then it is shown that an n-dimensional memristive recurrent neural network is composed of [Formula: see text] sub-neural networks which do not have a common equilibrium point. By designing a tracking controller, the memristive neural network can be made to converge to the desired sub-neural network. Finally, two numerical examples are given to verify the validity of our result. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints.

    PubMed

    Liang, X B; Wang, J

    2000-01-01

    This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special problem which can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and will converge to the set of equilibria of the neural network for any initial point in the feasible bound region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory will eventually converge to the feasible region for any initial state, even one outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
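
    A discrete-time sketch of a projection-type recurrent network of this general kind is given below: the state follows dx/dt = -x + P(x - alpha * grad f(x)), where P clips its argument to the bound constraints, so equilibria coincide with points satisfying the projection optimality condition. The objective, bounds, step sizes and iteration count are illustrative choices, not taken from the paper.

        # Sketch of a projection-type recurrent network for bound-constrained minimization:
        #     dx/dt = -x + P_box(x - alpha * grad_f(x)),
        # where P_box clips the state to [lo, hi]. Equilibria satisfy the optimality
        # condition x = P_box(x - alpha * grad_f(x)). Objective and bounds are illustrative.
        import numpy as np

        def project(x, lo, hi):
            return np.clip(x, lo, hi)

        def projection_network(grad_f, x0, lo, hi, alpha=0.5, dt=0.05, steps=5000):
            x = np.array(x0, dtype=float)
            for _ in range(steps):
                x += dt * (-x + project(x - alpha * grad_f(x), lo, hi))  # Euler step of the ODE
            return x

        if __name__ == "__main__":
            # Example: minimize f(x) = (x1 - 2)^2 + (x2 + 1)^2 subject to 0 <= x <= 1.
            grad_f = lambda x: 2.0 * (x - np.array([2.0, -1.0]))
            x_star = projection_network(grad_f, x0=[0.5, 0.5], lo=0.0, hi=1.0)
            print("converged state:", np.round(x_star, 3))   # expected close to [1, 0]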

  4. Neural network-based nonlinear model predictive control vs. linear quadratic gaussian control

    USGS Publications Warehouse

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.

  5. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    ERIC Educational Resources Information Center

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…

  6. Quantized Synchronization of Chaotic Neural Networks With Scheduled Output Feedback Control.

    PubMed

    Wan, Ying; Cao, Jinde; Wen, Guanghui

    In this paper, the synchronization problem of master-slave chaotic neural networks with remote sensors, quantization process, and communication time delays is investigated. The information communication channel between the master chaotic neural network and slave chaotic neural network consists of several remote sensors, with each sensor able to access only partial knowledge of output information of the master neural network. At each sampling instant, each sensor updates its own measurement and only one sensor is scheduled to transmit its latest information to the controller's side in order to update the control inputs for the slave neural network. Thus, such a communication process and control strategy are much more energy-saving compared with the traditional point-to-point scheme. Sufficient conditions for the output feedback control gain matrix, allowable length of sampling intervals, and upper bound of network-induced delays are derived to ensure the quantized synchronization of master-slave chaotic neural networks. Lastly, Chua's circuit system and a 4-D Hopfield neural network are simulated to validate the effectiveness of the main results.

  7. Modified neural networks for rapid recovery of tokamak plasma parameters for real time control

    NASA Astrophysics Data System (ADS)

    Sengupta, A.; Ranjan, P.

    2002-07-01

    Two modified neural network techniques are used for the identification of the equilibrium plasma parameters of the Superconducting Steady State Tokamak I from external magnetic measurements. This is expected to ultimately assist in real time plasma control. Unlike the conventional network structure, in which a single network with the optimum number of processing elements calculates the outputs, one of the methods here performs the calculations with a multinetwork system connected in parallel. This network is called the double neural network. The accuracy of the recovered parameters is clearly higher than with the conventional network. The other type of neural network used here is based on statistical function parametrization combined with a neural network. The principal component transformation removes linear dependences from the measurements and a dimensional reduction process reduces the dimensionality of the input space. This reduced and transformed input set, rather than the entire set, is fed into the neural network input. This is known as the principal component transformation-based neural network. The accuracy of the recovered parameters in the latter type of modified network is found to be a further improvement over the accuracy of the double neural network. This result differs from that obtained in an earlier work where the double neural network showed better performance. The conventional network and the function parametrization methods have also been used for comparison. The conventional network has been used for an optimization of the set of magnetic diagnostics. The effective set of sensors, as assessed by this network, is compared with that of the principal component based network. Fault tolerance of the neural networks has been tested. The double neural network showed the maximum resistance to faults in the diagnostics, while the principal component based network performed poorly. Finally, the processing times of the methods have been compared. The double network and the principal component network involve the minimum computation time, although the conventional network also performs well enough to be used in real time.

  8. Control of magnetic bearing systems via the Chebyshev polynomial-based unified model (CPBUM) neural network.

    PubMed

    Jeng, J T; Lee, T T

    2000-01-01

    A Chebyshev polynomial-based unified model (CPBUM) neural network is introduced and applied to control magnetic bearing systems. First, we show that the CPBUM neural network not only has the same universal approximation capability, but also has a faster learning speed than a conventional feedforward/recurrent neural network. It turns out that the CPBUM neural network is more suitable for controller design than the conventional feedforward/recurrent neural network. Second, we propose the inverse system method, based on the CPBUM neural networks, to control a magnetic bearing system. The proposed controller has two structures, namely, off-line and on-line learning structures. We derive a new learning algorithm for each proposed structure. The experimental results show that the proposed neural network architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
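
    The Chebyshev polynomial expansion at the heart of a CPBUM-style functional-link network can be sketched as below: the input is expanded with the recurrence T0 = 1, T1 = x, T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), and a linear readout is fitted on top. The target function and the batch least-squares fit are only for illustration; the paper's off-line and on-line controller structures are not reproduced.

        # Sketch of a Chebyshev-polynomial functional expansion (the building block of a
        # CPBUM-style network): expand the input with T0=1, T1=x, T_{n+1}=2x*T_n - T_{n-1},
        # then fit a linear readout. The target function and least-squares fit are only
        # illustrative; the paper's on-line/off-line controllers are not reproduced here.
        import numpy as np

        def chebyshev_features(x, order):
            """x: 1-D array scaled into [-1, 1]; returns array of shape (len(x), order + 1)."""
            feats = [np.ones_like(x), x]
            for _ in range(2, order + 1):
                feats.append(2.0 * x * feats[-1] - feats[-2])   # recurrence T_{n+1} = 2x T_n - T_{n-1}
            return np.stack(feats[: order + 1], axis=1)

        if __name__ == "__main__":
            x = np.linspace(-1.0, 1.0, 200)
            target = np.sin(3.0 * x) + 0.2 * x ** 2              # made-up nonlinear mapping
            Phi = chebyshev_features(x, order=8)
            w, *_ = np.linalg.lstsq(Phi, target, rcond=None)     # linear readout weights
            approx = Phi @ w
            print("max approximation error:", round(np.max(np.abs(approx - target)), 4))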

  9. ChainMail based neural dynamics modeling of soft tissue deformation for surgical simulation.

    PubMed

    Zhang, Jinao; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2017-07-20

    Realistic and real-time modeling and simulation of soft tissue deformation is a fundamental research issue in the field of surgical simulation. In this paper, a novel cellular neural network approach is presented for modeling and simulation of soft tissue deformation by combining neural dynamics of cellular neural network with ChainMail mechanism. The proposed method formulates the problem of elastic deformation into cellular neural network activities to avoid the complex computation of elasticity. The local position adjustments of ChainMail are incorporated into the cellular neural network as the local connectivity of cells, through which the dynamic behaviors of soft tissue deformation are transformed into the neural dynamics of cellular neural network. Experiments demonstrate that the proposed neural network approach is capable of modeling the soft tissues' nonlinear deformation and typical mechanical behaviors. The proposed method not only improves ChainMail's linear deformation with the nonlinear characteristics of neural dynamics but also enables the cellular neural network to follow the principle of continuum mechanics to simulate soft tissue deformation.

  10. Contextual and Developmental Differences in the Neural Architecture of Cognitive Control.

    PubMed

    Petrican, Raluca; Grady, Cheryl L

    2017-08-09

    Because both development and context impact functional brain architecture, the neural connectivity signature of a cognitive or affective predisposition may similarly vary across different ages and circumstances. To test this hypothesis, we investigated the effects of age and cognitive versus social-affective context on the stable and time-varying neural architecture of inhibition, the putative core cognitive control component, in a subsample ( N = 359, 22-36 years, 174 men) of the Human Connectome Project. Among younger individuals, a neural signature of superior inhibition emerged in both stable and dynamic connectivity analyses. Dynamically, a context-free signature emerged as stronger segregation of internal cognition (default mode) and environmentally driven control (salience, cingulo-opercular) systems. A dynamic social-affective context-specific signature was observed most clearly in the visual system. Stable connectivity analyses revealed both context-free (greater default mode segregation) and context-specific (greater frontoparietal segregation for higher cognitive load; greater attentional and environmentally driven control system segregation for greater reward value) signatures of inhibition. Superior inhibition in more mature adulthood was typified by reduced segregation in the default network with increasing reward value and increased ventral attention but reduced cingulo-opercular and subcortical system segregation with increasing cognitive load. Failure to evidence this neural profile after the age of 30 predicted poorer life functioning. Our results suggest that distinguishable neural mechanisms underlie individual differences in cognitive control during different young adult stages and across tasks, thereby underscoring the importance of better understanding the interplay among dispositional, developmental, and contextual factors in shaping adaptive versus maladaptive patterns of thought and behavior. SIGNIFICANCE STATEMENT The brain's functional architecture changes across different contexts and life stages. To test whether the neural signature of a trait similarly varies, we investigated cognitive versus social-affective context effects on the stable and time-varying neural architecture of inhibition during a period of neurobehavioral fine-tuning (age 22-36 years). Younger individuals with superior inhibition showed distinguishable context-free and context-specific neural profiles, evidenced in both static and dynamic connectivity analyses. More mature individuals with superior inhibition evidenced only context-specific profiles, revealed in the static connectivity patterns linked to increased reward or cognitive load. Delayed expression of this profile predicted poorer life functioning. Our results underscore the importance of understanding the interplay among dispositional, developmental, and contextual factors in shaping behavior. Copyright © 2017 the authors 0270-6474/17/377711-16$15.00/0.

  11. Nested Neural Networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1992-01-01

    Report presents analysis of nested neural networks, consisting of interconnected subnetworks. Analysis based on simplified mathematical models more appropriate for artificial electronic neural networks, partly applicable to biological neural networks. Nested structure allows for retrieval of individual subpatterns. Requires fewer wires and connection devices than fully connected networks, and allows for local reconstruction of damaged subnetworks without rewiring entire network.

  12. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science.

    PubMed

    Mocanu, Decebal Constantin; Mocanu, Elena; Stone, Peter; Nguyen, Phuong H; Gibescu, Madeleine; Liotta, Antonio

    2018-06-19

    Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdős-Rényi random graph) of two consecutive layers of neurons into a scale-free topology, during learning. Our method replaces artificial neural networks fully-connected layers with sparse ones before training, reducing quadratically the number of parameters, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
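
    The prune-and-regrow idea behind sparse evolutionary training can be illustrated on a toy problem, as in the sketch below: a sparse Erdős-Rényi weight mask is trained by SGD, and after each epoch a fraction of the smallest-magnitude surviving weights is removed while the same number of connections is regrown at random empty positions. The task, layer size and hyperparameters are invented, and this single-layer toy is not the authors' implementation.

        # Toy sketch of the prune-and-regrow idea behind sparse evolutionary training (SET):
        # keep a sparse Erdos-Renyi weight mask, train with SGD, then each epoch drop the
        # smallest-magnitude surviving weights and regrow the same number of new random
        # connections. A single sparse linear layer on synthetic data; not the full method.
        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_out, density = 100, 10, 0.1

        # Synthetic regression task y = X @ W_true with a sparse ground-truth W_true.
        X = rng.standard_normal((500, n_in))
        W_true = rng.standard_normal((n_in, n_out)) * (rng.random((n_in, n_out)) < 0.05)
        Y = X @ W_true

        mask = rng.random((n_in, n_out)) < density        # Erdos-Renyi sparse topology
        W = 0.01 * rng.standard_normal((n_in, n_out)) * mask
        lr, zeta = 1e-3, 0.3                              # learning rate, rewiring fraction

        for epoch in range(30):
            # --- SGD on the masked weights ---
            for _ in range(20):
                idx = rng.choice(len(X), 32, replace=False)
                err = X[idx] @ W - Y[idx]
                W -= lr * (X[idx].T @ err) * mask          # gradients only flow through live links
            # --- evolve topology: prune weakest links, regrow at random empty positions ---
            live = np.flatnonzero(mask)
            n_rewire = int(zeta * live.size)
            weakest = live[np.argsort(np.abs(W.flat[live]))[:n_rewire]]
            mask.flat[weakest] = False
            W.flat[weakest] = 0.0
            empty = np.flatnonzero(~mask)
            regrow = rng.choice(empty, n_rewire, replace=False)
            mask.flat[regrow] = True
            W.flat[regrow] = 0.01 * rng.standard_normal(n_rewire)
            if epoch % 10 == 0:
                loss = np.mean((X @ W - Y) ** 2)
                print(f"epoch {epoch:2d}  live connections {mask.sum():4d}  mse {loss:.3f}")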

  13. Quantum neural networks: Current status and prospects for development

    NASA Astrophysics Data System (ADS)

    Altaisky, M. V.; Kaputkina, N. E.; Krylov, V. A.

    2014-11-01

    The idea of quantum artificial neural networks, first formulated in [34], unites the artificial neural network concept with the quantum computation paradigm. Quantum artificial neural networks were first systematically considered in the PhD thesis by T. Menneer (1998). Based on the works of Menneer and Narayanan [42, 43], Kouda, Matsui, and Nishimura [35, 36], Altaisky [2, 68], Zhou [67], and others, quantum-inspired learning algorithms for neural networks were developed, and are now used in various training programs and computer games [29, 30]. The first practically realizable scaled hardware-implemented model of the quantum artificial neural network is obtained by D-Wave Systems, Inc. [33]. It is a quantum Hopfield network implemented on the basis of superconducting quantum interference devices (SQUIDs). In this work we analyze possibilities and underlying principles of an alternative way to implement quantum neural networks on the basis of quantum dots. A possibility of using quantum neural network algorithms in automated control systems, associative memory devices, and in modeling biological and social networks is examined.

  14. Neural network approaches to capture temporal information

    NASA Astrophysics Data System (ADS)

    van Veelen, Martijn; Nijhuis, Jos; Spaanenburg, Ben

    2000-05-01

    The automated design and construction of neural networks receives growing attention from the neural networks community. Both the growing availability of computing power and the development of mathematical and probabilistic theory have had a strong impact on the design and modelling approaches of neural networks. This impact is most apparent in the use of neural networks for time series prediction. In this paper, we give our views on past, contemporary and future design and modelling approaches to neural forecasting.

  15. The role of symmetry in neural networks and their Laplacian spectra.

    PubMed

    de Lange, Siemon C; van den Heuvel, Martijn P; de Reus, Marcel A

    2016-11-01

    Human and animal nervous systems constitute complexly wired networks that form the infrastructure for neural processing and integration of information. The organization of these neural networks can be analyzed using the so-called Laplacian spectrum, providing a mathematical tool to produce systems-level network fingerprints. In this article, we examine a characteristic central peak in the spectrum of neural networks, including anatomical brain network maps of the mouse, cat and macaque, as well as anatomical and functional network maps of human brain connectivity. We link the occurrence of this central peak to the level of symmetry in neural networks, an intriguing aspect of network organization resulting from network elements that exhibit similar wiring patterns. Specifically, we propose a measure to capture the global level of symmetry of a network and show that, for both empirical networks and network models, the height of the main peak in the Laplacian spectrum is strongly related to node symmetry in the underlying network. Moreover, examination of spectra of duplication-based model networks shows that neural spectra are best approximated using a trade-off between duplication and diversification. Taken together, our results facilitate a better understanding of neural network spectra and the importance of symmetry in neural networks. Copyright © 2016 Elsevier Inc. All rights reserved.
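
    The quantities discussed here and in the related entry below, the normalized Laplacian spectrum and the height of its central peak around eigenvalue 1, can be computed as in the sketch below; the example adjacency matrix is a random graph, not one of the anatomical or functional networks analyzed in the article.

        # Sketch: eigenvalue spectrum of the normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
        # of an undirected, unweighted network, plus the fraction of eigenvalues near 1
        # (the "central peak"). The example adjacency matrix is random, for illustration only.
        import numpy as np

        def normalized_laplacian_spectrum(A):
            deg = A.sum(axis=1)
            d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)    # guard isolated nodes
            L = np.eye(len(A)) - (d_inv_sqrt[:, None] * A) * d_inv_sqrt[None, :]
            return np.sort(np.linalg.eigvalsh(L))                      # eigenvalues lie in [0, 2]

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            n = 60
            A = (rng.random((n, n)) < 0.1).astype(float)
            A = np.triu(A, 1)
            A = A + A.T                                                 # symmetric, no self-loops
            spectrum = normalized_laplacian_spectrum(A)
            print("smallest eigenvalue (should be ~0):", round(spectrum[0], 6))
            print("fraction of eigenvalues near 1:", np.mean(np.abs(spectrum - 1.0) < 0.05))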

  16. Synchronization Control of Neural Networks With State-Dependent Coefficient Matrices.

    PubMed

    Zhang, Junfeng; Zhao, Xudong; Huang, Jun

    2016-11-01

    This brief is concerned with synchronization control of a class of neural networks with state-dependent coefficient matrices. Being different from the existing drive-response neural networks in the literature, a novel model of drive-response neural networks is established. The concepts of uniformly ultimately bounded (UUB) synchronization and convex hull Lyapunov function are introduced. Then, by using the convex hull Lyapunov function approach, the UUB synchronization design of the drive-response neural networks is proposed, and a delay-independent control law guaranteeing the bounded synchronization of the neural networks is constructed. All of the presented conditions are formulated in terms of bilinear matrix inequalities. By comparison, it is shown that the neural networks obtained in this brief are less conservative than those in the literature, and the bounded synchronization is suitable for the novel drive-response neural networks. Finally, an illustrative example is given to verify the validity of the obtained results.

  17. The Laplacian spectrum of neural networks

    PubMed Central

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286

  18. Introduction to Neural Networks.

    DTIC Science & Technology

    1992-03-01

    parallel processing of information that can greatly reduce the time required to perform operations which are needed in pattern recognition.

  19. NL(q) Theory: A Neural Control Framework with Global Asymptotic Stability Criteria.

    PubMed

    Vandewalle, Joos; De Moor, Bart L.R.; Suykens, Johan A.K.

    1997-06-01

    In this paper a framework for model-based neural control design is presented, consisting of nonlinear state space models and controllers, parametrized by multilayer feedforward neural networks. The models and closed-loop systems are transformed into so-called NL(q) system form. NL(q) systems represent a large class of nonlinear dynamical systems consisting of q layers with alternating linear and static nonlinear operators that satisfy a sector condition. For such NL(q)s sufficient conditions for global asymptotic stability, input/output stability (dissipativity with finite L2-gain) and robust stability and performance are presented. The stability criteria are expressed as linear matrix inequalities. In the analysis problem it is shown how stability of a given controller can be checked. In the synthesis problem two methods for neural control design are discussed. In the first method Narendra's dynamic backpropagation for tracking on a set of specific reference inputs is modified with an NL(q) stability constraint in order to ensure, e.g., closed-loop stability. In a second method control design is done without tracking on specific reference inputs, but based on the input/output stability criteria itself, within a standard plant framework as this is done, for example, in H∞ control theory and μ theory. Copyright 1997 Elsevier Science Ltd.

  20. Learning control of inverted pendulum system by neural network driven fuzzy reasoning: The learning function of NN-driven fuzzy reasoning under changes of reasoning environment

    NASA Technical Reports Server (NTRS)

    Hayashi, Isao; Nomura, Hiroyoshi; Wakami, Noboru

    1991-01-01

    Whereas conventional fuzzy reasoning is associated with tuning problems, namely the lack of systematic designs for membership functions and inference rules, a neural network driven fuzzy reasoning (NDF) capable of determining membership functions by neural network is formulated. In the antecedent parts of the neural network driven fuzzy reasoning, the optimum membership function is determined by a neural network, while in the consequent parts, the amount of control for each rule is determined by several other neural networks. By introducing an algorithm of neural network driven fuzzy reasoning, inference rules for making a pendulum stand up from its lowest suspended point are determined to verify the usefulness of the algorithm.

  1. Optimization of neural network architecture using genetic programming improves detection and modeling of gene-gene interactions in studies of human diseases

    PubMed Central

    Ritchie, Marylyn D; White, Bill C; Parker, Joel S; Hahn, Lance W; Moore, Jason H

    2003-01-01

    Background: Appropriate definition of neural network architecture prior to data analysis is crucial for successful data mining. This can be challenging when the underlying model of the data is unknown. The goal of this study was to determine whether optimizing neural network architecture using genetic programming as a machine learning strategy would improve the ability of neural networks to model and detect nonlinear interactions among genes in studies of common human diseases. Results: Using simulated data, we show that a genetic programming optimized neural network approach is able to model gene-gene interactions as well as a traditional back propagation neural network. Furthermore, the genetic programming optimized neural network is better than the traditional back propagation neural network approach in terms of predictive ability and power to detect gene-gene interactions when non-functional polymorphisms are present. Conclusion: This study suggests that a machine learning strategy for optimizing neural network architecture may be preferable to traditional trial-and-error approaches for the identification and characterization of gene-gene interactions in common, complex human diseases. PMID:12846935

  2. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  3. Neural-network-directed alignment of optical systems using the laser-beam spatial filter as an example

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Krasowski, Michael J.; Weiland, Kenneth E.

    1993-01-01

    This report describes an effort at NASA Lewis Research Center to use artificial neural networks to automate the alignment and control of optical measurement systems. Specifically, it addresses the use of commercially available neural network software and hardware to direct alignments of the common laser-beam-smoothing spatial filter. The report presents a general approach for designing alignment records and combining these into training sets to teach optical alignment functions to neural networks and discusses the use of these training sets to train several types of neural networks. Neural network configurations used include the adaptive resonance network, the back-propagation-trained network, and the counter-propagation network. This work shows that neural networks can be used to produce robust sequencers. These sequencers can learn by example to execute the step-by-step procedures of optical alignment and also can learn adaptively to correct for environmentally induced misalignment. The long-range objective is to use neural networks to automate the alignment and operation of optical measurement systems in remote, harsh, or dangerous aerospace environments. This work also shows that when neural networks are trained by a human operator, training sets should be recorded, training should be executed, and testing should be done in a manner that does not depend on intellectual judgments of the human operator.

  4. Resolution of Singularities Introduced by Hierarchical Structure in Deep Neural Networks.

    PubMed

    Nitta, Tohru

    2017-10-01

    We present a theoretical analysis of singular points of artificial deep neural networks, resulting in providing deep neural network models having no critical points introduced by a hierarchical structure. It is considered that such deep neural network models have good nature for gradient-based optimization. First, we show that there exist a large number of critical points introduced by a hierarchical structure in deep neural networks as straight lines, depending on the number of hidden layers and the number of hidden neurons. Second, we derive a sufficient condition for deep neural networks having no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. It is also shown that the existence of critical points introduced by a hierarchical structure is determined by the rank and the regularity of weight matrices for a specific class of deep neural networks. Finally, two kinds of implementation methods of the sufficient conditions to have no critical points are provided. One is a learning algorithm that can avoid critical points introduced by the hierarchical structure during learning (called avoidant learning algorithm). The other is a neural network that does not have some critical points introduced by the hierarchical structure as an inherent property (called avoidant neural network).

  5. The effect of the neural activity on topological properties of growing neural networks.

    PubMed

    Gafarov, F M; Gafarova, V R

    2016-09-01

    The connectivity structure in cortical networks defines how information is transmitted and processed; it is a source of the complex spatiotemporal patterns of the network's development, and the process of creating and deleting connections continues throughout the whole life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrated the neural network growth process from disconnected neurons to fully connected networks. To quantitatively investigate the influence of the network's activity on its topological properties, we compared it with a random growth network that does not depend on the network's activity. Using methods from random graph theory to analyze the network's connection structure, it is shown that growth in neural networks results in the formation of a well-known "small-world" network.

  6. LavaNet—Neural network development environment in a general mine planning package

    NASA Astrophysics Data System (ADS)

    Kapageridis, Ioannis Konstantinou; Triantafyllou, A. G.

    2011-04-01

    LavaNet is a series of scripts written in Perl that gives access to a neural network simulation environment inside a general mine planning package. A well known and a very popular neural network development environment, the Stuttgart Neural Network Simulator, is used as the base for the development of neural networks. LavaNet runs inside VULCAN™—a complete mine planning package with advanced database, modelling and visualisation capabilities. LavaNet is taking advantage of VULCAN's Perl based scripting environment, Lava, to bring all the benefits of neural network development and application to geologists, mining engineers and other users of the specific mine planning package. LavaNet enables easy development of neural network training data sets using information from any of the data and model structures available, such as block models and drillhole databases. Neural networks can be trained inside VULCAN™ and the results be used to generate new models that can be visualised in 3D. Direct comparison of developed neural network models with conventional and geostatistical techniques is now possible within the same mine planning software package. LavaNet supports Radial Basis Function networks, Multi-Layer Perceptrons and Self-Organised Maps.

  7. Creative-Dynamics Approach To Neural Intelligence

    NASA Technical Reports Server (NTRS)

    Zak, Michail A.

    1992-01-01

    Paper discusses approach to mathematical modeling of artificial neural networks exhibiting complicated behaviors reminiscent of creativity and intelligence of biological neural networks. Neural network treated as non-Lipschitzian dynamical system - as described in "Non-Lipschitzian Dynamics For Modeling Neural Networks" (NPO-17814). System serves as tool for modeling of temporal-pattern memories and recognition of complicated spatial patterns.

  8. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    PubMed Central

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights to the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866

  9. Noise-tolerant inverse analysis models for nondestructive evaluation of transportation infrastructure systems using neural networks

    NASA Astrophysics Data System (ADS)

    Ceylan, Halil; Gopalakrishnan, Kasthurirangan; Birkan Bayrak, Mustafa; Guclu, Alper

    2013-09-01

    The need to rapidly and cost-effectively evaluate the present condition of pavement infrastructure is a critical issue concerning the deterioration of ageing transportation infrastructure all around the world. Nondestructive testing (NDT) and evaluation methods are well-suited for characterising materials and determining structural integrity of pavement systems. The falling weight deflectometer (FWD) is a NDT equipment used to assess the structural condition of highway and airfield pavement systems and to determine the moduli of pavement layers. This involves static or dynamic inverse analysis (referred to as backcalculation) of FWD deflection profiles in the pavement surface under a simulated truck load. The main objective of this study was to employ biologically inspired computational systems to develop robust pavement layer moduli backcalculation algorithms that can tolerate noise or inaccuracies in the FWD deflection data collected in the field. Artificial neural systems, also known as artificial neural networks (ANNs), are valuable computational intelligence tools that are increasingly being used to solve resource-intensive complex engineering problems. Unlike the linear elastic layered theory commonly used in pavement layer backcalculation, non-linear unbound aggregate base and subgrade soil response models were used in an axisymmetric finite element structural analysis programme to generate synthetic database for training and testing the ANN models. In order to develop more robust networks that can tolerate the noisy or inaccurate pavement deflection patterns in the NDT data, several network architectures were trained with varying levels of noise in them. The trained ANN models were capable of rapidly predicting the pavement layer moduli and critical pavement responses (tensile strains at the bottom of the asphalt concrete layer, compressive strains on top of the subgrade layer and the deviator stresses on top of the subgrade layer), and also pavement surface deflections with very low average errors comparable with those obtained directly from the finite element analyses.
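
    The noise-injection strategy described here, training on synthetic deflection basins replicated at several noise levels so that the network tolerates field-measurement errors, is sketched below. The synthetic basin generator, noise levels and regressor are placeholders, not the paper's finite-element database or ANN architectures.

        # Sketch of the noise-injection idea: augment a synthetic (e.g. finite-element)
        # deflection-basin database with several noise levels before training, so the
        # trained network tolerates noisy field FWD measurements. Data, noise levels and
        # the regressor below are placeholders, not the paper's database or architecture.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Placeholder synthetic database: 7-sensor deflection basins -> 3 layer moduli (scaled units).
        n_samples = 2000
        moduli = rng.uniform([2.0, 0.2, 0.05], [6.0, 0.8, 0.3], size=(n_samples, 3))
        basins = 1.0 / (moduli @ rng.uniform(0.5, 1.5, size=(3, 7)))   # crude stand-in for FE runs

        # Noise injection: replicate each basin at several noise levels (0%, 5%, 10%).
        noise_levels = [0.0, 0.05, 0.10]
        X = np.vstack([basins * (1.0 + lvl * rng.standard_normal(basins.shape)) for lvl in noise_levels])
        y = np.vstack([moduli] * len(noise_levels))

        model = MLPRegressor(hidden_layer_sizes=(40, 40), max_iter=500, random_state=0)
        model.fit(X, y)

        # A "field" basin with 8% measurement noise; the noise-trained net should still cope.
        noisy = basins[0] * (1.0 + 0.08 * rng.standard_normal(basins[0].shape))
        print("predicted moduli:", np.round(model.predict([noisy])[0], 2))
        print("true moduli:     ", np.round(moduli[0], 2))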

  10. How Neural Networks Learn from Experience.

    ERIC Educational Resources Information Center

    Hinton, Geoffrey E.

    1992-01-01

    Discusses computational studies of learning in artificial neural networks and findings that may provide insights into the learning abilities of the human brain. Describes efforts to test theories about brain information processing, using artificial neural networks. Vignettes include information concerning how a neural network represents…

  11. Contagion processes on the static and activity-driven coupling networks

    NASA Astrophysics Data System (ADS)

    Lei, Yanjun; Jiang, Xin; Guo, Quantong; Ma, Yifang; Li, Meng; Zheng, Zhiming

    2016-03-01

    The evolution of network structure and the spreading of epidemic are common coexistent dynamical processes. In most cases, network structure is treated as either static or time-varying, supposing the whole network is observed in the same time window. In this paper, we consider the epidemics spreading on a network which has both static and time-varying structures. Meanwhile, the time-varying part and the epidemic spreading are supposed to be of the same time scale. We introduce a static and activity-driven coupling (SADC) network model to characterize the coupling between the static ("strong") structure and the dynamic ("weak") structure. Epidemic thresholds of the SIS and SIR models are studied using the SADC model both analytically and numerically under various coupling strategies, where the strong structure is of homogeneous or heterogeneous degree distribution. Theoretical thresholds obtained from the SADC model can both recover and generalize the classical results in static and time-varying networks. It is demonstrated that a weak structure might make the epidemic threshold low in homogeneous networks but high in heterogeneous cases. Furthermore, we show that the weak structure has a substantive effect on the outbreak of the epidemics. This result might be useful in designing some efficient control strategies for epidemics spreading in networks.
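
    A compact simulation in the spirit of the SADC model is sketched below: an SIS process runs on the union of a fixed ("strong") Erdős-Rényi edge set and activity-driven ("weak") edges regenerated at every time step. All parameters and the coupling rule are illustrative and are not taken from the paper.

        # Compact SIS simulation on a coupled network in the spirit of the SADC model:
        # a fixed ("strong") Erdos-Renyi edge set plus activity-driven ("weak") edges
        # regenerated at every time step. All parameters and the coupling are illustrative.
        import numpy as np

        rng = np.random.default_rng(7)
        n, p_static = 500, 0.01          # nodes, static edge probability
        beta, mu, m = 0.06, 0.2, 2       # infection rate, recovery rate, edges per active node
        steps = 300

        static_adj = np.triu(rng.random((n, n)) < p_static, 1)
        static_adj = static_adj | static_adj.T                       # fixed "strong" structure
        activity = rng.uniform(0.01, 0.3, size=n)                    # node activity rates

        infected = rng.random(n) < 0.05                              # initial seeds
        for t in range(steps):
            # Build this step's temporal ("weak") edges from the active nodes.
            temporal_adj = np.zeros((n, n), dtype=bool)
            for i in np.flatnonzero(rng.random(n) < activity):
                targets = rng.choice(n, m, replace=False)
                temporal_adj[i, targets] = True
                temporal_adj[targets, i] = True
            adj = static_adj | temporal_adj                          # coupled structure at time t

            # SIS dynamics: infection along current edges, then recovery.
            n_inf_neighbors = adj.astype(float) @ infected.astype(float)
            p_inf = 1.0 - (1.0 - beta) ** n_inf_neighbors
            new_infections = (~infected) & (rng.random(n) < p_inf)
            recoveries = infected & (rng.random(n) < mu)
            infected = (infected | new_infections) & ~recoveries

        print("final prevalence:", infected.mean())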

  12. Neural network to diagnose lining condition

    NASA Astrophysics Data System (ADS)

    Yemelyanov, V. A.; Yemelyanova, N. Y.; Nedelkin, A. A.; Zarudnaya, M. V.

    2018-03-01

    The paper presents data on the problem of diagnosing the lining condition at the iron and steel works. The authors describe the neural network structure and software that are designed and developed to determine the lining burnout zones. The simulation results of the proposed neural networks are presented. The authors note the low learning and classification errors of the proposed neural networks. To realize the proposed neural network, the specialized software has been developed.

  13. [Measurement and performance analysis of functional neural network].

    PubMed

    Li, Shan; Liu, Xinyu; Chen, Yan; Wan, Hong

    2018-04-01

    The measurement of networks is one of the important research problems in resolving the information processing mechanisms of neuronal populations using complex network theory. For the quantitative measurement of functional neural networks, the relation between the network topology and the measurement indexes, i.e. the clustering coefficient, the global efficiency, the characteristic path length and the transitivity, was analyzed. Then, a spike-based functional neural network was established, and the simulation results showed that the measured network could represent the original connections among neurons. On the basis of this work, the encoding of the pigeon's motion behaviors by the functional neural network in the nidopallium caudolaterale (NCL) was studied. We found that the NCL functional neural network effectively encoded the motion behaviors of the pigeon, and there were significant differences in the four indexes among left-turning, forward motion and right-turning. Overall, the proposed method of establishing a spike-based functional neural network is feasible, and it is an effective tool for analyzing the brain's information processing mechanisms.

  14. Neural network error correction for solving coupled ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
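
    A minimal sketch of the error-learning idea, under simplifying assumptions: a harmonic oscillator integrated with a fixed-step Euler scheme stands in for Runge-Kutta on the MD problem, and a small scikit-learn network is trained to predict the local error of the coarse integrator against the exact phase-space rotation. None of this reproduces the NASA programs mentioned in the abstract.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(2)
        h = 0.1                                    # coarse integrator step size

        def euler_step(s, h):
            x, v = s[..., 0], s[..., 1]
            return np.stack([x + h * v, v - h * x], axis=-1)

        def exact_step(s, h):
            # The harmonic oscillator advances by a rotation of angle h in phase space
            c, si = np.cos(h), np.sin(h)
            x, v = s[..., 0], s[..., 1]
            return np.stack([c * x + si * v, -si * x + c * v], axis=-1)

        # Training data: states sampled from phase space, targets = local Euler error
        states = rng.uniform(-1.5, 1.5, size=(5000, 2))
        errors = (exact_step(states, h) - euler_step(states, h)) / h**2   # rescaled
        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
        net.fit(states, errors)

        # Integrate one trajectory with and without the learned correction
        s_plain = np.array([1.0, 0.0])
        s_corr = np.array([1.0, 0.0])
        for _ in range(100):
            s_plain = euler_step(s_plain, h)
            s_corr = euler_step(s_corr, h) + h**2 * net.predict(s_corr.reshape(1, -1))[0]

        s_true = exact_step(np.array([1.0, 0.0]), 100 * h)   # exact state after 100 steps
        print("error, plain Euler :", np.linalg.norm(s_plain - s_true))
        print("error, NN-corrected:", np.linalg.norm(s_corr - s_true))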

  15. Artificial and Bayesian Neural Networks

    PubMed

    Korhani Kangi, Azam; Bahrampour, Abbas

    2018-02-26

    Introduction and purpose: In recent years the use of neural networks, which require no prior assumptions, for investigating prognosis in survival data has increased. Artificial neural networks (ANN) use many small, connected processing units to solve problems in a manner inspired by the human brain. Bayesian neural networks (BNN) constitute a neural-based approach to modeling and non-linearization of complex problems using special algorithms and statistical methods. In Iran, gastric cancer ranks first in incidence among men and third among women. The aim of the present study was to assess the value of an artificial neural network and a Bayesian neural network for modeling and predicting the probability of death in gastric cancer patients. Materials and Methods: We used information on 339 patients aged 20 to 90 years with confirmed gastric cancer, referred to Afzalipoor and Shahid Bahonar Hospitals in Kerman City from 2001 to 2015. A three-layer perceptron neural network (ANN) and a Bayesian neural network (BNN) were used to predict the probability of mortality from the available data. To investigate differences between the models, sensitivity, specificity, accuracy and the area under the receiver operating characteristic curve (AUROC) were generated. Results: The sensitivity and specificity of the artificial neural network and Bayesian neural network models were 0.882, 0.903 and 0.954, 0.909, respectively. Prediction accuracy and the AUROC for the two models were 0.891, 0.944 and 0.935, 0.961. The age at diagnosis of gastric cancer was most important for predicting survival, followed by tumor grade, morphology, gender, smoking history, opium consumption, receiving chemotherapy, presence of metastasis, tumor stage, receiving radiotherapy, and being resident in a village. Conclusion: The findings of the present study indicated that the Bayesian neural network is preferable to an artificial neural network for predicting survival of gastric cancer patients in Iran.
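
    For readers reproducing this kind of model comparison, the fragment below shows one standard way to obtain sensitivity, specificity, accuracy and AUROC from predicted probabilities with scikit-learn; the labels and scores are synthetic placeholders rather than the study data.

        import numpy as np
        from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

        rng = np.random.default_rng(3)

        # Placeholder data: 1 = died, 0 = survived, with model probabilities of death
        y_true = rng.integers(0, 2, size=339)
        y_prob = np.clip(0.7 * y_true + rng.normal(0.2, 0.25, size=339), 0, 1)
        y_pred = (y_prob >= 0.5).astype(int)

        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        print(f"sensitivity: {tp / (tp + fn):.3f}")
        print(f"specificity: {tn / (tn + fp):.3f}")
        print(f"accuracy   : {accuracy_score(y_true, y_pred):.3f}")
        print(f"AUROC      : {roc_auc_score(y_true, y_prob):.3f}")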

  16. Model Of Neural Network With Creative Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Barhen, Jacob

    1993-01-01

    Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitably designed neural network; believed to be related to spontaneity and creativity of biological neural networks.

  17. Factors Released from Endothelial Cells Exposed to Flow Impact Adhesion, Proliferation, and Fate Choice in the Adult Neural Stem Cell Lineage.

    PubMed

    Dumont, Courtney M; Piselli, Jennifer M; Kazi, Nadeem; Bowman, Evan; Li, Guoyun; Linhardt, Robert J; Temple, Sally; Dai, Guohao; Thompson, Deanna M

    2017-08-15

    The microvasculature within the neural stem cell (NSC) niche promotes self-renewal and regulates lineage progression. Previous work identified endothelial-produced soluble factors as key regulators of neural progenitor cell (NPC) fate and proliferation; however, endothelial cells (ECs) are sensitive to local hemodynamics, and the effect of this key physiological process has not been defined. In this study, we evaluated adult mouse NPC response to soluble factors isolated from static or dynamic (flow) EC cultures. Endothelial factors generated under dynamic conditions significantly increased neuronal differentiation, while those released under static conditions stimulated oligodendrocyte differentiation. Flow increases EC release of neurogenic factors and of heparin sulfate glycosaminoglycans that increase their bioactivity, likely underlying the enhanced neuronal differentiation. Additionally, endothelial factors, especially from static conditions, promoted adherent growth. Together, our data suggest that blood flow may impact proliferation, adhesion, and the neuron-glial fate choice of adult NPCs, with implications for diseases and aging that reduce flow.

  18. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    PubMed

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable at the noise-constrained estimate. Because the noise-constrained estimate is robust against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, owing to its low-dimensional model, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed recurrent neural network-based speech enhancement algorithm achieves good performance with fast computation and effective noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
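
    A compact sketch of the Kalman-filtering half of such a pipeline, assuming the autoregressive coefficients are already known (in the paper they would come from the recurrent-network estimator); the AR(2) signal, noise levels and NumPy implementation are illustrative.

        import numpy as np

        rng = np.random.default_rng(4)

        # True AR(2) "speech" model: x_t = a1*x_{t-1} + a2*x_{t-2} + w_t
        a1, a2, q, r = 1.6, -0.8, 0.05, 0.5
        T = 400
        x = np.zeros(T)
        for t in range(2, T):
            x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.normal(0, np.sqrt(q))
        y = x + rng.normal(0, np.sqrt(r), T)          # noisy observation

        # State-space form: s_t = [x_t, x_{t-1}],  y_t = H s_t + v_t
        F = np.array([[a1, a2], [1.0, 0.0]])
        H = np.array([[1.0, 0.0]])
        Q = np.array([[q, 0.0], [0.0, 0.0]])
        R = np.array([[r]])

        s, P = np.zeros(2), np.eye(2)
        x_hat = np.zeros(T)
        for t in range(T):
            s, P = F @ s, F @ P @ F.T + Q                         # predict
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
            s = s + K @ (y[t] - H @ s)                            # update
            P = (np.eye(2) - K @ H) @ P
            x_hat[t] = s[0]

        print("noisy-input MSE:", np.mean((y - x) ** 2))
        print("filtered MSE   :", np.mean((x_hat - x) ** 2))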

  19. Thermalnet: a Deep Convolutional Network for Synthetic Thermal Image Generation

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Gorbatsevich, V. S.; Mizginov, V. A.

    2017-05-01

    Deep convolutional neural networks have dramatically changed the landscape of modern computer vision. Nowadays, methods based on deep neural networks show the best performance among image recognition and object detection algorithms. While the refinement of network architectures has received much scholarly attention, from the practical point of view the preparation of a large image dataset for successful training of a neural network has become one of the major challenges. This challenge is particularly profound for image recognition in wavelengths lying outside the visible spectrum. For example, no infrared or radar image datasets large enough for successful training of a deep neural network are available to date in the public domain. Recent advances in deep neural networks prove that they are also capable of arbitrary image transformations such as super-resolution image generation, grayscale image colorisation and imitation of the style of a given artist. Thus a natural question arises: how can deep neural networks be used to augment existing large image datasets? This paper is focused on the development of the Thermalnet deep convolutional neural network for augmentation of existing large visible image datasets with synthetic thermal images. The Thermalnet network architecture is inspired by colorisation deep neural networks.

  20. Determining geophysical properties from well log data using artificial neural networks and fuzzy inference systems

    NASA Astrophysics Data System (ADS)

    Chang, Hsien-Cheng

    Two novel synergistic systems consisting of artificial neural networks and fuzzy inference systems are developed to determine geophysical properties by using well log data. These systems are employed to improve the determination accuracy in carbonate rocks, which are generally more complex than siliciclastic rocks. One system, consisting of a single adaptive resonance theory (ART) neural network and three fuzzy inference systems (FISs), is used to determine the permeability category. The other system, which is composed of three ART neural networks and a single FIS, is employed to determine the lithofacies. The geophysical properties studied in this research, permeability category and lithofacies, are treated as categorical data. The permeability values are transformed into a "permeability category" to account for the effects of scale differences between core analyses and well logs, and heterogeneity in the carbonate rocks. The ART neural networks dynamically cluster the input data sets into different groups. The FIS is used to incorporate geologic experts' knowledge, which is usually in linguistic forms, into systems. These synergistic systems thus provide viable alternative solutions to overcome the effects of heterogeneity, the uncertainties of carbonate rock depositional environments, and the scarcity of well log data. The results obtained in this research show promising improvements over backpropagation neural networks. For the permeability category, the prediction accuracies are 68.4% and 62.8% for the multiple-single ART neural network-FIS and a single backpropagation neural network, respectively. For lithofacies, the prediction accuracies are 87.6%, 79%, and 62.8% for the single-multiple ART neural network-FIS, a single ART neural network, and a single backpropagation neural network, respectively. The sensitivity analysis results show that the multiple-single ART neural networks-FIS and a single ART neural network possess the same matching trends in determining lithofacies. This research shows that the adaptive resonance theory neural networks enable decision-makers to clearly distinguish the importance of different pieces of data which are useful in three-dimensional subsurface modeling. Geologic experts' knowledge can be easily applied and maintained by using the fuzzy inference systems.

  1. Reducing neural network training time with parallel processing

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1995-01-01

    Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer: to determine the number of nodes on the hidden layer of the smaller neural networks; to choose the initial training weights; and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.

  2. Application of the ANNA neural network chip to high-speed character recognition.

    PubMed

    Sackinger, E; Boser, B E; Bromley, J; Lecun, Y; Jackel, L D

    1992-01-01

    A neural network with 136,000 connections for recognition of handwritten digits has been implemented using a mixed analog/digital neural network chip. The neural network chip is capable of processing 1000 characters/s. The recognition system has essentially the same error rate (5%) as a simulation of the network with 32-b floating-point precision.

  3. Machine Learning and Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Chapline, George

    The author has previously pointed out some similarities between self-organizing neural networks and quantum mechanics. These types of neural networks were originally conceived of as a way of emulating the cognitive capabilities of the human brain. Recently, extensions of these networks, collectively referred to as deep learning networks, have strengthened the connection between self-organizing neural networks and human cognitive capabilities. In this note we consider whether hardware quantum devices might be useful for emulating neural networks with human-like cognitive capabilities, or alternatively whether implementations of deep learning neural networks using conventional computers might lead to better algorithms for solving the many-body Schrödinger equation.

  4. Using fuzzy logic to integrate neural networks and knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Yen, John

    1991-01-01

    Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used for integrating and interpreting the outputs of neural networks. The symbolic system captures meta-level information about neural networks and defines its interaction with neural networks through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing the situations in which neural networks require certain control actions. The neural nets, on the other hand, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. By combining neural nets and symbolic systems at their system levels through the use of fuzzy logic, the author's approach alleviates current difficulties in reconciling differences between low-level data processing mechanisms of neural nets and artificial intelligence systems.

  5. A neural network application to classification of health status of HIV/AIDS patients.

    PubMed

    Kwak, N K; Lee, C

    1997-04-01

    This paper presents an application of neural networks to classify and to predict the health status of HIV/AIDS patients. A neural network model in classifying both the well and not-well health status of HIV/AIDS patients is developed and evaluated in terms of validity and reliability of the test. Several different neural network topologies are applied to AIDS Cost and Utilization Survey (ACSUS) datasets in order to demonstrate the neural network's capability.

  6. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    NASA Astrophysics Data System (ADS)

    Chernoded, Andrey; Dudko, Lev; Myagkov, Igor; Volkov, Petr

    2017-10-01

    Most modern analyses in high energy physics use signal-versus-background classification techniques from machine learning, and neural networks in particular. The deep learning neural network is the most promising modern technique for separating signal and background, and nowadays it can be widely and successfully implemented as part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of top quark analysis.

  7. Efficient reinforcement learning of a reservoir network model of parametric working memory achieved with a cluster population winner-take-all readout mechanism.

    PubMed

    Cheng, Zhenbo; Deng, Zhidong; Hu, Xiaolin; Zhang, Bo; Yang, Tianming

    2015-12-01

    The brain often has to make decisions based on information stored in working memory, but the neural circuitry underlying working memory is not fully understood. Many theoretical efforts have been focused on modeling the persistent delay period activity in the prefrontal areas that is believed to represent working memory. Recent experiments reveal that the delay period activity in the prefrontal cortex is neither static nor homogeneous as previously assumed. Models based on reservoir networks have been proposed to model such a dynamical activity pattern. The connections between neurons within a reservoir are random and do not require explicit tuning. Information storage does not depend on the stable states of the network. However, it is not clear how the encoded information can be retrieved for decision making with a biologically realistic algorithm. We therefore built a reservoir-based neural network to model the neuronal responses of the prefrontal cortex in a somatosensory delayed discrimination task. We first illustrate that the neurons in the reservoir exhibit a heterogeneous and dynamical delay period activity observed in previous experiments. Then we show that a cluster population circuit decodes the information from the reservoir with a winner-take-all mechanism and contributes to the decision making. Finally, we show that the model achieves a good performance rapidly by shaping only the readout with reinforcement learning. Our model reproduces important features of previous behavior and neurophysiology data. We illustrate for the first time how task-specific information stored in a reservoir network can be retrieved with a biologically plausible reinforcement learning training scheme. Copyright © 2015 the American Physiological Society.
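
    The reservoir-plus-readout pipeline can be sketched as follows, with two simplifications that should be kept in mind: a plain echo state reservoir replaces the biophysical network, and a logistic-regression readout stands in for the cluster population winner-take-all readout trained by reinforcement learning. Stimulus values, network size, leak rate and delay length are arbitrary choices.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)

        N, leak = 200, 0.3                                 # reservoir size and leak rate
        W = rng.normal(0, 1, (N, N)) / np.sqrt(N)
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius 0.9
        W_in = rng.normal(0, 1, N)

        def run_trial(f1, f2, delay=20):
            # Present f1, a silent delay, then f2; return the final reservoir state
            u = np.concatenate([np.full(10, f1), np.zeros(delay), np.full(10, f2)])
            x = np.zeros(N)
            for u_t in u:
                x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u_t)
            return x

        # Delayed-comparison trials: decide whether the first stimulus was larger
        trials, labels = [], []
        for _ in range(400):
            f1, f2 = rng.uniform(0.2, 1.0, 2)
            trials.append(run_trial(f1, f2))
            labels.append(int(f1 > f2))
        X, y = np.array(trials), np.array(labels)

        # Simple trained readout (stand-in for the winner-take-all population)
        readout = LogisticRegression(max_iter=1000).fit(X[:300], y[:300])
        print("held-out accuracy:", readout.score(X[300:], y[300:]))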

  8. Improvement of the Hopfield Neural Network by MC-Adaptation Rule

    NASA Astrophysics Data System (ADS)

    Zhou, Zhen; Zhao, Hong

    2006-06-01

    We show that the performance of Hopfield neural networks, especially the quality of recall and the effective storage capacity, can be greatly improved by making use of a recently presented neural network design method, without altering the overall structure of the network. In the improved neural network, a memory pattern is recalled exactly from initial states having a given degree of similarity with the memory pattern, and thus one can avoid applying the overlap criterion used in the Hopfield neural networks.
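
    For a reference point, here is a baseline Hopfield network with standard Hebbian storage and asynchronous recall; the MC-adaptation design rule described above is not reproduced, so this shows only the conventional behaviour that the authors improve upon.

        import numpy as np

        rng = np.random.default_rng(6)

        N, P = 100, 5                              # neurons, stored patterns
        patterns = rng.choice([-1, 1], size=(P, N))

        # Hebbian weight matrix with zero diagonal
        W = patterns.T @ patterns / N
        np.fill_diagonal(W, 0.0)

        def recall(probe, sweeps=20):
            # Asynchronous deterministic updates starting from a noisy probe state
            s = probe.copy()
            for _ in range(sweeps):
                for i in rng.permutation(N):
                    s[i] = 1 if W[i] @ s >= 0 else -1
            return s

        # Flip 15% of the bits of a stored pattern and try to recall it
        probe = patterns[0].copy()
        flipped = rng.choice(N, int(0.15 * N), replace=False)
        probe[flipped] *= -1

        overlap = recall(probe) @ patterns[0] / N
        print(f"overlap with the stored pattern: {overlap:.2f}")   # 1.0 = exact recall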

  9. The Energy Coding of a Structural Neural Network Based on the Hodgkin-Huxley Model.

    PubMed

    Zhu, Zhenyu; Wang, Rubin; Zhu, Fengyun

    2018-01-01

    Based on the Hodgkin-Huxley model, the present study established a fully connected structural neural network to simulate the neural activity and energy consumption of the network by neural energy coding theory. The numerical simulation result showed that the periodicity of the network energy distribution was positively correlated to the number of neurons and coupling strength, but negatively correlated to signal transmitting delay. Moreover, a relationship was established between the energy distribution feature and the synchronous oscillation of the neural network, which showed that when the proportion of negative energy in power consumption curve was high, the synchronous oscillation of the neural network was apparent. In addition, comparison with the simulation result of structural neural network based on the Wang-Zhang biophysical model of neurons showed that both models were essentially consistent.

  10. An adaptive Hinfinity controller design for bank-to-turn missiles using ridge Gaussian neural networks.

    PubMed

    Lin, Chuan-Kai; Wang, Sheng-De

    2004-11-01

    A new autopilot design for bank-to-turn (BTT) missiles is presented. In the design of autopilot, a ridge Gaussian neural network with local learning capability and fewer tuning parameters than Gaussian neural networks is proposed to model the controlled nonlinear systems. We prove that the proposed ridge Gaussian neural network, which can be a universal approximator, equals the expansions of rotated and scaled Gaussian functions. Although ridge Gaussian neural networks can approximate the nonlinear and complex systems accurately, the small approximation errors may affect the tracking performance significantly. Therefore, by employing the Hinfinity control theory, it is easy to attenuate the effects of the approximation errors of the ridge Gaussian neural networks to a prescribed level. Computer simulation results confirm the effectiveness of the proposed ridge Gaussian neural networks-based autopilot with Hinfinity stabilization.

  11. Constraint satisfaction adaptive neural network and heuristics combined approaches for generalized job-shop scheduling.

    PubMed

    Yang, S; Wang, D

    2000-01-01

    This paper presents a constraint satisfaction adaptive neural network, together with several heuristics, to solve the generalized job-shop scheduling problem, one of NP-complete constraint satisfaction problems. The proposed neural network can be easily constructed and can adaptively adjust its weights of connections and biases of units based on the sequence and resource constraints of the job-shop scheduling problem during its processing. Several heuristics that can be combined with the neural network are also presented. In the combined approaches, the neural network is used to obtain feasible solutions, the heuristic algorithms are used to improve the performance of the neural network and the quality of the obtained solutions. Simulations have shown that the proposed neural network and its combined approaches are efficient with respect to the quality of solutions and the solving speed.

  12. Financial time series prediction using spiking neural networks.

    PubMed

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, is presented for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional" rate-encoded neural networks, a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data, US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrates the applicability of the Polychronous Spiking Network to financial data forecasting and, in turn, indicates the potential of using such networks over traditional systems in difficult-to-manage non-stationary environments.

  13. Non-Intrusive Gaze Tracking Using Artificial Neural Networks

    DTIC Science & Technology

    1994-01-05

    We have developed an artificial neural network based gaze tracking system which can be customized to individual users. A three layer feed forward...empirical analysis of the performance of a large number of artificial neural network architectures for this task. Suggestions for further explorations...for neurally based gaze trackers are presented, and are related to other similar artificial neural network applications such as autonomous road following.

  14. Neural dynamics based on the recognition of neural fingerprints

    PubMed Central

    Carrillo-Medina, José Luis; Latorre, Roberto

    2015-01-01

    Experimental evidence has revealed the existence of characteristic spiking features in different neural signals, e.g., individual neural signatures identifying the emitter or functional signatures characterizing specific tasks. These neural fingerprints may play a critical role in neural information processing, since they allow receptors to discriminate or contextualize incoming stimuli. This could be a powerful strategy for neural systems that greatly enhances the encoding and processing capacity of these networks. Nevertheless, the study of information processing based on the identification of specific neural fingerprints has attracted little attention. In this work, we study (i) the emerging collective dynamics of a network of neurons that communicate with each other by exchange of neural fingerprints and (ii) the influence of the network topology on the self-organizing properties within the network. Complex collective dynamics emerge in the network in the presence of stimuli. Predefined inputs, i.e., specific neural fingerprints, are detected and encoded into coexisting patterns of activity that propagate throughout the network with different spatial organization. The patterns evoked by a stimulus can survive after the stimulation is over, which provides memory mechanisms to the network. The results presented in this paper suggest that neural information processing based on neural fingerprints can be a plausible, flexible, and powerful strategy. PMID:25852531

  15. Structural reliability calculation method based on the dual neural network and direct integration method.

    PubMed

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty receives wide attention from engineers and scholars because it reflects the structural characteristics and the actual load-bearing conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but there are still mathematical difficulties in the calculation of multiple integrals. Therefore, a dual neural network method is proposed in this paper for calculating multiple integrals. The dual neural network consists of two neural networks: neural network A is used to learn the integrand function, and neural network B is used to simulate the original function. According to the derivative relationship between the network output and the network input, neural network B is derived from neural network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean value first-order second-moment method demonstrate that the proposed method is an efficient and accurate method for structural reliability problems.
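
    A one-dimensional toy version of the dual-network construction, written with PyTorch autograd as an assumed framework: network B acts as the antiderivative surrogate (the "original function"), network A is defined as its derivative and fitted to the integrand, and a definite integral is then read off from B at the bounds. This is an illustrative reading of the idea, not the authors' implementation or their normalized performance function.

        import torch

        torch.manual_seed(0)

        def f(x):
            # Integrand to learn (stand-in for a normalized performance function)
            return torch.exp(-x ** 2) * torch.cos(3 * x)

        # Network B simulates the "original function" (an antiderivative surrogate)
        B = torch.nn.Sequential(
            torch.nn.Linear(1, 64), torch.nn.Tanh(),
            torch.nn.Linear(64, 64), torch.nn.Tanh(),
            torch.nn.Linear(64, 1),
        )

        def A(x):
            # Network A = dB/dx, obtained through autograd (the derived "dual" network)
            x = x.requires_grad_(True)
            return torch.autograd.grad(B(x).sum(), x, create_graph=True)[0]

        opt = torch.optim.Adam(B.parameters(), lr=1e-3)
        for step in range(3000):
            x = torch.empty(256, 1).uniform_(-2.0, 2.0)
            loss = torch.mean((A(x) - f(x)) ** 2)   # fit the derivative to the integrand
            opt.zero_grad()
            loss.backward()
            opt.step()

        # Definite integral of f over [-2, 2] read off from the antiderivative surrogate
        a, b = torch.tensor([[-2.0]]), torch.tensor([[2.0]])
        with torch.no_grad():
            estimate = (B(b) - B(a)).item()

        xs = torch.linspace(-2.0, 2.0, 20001)
        reference = torch.trapz(f(xs), xs).item()   # crude numerical reference
        print(f"NN estimate: {estimate:.5f}   trapezoid reference: {reference:.5f}")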

  16. Patterns of synchrony for feed-forward and auto-regulation feed-forward neural networks.

    PubMed

    Aguiar, Manuela A D; Dias, Ana Paula S; Ferreira, Flora

    2017-01-01

    We consider feed-forward and auto-regulation feed-forward neural (weighted) coupled cell networks. In feed-forward neural networks, cells are arranged in layers such that the cells of the first layer have an empty input set and the cells of each subsequent layer receive inputs only from cells of the previous layer. An auto-regulation feed-forward neural coupled cell network is a feed-forward neural network where, additionally, some cells of the first layer have auto-regulation, that is, a self-loop. Given a network structure, a robust pattern of synchrony is a space defined in terms of equalities of cell coordinates that is flow-invariant for any coupled cell system (with additive input structure) associated with the network. In this paper, we describe the robust patterns of synchrony for feed-forward and auto-regulation feed-forward neural networks. Regarding feed-forward neural networks, we show that only cells in the same layer can synchronize. On the other hand, in the presence of auto-regulation, we prove that cells in different layers can synchronize in a robust way, and we give a characterization of the possible patterns of synchrony that can occur for auto-regulation feed-forward neural networks.

  17. Pattern classification and recognition of invertebrate functional groups using self-organizing neural networks.

    PubMed

    Zhang, WenJun

    2007-07-01

    Self-organizing neural networks can be used to mimic non-linear systems. The main objective of this study is to perform pattern classification and recognition on sampling information using two self-organizing neural network models. Invertebrate functional groups sampled in an irrigated rice field were classified and recognized using a one-dimensional self-organizing map and a self-organizing competitive learning neural network. Comparisons between the neural network models, distance (similarity) measures, and numbers of neurons were conducted. The results showed that both self-organizing map and self-organizing competitive learning neural network models were effective in pattern classification and recognition of sampling information. Overall, the performance of the one-dimensional self-organizing map neural network was better than that of the self-organizing competitive learning neural network. The number of neurons determined the number of classes in the classification. Different neural network models with various distance (similarity) measures yielded similar classifications, although some differences, dependent upon the specific network structure, were found. The pattern of an unrecognized functional group was recognized with the self-organizing neural network. A relatively consistent classification indicated that the following invertebrate functional groups, terrestrial blood sucker; terrestrial flyer; tourist (nonpredatory species with no known functional role other than as prey in the ecosystem); gall former; collector (gatherer, deposit feeder); predator and parasitoid; leaf miner; idiobiont (acarine ectoparasitoid), were classified into the same group, and the following invertebrate functional groups, external plant feeder; terrestrial crawler, walker, jumper or hunter; neustonic (water surface) swimmer (semi-aquatic), were classified into another group. It was concluded that reliable conclusions can be drawn from comparisons of different neural network models that use different distance (similarity) measures, and that results with larger consistency are more reliable.

  18. Accelerating Learning By Neural Networks

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1992-01-01

    Electronic neural networks made to learn faster by use of terminal teacher forcing. Method of supervised learning involves addition of teacher forcing functions to excitations fed as inputs to output neurons. Initially, teacher forcing functions are strong enough to force outputs to desired values; subsequently, these functions decay with time. When learning successfully completed, terminal teacher forcing vanishes, and dynamics of neural network become equivalent to those of conventional neural network. Simulated neural network with terminal teacher forcing learned to produce close approximation of circular trajectory in 400 iterations.

  19. Thermoelastic steam turbine rotor control based on neural network

    NASA Astrophysics Data System (ADS)

    Rzadkowski, Romuald; Dominiczak, Krzysztof; Radulski, Wojciech; Szczepanik, R.

    2015-12-01

    Considered here are Nonlinear Auto-Regressive neural networks with eXogenous inputs (NARX) as a mathematical model of a steam turbine rotor for controlling steam turbine stress on-line. In order to obtain neural networks that locate critical stress and temperature points in the steam turbine during transient states, an FE rotor model was built. This model was used to train the neural networks on the basis of steam turbine transient operating data. The training included nonlinearity related to steam turbine expansion, heat exchange and rotor material properties during transients. Such neural networks are algorithms that can be implemented on PLC controllers, which allows neural networks to be applied to control steam turbine stress in industrial power plants.
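
    A schematic NARX-style surrogate can be built from lagged inputs and outputs, as in the sketch below, which uses a scikit-learn network on a synthetic first-order plant standing in for the FE rotor model; the lag orders, plant dynamics and all numbers are assumptions for illustration only.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(7)

        # Synthetic "plant": rotor speed responds to fuel flow with a first-order lag
        T = 2000
        u = np.clip(np.cumsum(rng.normal(0, 0.05, T)), 0, 3)      # fuel flow
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = 0.95 * y[t - 1] + 0.05 * u[t - 1] ** 1.5 + rng.normal(0, 0.002)

        # NARX regressors: predict y(t) from lagged outputs y(t-1..t-3) and inputs u(t-1..t-3)
        n_lag = 3
        X = np.column_stack(
            [y[n_lag - k - 1 : T - k - 1] for k in range(n_lag)]
            + [u[n_lag - k - 1 : T - k - 1] for k in range(n_lag)]
        )
        target = y[n_lag:]

        split = int(0.8 * len(target))
        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
        model.fit(X[:split], target[:split])
        print("one-step-ahead R^2 on held-out data:", model.score(X[split:], target[split:]))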

  20. The use of artificial neural networks in experimental data acquisition and aerodynamic design

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J., Jr.

    1991-01-01

    It is proposed that an artificial neural network be used to construct an intelligent data acquisition system. The artificial neural networks (ANN) model has a potential for replacing traditional procedures as well as for use in computational fluid dynamics validation. Potential advantages of the ANN model are listed. As a proof of concept, the author modeled a NACA 0012 airfoil at specific conditions, using the neural network simulator NETS, developed by James Baffes of the NASA Johnson Space Center. The neural network predictions were compared to the actual data. It is concluded that artificial neural networks can provide an elegant and valuable class of mathematical tools for data analysis.

  1. Research on artificial neural network intrusion detection photochemistry based on the improved wavelet analysis and transformation

    NASA Astrophysics Data System (ADS)

    Li, Hong; Ding, Xue

    2017-03-01

    This paper combines wavelet analysis and wavelet transform theory with artificial neural networks: point feature attributes are preprocessed before intrusion detection so that they are suitable for an improved wavelet neural network. The resulting intrusion classification model gains better adaptability and self-learning ability, is better able to solve the intrusion detection problem in the field, requires less storage space, and trains faster, all of which contribute to the performance of the constructed neural network. Finally, simulation experiments on the KDDCup99 data set show that this method not only reduces the complexity of constructing the wavelet neural network but also ensures the accuracy of the intrusion classification.

  2. A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application.

    PubMed

    Li, Shuai; Li, Yangming; Wang, Zheng

    2013-03-01

    This paper presents a class of recurrent neural networks to solve quadratic programming problems. Different from most existing recurrent neural networks for solving quadratic programming problems, the proposed neural network model converges in finite time and the activation function is not required to be a hard-limiting function for finite convergence time. The stability, finite-time convergence property and the optimality of the proposed neural network for solving the original quadratic programming problem are proven in theory. Extensive simulations are performed to evaluate the performance of the neural network with different parameters. In addition, the proposed neural network is applied to solving the k-winner-take-all (k-WTA) problem. Both theoretical analysis and numerical simulations validate the effectiveness of our method for solving the k-WTA problem. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  4. Firing patterns transition and desynchronization induced by time delay in neural networks

    NASA Astrophysics Data System (ADS)

    Huang, Shoufang; Zhang, Jiqian; Wang, Maosheng; Hu, Chin-Kun

    2018-06-01

    We used the Hindmarsh-Rose (HR) model (Hindmarsh and Rose, 1984) to study the effect of time delay on the transition of firing behaviors and on desynchronization in neural networks. As the time delay is increased, neural networks exhibit a diversity of firing behaviors, including regular spiking or bursting and firing pattern transitions (FPTs). Meanwhile, the desynchronization of firing and unstable bursting with decreasing amplitude in the neural system are also increasingly enhanced with increasing time delay. Furthermore, we studied the effect of coupling strength and network randomness on these phenomena. Our results imply that time delays can induce transitions and desynchronization of firing behaviors in neural networks. These findings provide new insight into the role of time delay in the firing activities of neural networks, and can help to better understand firing phenomena in complex neural network systems. A possible mechanism in the brain that can cause an increase of time delay is discussed.
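
    For reference, the sketch below integrates a single, uncoupled Hindmarsh-Rose neuron with a simple Euler scheme using a commonly quoted parameter set; the delay-coupled network, coupling strength and randomness studied in the paper are omitted.

        import numpy as np

        # Hindmarsh-Rose single-neuron model with a commonly used parameter set
        a, b, c, d = 1.0, 3.0, 1.0, 5.0
        r, s, x_rest, I_ext = 0.006, 4.0, -1.6, 3.0

        dt, T = 0.01, 2000.0
        x, y, z = -1.6, -10.0, 2.0

        spikes = 0
        for _ in range(int(T / dt)):
            dx = y - a * x**3 + b * x**2 - z + I_ext
            dy = c - d * x**2 - y
            dz = r * (s * (x - x_rest) - z)
            x_new = x + dt * dx
            if x < 1.0 <= x_new:          # upward threshold crossing counts as a spike
                spikes += 1
            x, y, z = x_new, y + dt * dy, z + dt * dz

        print(f"spikes in {T:.0f} time units: {spikes}")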

  5. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization.

    PubMed

    Liu, Qingshan; Guo, Zhishan; Wang, Jun

    2012-02-01

    In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Applications of artificial neural nets in clinical biomechanics.

    PubMed

    Schöllhorn, W I

    2004-11-01

    The purpose of this article is to provide an overview of current applications of artificial neural networks in the area of clinical biomechanics. The body of literature on artificial neural networks grew intractably vast during the last 15 years. Conventional statistical models may present certain limitations that can be overcome by neural networks. Artificial neural networks in general are introduced, some limitations, and some proven benefits are discussed.

  7. Neural Networks for Rapid Design and Analysis

    NASA Technical Reports Server (NTRS)

    Sparks, Dean W., Jr.; Maghami, Peiman G.

    1998-01-01

    Artificial neural networks have been employed for rapid and efficient dynamics and control analysis of flexible systems. Specifically, feedforward neural networks are designed to approximate nonlinear dynamic components over prescribed input ranges, and are used in simulations as a means to speed up the overall time response analysis process. To capture the recursive nature of dynamic components with artificial neural networks, recurrent networks, which use state feedback with the appropriate number of time delays, as inputs to the networks, are employed. Once properly trained, neural networks can give very good approximations to nonlinear dynamic components, and by their judicious use in simulations, allow the analyst the potential to speed up the analysis process considerably. To illustrate this potential speed up, an existing simulation model of a spacecraft reaction wheel system is executed, first conventionally, and then with an artificial neural network in place.

  8. Generalized Adaptive Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  9. Optimal input sizes for neural network de-interlacing

    NASA Astrophysics Data System (ADS)

    Choi, Hyunsoo; Seo, Guiwon; Lee, Chulhee

    2009-02-01

    Neural network de-interlacing has shown promising results among various de-interlacing methods. In this paper, we investigate the effects of input size for neural networks for various video formats when the neural networks are used for de-interlacing. In particular, we investigate optimal input sizes for CIF, VGA and HD video formats.

  10. Impact of leakage delay on bifurcation in high-order fractional BAM neural networks.

    PubMed

    Huang, Chengdai; Cao, Jinde

    2018-02-01

    The effects of leakage delay on the dynamics of integer-order neural networks have lately received considerable attention. It has been confirmed that fractional neural networks more appropriately capture the dynamical properties of neural networks, but results on fractional neural networks with leakage delay are relatively few. This paper concentrates on the issue of bifurcation for high-order fractional bidirectional associative memory (BAM) neural networks involving leakage delay, and makes a first attempt to tackle the stability and bifurcation of such networks. The conditions for the appearance of bifurcation in the proposed systems with leakage delay are first established by adopting the time delay as a bifurcation parameter. Then, bifurcation criteria for the system without leakage delay are acquired. Comparative analysis reveals that the stability performance of the proposed high-order fractional neural networks is critically weakened by leakage delay, which therefore cannot be overlooked. Numerical examples are ultimately exhibited to attest the efficiency of the theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Coronary Artery Diagnosis Aided by Neural Network

    NASA Astrophysics Data System (ADS)

    Stefko, Kamil

    2007-01-01

    Coronary artery disease is due to atheromatous narrowing and subsequent occlusion of the coronary vessel. Application of an optimised feed-forward multi-layer back-propagation neural network (MLBP) for detection of narrowing in coronary artery vessels is presented in this paper. The research was performed using 580 data records from traditional ECG exercise tests confirmed by coronary arteriography results. Each record of the training database included a description of the state of a patient, providing the input data for the neural network. The level and slope of the ST segment of a 12-lead ECG signal recorded at rest and after exercise (48 floating-point values) were the main components of the input data for the neural network. Coronary arteriography results (verifying the existence or absence of more than 50% stenosis of particular coronary vessels) were used as the correct training output pattern for the neural network. More than 96% of cases were correctly recognised by the specially optimised and thoroughly verified neural network. The leave-one-out method was used for neural network verification, so all 580 data records could be used both for training and for verification of the neural network.
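
    A skeleton of the leave-one-out evaluation described above, using a scikit-learn multi-layer perceptron on synthetic stand-in features (the real inputs would be the 48 ST-segment values); the network size and data are placeholders, not the authors' optimised MLBP network.

        import numpy as np
        from sklearn.model_selection import LeaveOneOut
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(8)

        # Synthetic stand-in: 60 "patients", 48 ST-segment features, binary stenosis label
        n, p = 60, 48
        X = rng.normal(size=(n, p))
        w = rng.normal(size=p)
        y = (X @ w + rng.normal(0, 2.0, n) > 0).astype(int)

        clf = make_pipeline(
            StandardScaler(),
            MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
        )

        # Leave-one-out: every record is used for training and, exactly once, for testing
        correct = 0
        for train_idx, test_idx in LeaveOneOut().split(X):
            clf.fit(X[train_idx], y[train_idx])
            correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])

        print(f"leave-one-out accuracy: {correct / n:.3f}")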

  12. Crack Propagation Analysis Using Acoustic Emission Sensors for Structural Health Monitoring Systems

    DOE PAGES

    Kral, Zachary; Horn, Walter; Steck, James

    2013-01-01

    Aerospace systems are expected to remain in service well beyond their designed life. Consequently, maintenance is an important issue. A novel method of implementing artificial neural networks and acoustic emission sensors to form a structural health monitoring (SHM) system for aerospace inspection routines was the focus of this research. Simple structural elements, consisting of flat aluminum plates of AL 2024-T3, were subjected to increasing static tensile loading. As the loading increased, designed cracks extended in length, releasing strain waves in the process. Strain wave signals, measured by acoustic emission sensors, were further analyzed in post-processing by artificial neural networks (ANN). Several experiments were performed to determine the severity and location of the crack extensions in the structure. ANNs were trained on a portion of the data acquired by the sensors and the ANNs were then validated with the remaining data. The combination of a system of acoustic emission sensors, and an ANN could determine crack extension accurately. The difference between predicted and actual crack extensions was determined to be between 0.004 in. and 0.015 in. with 95% confidence. These ANNs, coupled with acoustic emission sensors, showed promise for the creation of an SHM system for aerospace systems.

  13. Crack Propagation Analysis Using Acoustic Emission Sensors for Structural Health Monitoring Systems

    PubMed Central

    Horn, Walter; Steck, James

    2013-01-01

    Aerospace systems are expected to remain in service well beyond their designed life. Consequently, maintenance is an important issue. A novel method of implementing artificial neural networks and acoustic emission sensors to form a structural health monitoring (SHM) system for aerospace inspection routines was the focus of this research. Simple structural elements, consisting of flat aluminum plates of AL 2024-T3, were subjected to increasing static tensile loading. As the loading increased, designed cracks extended in length, releasing strain waves in the process. Strain wave signals, measured by acoustic emission sensors, were further analyzed in post-processing by artificial neural networks (ANN). Several experiments were performed to determine the severity and location of the crack extensions in the structure. ANNs were trained on a portion of the data acquired by the sensors and the ANNs were then validated with the remaining data. The combination of a system of acoustic emission sensors, and an ANN could determine crack extension accurately. The difference between predicted and actual crack extensions was determined to be between 0.004 in. and 0.015 in. with 95% confidence. These ANNs, coupled with acoustic emission sensors, showed promise for the creation of an SHM system for aerospace systems. PMID:24023536

  14. Brain signal variability as a window into the bidirectionality between music and language processing: moving from a linear to a nonlinear model

    PubMed Central

    Hutka, Stefanie; Bidelman, Gavin M.; Moreno, Sylvain

    2013-01-01

    There is convincing empirical evidence for bidirectional transfer between music and language, such that experience in either domain can improve mental processes required by the other. This music-language relationship has been studied using linear models (e.g., comparing mean neural activity) that conceptualize brain activity as a static entity. The linear approach limits how we can understand the brain’s processing of music and language because the brain is a nonlinear system. Furthermore, there is evidence that the networks supporting music and language processing interact in a nonlinear manner. We therefore posit that the neural processing and transfer between the domains of language and music are best viewed through the lens of a nonlinear framework. Nonlinear analysis of neurophysiological activity may yield new insight into the commonalities, differences, and bidirectionality between these two cognitive domains not measurable in the local output of a cortical patch. We thus propose a novel application of brain signal variability (BSV) analysis, based on mutual information and signal entropy, to better understand the bidirectionality of music-to-language transfer in the context of a nonlinear framework. This approach will extend current methods by offering a nuanced, network-level understanding of the brain complexity involved in music-language transfer. PMID:24454295

  15. An Enhanced MEMS Error Modeling Approach Based on Nu-Support Vector Regression

    PubMed Central

    Bhatt, Deepak; Aggarwal, Priyanka; Bhattacharya, Prabir; Devabhaktuni, Vijay

    2012-01-01

    Micro Electro Mechanical System (MEMS)-based inertial sensors have made possible the development of civilian land vehicle navigation systems by offering a low-cost solution. However, accurate modeling of the MEMS sensor errors is one of the most challenging tasks in the design of low-cost navigation systems. These sensors exhibit significant errors such as biases, drift and noise, which are negligible for higher-grade units. Different conventional techniques utilizing the Gauss-Markov model and neural network methods have previously been used to model the errors. However, the Gauss-Markov model works unsatisfactorily in the case of MEMS units due to the presence of high inherent sensor errors. On the other hand, modeling the random drift utilizing a Neural Network (NN) is time consuming, thereby affecting its real-time implementation. We overcome these existing drawbacks by developing an enhanced Support Vector Machine (SVM) based error model. Unlike NNs, SVMs do not suffer from local minimization or over-fitting problems and deliver a reliable global solution. Experimental results proved that the proposed SVM approach reduced the noise standard deviation by 10–35% for gyroscopes and 61–76% for accelerometers. Further, positional error drifts under static conditions improved by 41% and 80% in comparison to the NN and GM approaches. PMID:23012552
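
    A minimal Nu-SVR fit in the spirit of the proposed error model, regressing a synthetic drifting gyroscope bias on time and temperature with scikit-learn; the features, kernel, hyperparameters and data are placeholders.

        import numpy as np
        from sklearn.svm import NuSVR
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(9)

        # Synthetic static test: slowly drifting gyro bias driven by time and temperature
        n = 1500
        t = np.linspace(0, 600, n)                      # seconds
        temp = 25 + 0.01 * t + 0.5 * np.sin(t / 60)     # degrees C
        drift = 0.002 * t + 0.3 * np.sin(t / 90) + 0.05 * (temp - 25)
        y_meas = drift + rng.normal(0, 0.2, n)          # measured rate error (deg/s)

        X = np.column_stack([t, temp])
        X_tr, X_te, y_tr, y_te, d_tr, d_te = train_test_split(
            X, y_meas, drift, test_size=0.3, random_state=0)

        model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
        model.fit(X_tr, y_tr)

        rmse = np.sqrt(np.mean((model.predict(X_te) - d_te) ** 2))
        print(f"RMSE of modelled drift vs. true drift: {rmse:.3f} deg/s (noise std = 0.2)")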

  16. Brain signal variability as a window into the bidirectionality between music and language processing: moving from a linear to a nonlinear model.

    PubMed

    Hutka, Stefanie; Bidelman, Gavin M; Moreno, Sylvain

    2013-12-30

    There is convincing empirical evidence for bidirectional transfer between music and language, such that experience in either domain can improve mental processes required by the other. This music-language relationship has been studied using linear models (e.g., comparing mean neural activity) that conceptualize brain activity as a static entity. The linear approach limits how we can understand the brain's processing of music and language because the brain is a nonlinear system. Furthermore, there is evidence that the networks supporting music and language processing interact in a nonlinear manner. We therefore posit that the neural processing and transfer between the domains of language and music are best viewed through the lens of a nonlinear framework. Nonlinear analysis of neurophysiological activity may yield new insight into the commonalities, differences, and bidirectionality between these two cognitive domains not measurable in the local output of a cortical patch. We thus propose a novel application of brain signal variability (BSV) analysis, based on mutual information and signal entropy, to better understand the bidirectionality of music-to-language transfer in the context of a nonlinear framework. This approach will extend current methods by offering a nuanced, network-level understanding of the brain complexity involved in music-language transfer.

  17. Predicate calculus for an architecture of multiple neural networks

    NASA Astrophysics Data System (ADS)

    Consoli, Robert H.

    1990-08-01

    Future projects with neural networks will require multiple individual network components. Current efforts along these lines are ad hoc. This paper relates the neural network to a classical device and derives a multi-part architecture from that model. Further, it provides a Predicate Calculus variant for describing the location and nature of the trainings, and suggests Resolution Refutation as a method for determining the performance of the system as well as the location of needed trainings for specific proofs. 2. The Neural Network and a Classical Device: Recently, investigators have been reporting architectures of multiple neural networks. These efforts appear at an early stage in neural network investigations; they are characterized by architectures suggested directly by the problem space. Touretzky and Hinton suggest an architecture for processing logical statements; the design of this architecture arises from the syntax of a restricted class of logical expressions and exhibits syntactic limitations. In similar fashion, a multiple neural network arises out of a control problem, from the sequence learning problem, and from the domain of machine learning. But a general theory of multiple neural devices is missing. More general attempts to relate single or multiple neural networks to classical computing devices are not common, although an attempt has been made to relate single neural devices to Turing machines, and Sun et al. develop a multiple neural architecture that performs pattern classification.

  18. Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model

    NASA Astrophysics Data System (ADS)

    Kuznetsov, A. V.; Makaryants, G. M.

    2018-01-01

    There has been much research on gas turbine engine identification via dynamic neural network models; the identification process should minimize the error between the model and the real object. Questions about how the training data set is prepared are, however, usually overlooked. This article presents a study of the influence of the data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine, whose input signal is the fuel consumption and whose output signal is the engine rotor rotation frequency. Four types of input signal, step, fast, slow and mixed, were used to create training and testing data sets for the dynamic neural network models. Four dynamic neural networks were created from these training data sets, and each was tested on all four test data sets. As a result, 16 transition processes from the four neural networks and four test data sets were compared with the corresponding solutions of the thermodynamic model. The errors of all neural networks were compared within each test data set, giving the error range for each set. The error ranges are small, therefore the influence of data set type on identification accuracy is low.

  19. Altered Synchronizations among Neural Networks in Geriatric Depression

    PubMed Central

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G.; Steffens, David C.

    2015-01-01

    Although major depression has been considered a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Because of the known disconnection theory, geriatric depression could be a useful model in studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data were collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks, as well as the correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of between-network analyses in examining neural models of geriatric depression. PMID:26180795

  20. Altered Synchronizations among Neural Networks in Geriatric Depression.

    PubMed

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G; Steffens, David C

    2015-01-01

    Although major depression has been considered as a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Because of the known disconnection theory, geriatric depression could be a useful model in studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data were collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks as well as correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of the between-network analyses in examining neural models for geriatric depression.

  1. A consensual neural network

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Ersoy, O. K.; Swain, P. H.

    1991-01-01

    A neural network architecture called a consensual neural network (CNN) is proposed for the classification of data from multiple sources. Its relation to hierarchical and ensemble neural networks is discussed. CNN is based on the statistical consensus theory and uses nonlinearly transformed input data. The input data are transformed several times, and the different transformed data are applied as if they were independent inputs. The independent inputs are classified using stage neural networks and outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote-sensing data and geographic data are given.
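
    A minimal sketch of the consensus idea, assuming numpy and scikit-learn, is given below; the particular input transforms, stage-network sizes, and accuracy-based combination weights are illustrative stand-ins rather than the configuration used in the paper.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.datasets import make_classification

      X, y = make_classification(n_samples=600, n_features=8, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      # Illustrative nonlinear transforms standing in for the paper's input transformations.
      transforms = [np.tanh, np.sin, lambda z: np.sign(z) * np.sqrt(np.abs(z))]

      stages, weights = [], []
      for f in transforms:
          net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
          net.fit(f(X_tr), y_tr)
          stages.append(net)
          weights.append(net.score(f(X_tr), y_tr))   # weight each stage by training accuracy

      weights = np.array(weights) / np.sum(weights)

      # Consensus decision: weighted average of the stage class probabilities.
      proba = sum(w * net.predict_proba(f(X_te)) for w, net, f in zip(weights, stages, transforms))
      print("consensual accuracy:", np.mean(proba.argmax(axis=1) == y_te))

    Each stage sees a differently transformed copy of the same data, and the final decision is the weighted consensus of the stage outputs.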

  2. Neural-Network Simulator

    NASA Technical Reports Server (NTRS)

    Mitchell, Paul H.

    1991-01-01

    F77NNS (FORTRAN 77 Neural Network Simulator) computer program simulates popular back-error-propagation neural network. Designed to take advantage of vectorization when used on computers having this capability, also used on any computer equipped with ANSI-77 FORTRAN Compiler. Problems involving matching of patterns or mathematical modeling of systems fit class of problems F77NNS designed to solve. Program has restart capability so neural network solved in stages suitable to user's resources and desires. Enables user to customize patterns of connections between layers of network. Size of neural network F77NNS applied to limited only by amount of random-access memory available to user.

  3. Feedback modulation of neural network synchrony and seizure susceptibility by Mdm2-p53-Nedd4-2 signaling.

    PubMed

    Jewett, Kathryn A; Christian, Catherine A; Bacos, Jonathan T; Lee, Kwan Young; Zhu, Jiuhe; Tsai, Nien-Pei

    2016-03-22

    Neural network synchrony is a critical factor in regulating information transmission through the nervous system. Improperly regulated neural network synchrony is implicated in pathophysiological conditions such as epilepsy. Despite the awareness of its importance, the molecular signaling underlying the regulation of neural network synchrony, especially after stimulation, remains largely unknown. In this study, we show that elevation of neuronal activity by the GABA(A) receptor antagonist, Picrotoxin, increases neural network synchrony in primary mouse cortical neuron cultures. The elevation of neuronal activity triggers Mdm2-dependent degradation of the tumor suppressor p53. We show here that blocking the degradation of p53 further enhances Picrotoxin-induced neural network synchrony, while promoting the inhibition of p53 with a p53 inhibitor reduces Picrotoxin-induced neural network synchrony. These data suggest that Mdm2-p53 signaling mediates a feedback mechanism to fine-tune neural network synchrony after activity stimulation. Furthermore, genetically reducing the expression of a direct target gene of p53, Nedd4-2, elevates neural network synchrony basally and occludes the effect of Picrotoxin. Finally, using a kainic acid-induced seizure model in mice, we show that alterations of Mdm2-p53-Nedd4-2 signaling affect seizure susceptibility. Together, our findings elucidate a critical role of Mdm2-p53-Nedd4-2 signaling underlying the regulation of neural network synchrony and seizure susceptibility and reveal potential therapeutic targets for hyperexcitability-associated neurological disorders.

  4. Neural network-based model reference adaptive control system.

    PubMed

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to compensate adaptively the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.

  5. Space-Time Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Shelton, Robert O.

    1992-01-01

    Concept of space-time neural network affords distributed temporal memory enabling such network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace synaptic-connection weights of conventional back-error-propagation neural network.

  6. Diagnosing Autism Spectrum Disorder from Brain Resting-State Functional Connectivity Patterns Using a Deep Neural Network with a Novel Feature Selection Method

    PubMed Central

    Guo, Xinyu; Dominick, Kelli C.; Minai, Ali A.; Li, Hailong; Erickson, Craig A.; Lu, Long J.

    2017-01-01

    The whole-brain functional connectivity (FC) pattern obtained from resting-state functional magnetic resonance imaging data is commonly applied to study neuropsychiatric conditions such as autism spectrum disorder (ASD) by using different machine learning models. Recent studies indicate that both hyper- and hypo- aberrant ASD-associated FCs were widely distributed throughout the entire brain rather than only in some specific brain regions. Deep neural networks (DNN) with multiple hidden layers have shown the ability to systematically extract lower-to-higher level information from high dimensional data across a series of neural hidden layers, significantly improving classification accuracy for such data. In this study, a DNN with a novel feature selection method (DNN-FS) is developed for the high dimensional whole-brain resting-state FC pattern classification of ASD patients vs. typical development (TD) controls. The feature selection method is able to help the DNN generate low dimensional high-quality representations of the whole-brain FC patterns by selecting features with high discriminating power from multiple trained sparse auto-encoders. For the comparison, a DNN without the feature selection method (DNN-woFS) is developed, and both of them are tested with different architectures (i.e., with different numbers of hidden layers/nodes). Results show that the best classification accuracy of 86.36% is generated by the DNN-FS approach with 3 hidden layers and 150 hidden nodes (3/150). Remarkably, DNN-FS outperforms DNN-woFS for all architectures studied. The most significant accuracy improvement was 9.09% with the 3/150 architecture. The method also outperforms other feature selection methods, e.g., two sample t-test and elastic net. In addition to improving the classification accuracy, a Fisher's score-based biomarker identification method based on the DNN is also developed, and used to identify 32 FCs related to ASD. These FCs come from or cross different pre-defined brain networks including the default-mode, cingulo-opercular, frontal-parietal, and cerebellum. Thirteen of them are statistically significant between ASD and TD groups (two sample t-test p < 0.05) while 19 of them are not. The relationship between the statistically significant FCs and the corresponding ASD behavior symptoms is discussed based on the literature and clinician's expert knowledge. Meanwhile, the potential reason for obtaining 19 FCs that are not statistically significant is also provided. PMID:28871217
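
    As a rough, hedged illustration of the pipeline shape only (not the DNN-FS method itself), the sketch below ranks synthetic FC features with a two-sample t-statistic, one of the baseline selectors mentioned in the record, and then trains a network with three hidden layers of 150 nodes; the data, the scikit-learn estimator, and the per-layer reading of the 3/150 architecture are assumptions.

      import numpy as np
      from scipy import stats
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      # Synthetic stand-in for whole-brain FC vectors: 200 subjects x 1000 connections.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 1000))
      y = rng.integers(0, 2, size=200)
      X[y == 1, :30] += 0.8                      # plant a weak group difference in 30 features

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

      # Baseline selector from the record: rank connections by two-sample t-statistic.
      t_vals, _ = stats.ttest_ind(X_tr[y_tr == 0], X_tr[y_tr == 1], axis=0)
      top = np.argsort(-np.abs(t_vals))[:150]    # keep the 150 most discriminative features

      # Three hidden layers of 150 nodes each (assumed reading of the reported 3/150 setting).
      clf = MLPClassifier(hidden_layer_sizes=(150, 150, 150), max_iter=1000, random_state=1)
      clf.fit(X_tr[:, top], y_tr)
      print("held-out accuracy:", clf.score(X_te[:, top], y_te))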

  7. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule.

    PubMed

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin

    2015-11-01

    In this paper, the generation of multi-clustered structure of self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can be self-organized into clustered structure through the symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure of the burst-based self-organized neural network (BSON) is much shorter than that of the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., higher clustering coefficient and smaller shortest path length than the SSON network. Also, the results of larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which is beneficial for enhancing information transmission in neural circuits. Hence, we conclude that the burst firing can significantly enhance the efficiency of the clustering procedure and the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network which self-organized from the bursting dynamics has high efficiency in information processing.

  8. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng

    In this paper, the generation of multi-clustered structure of self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking neural network or bursting neural network can be self-organized into clustered structure through the symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure of the burst-based self-organized neural network (BSON) is much shorter than that of the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., higher clustering coefficient and smaller shortest path length than the SSON network. Also, the results of larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which is beneficial for enhancing information transmission in neural circuits. Hence, we conclude that the burst firing can significantly enhance the efficiency of the clustering procedure and the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network which self-organized from the bursting dynamics has high efficiency in information processing.

  9. Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses

    PubMed Central

    Stephen, Emily P.; Lepage, Kyle Q.; Eden, Uri T.; Brunner, Peter; Schalk, Gerwin; Brumberg, Jonathan S.; Guenther, Frank H.; Kramer, Mark A.

    2014-01-01

    The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty—both in the functional network edges and the corresponding aggregate measures of network topology—are lacking. To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here—appropriate for static and dynamic network inference and different statistical measures of coupling—permits the evaluation of confidence in network measures in a variety of settings common to neuroscience. PMID:24678295

  10. Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses.

    PubMed

    Stephen, Emily P; Lepage, Kyle Q; Eden, Uri T; Brunner, Peter; Schalk, Gerwin; Brumberg, Jonathan S; Guenther, Frank H; Kramer, Mark A

    2014-01-01

    The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty-both in the functional network edges and the corresponding aggregate measures of network topology-are lacking. To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here-appropriate for static and dynamic network inference and different statistical measures of coupling-permits the evaluation of confidence in network measures in a variety of settings common to neuroscience.
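
    A toy version of such a trial-resampling scheme, assuming numpy, synthetic multichannel trials, correlation as the coupling measure, and mean absolute edge weight as the aggregate network measure, might look like the following.

      import numpy as np

      rng = np.random.default_rng(2)
      n_trials, n_samples, n_sensors = 50, 200, 8

      # Synthetic task-related recordings: trials x time x sensors, with a weak shared drive.
      shared = rng.normal(size=(n_trials, n_samples, 1))
      data = 0.5 * shared + rng.normal(size=(n_trials, n_samples, n_sensors))

      def network_measure(trials):
          """Mean absolute off-diagonal correlation of the trial-averaged functional network."""
          corrs = [np.corrcoef(tr, rowvar=False) for tr in trials]
          c = np.mean(corrs, axis=0)
          off = c[~np.eye(n_sensors, dtype=bool)]
          return np.mean(np.abs(off))

      observed = network_measure(data)

      # Bootstrap over trials: resample trials with replacement and recompute the measure.
      boot = np.array([
          network_measure(data[rng.integers(0, n_trials, size=n_trials)])
          for _ in range(500)
      ])
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"measure = {observed:.3f}, 95% bootstrap interval = [{lo:.3f}, {hi:.3f}]")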

  11. Financial Time Series Prediction Using Spiking Neural Networks

    PubMed Central

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two “traditional”, rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments. PMID:25170618

  12. Qualitative analysis of Cohen-Grossberg neural networks with multiple delays

    NASA Astrophysics Data System (ADS)

    Ye, Hui; Michel, Anthony N.; Wang, Kaining

    1995-03-01

    It is well known that a class of artificial neural networks with symmetric interconnections and without transmission delays, known as Cohen-Grossberg neural networks, possesses global stability (i.e., all trajectories tend to some equilibrium). We demonstrate in the present paper that many of the qualitative properties of Cohen-Grossberg networks will not be affected by the introduction of sufficiently small delays. Specifically, we establish some bound conditions for the time delays under which a given Cohen-Grossberg network with multiple delays is globally stable and possesses the same asymptotically stable equilibria as the corresponding network without delays. An effective method of determining the asymptotic stability of an equilibrium of a Cohen-Grossberg network with multiple delays is also presented. The present results are motivated by some of the authors earlier work [Phys. Rev. E 50, 4206 (1994)] and by some of the work of Marcus and Westervelt [Phys. Rev. A 39, 347 (1989)]. These works address qualitative analyses of Hopfield neural networks with one time delay. The present work generalizes these results to Cohen-Grossberg neural networks with multiple time delays. Hopfield neural networks constitute special cases of Cohen-Grossberg neural networks.

  13. Dynamic Neural Networks Supporting Memory Retrieval

    PubMed Central

    St. Jacques, Peggy L.; Kragel, Philip A.; Rubin, David C.

    2011-01-01

    How do separate neural networks interact to support complex cognitive processes such as remembrance of the personal past? Autobiographical memory (AM) retrieval recruits a consistent pattern of activation that potentially comprises multiple neural networks. However, it is unclear how such large-scale neural networks interact and are modulated by properties of the memory retrieval process. In the present functional MRI (fMRI) study, we combined independent component analysis (ICA) and dynamic causal modeling (DCM) to understand the neural networks supporting AM retrieval. ICA revealed four task-related components consistent with the previous literature: 1) Medial Prefrontal Cortex (PFC) Network, associated with self-referential processes, 2) Medial Temporal Lobe (MTL) Network, associated with memory, 3) Frontoparietal Network, associated with strategic search, and 4) Cingulooperculum Network, associated with goal maintenance. DCM analysis revealed that the medial PFC network drove activation within the system, consistent with the importance of this network to AM retrieval. Additionally, memory accessibility and recollection uniquely altered connectivity between these neural networks. Recollection modulated the influence of the medial PFC on the MTL network during elaboration, suggesting that greater connectivity among subsystems of the default network supports greater re-experience. In contrast, memory accessibility modulated the influence of frontoparietal and MTL networks on the medial PFC network, suggesting that ease of retrieval involves greater fluency among the multiple networks contributing to AM. These results show the integration between neural networks supporting AM retrieval and the modulation of network connectivity by behavior. PMID:21550407

  14. Coherence resonance in bursting neural networks

    NASA Astrophysics Data System (ADS)

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J.

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal—a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  15. Classification of Respiratory Sounds by Using An Artificial Neural Network

    DTIC Science & Technology

    2001-10-28

    CLASSIFICATION OF RESPIRATORY SOUNDS BY USING AN ARTIFICIAL NEURAL NETWORK M.C. Sezgin, Z. Dokur, T. Ölmez, M. Korürek Department of Electronics and...successfully classified by the GAL network. Keywords-Respiratory Sounds, Classification of Biomedical Signals, Artificial Neural Network . I. INTRODUCTION...process, feature extraction, and classification by the artificial neural network . At first, the RS signal obtained from a real-time measurement equipment is

  16. Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics.

    DTIC Science & Technology

    1987-10-01

    include Security Classification) Instrumentation for scientific computing in neural networks, information science, artificial intelligence, and...instrumentation grant to purchase equipment for support of research in neural networks, information science, artificial intelligence, and applied mathematics...in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics Contract AFOSR 86-0282 Principal Investigator: Stephen

  17. A neural net approach to space vehicle guidance

    NASA Technical Reports Server (NTRS)

    Caglayan, Alper K.; Allen, Scott M.

    1990-01-01

    The space vehicle guidance problem is formulated using a neural network approach, and the appropriate neural net architecture for modeling optimum guidance trajectories is investigated. In particular, an investigation is made of the incorporation of prior knowledge about the characteristics of the optimal guidance solution into the neural network architecture. The online classification performance of the developed network is demonstrated using a synthesized network trained with a database of optimum guidance trajectories. Such a neural-network-based guidance approach can readily adapt to environment uncertainties such as those encountered by an AOTV during atmospheric maneuvers.

  18. Neural network and its application to CT imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W.

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  19. Electronic neural networks for global optimization

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Moopenn, A. W.; Eberhardt, S.

    1990-01-01

    An electronic neural network with feedback architecture, implemented in analog custom VLSI is described. Its application to problems of global optimization for dynamic assignment is discussed. The convergence properties of the neural network hardware are compared with computer simulation results. The neural network's ability to provide optimal or near optimal solutions within only a few neuron time constants, a speed enhancement of several orders of magnitude over conventional search methods, is demonstrated. The effect of noise on the circuit dynamics and the convergence behavior of the neural network hardware is also examined.

  20. Quantitative analysis of volatile organic compounds using ion mobility spectra and cascade correlation neural networks

    NASA Technical Reports Server (NTRS)

    Harrington, Peter DEB.; Zheng, Peng

    1995-01-01

    Ion Mobility Spectrometry (IMS) is a powerful technique for trace organic analysis in the gas phase. Quantitative measurements are difficult, because IMS has a limited linear range. Factors that may affect the instrument response are pressure, temperature, and humidity. Nonlinear calibration methods, such as neural networks, may be ideally suited for IMS. Neural networks have the capability of modeling complex systems. Many neural networks suffer from long training times and overfitting. Cascade correlation neural networks train at very fast rates. They also build their own topology, that is, the number of layers and the number of units in each layer. By controlling the decay parameter in training neural networks, reproducible and general models may be obtained.

  1. Newly developed double neural network concept for reliable fast plasma position control

    NASA Astrophysics Data System (ADS)

    Jeon, Young-Mu; Na, Yong-Su; Kim, Myung-Rak; Hwang, Y. S.

    2001-01-01

    Neural networks are considered as a parameter estimation tool in plasma control for next-generation tokamaks such as ITER. The neural network has been reported to be so accurate and fast for plasma equilibrium identification that it may be applied to the control of complex tokamak plasmas. For this application, the reliability of the conventional neural network needs to be improved. In this study, a new double neural network concept is developed to achieve this. The new idea has been applied to simple plasma position identification of the KSTAR tokamak for a feasibility test. Characteristics of the concept show higher reliability and fault tolerance even under severe fault conditions, which may make neural networks reliably and widely applicable to plasma control in future tokamaks.

  2. Rule extraction from minimal neural networks for credit card screening.

    PubMed

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important to not only find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a very straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural network based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.

  3. An improved wavelet neural network medical image segmentation algorithm with combined maximum entropy

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoqian; Tao, Jinxu; Ye, Zhongfu; Qiu, Bensheng; Xu, Jinzhang

    2018-05-01

    In order to solve the problem of medical image segmentation, a wavelet neural network segmentation algorithm based on a combined maximum entropy criterion is proposed. Firstly, the bee colony algorithm is used to optimize the parameters of the wavelet neural network, obtaining the network structure, initial weights, threshold values, and so on, so that training converges quickly to high precision and avoids falling into local extrema; then the optimal number of iterations is obtained by calculating the maximum entropy of the segmented image, achieving automatic and accurate segmentation. Medical image segmentation experiments show that the proposed algorithm effectively reduces sample training time and improves convergence precision, and its segmentation results are more accurate and effective than those of the traditional BP neural network (back-propagation neural network: a multilayer feed-forward neural network trained according to the error back-propagation algorithm).
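
    The maximum-entropy criterion can be illustrated on its own, independently of the wavelet network and the bee colony optimizer: the sketch below picks the threshold that maximizes the summed background and foreground entropies (a Kapur-style criterion) for a synthetic bimodal image; numpy and the synthetic intensities are assumptions, and this is not the paper's combined algorithm.

      import numpy as np

      def max_entropy_threshold(image, n_bins=256):
          """Return the threshold maximizing background + foreground entropy (Kapur's criterion)."""
          hist, _ = np.histogram(image, bins=n_bins, range=(0, n_bins))
          p = hist / hist.sum()
          best_t, best_h = 0, -np.inf
          for t in range(1, n_bins):
              p0, p1 = p[:t].sum(), p[t:].sum()
              if p0 == 0 or p1 == 0:
                  continue
              q0, q1 = p[:t] / p0, p[t:] / p1
              h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
                  - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
              if h > best_h:
                  best_t, best_h = t, h
          return best_t

      # Synthetic "tissue vs background" intensities: two Gaussian populations.
      rng = np.random.default_rng(3)
      img = np.concatenate([rng.normal(70, 10, 5000), rng.normal(170, 15, 5000)]).clip(0, 255)
      print("maximum-entropy threshold:", max_entropy_threshold(img))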

  4. Knowledge extraction from evolving spiking neural networks with rank order population coding.

    PubMed

    Soltic, Snjezana; Kasabov, Nikola

    2010-12-01

    This paper demonstrates how knowledge can be extracted from evolving spiking neural networks with rank order population coding. Knowledge discovery is a very important feature of intelligent systems. Yet, a disproportionally small amount of research is centered on the issue of knowledge extraction from spiking neural networks which are considered to be the third generation of artificial neural networks. The lack of knowledge representation compatibility is becoming a major detriment to end users of these networks. We show that a high-level knowledge can be obtained from evolving spiking neural networks. More specifically, we propose a method for fuzzy rule extraction from an evolving spiking network with rank order population coding. The proposed method was used for knowledge discovery on two benchmark taste recognition problems where the knowledge learnt by an evolving spiking neural network was extracted in the form of zero-order Takagi-Sugeno fuzzy IF-THEN rules.

  5. Adaptive neural network motion control of manipulators with experimental evaluations.

    PubMed

    Puga-Guzmán, S; Moreno-Valenzuela, J; Santibáñez, V

    2014-01-01

    A nonlinear proportional-derivative controller plus adaptive neuronal network compensation is proposed. With the aim of estimating the desired torque, a two-layer neural network is used. Then, adaptation laws for the neural network weights are derived. Asymptotic convergence of the position and velocity tracking errors is proven, while the neural network weights are shown to be uniformly bounded. The proposed scheme has been experimentally validated in real time. These experimental evaluations were carried out on two different mechanical systems: a horizontal two degrees-of-freedom robot and a vertical one degree-of-freedom arm which is affected by the gravitational force. In each one of the two experimental set-ups, the proposed scheme was implemented without and with adaptive neural network compensation. Experimental results confirmed the tracking accuracy of the proposed adaptive neural network-based controller.

  6. Adaptive Neural Network Motion Control of Manipulators with Experimental Evaluations

    PubMed Central

    Puga-Guzmán, S.; Moreno-Valenzuela, J.; Santibáñez, V.

    2014-01-01

    A nonlinear proportional-derivative controller plus adaptive neuronal network compensation is proposed. With the aim of estimating the desired torque, a two-layer neural network is used. Then, adaptation laws for the neural network weights are derived. Asymptotic convergence of the position and velocity tracking errors is proven, while the neural network weights are shown to be uniformly bounded. The proposed scheme has been experimentally validated in real time. These experimental evaluations were carried out on two different mechanical systems: a horizontal two degrees-of-freedom robot and a vertical one degree-of-freedom arm which is affected by the gravitational force. In each one of the two experimental set-ups, the proposed scheme was implemented without and with adaptive neural network compensation. Experimental results confirmed the tracking accuracy of the proposed adaptive neural network-based controller. PMID:24574910

  7. Research on image retrieval using deep convolutional neural network combining L1 regularization and PRelu activation function

    NASA Astrophysics Data System (ADS)

    QingJie, Wei; WenBin, Wang

    2017-06-01

    In this paper, image retrieval using a deep convolutional neural network combined with regularization and the PRelu activation function is studied in order to improve image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which are very suitable for processing images. Using a deep convolutional neural network is better than direct extraction of image visual features for image retrieval. However, the structure of a deep convolutional neural network is complex, and it easily over-fits, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PRelu activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
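
    As an illustration of combining the two ingredients, the hedged PyTorch sketch below adds an L1 penalty on the weights to the loss of a small convolutional network that uses PReLU activations; the architecture, random data, and penalty strength are placeholders rather than the network studied in the paper.

      import torch
      import torch.nn as nn

      class SmallConvNet(nn.Module):
          """Toy convolutional feature extractor with PReLU activations."""
          def __init__(self, n_classes=10):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(3, 16, 3, padding=1), nn.PReLU(),
                  nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.PReLU(),
                  nn.MaxPool2d(2),
              )
              self.classifier = nn.Linear(32 * 8 * 8, n_classes)

          def forward(self, x):
              return self.classifier(self.features(x).flatten(1))

      model = SmallConvNet()
      criterion = nn.CrossEntropyLoss()
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      l1_lambda = 1e-5                      # assumed strength of the L1 penalty

      # One illustrative training step on random data shaped like 32x32 RGB images.
      x = torch.randn(8, 3, 32, 32)
      y = torch.randint(0, 10, (8,))
      logits = model(x)
      l1_penalty = sum(p.abs().sum() for p in model.parameters())
      loss = criterion(logits, y) + l1_lambda * l1_penalty
      optimizer.zero_grad()
      loss.backward()
      optimizer.step()
      print("loss with L1 penalty:", float(loss))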

  8. Program Helps Simulate Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.
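
    The back-propagation scheme underlying the simulator, Rumelhart's generalized delta rule, can be written compactly for a single hidden layer; the numpy sketch below is a generic illustration on the XOR pattern-matching task, not the F77NNS source.

      import numpy as np

      rng = np.random.default_rng(4)

      # Toy pattern-matching task: learn XOR with a 2-4-1 sigmoid network.
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      T = np.array([[0], [1], [1], [0]], dtype=float)

      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
      W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))
      b1, b2 = np.zeros(4), np.zeros(1)
      eta = 0.5                                   # learning rate

      for _ in range(20000):
          # Forward pass.
          h = sigmoid(X @ W1 + b1)
          y = sigmoid(h @ W2 + b2)
          # Generalized delta rule: delta = error * derivative of the activation.
          delta_out = (T - y) * y * (1 - y)
          delta_hid = (delta_out @ W2.T) * h * (1 - h)
          # Weight updates proportional to delta and the presynaptic activity.
          W2 += eta * h.T @ delta_out
          b2 += eta * delta_out.sum(axis=0)
          W1 += eta * X.T @ delta_hid
          b1 += eta * delta_hid.sum(axis=0)

      print("outputs after training:", y.ravel().round(2))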

  9. Establishing a Dynamic Self-Adaptation Learning Algorithm of the BP Neural Network and Its Applications

    NASA Astrophysics Data System (ADS)

    Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min

    2015-12-01

    In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence speed, a tendency to fall into local minima, poor generalization ability, and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward to improve the performance of the BP neural network. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis and a self-adaptive model, and hence can effectively solve the problems of selecting structural parameters, initial connection weights and thresholds, and learning rates of the BP neural network. This new algorithm not only reduces human intervention, optimizes the topological structures of BP neural networks and improves the network generalization ability, but also accelerates the convergence speed of a network, avoids trapping in local minima, and enhances network adaptation ability and prediction ability. The dynamic self-adaptive learning algorithm of the BP neural network is used to forecast the total retail sale of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP network algorithm in predicting accuracy and time consumption, which shows the feasibility and effectiveness of the new algorithm.

  10. Neural net target-tracking system using structured laser patterns

    NASA Astrophysics Data System (ADS)

    Cho, Jae-Wan; Lee, Yong-Bum; Lee, Nam-Ho; Park, Soon-Yong; Lee, Jongmin; Choi, Gapchu; Baek, Sunghyun; Park, Dong-Sun

    1996-06-01

    In this paper, we describe a robot endeffector tracking system using sensory information from recently-announced structured pattern laser diodes, which can generate images with several different types of structured pattern. The neural network approach is employed to recognize the robot endeffector covering the situation of three types of motion: translation, scaling and rotation. Features for the neural network to detect the position of the endeffector are extracted from the preprocessed images. Artificial neural networks are used to store models and to match with unknown input features, recognizing the position of the robot endeffector. Since a minimal number of samples are used for different directions of the robot endeffector in the system, an artificial neural network with generalization capability can be utilized for unknown input features. A feedforward neural network trained with back-propagation learning is used to detect the position of the robot endeffector. Another feedforward neural network module is used to estimate the motion from a sequence of images and to control movements of the robot endeffector. Combining the two neural networks for recognizing the robot endeffector and estimating the motion with the preprocessing stage, the whole system keeps track of the robot endeffector effectively.

  11. Chaotic simulated annealing by a neural network with a variable delay: design and application.

    PubMed

    Chen, Shyan-Shiou

    2011-10-01

    In this paper, we have three goals: the first is to delineate the advantages of a variably delayed system, the second is to find a more intuitive Lyapunov function for a delayed neural network, and the third is to design a delayed neural network for a quadratic cost function. For delayed neural networks, most researchers construct a Lyapunov function based on the linear matrix inequality (LMI) approach. However, that approach is not intuitive. We provide an alternative candidate Lyapunov function for a delayed neural network. On the other hand, if we are first given a quadratic cost function, we can construct a delayed neural network by suitably dividing the second-order term into two parts: a self-feedback connection weight and a delayed connection weight. To demonstrate the advantage of a variably delayed neural network, we propose a transiently chaotic neural network with variable delay and show numerically that the model should possess a better searching ability than Chen-Aihara's model, Wang's model, and Zhao's model. We discuss both the chaotic and the convergent phases. During the chaotic phase, we simply present bifurcation diagrams for a single neuron with a constant delay and with a variable delay. We show that the variably delayed model possesses the stochastic property and chaotic wandering. During the convergent phase, we not only provide a novel Lyapunov function for neural networks with a delay (the Lyapunov function is independent of the LMI approach) but also establish a correlation between the Lyapunov function for a delayed neural network and an objective function for the traveling salesman problem. © 2011 IEEE

  12. Modeling and control of magnetorheological fluid dampers using neural networks

    NASA Astrophysics Data System (ADS)

    Wang, D. H.; Liao, W. H.

    2005-02-01

    Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects for utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper on line, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the neural network models developed are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.

  13. Deep neural networks for direct, featureless learning through observation: The case of two-dimensional spin models

    NASA Astrophysics Data System (ADS)

    Mills, Kyle; Tamblyn, Isaac

    2018-03-01

    We demonstrate the capability of a convolutional deep neural network in predicting the nearest-neighbor energy of the 4 ×4 Ising model. Using its success at this task, we motivate the study of the larger 8 ×8 Ising model, showing that the deep neural network can learn the nearest-neighbor Ising Hamiltonian after only seeing a vanishingly small fraction of configuration space. Additionally, we show that the neural network has learned both the energy and magnetization operators with sufficient accuracy to replicate the low-temperature Ising phase transition. We then demonstrate the ability of the neural network to learn other spin models, teaching the convolutional deep neural network to accurately predict the long-range interaction of a screened Coulomb Hamiltonian, a sinusoidally attenuated screened Coulomb Hamiltonian, and a modified Potts model Hamiltonian. In the case of the long-range interaction, we demonstrate the ability of the neural network to recover the phase transition with equivalent accuracy to the numerically exact method. Furthermore, in the case of the long-range interaction, the benefits of the neural network become apparent; it is able to make predictions with a high degree of accuracy, and do so 1600 times faster than a CUDA-optimized exact calculation. Additionally, we demonstrate how the neural network succeeds at these tasks by looking at the weights learned in a simplified demonstration.
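
    A compact PyTorch illustration of the first task, regressing the nearest-neighbor energy of small Ising configurations, is sketched below; the network size, training budget, and zero-padded (rather than periodic) convolutions are assumptions made for the sketch, not the architecture used in the paper.

      import numpy as np
      import torch
      import torch.nn as nn

      rng = np.random.default_rng(5)

      def ising_energy(s):
          """Nearest-neighbor energy with periodic boundaries: E = -sum_<ij> s_i s_j."""
          return -(s * np.roll(s, 1, axis=0)).sum() - (s * np.roll(s, 1, axis=1)).sum()

      # Random 4x4 spin configurations and their exact energies.
      spins = rng.choice([-1.0, 1.0], size=(4096, 4, 4))
      energies = np.array([ising_energy(s) for s in spins], dtype=np.float32)

      x = torch.tensor(spins, dtype=torch.float32).unsqueeze(1)   # (N, 1, 4, 4)
      y = torch.tensor(energies).unsqueeze(1)

      model = nn.Sequential(
          nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
          nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
          nn.Flatten(),
          nn.Linear(16 * 4 * 4, 1),
      )
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()

      for epoch in range(200):
          loss = loss_fn(model(x), y)
          opt.zero_grad()
          loss.backward()
          opt.step()

      print("final MSE on training configurations:", float(loss))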

  14. Tensor Basis Neural Network v. 1.0 (beta)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ling, Julia; Templeton, Jeremy

    This software package can be used to build, train, and test a neural network machine learning model. The neural network architecture is specifically designed to embed tensor invariance properties by enforcing that the model predictions sit on an invariant tensor basis. This neural network architecture can be used in developing constitutive models for applications such as turbulence modeling, materials science, and electromagnetism.
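
    The core architectural idea, a network that predicts scalar coefficients multiplying a fixed set of invariant basis tensors so that the output stays in the span of that basis, can be sketched as follows; the layer sizes, random inputs, and PyTorch implementation are placeholders and not the released package.

      import torch
      import torch.nn as nn

      class TensorBasisNet(nn.Module):
          """Predict coefficients g_n from invariants, then output sum_n g_n * T_n."""
          def __init__(self, n_invariants, n_basis, hidden=32):
              super().__init__()
              self.coeff_net = nn.Sequential(
                  nn.Linear(n_invariants, hidden), nn.Tanh(),
                  nn.Linear(hidden, hidden), nn.Tanh(),
                  nn.Linear(hidden, n_basis),
              )

          def forward(self, invariants, basis):
              # invariants: (batch, n_invariants); basis: (batch, n_basis, 3, 3)
              g = self.coeff_net(invariants)                 # (batch, n_basis)
              return torch.einsum("bn,bnij->bij", g, basis)  # prediction lies in the basis span

      # Toy usage: 5 invariants and 4 basis tensors per sample.
      model = TensorBasisNet(n_invariants=5, n_basis=4)
      invariants = torch.randn(16, 5)
      basis = torch.randn(16, 4, 3, 3)
      print("predicted tensor batch shape:", model(invariants, basis).shape)  # (16, 3, 3)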

  15. A renaissance of neural networks in drug discovery.

    PubMed

    Baskin, Igor I; Winkler, David; Tetko, Igor V

    2016-08-01

    Neural networks are becoming a very popular method for solving machine learning and artificial intelligence problems. The variety of neural network types and their application to drug discovery requires expert knowledge to choose the most appropriate approach. In this review, the authors discuss traditional and newly emerging neural network approaches to drug discovery. Their focus is on backpropagation neural networks and their variants, self-organizing maps and associated methods, and a relatively new technique, deep learning. The most important technical issues are discussed including overfitting and its prevention through regularization, ensemble and multitask modeling, model interpretation, and estimation of applicability domain. Different aspects of using neural networks in drug discovery are considered: building structure-activity models with respect to various targets; predicting drug selectivity, toxicity profiles, ADMET and physicochemical properties; characteristics of drug-delivery systems and virtual screening. Neural networks continue to grow in importance for drug discovery. Recent developments in deep learning suggest further improvements may be gained in the analysis of large chemical data sets. It is anticipated that neural networks will be more widely used in drug discovery in the future, and applied in non-traditional areas such as drug delivery systems, biologically compatible materials, and regenerative medicine.

  16. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    NASA Astrophysics Data System (ADS)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-12-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value.

  17. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification.

    PubMed

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-12-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value.

  18. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    PubMed Central

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-01-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value. PMID:27905520
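
    A serial (non-MapReduce) sketch of assembling BP networks into a boosted strong classifier is given below; because scikit-learn's MLPClassifier does not accept per-sample weights, each boosting round is trained on a weighted resample, which is an approximation chosen for the sketch, and the synthetic dataset stands in for Pascal VOC2007/Caltech256 features.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(6)
      X, y = make_classification(n_samples=1000, n_features=20, random_state=6)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=6)
      y_pm = np.where(y_tr == 1, 1, -1)                 # labels in {-1, +1} for AdaBoost

      n_rounds = 15                                     # 15 weak BP networks, as in the record
      w = np.full(len(X_tr), 1.0 / len(X_tr))
      nets, alphas = [], []

      for m in range(n_rounds):
          idx = rng.choice(len(X_tr), size=len(X_tr), p=w)        # weighted resample
          net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=300, random_state=m)
          net.fit(X_tr[idx], y_tr[idx])
          pred = np.where(net.predict(X_tr) == 1, 1, -1)
          err = np.clip(np.sum(w * (pred != y_pm)), 1e-10, 1 - 1e-10)
          alpha = 0.5 * np.log((1 - err) / err)                   # weak-learner weight
          w *= np.exp(-alpha * y_pm * pred)
          w /= w.sum()
          nets.append(net)
          alphas.append(alpha)

      # Strong classifier: sign of the alpha-weighted vote of the weak networks.
      votes = sum(a * np.where(net.predict(X_te) == 1, 1, -1) for a, net in zip(alphas, nets))
      print("boosted accuracy:", np.mean((votes > 0).astype(int) == y_te))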

  19. Deinterlacing using modular neural network

    NASA Astrophysics Data System (ADS)

    Woo, Dong H.; Eom, Il K.; Kim, Yoo S.

    2004-05-01

    Deinterlacing is the conversion process from the interlaced scan to the progressive one. While many previous algorithms that are based on weighted sums cause blurring in edge regions, deinterlacing using a neural network can reduce the blurring by recovering the high-frequency components through the learning process, and is found to be robust to noise. In the proposed algorithm, the input image is divided into edge and smooth regions, and one neural network is assigned to each region. Through this process, each neural network learns only patterns that are similar, which makes learning more effective and estimation more accurate. But even within each region, there are various patterns, such as long edges and texture in the edge region. To solve this problem, a modular neural network is proposed. In the proposed modular neural network, two modules are combined at the output node. One handles the low-frequency features of a local area of the input image, and the other handles the high-frequency features. With this structure, each modular neural network can learn different patterns while compensating for the drawbacks of its counterpart. Therefore it can adapt to various patterns within each region effectively. In simulation, the proposed algorithm shows better performance compared with conventional deinterlacing methods and the single neural network method.
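
    The modular idea can be sketched on a one-dimensional toy: one module receives a low-frequency view of the neighbouring lines, the other receives the high-frequency residual, and their outputs are summed at the output node; the synthetic data, scikit-learn regressors, and residual-training scheme below are assumptions, not the paper's method.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(7)

      # Synthetic image whose columns follow smooth random-walk intensity profiles.
      img = np.cumsum(rng.normal(size=(256, 64)), axis=0)

      # Interlacing toy: reconstruct each odd row from the even rows directly above and below.
      above, below, target = img[0:-2:2], img[2::2], img[1:-1:2]
      X = np.hstack([above, below])

      # Module inputs: a low-frequency view (vertical average) and the high-frequency residual.
      X_low = np.hstack([(above + below) / 2.0, (above + below) / 2.0])
      X_high = X - X_low

      # Module 1 learns the smooth structure; module 2 learns what module 1 misses.
      low_net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000, random_state=7)
      low_net.fit(X_low, target)
      residual = target - low_net.predict(X_low)

      high_net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000, random_state=7)
      high_net.fit(X_high, residual)

      # Combined output node: sum of the two module outputs.
      recon = low_net.predict(X_low) + high_net.predict(X_high)
      print("RMS reconstruction error:", float(np.sqrt(np.mean((recon - target) ** 2))))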

  20. Pruning artificial neural networks using neural complexity measures.

    PubMed

    Jorgensen, Thomas D; Haynes, Barry P; Norlund, Charlotte C F

    2008-10-01

    This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the neural network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, measures used in previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique proposed here helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduction of the dimensionality of the network.
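
    The neural complexity measure itself is specific to the paper, but the benchmark it is compared against, Magnitude Based Pruning, is easy to sketch: train a deliberately oversized network, then zero out the smallest-magnitude weights; the PyTorch model, task, and pruning fraction below are illustrative assumptions.

      import torch
      import torch.nn as nn

      torch.manual_seed(8)

      # Deliberately oversized network for a tiny regression task.
      model = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 1))
      opt = torch.optim.Adam(model.parameters(), lr=1e-2)
      x = torch.randn(256, 4)
      y = x[:, :1] * 2.0 - x[:, 1:2]                      # simple target using two inputs

      for _ in range(500):                                # pre-train the dense network
          loss = nn.functional.mse_loss(model(x), y)
          opt.zero_grad()
          loss.backward()
          opt.step()

      # Magnitude Based Pruning: zero out the smallest 80% of weights in each Linear layer.
      with torch.no_grad():
          for layer in model:
              if isinstance(layer, nn.Linear):
                  w = layer.weight
                  threshold = w.abs().flatten().quantile(0.8)
                  w *= (w.abs() >= threshold).float()     # mask small-magnitude connections

      print("loss after pruning:", float(nn.functional.mse_loss(model(x), y)))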

  1. Single-hidden-layer feed-forward quantum neural network based on Grover learning.

    PubMed

    Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min

    2013-09-01

    In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on concepts and principles of quantum theory. By combining the quantum mechanism with the feed-forward neural network, we define quantum hidden neurons and connected quantum weights, and use them as the fundamental information processing unit in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as the activation functions in the hidden layer of the network, and the Grover search algorithm iteratively finds the optimal parameter setting and thus makes very efficient neural network learning possible. The quantum neurons and weights, along with Grover-search-based learning, result in a novel and efficient neural network characterized by a reduced network size, highly efficient training, and prospects for future application. Some simulations are carried out to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    PubMed

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.

  3. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    PubMed Central

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, forecasting financial market dynamics has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture that combines Elman recurrent neural networks with a stochastic time-effective function. By analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and by comparing the model with the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research tests the predictive effects on SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of these market indices are also exhibited. The experimental results show that this approach gives good performance in predicting values of the stock market indices. PMID:27293423
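
    A minimal numpy sketch of the Elman recurrence that underlies the architecture, assuming placeholder dimensions and weights; the stochastic time-effective weighting of training samples described in the abstract is only noted in a comment.

      import numpy as np

      def elman_forward(x_seq, Wx, Wh, Wo, bh, bo):
          """One pass of an Elman network: the hidden state of step t-1 is fed
          back as the 'context' input of step t."""
          h = np.zeros(Wh.shape[0])
          outputs = []
          for x in x_seq:
              h = np.tanh(Wx @ x + Wh @ h + bh)   # hidden + context units
              outputs.append(Wo @ h + bo)         # linear output (next-day index)
          return np.array(outputs)

      rng = np.random.default_rng(2)
      n_in, n_hid = 5, 16                          # e.g. 5 lagged closing prices
      Wx, Wh = rng.normal(0, .3, (n_hid, n_in)), rng.normal(0, .3, (n_hid, n_hid))
      Wo, bh, bo = rng.normal(0, .3, (1, n_hid)), np.zeros(n_hid), np.zeros(1)
      x_seq = rng.normal(size=(30, n_in))          # 30 trading days of lag vectors
      print(elman_forward(x_seq, Wx, Wh, Wo, bh, bo).shape)   # (30, 1)
      # In the paper each training sample's gradient is additionally weighted by a
      # stochastic time-effective function that favours recent observations.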

  4. Periodicity and stability for variable-time impulsive neural networks.

    PubMed

    Li, Hongfei; Li, Chuandong; Huang, Tingwen

    2017-10-01

    The paper considers a general neural networks model with variable-time impulses. It is shown that each solution of the system intersects with every discontinuous surface exactly once via several new well-proposed assumptions. Moreover, based on the comparison principle, this paper shows that neural networks with variable-time impulse can be reduced to the corresponding neural network with fixed-time impulses under well-selected conditions. Meanwhile, the fixed-time impulsive systems can be regarded as the comparison system of the variable-time impulsive neural networks. Furthermore, a series of sufficient criteria are derived to ensure the existence and global exponential stability of periodic solution of variable-time impulsive neural networks, and to illustrate the same stability properties between variable-time impulsive neural networks and the fixed-time ones. The new criteria are established by applying Schaefer's fixed point theorem combined with the use of inequality technique. Finally, a numerical example is presented to show the effectiveness of the proposed results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Linear and nonlinear ARMA model parameter estimation using an artificial neural network

    NASA Technical Reports Server (NTRS)

    Chon, K. H.; Cohen, R. J.

    1997-01-01

    This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.

  6. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks.

    PubMed

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, forecasting financial market dynamics has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture that combines Elman recurrent neural networks with a stochastic time-effective function. By analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and by comparing the model with the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research tests the predictive effects on SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of these market indices are also exhibited. The experimental results show that this approach gives good performance in predicting values of the stock market indices.

  7. A novel neural-wavelet approach for process diagnostics and complex system modeling

    NASA Astrophysics Data System (ADS)

    Gao, Rong

    Neural networks have been effective in several engineering applications because of their learning abilities and robustness. However, certain shortcomings, such as slow convergence and local minima, are always associated with neural networks, especially when they are applied to highly nonlinear and non-stationary problems. These problems can be effectively alleviated by integrating a powerful tool, wavelets, into conventional neural networks. The multi-resolution analysis and feature localization capabilities of the wavelet transform offer neural networks new possibilities for learning. The neural-wavelet network approach developed in this thesis enjoys a fast convergence rate and is unlikely to become trapped in a local minimum; it combines the localization properties of wavelets with the learning abilities of neural networks. Two testbeds are used to evaluate the efficiency of the new approach. The first is magnetic flowmeter-based process diagnostics: here we extend previous work, which demonstrated that wavelet groups contain process information, to more general process diagnostics. A loop at the Applied Intelligent Systems Lab (AISL) is used for collecting and analyzing data with the neural-wavelet approach. This research is important for thermal-hydraulic processes in nuclear and other engineering fields. The neural-wavelet approach is also tested with data from the electric power grid; specifically, it is used for short-term and mid-term prediction of power load demand. In addition, the feasibility of determining the type of load using the proposed approach is examined, and the notion of the cross-scale product is developed as an expedient yet reliable discriminator of loads. Theoretical issues involved in the integration of wavelets and neural networks are discussed and future work is outlined.

  8. Active Control of Wind-Tunnel Model Aeroelastic Response Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.

    2000-01-01

    NASA Langley Research Center, Hampton, VA 23681 Under a joint research and development effort conducted by the National Aeronautics and Space Administration and The Boeing Company (formerly McDonnell Douglas) three neural-network based control systems were developed and tested. The control systems were experimentally evaluated using a transonic wind-tunnel model in the Langley Transonic Dynamics Tunnel. One system used a neural network to schedule flutter suppression control laws, another employed a neural network in a predictive control scheme, and the third employed a neural network in an inverse model control scheme. All three of these control schemes successfully suppressed flutter to or near the limits of the testing apparatus, and represent the first experimental applications of neural networks to flutter suppression. This paper will summarize the findings of this project.

  9. Modeling Aircraft Wing Loads from Flight Data Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Allen, Michael J.; Dibley, Ryan P.

    2003-01-01

    Neural networks were used to model wing bending-moment loads, torsion loads, and control surface hinge-moments of the Active Aeroelastic Wing (AAW) aircraft. Accurate loads models are required for the development of control laws designed to increase roll performance through wing twist while not exceeding load limits. Inputs to the model include aircraft rates, accelerations, and control surface positions. Neural networks were chosen to model aircraft loads because they can account for uncharacterized nonlinear effects while retaining the capability to generalize. The accuracy of the neural network models was improved by first developing linear loads models to use as starting points for network training. Neural networks were then trained with flight data for rolls, loaded reversals, wind-up-turns, and individual control surface doublets for load excitation. Generalization was improved by using gain weighting and early stopping. Results are presented for neural network loads models of four wing loads and four control surface hinge moments at Mach 0.90 and an altitude of 15,000 ft. An average model prediction error reduction of 18.6 percent was calculated for the neural network models when compared to the linear models. This paper documents the input data conditioning, input parameter selection, structure, training, and validation of the neural network models.

  10. Exponential H(infinity) synchronization of general discrete-time chaotic neural networks with or without time delays.

    PubMed

    Qi, Donglian; Liu, Meiqin; Qiu, Meikang; Zhang, Senlin

    2010-08-01

    This brief studies exponential H(infinity) synchronization of a class of general discrete-time chaotic neural networks with external disturbance. On the basis of the drive-response concept and H(infinity) control theory, and using Lyapunov-Krasovskii (or Lyapunov) functional, state feedback controllers are established to not only guarantee exponential stable synchronization between two general chaotic neural networks with or without time delays, but also reduce the effect of external disturbance on the synchronization error to a minimal H(infinity) norm constraint. The proposed controllers can be obtained by solving the convex optimization problems represented by linear matrix inequalities. Most discrete-time chaotic systems with or without time delays, such as Hopfield neural networks, cellular neural networks, bidirectional associative memory networks, recurrent multilayer perceptrons, Cohen-Grossberg neural networks, Chua's circuits, etc., can be transformed into this general chaotic neural network to be H(infinity) synchronization controller designed in a unified way. Finally, some illustrated examples with their simulations have been utilized to demonstrate the effectiveness of the proposed methods.

  11. Automated implementation of rule-based expert systems with neural networks for time-critical applications

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.; Huang, Song; Govind, Girish

    1991-01-01

    In fault diagnosis, control and real-time monitoring, both timing and accuracy are critical for operators or machines to reach proper solutions or appropriate actions. Expert systems are becoming more popular in the manufacturing community for dealing with such problems. In recent years, neural networks have revived and their applications have spread to many areas of science and engineering. A method of using neural networks to implement rule-based expert systems for time-critical applications is discussed here. This method can convert a given rule-based system into a neural network with fixed weights and thresholds. The rules governing the translation are presented along with some examples. We also present the results of automated machine implementation of such networks from the given rule-base. This significantly simplifies the translation process to neural network expert systems from conventional rule-based systems. Results comparing the performance of the proposed approach based on neural networks vs. the classical approach are given. The possibility of very large scale integration (VLSI) realization of such neural network expert systems is also discussed.

  12. Predicting Slag Generation in Sub-Scale Test Motors Using a Neural Network

    NASA Technical Reports Server (NTRS)

    Wiesenberg, Brent

    1999-01-01

    Generation of slag (aluminum oxide) is an important issue for the Reusable Solid Rocket Motor (RSRM). Thiokol performed testing to quantify the relationship between raw material variations and slag generation in solid propellants by testing sub-scale motors cast with propellant containing various combinations of aluminum fuel and ammonium perchlorate (AP) oxidizer particle sizes. The test data were analyzed using statistical methods and an artificial neural network. This paper primarily addresses the neural network results with some comparisons to the statistical results. The neural network showed that the particle sizes of both the aluminum and unground AP have a measurable effect on slag generation. The neural network analysis showed that aluminum particle size is the dominant driver in slag generation, about 40% more influential than AP. The network predictions of the amount of slag produced during firing of sub-scale motors were 16% better than the predictions of a statistically derived empirical equation. Another neural network successfully characterized the slag generated during full-scale motor tests. The success is attributable to the ability of neural networks to characterize multiple complex factors including interactions that affect slag generation.

  13. Application of Two-Dimensional AWE Algorithm in Training Multi-Dimensional Neural Network Model

    DTIC Science & Technology

    2003-07-01

    [Fragmentary OCR text; only isolated phrases are recoverable. The report concerns a hybrid scheme in which artificial neural networks (ANNs) are used as a general modeling technique; a neural network model is trained with the "NeuralModeler" software (training process shown in the report's Fig. 3.2); derivatives and matrix entries are generated with the method of moments (MoM); and frequency is among the model variables.]

  14. Center for Neural Engineering at Tennessee State University, ASSERT Annual Progress Report.

    DTIC Science & Technology

    1995-07-01

    [Fragmentary text] ... neural networks. The research topics are: (1) developing frequency-dependent oscillatory neural networks; (2) long-term potentiation learning rules as applied to spatial navigation; (3) design and construction of a servo-joint robotic arm; and (4) neural-network-based prosthesis control. One graduate student ...

  15. Optical-Correlator Neural Network Based On Neocognitron

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    Multichannel optical correlator implements shift-invariant, high-discrimination pattern-recognizing neural network based on paradigm of neocognitron. Selected as basic building block of this neural network because invariance under shifts is inherent advantage of Fourier optics included in optical correlators in general. Neocognitron is conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing achieved by iteratively feeding back output of feature correlator to input spatial light modulator and updating Fourier filters. Neural network trained by use of characteristic features extracted from target images. Multichannel implementation enables parallel processing of large number of selected features.

  16. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, Richard B.; Gross, Kenneth C.; Wegerich, Stephan W.

    1998-01-01

    A method and system for performing surveillance of transient signals of an industrial device to ascertain the operating state. The method and system involves the steps of reading into a memory training data, determining neural network weighting values until achieving target outputs close to the neural network output. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs and then providing signals characteristic of an industrial process and comparing the neural network output to the industrial process signals to evaluate the operating state of the industrial process.

  17. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

    A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain the operating state. The method and system involves the steps of reading into a memory training data, determining neural network weighting values until achieving target outputs close to the neural network output. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs and then providing signals characteristic of an industrial process and comparing the neural network output to the industrial process signals to evaluate the operating state of the industrial process. 33 figs.

  18. Neural networks for function approximation in nonlinear control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.; Stengel, Robert F.

    1990-01-01

    Two neural network architectures are compared with a classical spline interpolation technique for the approximation of functions useful in a nonlinear control system. A standard back-propagation feedforward neural network and a cerebellar model articulation controller (CMAC) neural network are presented, and their results are compared with a B-spline interpolation procedure that is updated using recursive least-squares parameter identification. Each method is able to accurately represent a one-dimensional test function. Tradeoffs between size requirements, speed of operation, and speed of learning indicate that neural networks may be practical for identification and adaptation in a nonlinear control environment.

  19. Vibrational Analysis of Engine Components Using Neural-Net Processing and Electronic Holography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

    1997-01-01

    The use of computational-model trained artificial neural networks to acquire damage specific information from electronic holograms is discussed. A neural network is trained to transform two time-average holograms into a pattern related to the bending-induced-strain distribution of the vibrating component. The bending distribution is very sensitive to component damage unlike the characteristic fringe pattern or the displacement amplitude distribution. The neural network processor is fast for real-time visualization of damage. The two-hologram limit makes the processor more robust to speckle pattern decorrelation. Undamaged and cracked cantilever plates serve as effective objects for testing the combination of electronic holography and neural-net processing. The requirements are discussed for using finite-element-model trained neural networks for field inspections of engine components. The paper specifically discusses neural-network fringe pattern analysis in the presence of the laser speckle effect and the performances of two limiting cases of the neural-net architecture.

  20. Vibrational Analysis of Engine Components Using Neural-Net Processing and Electronic Holography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Fite, E. Brian; Mehmed, Oral; Thorp, Scott A.

    1998-01-01

    The use of computational-model trained artificial neural networks to acquire damage specific information from electronic holograms is discussed. A neural network is trained to transform two time-average holograms into a pattern related to the bending-induced-strain distribution of the vibrating component. The bending distribution is very sensitive to component damage unlike the characteristic fringe pattern or the displacement amplitude distribution. The neural network processor is fast for real-time visualization of damage. The two-hologram limit makes the processor more robust to speckle pattern decorrelation. Undamaged and cracked cantilever plates serve as effective objects for testing the combination of electronic holography and neural-net processing. The requirements are discussed for using finite-element-model trained neural networks for field inspections of engine components. The paper specifically discusses neural-network fringe pattern analysis in the presence of the laser speckle effect and the performances of two limiting cases of the neural-net architecture.

  1. Neural networks for vertical microcode compaction

    NASA Astrophysics Data System (ADS)

    Chu, Pong P.

    1992-09-01

    Neural networks provide an alternative way to solve complex optimization problems. Instead of executing a program of instructions sequentially as in a traditional computer, a neural network model explores many competing hypotheses simultaneously using its massively parallel net. The paper shows how to use the neural network approach to perform vertical microcode compaction for a microprogrammed control unit. The compaction procedure includes two basic steps: the first determines the compatibility classes and the second selects a minimal subset to cover the control signals. Since the selection process is an NP-complete problem, finding an optimal solution is impractical. In this study, we employ a customized neural network to obtain the minimal subset. We first formalize this problem, then define an 'energy function' and map it to a two-layer fully connected neural network. The modified network has two types of neurons and can always obtain a valid solution.

  2. Advances in Artificial Neural Networks - Methodological Development and Application

    USDA-ARS?s Scientific Manuscript database

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  3. Artificial Neural Network Metamodels of Stochastic Computer Simulations

    DTIC Science & Technology

    1994-08-10

    [Report front-matter residue (form fields, dedication, signature page); recoverable content: the document is "Artificial Neural Network Metamodels of Stochastic Computer Simulations" by Robert Allen Kilmer (DTIC accession AD-A285 951).]

  4. Research on wind field algorithm of wind lidar based on BP neural network and grey prediction

    NASA Astrophysics Data System (ADS)

    Chen, Yong; Chen, Chun-Li; Luo, Xiong; Zhang, Yan; Yang, Ze-hou; Zhou, Jie; Shi, Xiao-ding; Wang, Lei

    2018-01-01

    This paper uses a BP neural network and a grey algorithm to forecast and study the radar wind field. To reduce the residual error of the wind-field prediction obtained with the BP neural network and the grey algorithm, the minimum of the residual error function is computed: the residuals of the grey algorithm are used to train the BP neural network, the trained network model forecasts the residual sequence, and the predicted residual sequence is then used to correct the forecast sequence of the grey algorithm. The test data show that the grey algorithm corrected by the BP neural network effectively reduces the residual error and improves prediction precision.
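
    A minimal sketch of the grey-model step and the residual-correction idea, assuming a toy series and the classic GM(1,1) formulation; in the paper the residual sequence is modelled by a trained BP neural network, which is only indicated in a comment here.

      import numpy as np

      def gm11_fit_predict(x0, n_ahead=3):
          """Classic GM(1,1) grey model: fit on the series x0, extrapolate n_ahead."""
          x1 = np.cumsum(x0)                               # accumulated series
          z1 = 0.5 * (x1[1:] + x1[:-1])                    # background values
          B = np.column_stack([-z1, np.ones_like(z1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0] # developing/grey coefficients
          k = np.arange(len(x0) + n_ahead)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])
          return x0_hat[:len(x0)], x0_hat[len(x0):]

      wind = np.array([8.2, 8.9, 9.4, 10.1, 10.8, 11.2], dtype=float)  # toy series
      fit, forecast = gm11_fit_predict(wind)
      residuals = wind - fit
      print(np.round(forecast, 2), np.round(residuals, 2))
      # The paper trains a BP neural network on `residuals`, predicts the future
      # residual sequence, and adds it to `forecast` to correct the grey model.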

  5. Pheromone Static Routing Strategy for Complex Networks

    NASA Astrophysics Data System (ADS)

    Hu, Mao-Bin; Henry, Y. K. Lau; Ling, Xiang; Jiang, Rui

    2012-12-01

    We adopt the concept of using pheromones to generate a set of static paths that can reach the performance of global dynamic routing strategy [Phys. Rev. E 81 (2010) 016113]. The path generation method consists of two stages. In the first stage, a pheromone is dropped to the nodes by packets forwarded according to the global dynamic routing strategy. In the second stage, pheromone static paths are generated according to the pheromone density. The output paths can greatly improve traffic systems' overall capacity on different network structures, including scale-free networks, small-world networks and random graphs. Because the paths are static, the system needs much less computational resources than the global dynamic routing strategy.
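
    A minimal sketch of the two-stage idea on a toy six-node graph; shortest-path forwarding stands in for the global dynamic routing strategy of the first stage, and in the second stage static paths are regenerated by a node-cost search that favours high-pheromone nodes. The graph, traffic pattern, and cost function are illustrative assumptions.

      import heapq, random
      from collections import deque

      adj = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}

      def bfs_path(src, dst):
          prev, seen, q = {src: None}, {src}, deque([src])
          while q:
              u = q.popleft()
              if u == dst:
                  break
              for v in adj[u]:
                  if v not in seen:
                      seen.add(v); prev[v] = u; q.append(v)
          path, u = [], dst
          while u is not None:
              path.append(u); u = prev[u]
          return path[::-1]

      # Stage 1: packets forwarded by a reference strategy drop pheromone on nodes.
      random.seed(0)
      pheromone = {n: 0.0 for n in adj}
      for _ in range(200):
          s, d = random.sample(sorted(adj), 2)
          for n in bfs_path(s, d):
              pheromone[n] += 1.0

      # Stage 2: static paths are regenerated so that high-pheromone nodes are cheap.
      def pheromone_path(src, dst):
          cost = {n: 1.0 / (1.0 + pheromone[n]) for n in adj}
          dist, prev, pq = {src: 0.0}, {src: None}, [(0.0, src)]
          while pq:
              d_u, u = heapq.heappop(pq)
              if u == dst:
                  break
              for v in adj[u]:
                  nd = d_u + cost[v]
                  if nd < dist.get(v, float("inf")):
                      dist[v], prev[v] = nd, u
                      heapq.heappush(pq, (nd, v))
          path, u = [], dst
          while u is not None:
              path.append(u); u = prev[u]
          return path[::-1]

      print(pheromone, pheromone_path(0, 5))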

  6. Neural network architecture for form and motion perception (Abstract Only)

    NASA Astrophysics Data System (ADS)

    Grossberg, Stephen

    1991-08-01

    Evidence is given for a new neural network theory of biological motion perception, a motion boundary contour system. This theory clarifies why parallel streams V1 → V2 and V1 → MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The motion boundary contour system consists of several parallel copies, such that each copy is activated by a different range of receptive field sizes. Each copy is further subdivided into two hierarchically organized subsystems: a motion oriented contrast (MOC) filter, for preprocessing moving images; and a cooperative-competitive feedback (CC) loop, for generating emergent boundary segmentations of the filtered signals. The present work uses the MOC filter to explain a variety of classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include split motion; reverse-contrast gamma motion; delta motion; visual inertia; group motion in response to a reverse-contrast Ternus display at short interstimulus intervals; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, interstimulus interval, and motion threshold known as Korte's Laws; and dependence of motion strength on stimulus orientation and spatial frequency. These results supplement earlier explanations by the model of apparent motion data that other models have not explained; a recently proposed solution of the global aperture problem, including explanations of motion capture and induced motion; an explanation of how parallel cortical systems for static form perception and motion form perception may develop, including a demonstration that these parallel systems are variations on a common cortical design; an explanation of why the geometries of static form and motion form differ, in particular why opposite orientations differ by 90 degrees whereas opposite directions differ by 180 degrees, and why a cortical stream V1 → V2 → MT is needed; and a summary of how the main properties of other motion perception models can be assimilated into different parts of the motion boundary contour system design.

  7. Context and hand posture modulate the neural dynamics of tool-object perception.

    PubMed

    Natraj, Nikhilesh; Poole, Victoria; Mizelle, J C; Flumini, Andrea; Borghi, Anna M; Wheaton, Lewis A

    2013-02-01

    Prior research has linked visual perception of tools with plausible motor strategies. Thus, observing a tool activates the putative action-stream, including the left posterior parietal cortex. Observing a hand functionally grasping a tool involves the inferior frontal cortex. However, tool-use movements are performed in a contextual and grasp specific manner, rather than relative isolation. Our prior behavioral data has demonstrated that the context of tool-use (by pairing the tool with different objects) and varying hand grasp postures of the tool can interact to modulate subjects' reaction times while evaluating tool-object content. Specifically, perceptual judgment was delayed in the evaluation of functional tool-object pairings (Correct context) when the tool was non-functionally (Manipulative) grasped. Here, we hypothesized that this behavioral interference seen with the Manipulative posture would be due to increased and extended left parietofrontal activity possibly underlying motor simulations when resolving action conflict due to this particular grasp at time scales relevant to the behavioral data. Further, we hypothesized that this neural effect will be restricted to the Correct tool-object context wherein action affordances are at a maximum. 64-channel electroencephalography (EEG) was recorded from 16 right-handed subjects while viewing images depicting three classes of tool-object contexts: functionally Correct (e.g. coffee pot-coffee mug), functionally Incorrect (e.g. coffee pot-marker) and Spatial (coffee pot-milk). The Spatial context pairs a tool and object that would not functionally match, but may commonly appear in the same scene. These three contexts were modified by hand interaction: No Hand, Static Hand near the tool, Functional Hand posture and Manipulative Hand posture. The Manipulative posture is convenient for relocating a tool but does not afford a functional engagement of the tool on the target object. Subjects were instructed to visually assess whether the pictures displayed correct tool-object associations. EEG data was analyzed in time-voltage and time-frequency domains. Overall, Static Hand, Functional and Manipulative postures cause early activation (100-400ms post image onset) of parietofrontal areas, to varying intensity in each context, when compared to the No Hand control condition. However, when context is Correct, only the Manipulative Posture significantly induces extended neural responses, predominantly over right parietal and right frontal areas [400-600ms post image onset]. Significant power increase was observed in the theta band [4-8Hz] over the right frontal area, [0-500ms]. In addition, when context is Spatial, Manipulative posture alone significantly induces extended neural responses, over bilateral parietofrontal and left motor areas [400-600ms]. Significant power decrease occurred primarily in beta bands [12-16, 20-25Hz] over the aforementioned brain areas [400-600ms]. Here, we demonstrate that the neural processing of tool-object perception is sensitive to several factors. While both Functional and Manipulative postures in Correct context engage predominantly an early left parietofrontal circuit, the Manipulative posture alone extends the neural response and transitions to a late right parietofrontal network. This suggests engagement of a right neural system to evaluate action affordances when hand posture does not support action (Manipulative). 
Additionally, when tool-use context is ambiguous (Spatial context), there is increased bilateral parietofrontal activation and, extended neural response for the Manipulative posture. These results point to the existence of other networks evaluating tool-object associations when motoric affordances are not readily apparent and underlie corresponding delayed perceptual judgment in our prior behavioral data wherein Manipulative postures had exclusively interfered in judging tool-object content. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems.

    PubMed

    Ranganayaki, V; Deepa, S N

    2016-01-01

    Various criteria are proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criterion an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble wind-speed forecasting model is designed by averaging the forecast values from multiple neural network models, including the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting; this paper aims to avoid both problems. The number of hidden neurons is selected here using 102 criteria, and these evolved criteria are verified against the computed error values. The proposed criteria for fixing the number of hidden neurons are validated using the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The simulation results substantiate that the proposed ensemble model reduces the error to a minimum and enhances accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with earlier models in the literature.
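
    A minimal sketch of the averaging rule that defines the ensemble, with four placeholder predictors standing in for the trained MLP, Madaline, BPN, and PNN members; the hidden-neuron selection criteria are not reproduced.

      import numpy as np

      def ensemble_forecast(models, x):
          """Average the wind-speed forecasts of all member models (the ensemble
          rule used in the paper); each model maps a feature vector to one speed."""
          return np.mean([m(x) for m in models], axis=0)

      # Placeholder members standing in for the trained MLP, Madaline, BPN and PNN.
      rng = np.random.default_rng(3)
      W = [rng.normal(0, .2, (8, 4)) for _ in range(4)]
      v = [rng.normal(0, .2, 8) for _ in range(4)]
      models = [lambda x, W=W[i], v=v[i]: float(v @ np.tanh(W @ x)) for i in range(4)]

      x = np.array([0.6, 0.2, 0.9, 0.4])     # e.g. normalised weather features
      print([round(m(x), 3) for m in models], round(ensemble_forecast(models, x), 3))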

  9. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems

    PubMed Central

    Ranganayaki, V.; Deepa, S. N.

    2016-01-01

    Various criteria are proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criterion an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble wind-speed forecasting model is designed by averaging the forecast values from multiple neural network models, including the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting; this paper aims to avoid both problems. The number of hidden neurons is selected here using 102 criteria, and these evolved criteria are verified against the computed error values. The proposed criteria for fixing the number of hidden neurons are validated using the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The simulation results substantiate that the proposed ensemble model reduces the error to a minimum and enhances accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with earlier models in the literature. PMID:27034973

  10. SPECIAL ISSUE ON OPTICAL PROCESSING OF INFORMATION: Optical neural networks based on holographic correlators

    NASA Astrophysics Data System (ADS)

    Sokolov, V. K.; Shubnikov, E. I.

    1995-10-01

    The three most important models of neural networks — a bidirectional associative memory, Hopfield networks, and adaptive resonance networks — are used as examples to show that a holographic correlator has its place in the neural computing paradigm.

  11. Comparison of artificial intelligence classifiers for SIP attack data

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Slachta, Jiri

    2016-05-01

    Honeypot applications are a source of valuable data about attacks on a network. We run several SIP honeypots in various computer networks, which are separated geographically and logically. Each honeypot runs on a public IP address and uses standard SIP PBX ports. All information gathered by the honeypots is periodically sent to a centralized server, which classifies all attack data with a neural network algorithm. The paper describes optimizations of a neural network classifier that lower the classification error. The article compares two neural network algorithms used for the classification of validation data: the first is the original implementation of the neural network described in recent work; the second uses further optimizations such as input normalization and a cross-entropy cost function. We also use other implementations of neural networks and machine learning classification algorithms. The comparison tests their capabilities on validation data to find the optimal classifier. The results show promise for further development of an accurate SIP attack classification engine.

  12. Parallel consensual neural networks.

    PubMed

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.

  13. Unfolding the neutron spectrum of a NE213 scintillator using artificial neural networks.

    PubMed

    Sharghi Ido, A; Bonyadi, M R; Etaati, G R; Shahriari, M

    2009-10-01

    Artificial neural network technology has been applied to unfold neutron spectra from the pulse-height distribution measured with an NE213 liquid scintillator. Both single-layer and multi-layer perceptron neural network models have been implemented to unfold the neutron spectrum of an Am-Be neutron source. The activation function and the connectivity of the neurons have been investigated, and the results have been analyzed in terms of the network's performance. The simulation results show that the neural network that utilizes the Satlins transfer function has the best performance, and that omitting the bias connections of the neurons improves the performance of the network. The SCINFUL code is used to generate the response functions in the training phase. Finally, the results of the neural network have been compared with those of the FORIST unfolding code for both (241)Am-Be and (252)Cf neutron sources, and they are in good agreement.

  14. A neural-network-based model for the dynamic simulation of the tire/suspension system while traversing road irregularities.

    PubMed

    Guarneri, Paolo; Rocca, Gianpiero; Gobbi, Massimiliano

    2008-09-01

    This paper deals with the simulation of the tire/suspension dynamics by using recurrent neural networks (RNNs). RNNs are derived from the multilayer feedforward neural networks, by adding feedback connections between output and input layers. The optimal network architecture derives from a parametric analysis based on the optimal tradeoff between network accuracy and size. The neural network can be trained with experimental data obtained in the laboratory from simulated road profiles (cleats). The results obtained from the neural network demonstrate good agreement with the experimental results over a wide range of operation conditions. The NN model can be effectively applied as a part of vehicle system model to accurately predict elastic bushings and tire dynamics behavior. Although the neural network model, as a black-box model, does not provide a good insight of the physical behavior of the tire/suspension system, it is a useful tool for assessing vehicle ride and noise, vibration, harshness (NVH) performance due to its good computational efficiency and accuracy.

  15. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    NASA Astrophysics Data System (ADS)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently, there has been continuously increasing interest in building computational models of spiking neural networks (SNNs), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles that distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structure and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to developing LSM models with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degree of neuronal excitability is regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that an LSM with STDP+IP performs better than an LSM with a random SNN or with an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the better-reflected competition among neurons in the developed SNN model, as well as the more effectively encoded and processed dynamic information arising from its learning and self-organizing mechanism. This result gives insights into the optimization of computational models of spiking neural networks with neural plasticity.
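
    A minimal sketch of the two plasticity rules combined in the model: a pair-based STDP weight update for excitatory synapses and an intrinsic-plasticity-style adjustment of a neuron's excitability toward a target rate. The constants and the threshold form of the IP rule are illustrative assumptions, not the paper's exact equations.

      import numpy as np

      A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0   # ms; illustrative STDP constants

      def stdp_update(w, t_pre, t_post, w_max=1.0):
          """Pair-based STDP: potentiate if the presynaptic spike precedes the
          postsynaptic one, depress otherwise."""
          dt = t_post - t_pre
          if dt >= 0:
              w += A_PLUS * np.exp(-dt / TAU)
          else:
              w -= A_MINUS * np.exp(dt / TAU)
          return float(np.clip(w, 0.0, w_max))

      def ip_update(threshold, rate, target_rate=5.0, eta=0.001):
          """Intrinsic plasticity: nudge the firing threshold so the neuron's
          average rate drifts toward a moderate target level."""
          return threshold + eta * (rate - target_rate)

      w = 0.5
      for t_pre, t_post in [(10.0, 14.0), (30.0, 27.0), (50.0, 51.0)]:
          w = stdp_update(w, t_pre, t_post)
      theta = ip_update(1.0, rate=8.0)
      print(round(w, 4), round(theta, 4))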

  16. Residual perception of biological motion in cortical blindness.

    PubMed

    Ruffieux, Nicolas; Ramon, Meike; Lao, Junpeng; Colombo, Françoise; Stacchi, Lisa; Borruat, François-Xavier; Accolla, Ettore; Annoni, Jean-Marie; Caldara, Roberto

    2016-12-01

    From birth, the human visual system shows a remarkable sensitivity for perceiving biological motion. This visual ability relies on a distributed network of brain regions and can be preserved even after damage of high-level ventral visual areas. However, it remains unknown whether this critical biological skill can withstand the loss of vision following bilateral striate damage. To address this question, we tested the categorization of human and animal biological motion in BC, a rare case of cortical blindness after anoxia-induced bilateral striate damage. The severity of his impairment, encompassing various aspects of vision (i.e., color, shape, face, and object recognition) and causing blind-like behavior, contrasts with a residual ability to process motion. We presented BC with static or dynamic point-light displays (PLDs) of human or animal walkers. These stimuli were presented either individually, or in pairs in two alternative forced choice (2AFC) tasks. When confronted with individual PLDs, the patient was unable to categorize the stimuli, irrespective of whether they were static or dynamic. In the 2AFC task, BC exhibited appropriate eye movements towards diagnostic information, but performed at chance level with static PLDs, in stark contrast to his ability to efficiently categorize dynamic biological agents. This striking ability to categorize biological motion provided top-down information is important for at least two reasons. Firstly, it emphasizes the importance of assessing patients' (visual) abilities across a range of task constraints, which can reveal potential residual abilities that may in turn represent a key feature for patient rehabilitation. Finally, our findings reinforce the view that the neural network processing biological motion can efficiently operate despite severely impaired low-level vision, positing our natural predisposition for processing dynamicity in biological agents as a robust feature of human vision. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Toward effectiveness and agility of network security situational awareness using moving target defense (MTD)

    NASA Astrophysics Data System (ADS)

    Ge, Linqiang; Yu, Wei; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik; Lu, Chao

    2014-06-01

    Most enterprise networks are built to operate in a static configuration (e.g., static software stacks, network configurations, and application deployments). Nonetheless, static systems make it easy for a cyber adversary to plan and launch successful attacks. To address static vulnerability, moving target defense (MTD) has been proposed to increase the difficulty for the adversary to launch successful attacks. In this paper, we first present a literature review of existing MTD techniques. We then propose a generic defense framework, which can provision an incentive-compatible MTD mechanism through dynamically migrating server locations. We also present a user-server mapping mechanism, which not only improves system resiliency, but also ensures network performance. We demonstrate a MTD with a multi-user network communication and our data shows that the proposed framework can effectively improve the resiliency and agility of the system while achieving good network timeliness and throughput performance.

  18. Plant Growth Models Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Bubenheim, David

    1997-01-01

    In this paper, we describe our motivation and approach to developing the models and the neural network architecture. Initial use of the artificial neural network for modeling the single-plant process of transpiration is presented.

  19. Artificial Neural Network for the Prediction of Chromosomal Abnormalities in Azoospermic Males.

    PubMed

    Akinsal, Emre Can; Haznedar, Bulent; Baydilli, Numan; Kalinli, Adem; Ozturk, Ahmet; Ekmekçioğlu, Oğuz

    2018-02-04

    To evaluate whether an artificial neural network helps to diagnose chromosomal abnormalities in azoospermic males, the data of azoospermic males attending a tertiary academic referral center were evaluated retrospectively. Height, total testicular volume, follicle stimulating hormone, luteinising hormone, total testosterone, and ejaculate volume of the patients were used for the analyses. In the artificial neural network, the data of 310 azoospermic men were used as the training set and 115 as the test set. Logistic regression analyses and discriminant analyses were performed for statistical comparison, and the tests were re-analysed with the neural network. Both the logistic regression analyses and the artificial neural network predicted the presence or absence of chromosomal abnormalities with more than 95% accuracy. The artificial neural network model yielded satisfactory results in distinguishing whether or not patients have a chromosomal abnormality.
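
    A minimal sketch of a classifier on the six reported inputs, using plain logistic regression (the study's statistical baseline) trained by gradient descent; the cohort below is random placeholder data, not patient data, and the 310-sample size is used only to dimension the example.

      import numpy as np

      rng = np.random.default_rng(4)
      # Placeholder cohort: height, total testicular volume, FSH, LH, testosterone,
      # ejaculate volume (columns), with a 0/1 label for chromosomal abnormality.
      X = rng.normal(size=(310, 6))
      y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(0, .5, 310) > 0.8).astype(float)

      def train_logistic(X, y, lr=0.1, epochs=500):
          Xb = np.column_stack([X, np.ones(len(X))])
          w = np.zeros(Xb.shape[1])
          for _ in range(epochs):
              p = 1.0 / (1.0 + np.exp(-Xb @ w))
              w -= lr * Xb.T @ (p - y) / len(y)       # gradient of the log-loss
          return w

      w = train_logistic(X, y)
      p = 1.0 / (1.0 + np.exp(-np.column_stack([X, np.ones(len(X))]) @ w))
      print("training accuracy:", round(float(np.mean((p > 0.5) == y)), 3))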

  20. Global exponential stability of inertial memristor-based neural networks with time-varying delays and impulses.

    PubMed

    Zhang, Wei; Huang, Tingwen; He, Xing; Li, Chuandong

    2017-11-01

    In this study, we investigate the global exponential stability of inertial memristor-based neural networks with impulses and time-varying delays. We construct inertial memristor-based neural networks based on the characteristics of the inertial neural networks and memristor. Impulses with and without delays are considered when modeling the inertial neural networks simultaneously, which are of great practical significance in the current study. Some sufficient conditions are derived under the framework of the Lyapunov stability method, as well as an extended Halanay differential inequality and a new delay impulsive differential inequality, which depend on impulses with and without delays, in order to guarantee the global exponential stability of the inertial memristor-based neural networks. Finally, two numerical examples are provided to illustrate the efficiency of the proposed methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Neural joint control for Space Shuttle Remote Manipulator System

    NASA Technical Reports Server (NTRS)

    Atkins, Mark A.; Cox, Chadwick J.; Lothers, Michael D.; Pap, Robert M.; Thomas, Charles R.

    1992-01-01

    Neural networks are being used to control a robot arm in a telerobotic operation. The concept uses neural networks for both joint and inverse kinematics in a robotic control application. An upper level neural network is trained to learn inverse kinematic mappings. The output, a trajectory, is then fed to the Decentralized Adaptive Joint Controllers. This neural network implementation has shown that the controlled arm recovers from unexpected payload changes while following the reference trajectory. The neural network-based decentralized joint controller is faster, more robust and efficient than conventional approaches. Implementations of this architecture are discussed that would relax assumptions about dynamics, obstacles, and heavy loads. This system is being developed to use with the Space Shuttle Remote Manipulator System.

  2. Application of a neural network for reflectance spectrum classification

    NASA Astrophysics Data System (ADS)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum anywhere from the ultraviolet to the thermal infrared regions, analyzing reflectance on a pixel-by-pixel basis. Inspired by the high performance that convolutional neural networks (CNNs) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using bidirectional reflectance distribution function (BRDF) data, we can reformulate the four-dimensional data into two dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples to improve the robustness of the neural network training. Unlike traditional classification, which uses hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods that utilize spatial features. Although the training process typically has a large computational cost, simple classifiers work well when subsequently using neural-network-generated features. Currently, most popular neural networks such as VGG, GoogLeNet, and AlexNet are trained on RGB spatial image data. Our approach aims to build a directional-reflectance-spectrum-based neural network to help us understand the problem from another perspective. At the end of this paper, we compare several classifiers and analyze the trade-offs among neural network parameters.
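
    A minimal sketch of the reformulation step described above: a tabulated BRDF indexed by incident and reflected directions is flattened into a two-dimensional directional-reflectance image that an image CNN can consume. The angular sampling grid is an illustrative assumption.

      import numpy as np

      # Toy BRDF table: (theta_i, phi_i, theta_o, phi_o) -> reflectance per channel.
      n_ti, n_pi, n_to, n_po, n_ch = 6, 12, 6, 12, 3
      rng = np.random.default_rng(5)
      brdf = rng.random((n_ti, n_pi, n_to, n_po, n_ch))

      # Reformulate 4 angular dimensions into 2: rows enumerate incident directions,
      # columns enumerate reflected directions; channels are kept as image channels.
      image = brdf.reshape(n_ti * n_pi, n_to * n_po, n_ch)
      print(image.shape)        # (72, 72, 3) -> a CNN-ready directional image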

  3. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    PubMed

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
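
    A minimal sketch of the two-phase idea on toy one-dimensional data: a simple competitive-learning pass places the RBF centres, then the output weights are obtained by linear least squares on the Gaussian design matrix. The learning rate, number of centres, and kernel width are illustrative assumptions, and the toy target stands in for measured biological activities.

      import numpy as np

      rng = np.random.default_rng(6)
      X = rng.uniform(-3, 3, size=(200, 1))
      y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)     # stand-in for activities

      # Phase 1: simple competitive learning moves each winning centre toward the sample.
      centres = X[rng.choice(len(X), 10, replace=False)].copy()
      for epoch in range(20):
          for x in X:
              k = np.argmin(np.linalg.norm(centres - x, axis=1))
              centres[k] += 0.05 * (x - centres[k])

      # Phase 2: RBF network with those centres; output weights by least squares.
      sigma = 0.7
      Phi = np.exp(-np.linalg.norm(X[:, None, :] - centres[None], axis=2)**2 / (2*sigma**2))
      w = np.linalg.lstsq(Phi, y, rcond=None)[0]
      pred = Phi @ w
      print("train RMSE:", round(float(np.sqrt(np.mean((pred - y) ** 2))), 3))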

  4. Nanophotonic particle simulation and inverse design using artificial neural networks.

    PubMed

    Peurifoy, John; Shen, Yichen; Jing, Li; Yang, Yi; Cano-Renteria, Fidel; DeLacy, Brendan G; Joannopoulos, John D; Tegmark, Max; Soljačić, Marin

    2018-06-01

    We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical.
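
    A minimal sketch of the inverse-design step: with the surrogate network's weights frozen, the input parameters are optimized by backpropagating the spectrum-matching error to the inputs, where the gradient is analytical. The tiny random network and target below are placeholders, not a trained scattering surrogate.

      import numpy as np

      rng = np.random.default_rng(7)
      W1, b1 = rng.normal(0, .5, (32, 4)), np.zeros(32)    # 4 shell parameters in
      W2, b2 = rng.normal(0, .5, (20, 32)), np.zeros(20)   # 20 spectrum samples out

      def forward(x):
          h = np.tanh(W1 @ x + b1)
          return W2 @ h + b2, h

      target = forward(rng.normal(size=4))[0]              # a reachable target spectrum

      # Freeze the weights and run gradient descent on the *input* parameters.
      x = rng.normal(size=4)
      for step in range(2000):
          s, h = forward(x)
          err = s - target                                  # d(0.5*||err||^2)/ds
          grad_x = W1.T @ ((W2.T @ err) * (1 - h ** 2))     # chain rule back to x
          x -= 0.01 * grad_x
      print("spectrum mismatch:", round(float(np.linalg.norm(forward(x)[0] - target)), 4))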

  5. Application of Artificial Neural Networks in the Heart Electrical Axis Position Conclusion Modeling

    NASA Astrophysics Data System (ADS)

    Bakanovskaya, L. N.

    2016-08-01

    The article addresses the construction of a model for determining the heart electrical axis position using an artificial neural network. The input signals of the neural network are the values of the Q, R, and S deflections, and the output signal is the value of the heart electrical axis position. Training of the network is carried out by the error propagation method. The test results allow the conclusion that the created neural network determines the heart electrical axis position with a high degree of accuracy.

  6. Enhancement of electrical signaling in neural networks on graphene films.

    PubMed

    Tang, Mingliang; Song, Qin; Li, Ning; Jiang, Ziyun; Huang, Rong; Cheng, Guosheng

    2013-09-01

    One of the key challenges for neural tissue engineering is to exploit supporting materials with robust functionalities not only to govern cell-specific behaviors, but also to form functional neural network. The unique electrical and mechanical properties of graphene imply it as a promising candidate for neural interfaces, but little is known about the details of neural network formation on graphene as a scaffold material for tissue engineering. Therapeutic regenerative strategies aim to guide and enhance the intrinsic capacity of the neurons to reorganize by promoting plasticity mechanisms in a controllable manner. Here, we investigated the impact of graphene on the formation and performance in the assembly of neural networks in neural stem cell (NSC) culture. Using calcium imaging and electrophysiological recordings, we demonstrate the capabilities of graphene to support the growth of functional neural circuits, and improve neural performance and electrical signaling in the network. These results offer a better understanding of interactions between graphene and NSCs, also they clearly present the great potentials of graphene as neural interface in tissue engineering. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. On the Relationships between Generative Encodings, Regularity, and Learning Abilities when Evolving Plastic Artificial Neural Networks

    PubMed Central

    Tonelli, Paul; Mouret, Jean-Baptiste

    2013-01-01

    A major goal of bio-inspired artificial intelligence is to design artificial neural networks with abilities that resemble those of animal nervous systems. It is commonly believed that two keys for evolving nature-like artificial neural networks are (1) the developmental process that links genes to nervous systems, which enables the evolution of large, regular neural networks, and (2) synaptic plasticity, which allows neural networks to change during their lifetime. So far, these two topics have been mainly studied separately. The present paper shows that they are actually deeply connected. Using a simple operant conditioning task and a classic evolutionary algorithm, we compare three ways to encode plastic neural networks: a direct encoding, a developmental encoding inspired by computational neuroscience models, and a developmental encoding inspired by morphogen gradients (similar to HyperNEAT). Our results suggest that using a developmental encoding could improve the learning abilities of evolved, plastic neural networks. Complementary experiments reveal that this result is likely the consequence of the bias of developmental encodings towards regular structures: (1) in our experimental setup, encodings that tend to produce more regular networks yield networks with better general learning abilities; (2) whatever the encoding is, networks that are more regular are statistically those with the best learning abilities. PMID:24236099

  8. Modular representation of layered neural networks.

    PubMed

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.
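    A rough analogue of the decomposition idea, assuming NetworkX is available: units become graph nodes, strong weights become edges, and modularity-based community detection groups units with similar strong connections. The thresholding rule, layer sizes, and random weights are placeholders; the paper's actual procedure differs in detail.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def unit_communities(weight_mats, threshold=0.5):
    """Treat every unit as a graph node, connect units joined by strong weights,
    and group them into communities of similar connection patterns."""
    # Node-id offset of each layer; weight_mats[l] maps layer l (columns) to layer l+1 (rows).
    offsets = [0]
    for W in weight_mats:
        offsets.append(offsets[-1] + W.shape[1])
    G = nx.Graph()
    for l, W in enumerate(weight_mats):
        rows, cols = np.where(np.abs(W) > threshold)
        for r, c in zip(rows, cols):
            G.add_edge(offsets[l] + c, offsets[l + 1] + r, weight=float(abs(W[r, c])))
    return list(greedy_modularity_communities(G, weight="weight"))

# Illustrative two-layer network with random weights.
rng = np.random.default_rng(0)
print(unit_communities([rng.normal(size=(6, 4)), rng.normal(size=(3, 6))]))
```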

  9. Bio-inspired spiking neural network for nonlinear systems control.

    PubMed

    Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M

    2018-08-01

    Spiking neural networks (SNNs) are the third generation of artificial neural networks. SNNs are the closest approximation to biological neural networks. SNNs use temporal spike trains to encode inputs and outputs, allowing faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which the performance of controllers developed by conventional techniques is not satisfactory or is difficult to implement. SNN-based controllers exploit their ability for online learning and self-adaptation to evolve when transferred from simulations to the real world. SNNs' inherently binary and temporal coding of information facilitates their hardware implementation compared to analog neurons. Biological neural networks often require a lower number of neurons compared to other controllers based on artificial neural networks. In this work, these neuronal systems are imitated to perform the control of non-linear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed. Particular attention has been paid to optimizing the structure and size of the neural network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to perform controller training. The efficiency of the proposed network has been verified in two examples of dynamic systems control. Simulations show that the proposed control based on SNNs exhibits superior performance compared to other approaches based on Neural Networks and SNNs. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Neural coordination can be enhanced by occasional interruption of normal firing patterns: a self-optimizing spiking neural network model.

    PubMed

    Woodward, Alexander; Froese, Tom; Ikegami, Takashi

    2015-02-01

    The state space of a conventional Hopfield network typically exhibits many different attractors of which only a small subset satisfies constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimization is also operative in real brains. Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model. In addition, by using this spiking neural network to emulate a Hopfield network with Hebbian learning, we attempt to make a connection between rate-based and temporal coding based neural systems. Although further work is required to make this model more realistic, it already suggests that the efficacy of the self-optimizing process is independent from the simplifying assumptions of a conventional Hopfield network. We also discuss natural and cultural processes that could be responsible for occasional alteration of neural firing patterns in actual brains. Copyright © 2014 Elsevier Ltd. All rights reserved.
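    A compact sketch of the rate-based version of this self-optimization process (not the spiking implementation): a Hopfield network repeatedly converges, reinforces the visited attractor with Hebbian learning, and is occasionally reset to a random state. Network size, learning rate, and the reset schedule are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
J = rng.normal(size=(N, N))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)            # fixed symmetric constraint weights
W = J.copy()                      # plastic weights modified by Hebbian learning

def converge(weights, s, steps=500):
    """Asynchronous updates until (approximately) settled in an attractor."""
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1 if weights[i] @ s >= 0 else -1
    return s

def energy(s):
    """Constraint satisfaction measured against the original weights J."""
    return -0.5 * s @ J @ s

s = rng.choice([-1, 1], N)
for trial in range(100):
    s = converge(W, s)
    W += 0.001 * np.outer(s, s)   # Hebbian reinforcement of the visited attractor
    np.fill_diagonal(W, 0)
    if trial % 5 == 0:            # occasional interruption: reset to a random state
        s = rng.choice([-1, 1], N)

s_final = converge(W, rng.choice([-1, 1], N))
print("constraint energy of a learned attractor:", energy(s_final))
```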

  11. Criteria for Choosing the Best Neural Network: Part 1

    DTIC Science & Technology

    1991-07-24

    In both classification and statistical settings, algorithms are presented for selecting the number of hidden-layer nodes in a three-layer feedforward neural network, with the goal of determining a parsimonious network for prediction and generalization from a given fixed learning sample.

  12. Keypoint Density-Based Region Proposal for Fine-Grained Object Detection and Classification Using Regions with Convolutional Neural Network Features

    DTIC Science & Technology

    2015-12-15

    Convolutional Neural Networks (CNNs) outperform conventional techniques on standard object detection and classification tasks; this work proposes a keypoint density-based region proposal scheme and evaluates detection accuracy and speed on the fine-grained Caltech-UCSD bird dataset (Wah et al., 2011).

  13. Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach.

    DTIC Science & Technology

    1998-05-01

    Report by Biing T. Guan, George Z. Gertner, and Alan B. ... The objective is to model training-site vegetation coverage probability based on past coverage; a literature survey was conducted to identify artificial neural network analysis techniques applicable for this task.

  14. Semantic Interpretation of An Artificial Neural Network

    DTIC Science & Technology

    1995-12-01

    ARTIFICIAL NEURAL NETWORK .7,’ THESIS Stanley Dale Kinderknecht Captain, USAF 770 DEAT7ET77,’H IR O C 7... ARTIFICIAL NEURAL NETWORK THESIS Stanley Dale Kinderknecht Captain, USAF AFIT/GCS/ENG/95D-07 Approved for public release; distribution unlimited The views...Government. AFIT/GCS/ENG/95D-07 SEMANTIC INTERPRETATION OF AN ARTIFICIAL NEURAL NETWORK THESIS Presented to the Faculty of the School of Engineering of

  15. Trimaran Resistance Artificial Neural Network

    DTIC Science & Technology

    2011-01-01

    Presented at the 11th International Conference on Fast Sea Transportation (FAST 2011), Honolulu, Hawaii, USA, September 2011. The parametric resistance model is based on an Artificial Neural Network and is restricted to the center- and side-hull configurations tested; its value is that it is able to ...

  16. Application of Fuzzy-Logic Controller and Neural Networks Controller in Gas Turbine Speed Control and Overheating Control and Surge Control on Transient Performance

    NASA Astrophysics Data System (ADS)

    Torghabeh, A. A.; Tousi, A. M.

    2007-08-01

    This paper presents Fuzzy Logic and Neural Network approaches to gas turbine fuel schedules. Modeling of the non-linear system using feed-forward artificial Neural Networks trained on data generated by a simulated gas turbine program is introduced. Two artificial Neural Networks are used, depicting the non-linear relationships between gas generator speed and fuel flow, and between turbine inlet temperature and fuel flow, respectively. Off-line fast simulations are used for engine controller design for a turbojet engine based on repeated simulation. The Mamdani and Sugeno models are used to express the Fuzzy system. The linguistic Fuzzy rules and membership functions are presented, and a Fuzzy controller is proposed to provide open-loop control for the gas turbine engine during acceleration and deceleration. MATLAB Simulink was used to apply the Fuzzy Logic and Neural Network analysis. Both systems were able to approximate the functions characterizing the acceleration and deceleration schedules. Surge and flame-out avoidance during the acceleration and deceleration phases are then checked. The turbine inlet temperature is also checked and controlled by the Neural Network controller. The outputs of these Fuzzy Logic and Neural Network controllers are validated and evaluated with GSP software. The validation results are used to evaluate the generalization ability of these artificial Neural Network and Fuzzy Logic controllers.

  17. A research using hybrid RBF/Elman neural networks for intrusion detection system secure model

    NASA Astrophysics Data System (ADS)

    Tong, Xiaojun; Wang, Zhu; Yu, Haining

    2009-10-01

    A hybrid RBF/Elman neural network model that can be employed for both anomaly detection and misuse detection is presented in this paper. IDSs using the hybrid neural network can detect temporally dispersed and collaborative attacks effectively because of the network's memory of past events. The RBF network is employed as a real-time pattern classifier and the Elman network is employed to retain the memory of past events. The IDSs using the hybrid neural network are evaluated against the intrusion detection evaluation data sponsored by the U.S. Defense Advanced Research Projects Agency (DARPA). Experimental results are presented as ROC curves. Experiments show that the IDSs using this hybrid neural network improve the detection rate and decrease the false positive rate effectively.

  18. Advanced obstacle avoidance for a laser based wheelchair using optimised Bayesian neural networks.

    PubMed

    Trieu, Hoang T; Nguyen, Hung T; Willey, Keith

    2008-01-01

    In this paper we present an advanced method of obstacle avoidance for a laser-based intelligent wheelchair using optimized Bayesian neural networks. Three neural networks are designed for three separate sub-tasks: passing through a doorway, corridor and wall following, and general obstacle avoidance. The accurate usable accessible space is determined by including the actual wheelchair dimensions in a real-time map used as input to each network. Data acquisitions are performed separately to collect the patterns required for the specified sub-tasks. A Bayesian framework is used to determine the optimal neural network structure in each case. These networks are then trained under the supervision of the Bayesian rule. Experimental results showed that, compared to the VFH algorithm, our neural networks navigated a smoother path following a near-optimum trajectory.

  19. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    NASA Astrophysics Data System (ADS)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
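    A generic rate-model sketch of a ring network with spike-frequency adaptation, intended only to illustrate the kind of dynamics analyzed: activity decays, is recurrently coupled through ring-shaped weights, receives tuned external input, and is inhibited by a slow adaptation variable driven by the firing rate. The specific equations, parameters, and coupling profile are assumptions, not the paper's analytical model.

```python
import numpy as np

# A generic rate model of a ring network with spike-frequency adaptation (SFA).
N = 100
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
W = (2.0 * np.cos(theta[:, None] - theta[None, :]) - 0.5) / N   # ring-shaped coupling

def simulate(stimulus_angle, g_sfa=0.5, tau_a=5.0, T=2000, dt=0.01):
    u = np.zeros(N)                                   # activity variable
    a = np.zeros(N)                                   # adaptation variable
    I = np.exp(np.cos(theta - stimulus_angle) - 1.0)  # tuned external input
    for _ in range(T):
        r = np.maximum(u, 0.0)                        # firing rate
        u += dt * (-u + W @ r + I - a)                # recurrent dynamics minus adaptation
        a += dt * (-a + g_sfa * r) / tau_a            # slow, activity-driven adaptation
    return np.maximum(u, 0.0)

r = simulate(np.pi / 3)
print("activity bump peaks at (rad):", theta[np.argmax(r)])
```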

  20. Ultrasonographic Diagnosis of Cirrhosis Based on Preprocessing Using Pyramid Recurrent Neural Network

    NASA Astrophysics Data System (ADS)

    Lu, Jianming; Liu, Jiang; Zhao, Xueqin; Yahagi, Takashi

    In this paper, a pyramid recurrent neural network is applied to characterize hepatic parenchymal diseases in ultrasonic B-scan texture. The cirrhotic parenchymal diseases are classified into 4 types according to the size of hypoechoic nodular lesions. The B-mode patterns are wavelet transformed, and the compressed data are then fed into a pyramid neural network to diagnose the type of cirrhotic disease. Compared with 3-layer neural networks, the performance of the proposed pyramid recurrent neural network is improved by utilizing the lower layer effectively. The simulation result shows that the proposed system is suitable for the diagnosis of cirrhotic disease.

  1. Application of artificial neural networks to composite ply micromechanics

    NASA Technical Reports Server (NTRS)

    Brown, D. A.; Murthy, P. L. N.; Berke, L.

    1991-01-01

    Artificial neural networks can provide improved computational efficiency relative to existing methods when an algorithmic description of functional relationships is either totally unavailable or is complex in nature. For complex calculations, significant reductions in elapsed computation time are possible. The primary goal is to demonstrate the applicability of artificial neural networks to composite material characterization. As a test case, a neural network was trained to accurately predict composite hygral, thermal, and mechanical properties when provided with basic information concerning the environment, constituent materials, and component ratios used in the creation of the composite. A brief introduction on neural networks is provided along with a description of the project itself.

  2. Using Neural Networks for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William

    1998-01-01

    This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.

  3. Decoding small surface codes with feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that they can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.

  4. A Comparison of Conventional Linear Regression Methods and Neural Networks for Forecasting Educational Spending.

    ERIC Educational Resources Information Center

    Baker, Bruce D.; Richards, Craig E.

    1999-01-01

    Applies neural network methods for forecasting 1991-95 per-pupil expenditures in U.S. public elementary and secondary schools. Forecasting models included the National Center for Education Statistics' multivariate regression model and three neural architectures. Regarding prediction accuracy, neural network results were comparable or superior to…

  5. An Artificial Neural Network Controller for Intelligent Transportation Systems Applications

    DOT National Transportation Integrated Search

    1996-01-01

    An Autonomous Intelligent Cruise Control (AICC) has been designed using a feedforward artificial neural network, as an example for utilizing artificial neural networks for nonlinear control problems arising in intelligent transportation systems appli...

  6. Deconvolution using a neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.

  7. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    PubMed

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

    In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss augmented inference and the backpropagation calculates the gradient from the loss augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network called a Structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Flank wears Simulation by using back propagation neural network when cutting hardened H-13 steel in CNC End Milling

    NASA Astrophysics Data System (ADS)

    Hazza, Muataz Hazza F. Al; Adesta, Erry Y. T.; Riza, Muhammad

    2013-12-01

    High speed milling has many advantages, such as a higher removal rate and high productivity. However, higher cutting speed increases the flank wear rate and thus reduces the cutting tool life. Therefore, estimating and predicting the flank wear length at an early stage reduces the risk of unacceptable tooling cost. This research presents a neural network model for predicting and simulating the flank wear in the CNC end milling process. A set of sparse experiments on finish end milling of AISI H13 at a hardness of 48 HRC was conducted to measure the flank wear length. The measured data were then used to train the developed neural network model. An artificial neural network (ANN) was applied to predict the flank wear length. The feed-forward back-propagation network uses a hidden layer of twenty neurons and was designed with the MATLAB Neural Network Toolbox. The results show a high correlation between the predicted and the observed flank wear, which indicates the validity of the model.
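    A hedged sketch of such a regression model using scikit-learn rather than the MATLAB toolbox mentioned above: a feed-forward back-propagation network with one hidden layer of twenty neurons fitted to machining parameters. The input features and the synthetic wear values are illustrative stand-ins for the experimental data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative stand-in data: [cutting speed, feed rate, depth of cut] -> flank wear (mm).
rng = np.random.default_rng(0)
X = rng.uniform([150.0, 0.05, 0.2], [250.0, 0.15, 1.0], size=(40, 3))
wear = 0.001 * X[:, 0] + 0.5 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.01, 40)

# Feed-forward back-propagation network with a single hidden layer of 20 neurons.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0))
model.fit(X[:30], wear[:30])
print("predicted flank wear:", model.predict(X[30:]))
```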

  9. Using an Extended Kalman Filter Learning Algorithm for Feed-Forward Neural Networks to Describe Tracer Correlations

    NASA Technical Reports Server (NTRS)

    Lary, David J.; Mussa, Yussuf

    2004-01-01

    In this study a new extended Kalman filter (EKF) learning algorithm for feed-forward neural networks (FFN) is used. With the EKF approach, the training of the FFN can be seen as state estimation for a non-linear stationary process. The EKF method gives excellent convergence performances provided that there is enough computer core memory and that the machine precision is high. Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). The neural network was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9997. The neural network Fortran code used is available for download.

  10. Neural network classification of clinical neurophysiological data for acute care monitoring

    NASA Technical Reports Server (NTRS)

    Sgro, Joseph

    1994-01-01

    The purpose of neurophysiological monitoring of the 'acute care' patient is to allow the accurate recognition of changing or deteriorating neurological function as close to the moment of occurrence as possible, thus permitting immediate intervention. Results confirm that: (1) neural networks are able to accurately identify electroencephalogram (EEG) patterns and evoked potential (EP) wave components, and to measure EP waveform latencies and amplitudes; (2) neural networks are able to accurately detect EP and EEG recordings that have been contaminated by noise; (3) the best performance was obtained consistently with the back propagation network for EPs and the HONN for EEGs; (4) neural networks performed consistently better than the other methods evaluated; and (5) neural network EEG and EP analyses are readily performed on multichannel data.

  11. Neural computation of arithmetic functions

    NASA Technical Reports Server (NTRS)

    Siu, Kai-Yeung; Bruck, Jehoshua

    1990-01-01

    An area of application of neural networks is considered. A neuron is modeled as a linear threshold gate, and the network architecture considered is the layered feedforward network. It is shown how common arithmetic functions such as multiplication and sorting can be efficiently computed in a shallow neural network. Some known results are improved by showing that the product of two n-bit numbers and sorting of n n-bit numbers can be computed by a polynomial-size neural network using only four and five unit delays, respectively. Moreover, the weights of each threshold element in the neural networks require O(log n)-bit (instead of n-bit) accuracy. These results can be extended to more complicated functions such as multiple products, division, rational functions, and approximation of analytic functions.
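    To make the linear-threshold-gate model concrete, the sketch below implements a single gate that decides whether one n-bit number is at least as large as another, using weights of magnitude 2^i. This textbook construction uses n-bit weights; the result cited above is stronger, showing that O(log n)-bit weights suffice for such arithmetic, so the code only illustrates the neuron model, not that refinement.

```python
import numpy as np

def threshold_gate(weights, threshold, bits):
    """Linear threshold gate: outputs 1 iff the weighted sum of its binary inputs
    reaches the threshold."""
    return int(np.dot(weights, bits) >= threshold)

def geq(x_bits, y_bits):
    """One threshold gate deciding x >= y for two n-bit numbers (MSB first),
    using weights +/- 2^i."""
    n = len(x_bits)
    powers = 2 ** np.arange(n - 1, -1, -1)
    weights = np.concatenate([powers, -powers])
    return threshold_gate(weights, 0, np.concatenate([x_bits, y_bits]))

x = np.array([1, 0, 1, 1])   # 11
y = np.array([1, 1, 0, 0])   # 12
print(geq(x, y))             # prints 0, since 11 < 12
```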

  12. Recognition of Telugu characters using neural networks.

    PubMed

    Sukhaswami, M B; Seetharamulu, P; Pujari, A K

    1995-09-01

    The aim of the present work is to recognize printed and handwritten Telugu characters using artificial neural networks (ANNs). Earlier work on recognition of Telugu characters has been done using conventional pattern recognition techniques. We make an initial attempt here of using neural networks for recognition with the aim of improving upon earlier methods which do not perform effectively in the presence of noise and distortion in the characters. The Hopfield model of neural network working as an associative memory is chosen for recognition purposes initially. Due to limitation in the capacity of the Hopfield neural network, we propose a new scheme named here as the Multiple Neural Network Associative Memory (MNNAM). The limitation in storage capacity has been overcome by combining multiple neural networks which work in parallel. It is also demonstrated that the Hopfield network is suitable for recognizing noisy printed characters as well as handwritten characters written by different "hands" in a variety of styles. Detailed experiments have been carried out using several learning strategies and results are reported. It is shown here that satisfactory recognition is possible using the proposed strategy. A detailed preprocessing scheme of the Telugu characters from digitized documents is also described.
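    A minimal Hopfield associative memory in the spirit described: bipolar glyph patterns are stored by Hebbian learning and recalled from noisy probes. The 5x5 random "glyphs" stand in for Telugu character bitmaps, and the MNNAM idea of running several such networks in parallel is not shown; for a small number of weakly correlated patterns the single network below typically recovers the stored pattern.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian storage of bipolar (+1/-1) patterns in a Hopfield associative memory."""
    P = np.array(patterns)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0)
    return W

def recall(W, probe, steps=10):
    """Synchronous updates; for a few weakly correlated patterns this typically
    converges to the nearest stored pattern."""
    s = probe.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

rng = np.random.default_rng(0)
glyphs = [rng.choice([-1, 1], 25) for _ in range(2)]   # 5x5 stand-in character bitmaps
W = train_hopfield(glyphs)

noisy = glyphs[0].copy()
noisy[rng.choice(25, 5, replace=False)] *= -1          # flip five "pixels"
print("recovered stored glyph:", bool(np.array_equal(recall(W, noisy), glyphs[0])))
```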

  13. Alignment of dynamic networks.

    PubMed

    Vijayan, V; Critchlow, D; Milenkovic, T

    2017-07-15

    Network alignment (NA) aims to find a node mapping that conserves similar regions between compared networks. NA is applicable to many fields, including computational biology, where NA can guide the transfer of biological knowledge from well- to poorly-studied species across aligned network regions. Existing NA methods can only align static networks. However, most complex real-world systems evolve over time and should thus be modeled as dynamic networks. We hypothesize that aligning dynamic network representations of evolving systems will produce superior alignments compared to aligning the systems' static network representations, as is currently done. For this purpose, we introduce the first ever dynamic NA method, DynaMAGNA++. This proof-of-concept dynamic NA method is an extension of a state-of-the-art static NA method, MAGNA++. Even though both MAGNA++ and DynaMAGNA++ optimize edge as well as node conservation across the aligned networks, MAGNA++ conserves static edges and similarity between static node neighborhoods, while DynaMAGNA++ conserves dynamic edges (events) and similarity between evolving node neighborhoods. For this purpose, we introduce the first ever measure of dynamic edge conservation and rely on our recent measure of dynamic node conservation. Importantly, the two dynamic conservation measures can be optimized with any state-of-the-art NA method and not just MAGNA++. We confirm our hypothesis that dynamic NA is superior to static NA, on synthetic and real-world networks, in computational biology and social domains. DynaMAGNA++ is parallelized and has a user-friendly graphical interface. http://nd.edu/∼cone/DynaMAGNA++/. tmilenko@nd.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  14. Alignment of dynamic networks

    PubMed Central

    Vijayan, V.; Critchlow, D.; Milenković, T.

    2017-01-01

    Abstract Motivation: Network alignment (NA) aims to find a node mapping that conserves similar regions between compared networks. NA is applicable to many fields, including computational biology, where NA can guide the transfer of biological knowledge from well- to poorly-studied species across aligned network regions. Existing NA methods can only align static networks. However, most complex real-world systems evolve over time and should thus be modeled as dynamic networks. We hypothesize that aligning dynamic network representations of evolving systems will produce superior alignments compared to aligning the systems’ static network representations, as is currently done. Results: For this purpose, we introduce the first ever dynamic NA method, DynaMAGNA++. This proof-of-concept dynamic NA method is an extension of a state-of-the-art static NA method, MAGNA++. Even though both MAGNA++ and DynaMAGNA++ optimize edge as well as node conservation across the aligned networks, MAGNA++ conserves static edges and similarity between static node neighborhoods, while DynaMAGNA++ conserves dynamic edges (events) and similarity between evolving node neighborhoods. For this purpose, we introduce the first ever measure of dynamic edge conservation and rely on our recent measure of dynamic node conservation. Importantly, the two dynamic conservation measures can be optimized with any state-of-the-art NA method and not just MAGNA++. We confirm our hypothesis that dynamic NA is superior to static NA, on synthetic and real-world networks, in computational biology and social domains. DynaMAGNA++ is parallelized and has a user-friendly graphical interface. Availability and implementation: http://nd.edu/∼cone/DynaMAGNA++/. Contact: tmilenko@nd.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881980

  15. Multistability in bidirectional associative memory neural networks

    NASA Astrophysics Data System (ADS)

    Huang, Gan; Cao, Jinde

    2008-04-01

    In this Letter, the multistability issue is studied for Bidirectional Associative Memory (BAM) neural networks. Based on the existence and stability analysis of the neural networks with or without delay, it is found that the 2n-dimensional networks can have 3^n equilibria, of which 2^n are locally exponentially stable, where each layer of the BAM network has n neurons. Furthermore, the results have been extended to (n+m)-dimensional BAM neural networks, where there are n and m neurons on the two layers, respectively. Finally, two numerical examples are presented to illustrate the validity of our results.
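    For readers unfamiliar with BAM networks, the sketch below shows the basic two-layer recall dynamics that the stability analysis above concerns: paired patterns are stored as a sum of outer products, and the state bounces between the layers until it settles. Pattern sizes and counts are arbitrary, and the delayed, continuous-time dynamics analyzed in the Letter are not modeled; recovery of the stored pair depends on how correlated the patterns are.

```python
import numpy as np

sgn = lambda v: np.where(v >= 0, 1, -1)   # avoid zero outputs of np.sign

def train_bam(X, Y):
    """Bidirectional associative memory: W accumulates outer products of paired patterns."""
    return sum(np.outer(y, x) for x, y in zip(X, Y))

def bam_recall(W, x, iters=5):
    """Bounce the state between the two layers until it settles on a stored pair."""
    y = sgn(W @ x)
    for _ in range(iters):
        x = sgn(W.T @ y)
        y = sgn(W @ x)
    return x, y

rng = np.random.default_rng(0)
X = [rng.choice([-1, 1], 8) for _ in range(2)]   # layer-1 patterns (n = 8 neurons)
Y = [rng.choice([-1, 1], 6) for _ in range(2)]   # layer-2 patterns (m = 6 neurons)
W = train_bam(X, Y)

x_noisy = X[0].copy()
x_noisy[:2] *= -1                                # corrupt two components of the probe
x_out, y_out = bam_recall(W, x_noisy)
print("recovered paired pattern:", bool(np.array_equal(y_out, Y[0])))
```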

  16. Application of Neural Network Optimized by Mind Evolutionary Computation in Building Energy Prediction

    NASA Astrophysics Data System (ADS)

    Song, Chen; Zhong-Cheng, Wu; Hong, Lv

    2018-03-01

    Building energy forecasting plays an important role in energy management and planning. Using a mind evolutionary algorithm to find the optimal network weights and thresholds for a BP neural network can overcome the BP network's tendency to fall into local minima. The optimized network is used for time-series prediction and for same-month prediction, yielding two predicted values. The two predicted values are then fed into a neural network to obtain the final forecast value. The effectiveness of the method was verified by experiments with the energy data of three buildings in Hefei.

  17. Verification and Validation Methodology of Real-Time Adaptive Neural Networks for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Gupta, Pramod; Loparo, Kenneth; Mackall, Dale; Schumann, Johann; Soares, Fola

    2004-01-01

    Recent research has shown that adaptive neural-based control systems are very effective in restoring stability and control of an aircraft in the presence of damage or failures. The application of an adaptive neural network within a flight-critical control system requires a thorough and proven process to ensure safe and proper flight operation. Unique testing tools have been developed as part of a process to perform verification and validation (V&V) of the real-time adaptive neural networks used in a recent adaptive flight control system and to evaluate the performance of the online-trained neural networks. The tools will help with FAA certification and with the successful deployment of neural network based adaptive controllers in safety-critical applications. The process to perform verification and validation is evaluated against a typical neural adaptive controller and the results are discussed.

  18. Pulse Coupled Neural Networks for the Segmentation of Magnetic Resonance Brain Images.

    DTIC Science & Technology

    1996-12-01

    Pulse Coupled Neural Networks for the Segmentation of Magnetic Resonance Brain Images. Thesis by Shane Lee Abrahamson, First Lieutenant, USAF (AFIT/GCS/ENG/96D-01). This research develops an automated method for segmenting Magnetic Resonance (MR) brain images based on Pulse Coupled Neural Networks (PCNN).

  19. Angle of Arrival Detection Through Artificial Neural Network Analysis of Optical Fiber Intensity Patterns

    DTIC Science & Technology

    1990-12-01

    Angle of Arrival Detection Through Artificial Neural Network Analysis of Optical Fiber Intensity Patterns. Thesis by Scott Thomas, Captain, USAF (AFIT/GE/ENG/90D-62); approved for public release, distribution unlimited. The work applies artificial neural network analysis to intensity patterns from the optical sensors used in United States Air Force reconnaissance.

  20. Reconfigurable Flight Control Design using a Robust Servo LQR and Radial Basis Function Neural Networks

    NASA Technical Reports Server (NTRS)

    Burken, John J.

    2005-01-01

    This viewgraph presentation reviews the use of a Robust Servo Linear Quadratic Regulator (LQR) and a Radial Basis Function (RBF) Neural Network in reconfigurable flight control designs for adaptation to an aircraft part failure. The method uses a robust LQR servomechanism design with model reference adaptive control and RBF neural networks. During the failure, the LQR servomechanism behaved well, and using the neural networks improved the tracking.

  1. Bio-Inspired Computation: Clock-Free, Grid-Free, Scale-Free and Symbol Free

    DTIC Science & Technology

    2015-06-11

    Related publications include: "...for Prediction Tasks in Spiking Neural Networks," Artificial Neural Networks and Machine Learning–ICANN 2014, Springer, 2014, pp. 635-642; Gibson, T., Henderson, J.A. and Wiles, J., "Predicting temporal sequences using an event-based spiking neural network incorporating learnable delays," IEEE; and Gibson, T. and Wiles, J., "Predicting temporal sequences using an event-based spiking neural network incorporating learnable delays," Adelaide (2014 Jan).

  2. A neural network architecture for implementation of expert systems for real time monitoring

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.

    1991-01-01

    Since neural networks have the advantages of massive parallelism and simple architecture, they are good tools for implementing real time expert systems. In a rule based expert system, the antecedents of rules are in the conjunctive or disjunctive form. We constructed a multilayer feedforward type network in which neurons represent AND or OR operations of rules. Further, we developed a translator which can automatically map a given rule base into the network. Also, we proposed a new and powerful yet flexible architecture that combines the advantages of both fuzzy expert systems and neural networks. This architecture uses the fuzzy logic concepts to separate input data domains into several smaller and overlapped regions. Rule-based expert systems for time critical applications using neural networks, the automated implementation of rule-based expert systems with neural nets, and fuzzy expert systems vs. neural nets are covered.

  3. Stability and synchronization analysis of inertial memristive neural networks with time delays.

    PubMed

    Rakkiyappan, R; Premalatha, S; Chandrasekar, A; Cao, Jinde

    2016-10-01

    This paper is concerned with the problem of stability and pinning synchronization of a class of inertial memristive neural networks with time delay. In contrast to general inertial neural networks, inertial memristive neural networks exhibit synchronization and stability behaviors that depend on the physical properties of memristors and on differential inclusion theory. By choosing an appropriate variable transformation, the original system can be transformed into first order differential equations. Then, several sufficient conditions for the stability of inertial memristive neural networks are derived using the matrix measure and the Halanay inequality. The obtained criteria reduce the computational burden of the theoretical analysis. In addition, pinning synchronization of an array of linearly coupled inertial memristive neural networks is evaluated, and the corresponding condition is derived using the matrix measure strategy. Finally, two numerical simulations are presented to show the effectiveness of the theoretical results.

  4. A comparison of neural network architectures for the prediction of MRR in EDM

    NASA Astrophysics Data System (ADS)

    Jena, A. R.; Das, Raja

    2017-11-01

    The aim of the research work is to predict the material removal rate of a work-piece in electrical discharge machining (EDM). Here, an effort has been made to predict the material removal rate through a back-propagation neural network (BPN) and a radial basis function neural network (RBFN) for a work-piece of AISI D2 steel. The input parameters of the architectures are discharge current (Ip), pulse duration (Ton), and duty cycle (τ), used to obtain the material removal rate of the work-piece as output. It has been observed that the radial basis function neural network is comparatively faster than the back-propagation neural network, but the back-propagation neural network produces values closer to the real ones. Therefore, BPN may be considered the better choice in this architecture for consistent prediction, saving the time and money of conducting experiments.

  5. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance

    DOE PAGES

    Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy

    2016-10-18

    There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.
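    A schematic of the multiplicative output layer described above: a small network maps scalar invariants to coefficients g_i, and the predicted anisotropy is the coefficient-weighted sum of basis tensors T_i, so the tensor structure (and hence the invariance) is carried by the basis rather than learned. The weights, invariants, and 3x3 "basis tensors" below are random placeholders, not the integrity basis used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def coefficient_net(invariants, W1, W2):
    """Stand-in MLP mapping scalar flow invariants to basis coefficients g_i."""
    return W2 @ np.tanh(W1 @ invariants)

def anisotropy(invariants, basis_tensors, W1, W2):
    """Multiplicative output layer: b = sum_i g_i(invariants) * T_i.
    The tensor structure is carried entirely by the basis tensors T_i, so the
    prediction transforms correctly under rotations regardless of the learned g_i."""
    g = coefficient_net(invariants, W1, W2)
    return np.tensordot(g, basis_tensors, axes=1)

n_basis, n_inv = 4, 5
W1 = 0.1 * rng.normal(size=(16, n_inv))       # untrained placeholder weights
W2 = 0.1 * rng.normal(size=(n_basis, 16))
T = rng.normal(size=(n_basis, 3, 3))          # placeholder basis tensors (3x3 each)
lam = rng.normal(size=n_inv)                  # placeholder invariants
print(anisotropy(lam, T, W1, W2))             # predicted 3x3 anisotropy tensor
```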

  6. Synchronization of Switched Neural Networks With Communication Delays via the Event-Triggered Control.

    PubMed

    Wen, Shiping; Zeng, Zhigang; Chen, Michael Z Q; Huang, Tingwen

    2017-10-01

    This paper addresses the issue of synchronization of switched delayed neural networks with communication delays via event-triggered control. For synchronizing coupled switched neural networks, we propose a novel event-triggered control law which could greatly reduce the number of control updates for synchronization tasks of coupled switched neural networks involving embedded microprocessors with limited on-board resources. The control signals are driven by properly defined events, which depend on the measurement errors and current-sampled states. By using a delay system method, a novel model of synchronization error system with delays is proposed with the communication delays and event-triggered control in the unified framework for coupled switched neural networks. The criteria are derived for the event-triggered synchronization analysis and control synthesis of switched neural networks via the Lyapunov-Krasovskii functional method and free weighting matrix approach. A numerical example is elaborated on to illustrate the effectiveness of the derived results.

  7. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ling, Julia; Kurzawski, Andrew; Templeton, Jeremy

    There exists significant demand for improved Reynolds-averaged Navier–Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. Furthermore, the Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.

  8. A One-Layer Recurrent Neural Network for Real-Time Portfolio Optimization With Probability Criterion.

    PubMed

    Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen

    2013-02-01

    This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, the traditional optimization techniques are no longer applicable for solving this problem. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region. It leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure the optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network guarantees to get the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.

  9. Application of two neural network paradigms to the study of voluntary employee turnover.

    PubMed

    Somers, M J

    1999-04-01

    Two neural network paradigms--multilayer perceptron and learning vector quantization--were used to study voluntary employee turnover with a sample of 577 hospital employees. The objectives of the study were twofold. The 1st was to assess whether neural computing techniques offered greater predictive accuracy than did conventional turnover methodologies. The 2nd was to explore whether computer models of turnover based on neural network technologies offered new insights into turnover processes. When compared with logistic regression analysis, both neural network paradigms provided considerably more accurate predictions of turnover behavior, particularly with respect to the correct classification of leavers. In addition, these neural network paradigms captured nonlinear relationships that are relevant for theory development. Results are discussed in terms of their implications for future research.

  10. Polarity-specific high-level information propagation in neural networks.

    PubMed

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals.

  11. Polarity-specific high-level information propagation in neural networks

    PubMed Central

    Lin, Yen-Nan; Chang, Po-Yen; Hsiao, Pao-Yueh; Lo, Chung-Chuan

    2014-01-01

    Analyzing the connectome of a nervous system provides valuable information about the functions of its subsystems. Although much has been learned about the architectures of neural networks in various organisms by applying analytical tools developed for general networks, two distinct and functionally important properties of neural networks are often overlooked. First, neural networks are endowed with polarity at the circuit level: Information enters a neural network at input neurons, propagates through interneurons, and leaves via output neurons. Second, many functions of nervous systems are implemented by signal propagation through high-level pathways involving multiple and often recurrent connections rather than by the shortest paths between nodes. In the present study, we analyzed two neural networks: the somatic nervous system of Caenorhabditis elegans (C. elegans) and the partial central complex network of Drosophila, in light of these properties. Specifically, we quantified high-level propagation in the vertical and horizontal directions: the former characterizes how signals propagate from specific input nodes to specific output nodes and the latter characterizes how a signal from a specific input node is shared by all output nodes. We found that the two neural networks are characterized by very efficient vertical and horizontal propagation. In comparison, classic small-world networks show a trade-off between vertical and horizontal propagation; increasing the rewiring probability improves the efficiency of horizontal propagation but worsens the efficiency of vertical propagation. Our result provides insights into how the complex functions of natural neural networks may arise from a design that allows them to efficiently transform and combine input signals. PMID:24672472

  12. Devices and circuits for nanoelectronic implementation of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Turel, Ozgur

    Biological neural networks perform complicated information processing tasks at speeds beyond those of conventional computers running conventional algorithms. This has inspired researchers to look into the way these networks function and to propose artificial networks that mimic their behavior. Unfortunately, most artificial neural networks, whether software or hardware, provide neither the speed nor the complexity of a human brain. Nanoelectronics, with the high density and low power dissipation it provides, may be used to develop more efficient artificial neural networks. This work consists of two major contributions in this direction. The first is the proposal of the CMOL concept, hybrid CMOS-molecular hardware [1-8]. CMOL may circumvent most of the problems posed by molecular devices, such as low yield, yet provide high active device density, ~10^12/cm^2. The second contribution is CrossNets, artificial neural networks that are based on CMOL. We showed that CrossNets, with their fault tolerance and exceptional speed (~4 to 6 orders of magnitude faster than biological neural networks), can perform any task any artificial neural network can perform. Moreover, there is hope that if their integration scale is increased to that of the human cerebral cortex (~10^10 neurons and ~10^14 synapses), they may be capable of performing more advanced tasks.

  13. Artificial neural network intelligent method for prediction

    NASA Astrophysics Data System (ADS)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators based on the raw data, together with the current day of the week, is presented. The developed network is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer, and an output layer. The training method is an algorithm with back-propagation of the error. The main advantage of the developed system is the self-determination of the optimal topology of the neural network, which makes it flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  14. Seismic signal auto-detecting from different features by using Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Huang, Y.; Zhou, Y.; Yue, H.; Zhou, S.

    2017-12-01

    We apply a Convolutional Neural Network to detect several features of seismic data and compare their efficiency. The features include whether a signal is a seismic signal or noise and the arrival times of the P and S phases, and each feature corresponds to a Convolutional Neural Network. We first use the traditional STA/LTA method to recognize some events and then use template matching to find more events as a training set for the neural network. To make the training set more varied, we add noise to the seismic data and generate synthetic seismic data and noise. The 3-component raw signal and a time-frequency analysis are used as the input data for our neural network. Training is performed on GPUs to achieve efficient convergence. Our method improved the precision in comparison with STA/LTA and template matching. We will move to recurrent neural networks to see whether this kind of network is better at detecting P and S phases.
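    A minimal PyTorch sketch of the event-versus-noise classifier described above: a small 1-D CNN over 3-component waveform windows with a cross-entropy objective. The window length, channel counts, and random stand-in data are assumptions; the time-frequency input branch and the phase-picking networks are not shown.

```python
import torch
import torch.nn as nn

class EventClassifier(nn.Module):
    """Minimal 1-D CNN labelling a 3-component waveform window as event vs. noise."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32, 2)

    def forward(self, x):                    # x: (batch, 3 components, samples)
        h = self.features(x).mean(dim=2)     # global average pooling over time
        return self.classifier(h)

model = EventClassifier()
windows = torch.randn(8, 3, 400)             # random stand-in waveform windows
labels = torch.randint(0, 2, (8,))           # random stand-in event/noise labels
loss = nn.CrossEntropyLoss()(model(windows), labels)
loss.backward()                              # one back-propagation step of training
print(float(loss))
```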

  15. Geometrical structure of Neural Networks: Geodesics, Jeffrey's Prior and Hyper-ribbons

    NASA Astrophysics Data System (ADS)

    Hayden, Lorien; Alemi, Alex; Sethna, James

    2014-03-01

    Neural networks are learning algorithms which are employed in a host of Machine Learning problems including speech recognition, object classification and data mining. In practice, neural networks learn a low dimensional representation of high dimensional data and define a model manifold which is an embedding of this low dimensional structure in the higher dimensional space. In this work, we explore the geometrical structure of a neural network model manifold. A Stacked Denoising Autoencoder and a Deep Belief Network are trained on handwritten digits from the MNIST database. Construction of geodesics along the surface and of slices taken from the high dimensional manifolds reveal a hierarchy of widths corresponding to a hyper-ribbon structure. This property indicates that neural networks fall into the class of sloppy models, in which certain parameter combinations dominate the behavior. Employing this information could prove valuable in designing both neural network architectures and training algorithms. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144153.

  16. Neural networks within multi-core optic fibers

    PubMed Central

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-01-01

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks. PMID:27383911

  17. Neural networks within multi-core optic fibers.

    PubMed

    Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael

    2016-07-07

    Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks.

  18. Method of gear fault diagnosis based on EEMD and improved Elman neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Qi; Zhao, Wei; Xiao, Shungen; Song, Mengmeng

    2017-05-01

    Fault information for gear damage such as cracks and wear is usually difficult to diagnose because the fault signatures are weak, so a gear fault diagnosis method based on the fusion of EEMD and an improved Elman neural network is proposed. A number of IMF components are obtained by decomposing the denoised fault signals with EEMD, and the pseudo IMF components are eliminated using the correlation coefficient method to obtain the effective IMF components. The energy characteristic value of each effective component is calculated as the input feature of the Elman neural network; the improved Elman neural network extends the standard network by adding a feedback factor. Fault data for a normal gear, broken teeth, a cracked gear and a worn gear were collected in the field and analyzed with the diagnostic method proposed in this paper. The results show that, compared with the standard Elman neural network, the improved Elman neural network achieves higher diagnostic efficiency.
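
    As a rough illustration of the feature-extraction step only (not the authors' code), the sketch below assumes the IMFs have already been obtained from EEMD, removes pseudo-components by their correlation with the original signal, and returns normalized energies as the network input vector; the correlation threshold is an arbitrary placeholder.

        # Sketch of the feature-extraction step: select effective IMFs by
        # correlation with the original signal, use normalized IMF energies
        # as the input feature vector for the Elman network.
        import numpy as np

        def imf_energy_features(signal, imfs, corr_threshold=0.1):
            """signal: (n,) denoised fault signal; imfs: (k, n) EEMD components."""
            effective = []
            for imf in imfs:
                r = np.corrcoef(signal, imf)[0, 1]        # correlation coefficient
                if abs(r) >= corr_threshold:              # keep only effective IMFs
                    effective.append(imf)
            energies = np.array([np.sum(imf ** 2) for imf in effective])
            return energies / energies.sum()              # normalized energy features

        # Illustrative call on synthetic stand-in data.
        t = np.linspace(0, 1, 2048)
        sig = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 300 * t)
        fake_imfs = np.vstack([np.sin(2 * np.pi * 300 * t),
                               np.sin(2 * np.pi * 50 * t),
                               0.01 * np.random.randn(t.size)])   # pseudo component
        print(imf_energy_features(sig, fake_imfs))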

  19. Anfis Approach for Sssc Controller Design for the Improvement of Transient Stability Performance

    NASA Astrophysics Data System (ADS)

    Khuntia, Swasti R.; Panda, Sidhartha

    2011-06-01

    In this paper, an Adaptive Neuro-Fuzzy Inference System (ANFIS) method based on the Artificial Neural Network (ANN) is applied to design a Static Synchronous Series Compensator (SSSC)-based controller for the improvement of transient stability. The proposed ANFIS controller combines the advantages of a fuzzy controller with the quick response and adaptability of an ANN. The ANFIS structures were trained using a database generated by the fuzzy controller of the SSSC. It is observed that the proposed SSSC controller greatly improves the voltage profile of the system under severe disturbances. The results show that the proposed SSSC-based ANFIS controller is robust to fault location and to changes in operating conditions. Further, the results obtained are compared with conventional lead-lag controllers for the SSSC.

  20. Nanophotonic particle simulation and inverse design using artificial neural networks

    PubMed Central

    Peurifoy, John; Shen, Yichen; Jing, Li; Cano-Renteria, Fidel; DeLacy, Brendan G.; Joannopoulos, John D.; Tegmark, Max

    2018-01-01

    We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical. PMID:29868640
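
    The inverse-design step can be pictured with the following minimal sketch (assuming PyTorch; the surrogate network, target spectrum, and thickness bounds are placeholders rather than the paper's setup): the trained forward model is frozen and the mismatch to a target spectrum is back-propagated to the input layer thicknesses.

        # Sketch of inverse design by gradient descent on the inputs of a
        # frozen surrogate network; all shapes and values are illustrative.
        import torch
        import torch.nn as nn

        surrogate = nn.Sequential(               # stand-in for a trained forward model:
            nn.Linear(5, 64), nn.ReLU(),         # 5 shell thicknesses -> 100-pt spectrum
            nn.Linear(64, 100),
        )
        for p in surrogate.parameters():
            p.requires_grad_(False)              # only the design variables are updated

        target = torch.rand(100)                 # desired scattering spectrum (placeholder)
        thickness = torch.full((5,), 50.0, requires_grad=True)   # initial design, nm
        opt = torch.optim.Adam([thickness], lr=1.0)

        for step in range(200):
            opt.zero_grad()
            loss = nn.functional.mse_loss(surrogate(thickness), target)
            loss.backward()                      # analytical gradient w.r.t. the design
            opt.step()
            with torch.no_grad():
                thickness.clamp_(30.0, 70.0)     # keep thicknesses in a physical range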

  1. Neural Networks In Mining Sciences - General Overview And Some Representative Examples

    NASA Astrophysics Data System (ADS)

    Tadeusiewicz, Ryszard

    2015-12-01

    The many difficult problems that must now be addressed in the mining sciences drive the search for ever newer and more efficient computer tools for solving them. Among the numerous tools of this type are the neural networks presented in this article, which, although not yet widely used in the mining sciences, are certainly worth consideration. Neural networks are a technique belonging to so-called artificial intelligence and originate from attempts to model the structure and functioning of biological nervous systems. Initially constructed and tested purely out of scientific curiosity, as computer models of parts of the human brain, neural networks have become a surprisingly effective calculation tool in many areas: technology, medicine, economics, and even the social sciences. Unfortunately, they are relatively rarely used in mining science and mining technology. The article is intended to convince readers that neural networks can be very useful in the mining sciences as well. It describes how modern neural networks are built, how they operate, and how one can use them. The preliminary discussion presented in this paper can help readers form an opinion on whether this is a tool with useful properties and what it might be useful for. Of course, the brief introduction to neural networks contained in this paper will not be enough for readers who are convinced by the arguments given here and want to use neural networks; they will still need a considerable amount of detailed knowledge before they can independently create and build such networks and use them in practice. However, an interested reader who decides to try out the capabilities of neural networks will also find links to references that allow a fast start with neural networks and efficient work with this handy tool. This will be easy, because there are currently quite a few readily available computer programs that allow their users to quickly and effortlessly create artificial neural networks, run them, train them, and use them in practice. The key issue is how to use these networks in the mining sciences. That this is possible and desirable is shown by convincing examples included in the second part of this study. From the very rich literature on the various applications of neural networks, we have selected several works that show how and where neural networks are used in the mining industry and what has been achieved through their use. The review of applications will continue in the next article, already submitted for publication in the journal "Archives of Mining Sciences". Studying these two articles together will provide sufficient knowledge for initial guidance in the area under consideration here.

  2. A Spiking Neural Simulator Integrating Event-Driven and Time-Driven Computation Schemes Using Parallel CPU-GPU Co-Processing: A Case Study.

    PubMed

    Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo

    2015-07-01

    Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.

  3. Anti-synchronization control of BAM memristive neural networks with multiple proportional delays and stochastic perturbations

    NASA Astrophysics Data System (ADS)

    Wang, Weiping; Yuan, Manman; Luo, Xiong; Liu, Linlin; Zhang, Yao

    2018-01-01

    Proportional delay is a class of unbounded time-varying delay. This paper considers a class of bidirectional associative memory (BAM) memristive neural networks with multiple proportional delays. First, we propose a model of BAM memristive neural networks with multiple proportional delays and stochastic perturbations. Then, by choosing suitable nonlinear variable transformations, the BAM memristive neural networks with multiple proportional delays are transformed into BAM memristive neural networks with constant delays. Based on the drive-response system concept, differential inclusion theory and Lyapunov stability theory, several anti-synchronization criteria are obtained. Finally, the effectiveness of the proposed criteria is demonstrated through numerical examples.
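
    As an illustration of the kind of variable change involved (a standard construction for scalar proportional-delay systems, not necessarily the paper's exact transformation), a proportional delay qt can be converted into a constant delay as follows:

        % substitute y(t) = x(e^t); the proportional delay qt becomes a constant lag -ln q
        \dot{x}(t) = f\bigl(x(t),\, x(qt)\bigr), \quad 0 < q < 1, \qquad y(t) := x(e^{t})
        \;\Longrightarrow\;
        \dot{y}(t) = e^{t}\, f\bigl(y(t),\, y(t-\tau)\bigr), \qquad \tau = -\ln q > 0 .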

  4. A biologically inspired neural network for dynamic programming.

    PubMed

    Francelin Romero, R A; Kacpryzk, J; Gomide, F

    2001-12-01

    An artificial neural network with a two-layer feedback topology and generalized recurrent neurons, for solving nonlinear discrete dynamic optimization problems, is developed. A direct method to assign the weights of neural networks is presented. The method is based on Bellman's Optimality Principle and on the interchange of information which occurs during the synaptic chemical processing among neurons. The neural network based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of the neural networks; further, it reduces the severity of the computational problems that can occur in conventional methods. Some illustrative application examples are presented to show how this approach works, including shortest path and fuzzy decision making problems.
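
    For readers unfamiliar with the underlying principle, the sketch below illustrates Bellman's Optimality Principle on a conventional shortest-path computation, the kind of quantity such a network is built to compute; it is a plain value-iteration illustration, not the neural implementation described in the paper.

        # Conventional illustration of Bellman's optimality principle on a
        # shortest-path problem: cost-to-go values satisfy V[i] = min_j (c_ij + V[j]).
        import math

        def shortest_path_values(edges, n_nodes, goal):
            """edges: dict (i, j) -> cost; returns cost-to-go V[i] to reach `goal`."""
            V = [math.inf] * n_nodes
            V[goal] = 0.0
            for _ in range(n_nodes - 1):                  # Bellman-Ford style sweeps
                for (i, j), c in edges.items():
                    V[i] = min(V[i], c + V[j])            # Bellman optimality update
            return V

        edges = {(0, 1): 2.0, (0, 2): 5.0, (1, 2): 1.0, (1, 3): 7.0, (2, 3): 3.0}
        print(shortest_path_values(edges, n_nodes=4, goal=3))   # V[0] = 6.0 via 0-1-2-3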

  5. Blood glucose prediction using neural network

    NASA Astrophysics Data System (ADS)

    Soh, Chit Siang; Zhang, Xiqin; Chen, Jianhong; Raveendran, P.; Soh, Phey Hong; Yeo, Joon Hock

    2008-02-01

    We used a neural network for blood glucose level determination in this study. The data set used in this study was collected using a non-invasive blood glucose monitoring system with six laser diodes, each operating at a distinct near-infrared wavelength between 1500 nm and 1800 nm. The neural network is specifically used to determine the blood glucose level of one individual who participated in an oral glucose tolerance test (OGTT) session. Partial least squares regression is also used for blood glucose level determination for the purpose of comparison with the neural network model. The neural network model performs better in the prediction of blood glucose level as compared with the partial least squares model.
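
    The comparison has the following general shape (a sketch with scikit-learn stand-ins and synthetic six-wavelength readings; the study's actual spectra, preprocessing, and network topology are not reproduced here):

        # Neural network vs. partial least squares regression on synthetic
        # six-channel "NIR" inputs; only the comparison structure is shown.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 6))                     # six NIR-wavelength readings
        glucose = 5.0 + X @ rng.normal(size=6) \
                  + 0.3 * np.sin(X[:, 0]) + 0.1 * rng.normal(size=120)
        X_tr, X_te, y_tr, y_te = train_test_split(X, glucose, random_state=0)

        pls = PLSRegression(n_components=4).fit(X_tr, y_tr)
        mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                           random_state=0).fit(X_tr, y_tr)

        print("PLS R^2:", pls.score(X_te, y_te))
        print("MLP R^2:", mlp.score(X_te, y_te))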

  6. Neural network for solving convex quadratic bilevel programming problems.

    PubMed

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie

    2014-03-01

    In this paper, using the idea of successive approximation, we propose a neural network to solve convex quadratic bilevel programming problems (CQBPPs), which is modeled by a nonautonomous differential inclusion. Different from the existing neural network for CQBPP, the model has the least number of state variables and simple structure. Based on the theory of nonsmooth analysis, differential inclusions and Lyapunov-like method, the limit equilibrium points sequence of the proposed neural networks can approximately converge to an optimal solution of CQBPP under certain conditions. Finally, simulation results on two numerical examples and the portfolio selection problem show the effectiveness and performance of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Typing mineral deposits using their associated rocks, grades and tonnages using a probabilistic neural network

    USGS Publications Warehouse

    Singer, D.A.

    2006-01-01

    A probabilistic neural network is employed to classify 1610 mineral deposits into 18 types using tonnage, average Cu, Mo, Ag, Au, Zn, and Pb grades, and six generalized rock types. The purpose is to examine whether neural networks might serve for integrating geoscience information available in large mineral databases to classify sites by deposit type. Successful classifications of 805 deposits not used in training - 87% with grouped porphyry copper deposits - and the nature of misclassifications demonstrate the power of probabilistic neural networks and the value of quantitative mineral-deposit models. The results also suggest that neural networks can classify deposits as well as experienced economic geologists. © International Association for Mathematical Geology 2006.
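
    A probabilistic neural network is essentially a Parzen-window classifier; the minimal sketch below (toy two-dimensional features and an arbitrary smoothing width, not the study's grade-and-tonnage data) shows the pattern-unit and summation-unit structure it relies on.

        # Minimal Parzen-window ("probabilistic neural network") classifier sketch.
        import numpy as np

        def pnn_predict(X_train, y_train, X_test, sigma=0.5):
            """Assign each test sample to the class with the largest kernel-density score."""
            classes = np.unique(y_train)
            preds = []
            for x in X_test:
                d2 = np.sum((X_train - x) ** 2, axis=1)             # squared distances
                k = np.exp(-d2 / (2.0 * sigma ** 2))                # Gaussian pattern units
                scores = [k[y_train == c].mean() for c in classes]  # summation units
                preds.append(classes[int(np.argmax(scores))])
            return np.array(preds)

        rng = np.random.default_rng(1)
        X_a = rng.normal(0.0, 0.4, size=(30, 2))          # toy "deposit type A"
        X_b = rng.normal(2.0, 0.4, size=(30, 2))          # toy "deposit type B"
        X = np.vstack([X_a, X_b])
        y = np.array([0] * 30 + [1] * 30)
        print(pnn_predict(X, y, np.array([[0.1, 0.2], [1.9, 2.1]])))   # -> [0 1]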

  8. Adaptive exponential synchronization of complex-valued Cohen-Grossberg neural networks with known and unknown parameters.

    PubMed

    Hu, Jin; Zeng, Chunna

    2017-02-01

    The complex-valued Cohen-Grossberg neural network is a special kind of complex-valued neural network. In this paper, the synchronization problem of a class of complex-valued Cohen-Grossberg neural networks with known and unknown parameters is investigated. By using Lyapunov functionals and the adaptive control method based on parameter identification, some adaptive feedback schemes are proposed to achieve synchronization exponentially between the drive and response systems. The results obtained in this paper have extended and improved some previous works on adaptive synchronization of Cohen-Grossberg neural networks. Finally, two numerical examples are given to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Stability analysis of fractional-order Hopfield neural networks with time delays.

    PubMed

    Wang, Hu; Yu, Yongguang; Wen, Guoguang

    2014-07-01

    This paper investigates the stability for fractional-order Hopfield neural networks with time delays. Firstly, the fractional-order Hopfield neural networks with hub structure and time delays are studied. Some sufficient conditions for stability of the systems are obtained. Next, two fractional-order Hopfield neural networks with different ring structures and time delays are developed. By studying the developed neural networks, the corresponding sufficient conditions for stability of the systems are also derived. It is shown that the stability conditions are independent of time delays. Finally, numerical simulations are given to illustrate the effectiveness of the theoretical results obtained in this paper. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. High variation subarctic topsoil pollutant concentration prediction using neural network residual kriging

    NASA Astrophysics Data System (ADS)

    Sergeev, A. P.; Tarasov, D. A.; Buevich, A. G.; Subbotina, I. E.; Shichkin, A. V.; Sergeeva, M. V.; Lvova, O. A.

    2017-06-01

    The work deals with the application of neural network residual kriging (NNRK) to the spatial prediction of an abnormally distributed soil pollutant (Cr). It is known that combining geostatistical interpolation approaches (kriging) and neural networks leads to significantly better prediction accuracy and productivity. Generalized regression neural networks and multilayer perceptrons are classes of neural networks widely used for continuous function mapping. Each network has its own pros and cons; however, both demonstrated fast training and good mapping possibilities. In the work, we examined and compared two combined techniques: generalized regression neural network residual kriging (GRNNRK) and multilayer perceptron residual kriging (MLPRK). The case study is based on real data sets on surface contamination by chromium at a particular location in subarctic Novy Urengoy, Russia, obtained during a previously conducted screening. The proposed models have been built, implemented and validated using ArcGIS and MATLAB environments. The network structures were chosen during a computer simulation based on the minimization of the RMSE. MLPRK showed the best predictive accuracy compared with the geostatistical approach (kriging) and even with GRNNRK.
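
    The residual-kriging idea can be sketched as follows (synthetic coordinates and concentrations; scikit-learn's Gaussian-process regressor is used here only as a stand-in for the kriging step, which the study performs with dedicated geostatistical tools):

        # Neural network residual kriging sketch: an MLP fits the spatial trend,
        # a GP regressor (kriging stand-in) interpolates the residuals.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        coords = rng.uniform(0, 10, size=(200, 2))                  # sampling locations
        cr = np.exp(0.3 * coords[:, 0]) + np.sin(coords[:, 1]) \
             + 0.2 * rng.normal(size=200)                           # synthetic Cr values

        trend = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                             random_state=0).fit(coords, cr)
        residuals = cr - trend.predict(coords)

        krige = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.1),
                                         normalize_y=True).fit(coords, residuals)

        new_pts = rng.uniform(0, 10, size=(5, 2))
        prediction = trend.predict(new_pts) + krige.predict(new_pts)   # NNRK estimate
        print(prediction)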

  11. Artificial Neural Network Analysis System

    DTIC Science & Technology

    2001-02-27

    Report documentation excerpt: Artificial Neural Network Analysis System; Contract No. DASG60-00-M-0201; purchase request: Foot in the Door-01; company: Atlantic...; author: Powell, Bruce C; reporting period 28-10-2000 to 27-02-2001.

  12. An Evaluation of Artificial Neural Network Modeling for Manpower Analysis

    DTIC Science & Technology

    1993-09-01

    Report documentation excerpt: Naval Postgraduate School, Monterey, California thesis, "An Evaluation of Artificial Neural Network Modeling for Manpower Analysis," by Brian J. Byrne, Captain, United States Marine Corps; September 1993.

  13. An Artificial Neural Network Control System for Spacecraft Attitude Stabilization

    DTIC Science & Technology

    1990-06-01

    Report documentation excerpt: Naval Postgraduate School, Monterey, California thesis, "An Artificial Neural Network Control System for Spacecraft Attitude Stabilization." Approved for public release; distribution is unlimited.

  14. Spatio-Temporal Neural Networks for Vision, Reasoning and Rapid Decision Making

    DTIC Science & Technology

    1994-08-31

    Report excerpt (fragmentary): project title "Spatio-temporal Neural Networks for Vision, Reasoning and Rapid Decision Making" (ONR grant N00014-93-1-1149), Science Institute; the surviving text mentions a long-term knowledge base (LTKB) and the application of optical chaos to temporal pattern search in a nonlinear optical system.

  15. Training product unit neural networks with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Janson, D. J.; Frenzel, J. F.; Thelen, D. C.

    1991-01-01

    The training of product unit neural networks using genetic algorithms is discussed. Two unusual neural network techniques are combined; product units are employed instead of the traditional summing units, and genetic algorithms train the network rather than backpropagation. As an example, a neural network is trained to calculate the optimum width of transistors in a CMOS switch. It is shown how local minima affect the performance of a genetic algorithm, and one method of overcoming this is presented.
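
    A toy sketch of the two ingredients combined (not the paper's CMOS application): a single product unit, whose output is a product of inputs raised to learnable exponents, is trained by a small genetic algorithm instead of backpropagation; inputs are kept positive so the exponentiation is well defined.

        # Product unit y = x1**w1 * x2**w2 with exponents found by a tiny GA.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.uniform(0.5, 2.0, size=(64, 2))
        target = X[:, 0] ** 2 * X[:, 1]                       # function to learn

        def product_unit(X, w):
            return np.exp(np.log(X) @ w)                      # prod_i x_i ** w_i

        def fitness(w):
            return -np.mean((product_unit(X, w) - target) ** 2)

        pop = rng.normal(0.0, 1.0, size=(40, 2))              # initial population
        for gen in range(100):
            scores = np.array([fitness(w) for w in pop])
            parents = pop[np.argsort(scores)[-10:]]           # keep the 10 fittest
            children = parents[rng.integers(0, 10, size=30)] \
                       + 0.1 * rng.normal(size=(30, 2))       # mutate copies of parents
            pop = np.vstack([parents, children])

        best = pop[np.argmax([fitness(w) for w in pop])]
        print("learned exponents:", best)                     # should approach [2, 1]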

  16. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    PubMed

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of the generation of the low-dimensional projection. We illustrate these results considering the practical neural network approximation of a set of functions defined on high-dimensional data including real world data as well.
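
    The projection-then-approximate scheme can be pictured with the sketch below (synthetic data on a 3-dimensional manifold embedded in 50 dimensions; PCA stands in for the projection, which is fitted on a sparse uniform sample only, and in-sample R^2 is reported purely for illustration):

        # Project to a low-dimensional space learned from a sparse sample,
        # then approximate the target function with an MLP over that space.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        latent = rng.uniform(-1, 1, size=(2000, 3))                # low-dim manifold
        A = rng.normal(size=(3, 50))
        X_high = latent @ A + 0.01 * rng.normal(size=(2000, 50))   # embedded in 50-D
        y = np.sin(latent[:, 0]) + latent[:, 1] * latent[:, 2]     # target function

        # Fit the projection on a sparse, uniformly drawn sample of the data only.
        sample = rng.choice(2000, size=200, replace=False)
        proj = PCA(n_components=3).fit(X_high[sample])

        net_low = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000,
                               random_state=0).fit(proj.transform(X_high), y)
        net_high = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000,
                                random_state=0).fit(X_high, y)
        print("low-dim R^2 :", net_low.score(proj.transform(X_high), y))
        print("high-dim R^2:", net_high.score(X_high, y))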

  17. Flight control with adaptive critic neural network

    NASA Astrophysics Data System (ADS)

    Han, Dongchen

    2001-10-01

    In this dissertation, the adaptive critic neural network technique is applied to solve complex nonlinear system control problems. Based on dynamic programming, the adaptive critic approach can embed the optimal solution into a neural network. Though trained off-line, the neural network forms a real-time feedback controller. Because of its general interpolation properties, the neurocontroller has inherent robustness. The problems solved here are agile missile control for the U.S. Air Force and a midcourse guidance law for the U.S. Navy. In the first three papers, the neural network was used to control an air-to-air agile missile to implement a minimum-time heading reversal in a vertical plane under the following conditions: a system without constraint, a system with a control inequality constraint, and a system with a state inequality constraint. While the agile missile is a one-dimensional problem, the midcourse guidance law is the first test-bed for a multiple-dimensional problem. In the fourth paper, the neurocontroller is synthesized to guide a surface-to-air missile to a fixed final condition, and to a flexible final condition from a variable initial condition. In order to evaluate the adaptive critic neural network approach, the numerical solutions for these cases are also obtained by solving the two-point boundary value problem with a shooting method. All of the results showed that the adaptive critic neural network could solve complex nonlinear system control problems.

  18. Performance of an artificial neural network for vertical root fracture detection: an ex vivo study.

    PubMed

    Kositbowornchai, Suwadee; Plermkamon, Supattra; Tangkosol, Tawan

    2013-04-01

    To develop an artificial neural network for vertical root fracture detection. A probabilistic neural network design was used to clarify whether a tooth root was sound or had a vertical root fracture. Two hundred images (50 sound and 150 vertical root fractures) derived from digital radiography--used to train and test the artificial neural network--were divided into three groups according to the number of training and test data sets: 80/120, 105/95 and 130/70, respectively. Training and test data were evaluated using grey-scale data along lines passing through the root. These data were normalized to reduce the grey-scale variance and fed as input data to the neural network. The variance of the function in the recognition data was varied between 0 and 1 to select the best-performing neural network. The performance of the neural network was evaluated using a diagnostic test. After testing the data under several variances of the function, we found the highest sensitivity (98%), specificity (90.5%) and accuracy (95.7%) occurred in Group three, for which the variance of the function in the recognition data was between 0.025 and 0.005. The neural network designed in this study has sufficient sensitivity, specificity and accuracy to be a model for vertical root fracture detection. © 2012 John Wiley & Sons A/S.

  19. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce.

    PubMed

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
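
    The PSO half of the scheme can be pictured with the toy sketch below (pure NumPy, not the paper's Hadoop/MapReduce implementation): a particle swarm searches the flattened weights of a tiny one-hidden-layer network, which is the role the PSO stage plays before backpropagation fine-tuning in the proposed algorithm; the fine-tuning step itself is omitted.

        # Particle swarm optimization over the weights of a tiny MLP (toy data).
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(128, 2))
        y = np.sin(X[:, 0]) + X[:, 1] ** 2                     # toy target

        H = 8                                                  # hidden units
        DIM = 2 * H + H + H + 1                                # W1, b1, W2, b2 flattened

        def unpack(v):
            W1 = v[:2 * H].reshape(2, H); b1 = v[2 * H:3 * H]
            W2 = v[3 * H:4 * H];          b2 = v[-1]
            return W1, b1, W2, b2

        def mse(v):
            W1, b1, W2, b2 = unpack(v)
            hidden = np.tanh(X @ W1 + b1)
            return np.mean((hidden @ W2 + b2 - y) ** 2)

        # Standard PSO update: velocities pulled toward personal and global bests.
        pos = rng.normal(0, 0.5, size=(30, DIM)); vel = np.zeros_like(pos)
        pbest = pos.copy(); pbest_val = np.array([mse(p) for p in pos])
        for it in range(200):
            gbest = pbest[np.argmin(pbest_val)]
            r1, r2 = rng.random((2, 30, DIM))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel
            vals = np.array([mse(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]

        print("best MSE found by PSO:", pbest_val.min())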

  20. Reservoir characterization using core, well log, and seismic data and intelligent software

    NASA Astrophysics Data System (ADS)

    Soto Becerra, Rodolfo

    We have developed intelligent software, Oilfield Intelligence (OI), as an engineering tool to improve the characterization of oil and gas reservoirs. OI integrates neural networks and multivariate statistical analysis. It is composed of five main subsystems: data input, preprocessing, architecture design, graphics design, and inference engine modules. More than 1,200 lines of programming code have been written as MATLAB M-files. The degree of success of many oil and gas drilling, completion, and production activities depends upon the accuracy of the models used in a reservoir description. Neural networks have been applied to the identification of nonlinear systems in almost every scientific field. Solving reservoir characterization problems is no exception. Neural networks have a number of attractive features that can help to extract and recognize underlying patterns, structures, and relationships among data. However, before developing a neural network model, we must solve the problem of dimensionality, such as determining dominant and irrelevant variables. We can apply principal components and factor analysis to reduce the dimensionality and help the neural networks formulate more realistic models. We validated OI by obtaining confident models in three different oil field problems: (1) a neural network in-situ stress model using lithology and gamma ray logs for the Travis Peak formation of east Texas, (2) a neural network permeability model using porosity and gamma ray, and a neural network pseudo-gamma ray log model using 3D seismic attributes, for the reservoir VLE 196 Lamar field located in Block V of south-central Lake Maracaibo (Venezuela), and (3) neural network primary ultimate oil recovery (PRUR), initial waterflooding ultimate oil recovery (IWUR), and infill drilling ultimate oil recovery (IDUR) models using reservoir parameters for San Andres and Clearfork carbonate formations in west Texas. In all cases, we compared the results from the neural network models with the results from statistical regression and non-parametric models. The results show that it is possible to obtain the highest cross-correlation coefficient between predicted and actual target variables, and the lowest average absolute errors, using the integrated techniques of multivariate statistical analysis and neural networks in our intelligent software.

  1. Reliability analysis of C-130 turboprop engine components using artificial neural network

    NASA Astrophysics Data System (ADS)

    Qattan, Nizar A.

    In this study, we predict the failure rate of the Lockheed C-130 Engine Turbine. More than thirty years of local operational field data were used for failure rate prediction and validation. The Weibull regression model and Artificial Neural Network models (feed-forward back-propagation, radial basis function neural network, and multilayer perceptron neural network) are utilized to perform this study. For this purpose, the thesis is divided into five major parts. The first part deals with the Weibull regression model to predict the turbine general failure rate, and the rate of failures that require overhaul maintenance. The second part covers the Artificial Neural Network (ANN) model utilizing the feed-forward back-propagation algorithm as a learning rule. The MATLAB package is used to build and design a code to simulate the given data; the inputs to the neural network are the independent variables, and the outputs are the general failure rate of the turbine and the failures which required overhaul maintenance. In the third part, we predict the general failure rate of the turbine and the failures which require overhaul maintenance using a radial basis neural network model in the MATLAB toolbox. In the fourth part, we compare the predictions of the feed-forward back-propagation model with those of the Weibull regression model and the radial basis neural network model. The results show that the failure rates predicted by the feed-forward back-propagation model and by the radial basis neural network model are in closer agreement with the actual field data than the failure rate predicted by the Weibull model. By the end of the study, we forecast the general failure rate of the Lockheed C-130 Engine Turbine, the failures which required overhaul maintenance, and six categorical failures using a multilayer perceptron (MLP) neural network model in the DTREG commercial software. The results also give an insight into the reliability of the engine turbine under actual operating conditions, which can be used by aircraft operators for assessing system and component failures and customizing the maintenance programs recommended by the manufacturer.

  2. Machine Learning Topological Invariants with Neural Networks

    NASA Astrophysics Data System (ADS)

    Zhang, Pengfei; Shen, Huitao; Zhai, Hui

    2018-02-01

    In this Letter we supervisedly train neural networks to distinguish different topological phases in the context of topological band insulators. After training with Hamiltonians of one-dimensional insulators with chiral symmetry, the neural network can predict their topological winding numbers with nearly 100% accuracy, even for Hamiltonians with larger winding numbers that are not included in the training data. These results show a remarkable success that the neural network can capture the global and nonlinear topological features of quantum phases from local inputs. By opening up the neural network, we confirm that the network does learn the discrete version of the winding number formula. We also make a couple of remarks regarding the role of the symmetry and the opposite effect of regularization techniques when applying machine learning to physical systems.

  3. Character recognition from trajectory by recurrent spiking neural networks.

    PubMed

    Jiangrong Shen; Kang Lin; Yueming Wang; Gang Pan

    2017-07-01

    Spiking neural networks are biologically plausible and power-efficient on neuromorphic hardware, while recurrent neural networks have been proven to be efficient on time series data. However, how to use the recurrent property to improve the performance of spiking neural networks is still a problem. This paper proposes a recurrent spiking neural network for character recognition using trajectories. In the network, a new encoding method is designed, in which varying time ranges of input streams are used in different recurrent layers. This is able to improve the generalization ability of our model compared with general encoding methods. The experiments are conducted on four groups of the character data set from University of Edinburgh. The results show that our method can achieve a higher average recognition accuracy than existing methods.

  4. Exploring the structure and function of temporal networks with dynamic graphlets

    PubMed Central

    Hulovatyy, Y.; Chen, H.; Milenković, T.

    2015-01-01

    Motivation: With increasing availability of temporal real-world networks, how to efficiently study these data? One can model a temporal network as a single aggregate static network, or as a series of time-specific snapshots, each being an aggregate static network over the corresponding time window. Then, one can use established methods for static analysis on the resulting aggregate network(s), but losing in the process valuable temporal information either completely, or at the interface between different snapshots, respectively. Here, we develop a novel approach for studying a temporal network more explicitly, by capturing inter-snapshot relationships. Results: We base our methodology on well-established graphlets (subgraphs), which have been proven in numerous contexts in static network research. We develop new theory to allow for graphlet-based analyses of temporal networks. Our new notion of dynamic graphlets is different from existing dynamic network approaches that are based on temporal motifs (statistically significant subgraphs). The latter have limitations: their results depend on the choice of a null network model that is required to evaluate the significance of a subgraph, and choosing a good null model is non-trivial. Our dynamic graphlets overcome the limitations of the temporal motifs. Also, when we aim to characterize the structure and function of an entire temporal network or of individual nodes, our dynamic graphlets outperform the static graphlets. Clearly, accounting for temporal information helps. We apply dynamic graphlets to temporal age-specific molecular network data to deepen our limited knowledge about human aging. Availability and implementation: http://www.nd.edu/∼cone/DG. Contact: tmilenko@nd.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26072480

  5. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting

    PubMed Central

    Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network. PMID:27959927

  6. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    PubMed

    Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.

  7. Radar signal categorization using a neural network

    NASA Technical Reports Server (NTRS)

    Anderson, James A.; Gately, Michael T.; Penz, P. Andrew; Collins, Dean R.

    1991-01-01

    Neural networks were used to analyze a complex simulated radar environment which contains noisy radar pulses generated by many different emitters. The neural network used is an energy minimizing network (the BSB model) which forms energy minima - attractors in the network dynamical system - based on learned input data. The system first determines how many emitters are present (the deinterleaving problem). Pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, it is possible to make tentative identifications of them based on their observed parameters. As a test of this idea, a neural network was used to form a small data base that potentially could make emitter identifications.
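
    The energy-minimizing dynamics referred to here can be illustrated with a minimal Brain-State-in-a-Box sketch (toy prototypes and Hebbian weights, not the study's radar pulse data): the state is repeatedly pushed through saturating linear feedback until it settles into a corner attractor near a stored prototype.

        # Minimal BSB-style attractor sketch: a noisy "pulse description" vector
        # is pulled toward the nearest stored emitter prototype.
        import numpy as np

        rng = np.random.default_rng(0)
        prototypes = np.sign(rng.normal(size=(3, 16)))         # 3 emitter prototypes
        W = prototypes.T @ prototypes / prototypes.shape[1]    # Hebbian outer products

        def bsb(x, W, alpha=0.3, steps=50):
            for _ in range(steps):
                x = np.clip(x + alpha * (W @ x), -1.0, 1.0)    # saturating feedback
            return x

        noisy = prototypes[1] + 0.6 * rng.normal(size=16)      # corrupted observation
        recovered = bsb(np.clip(noisy, -1, 1), W)
        print("closest prototype:", int(np.argmax(prototypes @ recovered)))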

  8. Intelligent emissions controller for substance injection in the post-primary combustion zone of fossil-fired boilers

    DOEpatents

    Reifman, Jaques; Feldman, Earl E.; Wei, Thomas Y. C.; Glickert, Roger W.

    2003-01-01

    The control of emissions from fossil-fired boilers wherein an injection of substances above the primary combustion zone employs multi-layer feedforward artificial neural networks for modeling static nonlinear relationships between the distribution of injected substances into the upper region of the furnace and the emissions exiting the furnace. Multivariable nonlinear constrained optimization algorithms use the mathematical expressions from the artificial neural networks to provide the optimal substance distribution that minimizes emission levels for a given total substance injection rate. Based upon the optimal operating conditions from the optimization algorithms, the incremental substance cost per unit of emissions reduction, and the open-market price per unit of emissions reduction, the intelligent emissions controller allows for the determination of whether it is more cost-effective to achieve additional increments in emission reduction through the injection of additional substance or through the purchase of emission credits on the open market. This is of particular interest to fossil-fired electrical power plant operators. The intelligent emission controller is particularly adapted for determining the economical control of such pollutants as oxides of nitrogen (NO.sub.x) and carbon monoxide (CO) emitted by fossil-fired boilers by the selective introduction of multiple inputs of substances (such as natural gas, ammonia, oil, water-oil emulsion, coal-water slurry and/or urea, and combinations of these substances) above the primary combustion zone of fossil-fired boilers.
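
    The model-plus-optimizer structure described in the claim can be sketched as follows (synthetic surrogate data and a hypothetical four-port injection layout; scikit-learn and SciPy stand in for the patented artificial-neural-network and constrained-optimization components):

        # Feedforward surrogate of emissions vs. injection distribution, then
        # SLSQP minimizes predicted emissions at a fixed total injection rate.
        import numpy as np
        from scipy.optimize import minimize
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        ports = rng.uniform(0, 1, size=(500, 4))                   # injection per port
        nox = 100 - 40 * ports[:, 1] - 25 * ports[:, 2] \
              + 30 * (ports[:, 0] * ports[:, 3]) + rng.normal(0, 1, 500)
        model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                             random_state=0).fit(ports, nox)

        total_rate = 2.0                                           # fixed total injection
        res = minimize(lambda u: float(model.predict(u.reshape(1, -1))[0]),
                       x0=np.full(4, total_rate / 4),
                       bounds=[(0.0, 1.0)] * 4,
                       constraints={"type": "eq",
                                    "fun": lambda u: u.sum() - total_rate},
                       method="SLSQP")
        print("optimal distribution:", res.x, "predicted NOx:", res.fun)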

  9. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine

    PubMed Central

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir

    2017-01-01

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080

  10. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2017-04-19

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems.

  11. Convergent Time-Varying Regression Models for Data Streams: Tracking Concept Drift by the Recursive Parzen-Based Generalized Regression Neural Networks.

    PubMed

    Duda, Piotr; Jaworski, Maciej; Rutkowski, Leszek

    2018-03-01

    One of the greatest challenges in data mining is related to the processing and analysis of massive data streams. Contrary to traditional static data mining problems, data streams require that each element is processed only once, that the amount of allocated memory is constant, and that the models incorporate changes in the investigated streams. The vast majority of available methods have been developed for data stream classification, and only a few of them attempt to solve regression problems, using various heuristic approaches. In this paper, we develop mathematically justified regression models working in a time-varying environment. More specifically, we study incremental versions of generalized regression neural networks, called IGRNNs, and we prove their tracking properties - weak (in probability) and strong (with probability one) convergence assuming various concept drift scenarios. First, we present the IGRNNs, based on the Parzen kernels, for modeling stationary systems under nonstationary noise. Next, we extend our approach to modeling time-varying systems under nonstationary noise. We present several types of concept drifts to be handled by our approach in such a way that weak and strong convergence holds under certain conditions. Then, in a series of simulations, we compare our method with commonly used heuristic approaches, based on forgetting mechanisms or sliding windows, to deal with concept drift. Finally, we apply our concept in a real-life scenario, solving the problem of currency exchange rate prediction.
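
    The Parzen-kernel (generalized regression) estimator that these incremental variants build on looks roughly as follows (a toy streaming version that simply appends samples; the paper's recursive weighting and drift-tracking scheme is not reproduced):

        # Streaming Nadaraya-Watson / GRNN estimate on a one-dimensional stream.
        import numpy as np

        class StreamingGRNN:
            def __init__(self, bandwidth=0.3):
                self.h = bandwidth
                self.X = np.empty((0, 1)); self.y = np.empty(0)

            def update(self, x, y):                       # consume one stream element
                self.X = np.vstack([self.X, [[x]]])
                self.y = np.append(self.y, y)

            def predict(self, x):                         # Parzen-kernel regression estimate
                w = np.exp(-((self.X[:, 0] - x) ** 2) / (2 * self.h ** 2))
                return np.sum(w * self.y) / np.sum(w)

        grnn = StreamingGRNN()
        rng = np.random.default_rng(0)
        for t in range(500):                              # drifting stream: slope changes halfway
            x = rng.uniform(0, 1)
            drift = 1.0 if t < 250 else 3.0
            grnn.update(x, drift * x + 0.05 * rng.normal())
        # All samples are weighted equally here, so old and new concepts mix;
        # this is exactly what the tracking (forgetting) variants address.
        print(grnn.predict(0.5))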

  12. Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.

    PubMed

    Ly, Cheng

    2015-12-01

    Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature of neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much is still unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity and intrinsic heterogeneity has yet to be considered systematically despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneities affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and furthermore simpler analytic descriptions based on this dimension reduction method are developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.

  13. Communications and control for electric power systems: Power system stability applications of artificial neural networks

    NASA Technical Reports Server (NTRS)

    Toomarian, N.; Kirkham, Harold

    1994-01-01

    This report investigates the application of artificial neural networks to the problem of power system stability. The field of artificial intelligence, expert systems, and neural networks is reviewed. Power system operation is discussed with emphasis on stability considerations. Real-time system control has only recently been considered as applicable to stability, using conventional control methods. The report considers the use of artificial neural networks to improve the stability of the power system. The networks are considered as adjuncts and as replacements for existing controllers. The optimal kind of network to use as an adjunct to a generator exciter is discussed.

  14. Application of Artificial Neural Network to Optical Fluid Analyzer

    NASA Astrophysics Data System (ADS)

    Kimura, Makoto; Nishida, Katsuhiko

    1994-04-01

    A three-layer artificial neural network has been applied to the presentation of optical fluid analyzer (OFA) raw data, and the accuracy of oil fraction determination has been significantly improved compared to previous approaches. To apply the artificial neural network approach to solving a problem, the first step is training to determine the appropriate weight set for calculating the target values. This involves using a series of data sets (each comprising a set of input values and an associated set of output values that the artificial neural network is required to determine) to tune artificial neural network weighting parameters so that the output of the neural network to the given set of input values is as close as possible to the required output. The physical model used to generate the series of learning data sets was the effective flow stream model, developed for OFA data presentation. The effectiveness of the training was verified by reprocessing the same input data as were used to determine the weighting parameters and then by comparing the results of the artificial neural network to the expected output values. The standard deviation of the expected and obtained values was approximately 10% (two sigma).

  15. Some comparisons of complexity in dictionary-based and linear computational models.

    PubMed

    Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello

    2011-03-01

    Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.

  16. Re-Evaluation of the AASHTO-Flexible Pavement Design Equation with Neural Network Modeling

    PubMed Central

    Tiğdemir, Mesut

    2014-01-01

    Here we establish that equivalent single-axle load values can be estimated using artificial neural networks without the complex design equation of the American Association of State Highway and Transportation Officials (AASHTO). More importantly, we find that the neural network model gives the coefficients needed to obtain the actual load values from the AASHTO design values. Thus, those design traffic values that might result in deterioration can be better calculated using the neural network model than with the AASHTO design equation. The artificial neural network method is used for this purpose. The existing AASHTO flexible pavement design equation does not currently predict the pavement performance of the strategic highway research program (Long Term Pavement Performance studies) test sections very accurately, and typically over-estimates the number of equivalent single axle loads needed to cause a measured loss of the present serviceability index. Here we aimed to demonstrate that the proposed neural network model can more accurately represent the load values data, compared against the performance of the AASHTO formula. It is concluded that the neural network may be an appropriate tool for the development of data-based nonparametric models of pavement performance. PMID:25397962

  17. Re-evaluation of the AASHTO-flexible pavement design equation with neural network modeling.

    PubMed

    Tiğdemir, Mesut

    2014-01-01

    Here we establish that equivalent single-axle load values can be estimated using artificial neural networks without the complex design equation of the American Association of State Highway and Transportation Officials (AASHTO). More importantly, we find that the neural network model gives the coefficients needed to obtain the actual load values from the AASHTO design values. Thus, those design traffic values that might result in deterioration can be better calculated using the neural network model than with the AASHTO design equation. The artificial neural network method is used for this purpose. The existing AASHTO flexible pavement design equation does not currently predict the pavement performance of the strategic highway research program (Long Term Pavement Performance studies) test sections very accurately, and typically over-estimates the number of equivalent single axle loads needed to cause a measured loss of the present serviceability index. Here we aimed to demonstrate that the proposed neural network model can more accurately represent the load values data, compared against the performance of the AASHTO formula. It is concluded that the neural network may be an appropriate tool for the development of data-based nonparametric models of pavement performance.

  18. Fault detection and diagnosis using neural network approaches

    NASA Technical Reports Server (NTRS)

    Kramer, Mark A.

    1992-01-01

    Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used, the first based on training networks using data representing both normal and abnormal modes of process behavior, and the second based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, to produce real-time estimation of missing or failed sensors based on the correlations codified in the neural network.
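
    The second approach mentioned above, characterizing normal operation statistically and flagging departures from it, can be illustrated with a minimal sketch. A multivariate Gaussian stands in for the elliptical and radial basis function density models of the abstract; the synthetic data and the alarm threshold are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      # Synthetic "normal mode" process observables (e.g., a flow rate and a temperature).
      normal_data = rng.multivariate_normal([10.0, 50.0], [[1.0, 0.3], [0.3, 2.0]], size=500)

      mu = normal_data.mean(axis=0)                              # learned normal-mode statistics
      cov_inv = np.linalg.inv(np.cov(normal_data, rowvar=False))

      def mahalanobis(x):
          d = x - mu
          return float(np.sqrt(d @ cov_inv @ d))

      threshold = 3.5                                            # assumed alarm threshold
      new_obs = np.array([14.0, 55.0])                           # incoming process measurement
      print("distance =", round(mahalanobis(new_obs), 2),
            "-> fault suspected" if mahalanobis(new_obs) > threshold else "-> normal")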

  19. An FPGA Implementation of a Polychronous Spiking Neural Network with Delay Adaptation.

    PubMed

    Wang, Runchun; Cohen, Gregory; Stiefel, Klaus M; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, André

    2013-01-01

    We present an FPGA implementation of a re-configurable, polychronous spiking neural network with a large capacity for spatial-temporal patterns. The proposed neural network generates delay paths de novo, so that only connections that actually appear in the training patterns will be created. This allows the proposed network to use all the axons (variables) to store information. Spike Timing Dependent Delay Plasticity is used to fine-tune and add dynamics to the network. We use a time multiplexing approach allowing us to achieve 4096 (4k) neurons and up to 1.15 million programmable delay axons on a Virtex 6 FPGA. Test results show that the proposed neural network is capable of successfully recalling more than 95% of all spikes for 96% of the stored patterns. The tests also show that the neural network is robust to noise from random input spikes.

  20. Parameter diagnostics of phases and phase transition learning by neural networks

    NASA Astrophysics Data System (ADS)

    Suchsland, Philippe; Wessel, Stefan

    2018-05-01

    We present an analysis of neural network-based machine learning schemes for phases and phase transitions in theoretical condensed matter research, focusing on neural networks with a single hidden layer. Such shallow neural networks were previously found to be efficient in classifying phases and locating phase transitions of various basic model systems. In order to rationalize the emergence of the classification process and to identify any underlying physical quantities, it is instructive to examine the weight matrices and the convolutional filter kernels that result from the learning process of such shallow networks. Furthermore, we demonstrate how the learning-by-confusion scheme can be used, in combination with a simple threshold-value classification method, to diagnose the learning parameters of neural networks. In particular, we study the classification process of both fully connected and convolutional neural networks for the two-dimensional Ising model with extended domain-wall configurations included in the low-temperature regime. Moreover, we consider the two-dimensional XY model and contrast the performance of the learning-by-confusion scheme and of convolutional neural networks trained on bare spin configurations with the case of samples preprocessed with respect to vortex configurations. We discuss these findings in relation to similar recent investigations and possible further applications.
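
    As a rough illustration of the learning-by-confusion idea combined with a simple threshold-value classifier, the sketch below relabels synthetic "magnetization" samples for every trial critical point and records the best achievable classification accuracy; the accuracy curve peaks near the true transition. The synthetic data, grids, and assumed critical temperature are placeholders, not the Ising configurations used in the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      T_true = 2.27                                   # assumed true critical temperature
      temps = np.linspace(1.0, 3.5, 26)
      samples_per_T = 40

      # One scalar observable per sample (|m| large below T_true, small above).
      T_all, m_all = [], []
      for T in temps:
          base = 0.9 if T < T_true else 0.1
          m_all.append(base + 0.1 * rng.standard_normal(samples_per_T))
          T_all.append(np.full(samples_per_T, T))
      T_all, m_all = np.concatenate(T_all), np.concatenate(m_all)

      def best_threshold_accuracy(values, labels):
          """Best accuracy of a one-parameter threshold classifier on `values`."""
          best = 0.0
          for thr in np.linspace(values.min(), values.max(), 100):
              for pred in (values > thr, values <= thr):     # try both orientations
                  best = max(best, np.mean(pred.astype(int) == labels))
          return best

      # Learning by confusion: relabel the data for every trial critical point and
      # record the best achievable accuracy; the curve peaks near the true T_c.
      accuracies = []
      for T_trial in temps:
          labels = (T_all >= T_trial).astype(int)     # 0 = "ordered", 1 = "disordered"
          accuracies.append(best_threshold_accuracy(m_all, labels))

      # Exclude the endpoints, which are trivially classifiable (all one label).
      best_idx = int(np.argmax(np.array(accuracies)[1:-1])) + 1
      print("estimated T_c ~", round(float(temps[best_idx]), 2))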

  1. Neural Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Patrick I.

    2003-09-23

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. There are many techniques used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle produced a given set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results or output. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. It is also important to interpret the word artificial in that definition as meaning computer programs that use calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe how neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing information [2]. Each one of these cells acts as a simple processor. When individual cells interact with one another, the complex abilities of the brain are made possible. In neural networks, the input data are processed by a propagation function that adds up the values of all the incoming data. The resulting value is then compared with a threshold or specific value, and must exceed the activation function value in order to become output. The activation function is a mathematical function that a neuron uses to produce an output based on its input value [8]. Figure 1 depicts this process. Neural networks usually have three layers: an input, a hidden, and an output layer. These layers create the end result of the neural network. A real-world example is a child associating the word dog with a picture. The child says dog and simultaneously looks at a picture of a dog. The input is the spoken word "dog", the hidden layer is the brain processing, and the output is the category of the word dog based on the picture. This illustration describes how a neural network functions.
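
    The neuron computation described in the record, a propagation function that sums incoming values followed by a threshold-type activation, can be written in a few lines. The weights, inputs, and threshold below are illustrative assumptions.

      # Minimal sketch of a single artificial neuron: the propagation function sums
      # the weighted inputs, and a step activation decides whether it produces output.
      def neuron_output(inputs, weights, threshold=0.5):
          propagation = sum(x * w for x, w in zip(inputs, weights))   # sum of incoming values
          return 1 if propagation > threshold else 0                  # step activation

      print(neuron_output([0.2, 0.7, 0.1], [0.5, 0.9, -0.3]))   # fires: weighted sum 0.70 > 0.5
      print(neuron_output([0.2, 0.1, 0.9], [0.5, 0.9, -0.3]))   # silent: weighted sum -0.08 <= 0.5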

  2. Stylized facts in social networks: Community-based static modeling

    NASA Astrophysics Data System (ADS)

    Jo, Hang-Hyun; Murase, Yohsuke; Török, János; Kertész, János; Kaski, Kimmo

    2018-06-01

    The past analyses of datasets of social networks have enabled us to make empirical findings of a number of aspects of human society, which are commonly featured as stylized facts of social networks, such as broad distributions of network quantities, existence of communities, assortative mixing, and intensity-topology correlations. Since the understanding of the structure of these complex social networks is far from complete, for deeper insight into human society more comprehensive datasets and modeling of the stylized facts are needed. Although the existing dynamical and static models can generate some stylized facts, here we take an alternative approach by devising a community-based static model with heterogeneous community sizes and larger communities having smaller link density and weight. With these few assumptions we are able to generate realistic social networks that show most stylized facts for a wide range of parameters, as demonstrated numerically and analytically. Since our community-based static model is simple to implement and easily scalable, it can be used as a reference system, benchmark, or testbed for further applications.
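
    A minimal sketch of a community-based static construction in this spirit is given below: community sizes are drawn from a heavy-tailed distribution, and larger communities are wired with a smaller link density. The size distribution, the density-versus-size rule, the single-membership simplification, and all parameter values are assumptions, not the published model (which, among other things, assigns nodes to several communities and weights the links).

      import itertools
      import numpy as np

      rng = np.random.default_rng(3)

      n_communities = 200
      sizes = np.clip(rng.pareto(2.0, n_communities).astype(int) + 2, 2, 50)   # heavy-tailed sizes

      edges, node_id = set(), 0
      for size in sizes:
          members = range(node_id, node_id + size)
          node_id += size
          p_link = min(1.0, 4.0 / size)          # larger community -> smaller link density
          for u, v in itertools.combinations(members, 2):
              if rng.random() < p_link:
                  edges.add((u, v))

      degrees = np.zeros(node_id)
      for u, v in edges:
          degrees[u] += 1
          degrees[v] += 1
      print("nodes:", node_id, "edges:", len(edges), "mean degree:", round(degrees.mean(), 2))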

  3. Geometric Bioinspired Networks for Recognition of 2-D and 3-D Low-Level Structures and Transformations.

    PubMed

    Bayro-Corrochano, Eduardo; Vazquez-Santacruz, Eduardo; Moya-Sanchez, Eduardo; Castillo-Munis, Efrain

    2016-10-01

    This paper presents the design of radial basis function geometric bioinspired networks and their applications. Until now, the design of neural networks has been inspired by biological models of neural networks, but mostly using vector calculus and linear algebra. However, these designs have never shown the role of geometric computing. The question is how biological neural networks handle complex geometric representations involving Lie group operations like rotations. Even though current artificial neural networks are biologically inspired, they are just models which cannot reproduce a plausible biological process. Until now, researchers have not shown how such models can be incorporated into the processing of geometric computing. Here, for the first time in the artificial neural networks domain, we address this issue by designing a kind of geometric RBF using the geometric algebra framework. As a result, using our artificial networks, we show how geometric computing can be carried out by artificial neural networks. Such geometric neural networks have great potential in robot vision. The most important aspect of this contribution is to propose artificial geometric neural networks for challenging tasks in perception and action. In our experimental analysis, we show the applicability of our geometric designs, and present interesting experiments using 2-D data of real images and 3-D screw axis data. In general, our models should be used to process different types of inputs, such as visual cues, touch (texture, elasticity, temperature), taste, and sound. One important task of a perception-action system is to fuse a variety of cues coming from the environment and relate them via a sensor-motor manifold with motor modules to carry out diverse reasoned actions.

  4. Neural-Network Quantum States, String-Bond States, and Chiral Topological States

    NASA Astrophysics Data System (ADS)

    Glasser, Ivan; Pancotti, Nicola; August, Moritz; Rodriguez, Ivan D.; Cirac, J. Ignacio

    2018-01-01

    Neural-network quantum states have recently been introduced as an Ansatz for describing the wave function of quantum many-body systems. We show that there are strong connections between neural-network quantum states in the form of restricted Boltzmann machines and some classes of tensor-network states in arbitrary dimensions. In particular, we demonstrate that short-range restricted Boltzmann machines are entangled plaquette states, while fully connected restricted Boltzmann machines are string-bond states with a nonlocal geometry and low bond dimension. These results shed light on the underlying architecture of restricted Boltzmann machines and their efficiency at representing many-body quantum states. String-bond states also provide a generic way of enhancing the power of neural-network quantum states and a natural generalization to systems with larger local Hilbert space. We compare the advantages and drawbacks of these different classes of states and present a method to combine them together. This allows us to benefit from both the entanglement structure of tensor networks and the efficiency of neural-network quantum states into a single Ansatz capable of targeting the wave function of strongly correlated systems. While it remains a challenge to describe states with chiral topological order using traditional tensor networks, we show that, because of their nonlocal geometry, neural-network quantum states and their string-bond-state extension can describe a lattice fractional quantum Hall state exactly. In addition, we provide numerical evidence that neural-network quantum states can approximate a chiral spin liquid with better accuracy than entangled plaquette states and local string-bond states. Our results demonstrate the efficiency of neural networks to describe complex quantum wave functions and pave the way towards the use of string-bond states as a tool in more traditional machine-learning applications.
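
    For reference, the sketch below evaluates the unnormalized amplitude of a spin configuration under a restricted-Boltzmann-machine Ansatz of the kind discussed above. The couplings are random real numbers for illustration; in general they are complex and are optimized variationally.

      import numpy as np

      rng = np.random.default_rng(4)
      n_visible, n_hidden = 6, 4
      a = 0.1 * rng.standard_normal(n_visible)              # visible biases
      b = 0.1 * rng.standard_normal(n_hidden)               # hidden biases
      W = 0.1 * rng.standard_normal((n_hidden, n_visible))  # couplings (real here for simplicity)

      def rbm_amplitude(sigma):
          """Unnormalized wave-function amplitude psi(sigma) for spins sigma in {-1, +1}."""
          theta = b + W @ sigma
          return np.exp(a @ sigma) * np.prod(2.0 * np.cosh(theta))

      sigma = np.array([1, -1, 1, 1, -1, -1])
      print("psi(sigma) =", rbm_amplitude(sigma))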

  5. Road Risk Modeling and Cloud-Aided Safety-Based Route Planning.

    PubMed

    Li, Zhaojian; Kolmanovsky, Ilya; Atkins, Ella; Lu, Jianbo; Filev, Dimitar P; Michelini, John

    2016-11-01

    This paper presents a safety-based route planner that exploits vehicle-to-cloud-to-vehicle (V2C2V) connectivity. Time and road risk index (RRI) are considered as metrics to be balanced based on user preference. To evaluate road segment risk, a road and accident database from the highway safety information system is mined with a hybrid neural network model to predict RRI. Real-time factors such as time of day, day of the week, and weather are included as correction factors to the static RRI prediction. With real-time RRI and expected travel time, route planning is formulated as a multiobjective network flow problem and further reduced to a mixed-integer programming problem. A V2C2V implementation of our safety-based route planning approach is proposed to facilitate access to real-time information and computing resources. A real-world case study, route planning through the city of Columbus, Ohio, is presented. Several scenarios illustrate how the "best" route can be adjusted to favor time versus safety metrics.
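
    The time-versus-risk trade-off described above can be illustrated, in a much simplified form, as a weighted shortest-path problem in which a user-preference weight blends travel time and the road risk index. The toy road graph, RRI values, risk scaling, and the scalarized cost are assumptions; the paper itself formulates a multiobjective network flow problem reduced to a mixed-integer program.

      import heapq

      # edge: (neighbor, travel_time_minutes, risk_index)
      roads = {
          "A": [("B", 10, 0.8), ("C", 15, 0.2)],
          "B": [("D", 10, 0.9)],
          "C": [("D", 14, 0.1)],
          "D": [],
      }

      def safest_fastest_route(graph, start, goal, alpha=0.5):
          """Dijkstra on cost = (1 - alpha) * time + alpha * scaled risk."""
          queue, seen = [(0.0, start, [start])], set()
          while queue:
              cost, node, path = heapq.heappop(queue)
              if node == goal:
                  return cost, path
              if node in seen:
                  continue
              seen.add(node)
              for nxt, minutes, rri in graph[node]:
                  edge_cost = (1 - alpha) * minutes + alpha * 20.0 * rri   # 20.0: assumed risk scaling
                  heapq.heappush(queue, (cost + edge_cost, nxt, path + [nxt]))
          return float("inf"), []

      print(safest_fastest_route(roads, "A", "D", alpha=0.2))   # favors time
      print(safest_fastest_route(roads, "A", "D", alpha=0.9))   # favors safety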

  6. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Shoujun, E-mail: sunnyway@nwpu.edu.cn; Ge, Lefei; Ma, Shaojie

    2014-04-15

    Given the strongly nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from the initial weights obtained by the improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance training accuracy, stability, and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.
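
    The hybrid training strategy, a genetic algorithm supplying initial weights that gradient descent then refines, is sketched below on a tiny one-dimensional wavelet network. The toy target function, the "Mexican hat" wavelet, the finite-difference gradients, and all hyperparameters are illustrative assumptions and are unrelated to the paper's SRM model.

      import numpy as np

      rng = np.random.default_rng(5)
      x = np.linspace(-3, 3, 120)
      y_target = np.sin(2 * x) * np.exp(-0.2 * x**2)           # toy target to be modeled

      K = 4                                                     # number of wavelet units

      def wnn(params, x):
          w, t, s = params[:K], params[K:2*K], np.abs(params[2*K:]) + 1e-3
          u = (x[None, :] - t[:, None]) / s[:, None]
          psi = (1 - u**2) * np.exp(-u**2 / 2)                  # "Mexican hat" wavelet
          return (w[:, None] * psi).sum(axis=0)

      def mse(params):
          return np.mean((wnn(params, x) - y_target) ** 2)

      # Stage 1: a simple genetic algorithm (selection + mutation) finds good initial weights.
      pop = rng.normal(0, 1, size=(40, 3 * K))
      for _ in range(60):
          fitness = np.array([mse(p) for p in pop])
          parents = pop[np.argsort(fitness)[:20]]               # keep the best half
          children = parents + rng.normal(0, 0.1, parents.shape)
          pop = np.vstack([parents, children])
      best = pop[np.argmin([mse(p) for p in pop])]

      # Stage 2: gradient descent (finite-difference gradients) refines the result.
      def num_grad(f, p, eps=1e-5):
          g = np.zeros_like(p)
          for i in range(p.size):
              d = np.zeros_like(p); d[i] = eps
              g[i] = (f(p + d) - f(p - d)) / (2 * eps)
          return g

      params, lr = best.copy(), 0.1
      for _ in range(300):
          step = params - lr * num_grad(mse, params)
          if mse(step) < mse(params):
              params = step
          else:
              lr *= 0.5                                         # crude step-size control
      print("MSE after GA:", round(mse(best), 5), " after GA+GD:", round(mse(params), 5))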

  7. Fault Analysis of Space Station DC Power Systems-Using Neural Network Adaptive Wavelets to Detect Faults

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Wang, Yanchun; Dolce, James L.

    1997-01-01

    This paper describes the application of neural network adaptive wavelets for fault diagnosis of the space station power system. The method combines the wavelet transform with a neural network by incorporating daughter wavelets into the weights. Therefore, the wavelet transform and the neural network training procedure become one stage, which avoids the complex computation of wavelet parameters and makes the procedure more straightforward. The simulation results show that the proposed method is very efficient for the identification of fault locations.

  8. Multispectral image fusion using neural networks

    NASA Technical Reports Server (NTRS)

    Kagel, J. H.; Platt, C. A.; Donaven, T. W.; Samstad, E. A.

    1990-01-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  9. Using Neural Networks in the Mapping of Mixed Discrete/Continuous Design Spaces With Application to Structural Design

    DTIC Science & Technology

    1994-02-01

    ...desired that the problem to which the design space mapping techniques were applied be easily analyzed, yet provide a design space with realistic complexity... consistent fully stressed solution... In order to reduce the computational expense required to optimize design spaces, neural networks... employed in this study. Some of the issues involved in using neural networks to do design space mapping are how to configure the neural network, how much...

  10. Neural Networks: A Primer

    DTIC Science & Technology

    1991-05-01

    AL-TP-1991-0011. Neural Networks: A Primer. Vince L. Wiggins, RRC, Incorporated, 3833 Texas Avenue, Suite 256, Bryan, TX 77802. Contract F41689-88-D-0251. Neural network technology has recently demonstrated capabilities in areas important to personnel research such as statistical analysis...

  11. Development and application of deep convolutional neural network in target detection

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaowei; Wang, Chunping; Fu, Qiang

    2018-04-01

    With the development of big data and algorithms, deep convolutional neural networks with many hidden layers offer more powerful feature learning and feature expression ability than traditional machine learning methods, allowing artificial intelligence to surpass human-level performance in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes and reflects on some existing problems in current research, and finally discusses prospects for the future development of deep convolutional neural networks.

  12. Genetic algorithm for neural networks optimization

    NASA Astrophysics Data System (ADS)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed-forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, with the neural network topology and other parameters held fixed, were investigated. The early results indicate that this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The neural networks and genetic algorithm were programmed using MATLAB.

  13. Neural networks and MIMD-multiprocessors

    NASA Technical Reports Server (NTRS)

    Vanhala, Jukka; Kaski, Kimmo

    1990-01-01

    Two artificial neural network models are compared. They are the Hopfield Neural Network Model and the Sparse Distributed Memory model. Distributed algorithms for both of them are designed and implemented. The run time characteristics of the algorithms are analyzed theoretically and tested in practice. The storage capacities of the networks are compared. Implementations are done using a distributed multiprocessor system.

  14. Neural-Network Computer Transforms Coordinates

    NASA Technical Reports Server (NTRS)

    Josin, Gary M.

    1990-01-01

    Numerical simulation demonstrated ability of conceptual neural-network computer to generalize what it has "learned" from few examples. Ability to generalize achieved with even simple neural network (relatively few neurons) and after exposure of network to only few "training" examples. Ability to obtain fairly accurate mappings after only few training examples used to provide solutions to otherwise intractable mapping problems.

  15. Cascade Back-Propagation Learning in Neural Networks

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2003-01-01

    The cascade back-propagation (CBP) algorithm is the basis of a conceptual design for accelerating learning in artificial neural networks. The neural networks would be implemented as analog very-large-scale integrated (VLSI) circuits, and circuits to implement the CBP algorithm would be fabricated on the same VLSI circuit chips with the neural networks. Heretofore, artificial neural networks have learned slowly because it has been necessary to train them via software, for lack of a good on-chip learning technique. The CBP algorithm is an on-chip technique that provides for continuous learning in real time. Artificial neural networks are trained by example: A network is presented with training inputs for which the correct outputs are known, and the algorithm strives to adjust the weights of synaptic connections in the network to make the actual outputs approach the correct outputs. The input data are generally divided into three parts. Two of the parts, called the "training" and "cross-validation" sets, respectively, must be such that the corresponding input/output pairs are known. During training, the cross-validation set enables verification of the status of the input-to-output transformation learned by the network to avoid over-learning. The third part of the data, termed the "test" set, consists of the inputs that are required to be transformed into outputs; this set may or may not include the training set and/or the cross-validation set. Proposed neural-network circuitry for on-chip learning would be divided into two distinct networks; one for training and one for validation. Both networks would share the same synaptic weights.
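
    The three-way data split described above, with the cross-validation set used to guard against over-learning, can be illustrated independently of the CBP hardware. In the sketch below a small polynomial regressor trained by gradient descent stands in for the network, and the weights kept are those that perform best on the cross-validation set; the data, model, and learning rate are assumptions.

      import numpy as np

      rng = np.random.default_rng(6)
      x = rng.uniform(-1, 1, 90)
      y = np.sin(3 * x) + 0.1 * rng.standard_normal(90)

      idx = rng.permutation(90)
      train, val, test = idx[:50], idx[50:70], idx[70:]        # the three data parts

      def features(x):                                         # degree-9 polynomial features
          return np.vander(x, 10, increasing=True)

      def mse(w, subset):
          return np.mean((features(x[subset]) @ w - y[subset]) ** 2)

      w = np.zeros(10)
      best_w, best_val = w.copy(), np.inf
      for epoch in range(5000):
          grad = 2 * features(x[train]).T @ (features(x[train]) @ w - y[train]) / train.size
          w -= 0.05 * grad
          v = mse(w, val)
          if v < best_val:                                     # keep the weights that generalize best
              best_val, best_w = v, w.copy()

      print("test MSE at best cross-validation point:", round(mse(best_w, test), 4))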

  16. Neural-like growing networks

    NASA Astrophysics Data System (ADS)

    Yashchenko, Vitaliy A.

    2000-03-01

    On the basis of an analysis of scientific ideas reflecting the regularities in the structure and functioning of the biological structures of the brain, together with the analysis and synthesis of knowledge developed in various branches of computer science, the foundations of a theory of a new class of neural-like growing networks, without analogue in world practice, have been developed. At the base of neural-like growing networks lies a synthesis of the knowledge developed by two classical theories: semantic networks and neural networks. The first makes it possible to represent meaning as objects and the connections between them, in accordance with the construction of the network; each meaning thereby receives a separate component of the network as a node connected to other nodes. This broadly corresponds to the structure reflected in the brain, where each distinct concept is represented by a certain structure and has a designating symbol. Second, the network gains increased semantic clarity because not only the connections between neural elements are formed, but also the elements themselves; that is, the network is not simply constructed by placing semantic structures in an environment of neural elements, but the environment itself is created, as an equivalent of a memory environment. Thus neural-like growing networks provide a convenient apparatus for modeling mechanisms of teleological thinking as the fulfillment of certain psychophysiological functions.

  17. Neural Networks for Modeling and Control of Particle Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelen, A. L.; Biedron, S. G.; Chase, B. E.

    Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Often times, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We also describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  18. Neural Networks for Modeling and Control of Particle Accelerators

    NASA Astrophysics Data System (ADS)

    Edelen, A. L.; Biedron, S. G.; Chase, B. E.; Edstrom, D.; Milton, S. V.; Stabile, P.

    2016-04-01

    Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Often times, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  19. Neural Networks for Modeling and Control of Particle Accelerators

    DOE PAGES

    Edelen, A. L.; Biedron, S. G.; Chase, B. E.; ...

    2016-04-01

    Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Often times, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We also describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  20. An optimally evolved connective ratio of neural networks that maximizes the occurrence of synchronized bursting behavior

    PubMed Central

    2012-01-01

    Background Synchronized bursting activity (SBA) is a remarkable dynamical behavior in both ex vivo and in vivo neural networks. Investigations of the underlying structural characteristics associated with SBA are crucial to understanding the system-level regulatory mechanism of neural network behaviors. Results In this study, artificial pulsed neural networks were established using spike response models to capture fundamental dynamics of large-scale ex vivo cortical networks. Network simulations with synaptic parameter perturbations showed the following two findings. (i) In a network with an excitatory ratio (ER) of 80-90%, its connective ratio (CR) was within a range of 10-30% when the occurrence of SBA reached the highest expectation. This result was consistent with the experimental observation in ex vivo neuronal networks, which were reported to possess a matured inhibitory synaptic ratio of 10-20% and a CR of 10-30%. (ii) No SBA occurred when a network did not contain any all-positive-interaction feedback loop (APFL) motif. In a neural network containing APFLs, the number of APFLs presented an optimal range corresponding to the maximal occurrence of SBA, which was very similar to the optimal CR. Conclusions In a neural network, the evolutionarily selected CR (10-30%) optimizes the occurrence of SBA, and the APFL serves as a pivotal network motif required to maximize the occurrence of SBA. PMID:22462685

  1. Real-time classification of signals from three-component seismic sensors using neural nets

    NASA Astrophysics Data System (ADS)

    Bowman, B. C.; Dowla, F.

    1992-05-01

    Adaptive seismic data acquisition systems with capabilities of signal discrimination and event classification are important in treaty monitoring, proliferation, and earthquake early detection systems. Potential applications include monitoring underground chemical explosions, as well as other military, cultural, and natural activities where characteristics of signals change rapidly and without warning. In these applications, the ability to detect and interpret events rapidly without falling behind the influx of the data is critical. We developed a system for real-time data acquisition, analysis, learning, and classification of recorded events employing some of the latest technology in computer hardware, software, and artificial neural network methods. The system is able to train dynamically and update its knowledge based on new data. The software is modular and hardware-independent; i.e., the front-end instrumentation is transparent to the analysis system. The software is designed to take advantage of the multiprocessing environment of the Unix operating system. The Unix System V shared memory and static RAM protocols for data access and the semaphore mechanism for interprocess communications were used. As the three-component sensor detects a seismic signal, it is displayed graphically on a color monitor using X11/Xlib graphics with interactive screening capabilities. For interesting events, the triaxial signal polarization is computed, a fast Fourier Transform (FFT) algorithm is applied, and the normalized power spectrum is transmitted to a backpropagation neural network for event classification. The system is currently capable of handling three data channels with a sampling rate of 500 Hz, which covers the bandwidth of most seismic events. The system has been tested in a laboratory setting with artificial events generated in the vicinity of a three-component sensor.
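
    The signal-processing front end described above, windowed three-component data, an FFT, and a normalized power spectrum fed to the classifier, is sketched below. The synthetic waveform and window length are assumptions; the 500 Hz sampling rate follows the abstract, and the polarization analysis and backpropagation network are omitted.

      import numpy as np

      fs = 500                                   # sampling rate (Hz)
      t = np.arange(0, 2.0, 1.0 / fs)            # 2-second window
      rng = np.random.default_rng(7)

      # Three channels (Z, N, E): a toy "event" plus noise.
      event = np.sin(2 * np.pi * 12 * t) * np.exp(-3 * t)
      channels = np.stack([event + 0.2 * rng.standard_normal(t.size) for _ in range(3)])

      def normalized_power_spectrum(window):
          spectrum = np.abs(np.fft.rfft(window)) ** 2
          return spectrum / spectrum.sum()       # normalize so the classifier sees shape, not amplitude

      features = np.concatenate([normalized_power_spectrum(ch) for ch in channels])
      print("feature vector length for the neural network:", features.size)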

  2. A neural network approach to burst detection.

    PubMed

    Mounce, S R; Day, A J; Wood, A S; Khan, A; Widdop, P D; Machell, J

    2002-01-01

    This paper describes how hydraulic and water quality data from a distribution network may be used to provide a more efficient leakage management capability for the water industry. The research presented concerns the application of artificial neural networks to the issue of detection and location of leakage in treated water distribution systems. An architecture for an Artificial Neural Network (ANN) based system is outlined. The neural network uses time series data produced by sensors to directly construct an empirical model for prediction and classification of leaks. Results are presented using data from an experimental site in Yorkshire Water's Keighley distribution system.

  3. Application of artificial neural networks in nonlinear analysis of trusses

    NASA Technical Reports Server (NTRS)

    Alam, J.; Berke, L.

    1991-01-01

    A method is developed to incorporate a neural network model of material response, based upon the backpropagation algorithm, into nonlinear elastic truss analysis using the initial stiffness method. Different network configurations are developed to assess the accuracy of neural network modeling of nonlinear material response. In addition, a scheme based upon linear interpolation of material data is also implemented for comparison purposes. It is found that the neural network approach can yield very accurate results if used with care. For the type of problems under consideration, it offers a viable alternative to other material modeling methods.

  4. Design of a MIMD neural network processor

    NASA Astrophysics Data System (ADS)

    Saeks, Richard E.; Priddy, Kevin L.; Pap, Robert M.; Stowell, S.

    1994-03-01

    The Accurate Automation Corporation (AAC) neural network processor (NNP) module is a fully programmable multiple instruction multiple data (MIMD) parallel processor optimized for the implementation of neural networks. The AAC NNP design fully exploits the intrinsic sparseness of neural network topologies. Moreover, by using a MIMD parallel processing architecture one can update multiple neurons in parallel with efficiency approaching 100 percent as the size of the network increases. Each AAC NNP module has 8 K neurons and 32 K interconnections and is capable of 140,000,000 connections per second with an eight processor array capable of over one billion connections per second.

  5. Neural network regulation driven by autonomous neural firings

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified by neural firings. Here, we study the effect of autonomous firings on network formation. Under temporally asymmetric Hebbian learning, bidirectional connections easily lose their balance and become unidirectional. Defining the differences between reciprocal connections as new variables, we can express the learning dynamics as if the new variables were Ising spins interacting with one another, as in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.

  6. A Decade of Neural Networks: Practical Applications and Prospects

    NASA Technical Reports Server (NTRS)

    Kemeny, Sabrina E.

    1994-01-01

    The Jet Propulsion Laboratory Neural Network Workshop, sponsored by NASA and DOD, brings together sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and application prospects. While the speed and computing power of microprocessors continue to grow at an ever-increasing pace, the demand to intelligently and adaptively deal with the complex, fuzzy, and often ill-defined world around us remains to a large extent unaddressed. Powerful, highly parallel computing paradigms such as neural networks promise to have a major impact in addressing these needs. Papers in the workshop proceedings highlight benefits of neural networks in real-world applications compared to conventional computing techniques. Topics include fault diagnosis, pattern recognition, and multiparameter optimization.

  7. A decade of neural networks: Practical applications and prospects

    NASA Technical Reports Server (NTRS)

    Kemeny, Sabrina (Editor); Thakoor, Anil (Editor)

    1994-01-01

    On May 11-13, 1994, JPL's Center for Space Microelectronics Technology (CSMT) hosted a neural network workshop entitled, 'A Decade of Neural Networks: Practical Applications and Prospects,' sponsored by DOD and NASA. The past ten years of renewed activity in neural network research has brought the technology to a crossroads regarding the overall scope of its future practical applicability. The purpose of the workshop was to bring together the sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and development prospects, with emphasis on practical applications. Of the 93 participants, roughly 15% were from government agencies, 30% were from industry, 20% were from universities, and 35% were from Federally Funded Research and Development Centers (FFRDC's).

  8. Reduced Synchronization Persistence in Neural Networks Derived from Atm-Deficient Mice

    PubMed Central

    Levine-Small, Noah; Yekutieli, Ziv; Aljadeff, Jonathan; Boccaletti, Stefano; Ben-Jacob, Eshel; Barzilai, Ari

    2011-01-01

    Many neurodegenerative diseases are characterized by malfunction of the DNA damage response. Therefore, it is important to understand the connection between system level neural network behavior and DNA. Neural networks drawn from genetically engineered animals, interfaced with micro-electrode arrays allowed us to unveil connections between networks’ system level activity properties and such genome instability. We discovered that Atm protein deficiency, which in humans leads to progressive motor impairment, leads to a reduced synchronization persistence compared to wild type synchronization, after chemically imposed DNA damage. Not only do these results suggest a role for DNA stability in neural network activity, they also establish an experimental paradigm for empirically determining the role a gene plays on the behavior of a neural network. PMID:21519382

  9. Fast Recall for Complex-Valued Hopfield Neural Networks with Projection Rules.

    PubMed

    Kobayashi, Masaki

    2017-01-01

    Many models of neural networks have been extended to complex-valued neural networks. A complex-valued Hopfield neural network (CHNN) is a complex-valued version of a Hopfield neural network. Complex-valued neurons can represent multistates, and CHNNs are available for the storage of multilevel data, such as gray-scale images. The CHNNs are often trapped in local minima, and their noise tolerance is low. Lee improved the noise tolerance of the CHNNs by detecting and exiting the local minima. In the present work, we propose a new recall algorithm that eliminates the local minima. We show through computer simulations that our proposed recall algorithm not only accelerates the recall but also improves the noise tolerance.
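
    The sketch below sets up a complex-valued Hopfield network with phase-quantized multistate neurons and the projection learning rule, and runs a conventional asynchronous recall from a corrupted probe. The network size, resolution K, and patterns are assumptions, and the paper's improved recall algorithm is not reproduced.

      import numpy as np

      K, N = 4, 16                                  # K phase states, N neurons
      rng = np.random.default_rng(8)
      states = np.exp(2j * np.pi * np.arange(K) / K)

      def quantize(z):
          """Map complex values to the nearest of the K allowed phase states."""
          return states[np.argmin(np.abs(states[None, :] - z[:, None]), axis=1)]

      # Store two random multistate patterns with the projection rule.
      patterns = states[rng.integers(0, K, size=(2, N))]
      X = patterns.T                                # N x P matrix of stored patterns
      W = X @ np.linalg.inv(X.conj().T @ X) @ X.conj().T

      # Recall from a corrupted version of the first pattern.
      probe = patterns[0].copy()
      probe[:2] = states[rng.integers(0, K, 2)]     # corrupt two neurons
      x = probe.copy()
      for _ in range(20):                           # asynchronous updates
          for j in range(N):
              x[j] = quantize(np.array([W[j] @ x]))[0]

      print("overlap with the stored pattern after recall:",
            round(abs(np.vdot(x, patterns[0])) / N, 3))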

  10. A neural network model for credit risk evaluation.

    PubMed

    Khashman, Adnan

    2009-08-01

    Credit scoring is one of the key analytical techniques in credit risk evaluation, which has been an active research area in financial risk management. This paper presents a credit risk evaluation system that uses a neural network model based on the back-propagation learning algorithm. We train and implement the neural network to decide whether to approve or reject a credit application, using seven learning schemes and real-world credit applications from the Australian credit approval datasets. A comparison of the system performance under the different learning schemes is provided; furthermore, we compare the performance of two neural networks, with one and two hidden layers, following the ideal learning scheme. Experimental results suggest that neural networks can be effectively used in the automatic processing of credit applications.

  11. On Extended Dissipativity of Discrete-Time Neural Networks With Time Delay.

    PubMed

    Feng, Zhiguang; Zheng, Wei Xing

    2015-12-01

    In this brief, the problem of extended dissipativity analysis for discrete-time neural networks with time-varying delay is investigated. The definition of extended dissipativity of discrete-time neural networks is proposed, which unifies several performance measures, such as the H∞ performance, passivity, l2 - l∞ performance, and dissipativity. By introducing a triple-summable term in Lyapunov function, the reciprocally convex approach is utilized to bound the forward difference of the triple-summable term and then the extended dissipativity criterion for discrete-time neural networks with time-varying delay is established. The derived condition guarantees not only the extended dissipativity but also the stability of the neural networks. Two numerical examples are given to demonstrate the reduced conservatism and effectiveness of the obtained results.

  12. A new delay-independent condition for global robust stability of neural networks with time delays.

    PubMed

    Samli, Ruya

    2015-06-01

    This paper studies the problem of robust stability of dynamical neural networks with discrete time delays under the assumptions that the network parameters of the neural system are uncertain and norm-bounded, and the activation functions are slope-bounded. By employing the results of Lyapunov stability theory and matrix theory, new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for delayed neural networks are presented. The results reported in this paper can be easily tested by checking some special properties of symmetric matrices associated with the parameter uncertainties of neural networks. We also present a numerical example to show the effectiveness of the proposed theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Method and system for determining induction motor speed

    DOEpatents

    Parlos, Alexander G.; Bharadwaj, Raj M.

    2004-03-30

    A non-linear, semi-parametric neural network-based adaptive filter that determines the dynamic speed of a rotating rotor within an induction motor, without the explicit use of a speed sensor such as a tachometer, is disclosed. The neural network-based filter is developed using actual motor current measurements, voltage measurements, and nameplate information. The neural network-based adaptive filter is trained using an estimated speed calculator derived from the actual current and voltage measurements. The neural network-based adaptive filter uses voltage and current measurements to determine the instantaneous speed of a rotating rotor. The neural network-based adaptive filter also includes an on-line adaptation scheme that permits the filter to be readily adapted to new operating conditions during operation.

  14. An intercomparison of artificial intelligence approaches for polar scene identification

    NASA Technical Reports Server (NTRS)

    Tovinkere, V. R.; Penaloza, M.; Logar, A.; Lee, J.; Weger, R. C.; Berendes, T. A.; Welch, R. M.

    1993-01-01

    The following six different artificial-intelligence (AI) approaches to polar scene identification are examined: (1) a feed-forward back-propagation neural network, (2) a probabilistic neural network, (3) a hybrid neural network, (4) a 'don't care' feed-forward perceptron model, (5) a 'don't care' feed-forward back-propagation neural network, and (6) a fuzzy-logic-based expert system. The ten classes into which six AVHRR local-coverage arctic scenes were classified were: water, solid sea ice, broken sea ice, snow-covered mountains, land, stratus over ice, stratus over water, cirrus over water, cumulus over water, and multilayer cloudiness. It was found that the 'don't care' back-propagation neural network produced the highest accuracies. This approach also has a low CPU requirement.

  15. Impaired neural processing of dynamic faces in left-onset Parkinson's disease.

    PubMed

    Garrido-Vásquez, Patricia; Pell, Marc D; Paulmann, Silke; Sehm, Bernhard; Kotz, Sonja A

    2016-02-01

    Parkinson's disease (PD) affects patients beyond the motor domain. According to previous evidence, one mechanism that may be impaired in the disease is face processing. However, few studies have investigated this process at the neural level in PD. Moreover, research using dynamic facial displays rather than static pictures is scarce, but highly warranted due to the higher ecological validity of dynamic stimuli. In the present study we aimed to investigate how PD patients process emotional and non-emotional dynamic face stimuli at the neural level using event-related potentials. Since the literature has revealed a predominantly right-lateralized network for dynamic face processing, we divided the group into patients with left (LPD) and right (RPD) motor symptom onset (right versus left cerebral hemisphere predominantly affected, respectively). Participants watched short video clips of happy, angry, and neutral expressions and engaged in a shallow gender decision task in order to avoid confounds of task difficulty in the data. In line with our expectations, the LPD group showed significant face processing deficits compared to controls. While there were no group differences in early, sensory-driven processing (fronto-central N1 and posterior P1), the vertex positive potential, which is considered the fronto-central counterpart of the face-specific posterior N170 component, had a reduced amplitude and delayed latency in the LPD group. This may indicate disturbances of structural face processing in LPD. Furthermore, the effect was independent of the emotional content of the videos. In contrast, static facial identity recognition performance in LPD was not significantly different from controls, and comprehensive testing of cognitive functions did not reveal any deficits in this group. We therefore conclude that PD, and more specifically the predominant right-hemispheric affection in left-onset PD, is associated with impaired processing of dynamic facial expressions, which could be one of the mechanisms behind the often reported problems of PD patients in their social lives. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Solving differential equations with unknown constitutive relations as recurrent neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagge, Tobias J.; Stinis, Panagiotis; Yeung, Enoch H.

    We solve a system of ordinary differential equations with an unknown functional form of a sink (reaction rate) term. We assume that the measurements (time series) of state variables are partially available, and use a recurrent neural network to “learn” the reaction rate from this data. This is achieved by including discretized ordinary differential equations as part of a recurrent neural network training problem. We extend TensorFlow’s recurrent neural network architecture to create a simple but scalable and effective solver for the unknown functions, and apply it to a fed-batch bioreactor simulation problem. Use of techniques from recent deep learning literature enables training of functions with behavior manifesting over thousands of time steps. Our networks are structurally similar to recurrent neural networks, but differ in purpose, and require modified training strategies.
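
    The core construction, a discretized ODE whose unknown sink term is a small trainable network fitted to partially observed states, can be sketched without the TensorFlow machinery. The toy ODE, the one-hidden-layer rate model, and the use of scipy.optimize.minimize are illustrative assumptions.

      import numpy as np
      from scipy.optimize import minimize

      dt, n_steps = 0.05, 120
      true_rate = lambda x: 0.8 * x / (0.5 + x)            # "unknown" rate, used only to make data

      def rollout(rate_fn, x0=2.0):
          xs = [x0]
          for _ in range(n_steps):
              xs.append(xs[-1] + dt * (-rate_fn(xs[-1])))  # forward-Euler step of dx/dt = -r(x)
          return np.array(xs)

      observed_idx = np.arange(0, n_steps + 1, 10)         # measurements only partially available
      observed = rollout(true_rate)[observed_idx]

      def nn_rate(params, x):
          """One-hidden-layer rate model r(x) with 5 tanh units (16 parameters)."""
          w1, b1, w2, b2 = params[:5], params[5:10], params[10:15], params[15]
          return np.dot(w2, np.tanh(w1 * x + b1)) + b2

      def loss(params):
          pred = rollout(lambda x: nn_rate(params, x))[observed_idx]
          return np.sum((pred - observed) ** 2)

      rng = np.random.default_rng(9)
      result = minimize(loss, 0.3 * rng.standard_normal(16))   # numerical-gradient BFGS
      print("fit loss:", round(result.fun, 6))
      print("learned rate at x=1.0:", round(nn_rate(result.x, 1.0), 3),
            "| hidden true rate:", round(true_rate(1.0), 3))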

  17. Neural Network Computing and Natural Language Processing.

    ERIC Educational Resources Information Center

    Borchardt, Frank

    1988-01-01

    Considers the application of neural network concepts to traditional natural language processing and demonstrates that neural network computing architecture can: (1) learn from actual spoken language; (2) observe rules of pronunciation; and (3) reproduce sounds from the patterns derived by its own processes. (Author/CB)

  18. An overview on development of neural network technology

    NASA Technical Reports Server (NTRS)

    Lin, Chun-Shin

    1993-01-01

    The goal of the study was to obtain a bird's-eye view of current neural network technology and of neural network research activities in NASA. The purpose was twofold. One was to provide a reference document for NASA researchers who want to apply neural network techniques to solve their problems. The other was to report our survey results regarding NASA research activities and provide a view of what NASA is doing, what potential difficulties exist, and what NASA can or should do. In a ten-week study period, we interviewed ten neural network researchers at the Langley Research Center and sent out 36 survey forms to researchers at the Johnson Space Center, Lewis Research Center, Ames Research Center, and Jet Propulsion Laboratory. We also sent out 60 similar forms to educators and corporate researchers to collect general opinions regarding this field. Twenty-eight survey forms, 11 from NASA researchers and 17 from outside, were returned. Survey results were reported in our final report. In the final report, we first provided an overview of neural network technology. We reviewed ten neural network structures, discussed applications in five major areas, and compared analog, digital, and hybrid electronic implementations of neural networks. In the second part, we summarized known NASA neural network research studies and reported the results of the questionnaire survey. Survey results show that most studies are still in the development and feasibility-study stage. We compared the techniques, application areas, researchers' opinions on this technology, and many other aspects between NASA and non-NASA groups. We also summarized their opinions on difficulties encountered. Applications are considered the top research priority by most researchers; hardware development and learning-algorithm improvement come next. The lack of financial and management support is among the difficulties in research. All researchers agree that the use of neural networks could result in cost savings. Fault tolerance has been claimed as one important feature of neural computing; however, the survey indicates that very few studies address this issue. Fault tolerance is important in space missions and aircraft control. We believe it is worthwhile for NASA to devote more effort to the utilization of this feature.

  19. Bidirectional neural interface: Closed-loop feedback control for hybrid neural systems.

    PubMed

    Chou, Zane; Lim, Jeffrey; Brown, Sophie; Keller, Melissa; Bugbee, Joseph; Broccard, Frédéric D; Khraiche, Massoud L; Silva, Gabriel A; Cauwenberghs, Gert

    2015-01-01

    Closed-loop neural prostheses enable bidirectional communication between the biological and artificial components of a hybrid system. However, a major challenge in this field is the limited understanding of how these components, the two separate neural networks, interact with each other. In this paper, we propose an in vitro model of a closed-loop system that allows for easy experimental testing and modification of both biological and artificial network parameters. The interface closes the system loop in real time by stimulating each network based on recorded activity of the other network, within preset parameters. As a proof of concept we demonstrate that the bidirectional interface is able to establish and control network properties, such as synchrony, in a hybrid system of two neural networks significantly more effectively than the same system without the interface or with unidirectional alternatives. This success holds promise for the application of closed-loop systems in neural prostheses, brain-machine interfaces, and drug testing.

  20. Relationship between isoseismal area and magnitude of historical earthquakes in Greece by a hybrid fuzzy neural network method

    NASA Astrophysics Data System (ADS)

    Tselentis, G.-A.; Sokos, E.

    2012-01-01

    In this paper we suggest the use of diffusion neural networks (neural networks with intrinsic fuzzy logic abilities) to assess the relationship between isoseismal area and earthquake magnitude for the region of Greece. It is of particular importance to study historical earthquakes, for which we often have macroseismic information in the form of isoseisms, but such data are statistically incomplete for assessing magnitudes from an isoseismal area or for training conventional artificial neural networks for magnitude estimation. Fuzzy relationships are developed and used to train a feed-forward neural network with a back-propagation algorithm to obtain the final relationships. Seismic intensity data from 24 earthquakes in Greece have been used. Special attention is paid to the incompleteness of, and contradictory patterns in, scanty historical earthquake records. The results show that the proposed processing model is very effective and performs better than classical artificial neural networks, since the magnitude-macroseismic intensity target function is strongly nonlinear and in most cases the macroseismic datasets are very small.

  1. Tutorial: Neural networks and their potential application in nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uhrig, R.E.

    A neural network is a data processing system consisting of a number of simple, highly interconnected processing elements in an architecture inspired by the structure of the cerebral cortex portion of the brain. Hence, neural networks are often capable of doing things which humans or animals do well but which conventional computers often do poorly. Neural networks have emerged in the past few years as an area of unusual opportunity for research, development and application to a variety of real world problems. Indeed, neural networks exhibit characteristics and capabilities not provided by any other technology. Examples include reading Japanese Kanji characters and human handwriting, reading a typewritten manuscript aloud, compensating for alignment errors in robots, interpreting very "noisy" signals (e.g. electroencephalograms), modeling complex systems that cannot be modeled mathematically, and predicting whether proposed loans will be good or will fail. This paper presents a brief tutorial on neural networks and describes research on the potential applications to nuclear power plants.

  2. Classification of E-Nose Aroma Data of Four Fruit Types by ABC-Based Neural Network

    PubMed Central

    Adak, M. Fatih; Yumusak, Nejat

    2016-01-01

    Electronic nose technology is used in many areas, and frequently in the beverage industry for classification and quality-control purposes. In this study, four different sets of aroma data (strawberry, lemon, cherry, and melon) were obtained using a MOSES II electronic nose for the purpose of fruit classification. To improve the performance of the classification, the training phase of the neural network with two hidden layers was optimized using the artificial bee colony (ABC) algorithm, which is known to be successful in exploration. Test data were given to two different neural networks, each of which was trained separately with backpropagation (BP) and ABC, and average test performances were measured as 60% for the artificial neural network trained with BP and 76.39% for the artificial neural network trained with ABC. Training and test phases were repeated 30 times to obtain these average performance measurements. This level of performance shows that the artificial neural network trained with ABC is successful in classifying aroma data. PMID:26927124

  3. Using Neural Networks to Describe Tracer Correlations

    NASA Technical Reports Server (NTRS)

    Lary, D. J.; Mueller, M. D.; Mussa, H. Y.

    2003-01-01

    Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 to the present. The neural network Fortran code used is available for download.

  4. Artificial neural networks applied to forecasting time series.

    PubMed

    Montaño Moreno, Juan J; Palmer Pol, Alfonso; Muñoz Gracia, Pilar

    2011-04-01

    This study offers a description and comparison of the main models of Artificial Neural Networks (ANN) which have proved to be useful in time series forecasting, and also a standard procedure for the practical application of ANN in this type of task. The Multilayer Perceptron (MLP), Radial Basis Function (RBF), Generalized Regression Neural Network (GRNN), and Recurrent Neural Network (RNN) models are analyzed. With this aim in mind, we use a time series made up of 244 time points. A comparative study establishes that the error made by the four neural network models analyzed is less than 10%. In accordance with the interpretation criteria of this performance, it can be concluded that the neural network models show a close fit regarding their forecasting capacity. The model with the best performance is the RBF, followed by the RNN and MLP. The GRNN model is the one with the worst performance. Finally, we analyze the advantages and limitations of ANN, the possible solutions to these limitations, and provide an orientation towards future research.

  5. Method and apparatus for in-process sensing of manufacturing quality

    DOEpatents

    Hartman, Daniel A [Santa Fe, NM; Dave, Vivek R [Los Alamos, NM; Cola, Mark J [Santa Fe, NM; Carpenter, Robert W [Los Alamos, NM

    2005-02-22

    A method for determining the quality of an examined weld joint comprising the steps of providing acoustical data from the examined weld joint, and performing a neural network operation on the acoustical data to determine the quality of the examined weld joint produced by a friction weld process. The neural network may be trained by the steps of providing acoustical data and observable data from at least one test weld joint, and training the neural network based on the acoustical data and observable data to form a trained neural network so that the trained neural network is capable of determining the quality of an examined weld joint based on acoustical data from the examined weld joint. In addition, an apparatus having a housing, acoustical sensors mounted therein, and means for mounting the housing on a friction weld device so that the acoustical sensors do not contact the weld joint. The apparatus may sample the acoustical data necessary for the neural network to determine the quality of a weld joint.

  6. Using Neural Networks in Decision Making for a Reconfigurable Electro Mechanical Actuator (EMA)

    NASA Technical Reports Server (NTRS)

    Latino, Carl D.

    2001-01-01

    The objectives of this project were to demonstrate the applicability and advantages of a neural network approach for evaluating the performance of an electro-mechanical actuator (EMA). The EMA in question was intended for the X-37 Advanced Technology Vehicle. It will have redundant components for safety and reliability. The neural networks for this application are to monitor the operation of the redundant electronics that control the actuator in real time and decide on the operating configuration. The system we proposed consists of the actuator, sensors, control circuitry and dedicated (embedded) processors. The main purpose of the study was to develop suitable hardware and a neural network capable of allowing real-time reconfiguration decisions to be made. This approach was to be compared to other methods such as fuzzy logic and knowledge-based systems considered for the same application. Over the course of the project, a more general objective was the identification of other neural network applications and the education of interested NASA personnel on the topic of neural networks.

  7. Neural networks: Alternatives to conventional techniques for automatic docking

    NASA Technical Reports Server (NTRS)

    Vinz, Bradley L.

    1994-01-01

    Automatic docking of orbiting spacecraft is a crucial operation involving the identification of vehicle orientation as well as complex approach dynamics. The chaser spacecraft must be able to recognize the target spacecraft within a scene and achieve accurate closing maneuvers. In a video-based system, a target scene must be captured and transformed into a pattern of pixels. Successful recognition lies in the interpretation of this pattern. Due to their powerful pattern recognition capabilities, artificial neural networks offer a potential role in interpretation and automatic docking processes. Neural networks can reduce the computational time required by existing image processing and control software. In addition, neural networks are capable of recognizing and adapting to changes in their dynamic environment, enabling enhanced performance, redundancy, and fault tolerance. Most neural networks are robust to failure, capable of continued operation with a slight degradation in performance after minor failures. This paper discusses the particular automatic docking tasks neural networks can perform as viable alternatives to conventional techniques.

  8. Training Data Requirement for a Neural Network to Predict Aerodynamic Coefficients

    NASA Technical Reports Server (NTRS)

    Korsmeyer, David (Technical Monitor); Rajkumar, T.; Bardina, Jorge

    2003-01-01

    Basic aerodynamic coefficients are modeled as functions of angle of attack, speed brake deflection angle, Mach number, and side slip angle. Most of the aerodynamic parameters can be well-fitted using polynomial functions. We previously demonstrated that a neural network is a fast, reliable way of predicting aerodynamic coefficients. We encountered a few underfitted and/or overfitted results during prediction. The training data for the neural network are derived from wind tunnel test measurements and numerical simulations. The basic questions that arise are: how many training data points are required to produce an efficient neural network prediction, and which type of transfer function should be used between the input-hidden layer and the hidden-output layer. In this paper, a comparative study of the efficiency of neural network prediction based on different transfer functions and training dataset sizes is presented. The results of the neural network prediction reflect the sensitivity of the architecture, transfer functions, and training dataset size.

  9. Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations.

    PubMed

    Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke

    2018-02-01

    In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, the two nonlinear recurrent neural networks are proved to converge within finite time. In addition, by solving the associated differential equation, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed two nonlinear recurrent neural networks have a better convergence property (i.e., the upper bound is lower), and thus accurate solutions of general time-varying LMEs can be obtained in less time. Finally, various situations have been considered by setting different coefficient matrices of the general time-varying LMEs, and a wide variety of computer simulations (including an application to robot manipulators) have been conducted to validate the better finite-time convergence of the proposed two nonlinear recurrent neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
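
    The construction above can be made concrete with a zeroing-type recurrent network. The sketch below (plain NumPy, with an illustrative time-varying system and a generic sign-power activation rather than the paper's specific activation functions) defines the matrix-valued error E(t) = A(t)X(t) - B(t), imposes dE/dt = -gamma*F(E), and integrates the resulting state dynamics; the residual shrinks toward zero as the network evolves.

    ```python
    import numpy as np

    # Sketch of a zeroing-type recurrent network for A(t) X(t) = B(t): define the
    # error E = A X - B and impose dE/dt = -gamma * F(E), which yields the state
    # dynamics integrated below. A(t), B(t) and the sign-power activation F are
    # illustrative choices, not the paper's specific system or activation functions.
    def A(t):
        return np.array([[2 + np.sin(t), 0.3], [0.3, 2 + np.cos(t)]])

    def B(t):
        return np.array([[np.cos(t)], [np.sin(t)]])

    def F(E, r=0.5):
        return np.sign(E) * np.abs(E) ** r + E        # nonlinear activation, elementwise

    gamma, dt, T, eps = 10.0, 1e-3, 5.0, 1e-6
    X = np.zeros((2, 1))                              # arbitrary initial state
    for k in range(int(T / dt)):
        t = k * dt
        dA = (A(t + eps) - A(t - eps)) / (2 * eps)    # numerical time derivatives
        dB = (B(t + eps) - B(t - eps)) / (2 * eps)
        Xdot = np.linalg.solve(A(t), dB - dA @ X - gamma * F(A(t) @ X - B(t)))
        X = X + dt * Xdot                             # explicit Euler step
    print("residual ||A X - B|| at t = T:", np.linalg.norm(A(T) @ X - B(T)))
    ```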

  10. Periodicity and global exponential stability of generalized Cohen-Grossberg neural networks with discontinuous activations and mixed delays.

    PubMed

    Wang, Dongshu; Huang, Lihong

    2014-03-01

    In this paper, we investigate the periodic dynamical behaviors of a class of general Cohen-Grossberg neural networks with discontinuous right-hand sides and time-varying and distributed delays. By means of retarded differential inclusion theory and the fixed point theorem of multi-valued maps, the existence of periodic solutions for the neural networks is obtained. After that, we derive some sufficient conditions for the global exponential stability and convergence of the neural networks, using nonsmooth analysis theory with a generalized Lyapunov approach. Our results remain valid without assuming the boundedness (or a growth condition) and monotonicity of the discontinuous neuron activation functions. Moreover, our results extend previous works not only on discrete time-varying and distributed delayed neural networks with continuous or even Lipschitz continuous activations, but also on discrete time-varying and distributed delayed neural networks with discontinuous activations. We give some numerical examples to show the applicability and effectiveness of our main results. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Classification of E-Nose Aroma Data of Four Fruit Types by ABC-Based Neural Network.

    PubMed

    Adak, M Fatih; Yumusak, Nejat

    2016-02-27

    Electronic nose technology is used in many areas, and frequently in the beverage industry for classification and quality-control purposes. In this study, four different sets of aroma data (strawberry, lemon, cherry, and melon) were obtained using a MOSES II electronic nose for the purpose of fruit classification. To improve the performance of the classification, the training phase of the neural network with two hidden layers was optimized using the artificial bee colony (ABC) algorithm, which is known to be successful in exploration. Test data were given to two different neural networks, each of which was trained separately with backpropagation (BP) and ABC, and average test performances were measured as 60% for the artificial neural network trained with BP and 76.39% for the artificial neural network trained with ABC. Training and test phases were repeated 30 times to obtain these average performance measurements. This level of performance shows that the artificial neural network trained with ABC is successful in classifying aroma data.

  12. Evolutionary Wavelet Neural Network ensembles for breast cancer and Parkinson's disease prediction.

    PubMed

    Khan, Maryam Mahsal; Mendes, Alexandre; Chalup, Stephan K

    2018-01-01

    Wavelet Neural Networks are a combination of neural networks and wavelets and have been mostly used in the area of time-series prediction and control. Recently, Evolutionary Wavelet Neural Networks have been employed to develop cancer prediction models. The present study proposes to use ensembles of Evolutionary Wavelet Neural Networks. The search for a high quality ensemble is directed by a fitness function that incorporates the accuracy of the classifiers both independently and as part of the ensemble itself. The ensemble approach is tested on three publicly available biomedical benchmark datasets, one on Breast Cancer and two on Parkinson's disease, using a 10-fold cross-validation strategy. Our experimental results show that, for the first dataset, the performance was similar to previous studies reported in literature. On the second dataset, the Evolutionary Wavelet Neural Network ensembles performed better than all previous methods. The third dataset is relatively new and this study is the first to report benchmark results.

  13. Evolutionary Wavelet Neural Network ensembles for breast cancer and Parkinson’s disease prediction

    PubMed Central

    Mendes, Alexandre; Chalup, Stephan K.

    2018-01-01

    Wavelet Neural Networks are a combination of neural networks and wavelets and have been mostly used in the area of time-series prediction and control. Recently, Evolutionary Wavelet Neural Networks have been employed to develop cancer prediction models. The present study proposes to use ensembles of Evolutionary Wavelet Neural Networks. The search for a high quality ensemble is directed by a fitness function that incorporates the accuracy of the classifiers both independently and as part of the ensemble itself. The ensemble approach is tested on three publicly available biomedical benchmark datasets, one on Breast Cancer and two on Parkinson’s disease, using a 10-fold cross-validation strategy. Our experimental results show that, for the first dataset, the performance was similar to previous studies reported in literature. On the second dataset, the Evolutionary Wavelet Neural Network ensembles performed better than all previous methods. The third dataset is relatively new and this study is the first to report benchmark results. PMID:29420578

  14. Method and Apparatus for In-Process Sensing of Manufacturing Quality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartman, D.A.; Dave, V.R.; Cola, M.J.

    2005-02-22

    A method for determining the quality of an examined weld joint comprising the steps of providing acoustical data from the examined weld joint, and performing a neural network operation on the acoustical data to determine the quality of the examined weld joint produced by a friction weld process. The neural network may be trained by the steps of providing acoustical data and observable data from at least one test weld joint, and training the neural network based on the acoustical data and observable data to form a trained neural network so that the trained neural network is capable of determining the quality of an examined weld joint based on acoustical data from the examined weld joint. In addition, an apparatus having a housing, acoustical sensors mounted therein, and means for mounting the housing on a friction weld device so that the acoustical sensors do not contact the weld joint. The apparatus may sample the acoustical data necessary for the neural network to determine the quality of a weld joint.

  15. Real-Time Adaptive Color Segmentation by Neural Networks

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    2004-01-01

    Artificial neural networks that would utilize the cascade error projection (CEP) algorithm have been proposed as a means of autonomous, real-time, adaptive color segmentation of images that change with time. In the original intended application, such a neural network would be used to analyze digitized color video images of terrain on a remote planet as viewed from an uninhabited spacecraft approaching the planet. During descent toward the surface of the planet, information on the segmentation of the images into differently colored areas would be updated adaptively in real time to capture changes in contrast, brightness, and resolution, all in an effort to identify a safe and scientifically productive landing site and provide control feedback to steer the spacecraft toward that site. Potential terrestrial applications include monitoring images of crops to detect insect invasions and monitoring of buildings and other facilities to detect intruders. The CEP algorithm is reliable and is well suited to implementation in very-large-scale integrated (VLSI) circuitry. It was chosen over other neural-network learning algorithms because it is better suited to real-time learning: It provides a self-evolving neural-network structure, requires fewer iterations to converge, and is more tolerant to low resolution (that is, fewer bits) in the quantization of neural-network synaptic weights. Consequently, a CEP neural network learns relatively quickly, and the circuitry needed to implement it is relatively simple. Like other neural networks, a CEP neural network includes an input layer, hidden units, and output units. As in other neural networks, a CEP network is presented with a succession of input training patterns, giving rise to a set of outputs that are compared with the desired outputs. Also as in other neural networks, the synaptic weights are updated iteratively in an effort to bring the outputs closer to target values. A distinctive feature of the CEP neural network and algorithm is that each update of synaptic weights takes place in conjunction with the addition of another hidden unit, which then remains in place as still other hidden units are added on subsequent iterations. For a given training pattern, the synaptic weight between (1) the inputs and the previously added hidden units and (2) the newly added hidden unit is updated by an amount proportional to the partial derivative of a quadratic error function with respect to the synaptic weight. The synaptic weight between the newly added hidden unit and each output unit is given by a more complex function that involves the errors between the outputs and their target values, the transfer functions (hyperbolic tangents) of the neural units, and the derivatives of the transfer functions.
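
    A constructive network of this kind can be sketched in a few lines. The toy example below (NumPy) is a simplified cascade-style learner in the spirit of CEP rather than its exact update rules: each new hidden unit sees the original inputs plus all previously frozen hidden units, is trained against the current residual error, and then receives a closed-form output weight.

    ```python
    import numpy as np

    def cascade_train(X, y, n_hidden=8, inner_steps=200, lr=0.05, seed=0):
        """Constructive training loosely in the spirit of cascade-style algorithms such
        as CEP: hidden units are added one at a time, each trained against the current
        residual error and then frozen. (Simplified sketch, not the exact CEP updates.)"""
        rng = np.random.default_rng(seed)
        n = len(y)
        features = X.copy()                  # inputs plus outputs of frozen hidden units
        y_hat = np.zeros(n)
        for _ in range(n_hidden):
            residual = y - y_hat
            w = rng.normal(scale=0.1, size=features.shape[1])
            for _ in range(inner_steps):     # gradient steps toward the residual
                h = np.tanh(features @ w)
                w -= lr * features.T @ ((h - residual) * (1 - h ** 2)) / n
            h = np.tanh(features @ w)
            alpha = (h @ residual) / (h @ h + 1e-12)   # closed-form output weight
            y_hat = y_hat + alpha * h
            features = np.column_stack([features, h])  # frozen unit feeds later units
        return y_hat

    # toy usage: learn a one-dimensional nonlinear mapping
    X = np.linspace(-1, 1, 100).reshape(-1, 1)
    y = np.sin(3 * X[:, 0])
    print("MSE:", np.mean((cascade_train(X, y) - y) ** 2))
    ```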

  16. High Performance Implementation of 3D Convolutional Neural Networks on a GPU.

    PubMed

    Lan, Qiang; Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie

    2017-01-01

    Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.
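
    The saving promised by the WMFA is easy to verify in one dimension. The sketch below (NumPy) applies the standard F(2,3) minimal filtering transforms, which compute two outputs of a 3-tap filter with 4 multiplications instead of 6; the same transform structure, tiled over 2D and 3D patches, is what the algorithm applies inside convolution layers.

    ```python
    import numpy as np

    # Winograd minimal filtering F(2,3): two outputs of a 3-tap filter from 4 inputs
    # using 4 elementwise multiplications instead of 6.
    BT = np.array([[1, 0, -1, 0],
                   [0, 1,  1, 0],
                   [0, -1, 1, 0],
                   [0, 1,  0, -1]], dtype=float)
    G = np.array([[1.0, 0.0, 0.0],
                  [0.5, 0.5, 0.5],
                  [0.5, -0.5, 0.5],
                  [0.0, 0.0, 1.0]])
    AT = np.array([[1, 1, 1, 0],
                   [0, 1, -1, -1]], dtype=float)

    def winograd_f23(d, g):
        """d: 4 input samples, g: 3 filter taps -> 2 outputs of the sliding-window product."""
        return AT @ ((G @ g) * (BT @ d))      # the elementwise product costs 4 multiplies

    rng = np.random.default_rng(0)
    d, g = rng.normal(size=4), rng.normal(size=3)
    direct = np.array([d[0:3] @ g, d[1:4] @ g])     # direct sliding-window computation
    print(np.allclose(winograd_f23(d, g), direct))  # True
    ```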

  17. High Performance Implementation of 3D Convolutional Neural Networks on a GPU

    PubMed Central

    Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie

    2017-01-01

    Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version. PMID:29250109

  18. Neural network for processing both spatial and temporal data with time based back-propagation

    NASA Technical Reports Server (NTRS)

    Villarreal, James A. (Inventor); Shelton, Robert O. (Inventor)

    1993-01-01

    Neural networks are computing systems modeled after the paradigm of the biological brain. For years, researchers using various forms of neural networks have attempted to model the brain's information processing and decision-making capabilities. Neural network algorithms have impressively demonstrated the capability of modeling spatial information. On the other hand, the application of parallel distributed models to the processing of temporal data has been severely restricted. The invention introduces a novel technique which adds the dimension of time to the well-known back-propagation neural network algorithm. In the space-time neural network disclosed herein, the synaptic weights between two artificial neurons (processing elements) are replaced with an adaptable-adjustable filter. Instead of a single synaptic weight, the invention provides a plurality of weights representing not only association, but also temporal dependencies. In this case, the synaptic weights are the coefficients to the adaptable digital filters. Novelty is believed to lie in the disclosure of a processing element and a network of the processing elements which are capable of processing temporal as well as spatial data.
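
    The core idea, replacing each scalar synaptic weight with a short adaptive filter, can be illustrated with a single layer. The sketch below (NumPy, a simplified illustration rather than the patented architecture) gives every input-to-unit connection K filter taps, so a unit's activation depends on a window of past inputs rather than only the current sample.

    ```python
    import numpy as np

    def fir_layer_response(x, W, activation=np.tanh):
        """One layer in which each synapse is a K-tap FIR filter rather than a single
        weight (a simplified sketch of the idea, not the patented architecture).
        x : (T, n_in) input time series; W : (n_out, n_in, K) filter taps per synapse."""
        T, n_in = x.shape
        n_out, _, K = W.shape
        y = np.zeros((T, n_out))
        for t in range(T):
            window = np.zeros((K, n_in))              # last K samples, zero-padded at start
            lo = max(0, t - K + 1)
            window[K - (t - lo + 1):] = x[lo:t + 1]
            recent_first = window[::-1]               # tap 0 acts on the current sample
            y[t] = activation(np.einsum('oik,ki->o', W, recent_first))
        return y

    # toy usage: 2 inputs, 3 units, 5-tap synaptic filters
    rng = np.random.default_rng(1)
    x = rng.normal(size=(50, 2))
    W = rng.normal(scale=0.3, size=(3, 2, 5))
    print(fir_layer_response(x, W).shape)             # (50, 3)
    ```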

  19. Higher-order neural network software for distortion invariant object recognition

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Spirkovska, Lilly

    1991-01-01

    The state-of-the-art in pattern recognition for such applications as automatic target recognition and industrial robotic vision relies on digital image processing. We present a higher-order neural network model and software which performs the complete feature extraction-pattern classification paradigm required for automatic pattern recognition. Using a third-order neural network, we demonstrate complete, 100 percent accurate invariance to distortions of scale, position, and in-plane rotation. In a higher-order neural network, feature extraction is built into the network, and does not have to be learned. Only the relatively simple classification step must be learned. This is key to achieving very rapid training. The training set is much smaller than with standard neural network software because the higher-order network only has to be shown one view of each object to be learned, not every possible view. The software and graphical user interface run on any Sun workstation. Results of the use of the neural software in autonomous robotic vision systems are presented. Such a system could have extensive application in robotic manufacturing.

  20. Predicting neural network firing pattern from phase resetting curve

    NASA Astrophysics Data System (ADS)

    Oprisan, Sorinel; Oprisan, Ana

    2007-04-01

    Autonomous neural networks called central pattern generators (CPG) are composed of endogenously bursting neurons and produce rhythmic activities, such as flying, swimming, walking, chewing, etc. Simplified CPGs for quadrupedal locomotion and swimming are modeled by a ring of neural oscillators such that the output of one oscillator constitutes the input for the subsequent neural oscillator. The phase response curve (PRC) theory discards the detailed conductance-based description of the component neurons of a network and reduces them to ``black boxes'' characterized by a transfer function, which tabulates the transient change in the intrinsic period of a neural oscillator subject to external stimuli. Based on open-loop PRC, we were able to successfully predict the phase-locked period and relative phase between neurons in a half-center network. We derived existence and stability criteria for heterogeneous ring neural networks that are in good agreement with experimental data.
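
    A minimal version of a PRC-based prediction, for a single oscillator driven by a periodic pulse train rather than the authors' ring networks, is sketched below. The hypothetical PRC gives the normalized phase delay produced by a pulse arriving at a given phase; iterating the stimulus-phase map locates the locked phase, and the map's slope indicates stability.

    ```python
    import numpy as np

    # Iterating the stimulus-phase map phi <- phi - PRC(phi) + Ts/T0 (mod 1) for a
    # pulse-forced oscillator; a stable fixed point corresponds to 1:1 phase locking.
    T0, Ts = 1.0, 1.1            # intrinsic period and forcing period

    def prc(phi):
        """Hypothetical PRC: normalized phase delay caused by a pulse arriving at
        phase phi (negative values are phase advances)."""
        return 0.2 * np.sin(2 * np.pi * phi)

    phi = 0.3
    for _ in range(200):
        phi = (phi - prc(phi) + Ts / T0) % 1.0

    eps = 1e-6
    slope = 1 - (prc(phi + eps) - prc(phi - eps)) / (2 * eps)   # map slope at the fixed point
    print(f"locked stimulus phase ~ {phi:.3f}; map slope {slope:.3f} (stable if |slope| < 1)")
    ```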

  1. Experiments in Neural-Network Control of a Free-Flying Space Robot

    NASA Technical Reports Server (NTRS)

    Wilson, Edward

    1995-01-01

    Four important generic issues are identified and addressed in some depth in this thesis as part of the development of an adaptive neural network based control system for an experimental free flying space robot prototype. The first issue concerns the importance of true system level design of the control system. A new hybrid strategy is developed here, in depth, for the beneficial integration of neural networks into the total control system. A second important issue in neural network control concerns incorporating a priori knowledge into the neural network. In many applications, it is possible to get a reasonably accurate controller using conventional means. If this prior information is used purposefully to provide a starting point for the optimizing capabilities of the neural network, it can provide much faster initial learning. In a step towards addressing this issue, a new generic Fully Connected Architecture (FCA) is developed for use with backpropagation. A third issue is that neural networks are commonly trained using a gradient based optimization method such as backpropagation; but many real world systems have Discrete Valued Functions (DVFs) that do not permit gradient based optimization. One example is the on-off thrusters that are common on spacecraft. A new technique is developed here that now extends backpropagation learning for use with DVFs. The fourth issue is that the speed of adaptation is often a limiting factor in the implementation of a neural network control system. This issue has been strongly resolved in the research by drawing on the above new contributions.

  2. Modified-hybrid optical neural network filter for multiple object recognition within cluttered scenes

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.

    2009-08-01

    Motivated by the non-linear interpolation and generalization abilities of the hybrid optical neural network filter between the reference and non-reference images of the true-class object, we designed the modified-hybrid optical neural network filter. We applied an optical mask to the hybrid optical neural network filter's input. The mask was built with the constant weight connections of a randomly chosen image included in the training set. The resulting design of the modified-hybrid optical neural network filter is optimized to perform best in cluttered scenes of the true-class object. Due to the shift-invariance properties inherited from its correlator unit, the filter can accommodate multiple objects of the same class to be detected within an input cluttered image. Additionally, the architecture of the neural network unit of the general hybrid optical neural network filter allows the recognition of multiple objects of different classes within the input cluttered image by modifying the output layer of the unit. We test the modified-hybrid optical neural network filter for the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. In a single pass over the input data the filter exhibits out-of-plane rotation invariance, shift invariance and good clutter tolerance. It is able to successfully detect and correctly classify the true-class objects within background clutter for which there has been no previous training.

  3. A case for spiking neural network simulation based on configurable multiple-FPGA systems.

    PubMed

    Yang, Shufan; Wu, Qiang; Li, Renfa

    2011-09-01

    Recent neuropsychological research has begun to reveal that neurons encode information in the timing of spikes. Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems. Software simulation of spiking neural networks is unable to rapidly generate output spikes for large-scale networks. An alternative approach, hardware implementation of such a system, makes it possible to generate independent spikes precisely and to output spike waves simultaneously in real time, provided that the spiking neural network can take full advantage of the inherent parallelism of the hardware. In this work we introduce a configurable FPGA-oriented hardware platform for spiking neural network simulation. We aim to use this platform to combine the speed of dedicated hardware with the programmability of software, so that it might allow neuroscientists to put together sophisticated computational experiments with their own models. A feed-forward hierarchical network is developed as a case study to describe the operation of biological neural systems (such as orientation selectivity in the visual cortex) and computational models of such systems. This model demonstrates how a feed-forward neural network constructs the circuitry required for orientation selectivity and provides a platform for reaching a deeper understanding of the primate visual system. In the future, larger-scale models based on this framework can be used to replicate the actual architecture of the visual cortex, leading to more detailed predictions and insights into visual perception phenomena.

  4. Using neural networks for prediction of air pollution index in industrial city

    NASA Astrophysics Data System (ADS)

    Rahman, P. A.; Panchenko, A. A.; Safarov, A. M.

    2017-10-01

    This paper is dedicated to the use of artificial neural networks for ecological prediction of the state of the atmospheric air of an industrial city, enabling prompt environmental decisions. The paper also describes the development of two types of neural-network prediction models for determining the air pollution index: a temporal model (a short-term forecast of the pollutant content in the air for the coming days) and a spatial model (a forecast of the atmospheric pollution index at any point of the city). The stages of development of the neural network models are briefly reviewed and their parameters are described. An assessment of the adequacy of the prediction models, based on the correlation coefficient between the output and reference data, is also provided. Moreover, because the «neural network code» of the proposed models is difficult for ordinary users to interpret, software implementations allowing practical use of the neural network models are also offered. It is established that the obtained neural network models provide a sufficiently reliable forecast, which means that they are an effective tool for analyzing and predicting the dynamics of air pollution in an industrial city. Thus, this work addresses the pressing problem of forecasting the atmospheric air pollution index in industrial cities using neural network models.

  5. Deep learning for computational chemistry.

    PubMed

    Goh, Garrett B; Hodas, Nathan O; Vishnu, Abhinav

    2017-06-15

    The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on multilayer neural networks. Within the last few years, we have seen the transformative impact of deep learning in many domains, particularly in speech recognition and computer vision, to the extent that the majority of expert practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties that distinguish them from traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight its ubiquity and broad applicability to a wide range of challenges in the field, including quantitative structure-activity relationships, virtual screening, protein structure prediction, quantum chemistry, materials design, and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance against non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the "glass ceiling" expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a valuable tool for computational chemistry. © 2017 Wiley Periodicals, Inc.

  6. Deep learning for computational chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goh, Garrett B.; Hodas, Nathan O.; Vishnu, Abhinav

    The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry. Yet almost two decades later, we are now seeing a resurgence of interest in deep learning, a machine learning algorithm based on “deep” neural networks. Within the last few years, we have seen the transformative impact of deep learning in the computer science domain, notably in speech recognition and computer vision, to the extent that the majority of practitioners in those fields are now regularly eschewing prior established models in favor of deep learning models. In this review, we provide an introductory overview into the theory of deep neural networks and their unique properties as compared to traditional machine learning algorithms used in cheminformatics. By providing an overview of the variety of emerging applications of deep neural networks, we highlight its ubiquity and broad applicability to a wide range of challenges in the field, including QSAR, virtual screening, protein structure modeling, QM calculations, materials synthesis and property prediction. In reviewing the performance of deep neural networks, we observed a consistent outperformance against non-neural-network state-of-the-art models across disparate research topics, and deep neural network-based models often exceeded the “glass ceiling” expectations of their respective tasks. Coupled with the maturity of GPU-accelerated computing for training deep neural networks and the exponential growth of chemical data on which to train these networks, we anticipate that deep learning algorithms will be a useful tool and may grow into a pivotal role for various challenges in the computational chemistry field.

  7. Artificial neural network modeling and optimization of ultrahigh pressure extraction of green tea polyphenols.

    PubMed

    Xi, Jun; Xue, Yujing; Xu, Yinxiang; Shen, Yuhong

    2013-11-01

    In this study, the ultrahigh pressure extraction of green tea polyphenols was modeled and optimized by a three-layer artificial neural network. A feed-forward neural network trained with an error back-propagation algorithm was used to evaluate the effects of pressure, liquid/solid ratio and ethanol concentration on the total phenolic content of green tea extracts. The neural network coupled with genetic algorithms was also used to optimize the conditions needed to obtain the highest yield of tea polyphenols. The optimal architecture of the artificial neural network model was a feed-forward neural network with three input neurons, one hidden layer of eight neurons and an output layer with a single neuron. The trained network gave a minimum MSE of 0.03 and a maximum R(2) of 0.9571, which implied a good agreement between the predicted value and the actual value, and confirmed a good generalization of the network. Based on the combination of the neural network and genetic algorithms, the optimum extraction conditions for the highest yield of green tea polyphenols were determined as follows: 498.8 MPa for pressure, 20.8 mL/g for liquid/solid ratio and 53.6% for ethanol concentration. The total phenolic content of the actual measurement under the optimum predicted extraction conditions was 582.4 ± 0.63 mg/g DW, which was well matched with the predicted value (597.2 mg/g DW). This suggests that the artificial neural network model described in this work is an efficient quantitative tool to predict the extraction efficiency of green tea polyphenols. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
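
    The 3-8-1 architecture described above is small enough to sketch directly. The example below (NumPy, trained on a synthetic data set that merely stands in for the extraction measurements) uses three inputs for pressure, liquid/solid ratio, and ethanol concentration, eight tanh hidden neurons, a single linear output neuron, and plain back-propagation of the mean-squared error.

    ```python
    import numpy as np

    # 3-8-1 feed-forward network trained with plain back-propagation. The synthetic
    # data below merely stand in for the pressure / liquid-solid ratio / ethanol
    # measurements; they are not the paper's data.
    rng = np.random.default_rng(0)
    X = rng.uniform([100, 10, 30], [500, 30, 90], size=(200, 3))
    y = (0.8 * X[:, 0] + 8 * X[:, 1] + 2 * X[:, 2]) / 1000 + 0.05 * rng.normal(size=200)
    Xn = (X - X.mean(0)) / X.std(0)                   # normalise inputs

    W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
    lr = 0.05
    for epoch in range(2000):
        h = np.tanh(Xn @ W1 + b1)                     # hidden layer (8 tanh neurons)
        out = (h @ W2 + b2).ravel()                   # single linear output neuron
        err = out - y
        dW2 = h.T @ err[:, None] / len(y); db2 = err.mean(keepdims=True)
        dh = err[:, None] @ W2.T * (1 - h ** 2)       # back-propagated hidden error
        dW1 = Xn.T @ dh / len(y); db1 = dh.mean(0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
    print("final training MSE:", np.mean(err ** 2))
    ```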

  8. Hidden Markov Models and Neural Networks for Fault Detection in Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic

    1994-01-01

    None given. (From conclusion): Neural networks plus Hidden Markov Models (HMM) can provide excellent detection and false alarm rate performance in fault detection applications. Modified models allow for novelty detection. The paper also covers some key contributions of the neural network model, and application status.

  9. Rapid Simulation of Blast Wave Propagation in Built Environments Using Coarse-Grain Based Intelligent Modeling Methods

    DTIC Science & Technology

    2011-04-01

    experiments was performed using an artificial neural network to try to capture the nonlinearities. The radial Gaussian artificial neural network system..."Modeling Blast-Wave Propagation using Artificial Neural Network Methods", in International Journal of Advanced Engineering Informatics, Elsevier

  10. Applications of Temporal Graph Metrics to Real-World Networks

    NASA Astrophysics Data System (ADS)

    Tang, John; Leontiadis, Ilias; Scellato, Salvatore; Nicosia, Vincenzo; Mascolo, Cecilia; Musolesi, Mirco; Latora, Vito

    Real-world networks exhibit rich temporal information: friends are added and removed over time in online social networks; the seasons dictate the predator-prey relationship in food webs; and the propagation of a virus depends on the network of human contacts throughout the day. Recent studies have demonstrated that static network analysis is perhaps unsuitable for the study of real-world networks, since static paths ignore time order, which, in turn, results in static shortest paths overestimating available links and underestimating their true corresponding lengths. Temporal extensions to centrality and efficiency metrics based on temporal shortest paths have also been proposed. Firstly, we analyse the roles of key individuals of a corporate network ranked according to temporal centrality within the context of a bankruptcy scandal; secondly, we present how such temporal metrics can be used to study the robustness of temporal networks in the presence of random errors and intelligent attacks; thirdly, we study containment schemes for mobile phone malware which can spread via short-range radio, similar to biological viruses; finally, we study how the temporal network structure of human interactions can be exploited to effectively immunise human populations. Through these applications we demonstrate that temporal metrics provide a more accurate and effective analysis of real-world networks compared to their static counterparts.
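
    The difference between static and temporal paths is easy to demonstrate on a toy contact sequence. In the sketch below (hypothetical contacts), a static path a-b-c exists, but because b contacts c before a contacts b, information from a can only reach c through d when time order is respected.

    ```python
    # Hypothetical contact list: each tuple (u, v, t) means u can pass information
    # to v at time t.
    contacts = [("a", "b", 3), ("b", "c", 1), ("a", "d", 1), ("d", "c", 2)]

    def earliest_arrival(contacts, source):
        """Earliest time each node is reachable from `source` when time order is respected."""
        arrival = {source: 0}
        for u, v, t in sorted(contacts, key=lambda c: c[2]):   # scan contacts in time order
            if arrival.get(u, float("inf")) <= t and t < arrival.get(v, float("inf")):
                arrival[v] = t
        return arrival

    print(earliest_arrival(contacts, "a"))
    # {'a': 0, 'd': 1, 'c': 2, 'b': 3}: the static route a-b-c is unusable because the
    # b-c contact (t=1) happens before the a-b contact (t=3); c is reached only via d.
    ```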

  11. Neural electrical activity and neural network growth.

    PubMed

    Gafarov, F M

    2018-05-01

    The development of the central and peripheral neural systems depends in part on the emergence of the correct functional connectivity in their input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement for building a fully functional circuit. However, a number of experimental results obtained recently show that neuronal electrical activity plays an important role in establishing initial interneuronal connections. Nevertheless, these processes are rather difficult to study experimentally, due to the absence of a theoretical description and of quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The theoretical description incorporates a closed-loop growth model in which the neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole to explore the relationship between developing connectivity and activity patterns. The model developed in this work will allow us to develop new experimental techniques for studying and quantifying the influence of neuronal activity on growth processes in neural networks and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Limitation of degree information for analyzing the interaction evolution in online social networks

    NASA Astrophysics Data System (ADS)

    Shang, Ke-Ke; Yan, Wei-Sheng; Xu, Xiao-Ke

    2014-04-01

    Previously, many studies of online social networks simply analyzed the static topology, in which, once a friend relationship is established, the links and nodes do not disappear; but this kind of static topology may not accurately reflect temporal interactions on online social services. In this study, we define four types of users and interactions in the interaction (dynamic) network. We found that active, disappeared, new and super nodes (users) have markedly different strength distribution properties, and this result can also be revealed by the degree characteristics of the unweighted interaction and friendship (static) networks. However, the active, disappeared, new and super links (interactions) can only be revealed by the strength distribution in the weighted interaction network. This result indicates the limitation of static topology data for analyzing social network evolution. In addition, our study uncovers approximately stable statistics for the dynamic social network, in which there is large variation in users and interaction intensity. Our findings not only verify the correctness of our definitions, but also help in studying customer churn and evaluating the commercial value of valuable customers in online social networks.

  13. Two-Stage Approach to Image Classification by Deep Neural Networks

    NASA Astrophysics Data System (ADS)

    Ososkov, Gennady; Goncharov, Pavel

    2018-02-01

    The paper demonstrates the advantages of deep learning networks over ordinary neural networks through their comparative application to image classification. An autoassociative neural network is used as a standalone autoencoder for prior extraction of the most informative features of the input data before the neural networks are compared as classifiers. The main effort in working with deep learning networks is spent on the painstaking work of optimizing the structures of those networks and their components, such as activation functions and weights, as well as the procedures for minimizing their loss function, to improve their performance and speed up their learning time. It is also shown that deep autoencoders develop a remarkable ability to denoise images after being specially trained. Convolutional Neural Networks are also used to solve a topical problem of protein genetics using the example of durum wheat classification. Results of our comparative study demonstrate the clear advantage of the deep networks, as well as the denoising power of the autoencoders. In our work we use both GPU and cloud services to speed up the calculations.

  14. Generalised Transfer Functions of Neural Networks

    NASA Astrophysics Data System (ADS)

    Fung, C. F.; Billings, S. A.; Zhang, H.

    1997-11-01

    When artificial neural networks are used to model non-linear dynamical systems, the system structure, which can be extremely useful for analysis and design, is buried within the network architecture. In this paper, explicit expressions for the frequency response or generalised transfer functions of both feedforward and recurrent neural networks are derived in terms of the network weights. The derivation of the algorithm is established on the basis of the Taylor series expansion of the activation functions used in a particular neural network. This leads to a representation which is equivalent to the non-linear recursive polynomial model and enables the derivation of the transfer functions to be based on the harmonic expansion method. By mapping the neural network into the frequency domain, information about the structure of the underlying non-linear system can be recovered. Numerical examples are included to demonstrate the application of the new algorithm. These examples show that the frequency response functions appear to be highly sensitive to the network topology and training, and that the time domain properties fail to reveal deficiencies in the trained network structure.

  15. Introduction to Concepts in Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  16. Fuzzy and neural control

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1992-01-01

    Fuzzy logic and neural networks provide new methods for designing control systems. Fuzzy logic controllers do not require a complete analytical model of a dynamic system and can provide knowledge-based heuristic controllers for ill-defined and complex systems. Neural networks can be used for learning control. In this chapter, we discuss hybrid methods using fuzzy logic and neural networks which can start with an approximate control knowledge base and refine it through reinforcement learning.

  17. Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene.

    PubMed

    Li, Jun; Mei, Xue; Prokhorov, Danil; Tao, Dacheng

    2017-03-01

    Hierarchical neural networks have been shown to be effective in learning representative image features and recognizing object classes. However, most existing networks combine the low/middle level cues for classification without accounting for any spatial structures. For applications such as understanding a scene, how the visual cues are spatially distributed in an image becomes essential for successful analysis. This paper extends the framework of deep neural networks by accounting for the structural cues in the visual signals. In particular, two kinds of neural networks have been proposed. First, we develop a multitask deep convolutional network, which simultaneously detects the presence of the target and the geometric attributes (location and orientation) of the target with respect to the region of interest. Second, a recurrent neuron layer is adopted for structured visual detection. The recurrent neurons can deal with the spatial distribution of visible cues belonging to an object whose shape or structure is difficult to explicitly define. Both the networks are demonstrated by the practical task of detecting lane boundaries in traffic scenes. The multitask convolutional neural network provides auxiliary geometric information to help the subsequent modeling of the given lane structures. The recurrent neural network automatically detects lane boundaries, including those areas containing no marks, without any explicit prior knowledge or secondary modeling.

  18. Neural network based architectures for aerospace applications

    NASA Technical Reports Server (NTRS)

    Ricart, Richard

    1987-01-01

    A brief history of the field of neural networks research is given and some simple concepts are described. In addition, some neural network based avionics research and development programs are reviewed. The need for the United States Air Force and NASA to assume a leadership role in supporting this technology is stressed.

  19. Improved Adjoint-Operator Learning For A Neural Network

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1995-01-01

    Improved method of adjoint-operator learning reduces amount of computation and associated computational memory needed to make electronic neural network learn temporally varying pattern (e.g., to recognize moving object in image) in real time. Method extension of method described in "Adjoint-Operator Learning for a Neural Network" (NPO-18352).

  20. Adaptive Neurons For Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In improved mathematical model of neural-network processor, temperature of neurons (in addition to connection strengths, also called weights, of synapses) varied during supervised-learning phase of operation according to mathematical formalism and not heuristic rule. Evidence that biological neural networks also process information at neuronal level.

  1. The use of neural networks for approximation of nuclear data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korovin, Yu. A.; Maksimushkina, A. V., E-mail: AVMaksimushkina@mephi.ru

    2015-12-15

    The article discusses the possibility of using neural networks for approximation or reconstruction of data such as the reaction cross sections. The quality of the approximation using fitting criteria is also evaluated. The activity of materials under irradiation is calculated from data obtained using neural networks.

  2. Neural network applications in telecommunications

    NASA Technical Reports Server (NTRS)

    Alspector, Joshua

    1994-01-01

    Neural network capabilities include automatic and organized handling of complex information, quick adaptation to continuously changing environments, nonlinear modeling, and parallel implementation. This viewgraph presentation presents Bellcore work on applications, learning chip computational function, learning system block diagram, neural network equalization, broadband access control, calling-card fraud detection, software reliability prediction, and conclusions.

  3. Machinery Monitoring and Diagnostics Using Pseudo Wigner-Ville Distribution and Backpropagation Neural Network

    DTIC Science & Technology

    1993-09-01

    frequency, which when used as an input to an artificial neural network will aid in the detection of the location and severity of machinery faults...Research is presented where the union of an artificial neural network, utilizing the highly successful backpropagation paradigm, and the pseudo Wigner

  4. Neural Networks for Handwritten English Alphabet Recognition

    NASA Astrophysics Data System (ADS)

    Perwej, Yusuf; Chaturvedi, Ashish

    2011-04-01

    This paper demonstrates the use of neural networks for developing a system that can recognize hand-written English alphabets. In this system, each English alphabet is represented by binary values that are used as input to a simple feature extraction system, whose output is fed to our neural network system.

  5. Using Neural Networks to Predict MBA Student Success

    ERIC Educational Resources Information Center

    Naik, Bijayananda; Ragothaman, Srinivasan

    2004-01-01

    Predicting MBA student performance for admission decisions is crucial for educational institutions. This paper evaluates the ability of three different models--neural networks, logit, and probit to predict MBA student performance in graduate programs. The neural network technique was used to classify applicants into successful and marginal student…

  6. Neuromorphic Computing for Temporal Scientific Data Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuman, Catherine D.; Potok, Thomas E.; Young, Steven

    In this work, we apply a spiking neural network model and an associated memristive neuromorphic implementation to an application in classifying temporal scientific data. We demonstrate that the spiking neural network model achieves comparable results to a previously reported convolutional neural network model, with significantly fewer neurons and synapses required.

  7. Discrete-time BAM neural networks with variable delays

    NASA Astrophysics Data System (ADS)

    Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi

    2007-07-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional and linear matrix inequality (LMI) techniques, we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.

  8. New results for global exponential synchronization in neural networks via functional differential inclusions.

    PubMed

    Wang, Dongshu; Huang, Lihong; Tang, Longkun

    2015-08-01

    This paper is concerned with the synchronization dynamics of a class of delayed neural networks with discontinuous neuron activations. Continuous and discontinuous state feedback controllers are designed such that the neural network model can achieve exponential complete synchronization, using functional differential inclusion theory, the Lyapunov functional method and inequality techniques. The new results proposed here are very easy to verify and are also applicable to neural networks with continuous activations. Finally, some numerical examples show the applicability and effectiveness of our main results.

  9. Vectorized algorithms for spiking neural network simulation.

    PubMed

    Brette, Romain; Goodman, Dan F M

    2011-06-01

    High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
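
    The flavor of such vectorized simulation can be shown with a leaky integrate-and-fire population updated entirely with array operations (an illustrative sketch in the spirit of the approach, not Brian's actual implementation): the membrane update, threshold test, reset, and spike propagation are each a single NumPy expression over all neurons.

    ```python
    import numpy as np

    # Vectorized leaky integrate-and-fire population: integration, threshold test,
    # reset and spike propagation are each a single array operation over all neurons.
    N, dt, steps = 1000, 1e-4, 5000
    tau, v_rest, v_thresh, v_reset, drive = 20e-3, 0.0, 1.0, 0.0, 1.2
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.02, size=(N, N)) * (rng.random((N, N)) < 0.05)  # sparse weights
    v = rng.uniform(v_rest, v_thresh, size=N)

    spike_counts = np.zeros(N)
    for _ in range(steps):
        v += dt / tau * (v_rest - v + drive)          # leaky integration, all neurons at once
        spiked = v >= v_thresh                        # boolean spike vector
        v[spiked] = v_reset
        v += W[:, spiked].sum(axis=1)                 # deliver all spikes in one slice-and-sum
        spike_counts += spiked
    print("mean firing rate (Hz):", spike_counts.mean() / (steps * dt))
    ```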

  10. Representation of the Characteristics of Piezoelectric Fiber Composites with Neural Networks

    NASA Astrophysics Data System (ADS)

    Yapici, A.; Bickraj, K.; Yenilmez, A.; Li, M.; Tansel, I. N.; Martin, S. A.; Pereira, C. M.; Roth, L. E.

    2007-03-01

    Ideal sensors for the future should be economical, efficient, highly intelligent, and capable of obtaining their operation power from the environment. The use of piezoelectric fiber composites coupled with a low power microprocessor and backpropagation type neural networks is proposed for the development of a simple sensor to estimate the characteristics of harmonic forces. Three neural networks were used for the estimation of amplitude, gain and variation of the load in the time domain. The average estimation errors of the neural networks were less than 8% in all of the studied cases.

  11. Forecasting the mortality rates of Indonesian population by using neural network

    NASA Astrophysics Data System (ADS)

    Safitri, Lutfiani; Mardiyati, Sri; Rahim, Hendrisman

    2018-03-01

    A model that can represent the problem is required for forecasting. One of the models acknowledged by the actuarial community for forecasting mortality rates is the Lee-Carter model. Here, the Lee-Carter model supported by a neural network is used to forecast mortality in Indonesia. The type of neural network used is a feedforward neural network trained with the backpropagation algorithm, implemented in the Python programming language. The final result of this study is the forecast mortality rate of Indonesia for the next few years.
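
    A rough sketch of how such a hybrid can look is given below, under the assumption of the classical Lee-Carter decomposition log m(x,t) = a_x + b_x k_t fitted by SVD, with the period index k_t then forecast by a small feedforward network trained with backpropagation on lagged values; the synthetic mortality surface and all network sizes are illustrative assumptions, not the study's data or code.

```python
# Hedged sketch: Lee-Carter decomposition plus a tiny backpropagation network
# that forecasts the period index k_t from its own lagged values.
import numpy as np

rng = np.random.default_rng(2)
ages, years = 20, 40
log_m = (-8 + 0.08 * np.arange(ages)[:, None]
         - 0.02 * np.arange(years)[None, :]
         + rng.normal(0, 0.02, (ages, years)))      # synthetic log mortality surface

a_x = log_m.mean(axis=1)                            # age pattern
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()                       # age response (normalised)
k_t = s[0] * Vt[0] * U[:, 0].sum()                  # period index to forecast

lag = 3                                             # predict k_t from its last 3 values
X = np.stack([k_t[i:i + lag] for i in range(len(k_t) - lag)])
y = k_t[lag:]
W1 = rng.normal(0, 0.1, (lag, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1));   b2 = np.zeros(1)

for _ in range(2000):                               # plain backpropagation
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    gW2 = h.T @ err[:, None] / len(y); gb2 = err.mean(keepdims=True)
    dh = err[:, None] * W2.T * (1 - h**2)
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(0)
    for w, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        w -= 0.01 * g
```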

  12. Calibration of a shock wave position sensor using artificial neural networks

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Weiland, Kenneth E.

    1993-01-01

    This report discusses the calibration of a shock wave position sensor. The position sensor works by using artificial neural networks to map cropped CCD frames of the shadows of the shock wave into the value of the shock wave position. This project was done as a tutorial demonstration of method and feasibility. It used a laboratory shadowgraph, nozzle, and commercial neural network package. The results were quite good, indicating that artificial neural networks can be used efficiently to automate the semi-quantitative applications of flow visualization.

  13. Predicting cloud-to-ground lightning with neural networks

    NASA Technical Reports Server (NTRS)

    Barnes, Arnold A., Jr.; Frankel, Donald; Draper, James Stark

    1991-01-01

    A neural network is being trained to predict lightning at Cape Canaveral for periods up to two hours in advance. Inputs consist of ground based field mill data, meteorological tower data, lightning location data, and radiosonde data. High values of the field mill data and rapid changes in the field mill data, offset in time, provide the forecasts or desired output values used to train the neural network through backpropagation. Examples of input data are shown and an example of data compression using a hidden layer in the neural network is discussed.

  14. The application of neural network PID controller to control the light gasoline etherification

    NASA Astrophysics Data System (ADS)

    Cheng, Huanxin; Zhang, Yimin; Kong, Lingling; Meng, Xiangyong

    2017-06-01

    Light gasoline etherification technology can effectively improve the quality of gasoline and is environmentally friendly and economical. By combining a BP neural network with PID control and using the BP neural network's self-learning ability for online parameter tuning, this method optimizes the parameters of the PID controller and applies it to FCC gas flow control in order to control the final product's heavy oil concentration. Finally, MATLAB simulation shows that PID control based on a BP neural network has a better control effect than traditional PID control.
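
    A common textbook form of this idea (not necessarily the paper's exact controller) is sketched below: a small network maps the recent errors to the gains Kp, Ki and Kd of an incremental PID law, and its weights are tuned online from the tracking error, with the unknown plant sensitivity dy/du approximated by its sign. The first-order plant, setpoint and layer sizes are assumptions.

```python
# Hedged sketch of a BP-network tuned PID loop on a toy first-order plant.
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.normal(0, 0.3, (3, 5)); W2 = rng.normal(0, 0.3, (5, 3))
lr, setpoint, y_plant, u = 0.05, 1.0, 0.0, 0.0
e_prev, e_prev2 = 0.0, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(300):
    e = setpoint - y_plant
    x = np.array([e, e - e_prev, e - 2 * e_prev + e_prev2])    # e, delta e, delta^2 e
    h = np.tanh(x @ W1)
    k = sigmoid(h @ W2)                                        # Kp, Ki, Kd in (0, 1)
    du = k[0] * x[1] + k[1] * x[0] + k[2] * x[2]               # incremental PID law
    u += du
    y_plant = 0.9 * y_plant + 0.1 * u                          # toy first-order plant
    # Online update of the gain network: minimise e^2, approximating dy/du by +1.
    dk = e * np.array([x[1], x[0], x[2]]) * k * (1 - k)
    W2 += lr * np.outer(h, dk)
    dh = (W2 @ dk) * (1 - h**2)
    W1 += lr * np.outer(x, dh)
    e_prev2, e_prev = e_prev, e
```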

  15. A novel joint-processing adaptive nonlinear equalizer using a modular recurrent neural network for chaotic communication systems.

    PubMed

    Zhao, Haiquan; Zeng, Xiangping; Zhang, Jiashu; Liu, Yangguang; Wang, Xiaomin; Li, Tianrui

    2011-01-01

    To eliminate nonlinear channel distortion in chaotic communication systems, a novel joint-processing adaptive nonlinear equalizer based on a pipelined recurrent neural network (JPRNN) is proposed, using a modified real-time recurrent learning (RTRL) algorithm. Furthermore, an adaptive amplitude RTRL algorithm is adopted to overcome the deteriorating effect introduced by the nesting process. Computer simulations illustrate that the proposed equalizer outperforms the pipelined recurrent neural network (PRNN) and recurrent neural network (RNN) equalizers. Copyright © 2010 Elsevier Ltd. All rights reserved.

  16. Nutrient Stress Detection in Corn Using Neural Networks and AVIRIS Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Estep, Lee

    2001-01-01

    AVIRIS image cube data has been processed for the detection of nutrient stress in corn by both known, ratio-type algorithms and by trained neural networks. The USDA Shelton, NE, ARS Variable Rate Nitrogen Application (VRAT) experimental farm was the site used in the study. Upon application of ANOVA and Dunnett multiple comparison tests on the outcome of both the neural network processing and the ratio-type algorithm results, it was found that the neural network methodology provides a better overall capability to separate nutrient-stressed crops from in-field controls.

  17. Constructive autoassociative neural network for facial recognition.

    PubMed

    Fernandes, Bruno J T; Cavalcanti, George D C; Ren, Tsang I

    2014-01-01

    Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature.

  18. Global asymptotic stability to a generalized Cohen-Grossberg BAM neural networks of neutral type delays.

    PubMed

    Zhang, Zhengqiu; Liu, Wenbin; Zhou, Dongming

    2012-01-01

    In this paper, we first discuss the existence of a unique equilibrium point of a generalized Cohen-Grossberg BAM neural network with neutral-type delays by means of homeomorphism theory and an inequality technique. Then, by applying the existence result of an equilibrium point and constructing a Lyapunov functional, we study the global asymptotic stability of the equilibrium solution to the above Cohen-Grossberg BAM neural network of neutral type. In our results, the boundedness hypothesis on the activation functions assumed in existing papers on Cohen-Grossberg neural networks of neutral type is removed. Finally, we give an example to demonstrate the validity of our global asymptotic stability result for the above neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Neural networks for self-learning control systems

    NASA Technical Reports Server (NTRS)

    Nguyen, Derrick H.; Widrow, Bernard

    1990-01-01

    It is shown how a neural network can learn of its own accord to control a nonlinear dynamic system. An emulator, a multilayered neural network, learns to identify the system's dynamic characteristics. The controller, another multilayered neural network, next learns to control the emulator. The self-trained controller is then used to control the actual dynamic system. The learning process continues as the emulator and controller improve and track the physical process. An example is given to illustrate these ideas. The 'truck backer-upper,' a neural network controller that steers a trailer truck while the truck is backing up to a loading dock, is demonstrated. The controller is able to guide the truck to the dock from almost any initial position. The technique explored should be applicable to a wide variety of nonlinear control problems.
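
    The two-stage idea can be sketched on a much simpler system than the truck backer-upper; the toy one-dimensional plant, network sizes and learning rates below are assumptions, and only the structure (learn an emulator of the plant, then train the controller by backpropagating the goal error through the frozen emulator) follows the abstract.

```python
# Simplified emulator/controller sketch on a toy 1-D plant (not the truck example).
import numpy as np

rng = np.random.default_rng(4)

def plant(s, a):                              # "unknown" nonlinear plant
    return 0.98 * s + 0.1 * np.tanh(a)

# Stage 1: train the emulator f(s, a) ~ next state on random transitions.
We = rng.normal(0, 0.3, (2, 16)); Ve = rng.normal(0, 0.3, (16, 1))
for _ in range(5000):
    s, a = rng.uniform(-2, 2), rng.uniform(-3, 3)
    h = np.tanh(np.array([s, a]) @ We)
    err = float(h @ Ve) - plant(s, a)
    Ve -= 0.05 * err * h[:, None]
    We -= 0.05 * err * np.outer([s, a], Ve.ravel() * (1 - h**2))

# Stage 2: train the controller to drive the state to the goal, backpropagating
# through the frozen emulator instead of the real plant.
Wc = rng.normal(0, 0.3, (1, 16)); Vc = rng.normal(0, 0.3, (16, 1))
goal = 0.0
for _ in range(5000):
    s = rng.uniform(-2, 2)
    hc = np.tanh(np.array([s]) @ Wc)
    a = float(hc @ Vc)                        # controller action
    he = np.tanh(np.array([s, a]) @ We)
    err = float(he @ Ve) - goal               # predicted miss of the goal
    ds_da = float((We[1] * (1 - he**2)) @ Ve) # sensitivity through the emulator
    da = err * ds_da
    Vc -= 0.05 * da * hc[:, None]
    Wc -= 0.05 * da * np.outer([s], Vc.ravel() * (1 - hc**2))
```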

  20. Multidisciplinary Design Optimization for Aeropropulsion Engines and Solid Modeling/Animation via the Integrated Force Method

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The grant closure report is organized in the following chapters: the first chapter describes the two research areas, design optimization and solid mechanics; ten journal publications are listed in the second chapter; five highlights are the subject matter of chapter three. CHAPTER 1. The Design Optimization Test Bed CometBoards. CHAPTER 2. Solid Mechanics: Integrated Force Method of Analysis. CHAPTER 3. Five Highlights: Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft. Neural Network and Regression Soft Model Extended for PX-300 Aircraft Engine. Engine with Regression and Neural Network Approximators Designed. Cascade Optimization Strategy with Neural Network and Regression Approximations Demonstrated on a Preliminary Aircraft Engine Design. Neural Network and Regression Approximations Used in Aircraft Design.

  1. Discrete-time neural network for fast solving large linear L1 estimation problems and its application to image restoration.

    PubMed

    Xia, Youshen; Sun, Changyin; Zheng, Wei Xing

    2012-05-01

    There is growing interest in solving linear L1 estimation problems because of the sparsity of the solution and its robustness against non-Gaussian noise. This paper proposes a discrete-time neural network which can solve large linear L1 estimation problems quickly. The proposed neural network has a fixed computational step length and is proved to be globally convergent to an optimal solution. Then, the proposed neural network is efficiently applied to image restoration. Numerical results show that the proposed neural network is not only efficient in solving degenerate problems resulting from the nonunique solutions of the linear L1 estimation problems but also requires much less computational time than related algorithms in solving both linear L1 estimation and image restoration problems.

  2. Control of autonomous robot using neural networks

    NASA Astrophysics Data System (ADS)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and the generation and filtering of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify the models of autonomous robot behavior, a set of experiments and evaluation criteria was created. The speed of each motor was adjusted by the controlling neural network according to the situation in which the robot found itself.

  3. An Expedient Study on Back-Propagation (BPN) Neural Networks for Modeling Automated Evaluation of the Answers and Progress of Deaf Students Who Possess Basic Knowledge of the English Language and Computer Skills

    NASA Astrophysics Data System (ADS)

    Vrettaros, John; Vouros, George; Drigas, Athanasios S.

    This article studies the expediency of using neural network technology and the development of back-propagation network (BPN) models for modeling automated evaluation of the answers and progress of deaf students who possess basic knowledge of the English language and computer skills, within a virtual e-learning environment. The performance of the developed neural models is evaluated with the correlation factor between the neural networks' response values and the real data, as well as the percentage error between the neural networks' estimated values and the real data, both during the training process and afterwards with unknown data that were not used in training.

  4. Egg production forecasting: Determining efficient modeling approaches.

    PubMed

    Ahmad, H A

    2011-12-01

    Several mathematical or statistical and artificial intelligence models were developed to compare egg production forecasts in commercial layers. Initial data for these models were collected from a comparative layer trial on commercial strains conducted at the Poultry Research Farms, Auburn University. Simulated data were produced to represent new scenarios by using means and SD of egg production of the 22 commercial strains. From the simulated data, random examples were generated for neural network training and testing for the weekly egg production prediction from wk 22 to 36. Three neural network architectures--back-propagation-3, Ward-5, and the general regression neural network--were compared for their efficiency to forecast egg production, along with other traditional models. The general regression neural network gave the best-fitting line, which almost overlapped with the commercial egg production data, with an R(2) of 0.71. The general regression neural network-predicted curve was compared with original egg production data, the average curves of white-shelled and brown-shelled strains, linear regression predictions, and the Gompertz nonlinear model. The general regression neural network was superior in all these comparisons and may be the model of choice if the initial overprediction is managed efficiently. In general, neural network models are efficient, are easy to use, require fewer data, and are practical under farm management conditions to forecast egg production.
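
    For reference, the general regression neural network mentioned above amounts to a kernel-weighted average of the training targets; the short sketch below shows the idea on synthetic weekly egg-production data, where the logistic-shaped curve and the smoothing parameter sigma are illustrative assumptions rather than the trial data.

```python
# Hedged sketch of a general regression neural network (GRNN) prediction.
import numpy as np

def grnn_predict(x_query, x_train, y_train, sigma=1.0):
    """GRNN estimate: Gaussian-kernel weighted average of training targets."""
    w = np.exp(-(x_train - x_query) ** 2 / (2 * sigma ** 2))
    return float(w @ y_train / w.sum())

# Toy data: weekly egg production (%) rising from week 22 to week 36.
weeks = np.arange(22, 37, dtype=float)
production = 90 / (1 + np.exp(-(weeks - 25)))

print(grnn_predict(30.0, weeks, production, sigma=1.5))   # interpolated estimate
```

    The single smoothing parameter sigma is the main quantity that has to be tuned, which is one reason GRNN-type models are often described as easy to use and relatively data-light.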

  5. Adaptive Neural Network Based Control of Noncanonical Nonlinear Systems.

    PubMed

    Zhang, Yanjun; Tao, Gang; Chen, Mou

    2016-09-01

    This paper presents a new study on the adaptive neural network-based control of a class of noncanonical nonlinear systems with large parametric uncertainties. Unlike commonly studied canonical form nonlinear systems whose neural network approximation system models have explicit relative degree structures, which can directly be used to derive parameterized controllers for adaptation, noncanonical form nonlinear systems usually do not have explicit relative degrees, and thus their approximation system models are also in noncanonical forms. It is well-known that the adaptive control of noncanonical form nonlinear systems involves the parameterization of system dynamics. As demonstrated in this paper, it is also the case for noncanonical neural network approximation system models. Effective control of such systems is an open research problem, especially in the presence of uncertain parameters. This paper shows that it is necessary to reparameterize such neural network system models for adaptive control design, and that such reparameterization can be realized using a relative degree formulation, a concept yet to be studied for general neural network system models. This paper then derives the parameterized controllers that guarantee closed-loop stability and asymptotic output tracking for noncanonical form neural network system models. An illustrative example is presented with the simulation results to demonstrate the control design procedure, and to verify the effectiveness of such a new design method.

  6. An artificial neural network improves prediction of observed survival in patients with laryngeal squamous carcinoma.

    PubMed

    Jones, Andrew S; Taktak, Azzam G F; Helliwell, Timothy R; Fenton, John E; Birchall, Martin A; Husband, David J; Fisher, Anthony C

    2006-06-01

    The accepted method of modelling and predicting failure/survival, Cox's proportional hazards model, is theoretically inferior to neural network derived models for analysing highly complex systems with large datasets. A blinded comparison of the neural network versus the Cox model in predicting survival was performed utilising data from 873 treated patients with laryngeal cancer. These were divided randomly and equally into a training set and a study set, and the Cox and neural network models were applied in turn. Data were then divided into seven sets of binary covariates and the analysis repeated. Overall survival was not significantly different on the Kaplan-Meier plot, or with either test model. Although the network produced qualitatively similar results to the Cox model, it was significantly more sensitive to differences in survival curves for age and N stage. We propose that neural networks are capable of prediction in systems involving complex interactions between variables and non-linearity.

  7. Set selection dynamical system neural networks with partial memories, with applications to Sudoku and KenKen puzzles.

    PubMed

    Boreland, B; Clement, G; Kunze, H

    2015-08-01

    After reviewing set selection and memory model dynamical system neural networks, we introduce a neural network model that combines set selection with partial memories (stored memories on subsets of states in the network). We establish that feasible equilibria with all states equal to ± 1 correspond to answers to a particular set theoretic problem. We show that KenKen puzzles can be formulated as a particular case of this set theoretic problem and use the neural network model to solve them; in addition, we use a similar approach to solve Sudoku. We illustrate the approach in examples. As a heuristic experiment, we use online or print resources to identify the difficulty of the puzzles and compare these difficulties to the number of iterations used by the appropriate neural network solver, finding a strong relationship. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Method for Constructing Composite Response Surfaces by Combining Neural Networks with other Interpolation or Estimation Techniques

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)

    2003-01-01

    A method and system for design optimization that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The present invention employs a unique strategy called parameter-based partitioning of the given design space. In the design procedure, a sequence of composite response surfaces based on both neural networks and polynomial fits is used to traverse the design space to identify an optimal solution. The composite response surface has both the power of neural networks and the economy of low-order polynomials (in terms of the number of simulations needed and the network training requirements). The present invention handles design problems with many more parameters than would be possible using neural networks alone and permits a designer to rapidly perform a variety of trade-off studies before arriving at the final design.
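
    A much-simplified illustration of a composite surface (not the patented procedure) is given below: a low-order polynomial captures the global trend over one design parameter and a small neural network is fit to the residuals, the composite prediction being their sum; the test function and all settings are assumptions.

```python
# Simplified composite response surface sketch: polynomial trend + network residual.
import numpy as np
from sklearn.neural_network import MLPRegressor

x = np.linspace(0.0, 1.0, 40)
y = 3 * x**2 - x + 0.3 * np.sin(12 * x)    # stand-in for an expensive simulation

coef = np.polyfit(x, y, 2)                  # low-order polynomial part
resid = y - np.polyval(coef, x)             # what the network must capture

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(x.reshape(-1, 1), resid)

def composite(x_new):
    """Composite response surface: polynomial trend plus network residual."""
    x_new = np.asarray(x_new, dtype=float)
    return np.polyval(coef, x_new) + net.predict(x_new.reshape(-1, 1))

print(composite([0.25, 0.75]))
```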

  9. Global asymptotical ω-periodicity of a fractional-order non-autonomous neural networks.

    PubMed

    Chen, Boshan; Chen, Jiejie

    2015-08-01

    We study the global asymptotic ω-periodicity of a class of fractional-order non-autonomous neural networks. Firstly, based on the Caputo fractional-order derivative, it is shown that ω-periodic or autonomous fractional-order neural networks cannot generate exactly ω-periodic signals. Next, by using the contraction mapping principle, we discuss the existence and uniqueness of the S-asymptotically ω-periodic solution for a class of fractional-order non-autonomous neural networks. Then, by using a fractional-order differential and integral inequality technique, we study the global Mittag-Leffler stability and global asymptotic periodicity of the fractional-order non-autonomous neural networks, which shows that all paths of the networks, starting from arbitrary points and responding to persistent, nonconstant ω-periodic external inputs, asymptotically converge to the same nonconstant ω-periodic function that may not be a solution. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. A new bio-inspired stimulator to suppress hyper-synchronized neural firing in a cortical network.

    PubMed

    Amiri, Masoud; Amiri, Mahmood; Nazari, Soheila; Faez, Karim

    2016-12-07

    Hyper-synchronous neural oscillations are characteristic of several neurological diseases such as epilepsy. On the other hand, glial cells, and particularly astrocytes, can influence neural synchronization. Therefore, based on recent research, a new bio-inspired stimulator is proposed which is essentially a dynamical implementation of an astrocyte biophysical model. The performance of the new stimulator is investigated on a large-scale cortical network. Both excitatory and inhibitory synapses are considered in the simulated spiking neural network. The simulation results show that the new stimulator performs well and is able to reduce recurrent abnormal excitability, which in turn avoids hyper-synchronous neural firing in the spiking neural network. In this way, the proposed stimulator has a demand-controlled characteristic and is a good candidate for the deep brain stimulation (DBS) technique to successfully suppress neural hyper-synchronization. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Convergence dynamics and pseudo almost periodicity of a class of nonautonomous RFDEs with applications

    NASA Astrophysics Data System (ADS)

    Fan, Meng; Ye, Dan

    2005-09-01

    This paper studies the dynamics of a system of retarded functional differential equations (i.e., RFDEs), which generalize the Hopfield neural network models, the bidirectional associative memory neural networks, the hybrid network models of the cellular neural network type, and some population growth models. Sufficient criteria are established for global exponential stability and for the existence and uniqueness of a pseudo almost periodic solution. The approaches are based on constructing suitable Lyapunov functionals and on the well-known Banach contraction mapping principle. The paper ends with applications of the main results to some neural network models and population growth models, together with numerical simulations.

  12. Adaptive Optimization of Aircraft Engine Performance Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Long, Theresa W.

    1995-01-01

    Preliminary results are presented on the development of an adaptive neural network based control algorithm to enhance aircraft engine performance. This work builds upon a previous National Aeronautics and Space Administration (NASA) effort known as Performance Seeking Control (PSC). PSC is an adaptive control algorithm which contains a model of the aircraft's propulsion system which is updated on-line to match the operation of the aircraft's actual propulsion system. Information from the on-line model is used to adapt the control system during flight to allow optimal operation of the aircraft's propulsion system (inlet, engine, and nozzle) to improve aircraft engine performance without compromising reliability or operability. Performance Seeking Control has been shown to yield reductions in fuel flow, increases in thrust, and reductions in engine fan turbine inlet temperature. The neural network based adaptive control, like PSC, will contain a model of the propulsion system which will be used to calculate optimal control commands on-line. It is hoped that it will provide additional benefits beyond those of PSC. The PSC algorithm is computationally intensive, is valid only at near steady-state flight conditions, and has no way to adapt or learn on-line. These issues are being addressed in the development of the optimal neural controller. Specialized neural network processing hardware is being developed to run the software; the algorithm will be valid at steady-state and transient conditions and will take advantage of the on-line learning capability of neural networks. Future plans include testing the neural network software and hardware prototype against an aircraft engine simulation. In this paper, the proposed neural network software and hardware are described and preliminary neural network training results are presented.

  13. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead, with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor interprocessor communications. This paper considers the simulation of only feed-forward neural networks, although this method is extendable to recurrent networks.

  14. Stem cells from human exfoliated deciduous teeth differentiate toward neural cells in a medium dynamically cultured with Schwann cells in a series of polydimethylsiloxanes scaffolds

    NASA Astrophysics Data System (ADS)

    Su, Wen-Ta; Pan, Yu-Jing

    2016-08-01

    Objective. Schwann cells (SCs) are primary structural and functional cells in the peripheral nervous system. These cells play a crucial role in peripheral nerve regeneration by releasing neurotrophic factors. This study evaluated the neural differentiation potential of stem cells from human exfoliated deciduous teeth (SHEDs) in a rat Schwann cell (RSC) culture medium. Approach. SHEDs and RSCs were individually cultured on a polydimethylsiloxane (PDMS) scaffold, and the effects of the RSC medium on SHED differentiation were compared between static and dynamic cultures. Main results. Results demonstrated that the SHED cells differentiated by the RSC culture medium in the static culture formed neurospheres after 7 days at the earliest, whereas SHED cells formed neurospheres within 3 days in the dynamic culture. These results confirm that the RSC culture medium can induce neurosphere formation; the speed of formation and the number of neurospheres (19.16-fold higher) in the dynamic culture were superior to the static culture after 3 days of culture. The SHED-derived spheres were further incubated in the RSC culture medium, where they continuously differentiated into neurons and neuroglial cells. Immunofluorescent staining and RT-PCR revealed the neural markers nestin, β-III tubulin, GFAP, and γ-enolase on the differentiated cells. Significance. These results indicate that the RSC culture medium can induce the neural differentiation of SHED cells, and can be used as a new therapeutic tool to repair nerve damage.

  15. Interactions between neural networks: a mechanism for tuning chaos and oscillations.

    PubMed

    Wang, Lipo

    2007-06-01

    We show that chaos and oscillations in a higher-order binary neural network can be tuned effectively using interactions between neural networks. Our results suggest that network interactions may be useful as a means of adjusting the level of dynamic activities in systems that employ chaos and oscillations for information processing, or as a means of suppressing oscillatory behaviors in systems that require stability.

  16. Analysis of the characteristics of the synchronous clusters in the adaptive Kuramoto network and neural network of the epileptic brain

    NASA Astrophysics Data System (ADS)

    Hramov, Alexander E.; Kharchenko, Alexander A.; Makarov, Vladimir V.; Khramova, Marina V.; Koronovskii, Alexey A.; Pavlov, Alexey N.; Dana, Syamal K.

    2016-04-01

    In this paper we study the mechanisms of phase synchronization in an adaptive network of Kuramoto oscillators and in the neural network of the brain by considering the integral characteristics of the observed network signals. As the integral characteristic of the model network we consider the summary signal produced by the oscillators. Similarly to the model situation, we study the ECoG signal as the integral characteristic of the neural network of the brain. We show that the establishment of phase synchronization results in an increase of the peak, corresponding to the synchronized oscillators, in the wavelet energy spectrum of the integral signals. The observed correlation between the phase relations of the elements and the integral characteristics of the whole network opens the way to detecting the size of synchronous clusters in the neural networks of the epileptic brain before and during seizures.
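
    The relation between phase synchronization and the integral (summary) signal can be illustrated with a plain Kuramoto network, as in the sketch below; the coupling strength, network size, and the use of a Fourier spectrum in place of the paper's wavelet analysis are assumptions.

```python
# Illustrative Kuramoto sketch: the summary signal of the network grows a clear
# spectral peak once the oscillators phase-synchronize.
import numpy as np

rng = np.random.default_rng(5)
N, K, dt, steps = 200, 2.0, 0.01, 2000
omega = rng.normal(0, 0.5, N)                   # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)            # initial phases
summary = np.empty(steps)

for t in range(steps):
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + coupling)
    summary[t] = np.sin(theta).sum()            # integral characteristic of the network

r = np.abs(np.exp(1j * theta).mean())           # Kuramoto order parameter at the end
spectrum = np.abs(np.fft.rfft(summary - summary.mean()))
print(f"order parameter r = {r:.2f}, dominant spectral bin = {spectrum.argmax()}")
```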

  17. A Novel Robot System Integrating Biological and Mechanical Intelligence Based on Dissociated Neural Network-Controlled Closed-Loop Environment.

    PubMed

    Li, Yongcheng; Sun, Rong; Wang, Yuechao; Li, Hongyi; Zheng, Xiongfei

    2016-01-01

    We propose the architecture of a novel robot system merging biological and artificial intelligence based on a neural controller connected to an external agent. We initially built a framework that connected the dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, characterized by a camera and a two-wheeled robot, was designed to execute the target-searching task. We modified a software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and the artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent, resulting in high performance. Surprisingly, under the tetanus stimulus training, the robot performed better and better as the number of training cycles increased, because of the short-term plasticity of the neural network (a kind of reinforcement learning). Compared to previously reported work, we adopted an effective experimental protocol (i.e. increasing the number of training cycles) to ensure the occurrence of short-term plasticity, and preliminarily demonstrated that the improvement in the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity processing of neural networks, as well as theoretical inspiration for next-generation neuro-prostheses based on the bi-directional exchange of information within hierarchical neural networks.

  18. A Novel Robot System Integrating Biological and Mechanical Intelligence Based on Dissociated Neural Network-Controlled Closed-Loop Environment

    PubMed Central

    Wang, Yuechao; Li, Hongyi; Zheng, Xiongfei

    2016-01-01

    We propose the architecture of a novel robot system merging biological and artificial intelligence based on a neural controller connected to an external agent. We initially built a framework that connected the dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, characterized by a camera and a two-wheeled robot, was designed to execute the target-searching task. We modified a software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and the artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent, resulting in high performance. Surprisingly, under the tetanus stimulus training, the robot performed better and better as the number of training cycles increased, because of the short-term plasticity of the neural network (a kind of reinforcement learning). Compared to previously reported work, we adopted an effective experimental protocol (i.e. increasing the number of training cycles) to ensure the occurrence of short-term plasticity, and preliminarily demonstrated that the improvement in the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity processing of neural networks, as well as theoretical inspiration for next-generation neuro-prostheses based on the bi-directional exchange of information within hierarchical neural networks. PMID:27806074

  19. Efficient and self-adaptive in-situ learning in multilayer memristor neural networks.

    PubMed

    Li, Can; Belkin, Daniel; Li, Yunning; Yan, Peng; Hu, Miao; Ge, Ning; Jiang, Hao; Montgomery, Eric; Lin, Peng; Wang, Zhongrui; Song, Wenhao; Strachan, John Paul; Barnell, Mark; Wu, Qing; Williams, R Stanley; Yang, J Joshua; Xia, Qiangfei

    2018-06-19

    Memristors with tunable resistance states are emerging building blocks of artificial neural networks. However, in situ learning on a large-scale multiple-layer memristor network has yet to be demonstrated because of challenges in device property engineering and circuit integration. Here we monolithically integrate hafnium oxide-based memristors with a foundry-made transistor array into a multiple-layer neural network. We experimentally demonstrate in situ learning capability and achieve competitive classification accuracy on a standard machine learning dataset, which further confirms that the training algorithm allows the network to adapt to hardware imperfections. Our simulation using the experimental parameters suggests that a larger network would further increase the classification accuracy. The memristor neural network is a promising hardware platform for artificial intelligence with high speed-energy efficiency.

  20. Sea ice classification using fast learning neural networks

    NASA Technical Reports Server (NTRS)

    Dawson, M. S.; Fung, A. K.; Manry, M. T.

    1992-01-01

    A fast learning neural network approach to the classification of sea ice is presented. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) were tested on simulated data sets based on the known dominant scattering characteristics of the target class. Four classes were used in the data simulation: open water, thick lossy saline ice, thin saline ice, and multiyear ice. The BP network was unable to consistently converge to less than 25 percent error, while the FL method yielded an average error of approximately 1 percent on the first iteration of training. The fast learning method presented can significantly reduce the CPU time necessary to train a neural network as well as consistently yield higher classification accuracy than BP networks.
