Science.gov

Sample records for active neural networks

  1. Neural network with formed dynamics of activity

    SciTech Connect

    Dunin-Barkovskii, V.L.; Osovets, N.B.

    1995-03-01

The problem of developing a neural network with a given pattern of its state sequence is considered. A neural network structure and an algorithm for forming its bond matrix, which lead to an approximate but robust solution of the problem, are proposed and discussed. Limiting characteristics of the serviceability of the proposed structure are studied. Various methods of visualizing dynamic processes in a neural network are compared. Possible applications of the results for interpreting neurophysiological data and for neuroinformatics systems are discussed.

  2. Models of neural networks with fuzzy activation functions

    NASA Astrophysics Data System (ADS)

    Nguyen, A. T.; Korikov, A. M.

    2017-02-01

This paper investigates the application of a new form of neuron activation functions based on the fuzzy membership functions derived from the theory of fuzzy systems. On the basis of the results regarding neuron models with fuzzy activation functions, we created models of fuzzy-neural networks. These fuzzy-neural network models differ from conventional networks that implement fuzzy inference systems using the methods of neural networks. While conventional fuzzy-neural networks belong to the first type, the fuzzy-neural networks proposed here are defined as second-type models. The simulation results show that the proposed second-type model can successfully solve the problem of property prediction for time-dependent signals. Neural networks with fuzzy impulse activation functions can be widely applied in many fields of science, technology and mechanical engineering to solve problems of classification, prediction, approximation, etc.
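A fuzzy activation of this kind can be sketched by using a fuzzy membership function, here a Gaussian, as the neuron's output nonlinearity. The function form and the parameters c and sigma are illustrative assumptions, not the paper's exact model:

```python
import math

def gaussian_membership(x, c=0.0, sigma=1.0):
    """Gaussian fuzzy membership function used as an activation (assumed form)."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def fuzzy_neuron(inputs, weights, c=0.0, sigma=1.0):
    """Weighted sum of inputs passed through the fuzzy membership activation."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return gaussian_membership(s, c, sigma)
```

The output lies in (0, 1] and peaks when the weighted input sum equals the membership center c, which is what makes such activations "fuzzy" rather than monotone sigmoidal.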

  3. Deep Neural Networks with Multistate Activation Functions

    PubMed Central

    Cai, Chenghao; Xu, Yanyan; Ke, Dengfeng; Su, Kaile

    2015-01-01

We propose multistate activation functions (MSAFs) for deep neural networks (DNNs). These MSAFs are new kinds of activation functions capable of representing more than two states, including the N-order MSAFs and the symmetrical MSAF. DNNs with these MSAFs can be trained via conventional Stochastic Gradient Descent (SGD) as well as mean-normalised SGD. We also discuss how these MSAFs perform on classification problems. Experimental results on the TIMIT corpus reveal that, on speech recognition tasks, DNNs with MSAFs perform better than conventional DNNs, achieving a relative improvement of 5.60% in phoneme error rate. Further experiments reveal that mean-normalised SGD facilitates the training of DNNs with MSAFs, especially with large training sets. The models can also be trained directly, without pretraining, when the training set is sufficiently large, which results in a considerable relative improvement of 5.82% in word error rate. PMID:26448739
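One plausible reading of an N-order multistate activation is a sum of shifted logistic sigmoids, which yields N+1 plateau "states". The shift parameter and the exact construction here are assumptions for illustration; the paper's definition may differ:

```python
import math

def sigmoid(x):
    """Standard logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))

def msaf(x, n=2, shift=4.0):
    """Sketch of an N-order multistate activation: a sum of n shifted
    sigmoids, saturating at n distinct upper plateaus (assumed form)."""
    return sum(sigmoid(x - k * shift) for k in range(n))
```

For n = 2 the output ranges over roughly three plateaus (near 0, 1, and 2), in contrast to the two saturation states of a single sigmoid.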

  4. The effect of the neural activity on topological properties of growing neural networks.

    PubMed

    Gafarov, F M; Gafarova, V R

    2016-09-01

The connectivity structure of cortical networks defines how information is transmitted and processed; it is a source of the complex spatiotemporal patterns of the network's development, and connections are created and deleted continuously throughout the life of the organism. In this paper, we study how neural activity influences the growth process in neural networks. Using a two-dimensional activity-dependent growth model, we demonstrate the growth of a neural network from disconnected neurons to a fully connected network. To quantify the influence of activity on the network's topological properties, we compare the model with a randomly growing network that does not depend on activity. Analyzing the connection structure with methods from random graph theory shows that growth in neural networks results in the formation of a well-known "small-world" network.
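The "small-world" property combines high local clustering with short path lengths. The clustering side of that signature can be measured on an adjacency structure with a few lines of pure Python (an illustrative metric, not the paper's analysis code):

```python
from itertools import combinations

def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected graph given as
    {node: set_of_neighbors}. For each node, counts how many of its
    neighbor pairs are themselves connected."""
    total, counted = 0.0, 0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # clustering undefined for degree < 2
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += 2.0 * links / (k * (k - 1))
        counted += 1
    return total / counted if counted else 0.0
```

A triangle has coefficient 1.0 and a simple chain 0.0; a small-world network shows values close to a regular lattice while keeping path lengths close to a random graph.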

  5. Neural Networks

    SciTech Connect

    Smith, Patrick I.

    2003-09-23

information [2]. Each one of these cells acts as a simple processor. When individual cells interact with one another, the complex abilities of the brain become possible. In a neural network, the input data are processed by a propagation function that sums the values of all incoming data. The resulting value is then compared with a threshold, or specific value: it must exceed the activation function value in order to become output. The activation function is a mathematical function that a neuron uses to produce an output from its input value [8]. Figure 1 depicts this process. Neural networks usually have three layers: an input, a hidden, and an output layer. These layers create the end result of the neural network. A real-world example is a child associating the word "dog" with a picture. The child says "dog" and simultaneously looks at a picture of a dog. The input is the spoken word "dog", the hidden layer is the brain processing, and the output is the category of the word "dog" based on the picture. This illustration describes how a neural network functions.
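The propagation-plus-threshold process described above can be sketched as a minimal threshold neuron (names and values are illustrative):

```python
def step_neuron(inputs, weights, threshold):
    """Propagation function sums the weighted inputs; the neuron fires
    (outputs 1) only if that sum exceeds the threshold."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > threshold else 0
```

With weights 0.6 on two inputs and a threshold of 1.0, the neuron fires only when both inputs are active, behaving like a soft AND gate.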

  6. Systematic fluctuation expansion for neural network activity equations.

    PubMed

    Buice, Michael A; Cowan, Jack D; Chow, Carson C

    2010-02-01

    Population rate or activity equations are the foundation of a common approach to modeling for neural networks. These equations provide mean field dynamics for the firing rate or activity of neurons within a network given some connectivity. The shortcoming of these equations is that they take into account only the average firing rate, while leaving out higher-order statistics like correlations between firing. A stochastic theory of neural networks that includes statistics at all orders was recently formulated. We describe how this theory yields a systematic extension to population rate equations by introducing equations for correlations and appropriate coupling terms. Each level of the approximation yields closed equations; they depend only on the mean and specific correlations of interest, without an ad hoc criterion for doing so. We show in an example of an all-to-all connected network how our system of generalized activity equations captures phenomena missed by the mean field rate equations alone.

  7. Neural Networks

    DTIC Science & Technology

    1990-01-01

TITLE: Neural Networks. Subject terms: neural networks; optical architectures; nonlinear optics; adaptation. 1. INTRODUCTION: Neural networks are a type of distributed processing system [1

  8. Sparse Neural Network Models of Antimicrobial Peptide-Activity Relationships.

    PubMed

    Müller, Alex T; Kaymaz, Aral C; Gabernet, Gisela; Posselt, Gernot; Wessler, Silja; Hiss, Jan A; Schneider, Gisbert

    2016-12-01

We present an adaptive neural network model for chemical data classification. The method uses an evolutionary algorithm to optimize the network structure by seeking sparsely connected architectures. The number of hidden layers, the number of neurons in each layer and their connectivity are free variables of the system. We used the method to predict antimicrobial peptide activity from the amino acid sequence. Visualization of the evolved sparse network structures suggested a high charge density and a low aggregation potential in solution as beneficial for antimicrobial activity. However, different training data sets and peptide representations resulted in greatly varying network structures. Overall, the sparse network models turned out to be less accurate than fully connected networks. In a prospective application, we synthesized and tested 10 de novo generated peptides that were predicted either to possess antimicrobial activity or to be inactive. Two of the predicted antibacterial peptides showed considerable bacteriostatic effects against both Staphylococcus aureus and Escherichia coli. None of the predicted inactive peptides possessed antibacterial properties. Molecular dynamics simulations of selected peptide structures in water and TFE suggest a pronounced peptide helicity in a hydrophobic environment. The results of this study underscore the applicability of neural networks for guiding the computer-assisted design of new peptides with desired properties.

  9. Application of neural networks to seismic active control

    SciTech Connect

    Tang, Yu

    1995-07-01

An exploratory study on seismic active control using an artificial neural network (ANN) is presented, in which a single-degree-of-freedom (SDF) structural system is controlled by a trained neural network. A feed-forward neural network and the backpropagation training method are used in the study. In backpropagation training, the learning rate is determined by ensuring that the error function decreases at each training cycle. The training patterns for the neural net are generated randomly. The trained ANN is then used to compute the control force according to the control algorithm. The control strategy proposed herein is to apply the control force at every time step to suppress the build-up of the system response. The ground motions considered in the simulations are the N21E and N69W components of the Lake Hughes No. 12 record from the earthquake of February 9, 1971 in the San Fernando Valley, California. A significant reduction of the structural response, by one order of magnitude, is observed. It is also shown that the proposed control strategy can reduce the peak that occurs during the first few cycles of the time history. These promising results assert the potential of applying ANNs to active structural control under seismic loads.
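The training rule described, choosing the learning rate so that the error decreases at each cycle, can be sketched as a step-halving gradient update. This is a simplified sketch on a vector of parameters, not the paper's implementation:

```python
def train_step(w, grad, error_fn, lr=1.0):
    """Take a gradient step, halving the learning rate until the error
    function actually decreases (the acceptance rule described above)."""
    e0 = error_fn(w)
    while lr > 1e-12:
        w_new = [wi - lr * gi for wi, gi in zip(w, grad)]
        if error_fn(w_new) < e0:
            return w_new, lr  # accepted step and the rate that achieved it
        lr *= 0.5  # shrink the rate and retry
    return w, lr  # no decrease found; keep the old weights
```

For the quadratic error w^2 at w = 2 with gradient 4, a full step overshoots (error unchanged), so the rate is halved once and the step lands exactly at the minimum.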

  10. Multiview fusion for activity recognition using deep neural networks

    NASA Astrophysics Data System (ADS)

    Kavi, Rahul; Kulathumani, Vinod; Rohit, Fnu; Kecojevic, Vlad

    2016-07-01

Convolutional neural networks (ConvNets) coupled with long short-term memory (LSTM) networks have recently been shown to be effective for video classification, as they combine the automatic feature extraction capabilities of a neural network with additional memory in the temporal domain. This paper shows how multiview fusion can be applied to such a ConvNet-LSTM architecture. Two different fusion techniques are presented. The system is first evaluated in the context of a driver activity recognition system using data collected in a multicamera driving simulator. These results show significant improvement in accuracy with multiview fusion and also show that deep learning performs better than a traditional approach using spatiotemporal features, even without requiring any background subtraction. The system is also validated on another publicly available multiview action recognition dataset that has 12 action classes and 8 camera views.
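Score-level (late) fusion across camera views, one of the simplest multiview fusion strategies, can be sketched as averaging per-class scores before taking the argmax. This is an illustrative baseline; the paper's two fusion techniques may differ:

```python
def late_fusion(view_scores):
    """Average per-class scores across views and return the winning class.
    `view_scores` is a list of {class_name: score} dicts, one per camera."""
    n = len(view_scores)
    classes = view_scores[0].keys()
    fused = {c: sum(v[c] for v in view_scores) / n for c in classes}
    return max(fused, key=fused.get)
```

A class that is ambiguous from one viewpoint can still win overall if other viewpoints score it consistently higher, which is the intuition behind multiview fusion.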

  11. Structural damage detection using active members and neural networks

    NASA Astrophysics Data System (ADS)

    Manning, R. A.

    1994-06-01

The detection of damage in structures is a topic of considerable interest in many fields. In the past, many methods for detecting damage in structures have relied on finite element model refinement. This note presents a structural damage methodology in which only active member transfer function data are used, in conjunction with an artificial neural network, to detect damage in structures. Specifically, the method relies on training a neural network with active member transfer function pole/zero information to classify damaged-structure measurements and to predict the degree of damage in the structure. The method differs from many past damage detection algorithms in that no attempt is made to update a finite element model or to match measured data with new finite element analyses of the structure in a damaged state.

  12. Multivariate neural network operators with sigmoidal activation functions.

    PubMed

    Costarelli, Danilo; Spigler, Renato

    2013-12-01

In this paper, we study pointwise and uniform convergence, as well as the order of approximation, of a family of linear positive multivariate neural network (NN) operators with sigmoidal activation functions. The order of approximation is studied for functions belonging to suitable Lipschitz classes using a moment-type approach. The special cases of NN operators activated by logistic, hyperbolic tangent, and ramp sigmoidal functions are considered. Multivariate NN approximation finds applications, typically, in neurocomputing processes. Our approach to NN operators allows us to extend previous convergence results and, in some cases, to improve the order of approximation. The case of multivariate quasi-interpolation operators constructed with sigmoidal functions is also considered.
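A univariate special case of such NN operators, activated by the logistic sigmoid, can be sketched as a normalized sum of sampled function values weighted by a density built from sigmoid differences. This is one common construction in this literature, written here as an assumption; the paper's multivariate operators generalize it:

```python
import math

def sigma(x):
    """Logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))

def phi(x):
    """Symmetric, bell-shaped density built from the sigmoid (assumed form)."""
    return 0.5 * (sigma(x + 1.0) - sigma(x - 1.0))

def nn_operator(f, n, x, a=0.0, b=1.0):
    """Sketch of a univariate NN operator on [a, b]:
    a normalized sum of f(k/n) weighted by phi(n*x - k)."""
    ks = range(int(math.floor(a * n)), int(math.ceil(b * n)) + 1)
    num = sum(f(k / n) * phi(n * x - k) for k in ks)
    den = sum(phi(n * x - k) for k in ks)
    return num / den
```

The normalization makes the operator reproduce constants exactly, and for smooth f the output approaches f(x) as n grows, which is the convergence behavior the paper quantifies.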

  13. Lag Synchronization of Switched Neural Networks via Neural Activation Function and Applications in Image Encryption.

    PubMed

    Wen, Shiping; Zeng, Zhigang; Huang, Tingwen; Meng, Qinggang; Yao, Wei

    2015-07-01

This paper investigates the problem of global exponential lag synchronization of a class of switched neural networks with time-varying delays via the neural activation function, with applications in image encryption. The controller depends on the output of the system in the case of packed circuits, since it is hard to measure the inner state of such circuits; it is therefore critical to design the controller based on the neuron activation function. Comparing the results in this paper with existing ones shows that we improve and generalize the results derived in the previous literature. Several examples are given to illustrate the effectiveness and the potential applications in image encryption.

  14. Generalized activity equations for spiking neural network dynamics

    PubMed Central

    Buice, Michael A.; Chow, Carson C.

    2013-01-01

    Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate compared to rate neurons because of the inherent disparity in time scales—the spike duration time is much shorter than the inter-spike time, which is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we will show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first order terms in the perturbation expansion keep track of covariances. PMID:24298252

  15. Studying modulation on simultaneously activated SSVEP neural networks by a cognitive task.

    PubMed

    Wu, Zhenghua

    2014-01-01

Since the discovery of the steady-state visually evoked potential (SSVEP), it has been used in many fields. Numerous studies suggest that there exist three SSVEP neural networks in different frequency bands. An obvious phenomenon has also been observed: the amplitude and phase of the SSVEP can be modulated by a cognitive task. Previous works have studied this modulation on SSVEP neural networks activated separately during a cognitive task. If two or more SSVEP neural networks are activated simultaneously during a cognitive task, is the modulation on the different SSVEP neural networks the same? In this study, two different SSVEP neural networks were activated simultaneously by two different frequency flickers, while a working-memory task irrelevant to the flickers was conducted at the same time. The modulated SSVEP waves were compared with each other and with those evoked by a single flicker in previous studies. The comparison shows that a cognitive task modulates different SSVEP neural networks in a similar way.

  16. Neural Networks

    NASA Astrophysics Data System (ADS)

    Schwindling, Jerome

    2010-04-01

This course presents an overview of the concepts of neural networks and their application in the framework of high-energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neurobiology, introducing the multi-layer perceptron, learning, and its use as a data classifier. The concept is then presented in a second part in more mathematical detail, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools for event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  17. Absolute exponential stability of recurrent neural networks with Lipschitz-continuous activation functions and time delays.

    PubMed

    Cao, Jinde; Wang, Jun

    2004-04-01

This paper investigates the absolute exponential stability of a general class of delayed neural networks whose activation functions are required only to be partially Lipschitz continuous and monotone nondecreasing, not necessarily differentiable or bounded. Three new sufficient conditions are derived to ascertain whether the equilibrium points of delayed neural networks with additively diagonally stable interconnection matrices are absolutely exponentially stable, using a delay Halanay-type inequality and a Lyapunov function. The stability criteria are also suitable for delayed optimization neural networks and delayed cellular neural networks, whose activation functions are often nondifferentiable or unbounded. The results herein answer the question: if a neural network without any delay is absolutely exponentially stable, then under what additional conditions is the neural network with delay also absolutely exponentially stable?

  18. Neural Network Function Classifier

    DTIC Science & Technology

    2003-02-07

    neural network sets. Each of the neural networks in a particular set is trained to recognize a particular data set type. The best function representation of the data set is determined from the neural network output. The system comprises sets of trained neural networks having neural networks trained to identify different types of data. The number of neural networks within each neural network set will depend on the number of function types that are represented. The system further comprises

  19. Natural lecithin promotes neural network complexity and activity

    PubMed Central

    Latifi, Shahrzad; Tamayol, Ali; Habibey, Rouhollah; Sabzevari, Reza; Kahn, Cyril; Geny, David; Eftekharpour, Eftekhar; Annabi, Nasim; Blau, Axel; Linder, Michel; Arab-Tehrany, Elmira

    2016-01-01

    Phospholipids in the brain cell membranes contain different polyunsaturated fatty acids (PUFAs), which are critical to nervous system function and structure. In particular, brain function critically depends on the uptake of the so-called “essential” fatty acids such as omega-3 (n-3) and omega-6 (n-6) PUFAs that cannot be readily synthesized by the human body. We extracted natural lecithin rich in various PUFAs from a marine source and transformed it into nanoliposomes. These nanoliposomes increased neurite outgrowth, network complexity and neural activity of cortical rat neurons in vitro. We also observed an upregulation of synapsin I (SYN1), which supports the positive role of lecithin in synaptogenesis, synaptic development and maturation. These findings suggest that lecithin nanoliposomes enhance neuronal development, which may have an impact on devising new lecithin delivery strategies for therapeutic applications. PMID:27228907

  20. Critical Branching Neural Networks

    ERIC Educational Resources Information Center

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  1. Active Control of Complex Systems via Dynamic (Recurrent) Neural Networks

    DTIC Science & Technology

    1992-05-30

course, to on-going changes brought about by learning processes. As research in neurodynamics proceeded, the concept of reverberatory information flows... Cited works include "...Microstructure of Cognition, Vol. 1: Foundations," M.I.T. Press, Cambridge, Massachusetts, pp. 354-361, 1986; Schwarz, G., "Estimating the dimension of a..."; and "...Continually Running Fully Recurrent Neural Networks," ICS Report 8805, Institute of Cognitive Science, University of California at San Diego, 1988.

  2. Frequency domain active vibration control of a flexible plate based on neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Jinxin; Chen, Xuefeng; He, Zhengjia

    2013-06-01

A neural-network (NN)-based active control system is proposed to reduce the low-frequency noise radiation of a simply supported flexible plate. A feedback control system was built, in which a neural network controller (NNC) and a neural network identifier (NNI) were applied. Multi-frequency control in the frequency domain was achieved in simulation with the NN-based control system. A pre-test experiment of the control system on a real simply supported plate was conducted, and the NN-based control algorithm was shown to perform effectively. This work lays a solid foundation for the active vibration control of mechanical structures.

  3. The optimization of force inputs for active structural acoustic control using a neural network

    NASA Technical Reports Server (NTRS)

    Cabell, R. H.; Lester, H. C.; Silcox, R. J.

    1992-01-01

    This paper investigates the use of a neural network to determine which force actuators, of a multi-actuator array, are best activated in order to achieve structural-acoustic control. The concept is demonstrated using a cylinder/cavity model on which the control forces, produced by piezoelectric actuators, are applied with the objective of reducing the interior noise. A two-layer neural network is employed and the back propagation solution is compared with the results calculated by a conventional, least-squares optimization analysis. The ability of the neural network to accurately and efficiently control actuator activation for interior noise reduction is demonstrated.

  4. Visualizing the Hidden Activity of Artificial Neural Networks.

    PubMed

    Rauber, Paulo E; Fadel, Samuel G; Falcao, Alexandre X; Telea, Alexandru C

    2017-01-01

    In machine learning, pattern classification assigns high-dimensional vectors (observations) to classes based on generalization from examples. Artificial neural networks currently achieve state-of-the-art results in this task. Although such networks are typically used as black-boxes, they are also widely believed to learn (high-dimensional) higher-level representations of the original observations. In this paper, we propose using dimensionality reduction for two tasks: visualizing the relationships between learned representations of observations, and visualizing the relationships between artificial neurons. Through experiments conducted in three traditional image classification benchmark datasets, we show how visualization can provide highly valuable feedback for network designers. For instance, our discoveries in one of these datasets (SVHN) include the presence of interpretable clusters of learned representations, and the partitioning of artificial neurons into groups with apparently related discriminative roles.

  5. Neural Network Studies

    DTIC Science & Technology

    1993-07-01

basic useful theorems and general rules which apply to neural networks (in "Overview of Neural Network Theory"), studies of training time as the... ("The Neural Network, Bayes-Gaussian, and k-Nearest Neighbor Classifiers"), and an analysis of fuzzy logic and its relationship to neural networks (in "Fuzzy

  6. Electronic Neural Networks

    NASA Technical Reports Server (NTRS)

    Thakoor, Anil

    1990-01-01

    Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.

  7. Time structure of the activity in neural network models

    NASA Astrophysics Data System (ADS)

    Gerstner, Wulfram

    1995-01-01

    Several neural network models in continuous time are reconsidered in the framework of a general mean-field theory which is exact in the limit of a large and fully connected network. The theory assumes pointlike spikes which are generated by a renewal process. The effect of spikes on a receiving neuron is described by a linear response kernel which is the dominant term in a weak-coupling expansion. It is shown that the resulting ``spike response model'' is the most general renewal model with linear inputs. The standard integrate-and-fire model forms a special case. In a network structure with several pools of identical spiking neurons, the global states and the dynamic evolution are determined by a nonlinear integral equation which describes the effective interaction within and between different pools. We derive explicit stability criteria for stationary (incoherent) and oscillatory (coherent) solutions. It is shown that the stationary state of noiseless systems is ``almost always'' unstable. Noise suppresses fast oscillations and stabilizes the system. Furthermore, collective oscillations are stable only if the firing occurs while the synaptic potential is increasing. In particular, collective oscillations in a network with delayless excitatory interaction are at most semistable. Inhibitory interactions with short delays or excitatory interactions with long delays lead to stable oscillations. Our general results allow a straightforward application to different network models with spiking neurons. Furthermore, the theory allows an estimation of the errors introduced in firing rate or ``graded-response'' models.

  8. Dynamical Behaviors of Multiple Equilibria in Competitive Neural Networks With Discontinuous Nonmonotonic Piecewise Linear Activation Functions.

    PubMed

    Nie, Xiaobing; Zheng, Wei Xing

    2016-03-01

This paper addresses the problem of coexistence and dynamical behaviors of multiple equilibria for competitive neural networks. First, a general class of discontinuous nonmonotonic piecewise linear activation functions is introduced for competitive neural networks. Then, based on the fixed point theorem and the theory of strictly diagonally dominant matrices, it is shown that under some conditions such n-neuron competitive neural networks can have 5^n equilibria, among which 3^n equilibria are locally stable and the others are unstable. More importantly, it is revealed that neural networks with the discontinuous activation functions introduced in this paper can have both more total equilibria and more locally stable equilibria than ones with other activation functions, such as the continuous Mexican-hat-type activation function and the discontinuous two-level activation function. Furthermore, the 3^n locally stable equilibria given in this paper are located not only in saturated regions but also in unsaturated regions, which differs from the existing results on multistability of neural networks with multiple-level activation functions. A simulation example is provided to illustrate and validate the theoretical findings.
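A discontinuous, nonmonotonic piecewise linear activation of the general shape studied can be sketched as follows; the segment slopes, breakpoints, and jump value here are illustrative assumptions, not the paper's parameters:

```python
def dnpl(x):
    """Sketch of a discontinuous nonmonotonic piecewise linear activation:
    flat, then rising, then falling (nonmonotonic), then a jump
    (discontinuous) to a higher constant level."""
    if x < -1.0:
        return 0.0          # left saturation
    if x < 0.0:
        return x + 1.0      # rising linear segment
    if x < 1.0:
        return 1.0 - x      # falling segment: nonmonotonicity
    return 1.5              # jump discontinuity at x = 1
```

It is this combination of extra linear segments and jumps that multiplies the number of regions per neuron, which is how the count of equilibria grows to 5^n for an n-neuron network in the paper's analysis.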

  9. Self-regulated homoclinic chaos in neural networks activity

    NASA Astrophysics Data System (ADS)

    Volman, Vladislav; Baruchi, Itay; Ben-Jacob, Eshel

    2004-12-01

We compare the recorded activity of cultured neuronal networks with hybridized model simulations, in which the model neurons are driven by the recorded activity of special neurons. The latter, named `spiker' neurons, exhibit fast firing with homoclinic-chaos-like characteristics and are expected to play an important role in the networks' self-regulation. The cultured networks are grown from dissociated mixtures of cortical neurons and glia cells. Despite the artificial manner of their construction, the spontaneous activity of these networks exhibits rich dynamical behavior, marked by the formation of temporal sequences of synchronized bursting events (SBEs) and additional features which seemingly reflect the action of an underlying regulating mechanism rather than arbitrary causes and effects. Our model neurons are composed of a soma described by the two Morris-Lecar dynamical variables (voltage and fraction of open potassium channels), with dynamical synapses described by the Tsodyks-Markram three-variable dynamics. To study the recorded and simulated activities we evaluated the inter-neuron correlation matrices and analyzed them using the functional holography approach: the correlations are re-normalized by the correlation distances, i.e., the Euclidean distances between the matrix columns. Then, we project the N-dimensional (for N channels) space spanned by the matrix of re-normalized correlations, or correlation affinities, onto a corresponding 3-D causal manifold (a 3-D Cartesian space constructed from the 3 leading principal vectors of the N-dimensional space). The neurons are located by their principal eigenvalues and linked by their original (non-normalized) correlations. This reveals hidden causal motifs: the neuron locations and their links form simple structures. Similar causal motifs are exhibited by the model simulations when fed with the recorded activity of the spiker neurons. We illustrate that the homoclinic chaotic behavior of the spiker neurons can be

  10. Real-time Neural Network predictions of geomagnetic activity indices

    NASA Astrophysics Data System (ADS)

    Bala, R.; Reiff, P. H.

    2009-12-01

The Boyle potential or Boyle Index (BI), Φ (kV) = 10^-4 (V/(km/s))^2 + 11.7 (B/nT) sin^3(θ/2), is an empirically derived formula that characterizes the Earth's polar cap potential and is readily derivable in real time from ACE (Advanced Composition Explorer) solar wind data. The BI has a simple form that combines a non-magnetic "viscous" component and a magnetic "merging" component to characterize the magnetospheric response to the solar wind. We have investigated its correlation with two conventional geomagnetic activity indices, Kp and the AE index. We have shown that the logarithms of both 3-hr and 1-hr averages of the BI correlate well with the subsequent Kp, Kp = 8.93 log10(BI) - 12.55, and that the 1-hr BI correlates with the subsequent log10(AE): log10(AE) = 1.78 log10(BI) - 3.6. We have developed a new set of algorithms based on Artificial Neural Networks (ANNs) suitable for short-term space weather forecasts, with an enhanced lead time and better accuracy in predicting Kp and AE than some leading models; the algorithms omit the time history of their targets and use only the solar wind data. Inputs to our ANN models benefit from the BI and its proven record as a forecasting parameter since its initiation in October 2003. We have also performed time-sensitivity tests using cross-correlation analysis to demonstrate that our models are as efficient as those that incorporate the time history of the target indices in their inputs. Our algorithms can predict the upcoming full 3-hr Kp purely from the solar wind data and achieve a linear correlation coefficient of 0.840, which means that on average the upcoming Kp value is predicted to within 1.3 steps, approximately the resolution of the real-time Kp estimate. Our success in predicting Kp during a recent unexpected event (22 July '09) is shown in the figure. Also, when predicting an equivalent "one-hour Kp", the correlation coefficient is 0.86, meaning on average a prediction
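The Boyle index and the quoted Kp fit can be computed directly from the formulas in the abstract; the variable names are illustrative, with V the solar wind speed in km/s, B the IMF magnitude in nT, and θ the IMF clock angle in degrees:

```python
import math

def boyle_index(v_kms, b_nt, theta_deg):
    """Boyle index Phi (kV) = 1e-4 * V^2 + 11.7 * B * sin^3(theta/2),
    as given in the abstract."""
    th = math.radians(theta_deg)
    return 1e-4 * v_kms ** 2 + 11.7 * b_nt * math.sin(th / 2.0) ** 3

def predicted_kp(bi):
    """Empirical fit quoted in the abstract: Kp = 8.93 log10(BI) - 12.55."""
    return 8.93 * math.log10(bi) - 12.55
```

For a 400 km/s wind carrying a 5 nT southward field (θ = 180°), the viscous term contributes 16 kV and the merging term 58.5 kV, giving BI = 74.5 kV and a predicted Kp near 4.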

  11. Application of neural networks with orthogonal activation functions in control of dynamical systems

    NASA Astrophysics Data System (ADS)

    Nikolić, Saša S.; Antić, Dragan S.; Milojković, Marko T.; Milovanović, Miroslav B.; Perić, Staniša Lj.; Mitić, Darko B.

    2016-04-01

    In this article, we present a new method for the synthesis of almost- and quasi-orthogonal polynomials of arbitrary order. Filters designed on the basis of these functions are generators of generalised quasi-orthogonal signals, for which we derive and present the necessary mathematical background. Based on the theoretical results, we designed and practically implemented a generalised first-order (k = 1) quasi-orthogonal filter and verified its quasi-orthogonality experimentally. The designed filters can be applied in many scientific areas. In this article, the generated functions were successfully implemented as activation functions in a Nonlinear Auto-Regressive eXogenous (NARX) neural network. One practical application of the designed orthogonal neural network is demonstrated through the control of a complex nonlinear technical system, a laboratory magnetic levitation system. The obtained results were compared with neural networks using standard activation functions and orthogonal functions of trigonometric shape. The proposed network demonstrated superiority over existing solutions in terms of system performance.

  12. Generalized classifier neural network.

    PubMed

    Ozyildirim, Buse Melis; Avci, Mutlu

    2013-03-01

    In this work a new radial-basis-function-based classification neural network, named the generalized classifier neural network, is proposed. Unlike other radial basis function based neural networks, such as the generalized regression neural network and the probabilistic neural network, the proposed generalized classifier neural network has five layers: input, pattern, summation, normalization and output layers. In addition to the topological difference, the proposed neural network features gradient-descent-based optimization of the smoothing parameter and an added diverge effect term in its calculations. The diverge effect term is an improvement to the summation-layer calculation that supplies additional separation ability and flexibility. The performance of the generalized classifier neural network is compared with that of the probabilistic neural network, the multilayer perceptron algorithm and the radial basis function neural network on 9 different data sets, and with that of the generalized regression neural network on 3 different data sets that include only two classes, in the MATLAB environment. Better classification performance, up to 89%, is observed. The improved classification performance demonstrates the effectiveness of the proposed neural network.
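    For context, the classical probabilistic neural network (PNN) used as a baseline in this abstract reduces to a Parzen-kernel decision rule. The sketch below implements only that standard rule (not the proposed generalized classifier network, whose diverge effect term is not specified in the abstract); the Gaussian kernel, the smoothing parameter `sigma` and the toy training data are assumptions.

```python
import math

def pnn_classify(x, train, sigma=0.5):
    """Classical PNN decision rule: each class score is the average of
    Gaussian (Parzen) kernels over that class's training vectors; the
    class with the largest score wins."""
    def kernel(a, b):
        d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return math.exp(-d2 / (2 * sigma ** 2))
    scores = {c: sum(kernel(x, p) for p in pts) / len(pts)
              for c, pts in train.items()}
    return max(scores, key=scores.get)

# Toy two-class training set (illustrative data).
train = {
    0: [(0.0, 0.0), (0.2, 0.1), (-0.1, 0.2)],
    1: [(2.0, 2.0), (2.1, 1.8), (1.9, 2.2)],
}
label_a = pnn_classify((0.1, 0.0), train)   # point near the class-0 cluster
label_b = pnn_classify((2.0, 2.1), train)   # point near the class-1 cluster
```

    The smoothing parameter `sigma` is exactly the quantity the proposed network tunes by gradient descent rather than fixing by hand.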

  13. Nonlinearly Activated Neural Network for Solving Time-Varying Complex Sylvester Equation.

    PubMed

    Li, Shuai; Li, Yangming

    2013-10-28

    The Sylvester equation is often encountered in mathematics and control theory. For the general time-invariant Sylvester equation problem, which is defined in the domain of complex numbers, the Bartels-Stewart algorithm and its extensions are effective and widely used, with an O(n³) time complexity. When applied to solving the time-varying Sylvester equation, however, the computational burden increases rapidly as the sampling period decreases and cannot satisfy continuous real-time calculation requirements. For the special case of the general Sylvester equation problem defined in the domain of real numbers, gradient-based recurrent neural networks are able to solve the time-varying Sylvester equation in real time, but there always exists an estimation error, whereas the recurrent neural network recently proposed by Zhang et al. [termed the Zhang neural network (ZNN)] converges to the solution ideally. Advancements in complex-valued neural networks make it possible to extend the existing real-valued ZNN for the time-varying real-valued Sylvester equation to its counterpart in the domain of complex numbers. In this paper, a complex-valued ZNN for solving the complex-valued Sylvester equation problem is investigated, and the global convergence of the neural network is proven with the proposed nonlinear complex-valued activation functions. Moreover, a special type of activation function with a core function, called the sign-bi-power function, is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing. In this case, the upper bound of the convergence time is also derived analytically. Simulations are performed to evaluate and compare the performance of the neural network with different parameters and activation functions. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed method.
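    A minimal real-valued, scalar analogue of the ZNN design idea can be sketched as follows. This is not the paper's complex-valued Sylvester solver: it solves the scalar time-varying equation a(t)·x(t) = b(t), using a sign-bi-power activation in the form commonly seen in the ZNN literature, phi(e) = |e|^r sign(e) + |e|^(1/r) sign(e) with 0 < r < 1; the signals a(t), b(t), the gain gamma and the step size are illustrative choices.

```python
import math

def sbp(e, r=0.5):
    """Sign-bi-power activation (a common form in the ZNN literature):
    phi(e) = |e|^r sign(e) + |e|^(1/r) sign(e), with 0 < r < 1."""
    s = math.copysign(1.0, e) if e != 0 else 0.0
    return s * (abs(e) ** r + abs(e) ** (1.0 / r))

# Scalar ZNN for a(t) x(t) = b(t): pick xdot so that the error
# e = a x - b obeys edot = -gamma * phi(e), which drives e to zero
# in finite time.  Here a(t) = 2 + sin t, b(t) = cos t (toy signals).
gamma, dt = 10.0, 1e-3
x, t = 0.0, 0.0
for _ in range(2000):                      # explicit Euler up to t = 2
    a, b = 2.0 + math.sin(t), math.cos(t)
    adot, bdot = math.cos(t), -math.sin(t)
    e = a * x - b
    xdot = (-gamma * sbp(e) - adot * x + bdot) / a
    x += dt * xdot
    t += dt

residual = abs((2.0 + math.sin(t)) * x - math.cos(t))
```

    Because xdot is solved from the prescribed error dynamics, the tracking error decays to (numerical) zero and stays there even though a(t) and b(t) keep changing, which is the property a gradient-based network lacks.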

  14. Fractal patterns of neural activity exist within the suprachiasmatic nucleus and require extrinsic network interactions.

    PubMed

    Hu, Kun; Meijer, Johanna H; Shea, Steven A; vanderLeest, Henk Tjebbe; Pittman-Polletta, Benjamin; Houben, Thijs; van Oosterhout, Floor; Deboer, Tom; Scheer, Frank A J L

    2012-01-01

    The mammalian central circadian pacemaker (the suprachiasmatic nucleus, SCN) contains thousands of neurons that are coupled through a complex network of interactions. In addition to the established role of the SCN in generating rhythms of ~24 hours in many physiological functions, the SCN was recently shown to be necessary for normal self-similar/fractal organization of motor activity and heart rate over a wide range of time scales, from minutes to 24 hours. To test whether the neural network within the SCN is sufficient to generate such fractal patterns, we studied multi-unit neural activity of in vivo and in vitro SCNs in rodents. In vivo SCN-neural activity exhibited fractal patterns that are virtually identical in mice and rats and are similar to those in motor activity at time scales from minutes up to 10 hours. In addition, these patterns remained unchanged when the main afferent signal to the SCN, namely light, was removed. However, the fractal patterns of SCN-neural activity are not autonomous within the SCN, as these patterns completely broke down in the isolated in vitro SCN despite persistence of circadian rhythmicity. Thus, SCN-neural activity is fractal in the intact organism, and these fractal patterns require network interactions between the SCN and extra-SCN nodes. Such a fractal control network could underlie the fractal regulation observed in many physiological functions that involve the SCN, including motor control and heart rate regulation.

  15. [Artificial neural networks in Neurosciences].

    PubMed

    Porras Chavarino, Carmen; Salinas Martínez de Lecea, José María

    2011-11-01

    This article shows that artificial neural networks can be used to confirm relationships between physiological and cognitive changes. Specifically, we explore the influence of a decrease in neurotransmitters on the behaviour of elderly people in recognition tasks. This artificial neural network recognizes learned patterns. When we change the activation threshold in some units, the artificial neural network simulates the experimental results of elderly people in recognition tasks. However, the main contributions of this paper are the design of an artificial neural network whose operation is inspired by the nervous system, the way the inputs are coded, and the process of orthogonalization of patterns.

  16. New exponential synchronization criteria for time-varying delayed neural networks with discontinuous activations.

    PubMed

    Cai, Zuowei; Huang, Lihong; Zhang, Lingling

    2015-05-01

    This paper investigates the problem of exponential synchronization of time-varying delayed neural networks with discontinuous neuron activations. Under the extended Filippov differential inclusion framework, by designing a discontinuous state-feedback controller and using some analytic techniques, new testable algebraic criteria are obtained to realize two different kinds of global exponential synchronization of the drive-response system. Moreover, we give the estimated rate of exponential synchronization, which depends on the delays and system parameters. The obtained results extend some previous works on the synchronization of delayed neural networks with continuous as well as discontinuous activations. Finally, numerical examples are provided to show the correctness of our analysis via computer simulations. Our method and theoretical results are of guiding significance for the design of synchronized neural network circuits involving discontinuous factors and time-varying delays.

  17. Neural networks counting chimes.

    PubMed Central

    Amit, D J

    1988-01-01

    It is shown that the ideas that led to neural networks capable of recalling associatively and asynchronously temporal sequences of patterns can be extended to produce a neural network that automatically counts the cardinal number in a sequence of identical external stimuli. The network is explicitly constructed, analyzed, and simulated. Such a network may account for the cognitive effect of the automatic counting of chimes to tell the hour. A more general implication is that different electrophysiological responses to identical stimuli, at certain stages of cortical processing, do not necessarily imply synaptic modification, à la Hebb. Such differences may arise from the fact that consecutive identical inputs find the network in different stages of an active temporal sequence of cognitive states. These types of networks are then situated within a program for the study of cognition, which assigns the detection of meaning, rather than computation, as the primary role of attractor neural networks, in contrast to the parallel distributed processing attitude to the connectionist project. This interpretation is free of a homunculus, as well as of the criticism raised against the cognitive model of symbol manipulation. Computation is then identified as the syntax of temporal sequences of quasi-attractors. PMID:3353371

  18. Noise influence on spike activation in a Hindmarsh-Rose small-world neural network

    NASA Astrophysics Data System (ADS)

    Zhe, Sun; Micheletto, Ruggero

    2016-07-01

    We studied the role of noise in neural networks, focusing especially on its relation to the propagation of spike activity in a small-sized system. We set up a source of information using a single neuron that is constantly spiking. This element, called the initiator x_0, feeds spikes to the rest of the network, which is initially quiescent and subsequently reacts with vigorous spiking after a transitional period of time. We found that noise quickly suppresses the initiator's influence and favors spontaneous spike activity and, using a decibel representation of noise intensity, we established a linear relationship between noise amplitude and the interval between the initiator's first spike and the activation of the rest of the network. We studied the same process with networks of different sizes (numbers of neurons) and found that the initiator x_0 has a measurable influence on small networks, but as the network grows in size, spontaneous spiking emerges and disrupts its effects on networks of more than about N = 100 neurons. This suggests that the mechanism of internal noise generation allows information transmission within a small neural neighborhood, but decays for bigger network domains. We also analyzed the Fourier spectrum of the whole-network membrane potential and verified that noise provokes the reduction of the main θ and α peaks before the transition into chaotic spiking. Network size, however, does not produce a similar phenomenon; instead we recorded a reduction in peak amplitude and a better sharpness and definition of the Fourier peaks, but not the evident degeneration to chaos observed with increasing external noise. This work aims to contribute to the understanding of the fundamental mechanisms of propagation of spontaneous spiking in neural networks and gives a quantitative assessment of how noise can be used to control and modulate this phenomenon in Hindmarsh-Rose (H-R) neural networks.
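    A single Hindmarsh-Rose neuron with a small additive noise term can be simulated in a few lines. The sketch below uses the standard H-R equations with commonly cited parameter values; the network coupling, the exact noise model and the spike-detection threshold used in the paper are not given in the abstract, so the ones here are assumptions.

```python
import math, random

random.seed(42)

# Standard Hindmarsh-Rose parameters (commonly used values; the paper's
# network coupling is omitted in this single-neuron sketch).
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_rest, I = 0.006, 4.0, -1.6, 3.0
noise_amp = 0.05           # assumed small Gaussian membrane noise

x, y, z = -1.6, 0.0, 0.0
dt, steps = 0.01, 100_000  # integrate 1000 time units with explicit Euler
spikes, above = 0, False
for _ in range(steps):
    dx = y - a * x**3 + b * x**2 - z + I + noise_amp * random.gauss(0, 1)
    dy = c - d * x**2 - y
    dz = r * (s * (x - x_rest) - z)
    x += dt * dx; y += dt * dy; z += dt * dz
    if x > 1.0 and not above:   # upward threshold crossing = one spike
        spikes += 1; above = True
    elif x < 0.0:
        above = False
```

    With the slow-variable rate r = 0.006 the neuron bursts: groups of spikes separated by quiescent intervals, the regime in which the initiator/noise competition described above plays out.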

  19. Solar geomagnetic activity prediction using the fractal analysis and neural network

    NASA Astrophysics Data System (ADS)

    Ouadfeul, Sid-Ali; Aliouane, Leila

    2010-05-01

    The main goal of this work is to predict Solar geomagnetic field activity using a neural network combined with fractal analysis. First, a multilayer perceptron neural network model is proposed to predict the future geomagnetic field; the inputs of this machine are the geographic coordinates and the time, and the outputs are the three geomagnetic field components and the total field intensity recorded by the Orsted satellite mission. Hölder exponents of the measured geomagnetic field components and the total field intensity are calculated using the continuous wavelet transform. The set of Hölder exponents is used to train a Kohonen Self-Organizing Map (SOM), which becomes a classifier of the nature of solar magnetic activity. The SOM neural machine is then used to predict future solar magnetic storms; in this step the input is the calculated set of Hölder exponents of the predicted geomagnetic field components and the total field intensity. The obtained results show that the proposed technique is a powerful tool that can enhance the prediction of solar magnetic field activity. Keywords: Solar geomagnetic activity, neural network, prediction, Orsted, Hölder exponents, Solar magnetic storms.
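    The classification stage described above rests on a Kohonen Self-Organizing Map. The sketch below trains a minimal 1-D SOM on synthetic scalar values standing in for Hölder exponents of two activity regimes; the regime means, the unit count, the learning-rate schedule and the neighborhood function are all illustrative assumptions (the wavelet-based Hölder estimation itself is not reproduced).

```python
import random

random.seed(1)

# Synthetic stand-ins for Holder exponents: two activity regimes.
quiet = [random.gauss(0.8, 0.05) for _ in range(200)]
storm = [random.gauss(0.3, 0.05) for _ in range(200)]
data = quiet + storm
random.shuffle(data)

# Minimal 1-D Kohonen SOM: a line of units competing for scalar inputs.
units = [random.random() for _ in range(4)]
epochs = 30
for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)       # decaying learning rate
    for v in data:
        # Winner = the unit closest to the input.
        w = min(range(len(units)), key=lambda i: abs(units[i] - v))
        for i in range(len(units)):
            # Simple neighborhood: full update for the winner,
            # a weaker pull for its immediate neighbors on the line.
            h = 1.0 if i == w else (0.3 if abs(i - w) == 1 else 0.0)
            units[i] += lr * h * (v - units[i])

units.sort()
```

    After training, the unit prototypes spread over the two regimes, so an unseen exponent can be classified by its nearest unit, which is how the SOM acts as an activity-class detector here.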

  20. Optogenetics in Silicon: A Neural Processor for Predicting Optically Active Neural Networks.

    PubMed

    Luo, Junwen; Nikolic, Konstantin; Evans, Benjamin D; Dong, Na; Sun, Xiaohan; Andras, Peter; Yakovlev, Alex; Degenaar, Patrick

    2016-08-17

    We present a reconfigurable neural processor for real-time simulation and prediction of opto-neural behaviour. We combined a detailed Hodgkin-Huxley CA3 neuron integrated with a four-state Channelrhodopsin-2 (ChR2) model into reconfigurable silicon hardware. Our architecture consists of a Field Programmable Gate Array (FPGA) with a custom-built computing data-path, a separate data management system and a memory-based router. Advancements over previous work include the incorporation of short- and long-term calcium- and light-dependent ion channels in reconfigurable hardware. Moreover, the developed processor is computationally efficient, requiring only 0.03 ms of processing time per sub-frame for a single neuron and 9.7 ms for a fully connected network of 500 neurons at an FPGA frequency of 56.7 MHz. It can therefore be utilized for the exploration of closed-loop processing and the tuning of biologically realistic optogenetic circuitry.

  1. Nonlinear Neural Network Oscillator.

    DTIC Science & Technology

    A nonlinear oscillator (10) includes a neural network (12) having at least one output (12a) for outputting a one dimensional vector. The neural ... neural network and the input of the input layer for modifying a magnitude and/or a polarity of the one dimensional output vector prior to the sample of...first or a second direction. Connection weights of the neural network are trained on a deterministic sequence of data from a chaotic source or may be a

  2. Performance of Deep and Shallow Neural Networks, the Universal Approximation Theorem, Activity Cliffs, and QSAR.

    PubMed

    Winkler, David A; Le, Tu C

    2017-01-01

    Neural networks have generated valuable Quantitative Structure-Activity/Property Relationship (QSAR/QSPR) models for a wide variety of small molecules and materials properties. They have grown in sophistication, and many of their initial problems have been overcome by modern mathematical techniques. QSAR studies have almost always used so-called "shallow" neural networks, in which there is a single hidden layer between the input and output layers. Recently, a new and potentially paradigm-shifting type of neural network based on Deep Learning has appeared. Deep learning methods have generated impressive improvements in image and voice recognition, and are now being applied to QSAR and QSPR modelling. This paper describes the differences in approach between deep and shallow neural networks, compares their abilities to predict the properties of test sets for 15 large drug data sets (the Kaggle set), discusses the results in terms of the Universal Approximation Theorem for neural networks, and describes how deep neural networks (DNN) may ameliorate or remove troublesome "activity cliffs" in QSAR data sets.

  3. Nonsmooth finite-time stabilization of neural networks with discontinuous activations.

    PubMed

    Liu, Xiaoyang; Park, Ju H; Jiang, Nan; Cao, Jinde

    2014-04-01

    This paper is concerned with the finite-time stabilization for a class of neural networks (NNs) with discontinuous activations. The purpose of the addressed problem is to design a discontinuous controller to stabilize the states of such neural networks in finite time. Unlike the previous works, such stabilization objective will be realized for neural networks when the activations and controllers are both discontinuous. Based on the famous finite-time stability theorem of nonlinear systems and nonsmooth analysis in mathematics, sufficient conditions are established to ensure the finite-time stability of the dynamics of NNs. Then, the upper bound of the settling time for stabilization can be estimated in two forms due to two different methods of proof. Finally, two numerical examples are given to illustrate the effectiveness of the proposed design method.

  4. Improved training of neural networks for the nonlinear active control of sound and vibration.

    PubMed

    Bouchard, M; Paillard, B; Le Dinh, C T

    1999-01-01

    Active control of sound and vibration has been the subject of considerable research in recent years, and examples of applications are now numerous. However, few practical implementations of nonlinear active controllers have been realized. Nonlinear active controllers may be required in cases where the actuators used in active control systems exhibit nonlinear characteristics, or when the structure to be controlled exhibits nonlinear behavior. A multilayer-perceptron neural-network-based control structure was previously introduced as a nonlinear active controller, with a training algorithm based on an extended backpropagation scheme. This paper introduces new heuristic training algorithms for the same neural-network control structure. The objective is to develop new algorithms with faster convergence speed (by using nonlinear recursive-least-squares algorithms) and/or lower computational loads (by using an alternative approach to compute the instantaneous gradient of the cost function). Experimental results of active sound control using a nonlinear actuator with linear and nonlinear controllers are presented. The results show that some of the new algorithms can greatly improve the learning rate of the neural-network control structure, and that for the considered experimental setup a neural-network controller can outperform linear controllers.

  5. An investigation of the relationship between activation of a social cognitive neural network and social functioning.

    PubMed

    Pinkham, Amy E; Hopfinger, Joseph B; Ruparel, Kosha; Penn, David L

    2008-07-01

    Previous work examining the neurobiological substrates of social cognition in healthy individuals has reported modulation of a social cognitive network such that increased activation of the amygdala, fusiform gyrus, and superior temporal sulcus is evident when individuals judge a face to be untrustworthy as compared with trustworthy. We examined whether this pattern would be present in individuals with schizophrenia, who are known to show reduced activation within these same neural regions when processing faces. Additionally, we sought to determine how modulation of this social cognitive network may relate to social functioning. Neural activation was measured using functional magnetic resonance imaging with blood oxygenation level dependent contrast in 3 groups of individuals (nonparanoid individuals with schizophrenia, paranoid individuals with schizophrenia, and healthy controls) while they rated faces as either trustworthy or untrustworthy. Analyses of mean percent signal change extracted from a priori regions of interest demonstrated that both controls and nonparanoid individuals with schizophrenia showed greater activation of this social cognitive network when they rated a face as untrustworthy relative to trustworthy. In contrast, paranoid individuals did not show a significant difference in levels of activation based on how they rated faces. Further, greater activation of this social cognitive network to untrustworthy faces was significantly and positively correlated with social functioning. These findings indicate that impaired modulation of neural activity while processing social stimuli may underlie deficits in social cognition and social dysfunction in schizophrenia.

  6. Neural Network Hurricane Tracker

    DTIC Science & Technology

    1998-05-27

    data about the hurricane and supplying the data to a trained neural network for yielding a predicted path for the hurricane. The system further includes...a device for displaying the predicted path of the hurricane. A method for using and training the neural network in the system is described. In the...method, the neural network is trained using information about hurricanes in a specific geographical area maintained in a database. The training involves

  7. Sustained activity in hierarchical modular neural networks: self-organized criticality and oscillations.

    PubMed

    Wang, Sheng-Jun; Hilgetag, Claus C; Zhou, Changsong

    2011-01-01

    Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. In particular, they are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are at the state of self-organized criticality (SOC). We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. Previously, it was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We found that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and SOC, which are not present in the respective random networks. The mechanism underlying the sustained activity is that each dense module cannot sustain activity on its own, but displays SOC in the presence of weak perturbations. Therefore, the hierarchical modular networks provide the coupling among subsystems with SOC. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of critical states and the predictability and timing of oscillations for efficient information

  8. Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure.

    PubMed

    Li, Xiumin; Small, Michael

    2012-06-01

    A neuronal avalanche is a form of spontaneous neuronal activity that obeys a power-law distribution of population event sizes with an exponent of -3/2. It has been observed in the superficial layers of cortex both in vivo and in vitro. In this paper, we analyze the information transmission of a novel self-organized neural network with an active-neuron-dominant structure. Neuronal avalanches can be observed in this network with appropriate input intensity. We find that the process of network learning via spike-timing-dependent plasticity dramatically increases the complexity of the network structure, which finally self-organizes into active-neuron-dominant connectivity. Both the entropy of activity patterns and the complexity of their resulting post-synaptic inputs are maximized when the network dynamics propagate as neuronal avalanches. This emergent topology is beneficial for information transmission with high efficiency and could also be responsible for the large information capacity of this network compared with alternative archetypal networks with different neural connectivity.
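    The -3/2 avalanche exponent mentioned in the abstract can be recovered from event-size data with the standard continuous maximum-likelihood estimator. The sketch below does this on synthetic sizes drawn by inverse-transform sampling; the data are artificial, not the paper's.

```python
import math, random

random.seed(7)

# Draw avalanche sizes from a continuous power law P(s) ~ s^(-3/2), s >= s_min,
# by inverse-transform sampling: s = s_min * (1 - u)^(-1/(alpha - 1)).
alpha_true, s_min, n = 1.5, 1.0, 20000
sizes = [s_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
         for _ in range(n)]

# Continuous maximum-likelihood estimator for the power-law exponent:
# alpha_hat = 1 + n / sum(ln(s_i / s_min)).
alpha_hat = 1.0 + n / sum(math.log(s_i / s_min) for s_i in sizes)
```

    With 20,000 samples the estimate lands within a few thousandths of 1.5, illustrating how the -3/2 signature of criticality is verified from recorded event sizes.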

  9. Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure

    NASA Astrophysics Data System (ADS)

    Li, Xiumin; Small, Michael

    2012-06-01

    A neuronal avalanche is a form of spontaneous neuronal activity that obeys a power-law distribution of population event sizes with an exponent of -3/2. It has been observed in the superficial layers of cortex both in vivo and in vitro. In this paper, we analyze the information transmission of a novel self-organized neural network with an active-neuron-dominant structure. Neuronal avalanches can be observed in this network with appropriate input intensity. We find that the process of network learning via spike-timing-dependent plasticity dramatically increases the complexity of the network structure, which finally self-organizes into active-neuron-dominant connectivity. Both the entropy of activity patterns and the complexity of their resulting post-synaptic inputs are maximized when the network dynamics propagate as neuronal avalanches. This emergent topology is beneficial for information transmission with high efficiency and could also be responsible for the large information capacity of this network compared with alternative archetypal networks with different neural connectivity.

  10. Studies in Neural Networks

    DTIC Science & Technology

    1991-01-01

    Contract No.: N00014-87-K-0377. TITLE: "Studies in Neural Networks". Final Report, July 1991. ...have been very useful, both in understanding the dynamics of neural networks and in engineering networks to perform particular tasks. We have noted...understanding more complex network computation. Interest in applying ideas from biological neural networks to real problems of engineering raises the issues of

  11. Exponential stability of delayed and impulsive cellular neural networks with partially Lipschitz continuous activation functions.

    PubMed

    Song, Xueli; Xin, Xing; Huang, Wenpo

    2012-05-01

    The paper discusses the exponential stability of distributed delayed and impulsive cellular neural networks with partially Lipschitz continuous activation functions. By the relative nonlinear measure method, some novel criteria are obtained for the uniqueness and exponential stability of the equilibrium point. Our method abandons the usual assumptions of global Lipschitz continuity, boundedness and monotonicity of the activation functions. Our results generalize and improve some existing ones. Finally, two examples and their simulations are presented to illustrate the correctness of our analysis.

  12. Emergence of gamma motor activity in an artificial neural network model of the corticospinal system.

    PubMed

    Grandjean, Bernard; Maier, Marc A

    2017-02-01

    Muscle spindle discharge during active movement is a function of mechanical and neural parameters. Muscle length changes (and their derivatives) represent its primary mechanical component, and fusimotor drive its neural component. However, neither the action nor the function of fusimotor drive, and in particular of γ-drive, has been clearly established, since γ-motor activity during voluntary, non-locomotor movements remains largely unknown. Here, using a computational approach, we explored whether γ-drive emerges in an artificial neural network model of the corticospinal system linked to a biomechanical antagonist wrist simulator. The wrist simulator included length-sensitive and γ-drive-dependent type Ia and type II muscle spindle activity. Network activity and connectivity were derived by a gradient descent algorithm to generate reciprocal, known target α-motor unit activity during wrist flexion-extension (F/E) movements. Two tasks were simulated: an alternating F/E task and a slow F/E tracking task. Emergence of γ-motor activity in the alternating F/E network was a function of α-motor unit drive: if muscle afferent (together with supraspinal) input was required for driving α-motor units, then γ-drive emerged in the form of α-γ coactivation, as predicted by empirical studies. In the slow F/E tracking network, γ-drive emerged in the form of α-γ dissociation and provided critical, bidirectional muscle afferent activity to the cortical network, which contained known bidirectional target units. The model thus demonstrates the complementary aspects of spindle output and hence γ-drive: i) muscle spindle activity as a driving force of α-motor unit activity, and ii) afferent activity providing continuous sensory information, both of which crucially depend on γ-drive.

  13. Video-based convolutional neural networks for activity recognition from robot-centric videos

    NASA Astrophysics Data System (ADS)

    Ryoo, M. S.; Matthies, Larry

    2016-05-01

    In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs on first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.
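    The first family of architectures mentioned, CNNs with 3-D XYT convolutional filters, boils down to convolving a filter jointly over the two spatial axes and time. The sketch below applies one hand-written 3-D kernel to a toy grayscale clip (no learning involved); the video contents, the kernel and the sizes are illustrative assumptions.

```python
# Toy video: T frames of H x W grayscale pixels, as nested lists.
T, H, W = 6, 8, 8
video = [[[float((t + y + x) % 5) for x in range(W)]
          for y in range(H)] for t in range(T)]

# One 3-D XYT filter (kt x kh x kw).  This particular kernel computes a
# temporal difference of spatial averages: (mean of frame t+2 window)
# minus (mean of frame t window), a crude motion detector.
kt, kh, kw = 3, 3, 3
kernel = [[[(1.0 if t == 2 else -1.0 if t == 0 else 0.0) / 9.0
            for _ in range(kw)] for _ in range(kh)] for t in range(kt)]

def conv3d(vol, ker):
    """Valid 3-D convolution (really cross-correlation, as in CNN practice)."""
    T, H, W = len(vol), len(vol[0]), len(vol[0][0])
    kt, kh, kw = len(ker), len(ker[0]), len(ker[0][0])
    out = []
    for t in range(T - kt + 1):
        frame = []
        for y in range(H - kh + 1):
            row = []
            for x in range(W - kw + 1):
                s = sum(vol[t + dt][y + dy][x + dx] * ker[dt][dy][dx]
                        for dt in range(kt) for dy in range(kh)
                        for dx in range(kw))
                row.append(s)
            frame.append(row)
        out.append(frame)
    return out

feat = conv3d(video, kernel)   # shape (T-2) x (H-2) x (W-2)
```

    In a real XYT CNN many such kernels are learned and stacked, but each one performs exactly this joint space-time sliding product, which is what lets the filter respond to motion patterns rather than single-frame appearance.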

  14. Model Of Neural Network With Creative Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Barhen, Jacob

    1993-01-01

    Paper presents an analysis of a mathematical model of a one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of a suitably designed neural network, believed to be related to the spontaneity and creativity of biological neural networks.

  15. Forecast and restoration of geomagnetic activity indices by using the software-computational neural network complex

    NASA Astrophysics Data System (ADS)

    Barkhatov, Nikolay; Revunov, Sergey

    2010-05-01

    It is known that the currently used indices of geomagnetic activity reflect, to some extent, the physical processes occurring in the interaction of the perturbed solar wind with Earth's magnetosphere. They are therefore connected to each other and to the parameters of near-Earth space, and establishing such nonlinear connections is of interest. When the physical problem is complex or has many parameters, the technology of artificial neural networks can be applied for such purposes. We use this approach to develop an automated method for the forecast and restoration of geomagnetic activity indices, implemented as a software-computational neural network complex. Each neural network experiment carried out with this complex aims to find a specific nonlinear relation between the analyzed indices and parameters. At the core of the program is a scheme combining artificial neural networks (ANN) of different types: a back-propagation Elman network, a feed-forward network, a fuzzy logic network and a Kohonen classification layer. The settings available in the main window of the application allow the user to change the number of hidden layers, the number of neurons per layer, the input and target data, and the number of training cycles. The training process and its quality are monitored through a dynamic plot of the training error, and training concludes with a plot comparing the network response with the test sequence. The last-trained neural network, with its established nonlinear connection, can then be rerun in repeated numerical experiments: no additional training is executed, and the previously trained network acts as a filter through which the input parameters are passed and the output parameters are compared with the test event. To support large numbers of different experiments, the program can also be run in a "batch" mode. For this purpose the user a

  16. Modeling the Dynamics of Human Brain Activity with Recurrent Neural Networks

    PubMed Central

    Güçlü, Umut; van Gerven, Marcel A. J.

    2017-01-01

    Encoding models are used for predicting brain activity in response to sensory stimuli with the objective of elucidating how sensory information is represented in the brain. Encoding models typically comprise a nonlinear transformation of stimuli to features (feature model) and a linear convolution of features to responses (response model). While there has been extensive work on developing better feature models, the work on developing better response models has been rather limited. Here, we investigate the extent to which recurrent neural network models can use their internal memories for nonlinear processing of arbitrary feature sequences to predict feature-evoked response sequences as measured by functional magnetic resonance imaging. We show that the proposed recurrent neural network models can significantly outperform established response models by accurately estimating long-term dependencies that drive hemodynamic responses. The results open a new window into modeling the dynamics of brain activity in response to sensory stimuli. PMID:28232797
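Since no code accompanies the abstract, a minimal numpy sketch may clarify the shape of such a recurrent response model: an internal state carries feature history forward, so the predicted response at each time step can depend on the whole feature sequence. All dimensions, weights, and names below are illustrative assumptions, not the authors' architecture; a real model would learn the weights from stimulus-response data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 4 stimulus features per time step, 8 hidden units, 1 voxel.
n_feat, n_hid = 4, 8
W_in = rng.normal(scale=0.3, size=(n_hid, n_feat))   # feature -> hidden
W_rec = rng.normal(scale=0.3, size=(n_hid, n_hid))   # hidden -> hidden (memory)
w_out = rng.normal(scale=0.3, size=n_hid)            # hidden -> predicted response

def predict_response(features):
    """Map a (T, n_feat) feature sequence to a length-T response sequence.

    The recurrent state lets the prediction at time t depend on the whole
    feature history, which is how such a model can capture long-term
    dependencies that a purely feedforward response model cannot.
    """
    h = np.zeros(n_hid)
    out = []
    for x in features:
        h = np.tanh(W_in @ x + W_rec @ h)
        out.append(w_out @ h)
    return np.array(out)

responses = predict_response(rng.normal(size=(20, n_feat)))
print(responses.shape)  # (20,)
```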

  17. Global asymptotical stability of continuous-time delayed neural networks without global Lipschitz activation functions

    NASA Astrophysics Data System (ADS)

    Tan, Yong; Tan, Mingjia

    2009-11-01

    This paper investigates the global asymptotic stability of the equilibrium for a class of continuous-time neural networks with delays. Based on suitable Lyapunov functionals and homeomorphism theory, some sufficient conditions for the existence and uniqueness of the equilibrium point are derived. These results extend previous works without assuming boundedness or Lipschitz conditions on the activation functions, or any symmetry of the interconnections. A numerical example is also given to show the improvements of the paper.

  18. Probabilistic Analysis of Neural Networks

    DTIC Science & Technology

    1990-11-26

    provide an understanding of the basic mechanisms of learning and recognition in neural networks. The main areas of progress were analysis of neural network models, study of network connectivity, and investigation of computer network theory.

  19. Neural networks for aircraft control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  20. Analytically tractable studies of traveling waves of activity in integrate-and-fire neural networks.

    PubMed

    Zhang, Jie; Osan, Remus

    2016-05-01

    In contrast to other large-scale network models for the propagation of electrical activity in neural tissue, which have no analytical solutions for their dynamics, we show that for a specific class of integrate-and-fire neural networks the acceleration depends quadratically on the instantaneous speed of activity propagation. We use this property to analytically compute the network spike dynamics and to highlight the emergence of a natural time scale for the evolution of the traveling waves. These results allow us to examine other applications of this model, such as the effect that a nonconductive gap of tissue has on further activity propagation. Furthermore, we show that activity propagation also depends on local conditions for other, more general connectivity functions, by converting the evolution equations for the network dynamics into a low-dimensional system of ordinary differential equations. This approach greatly enhances our intuition into the mechanisms of traveling wave evolution and significantly reduces the simulation time for this class of models.
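A toy numerical illustration of the low-dimensional reduction described above: if the front's acceleration is taken to be proportional to the square of its instantaneous speed, dc/dt = -k*c^2 (the sign and the constants here are hypothetical choices so that the wave decelerates, not values from the paper), the dynamics collapse to a one-dimensional ODE with closed form c(t) = c0/(1 + k*c0*t) and natural time scale 1/(k*c0).

```python
import numpy as np

# Hedged toy model: front speed c obeys dc/dt = -k * c**2, integrated by
# forward Euler and checked against the closed-form solution.  The constants
# k and c0 are illustrative only.
k, c0, dt = 0.5, 2.0, 1e-4
t = np.arange(0.0, 3.0, dt)

c = np.empty_like(t)
c[0] = c0
for i in range(1, t.size):           # forward-Euler integration
    c[i] = c[i - 1] - dt * k * c[i - 1] ** 2

c_exact = c0 / (1.0 + k * c0 * t)    # closed-form solution
print(np.max(np.abs(c - c_exact)))   # small discretization error
```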

  2. Optimal Recognition Method of Human Activities Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Oniga, Stefan; József, Sütő

    2015-12-01

    The aim of this research is an exhaustive analysis of the various factors that may influence the recognition rate of human activity using wearable sensor data. We made a total of 1674 simulations on a publicly released human activity database created by a group of researchers from the University of California at Berkeley. In a previous research, we analyzed the influence of the number of sensors and their placement. In the present research, we have examined the influence of the number of sensor nodes, the type of sensor node, preprocessing algorithms, and the type of classifier and its parameters. The final purpose is to find the optimal setup for the best recognition rates with the lowest hardware and software costs.

  3. Active vibration control of flexible cantilever plates using piezoelectric materials and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Abdeljaber, Osama; Avci, Onur; Inman, Daniel J.

    2016-02-01

    The study presented in this paper introduces a new intelligent methodology to mitigate the vibration response of flexible cantilever plates. The use of the piezoelectric sensor/actuator pairs for active control of plates is discussed. An intelligent neural network based controller is designed to control the optimal voltage applied on the piezoelectric patches. The control technique utilizes a neurocontroller along with a Kalman Filter to compute the appropriate actuator command. The neurocontroller is trained based on an algorithm that incorporates a set of emulator neural networks which are also trained to predict the future response of the cantilever plate. Then, the neurocontroller is evaluated by comparing the uncontrolled and controlled responses under several types of dynamic excitations. It is observed that the neurocontroller reduced the vibration response of the flexible cantilever plate significantly; the results demonstrated the success and robustness of the neurocontroller independent of the type and distribution of the excitation force.

  4. Determination of DPPH free radical scavenging activity: application of artificial neural networks.

    PubMed

    Musa, Khalid Hamid; Abdullah, Aminah; Al-Haiqi, Ahmed

    2016-03-01

    A new computational approach for the determination of 2,2-diphenyl-1-picrylhydrazyl free radical scavenging activity (DPPH-RSA) in food is reported, based on the concept of machine learning. Trolox standard was mixed with DPPH at different concentrations to produce a range of colors from purple to yellow. An artificial neural network (ANN) was trained on a typical set of images of the DPPH radical reacting with different levels of Trolox, which allowed the network to classify future images of any sample into the correct RSA-level class. The ANN was then able to determine the DPPH-RSA of cinnamon, clove, mung bean, red bean, red rice, brown rice, black rice and tea extract, and the results were compared with data obtained using a spectrophotometer. The ANN results correlated well with the classical spectrophotometric procedure; the approach thus does not require a spectrophotometer and can be used to obtain semi-quantitative DPPH-RSA results.
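The classification step can be sketched with a minimal softmax classifier mapping mean RGB values to an RSA level. This is a hedged stand-in for the paper's ANN: the colors, class boundaries, and training scheme below are synthetic and illustrative, not measured DPPH data or the authors' network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative class centers along the purple-to-yellow color transition.
centers = np.array([[0.5, 0.2, 0.8],   # purple  -> low RSA
                    [0.6, 0.5, 0.5],   # mixed   -> medium RSA
                    [0.9, 0.8, 0.2]])  # yellow  -> high RSA
X = np.vstack([c + rng.normal(scale=0.05, size=(50, 3)) for c in centers])
y = np.repeat(np.arange(3), 50)

W = np.zeros((3, 3)); b = np.zeros(3)
for _ in range(500):                      # plain gradient descent on cross-entropy
    logits = X @ W.T + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = p.copy(); grad[np.arange(y.size), y] -= 1.0
    W -= 0.5 * (grad.T @ X) / y.size
    b -= 0.5 * grad.mean(axis=0)

acc = np.mean((X @ W.T + b).argmax(axis=1) == y)
print(f"training accuracy: {acc:.2f}")
```

With well-separated synthetic colors the classifier reaches near-perfect training accuracy; real assay images would of course need proper features and a held-out test set.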

  5. A wearable sensor module with a neural-network-based activity classification algorithm for daily energy expenditure estimation.

    PubMed

    Lin, Che-Wei; Yang, Ya-Ting C; Wang, Jeen-Shing; Yang, Yi-Ching

    2012-09-01

    This paper presents a wearable module and neural-network-based activity classification algorithm for energy expenditure estimation. The purpose of our design is first to categorize physical activities with similar intensity levels, and then to construct energy expenditure regression (EER) models using neural networks in order to optimize the estimation performance. The classification of physical activities for EER model construction is based on the acceleration and ECG signal data collected by wearable sensor modules developed by our research lab. The proposed algorithm consists of procedures for data collection, data preprocessing, activity classification, feature selection, and construction of EER models using neural networks. In order to reduce the computational load and achieve satisfactory estimation performance, we employed sequential forward and backward search strategies for feature selection. Two representative neural networks, a radial basis function network (RBFN) and a generalized regression neural network (GRNN), were employed as EER models for performance comparisons. Our experimental results have successfully validated the effectiveness of our wearable sensor module and its neural-network-based activity classification algorithm for energy expenditure estimation. In addition, our results demonstrate the superior performance of GRNN as compared to RBFN.
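The sequential forward search mentioned above can be sketched generically: starting from an empty feature set, greedily add the feature that most improves a scoring function. The scorer below (a least-squares fit) is an illustrative assumption, not the paper's EER criterion.

```python
import numpy as np

def sequential_forward_selection(X, y, score, k):
    """Greedy forward search: repeatedly add the feature that most improves
    score(X[:, subset], y) until k features are chosen."""
    chosen, remaining = [], list(range(X.shape[1]))
    while len(chosen) < k:
        best = max(remaining, key=lambda j: score(X[:, chosen + [j]], y))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Illustrative scorer: negative squared error of a least-squares fit.
def lsq_score(Xs, y):
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return -np.sum((Xs @ coef - y) ** 2)

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 6))
y = 3.0 * X[:, 1] - 2.0 * X[:, 4]        # only features 1 and 4 matter
print(sequential_forward_selection(X, y, lsq_score, 2))
```

Sequential backward search is the mirror image: start from all features and greedily drop the least useful one.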

  6. VLSI implementation of neural networks.

    PubMed

    Wilamowski, B M; Binfet, J; Kaynak, M O

    2000-06-01

    Currently, fuzzy controllers are the most popular choice for hardware implementation of complex control surfaces because they are easy to design. Neural controllers are more complex and harder to train, but provide an outstanding control surface with much less error than that of a fuzzy controller. There are also some problems that have to be solved before the networks can be implemented on VLSI chips. First, an approximation function needs to be developed, because CMOS neural networks have an activation function different from any function used in neural network software. Next, this function has to be used to train the network. Finally, the last problem for VLSI designers is the quantization effect caused by discrete values of the channel length (L) and width (W) of MOS transistor geometries. Two neural networks were designed in 1.5 microm technology. Using adequate approximation functions solved the problem of the activation function; with this approach, trained networks were characterized by very small errors. Unfortunately, when the weights were quantized, errors increased by an order of magnitude. However, even with the enlarged errors, the results obtained from the neural network hardware implementations were superior to those obtained with the fuzzy system approach.
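The quantization effect can be illustrated in a few lines: round the weights of a small network to a limited number of discrete levels (as fixed transistor geometries would force) and observe the output error grow as the number of levels shrinks. The network and its weights are random placeholders, not the paper's designs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny 2-8-1 tanh network with random (stand-in) weights.
W1 = rng.normal(size=(8, 2)); W2 = rng.normal(size=(1, 8))

def net(x, W1, W2):
    return np.tanh(W2 @ np.tanh(W1 @ x))

def quantize(W, levels):
    """Round each weight to one of `levels` evenly spaced values."""
    step = (W.max() - W.min()) / (levels - 1)
    return W.min() + np.round((W - W.min()) / step) * step

X = rng.normal(size=(2, 200))            # 200 random input points
ref = net(X, W1, W2)                     # full-precision reference output
for levels in (64, 8):
    out = net(X, quantize(W1, levels), quantize(W2, levels))
    print(levels, np.abs(out - ref).max())
```

Coarser quantization (8 levels) produces a markedly larger deviation from the full-precision output than finer quantization (64 levels), mirroring the order-of-magnitude error growth reported in the abstract.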

  7. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    PubMed Central

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612
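A common first stage of such a deep HAR pipeline is segmenting the multimodal recording into fixed-width overlapping windows, which are then fed to the convolutional/LSTM layers. A minimal sketch (window width, step, and channel count are illustrative, not the paper's settings):

```python
import numpy as np

def sliding_windows(signal, width, step):
    """Segment a (T, channels) recording into overlapping (width, channels)
    windows stacked along a new leading axis."""
    starts = range(0, signal.shape[0] - width + 1, step)
    return np.stack([signal[s:s + width] for s in starts])

# 1000 samples of 6 sensor channels (e.g. 3-axis accel + 3-axis gyro).
recording = np.random.default_rng(4).normal(size=(1000, 6))
windows = sliding_windows(recording, width=128, step=64)
print(windows.shape)  # (14, 128, 6)
```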

  8. Neural Networks: A Primer

    DTIC Science & Technology

    1991-05-01

    The ability to capture underlying relationships directly from observed behavior is one of the primary capabilities of neural networks, which can model complex behavior patterns. Particularly in areas traditionally addressed by regression and other function-based techniques, neural networks allow relationships to be determined directly from the observed behavior of a system or sample of individuals. This ability should prove important in personnel analysis.

  9. Using convolutional neural networks for human activity classification on micro-Doppler radar spectrograms

    NASA Astrophysics Data System (ADS)

    Jordan, Tyler S.

    2016-05-01

    This paper presents findings from using convolutional neural networks (CNNs) to classify human activity from micro-Doppler features, with an emphasis on activities involving potential security threats such as holding a gun. An automotive 24 GHz radar-on-chip was used to collect the data, and a CNN (normally applied to image classification) was trained on the resulting spectrograms. The CNN achieves an error rate of 1.65% on classifying running vs. walking, 17.3% on armed walking vs. unarmed walking, and 22% on classifying six different actions.
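The spectrogram input to such a CNN is just a magnitude short-time Fourier transform of the radar return. A hedged numpy sketch on a synthetic signal (the carrier and modulation below loosely mimic a micro-Doppler signature and are illustrative, not radar data):

```python
import numpy as np

def spectrogram(x, nfft=128, step=32):
    """Magnitude STFT with a Hann window; with these defaults the frames
    overlap by 75%.  Returns an array of shape (freq bins, time frames)."""
    win = np.hanning(nfft)
    frames = [x[s:s + nfft] * win for s in range(0, x.size - nfft + 1, step)]
    return np.abs(np.fft.rfft(frames, axis=1)).T

# Synthetic return: a carrier plus a sinusoidally frequency-modulated "limb"
# component, sampled at 1 kHz for 2 seconds.
fs = 1000
t = np.arange(0, 2, 1 / fs)
x = (np.sin(2 * np.pi * 100 * t)
     + 0.5 * np.sin(2 * np.pi * (150 + 40 * np.sin(2 * np.pi * 2 * t)) * t))
S = spectrogram(x)
print(S.shape)
```

The resulting time-frequency image is what a standard image-classification CNN would be trained on.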

  10. Active Control of Wind-Tunnel Model Aeroelastic Response Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.

    2000-01-01

    Under a joint research and development effort conducted by the National Aeronautics and Space Administration and The Boeing Company (formerly McDonnell Douglas), three neural-network-based control systems were developed and tested. The control systems were experimentally evaluated using a transonic wind-tunnel model in the Langley Transonic Dynamics Tunnel. One system used a neural network to schedule flutter suppression control laws, another employed a neural network in a predictive control scheme, and the third employed a neural network in an inverse model control scheme. All three control schemes successfully suppressed flutter to or near the limits of the testing apparatus, and they represent the first experimental applications of neural networks to flutter suppression. This paper summarizes the findings of this project.

  11. Implications of the Dependence of Neuronal Activity on Neural Network States for the Design of Brain-Machine Interfaces

    PubMed Central

    Panzeri, Stefano; Safaai, Houman; De Feo, Vito; Vato, Alessandro

    2016-01-01

    Brain-machine interfaces (BMIs) can improve the quality of life of patients with sensory and motor disabilities by both decoding motor intentions expressed by neural activity, and by encoding artificially sensed information into patterns of neural activity elicited by causal interventions on the neural tissue. Yet, current BMIs can exchange relatively small amounts of information with the brain. This problem has proved difficult to overcome by simply increasing the number of recording or stimulating electrodes, because trial-to-trial variability of neural activity partly arises from intrinsic factors (collectively known as the network state) that include ongoing spontaneous activity and neuromodulation, and so is shared among neurons. Here we review recent progress in characterizing the state dependence of neural responses, and in particular of how neural responses depend on endogenous slow fluctuations of network excitability. We then elaborate on how this knowledge may be used to increase the amount of information that BMIs exchange with brain. Knowledge of network state can be used to fine-tune the stimulation pattern that should reliably elicit a target neural response used to encode information in the brain, and to discount part of the trial-by-trial variability of neural responses, so that they can be decoded more accurately. PMID:27147955

  13. Programming neural networks

    SciTech Connect

    Anderson, J.A.; Markman, A.B.; Viscuso, S.R.; Wisniewski, E.J.

    1988-09-01

    Neural networks "compute," though not in the way that traditional computers do. One must accept their weaknesses to use their strengths. The authors present several applications of a particular nonlinear network (the BSB model) to illustrate some of the peculiarities inherent in this architecture.

  14. Absolute exponential stability of recurrent neural networks with generalized activation function.

    PubMed

    Xu, Jun; Cao, Yong-Yan; Sun, Youxian; Tang, Jinshan

    2008-06-01

    In this paper, recurrent neural networks (RNNs) with a generalized activation function class are proposed. In this model, every component of the neuron's activation function belongs to a convex hull bounded by two odd symmetric piecewise linear functions that are convex or concave over the real space. All of these convex hulls compose the generalized activation function class. This class not only describes the activation functions more flexibly and specifically than other function classes but also generalizes some traditional activation function classes. The absolute exponential stability (AEST) of the RNN with a generalized activation function class is studied in three steps. The first step demonstrates that the global exponential stability (GES) of the equilibrium point of the original RNN with a generalized activation function is equivalent to that of the RNN under all vertex functions of the convex hull. The second step transforms the RNN under every vertex activation function into neural networks under an array of saturated linear activation functions. Because the GES of the equilibrium point of these three systems is equivalent, the subsequent stability analysis focuses on the GES of the equilibrium point of the RNN under an array of saturated linear activation functions. The last step studies both the existence of the equilibrium point and its GES for the RNN under saturated linear activation functions using the theory of M-matrices. Finally, a two-neuron RNN with a generalized activation function is constructed to show the effectiveness of the results.
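Stability conditions of this kind typically reduce to checking that a comparison matrix is a nonsingular M-matrix. One standard test (for a Z-matrix, i.e. nonpositive off-diagonal entries, all leading principal minors must be positive) can be sketched directly; the 2x2 matrices below are illustrative, not from the paper.

```python
import numpy as np

def is_nonsingular_m_matrix(A):
    """Test whether A is a nonsingular M-matrix: A must be a Z-matrix
    (off-diagonal entries <= 0) with all leading principal minors positive."""
    A = np.asarray(A, dtype=float)
    off = A - np.diag(np.diag(A))
    if np.any(off > 0):
        return False                      # not even a Z-matrix
    minors = [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]
    return all(m > 0 for m in minors)

# Illustrative 2-neuron comparison matrices.
print(is_nonsingular_m_matrix([[2.0, -1.0], [-0.5, 1.5]]))  # True
print(is_nonsingular_m_matrix([[1.0, -2.0], [-2.0, 1.0]]))  # False
```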

  15. Application of an artificial neural network for evaluation of activity concentration exemption limits in NORM industry.

    PubMed

    Wiedner, Hannah; Peyrés, Virginia; Crespo, Teresa; Mejuto, Marcos; García-Toraño, Eduardo; Maringer, Franz Josef

    2016-12-27

    NORM (naturally occurring radioactive material) emits many different gamma energies that have to be analysed by an expert. Alternatively, artificial neural networks (ANNs) can be used. These mathematical software tools can generalize "knowledge" gained from training datasets and apply it to new problems, so no expert knowledge of gamma-ray spectrometry is needed by the end user. In this work, an ANN was created that is able to decide from the raw gamma-ray spectrum whether the activity concentrations in a sample are above or below the exemption limits.

  16. Tomography using neural networks

    NASA Astrophysics Data System (ADS)

    Demeter, G.

    1997-03-01

    We have utilized neural networks for fast evaluation of tomographic data on the MT-1M tokamak. The networks have proven useful in providing the parameters of a nonlinear fit to experimental data, producing results in a fraction of the time required for performing the nonlinear fit. Time required for training the networks makes the method worth applying only if a substantial amount of data are to be evaluated.

  17. Multistability of complex-valued neural networks with discontinuous activation functions.

    PubMed

    Liang, Jinling; Gong, Weiqiang; Huang, Tingwen

    2016-12-01

    In this paper, based on the geometrical properties of the discontinuous activation functions and Brouwer's fixed point theory, the multistability issue is tackled for complex-valued neural networks with discontinuous activation functions and time-varying delays. To address the network with discontinuous functions, the Filippov solution of the system is defined. Through rigorous analysis, several sufficient criteria are obtained to assure the existence of 25^n equilibrium points. Among them, 9^n points are locally stable and 16^n - 9^n equilibrium points are unstable. Furthermore, to enlarge the attraction basins of the 9^n stable equilibrium points, some mild conditions are imposed. Finally, a numerical example is provided to illustrate the effectiveness of the obtained results.

  18. Human activities recognition by head movement using partial recurrent neural network

    NASA Astrophysics Data System (ADS)

    Tan, Henry C. C.; Jia, Kui; De Silva, Liyanage C.

    2003-06-01

    Traditionally, human activities recognition has been achieved mainly by the statistical pattern recognition methods or the Hidden Markov Model (HMM). In this paper, we propose a novel use of the connectionist approach for the recognition of ten simple human activities: walking, sitting down, getting up, squatting down and standing up, in both lateral and frontal views, in an office environment. By means of tracking the head movement of the subjects over consecutive frames from a database of different color image sequences, and incorporating the Elman model of the partial recurrent neural network (RNN) that learns the sequential patterns of relative change of the head location in the images, the proposed system is able to robustly classify all the ten activities performed by unseen subjects from both sexes, of different race and physique, with a recognition rate as high as 92.5%. This demonstrates the potential of employing partial RNN to recognize complex activities in the increasingly popular human-activities-based applications.
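The Elman model referred to above feeds a copy of the hidden state ("context" units) back into the network on the next step, which is what lets it learn sequential patterns such as frame-to-frame head displacement. A minimal untrained sketch (dimensions and weights are illustrative, not the paper's model):

```python
import numpy as np

class ElmanCell:
    """Minimal Elman (partial recurrent) cell: the hidden state is copied to
    context units and fed back on the next step."""
    def __init__(self, n_in, n_hid, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(scale=0.5, size=(n_hid, n_in))
        self.Wc = rng.normal(scale=0.5, size=(n_hid, n_hid))  # context weights
        self.Wo = rng.normal(scale=0.5, size=(n_out, n_hid))
        self.context = np.zeros(n_hid)

    def step(self, x):
        self.context = np.tanh(self.Wx @ x + self.Wc @ self.context)
        return self.Wo @ self.context

# 2-D relative head displacement per frame -> scores over 10 activity classes.
cell = ElmanCell(n_in=2, n_hid=6, n_out=10)
for dxdy in np.random.default_rng(5).normal(size=(30, 2)):
    scores = cell.step(dxdy)
print(scores.shape)  # (10,)
```

In a trained network the class with the highest score after the sequence would be taken as the recognized activity.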

  19. The Adaptive Kernel Neural Network

    DTIC Science & Technology

    1989-10-01

    A neural network architecture for clustering and classification is described. The Adaptive Kernel Neural Network (AKNN) is a density estimation...classification layer. The AKNN retains the inherent parallelism common in neural network models. Its relationship to the kernel estimator allows the network to

  20. Sex-dependent modulation of activity in the neural networks engaged during emotional speech comprehension.

    PubMed

    Beaucousin, Virginie; Zago, Laure; Hervé, Pierre-Yves; Strelnikov, Kuzma; Crivello, Fabrice; Mazoyer, Bernard; Tzourio-Mazoyer, Nathalie

    2011-05-16

    Studies using event-related potentials have shown that men are more likely than women to rely on semantic cues when understanding emotional speech. In a previous functional Magnetic Resonance Imaging (fMRI) study, using an affective sentence classification task, we were able to separate areas involved in semantic processing and areas involved in the processing of affective prosody (Beaucousin et al., 2007). Here we searched for sex-related differences in the neural networks active during emotional speech processing in groups of men and women. The ortholinguistic abilities of the participants did not differ when evaluated with a large battery of tests. Although the neural networks engaged by men and women during emotional sentence classification were largely overlapping, sex-dependent modulations were detected during emotional sentence classification, but not during grammatical sentence classification. Greater activity was observed in men, compared with women, in inferior frontal cortical areas involved in emotional labeling and in attentional areas. In conclusion, at equivalent linguistic abilities and performances, men activate semantic and attentional cortical areas to a larger extent than women during emotional speech processing.

  1. Hyperbolic Hopfield neural networks.

    PubMed

    Kobayashi, M

    2013-02-01

    In recent years, several neural networks using Clifford algebra have been studied. Clifford algebra is also called geometric algebra. Complex-valued Hopfield neural networks (CHNNs) are the most popular neural networks using Clifford algebra. The aim of this brief is to construct hyperbolic HNNs (HHNNs) as an analog of CHNNs. Hyperbolic algebra is a Clifford algebra based on Lorentzian geometry. In this brief, a hyperbolic neuron is defined in a manner analogous to a phasor neuron, which is a typical complex-valued neuron model. HHNNs share common concepts with CHNNs, such as the angle and energy. However, HHNNs and CHNNs are different in several aspects. The states of hyperbolic neurons do not form a circle, and, therefore, the start and end states are not identical. In the quantized version, unlike complex-valued neurons, hyperbolic neurons have an infinite number of states.
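The algebra behind hyperbolic neurons can be made concrete with split-complex (hyperbolic) numbers a + b*j where j*j = +1, in contrast to i*i = -1 for the complex-valued phasor neuron. A pure-Python illustrative sketch:

```python
class Hyperbolic:
    """Split-complex number a + b*j with j*j = +1."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __mul__(self, other):
        # (a + bj)(c + dj) = (ac + bd) + (ad + bc) j,  since j^2 = +1
        return Hyperbolic(self.a * other.a + self.b * other.b,
                          self.a * other.b + self.b * other.a)

    def modulus2(self):
        # Lorentzian "norm" a^2 - b^2; it can be zero or negative, which is
        # one reason hyperbolic neuron states do not form a circle.
        return self.a ** 2 - self.b ** 2

u = Hyperbolic(2.0, 1.0)
v = Hyperbolic(1.0, 3.0)
w = u * v
print(w.a, w.b, w.modulus2())  # 5.0 7.0 -24.0
```

Note the Lorentzian norm is multiplicative (3 times -8 gives -24 here), exactly as the complex modulus is, but it is not positive definite.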

  2. a Hybrid-Type Active Vibration Isolation System Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Ahn, K. G.; Pahk, H. J.; Jung, M. Y.; Cho, D. W.

    1996-05-01

    Vibration isolation of mechanical systems is achieved through either passive or active vibration control systems. Although a passive vibration isolation system offers simple and reliable means to protect mechanical systems from a vibration environment, it has inherent performance limitations, that is, its controllable frequency range is limited and the shape of its transmissibility does not change. Recently, in some applications, such as active suspensions or precise vibration systems, active vibration isolation systems have been employed to overcome the limitations of the passive systems. In this paper, a hybrid-type active vibration isolation system that uses electromagnetic and pneumatic force is developed, and a new control algorithm adopting neural networks is proposed. The characteristics of the hybrid system proposed in the paper were investigated via computer simulation and experiments. It was shown that the transmissibility of the vibration isolation system could be kept below 0.63 over the entire frequency range, including the resonance frequency.

  3. Nested neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1988-01-01

    Nested neural networks, consisting of small interconnected subnetworks, allow for the storage and retrieval of neural state patterns of different sizes. The subnetworks are naturally categorized into layers corresponding to spatial frequencies in the pattern field. The storage capacity and the error correction capability of the subnetworks generally increase with the degree of connectivity between layers (the nesting degree). Storing only a few subpatterns in each subnetwork results in a vast storage capacity of patterns and subpatterns in the nested network, maintaining high stability and error correction capability.
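The storage/retrieval primitive each subnetwork could use is Hebbian outer-product storage of a few binary patterns, with recall from a corrupted cue. A hedged Hopfield-style sketch (network size, pattern count, and corruption level are illustrative, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(6)
patterns = rng.choice([-1.0, 1.0], size=(2, 32))   # few patterns -> high stability

# Hebbian outer-product weights, no self-connections.
W = sum(np.outer(p, p) for p in patterns) / patterns.shape[1]
np.fill_diagonal(W, 0.0)

def recall(state, steps=5):
    """Iterate synchronous sign updates from a cue state."""
    state = state.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

cue = patterns[0].copy()
cue[:3] *= -1.0                                    # corrupt 3 of 32 bits
print(float(recall(cue) @ patterns[0]))            # overlap with stored pattern
```

With only a few stored patterns the overlap after recall is at least as large as the cue's, illustrating the error correction the abstract attributes to lightly loaded subnetworks.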

  4. Dynamics on Networks: The Role of Local Dynamics and Global Networks on the Emergence of Hypersynchronous Neural Activity

    PubMed Central

    Schmidt, Helmut; Petkov, George; Richardson, Mark P.; Terry, John R.

    2014-01-01

    Graph theory has evolved into a useful tool for studying complex brain networks inferred from a variety of measures of neural activity, including fMRI, DTI, MEG and EEG. In the study of neurological disorders, recent work has discovered differences in the structure of graphs inferred from patient and control cohorts. However, most of these studies pursue a purely observational approach; identifying correlations between properties of graphs and the cohort which they describe, without consideration of the underlying mechanisms. To move beyond this necessitates the development of computational modeling approaches to appropriately interpret network interactions and the alterations in brain dynamics they permit, which in the field of complexity sciences is known as dynamics on networks. In this study we describe the development and application of this framework using modular networks of Kuramoto oscillators. We use this framework to understand functional networks inferred from resting state EEG recordings of a cohort of 35 adults with heterogeneous idiopathic generalized epilepsies and 40 healthy adult controls. Taking emergent synchrony across the global network as a proxy for seizures, our study finds that the critical strength of coupling required to synchronize the global network is significantly decreased for the epilepsy cohort for functional networks inferred from both theta (3–6 Hz) and low-alpha (6–9 Hz) bands. We further identify left frontal regions as a potential driver of seizure activity within these networks. We also explore the ability of our method to identify individuals with epilepsy, observing up to 80% predictive power through receiver operating characteristic analysis. Collectively these findings demonstrate that a computer model based analysis of routine clinical EEG provides significant additional information beyond standard clinical interpretation, which should ultimately enable a more appropriate mechanistic stratification of people with epilepsy.
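The Kuramoto framework used above can be sketched in a few lines: each oscillator's phase drifts at its natural frequency and is pulled toward its neighbors, and the order parameter r measures global synchrony (the proxy for seizure activity). The all-to-all network, frequencies, and coupling strengths below are illustrative, not the study's EEG-derived networks.

```python
import numpy as np

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r near 1 = global synchrony."""
    return np.abs(np.mean(np.exp(1j * theta)))

def simulate(K, A, omega, steps=2000, dt=0.01, seed=7):
    """Euler-integrate dtheta_i/dt = omega_i + (K/N) sum_j A_ij sin(theta_j - theta_i)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, size=omega.size)
    for _ in range(steps):
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + (K / omega.size) * coupling)
    return order_parameter(theta)

# Illustrative all-to-all network of 50 oscillators; weak vs strong coupling.
N = 50
A = np.ones((N, N)) - np.eye(N)
omega = np.random.default_rng(8).normal(0.0, 0.2, size=N)
print(simulate(0.1, A, omega), simulate(2.0, A, omega))  # low r, high r
```

Sweeping K and recording where r jumps is one way to estimate the critical coupling strength whose decrease the study reports in the epilepsy cohort.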

  5. Stress-related noradrenergic activity prompts large-scale neural network reconfiguration.

    PubMed

    Hermans, Erno J; van Marle, Hein J F; Ossewaarde, Lindsey; Henckens, Marloes J A G; Qin, Shaozheng; van Kesteren, Marlieke T R; Schoots, Vincent C; Cousijn, Helena; Rijpkema, Mark; Oostenveld, Robert; Fernández, Guillén

    2011-11-25

    Acute stress shifts the brain into a state that fosters rapid defense mechanisms. Stress-related neuromodulators are thought to trigger this change by altering properties of large-scale neural populations throughout the brain. We investigated this brain-state shift in humans. During exposure to a fear-related acute stressor, responsiveness and interconnectivity within a network including cortical (frontoinsular, dorsal anterior cingulate, inferotemporal, and temporoparietal) and subcortical (amygdala, thalamus, hypothalamus, and midbrain) regions increased as a function of stress response magnitudes. β-adrenergic receptor blockade, but not cortisol synthesis inhibition, diminished this increase. Thus, our findings reveal that noradrenergic activation during acute stress results in prolonged coupling within a distributed network that integrates information exchange between regions involved in autonomic-neuroendocrine control and vigilant attentional reorienting.

  6. Neural networks in psychiatry.

    PubMed

    Hulshoff Pol, Hilleke; Bullmore, Edward

    2013-01-01

    Over the past three decades, numerous imaging studies have revealed structural and functional brain abnormalities in patients with neuropsychiatric diseases. These structural and functional brain changes are frequently found in multiple, discrete brain areas and may include frontal, temporal, parietal and occipital cortices as well as subcortical brain areas. However, while the structural and functional brain changes in patients are found in anatomically separated areas, these are connected through (long-distance) fibers, together forming networks. Thus, instead of representing separate (patho)physiological entities, these local changes in the brains of patients with psychiatric disorders may in fact represent different parts of the same 'elephant', i.e., the (altered) brain network. Recent developments in quantitative analysis of complex networks, based largely on graph theory, have revealed that the brain's structure and functions have features of complex networks. Here we briefly introduce several recent developments in neural network studies relevant for psychiatry, including those from the 2013 special issue on Neural Networks in Psychiatry in European Neuropsychopharmacology. We conclude that new insights will be revealed from the neural network approaches to brain imaging in psychiatry that hold the potential to find causes for psychiatric disorders and (preventive) treatments in the future.

  7. Evolving Neural Network Pattern Classifiers

    DTIC Science & Technology

    1994-05-01

    This work investigates the application of evolutionary programming for automatically configuring neural network architectures for pattern...evaluating a multitude of neural network model hypotheses. The evolutionary programming search is augmented with the Solis & Wets random optimization

  8. Mathematical Theory of Neural Networks

    DTIC Science & Technology

    1994-08-31

    This report provides a summary of the grant work by the principal investigators in the area of neural networks. The topics covered deal with...properties) for nets; and the use of neural networks for the control of nonlinear systems.

  9. Reiterative AP2a activity controls sequential steps in the neural crest gene regulatory network.

    PubMed

    de Crozé, Noémie; Maczkowiak, Frédérique; Monsoro-Burq, Anne H

    2011-01-04

    The neural crest (NC) emerges from combinatorial inductive events occurring within its progenitor domain, the neural border (NB). Several transcription factors act early at the NB, but the initiating molecular events remain elusive. Recent data from basal vertebrates suggest that ap2 might have been critical for NC emergence; however, the role of AP2 factors at the NB remains unclear. We show here that AP2a initiates NB patterning and is sufficient to elicit a NB-like pattern in neuralized ectoderm. In contrast, the other early regulators do not participate in ap2a initiation at the NB, but cooperate to further establish a robust NB pattern. The NC regulatory network uses a multistep cascade of secreted inducers and transcription factors, first at the NB and then within the NC progenitors. Here we report that AP2a acts at two distinct steps of this cascade. As the earliest known NB specifier, AP2a mediates Wnt signals to initiate the NB and activate pax3; as a NC specifier, AP2a regulates further NC development independent of and downstream of NB patterning. Our findings reconcile conflicting observations from various vertebrate organisms. AP2a provides a paradigm for the reiterated use of multifunctional molecules, thereby facilitating emergence of the NC in vertebrates.

  10. Neural Network Communications Signal Processing

    DTIC Science & Technology

    1994-08-01

    This final technical report describes the research and development results of the Neural Network Communications Signal Processing (NNCSP) Program...The objectives of the NNCSP program are to: (1) develop and implement a neural network and communications signal processing simulation system for the...purpose of exploring the applicability of neural network technology to communications signal processing; (2) demonstrate several configurations of the

  11. Neural Networks for Speech Application.

    DTIC Science & Technology

    1987-11-01

    This is a general introduction to the reemerging technology called neural networks, and how these networks may provide an important alternative to...traditional forms of computing in speech applications. Neural networks, sometimes called Artificial Neural Systems (ANS), have shown promise for solving

  12. Generalized Adaptive Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  13. Improved Autoassociative Neural Networks

    NASA Technical Reports Server (NTRS)

    Hand, Charles

    2003-01-01

    Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application-specific integrated circuit would be needed. A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
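
The discrete-time, inner-product update described above can be sketched as a Hopfield-style toy. This illustrates conventional autoassociative recall only, not the proposed bit-weight nexus hardware:

```python
# Sketch of the autoassociative update rule described above: each neuron's
# next state is the sign of the inner product of the current binary state
# vector with that neuron's weight row.
import numpy as np

def train_hebbian(patterns):
    """Outer-product (Hebbian) weights storing +/-1 patterns; zero diagonal."""
    P = np.array(patterns, dtype=float)
    W = P.T @ P
    np.fill_diagonal(W, 0.0)
    return W

def step(W, state):
    """One synchronous discrete-time update of the whole neuron row."""
    return np.where(W @ state >= 0, 1, -1)

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hebbian([pattern])
noisy = pattern.copy()
noisy[0] *= -1                       # corrupt one bit
recalled = step(W, noisy)
print("recovered stored pattern:", bool((recalled == pattern).all()))
```

Each stored pattern becomes an attractor in the "hyperspace landscape" of binary vectors, so a corrupted state is moved back to the nearest stored pattern.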

  14. Periodicity and global exponential stability of generalized Cohen-Grossberg neural networks with discontinuous activations and mixed delays.

    PubMed

    Wang, Dongshu; Huang, Lihong

    2014-03-01

    In this paper, we investigate the periodic dynamical behaviors for a class of general Cohen-Grossberg neural networks with discontinuous right-hand sides, time-varying and distributed delays. By means of retarded differential inclusions theory and the fixed point theorem of multi-valued maps, the existence of periodic solutions for the neural networks is obtained. After that, we derive some sufficient conditions for the global exponential stability and convergence of the neural networks, using nonsmooth analysis with a generalized Lyapunov approach. Our results remain valid without assuming boundedness (or a growth condition) or monotonicity of the discontinuous neuron activation functions. Moreover, our results extend previous works not only on discrete time-varying and distributed delayed neural networks with continuous or even Lipschitz continuous activations, but also on discrete time-varying and distributed delayed neural networks with discontinuous activations. We give some numerical examples to show the applicability and effectiveness of our main results.

  15. DETECTING ACTIVE GALACTIC NUCLEI USING MULTI-FILTER IMAGING DATA. II. INCORPORATING ARTIFICIAL NEURAL NETWORKS

    SciTech Connect

    Dong, X. Y.; De Robertis, M. M.

    2013-10-01

    This is the second paper of the series Detecting Active Galactic Nuclei Using Multi-filter Imaging Data. In this paper we review shapelets, an image manipulation algorithm, which we employ to adjust the point-spread function (PSF) of galaxy images. This technique is used to ensure the image in each filter has the same and sharpest PSF, which is the preferred condition for detecting AGNs using multi-filter imaging data as we demonstrated in Paper I of this series. We apply shapelets on Canada-France-Hawaii Telescope Legacy Survey Wide Survey ugriz images. Photometric parameters such as effective radii, integrated fluxes within certain radii, and color gradients are measured on the shapelets-reconstructed images. These parameters are used by artificial neural networks (ANNs) which yield: photometric redshift with an rms of 0.026 and a regression R-value of 0.92; galaxy morphological types with an uncertainty less than 2 T types for z ≤ 0.1; and identification of galaxies as AGNs with 70% confidence, star-forming/starburst (SF/SB) galaxies with 90% confidence, and passive galaxies with 70% confidence for z ≤ 0.1. The incorporation of ANNs provides a more reliable technique for identifying AGN or SF/SB candidates, which could be very useful for large-scale multi-filter optical surveys that also include a modest set of spectroscopic data sufficient to train neural networks.

  16. Detection of micro solder balls using active thermography and probabilistic neural network

    NASA Astrophysics Data System (ADS)

    He, Zhenzhi; Wei, Li; Shao, Minghui; Lu, Xingning

    2017-03-01

    Micro solder ball/bump has been widely used in electronic packaging. It has been challenging to inspect these structures as the solder balls/bumps are often embedded between the component and substrates, especially in flip-chip packaging. In this paper, a detection method for micro solder ball/bump based on the active thermography and the probabilistic neural network is investigated. A VH680 infrared imager is used to capture the thermal image of the test vehicle, SFA10 packages. The temperature curves are processed using moving average technique to remove the peak noise. And the principal component analysis (PCA) is adopted to reconstruct the thermal images. The missed solder balls can be recognized explicitly in the second principal component image. Probabilistic neural network (PNN) is then established to identify the defective bump intelligently. The hot spots corresponding to the solder balls are segmented from the PCA reconstructed image, and statistic parameters are calculated. To characterize the thermal properties of solder bump quantitatively, three representative features are selected and used as the input vector in PNN clustering. The results show that the actual outputs and the expected outputs are consistent in identification of the missed solder balls, and all the bumps were recognized accurately, which demonstrates the viability of the PNN in effective defect inspection in high-density microelectronic packaging.
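
A hedged sketch of the PCA step described above, on synthetic data: per-pixel temperature curves are the observations, and a defect shows up in the second principal-component image. Frame sizes, decay rates and the defect location are invented for illustration:

```python
# Sketch of PCA-based thermal image reconstruction: SVD of the pixel-wise
# temperature curves; the shared cooling transient loads on PC1, while a
# joint that cools differently stands out in the PC2 image.
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 50, 16, 16                               # frames x height x width (toy)
t = np.linspace(0, 1, T)
cooling = np.exp(-3 * t)                           # transient shared by good joints
frames = cooling[:, None, None] * np.ones((T, H, W))
frames += 0.005 * rng.standard_normal((T, H, W))   # sensor noise
frames[:, 8, 8] += np.exp(-8 * t)                  # one joint cools differently

curves = frames.reshape(T, H * W)                  # one temperature curve per pixel
curves = curves - curves.mean(axis=0)              # center each pixel's curve
U, S, Vt = np.linalg.svd(curves, full_matrices=False)
pc2_image = np.abs(Vt[1]).reshape(H, W)            # 2nd principal-component image
y, x = np.unravel_index(pc2_image.argmax(), pc2_image.shape)
print("strongest PC2 loading at pixel:", (y, x))
```

In the paper's pipeline the hot spots segmented from such a reconstructed image feed statistical features into the PNN classifier; that stage is omitted here.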

  17. Artificial neural network based characterization of the volume of tissue activated during deep brain stimulation

    NASA Astrophysics Data System (ADS)

    Chaturvedi, Ashutosh; Luján, J. Luis; McIntyre, Cameron C.

    2013-10-01

    Objective. Clinical deep brain stimulation (DBS) systems can be programmed with thousands of different stimulation parameter combinations (e.g. electrode contact(s), voltage, pulse width, frequency). Our goal was to develop novel computational tools to characterize the effects of stimulation parameter adjustment for DBS. Approach. The volume of tissue activated (VTA) represents a metric used to estimate the spatial extent of DBS for a given parameter setting. Traditional methods for calculating the VTA rely on activation function (AF)-based approaches and tend to overestimate the neural response when stimulation is applied through multiple electrode contacts. Therefore, we created a new method for VTA calculation that relied on artificial neural networks (ANNs). Main results. The ANN-based predictor provides more accurate descriptions of the spatial spread of activation compared to AF-based approaches for monopolar stimulation. In addition, the ANN was able to accurately estimate the VTA in response to multi-contact electrode configurations. Significance. The ANN-based approach may represent a useful method for fast computation of the VTA in situations with limited computational resources, such as a clinical DBS programming application on a tablet computer.

  18. Triphasic spike-timing-dependent plasticity organizes networks to produce robust sequences of neural activity

    PubMed Central

    Waddington, Amelia; Appleby, Peter A.; De Kamps, Marc; Cohen, Netta

    2012-01-01

    Synfire chains have long been proposed to generate precisely timed sequences of neural activity. Such activity has been linked to numerous neural functions including sensory encoding, cognitive and motor responses. In particular, it has been argued that synfire chains underlie the precise spatiotemporal firing patterns that control song production in a variety of songbirds. Previous studies have suggested that the development of synfire chains requires either initial sparse connectivity or strong topological constraints, in addition to any synaptic learning rules. Here, we show that this necessity can be removed by using a previously reported but hitherto unconsidered spike-timing-dependent plasticity (STDP) rule and activity-dependent excitability. Under this rule the network develops stable synfire chains that possess a non-trivial, scalable multi-layer structure, in which relative layer sizes appear to follow a universal function. Using computational modeling and a coarse grained random walk model, we demonstrate the role of the STDP rule in growing, molding and stabilizing the chain, and link model parameters to the resulting structure. PMID:23162457
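
The triphasic window itself can be sketched as a difference of Gaussians: potentiation for near-coincident pre/post spikes, flanked by depression lobes. Parameter values are illustrative, and the paper's exact kernel and activity-dependent excitability are not reproduced here:

```python
# Sketch of a triphasic STDP window as a difference of Gaussians:
# net potentiation at small spike lags, net depression at intermediate
# lags, and a negligible change at large lags.
import math

def triphasic_stdp(dt_ms, a_p=1.0, s_p=10.0, a_d=0.6, s_d=25.0):
    """Weight change for a pre/post spike lag dt_ms (milliseconds)."""
    return (a_p * math.exp(-dt_ms**2 / (2 * s_p**2))
            - a_d * math.exp(-dt_ms**2 / (2 * s_d**2)))

print(round(triphasic_stdp(0.0), 3))     # coincident spikes: positive change
print(round(triphasic_stdp(30.0), 3))    # intermediate lag: negative change
```

The depression flanks are what discourage connections between temporally distant layers, helping the chain settle into the layered structure described above.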

  19. Ligand Biological Activity Predictions Using Fingerprint-Based Artificial Neural Networks (FANN-QSAR)

    PubMed Central

    Myint, Kyaw Z.; Xie, Xiang-Qun

    2015-01-01

    This chapter focuses on the fingerprint-based artificial neural networks QSAR (FANN-QSAR) approach to predict biological activities of structurally diverse compounds. Three types of fingerprints, namely ECFP6, FP2, and MACCS, were used as inputs to train the FANN-QSAR models. The results were benchmarked against known 2D and 3D QSAR methods, and the derived models were used to predict cannabinoid (CB) ligand binding activities as a case study. In addition, the FANN-QSAR model was used as a virtual screening tool to search a large NCI compound database for lead cannabinoid compounds. We discovered several compounds with good CB2 binding affinities ranging from 6.70 nM to 3.75 μM. The studies proved that the FANN-QSAR method is a useful approach to predict bioactivities or properties of ligands and to find novel lead compounds for drug discovery research. PMID:25502380
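
A schematic of the FANN-QSAR setup with synthetic data, assuming a single hidden tanh layer trained by plain gradient descent. The real models use ECFP6/FP2/MACCS fingerprints of actual compounds and tuned architectures:

```python
# Schematic fingerprint-to-activity regression: sparse binary "fingerprints"
# in, a surrogate activity value out, one hidden tanh layer trained by
# full-batch gradient descent on mean squared error.
import numpy as np

rng = np.random.default_rng(0)
n, bits = 200, 64
X = (rng.random((n, bits)) < 0.1).astype(float)    # sparse toy fingerprints
w_true = rng.normal(0, 1, bits)
y = X @ w_true + 0.1 * rng.standard_normal(n)      # surrogate "pKi" values

W1 = rng.normal(0, 0.1, (bits, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, 16);         b2 = 0.0

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

lr = 0.05
for epoch in range(500):
    H, pred = forward(X)
    err = pred - y                                 # gradient of MSE/2 w.r.t. pred
    gW2 = H.T @ err / n; gb2 = err.mean()
    dH = np.outer(err, W2) * (1 - H ** 2)          # backprop through tanh
    gW1 = X.T @ dH / n;  gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((forward(X)[1] - y) ** 2))
print("training MSE after 500 epochs:", round(mse, 3))
```

Virtual screening then amounts to running `forward` over fingerprints of a compound library and ranking by predicted activity.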

  20. Multistability of second-order competitive neural networks with nondecreasing saturated activation functions.

    PubMed

    Nie, Xiaobing; Cao, Jinde

    2011-11-01

    In this paper, second-order interactions are introduced into competitive neural networks (NNs) and the multistability is discussed for second-order competitive NNs (SOCNNs) with nondecreasing saturated activation functions. Firstly, based on decomposition of state space, the Cauchy convergence principle, and an inequality technique, some sufficient conditions ensuring the local exponential stability of 2^N equilibrium points are derived. Secondly, some conditions are obtained for ascertaining equilibrium points to be locally exponentially stable and to be located in any designated region. Thirdly, the theory is extended to more general saturated activation functions with 2r corner points and a sufficient criterion is given under which the SOCNNs can have (r+1)^N locally exponentially stable equilibrium points. Even if there are no second-order interactions, the obtained results are less restrictive than those in some recent works. Finally, three examples with their simulations are presented to verify the theoretical analysis.
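
A toy illustration of this kind of multistability, assuming a simple additive network x' = -x + w·sat(x) rather than the paper's second-order competitive model: for w > 1 each neuron settles at one of two saturated values, giving 2^N locally stable equilibria:

```python
# Count the stable equilibria of x' = -x + w*sat(x) with a unit saturation:
# for w > 1, x = 0 is unstable and each neuron rests at +w or -w, so an
# N-neuron network has 2^N stable equilibria.
import numpy as np

def sat(x):
    return np.clip(x, -1.0, 1.0)

def settle(x, w=2.0, dt=0.05, steps=2000):
    """Euler-integrate the network until it rests at an equilibrium."""
    for _ in range(steps):
        x = x + dt * (-x + w * sat(x))
    return x

N = 3
rng = np.random.default_rng(0)
equilibria = {tuple(np.round(settle(rng.uniform(-3, 3, N)), 6))
              for _ in range(200)}
print("distinct stable equilibria found:", len(equilibria), "of", 2 ** N)
```

Each random initial condition falls into the basin determined by the signs of its coordinates, so 200 restarts are ample to visit all 8 attractors for N = 3.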

  1. Neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.

    1991-01-01

    A whole new arena of computer technologies is now beginning to form. Still in its infancy, neural network technology is a biologically inspired methodology which draws on nature's own cognitive processes. The Software Technology Branch has provided a software tool, Neural Execution and Training System (NETS), to industry, government, and academia to facilitate and expedite the use of this technology. NETS is written in the C programming language and can be executed on a variety of machines. Once a network has been debugged, NETS can produce a C source code which implements the network. This code can then be incorporated into other software systems. Described here are various software projects currently under development with NETS and the anticipated future enhancements to NETS and the technology.

  2. Rule generation from neural networks

    SciTech Connect

    Fu, L.

    1994-08-01

    The neural network approach has proven useful for the development of artificial intelligence systems. However, a disadvantage with this approach is that the knowledge embedded in the neural network is opaque. In this paper, we show how to interpret neural network knowledge in symbolic form. We lay down required definitions for this treatment, formulate the interpretation algorithm, and formally verify its soundness. The main result is a formalized relationship between a neural network and a rule-based system. In addition, it has been demonstrated that the neural network generates rules of better performance than the decision tree approach in noisy conditions. 7 refs.
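
The network-to-rules idea can be sketched for a single threshold unit by enumerating input patterns. This brute-force search is tractable only for tiny nets, the paper's algorithm prunes it and is formally verified, and the weights below are invented:

```python
# Minimal sketch of symbolic rule extraction: enumerate the Boolean input
# patterns of a trained threshold unit and emit an if-then rule for each
# pattern that makes the unit fire.
import itertools

# A hand-trained unit computing "A AND (NOT B)"; weights and bias are toy values.
weights = {"A": 2.0, "B": -2.0}
bias = -1.0

def fires(assign):
    return sum(weights[k] * assign[k] for k in weights) + bias > 0

rules = [assign for assign in
         (dict(zip(weights, bits)) for bits in itertools.product([0, 1], repeat=2))
         if fires(assign)]
for r in rules:
    conds = " AND ".join(k if v else f"NOT {k}" for k, v in r.items())
    print(f"IF {conds} THEN fire")
```

Generalizing positive weights to "confirming" conditions and negative weights to "disconfirming" ones is what lets such extracted rules compress the enumeration into a compact rule base.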

  3. Coherent periodic activity in excitatory Erdös-Renyi neural networks: the role of network connectivity.

    PubMed

    Tattini, Lorenzo; Olmi, Simona; Torcini, Alessandro

    2012-06-01

    In this article, we investigate the role of connectivity in promoting coherent activity in excitatory neural networks. In particular, we would like to understand if the onset of collective oscillations can be related to a minimal average connectivity and how this critical connectivity depends on the number of neurons in the networks. For these purposes, we consider an excitatory random network of leaky integrate-and-fire pulse coupled neurons. The neurons are connected as in a directed Erdös-Renyi graph with average connectivity scaling as a power law with the number of neurons in the network. The scaling is controlled by a parameter γ, which allows one to pass from massively connected to sparse networks and therefore to modify the topology of the system. At a macroscopic level, we observe two distinct dynamical phases: an asynchronous state corresponding to a desynchronized dynamics of the neurons and a regime of partial synchronization (PS) associated with a coherent periodic activity of the network. At low connectivity, the system is in an asynchronous state, while PS emerges above a certain critical average connectivity ⟨c⟩. For sufficiently large networks, ⟨c⟩ saturates to a constant value suggesting that a minimal average connectivity is sufficient to observe coherent activity in systems of any size irrespective of the kind of considered network: sparse or massively connected. However, this value depends on the nature of the synapses: reliable or unreliable. For unreliable synapses, the critical value required to observe the onset of macroscopic behaviors is noticeably smaller than for reliable synaptic transmission. Due to the disorder present in the system, for finite number of neurons we have inhomogeneities in the neuronal behaviors, inducing a weak form of chaos, which vanishes in the thermodynamic limit. In such a limit, the disordered systems exhibit regular (non chaotic) dynamics and their properties correspond to that of a homogeneous
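
The network construction described above can be sketched directly: a wiring probability p = N^γ/N makes the mean degree scale as N^γ, the knob that interpolates between sparse and massively connected regimes. Sizes and γ below are illustrative:

```python
# Sketch of the directed Erdos-Renyi construction: each ordered pair (i, j)
# is wired with probability p = N**gamma / N, so the mean out-degree grows
# as N**gamma with network size.
import random

def directed_er(N, gamma, rng):
    """Directed ER adjacency lists with mean degree ~ N**gamma (gamma in (0, 1])."""
    p = min(1.0, N ** gamma / N)
    return [[j for j in range(N) if j != i and rng.random() < p]
            for i in range(N)]

rng = random.Random(0)
for N in (100, 400):
    adj = directed_er(N, gamma=0.5, rng=rng)
    mean_k = sum(len(out) for out in adj) / N
    print(f"N={N}: mean out-degree {mean_k:.1f} (target {N ** 0.5:.1f})")
```

Plugging such graphs into a pulse-coupled leaky integrate-and-fire simulation, as the study does, is what reveals the critical mean connectivity ⟨c⟩ for partial synchronization.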

  5. Seasonal prediction of tropical cyclone activity over the north Indian Ocean using three artificial neural networks

    NASA Astrophysics Data System (ADS)

    Nath, Sankar; Kotal, S. D.; Kundu, P. K.

    2016-12-01

    Three artificial neural network (ANN) methods, namely, multilayer perceptron (MLP), radial basis function (RBF) and generalized regression neural network (GRNN), are utilized to predict the seasonal tropical cyclone (TC) activity over the north Indian Ocean (NIO) during the post-monsoon season (October, November, December). The frequency of TCs and large-scale climate variables derived from the NCEP/NCAR reanalysis dataset of resolution 2.5° × 2.5° were analyzed for the period 1971-2013. Data for the years 1971-2002 were used for the development of the models, which were tested with independent sample data for the years 2003-2013. Using correlation analysis, five large-scale climate variables, namely, geopotential height at 500 hPa, relative humidity at 500 hPa, sea-level pressure, and zonal wind at 700 hPa and 200 hPa for the preceding month of September, are selected as potential predictors of the post-monsoon season TC activity. The results reveal that all three ANN methods are able to provide satisfactory forecasts in terms of various metrics, such as root mean-square error (RMSE), standard deviation (SD), correlation coefficient (r), bias, and index of agreement (d). Additionally, the leave-one-out cross validation (LOOCV) method is also performed and the forecast skill is evaluated. The results show that the MLP model is superior to the other two models (RBF, GRNN) and is expected to be very useful to operational forecasters for prediction of TC activity.
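
The verification metrics listed above can be computed as follows. The numbers are toy values, and d is Willmott's index of agreement, the least standard of the four:

```python
# Forecast verification sketch: RMSE, bias, Pearson correlation r, and
# Willmott's index of agreement d for a toy forecast series.
import numpy as np

def verify(obs, pred):
    """Return RMSE, bias, r, and index of agreement d (1 = perfect)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    rmse = float(np.sqrt(np.mean((pred - obs) ** 2)))
    bias = float(np.mean(pred - obs))
    r = float(np.corrcoef(obs, pred)[0, 1])
    d = float(1 - np.sum((pred - obs) ** 2)
              / np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2))
    return {"RMSE": rmse, "bias": bias, "r": r, "d": d}

obs  = [4, 6, 3, 5, 7, 2, 5, 6, 4, 3]   # hypothetical observed seasonal TC counts
pred = [5, 5, 3, 4, 6, 3, 5, 7, 4, 2]   # hypothetical model forecasts
metrics = verify(obs, pred)
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```

In an LOOCV setting, `verify` would simply be applied to the vector of held-out predictions accumulated across folds.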

  6. Self-organization of neural networks

    NASA Astrophysics Data System (ADS)

    Clark, John W.; Winston, Jeffrey V.; Rafelski, Johann

    1984-05-01

    The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm (“brainwashing”) is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena.

  7. Where’s the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network

    PubMed Central

    Hartmann, Christoph; Lazar, Andreea; Nessler, Bernhard; Triesch, Jochen

    2015-01-01

    Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network’s spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network’s behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural responses can be

  8. Global Mittag-Leffler synchronization of fractional-order neural networks with discontinuous activations.

    PubMed

    Ding, Zhixia; Shen, Yi; Wang, Leimin

    2016-01-01

    This paper is concerned with the global Mittag-Leffler synchronization for a class of fractional-order neural networks with discontinuous activations (FNNDAs). We give the concept of Filippov solution for FNNDAs in the sense of Caputo's fractional derivative. By using a singular Gronwall inequality and the properties of fractional calculus, the existence of a global solution under the framework of Filippov for FNNDAs is proved. Based on nonsmooth analysis and control theory, some sufficient criteria for the global Mittag-Leffler synchronization of FNNDAs are derived by designing a suitable controller. The proposed results enrich and enhance the previous reports. Finally, one numerical example is given to demonstrate the effectiveness of the theoretical results.

  9. Anti-glycated activity prediction of polysaccharides from two guava fruits using artificial neural networks.

    PubMed

    Yan, Chunyan; Lee, Jinsheng; Kong, Fansheng; Zhang, Dezhi

    2013-10-15

    High-efficiency ultrasonic treatment was used to extract the polysaccharides of Psidium guajava (PPG) and Psidium littorale (PPL). The aims of this study were to compare polysaccharide activities from these two guavas, as well as to investigate the relationship between ultrasonic conditions and anti-glycated activity. A mathematical model of anti-glycated activity was constructed with the artificial neural network (ANN) toolbox of MATLAB software. Response surface plots showed the correlation between ultrasonic conditions and bioactivity. The optimal ultrasonic conditions of PPL for the highest anti-glycated activity were predicted to be 256 W, 60 °C, and 12 min, and the predicted activity was 42.2%. The predicted highest anti-glycated activity of PPG was 27.2% under its optimal predicted ultrasonic condition. The experimental result showed that PPG and PPL possessed anti-glycated and antioxidant activities, and those of PPL were greater. The experimental data also indicated that ANN had good prediction and optimization capability.

  10. Neural networks for triggering

    SciTech Connect

    Denby, B.; Campbell, M.; Bedeschi, F.; Chriss, N.; Bowers, C.; Nesti, F.

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.

  11. Emergence of the small-world architecture in neural networks by activity dependent growth

    NASA Astrophysics Data System (ADS)

    Gafarov, F. M.

    2016-11-01

    In this paper, we propose a model describing the growth and development of neural networks based on the latest achievements of experimental neuroscience. The model is based on two evolutionary equations: the first describes the evolution of the neurons' state and the second the growth of axon tips. By using the model, we demonstrated the neuronal growth process from disconnected neurons to fully connected three-dimensional networks. For the analysis of the networks' connection structure, we used methods from random graph theory. It is shown that the growth in neural networks results in the formation of the well-known "small-world" network architecture. The analysis of the connectivity distribution shows a distinctly non-Gaussian, but not scale-free, in-degree distribution. In terms of graph theory, this study develops a new model of a dynamic graph.
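
A sketch of the small-world diagnostics used in growth analyses like this one, comparing a ring lattice against an equal-density random graph with plain BFS. These are undirected toy graphs, not the model's grown three-dimensional networks:

```python
# Small-world diagnostics: mean local clustering coefficient C and mean
# shortest-path length L, compared between a ring lattice and a random
# graph of the same density.
from collections import deque
import random

def clustering(adj):
    """Mean local clustering coefficient over nodes of degree >= 2."""
    cs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        cs.append(2 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

def mean_path(adj):
    """Mean BFS shortest-path length over reachable pairs."""
    total = pairs = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

N, k = 60, 4
ring = {i: {(i + d) % N for d in (-2, -1, 1, 2)} for i in range(N)}  # lattice
rng = random.Random(0)
rand = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if rng.random() < k / (N - 1):                               # same density
            rand[i].add(j)
            rand[j].add(i)

print(f"lattice: C={clustering(ring):.3f}  L={mean_path(ring):.2f}")
print(f"random:  C={clustering(rand):.3f}  L={mean_path(rand):.2f}")
```

A small-world graph combines lattice-like clustering with random-graph-like path lengths, which is the signature reported for the grown networks above.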

  12. Global exponential stability of neural networks with globally Lipschitz continuous activations and its application to linear variational inequality problem.

    PubMed

    Liang, X B; Si, J

    2001-01-01

    This paper investigates the existence, uniqueness, and global exponential stability (GES) of the equilibrium point for a large class of neural networks with globally Lipschitz continuous activations including the widely used sigmoidal activations and the piecewise linear activations. The provided sufficient condition for GES is mild and some conditions easily examined in practice are also presented. The GES of neural networks in the case of locally Lipschitz continuous activations is also obtained under an appropriate condition. The analysis results given in the paper extend substantially the existing relevant stability results in the literature, and therefore expand significantly the application range of neural networks in solving optimization problems. As a demonstration, we apply the obtained analysis results to the design of a recurrent neural network (RNN) for solving the linear variational inequality problem (VIP) defined on any nonempty and closed box set, which includes box-constrained quadratic programming and the linear complementarity problem as special cases. It can be inferred that the linear VIP has a unique solution for the class of Lyapunov diagonally stable matrices, and that the synthesized RNN is globally exponentially convergent to the unique solution. Some illustrative simulation examples are also given.

  13. Development of a computational model on the neural activity patterns of a visual working memory in a hierarchical feedforward Network

    NASA Astrophysics Data System (ADS)

    An, Soyoung; Choi, Woochul; Paik, Se-Bum

    2015-11-01

    Understanding the mechanism of information processing in the human brain remains a unique challenge because the nonlinear interactions between the neurons in the network are extremely complex and because controlling every relevant parameter during an experiment is difficult. Therefore, a simulation using simplified computational models may be an effective approach. In the present study, we developed a general model of neural networks that can simulate nonlinear activity patterns in the hierarchical structure of a neural network system. To test our model, we first examined whether our simulation could match the previously-observed nonlinear features of neural activity patterns. Next, we performed a psychophysics experiment for a simple visual working memory task to evaluate whether the model could predict the performance of human subjects. Our studies show that the model is capable of reproducing the relationship between memory load and performance and may contribute, in part, to our understanding of how the structure of neural circuits can determine the nonlinear neural activity patterns in the human brain.

  14. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node "brain" with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  15. Program Helps Simulate Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.

  16. Multistability of neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays.

    PubMed

    Nie, Xiaobing; Zheng, Wei Xing

    2015-05-01

    This paper is concerned with the problem of coexistence and dynamical behaviors of multiple equilibrium points for neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays. The fixed point theorem and other analytical tools are used to develop certain sufficient conditions that ensure that the n-dimensional discontinuous neural networks with time-varying delays can have at least 5^n equilibrium points, 3^n of which are locally stable and the others unstable. The importance of the derived results is that they reveal that discontinuous neural networks can have greater storage capacity than continuous ones. Moreover, different from the existing results on multistability of neural networks with discontinuous activation functions, the 3^n locally stable equilibrium points obtained in this paper are located not only in saturated regions, but also in unsaturated regions, due to the non-monotonic structure of the discontinuous activation functions. A numerical simulation study is conducted to illustrate and support the derived theoretical results.
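    For intuition, a discontinuous non-monotonic piecewise linear activation of the kind this class of results concerns can be sketched as follows; the breakpoints and values here are an illustrative assumption, not the exact function analysed in the paper:

```python
def activation(x):
    # Hypothetical discontinuous, non-monotonic piecewise linear
    # activation: rising linear piece, then a falling piece (the
    # non-monotonic part), then a jump discontinuity to saturation.
    if x < -1.0:
        return -1.0          # lower saturation region
    elif x < 0.0:
        return x             # increasing linear piece
    elif x < 1.0:
        return -0.5 * x      # decreasing piece -> non-monotonic
    else:
        return 1.0           # jump discontinuity at x = 1
```

    The extra rising/falling pieces partition each state dimension into more invariant intervals than a monotonic saturated activation does, which is what makes 5^n rather than 3^n equilibria possible.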

  17. Micro-Doppler Based Classification of Human Aquatic Activities via Transfer Learning of Convolutional Neural Networks.

    PubMed

    Park, Jinhee; Javier, Rios Jesus; Moon, Taesup; Kim, Youngwook

    2016-11-24

    Accurate classification of human aquatic activities using radar has a variety of potential applications such as rescue operations and border patrols. Nevertheless, the classification of activities on water using radar has not been extensively studied, unlike the case on dry ground, due to its unique challenges: not only is the radar cross section of a human on water small, but the micro-Doppler signatures are also much noisier due to water drops and waves. In this paper, we first investigate whether discriminative signatures could be obtained for activities on water through a simulation study. Then, we show how we can effectively achieve high classification accuracy by applying deep convolutional neural networks (DCNN) directly to the spectrogram of real measurement data. From the five-fold cross-validation on our dataset, which consists of five aquatic activities, we report that the conventional feature-based scheme only achieves an accuracy of 45.1%. In contrast, the DCNN trained using only the collected data attains 66.7%, and the transfer-learned DCNN, which takes a DCNN pre-trained on an RGB image dataset and fine-tunes the parameters using the collected data, achieves a much higher 80.3%, a significant performance boost.
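    The transfer-learning recipe described here, reusing pre-trained feature layers and fine-tuning only a classifier on the scarce target data, can be illustrated with a toy NumPy stand-in. A frozen random projection plays the role of the pre-trained DCNN layers and all data are synthetic; this is a sketch of the idea, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: a frozen ReLU projection standing in
# for conv layers trained on a large image dataset.
n_features, n_classes = 32, 5
W_frozen = rng.normal(size=(64, n_features))   # kept fixed during fine-tuning

def features(x):
    return np.maximum(0.0, x @ W_frozen)       # frozen ReLU features

# Tiny synthetic "spectrogram" dataset (flattened to 64 dims).
X = rng.normal(size=(200, 64))
y = rng.integers(0, n_classes, size=200)

# Fine-tune only the classifier head via softmax regression.
W_head = np.zeros((n_features, n_classes))
F = features(X)
onehot = np.eye(n_classes)[y]
for _ in range(200):
    logits = F @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W_head -= 0.01 * F.T @ (p - onehot) / len(X)  # only the head moves

train_acc = np.mean((F @ W_head).argmax(axis=1) == y)
```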

  18. Micro-Doppler Based Classification of Human Aquatic Activities via Transfer Learning of Convolutional Neural Networks

    PubMed Central

    Park, Jinhee; Javier, Rios Jesus; Moon, Taesup; Kim, Youngwook

    2016-01-01

    Accurate classification of human aquatic activities using radar has a variety of potential applications such as rescue operations and border patrols. Nevertheless, the classification of activities on water using radar has not been extensively studied, unlike the case on dry ground, due to its unique challenges: not only is the radar cross section of a human on water small, but the micro-Doppler signatures are also much noisier due to water drops and waves. In this paper, we first investigate whether discriminative signatures could be obtained for activities on water through a simulation study. Then, we show how we can effectively achieve high classification accuracy by applying deep convolutional neural networks (DCNN) directly to the spectrogram of real measurement data. From the five-fold cross-validation on our dataset, which consists of five aquatic activities, we report that the conventional feature-based scheme only achieves an accuracy of 45.1%. In contrast, the DCNN trained using only the collected data attains 66.7%, and the transfer-learned DCNN, which takes a DCNN pre-trained on an RGB image dataset and fine-tunes the parameters using the collected data, achieves a much higher 80.3%, a significant performance boost. PMID:27886151

  19. Classification of human activity on water through micro-Dopplers using deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Kim, Youngwook; Moon, Taesup

    2016-05-01

    Detecting humans and classifying their activities on the water has significant applications for surveillance, border patrols, and rescue operations. When humans are illuminated by a radar signal, they produce micro-Doppler signatures due to moving limbs. There has been a considerable amount of research into recognizing humans on land by their unique micro-Doppler signatures, but there is scant research into detecting humans on water. In this study, we investigate the micro-Doppler signatures of humans on water, including a swimming person, a swimming person pulling a floating object, and a rowing person in a small boat. The measured swimming styles were free stroke, backstroke, and breaststroke. Each activity was observed to have a unique micro-Doppler signature. Human activities were classified based on their micro-Doppler signatures. For the classification, we propose to apply deep convolutional neural networks (DCNN), a powerful deep learning technique. Rather than using conventional supervised learning that relies on handcrafted features, we present an alternative deep learning approach. We apply the DCNN, one of the most successful deep learning algorithms for image recognition, directly to a raw micro-Doppler spectrogram of humans on the water. Without extracting any explicit features from the micro-Dopplers, the DCNN can learn the necessary features and build classification boundaries using the training data. We show that the DCNN can achieve an accuracy of more than 87.8% for activity classification using 5-fold cross-validation.

  20. Predicting body temperature and activity of adult Polyommatus icarus using neural network models under current and projected climate scenarios.

    PubMed

    Howe, P D; Bryant, S R; Shreeve, T G

    2007-10-01

    We use field observations in two geographic regions within the British Isles and regression and neural network models to examine the relationship between microhabitat use, thoracic temperatures and activity in a widespread lycaenid butterfly, Polyommatus icarus. We also make predictions for future activity under climate change scenarios. Individuals from a univoltine northern population initiated flight with significantly lower thoracic temperatures than individuals from a bivoltine southern population. Activity is dependent on body temperature, and neural network models of body temperature are better at predicting body temperature than generalized linear models. Neural network models of activity with a sole input of predicted body temperature (using weather and microclimate variables) are good predictors of observed activity and were better predictors than generalized linear models. By modelling activity under climate change scenarios for 2080, we predict differences in activity in relation to both regional differences of climate change and differing body temperature requirements for activity in different populations. Under average conditions for low-emission scenarios there will be little change in the activity of individuals from central-southern Britain and a reduction in northwest Scotland from 2003 activity levels. Under high-emission scenarios, flight-dependent activity in northwest Scotland will increase the most, despite smaller predicted increases in temperature and decreases in cloud cover. We suggest that neural network models are an effective way of predicting future activity in changing climates for microhabitat-specialist butterflies and that regional differences in the thermoregulatory response of populations will have profound effects on how they respond to climate change.

  1. Stimulated Photorefractive Optical Neural Networks

    DTIC Science & Technology

    1992-12-15

    This final report describes research in optical neural networks performed under DARPA sponsorship at Hughes Aircraft Company during the period 1989...in photorefractive crystals. This approach reduces crosstalk and improves the utilization of the optical input device. Successfully implemented neural networks include the Perceptron, Bidirectional Associative Memory, and multi-layer backpropagation networks. Up to 10^4 neurons, 2x10^7 weights, and

  2. Neural network versus activity-specific prediction equations for energy expenditure estimation in children.

    PubMed

    Ruch, Nicole; Joss, Franziska; Jimmy, Gerda; Melzer, Katarina; Hänggi, Johanna; Mäder, Urs

    2013-11-01

    The aim of this study was to compare the energy expenditure (EE) estimations of activity-specific prediction equations (ASPE) and of an artificial neural network (ANNEE) based on accelerometry with measured EE. Forty-three children (age: 9.8 ± 2.4 yr) performed eight different activities. They were equipped with one tri-axial accelerometer that collected data in 1-s epochs and a portable gas analyzer. The ASPE and the ANNEE were trained to estimate the EE by including accelerometry, age, gender, and weight of the participants. To provide the activity-specific information, a decision tree was trained to recognize the type of activity through accelerometer data. The ASPE were applied to the activity-type-specific data recognized by the tree (Tree-ASPE). The Tree-ASPE precisely estimated the EE of all activities except cycling [bias: -1.13 ± 1.33 metabolic equivalent (MET)] and walking (bias: 0.29 ± 0.64 MET; P < 0.05). The ANNEE overestimated the EE of stationary activities (bias: 0.31 ± 0.47 MET) and walking (bias: 0.61 ± 0.72 MET) and underestimated the EE of cycling (bias: -0.90 ± 1.18 MET; P < 0.05). Biases of EE in stationary activities (ANNEE: 0.31 ± 0.47 MET, Tree-ASPE: 0.08 ± 0.21 MET) and walking (ANNEE 0.61 ± 0.72 MET, Tree-ASPE: 0.29 ± 0.64 MET) were significantly smaller in the Tree-ASPE than in the ANNEE (P < 0.05). The Tree-ASPE was more precise in estimating the EE than the ANNEE. The use of activity-type-specific information for subsequent EE prediction equations might be a promising approach for future studies.
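    The two-stage Tree-ASPE idea, first recognizing the activity type from accelerometer features and then applying that activity's prediction equation, can be sketched as follows. Every threshold and coefficient below is invented for illustration; the study derived its own tree and equations from 1-s tri-axial accelerometer epochs:

```python
def recognize_activity(counts_per_s, vertical_ratio):
    """Toy decision tree mapping accelerometer features to an
    activity type (hypothetical thresholds)."""
    if counts_per_s < 50:
        return "stationary"
    if vertical_ratio > 0.6:
        return "walking"
    return "cycling"

# Activity-specific prediction equations: EE (MET) as a linear
# function of counts, age and weight, one equation per activity
# (coefficients are placeholders, not the published equations).
ASPE = {
    "stationary": lambda c, age, kg: 1.0 + 0.001 * c,
    "walking":    lambda c, age, kg: 2.0 + 0.004 * c + 0.01 * age,
    "cycling":    lambda c, age, kg: 3.0 + 0.002 * c + 0.005 * kg,
}

def tree_aspe(counts_per_s, vertical_ratio, age, kg):
    """Recognize the activity, then apply its specific equation."""
    activity = recognize_activity(counts_per_s, vertical_ratio)
    return activity, ASPE[activity](counts_per_s, age, kg)
```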

  3. Artificial neural network modelling of the antioxidant activity and phenolic compounds of bananas submitted to different drying treatments.

    PubMed

    Guiné, Raquel P F; Barroca, Maria João; Gonçalves, Fernando J; Alves, Mariana; Oliveira, Solange; Mendes, Mateus

    2015-02-01

    Bananas (cv. Musa nana and Musa cavendishii), fresh and dried by hot air at 50 and 70°C and by lyophilisation, were analysed for phenolic contents and antioxidant activity. All samples were subject to six extractions (three with methanol followed by three with acetone/water solution). The experimental data served to train a neural network adequate to describe the experimental observations for both output variables studied: total phenols and antioxidant activity. The results show that both bananas are similar and that air drying decreased total phenols and antioxidant activity at both temperatures, whereas lyophilisation decreased the phenolic content to a lesser extent. Neural network experiments showed that antioxidant activity and phenolic compounds can be predicted accurately from the input variables: banana variety, dryness state, and type and order of extract. Drying state and extract order were found to have a larger impact on the values of antioxidant activity and phenolic compounds.

  4. Optical Neural Network Classifier Architectures

    DTIC Science & Technology

    1998-04-01

    We present an adaptive opto-electronic neural network hardware architecture capable of exploiting parallel optics to realize real-time processing and...function neural network based on a previously demonstrated binary-input version. The greyscale-input capability broadens the range of applications for...a reduced feature set of multiwavelet images to improve training times and discrimination capability of the neural network. The design uses a joint

  5. Analysis of Simple Neural Networks

    DTIC Science & Technology

    1988-12-20

    Analysis of Simple Neural Networks. Chedsada Chinrungrueng. Master's Report under the supervision of Prof. Carlo H. Sequin.

  6. Neural Networks For Robot Control

    DTIC Science & Technology

    2001-04-17

    following: (a) Application of artificial neural networks (multi-layer perceptrons, MLPs) for 2D planar robot arm by using the dynamic backpropagation...methods for the adjustment of parameters; and optimization of the architecture; (b) Application of artificial neural networks in controlling closed...studies in controlling dynamic robot arms by using neural networks in real-time process; (2) Research of optimal architectures used in closed-loop systems in order to compare with adaptive and robust control.

  7. Trimaran Resistance Artificial Neural Network

    DTIC Science & Technology

    2011-01-01

    11th International Conference on Fast Sea Transportation (FAST 2011), Honolulu, Hawaii, USA, September 2011. Trimaran Resistance Artificial Neural Network. Richard... Artificial Neural Network and is restricted to the center and side-hull configurations tested. The value in the parametric model is that it is able to

  8. Adaptive Estimation of Active Contour Parameters Using Convolutional Neural Networks and Texture Analysis.

    PubMed

    Hoogi, Assaf; Subramaniam, Arjun; Veerapaneni, Rishi; Rubin, Daniel

    2016-11-11

    In this paper, we propose a generalization of the level set segmentation approach by supplying a novel method for adaptive estimation of active contour parameters. The presented segmentation method is fully automatic once the lesion has been detected. First, the location of the level set contour relative to the lesion is estimated using a convolutional neural network (CNN). The CNN has two convolutional layers for feature extraction, which lead into dense layers for classification. Second, the output CNN probabilities are then used to adaptively calculate the parameters of the active contour functional during the segmentation process. Finally, the adaptive window size surrounding each contour point is re-estimated by an iterative process that considers lesion size and spatial texture. We demonstrate the capabilities of our method on a dataset of 164 MRI and 112 CT images of liver lesions that includes low contrast and heterogeneous lesions as well as noisy images. To illustrate the strength of our method, we evaluated it against state-of-the-art CNN-based and active contour techniques. For all cases, our method, as assessed by Dice similarity coefficients, performed significantly better than currently available methods. An average Dice improvement of 0.27 was found across the entire dataset over all comparisons. We also analyzed two challenging subsets of lesions and obtained a significant Dice improvement of 0.24 with our method (p < 0.001, Wilcoxon).

  10. Improving quantitative structure-activity relationship models using Artificial Neural Networks trained with dropout

    NASA Astrophysics Data System (ADS)

    Mendenhall, Jeffrey; Meiler, Jens

    2016-02-01

    Dropout is an Artificial Neural Network (ANN) training technique that has been shown to improve ANN performance across canonical machine learning (ML) datasets. Quantitative Structure Activity Relationship (QSAR) datasets used to relate chemical structure to biological activity in Ligand-Based Computer-Aided Drug Discovery pose unique challenges for ML techniques, such as heavily biased dataset composition and a large number of descriptors relative to the number of actives. To test the hypothesis that dropout also improves QSAR ANNs, we conduct a benchmark on nine large QSAR datasets. Use of dropout improved both enrichment false positive rate and log-scaled area under the receiver-operating characteristic curve (logAUC) by 22-46 % over conventional ANN implementations. Optimal dropout rates are found to be a function of the signal-to-noise ratio of the descriptor set, and relatively independent of the dataset. Dropout ANNs with 2D and 3D autocorrelation descriptors outperform conventional ANNs as well as optimized fingerprint similarity search methods.
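    Dropout itself is simple to state: during training each unit is zeroed with probability `rate`, and in the common inverted-dropout formulation the survivors are rescaled so that no change is needed at test time. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, rate, training=True):
    """Inverted dropout: zero a fraction `rate` of units at training
    time and rescale survivors by 1/(1-rate), so the expected
    activation is unchanged and test time needs no rescaling."""
    if not training or rate == 0.0:
        return activations
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

h = np.ones((4, 10))
h_train = dropout(h, rate=0.5)             # roughly half the units zeroed
h_test = dropout(h, rate=0.5, training=False)   # identity at test time
```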

  11. Artificial neural network optimization of Althaea rosea seeds polysaccharides and its antioxidant activity.

    PubMed

    Liu, Feng; Liu, Wenhui; Tian, Shuge

    2014-09-01

    A combination of an orthogonal L16(4^4) test design and a three-layer artificial neural network (ANN) model was applied to optimize polysaccharides from Althaea rosea seeds extracted by the hot water method. The highest optimal experimental yield of A. rosea seed polysaccharides (ARSPs), 59.85 mg/g, was obtained using three extractions, a 113 min extraction time, 60.0% ethanol concentration, and a 1:41 solid-liquid ratio. Under these optimized conditions, the ARSP experimental yield was very close to the predicted yield of 60.07 mg/g and was higher than the orthogonal test result (40.86 mg/g). Structural characterizations were conducted using physicochemical properties and FTIR analysis. In addition, the study of ARSP antioxidant activity demonstrated that the polysaccharides exhibited high superoxide dismutase activity, strong reducing power, and positive scavenging activity on superoxide anion, hydroxyl radical, and 2,2-diphenyl-1-picrylhydrazyl radicals. Our results indicated that ANNs were efficient quantitative tools for predicting the total ARSP content.

  12. Neural Networks, Reliability and Data Analysis

    DTIC Science & Technology

    1993-01-01

    Neural network technology has been surveyed with the intent of determining the feasibility and impact neural networks may have in the area of automated reliability tools. Data analysis capabilities of neural networks appear to be very applicable to reliability science due to similar mathematical tendencies in data. Keywords: neural networks, reliability, data analysis, automated reliability tools, automated intelligent information processing, statistical neural network.

  13. Accelerating Learning By Neural Networks

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1992-01-01

    Electronic neural networks made to learn faster by use of terminal teacher forcing. Method of supervised learning involves addition of teacher forcing functions to excitations fed as inputs to output neurons. Initially, teacher forcing functions are strong enough to force outputs to desired values; subsequently, these functions decay with time. When learning successfully completed, terminal teacher forcing vanishes, and dynamics of neural network become equivalent to those of conventional neural network. Simulated neural network with terminal teacher forcing learned to produce close approximation of circular trajectory in 400 iterations.
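    The idea of a teacher-forcing term that is strong initially and decays to zero can be sketched on a single output neuron. The dynamics and time constant here are illustrative assumptions, not the paper's equations:

```python
import math

target = 0.8                  # desired output value
y = 0.0                       # output neuron state
tau = 20.0                    # decay time constant of the teacher term (assumed)

for t in range(200):
    # Teacher forcing: pushes the output toward the target, with a
    # strength that decays exponentially and eventually vanishes.
    forcing = math.exp(-t / tau) * (target - y)
    # Intrinsic relaxation dynamics plus the decaying teacher term.
    y += 0.1 * (target - y) + forcing

# Once the forcing has decayed away, the update reduces to the
# conventional (unforced) dynamics, as the abstract describes.
```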

  14. Adaptation to New Microphones Using Artificial Neural Networks With Trainable Activation Functions.

    PubMed

    Siniscalchi, Sabato Marco; Salerno, Valerio Mario

    2016-04-14

    Model adaptation is a key technique that enables a modern automatic speech recognition (ASR) system to adjust its parameters, using a small amount of enrolment data, to the nuances in the speech spectrum due to microphone mismatch in the training and test data. In this brief, we investigate four different adaptation schemes for connectionist (also known as hybrid) ASR systems that learn microphone-specific hidden unit contributions, given some adaptation material. This solution is made possible by adopting one of the following schemes: 1) the use of Hermite activation functions; 2) the introduction of bias and slope parameters in the sigmoid activation functions; 3) the injection of an amplitude parameter specific to each sigmoid unit; or 4) the combination of 2) and 3). Such a simple yet effective solution allows the adapted model to be stored in a small-sized storage space, a highly desirable property of adaptation algorithms for deep neural networks that are suitable for large-scale online deployment. Experimental results indicate that the investigated approaches reduce word error rates on the standard Spoke 6 task of the Wall Street Journal corpus compared with unadapted ASR systems. Moreover, the proposed adaptation schemes all perform better than simple multicondition training and compare favorably with conventional linear regression-based approaches while using up to 15 orders of magnitude fewer parameters. The proposed adaptation strategies are also effective when a single adaptation sentence is available.
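    Schemes 2)-4) amount to giving each sigmoid unit a few trainable scalars. One possible parameterization (an illustrative choice, not necessarily the paper's exact form):

```python
import math

def adaptive_sigmoid(x, amplitude=1.0, slope=1.0, bias=0.0):
    """Sigmoid with per-unit trainable amplitude, slope and bias.
    Adaptation then updates only these scalars, leaving the weight
    matrices untouched."""
    return amplitude / (1.0 + math.exp(-(slope * x + bias)))
```

    Because adaptation touches only one to three scalars per hidden unit instead of full weight matrices, the adapted model can be stored in a very small space, which is the storage advantage the abstract highlights.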

  15. Strong geomagnetic activity forecast by neural networks under dominant southern orientation of the interplanetary magnetic field

    NASA Astrophysics Data System (ADS)

    Valach, Fridrich; Bochníček, Josef; Hejda, Pavel; Revallo, Miloš

    2014-02-01

    The paper deals with the relation of the southern orientation of the north-south component Bz of the interplanetary magnetic field to geomagnetic activity (GA), and proposes a method of using these findings to forecast potentially dangerous high GA. We have found that on a day with very high GA, hourly averages of Bz with a negative sign typically occur at least 16 times. Since it is very difficult to estimate the orientation of Bz in the immediate vicinity of the Earth one day or even a few days in advance, we suggest using a neural-network model that assumes the worse of the two possibilities when forecasting the danger of high GA: a dominant southern orientation of the interplanetary magnetic field. The input quantities of the proposed model were information about X-ray flares, type II and IV radio bursts, and coronal mass ejections (CMEs). Comparing the GA forecasts with observations, we obtain values of the Hanssen-Kuiper skill score ranging from 0.463 to 0.727, which are usual values for similar space-weather forecasts. The proposed model provides forecasts of potentially dangerous high geomagnetic activity should the interplanetary CME (ICME), the originator of geomagnetic storms, hit the Earth under the most unfavorable configuration of cosmic magnetic fields. We cannot know in advance whether the unfavorable configuration will occur; we only know that it occurs with a probability of 31%.
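    The Hanssen-Kuiper skill score used for evaluation is the hit rate minus the false-alarm rate, computed from a 2x2 forecast contingency table. A short sketch with a hypothetical table (the counts below are invented for illustration):

```python
def hanssen_kuiper(hits, misses, false_alarms, correct_negatives):
    """Hanssen-Kuiper skill score (true skill statistic):
    hit rate minus false-alarm rate, ranging over [-1, 1]."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate - false_alarm_rate

# Hypothetical contingency table for storm-day forecasts:
score = hanssen_kuiper(hits=8, misses=2, false_alarms=5, correct_negatives=35)
# 8/10 - 5/40 = 0.675, inside the 0.463-0.727 range reported above
```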

  16. Finite-time robust stabilization of uncertain delayed neural networks with discontinuous activations via delayed feedback control.

    PubMed

    Wang, Leimin; Shen, Yi; Sheng, Yin

    2016-04-01

    This paper is concerned with the finite-time robust stabilization of delayed neural networks (DNNs) in the presence of discontinuous activations and parameter uncertainties. By using the nonsmooth analysis and control theory, a delayed controller is designed to realize the finite-time robust stabilization of DNNs with discontinuous activations and parameter uncertainties, and the upper bound of the settling time functional for stabilization is estimated. Finally, two examples are provided to demonstrate the effectiveness of the theoretical results.

  17. Interacting neural networks

    NASA Astrophysics Data System (ADS)

    Metzler, R.; Kinzel, W.; Kanter, I.

    2000-08-01

    Several scenarios of interacting neural networks which are trained either in an identical or in a competitive way are solved analytically. In the case of identical training, each perceptron receives the output of its neighbor. The symmetry of the stationary state as well as the sensitivity to the training algorithm used are investigated. Two competitive perceptrons trained on mutually exclusive learning aims, and a perceptron trained on the opposite of its own output, are examined analytically. An ensemble of competitive perceptrons is used as the decision-making algorithm in a model of a closed market (the El Farol Bar problem, or the Minority Game, in which a set of agents must each make a binary decision); each network is trained on the history of minority decisions. This ensemble of perceptrons relaxes to a stationary state whose performance can be better than random.
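    The minority-game setup can be simulated directly: each perceptron agent decides from the recent history of minority decisions, the less-chosen side wins, and losers update perceptron-style. A toy version, in which agent count, memory length, and learning rate are arbitrary choices rather than the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

n_agents, memory, steps = 15, 5, 500      # odd agent count -> no ties
W = rng.normal(size=(n_agents, memory))   # one perceptron per agent
history = rng.choice([-1.0, 1.0], size=memory)  # recent minority decisions
losses = np.zeros(n_agents)

for _ in range(steps):
    decisions = np.sign(W @ history)
    decisions[decisions == 0] = 1.0
    minority = -np.sign(decisions.sum())  # the less-chosen side wins
    losses += decisions != minority
    # Each losing perceptron trains on the history of minority decisions.
    for i in range(n_agents):
        if decisions[i] != minority:
            W[i] += 0.1 * minority * history
    history = np.roll(history, 1)
    history[0] = minority

loss_rate = losses.mean() / steps
# Fewer than half the agents can win each round, so the mean loss
# rate necessarily stays above 0.5; training can only shape how the
# losses are distributed among the agents.
```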

  18. Fault Diagnosis Based on Chemical Sensor Data with an Active Deep Neural Network.

    PubMed

    Jiang, Peng; Hu, Zhixin; Liu, Jun; Yu, Shanen; Wu, Feng

    2016-10-13

    Big sensor data provide significant potential for chemical fault diagnosis, which involves the baseline values of security, stability and reliability in chemical processes. A deep neural network (DNN) with novel active learning for chemical fault diagnosis is presented in this study. The method uses a large amount of chemical sensor data and combines deep learning with an active learning criterion to address the difficulty of consecutive fault diagnosis. A DNN with a deep architecture, instead of a shallow one, can be developed through deep learning to learn a suitable feature representation from raw sensor data in an unsupervised manner using a stacked denoising auto-encoder (SDAE), working through a layer-by-layer successive learning process. The features are fed to a top Softmax regression layer to construct the discriminative fault characteristics for diagnosis in a supervised manner. Considering the expensive and time-consuming labeling of sensor data in chemical applications, in contrast to the available methods, we employ a novel active learning criterion suited to the particularities of chemical processes, a combination of the Best vs. Second Best (BvSB) criterion and a Lowest False Positive (LFP) criterion, for further fine-tuning of the diagnosis model in an active rather than passive manner. That is, we allow the model to rank the most informative sensor data to be labeled for updating the DNN parameters during the interaction phase. The effectiveness of the proposed method is validated on two well-known industrial datasets. Results indicate that the proposed method obtains superior diagnosis accuracy and provides significant performance improvement in accuracy and false positive rate with less labeled chemical sensor data through active learning, compared with existing methods.
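    The BvSB half of the criterion ranks unlabeled samples by the gap between the two largest softmax probabilities, querying the smallest gaps first; the LFP part is omitted in this sketch, and the softmax outputs below are hypothetical:

```python
import numpy as np

def bvsb_margin(softmax_probs):
    """Best-vs-Second-Best margin per sample: a small margin means
    the model is least certain between its top two classes, so that
    sample is the most informative one to label next."""
    top2 = np.sort(softmax_probs, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

# Hypothetical DNN softmax outputs for four unlabeled sensor readings:
probs = np.array([
    [0.98, 0.01, 0.01],   # confident -> large margin
    [0.40, 0.35, 0.25],   # ambiguous -> small margin
    [0.70, 0.20, 0.10],
    [0.51, 0.48, 0.01],   # most ambiguous -> queried first
])
query_order = np.argsort(bvsb_margin(probs))  # label smallest margins first
```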

  19. Fault Diagnosis Based on Chemical Sensor Data with an Active Deep Neural Network

    PubMed Central

    Jiang, Peng; Hu, Zhixin; Liu, Jun; Yu, Shanen; Wu, Feng

    2016-01-01

    Big sensor data provide significant potential for chemical fault diagnosis, which underpins the security, stability and reliability of chemical processes. A deep neural network (DNN) with novel active learning for inducing chemical fault diagnosis is presented in this study. The method uses a large amount of chemical sensor data and combines deep learning with an active learning criterion to address the difficulty of consecutive fault diagnosis. A DNN with a deep architecture, instead of a shallow one, is developed through deep learning to learn a suitable feature representation from raw sensor data in an unsupervised manner using a stacked denoising auto-encoder (SDAE), working through a layer-by-layer successive learning process. The features are fed to a top Softmax regression layer to construct discriminative fault characteristics for diagnosis in a supervised manner. Considering the expensive and time-consuming labeling of sensor data in chemical applications, and in contrast to the available methods, we employ a novel active learning criterion tailored to chemical processes, a combination of the Best vs. Second Best criterion (BvSB) and a Lowest False Positive criterion (LFP), for further fine-tuning of the diagnosis model in an active rather than passive manner. That is, we allow the model to rank the most informative sensor data to be labeled for updating the DNN parameters during the interaction phase. The effectiveness of the proposed method is validated on two well-known industrial datasets. Results indicate that the proposed method obtains superior diagnosis accuracy and, through further active learning, provides significant improvement in accuracy and false positive rate with less labeled chemical sensor data compared with existing methods. PMID:27754386

  20. Neural-activity mapping of memory-based dominance in the crow: neural networks integrating individual discrimination and social behaviour control.

    PubMed

    Nishizawa, K; Izawa, E-I; Watanabe, S

    2011-12-01

    Large-billed crows (Corvus macrorhynchos), highly social birds, form stable dominance relationships based on the memory of win/loss outcomes of first encounters and on individual discrimination. This socio-cognitive behaviour predicts the existence of neural mechanisms that integrate social behaviour control with individual discrimination. This study aimed to elucidate the neural substrates of memory-based dominance in crows. First, the formation of dominance relationships was confirmed between males in a dyadic encounter paradigm. Next, we examined whether neural activities in 22 focal nuclei of the pallium and subpallium were correlated with social behaviour and stimulus familiarity after exposure to dominant/subordinate familiar individuals and unfamiliar conspecifics. Neural activity was determined by measuring the expression level of the immediate-early-gene (IEG) protein Zenk. Crows displayed aggressive and/or submissive behaviour to opponents less frequently but more discriminatively in subsequent encounters, suggesting stable dominance based on memory, including the win/loss outcomes of first encounters and individual discrimination. Neural correlates of aggressive and submissive behaviour were found in the limbic subpallium, including the septum, bed nucleus of the striae terminalis (BST), and nucleus taeniae of the amygdala (TnA), as were correlates of the familiarity factor in BST and TnA. By contrast, correlates of social behaviour were scarce in the pallium, whereas correlates of familiarity with the exposed individuals were identified in the hippocampus, medial meso-/nidopallium, and ventro-caudal nidopallium. Given the anatomical connections and neural response patterns of the focal nuclei, neural networks connecting the pallium and limbic subpallium via the hippocampus could be involved in integrating individual discrimination and social behaviour control in memory-based dominance in the crow.

  1. Dynamic interactions in neural networks

    SciTech Connect

    Arbib, M.A. ); Amari, S. )

    1989-01-01

    The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.

  2. Technology Assessment of Neural Networks

    DTIC Science & Technology

    1989-02-13

    Unlike a Von Neumann type of computer which needs to be programmed to carry out an information-processing function, neural networks are promised as...trainable through a series of trials to learn how to process information. An assessment of the current, near-term (1995), and long-term (2010) trends in Neural Networks is given.

  3. Phase Detection Using Neural Networks.

    DTIC Science & Technology

    1997-03-10

    A likelihood of detecting a reflected signal characterized by phase discontinuities and background noise is enhanced by utilizing neural networks to...identify coherency intervals. The received signal is processed into a predetermined format such as a digital time series. Neural networks perform

  4. Neural network applications in telecommunications

    NASA Technical Reports Server (NTRS)

    Alspector, Joshua

    1994-01-01

    Neural network capabilities include automatic and organized handling of complex information, quick adaptation to continuously changing environments, nonlinear modeling, and parallel implementation. This viewgraph presentation presents Bellcore work on applications, learning chip computational function, learning system block diagram, neural network equalization, broadband access control, calling-card fraud detection, software reliability prediction, and conclusions.

  5. Neural Networks for the Beginner.

    ERIC Educational Resources Information Center

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  6. Hybrid Neural Network for Pattern Recognition.

    DTIC Science & Technology

    1997-02-03

    two one-layer neural networks and the second stage comprises a feedforward two-layer neural network. A method for recognizing patterns is also...topological representations of the input patterns using the first and second neural networks. The method further comprises providing a third neural network for...classifying and recognizing the input patterns and training the third neural network with a back-propagation algorithm so that the third neural network recognizes at least one pattern of interest.

  7. Neural networks to simulate regional ground water levels affected by human activities.

    PubMed

    Feng, Shaoyuan; Kang, Shaozhong; Huo, Zailin; Chen, Shaojun; Mao, Xiaomin

    2008-01-01

    In arid regions, human activities like agriculture and industry often require large ground water extractions. Under these circumstances, appropriate ground water management policies are essential for preventing aquifer overdraft and thereby protecting critical ecologic and economic objectives. Identification of such policies requires accurate simulation of the ground water system's response to hydrological, meteorological, and human factors. In this research, artificial neural networks (ANNs) were developed and applied to investigate the effects of these factors on ground water levels in the Minqin oasis, located in the lower reach of the Shiyang River Basin in Northwest China. Using data spanning 1980 through 1997, two ANNs were developed to model and simulate dynamic ground water levels for the two subregions of Xinhe and Xiqu. The ANN models achieved high predictive accuracy, validating with a mean absolute error of 0.37 m or less. Sensitivity analyses conducted with the models demonstrate that agricultural ground water extraction for irrigation is the predominant factor responsible for declining ground water levels, exacerbated by a reduction in regional surface water inflows. ANN simulations indicate that it is necessary to reduce the size of the irrigation area to mitigate ground water level declines in the oasis. Unlike previous research, this study demonstrates that ANN modeling can capture important temporally and spatially distributed human factors, like agricultural practices and water extraction patterns, on a regional basin (or subbasin) scale, providing both high-accuracy prediction and enhanced understanding of the critical factors influencing regional ground water conditions.

  8. Dual-memory neural networks for modeling cognitive activities of humans via wearable sensors.

    PubMed

    Lee, Sang-Woo; Lee, Chung-Yeon; Kwak, Dong-Hyun; Ha, Jung-Woo; Kim, Jeonghee; Zhang, Byoung-Tak

    2017-02-20

    Wearable devices, such as smart glasses and watches, allow for continuous recording of everyday life in the real world over an extended period of time, or even lifelong. This possibility helps us better understand the cognitive behavior of humans in real life, as well as build human-aware intelligent agents for practical purposes. However, modeling human cognitive activity from a wearable-sensor data stream is challenging, because learning new information often results in the loss of previously acquired information, a problem known as catastrophic forgetting. Here we propose a deep-learning neural network architecture that resolves the catastrophic forgetting problem. Based on the neurocognitive theory of the complementary learning systems of the neocortex and hippocampus, we introduce a dual memory architecture (DMA) that, on one hand, slowly acquires structured knowledge representations and, on the other hand, rapidly learns the specifics of individual experiences. The DMA system learns continuously through incremental feature adaptation and weight transfer. We evaluate its performance on two real-life datasets, the CIFAR-10 image-stream dataset and the 46-day Lifelog dataset collected from Google Glass, showing that the proposed model outperforms other online learning methods.

  9. Threshold control of chaotic neural network.

    PubMed

    He, Guoguang; Shrimali, Manish Dev; Aihara, Kazuyuki

    2008-01-01

    The chaotic neural network constructed with chaotic neurons exhibits rich dynamic behaviour with a nonperiodic associative memory. In the chaotic neural network, however, it is difficult to distinguish the stored patterns in the output patterns because of the chaotic state of the network. In order to apply the nonperiodic associative memory to information search, pattern recognition, etc., it is necessary to control chaos in the chaotic neural network. We have studied the chaotic neural network with threshold-activated coupling, which provides a controlled network with associative memory dynamics. The network converges to one of its stored patterns and/or reverse patterns, whichever has the smallest Hamming distance from the initial state of the network. The range of the threshold applied to control the neurons in the network depends on the noise level in the initial pattern and decreases as the noise increases. Chaos control in the chaotic neural network by threshold-activated coupling at varying time intervals provides controlled output patterns with different temporal periods, which depend upon the control parameters.
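    The controlled behaviour described above, convergence to the stored (or reverse) pattern nearest in Hamming distance, is the classic associative-memory recall. The following is a plain Hopfield sketch of that recall behaviour only, not the chaotic-neuron model or its threshold control; all names are illustrative.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule over +/-1 patterns, with zeroed self-connections."""
    P = np.array(patterns, dtype=float)
    W = P.T @ P
    np.fill_diagonal(W, 0.0)
    return W / len(P)

def recall(W, state, steps=20):
    """Synchronous sign updates; the state settles on a nearby stored pattern."""
    s = np.array(state, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s
```

    Starting from a stored pattern with one bit flipped, a single update step already restores the original pattern; starting from its bitwise negation would restore the reverse pattern, matching the behaviour quoted in the abstract.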

  10. Neural Network Development Tool (NETS)

    NASA Technical Reports Server (NTRS)

    Baffes, Paul T.

    1990-01-01

    Artificial neural networks formed from hundreds or thousands of simulated neurons, connected in manner similar to that in human brain. Such network models learning behavior. Using NETS involves translating problem to be solved into input/output pairs, designing network configuration, and training network. Written in C.

  11. Forecasting geomagnetic activity indices using the Boyle index through artificial neural networks

    NASA Astrophysics Data System (ADS)

    Balasubramanian, Ramkumar

    2010-11-01

    Adverse space weather conditions affect various sectors, leaving both human lives and technologies highly susceptible. This dissertation introduces a new set of algorithms suitable for short-term space weather forecasts, with an enhanced lead time and better accuracy in predicting the Kp, Dst and AE indices than some leading models. Kp is a 3-hour averaged global geomagnetic activity index well suited to midlatitude regions. The Dst index, an hourly index calculated from four ground-based magnetic field measurements near the equator, measures the energy of the Earth's ring current. The Auroral Electrojet (AE) indices are hourly indices used to characterize global geomagnetic activity in the auroral zone. Our algorithms can predict these indices purely from solar wind data with lead times of up to 6 hours. We have trained and tested an ANN (Artificial Neural Network) over a complete solar cycle for this purpose. Over the last couple of decades, ANNs have been successful for temporal prediction problems, among other advanced nonlinear techniques. Our ANN-based algorithms receive near-real-time inputs either from ACE (Advanced Composition Explorer), located at L1, together with a handful of ground-based magnetometers, or from ACE alone. The Boyle potential, phi = 10^-4 (v/(km/s))^2 + 11.7 (B/nT) sin^3(theta/2) kV, or the Boyle Index (BI), is an empirically derived formula that approximates the Earth's polar cap potential and is easily derivable in real time from ACE solar wind data. The logarithms of both the 3-hour and 1-hour averages of the Boyle Index correlate well with the subsequent Kp, Dst and AE: Kp = 8.93 log10<BI> - 12.55, Dst = 0.355<BI> - 6.48, and AE = 5.87<BI> - 83.46. Inputs to our ANN models have greatly benefitted from the BI and its proven record as a forecasting parameter since its initiation in October 2003. A preconditioning event tunes the magnetosphere to a specific state before an impending geomagnetic storm. The neural net not only improves the
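    The Boyle Index and the Kp correlation quoted in the record reduce to two one-line functions. This sketch implements only those two relations (the Dst and AE coefficients appear with their BI terms dropped in the record, so they are omitted); v is the solar wind speed in km/s, B the IMF magnitude in nT, and theta the IMF clock angle in radians.

```python
import math

def boyle_index(v_km_s, b_nt, theta_rad):
    """Boyle Index in kV: 1e-4 * v^2 + 11.7 * B * sin^3(theta/2)."""
    return 1e-4 * v_km_s ** 2 + 11.7 * b_nt * math.sin(theta_rad / 2.0) ** 3

def kp_estimate(bi_avg):
    """Empirical correlation from the record: Kp = 8.93 * log10(<BI>) - 12.55."""
    return 8.93 * math.log10(bi_avg) - 12.55
```

    For southward IMF (theta = pi) with v = 400 km/s and B = 5 nT, the index is 16 + 58.5 = 74.5 kV, corresponding to a moderately disturbed Kp of roughly 4.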

  12. Neural networks in astronomy.

    PubMed

    Tagliaferri, Roberto; Longo, Giuseppe; Milano, Leopoldo; Acernese, Fausto; Barone, Fabrizio; Ciaramella, Angelo; De Rosa, Rosario; Donalek, Ciro; Eleuteri, Antonio; Raiconi, Giancarlo; Sessa, Salvatore; Staiano, Antonino; Volpicelli, Alfredo

    2003-01-01

    In the last decade, the use of neural networks (NN) and of other soft computing methods has begun to spread in the astronomical community, which, due to the required accuracy of the measurements, is usually reluctant to use automatic tools to perform even the most common tasks of data reduction and data mining. The federation of heterogeneous large astronomical databases foreseen in the framework of the astrophysical virtual observatory and national virtual observatory projects is, however, posing unprecedented data mining and visualization problems, which will find a rather natural and user-friendly answer in artificial intelligence tools based on NNs, fuzzy sets or genetic algorithms. This review is aimed at both astronomers (who often have little knowledge of the methodological background) and computer scientists (who often know little about potentially interesting applications), and is therefore structured as follows: after giving a short introduction to the subject, we summarize the methodological background and focus our attention on some of the most interesting fields of application, namely object extraction and classification, time series analysis, noise identification, and data mining. Most of the original work described in the paper has been performed in the framework of the AstroNeural collaboration (Napoli-Salerno).

  13. The neural basis of cognitive change: reappraisal of emotional faces modulates neural source activity in a frontoparietal attention network.

    PubMed

    Wessing, Ida; Rehbein, Maimu A; Postert, Christian; Fürniss, Tilman; Junghöfer, Markus

    2013-11-01

    Emotions can be regulated effectively via cognitive change, as evidenced by cognitive behavioural therapy. The neural correlates of cognitive change were investigated using reappraisal, a strategy that involves the reinterpretation of emotional stimuli. Hemodynamic studies revealed cortical structures involved in reappraisal and highlighted the role of the prefrontal cortex in regulating subcortical affective processing. Studies using event-related potentials elucidated the timing of reappraisal by showing effective modulation of the Late Positive Potential (LPP) after 300ms but also even earlier effects. The present study investigated the spatiotemporal dynamics of the cortical network underlying cognitive change via inverse source modelling based on whole-head magnetoencephalography (MEG). During MEG recording, 28 healthy participants saw angry and neutral faces and followed instructions designed to down- or up-regulate emotions via reappraisal. Differences between angry and neutral face processing were specifically enhanced during up-regulation, first in the parietal cortex during M170 and across the whole cortex during LPP-M, with particular involvement of the parietal and dorsal prefrontal cortex regions. Thus, our data suggest that the reappraisal of emotional faces involves specific modulations in a frontoparietal attention network.

  14. Neural networks for calibration tomography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur

    1993-01-01

    Artificial neural networks are suitable for performing pattern-to-pattern calibrations. These calibrations are potentially useful for facilities operations in aeronautics, the control of optical alignment, and the like. Computed tomography is compared with neural net calibration tomography for estimating density from its x-ray transform. X-ray transforms are measured, for example, in diffuse-illumination, holographic interferometry of fluids. Computed tomography and neural net calibration tomography are shown to have comparable performance for a 10 degree viewing cone and 29 interferograms within that cone. The system of tomography discussed is proposed as a relevant test of neural networks and other parallel processors intended for using flow visualization data.

  15. Scheduling Link Activation in Multihop Radio Networks by Means of Hopfield Neural Network Techniques

    DTIC Science & Technology

    1991-09-03

    CDMA or non-spread-spectrum systems. Sequence Conflicts: The sequential scheduling requirement is a further restriction of the problem. We declare that...the source node. Thus, overall, we declare the occurrence of a scheduling conflict if there is a primary conflict, or if there is a sequence conflict...we declare link 1,1 ineligible for activation in slot 3, and enter an "i" in cell 1,1,3.

  16. Multistability of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays.

    PubMed

    Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde

    2015-11-01

    The problem of coexistence and dynamical behaviors of multiple equilibrium points is addressed for a class of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays. By virtue of the fixed point theorem, nonsmooth analysis theory and other analytical tools, some sufficient conditions are established to guarantee that such n-dimensional memristive Cohen-Grossberg neural networks can have 5^n equilibrium points, among which 3^n equilibrium points are locally exponentially stable. It is shown that greater storage capacity can be achieved by neural networks with the non-monotonic activation functions introduced herein than by ones with Mexican-hat-type activation functions. In addition, unlike most existing multistability results for neural networks with monotonic activation functions, the 3^n locally stable equilibrium points obtained here are located both in saturated regions and unsaturated regions. The theoretical findings are verified by an illustrative example with computer simulations.

  17. Artificial neural networks from MATLAB in medicinal chemistry. Bayesian-regularized genetic neural networks (BRGNN): application to the prediction of the antagonistic activity against human platelet thrombin receptor (PAR-1).

    PubMed

    Caballero, Julio; Fernández, Michael

    2008-01-01

    Artificial neural networks (ANNs) have been widely used for medicinal chemistry modeling. In the last two decades, many reports have used the MATLAB environment as an adequate platform for programming ANNs. Some of these reports comprise a variety of applications intended to quantitatively or qualitatively describe structure-activity relationships. A powerful tool is obtained when Bayesian-regularized artificial neural networks (BRANNs) are combined with a genetic algorithm (GA): Bayesian-regularized genetic neural networks (BRGNNs). BRGNNs can model complicated relationships between explanatory variables and dependent variables, and the methodology is therefore regarded as a useful tool for QSAR analysis. In order to demonstrate the use of BRGNNs, we developed a reliable method for predicting the antagonistic activity of 5-amino-3-arylisoxazole derivatives against the Human Platelet Thrombin Receptor (PAR-1), alongside classical 3D-QSAR methodologies: Comparative Molecular Field Analysis (CoMFA) and Comparative Molecular Similarity Indices Analysis (CoMSIA). In addition, 3D vectors generated from the molecular structures were correlated with antagonistic activities by multivariate linear regression (MLR) and Bayesian-regularized genetic neural networks (BRGNNs). All models were trained with 34 compounds, after which they were evaluated for predictive ability with an additional 6 compounds. CoMFA and CoMSIA were unable to describe this structure-activity relationship, while the BRGNN methodology gave the best results according to the validation statistics.
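    The core BRGNN idea, a genetic algorithm searching over descriptor subsets, each scored by a regularized model, can be sketched compactly. In this sketch, assumed for illustration only, ridge regression stands in for the Bayesian-regularized network, the fitness is the in-sample residual error, and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit_error(X, y, mask, alpha=1.0):
    """Fitness proxy: ridge-regression residual error on the selected descriptor columns."""
    Xs = X[:, mask]
    if Xs.shape[1] == 0:
        return np.inf
    A = Xs.T @ Xs + alpha * np.eye(Xs.shape[1])
    w = np.linalg.solve(A, Xs.T @ y)
    return float(np.mean((Xs @ w - y) ** 2))

def ga_select(X, y, pop=20, gens=30, p_mut=0.1):
    """Tiny genetic algorithm over binary descriptor masks (truncation selection)."""
    n = X.shape[1]
    popu = rng.random((pop, n)) < 0.5
    for _ in range(gens):
        fit = np.array([ridge_fit_error(X, y, ind) for ind in popu])
        parents = popu[np.argsort(fit)[: pop // 2]]       # keep the fitter half
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
            child ^= rng.random(n) < p_mut                # bit-flip mutation
            children.append(child)
        popu = np.vstack([parents, children])
    fit = np.array([ridge_fit_error(X, y, ind) for ind in popu])
    return popu[int(np.argmin(fit))]
```

    On synthetic data where the response depends on only two descriptors, the selected mask reliably includes those two columns, which is the variable-selection behaviour the combined GA/regularized-model approach is meant to deliver.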

  18. Modular, Hierarchical Learning By Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    A modular and hierarchical approach to supervised learning by artificial neural networks leads to networks more structured than those in which all neurons are fully interconnected. These networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, the sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.

  19. Neural Networks for Readability Analysis.

    ERIC Educational Resources Information Center

    McEneaney, John E.

    This paper describes and reports on the performance of six related artificial neural networks that have been developed for the purpose of readability analysis. Two networks employ counts of linguistic variables that simulate a traditional regression-based approach to readability. The remaining networks determine readability from "visual…

  20. Computerized cognitive training restores neural activity within the reality monitoring network in schizophrenia.

    PubMed

    Subramaniam, Karuna; Luks, Tracy L; Fisher, Melissa; Simpson, Gregory V; Nagarajan, Srikantan; Vinogradov, Sophia

    2012-02-23

    Schizophrenia patients suffer from severe cognitive deficits, such as impaired reality monitoring. Reality monitoring is the ability to distinguish the source of internal experiences from outside reality. During reality monitoring tasks, schizophrenia patients make errors identifying "I made it up" items, and even during accurate performance, they show abnormally low activation of the medial prefrontal cortex (mPFC), a region that supports self-referential cognition. We administered 80 hr of computerized training of cognitive processes to schizophrenia patients and found improvement in reality monitoring that correlated with increased mPFC activity. In contrast, patients in a computer games control condition did not show any behavioral or neural improvements. Notably, recovery in mPFC activity after training was associated with improved social functioning 6 months later. These findings demonstrate that a serious behavioral deficit in schizophrenia, and its underlying neural dysfunction, can be improved by well-designed computerized cognitive training, resulting in better quality of life.

  1. A Complexity Theory of Neural Networks

    DTIC Science & Technology

    1990-04-14

    Significant results have been obtained on the computational complexity of analog neural networks and distributed voting. The computing power and...learning algorithms for limited-precision analog neural networks have been investigated. Lower bounds for constant-depth, polynomial-size analog neural networks, and for a limited version of discrete neural networks, have been obtained. The work on distributed voting has important applications for distributed

  2. Collective Computation of Neural Network

    DTIC Science & Technology

    1990-03-15

    Sciences, Beijing ABSTRACT Computational neuroscience is a new branch of neuroscience originating from current research on the theory of computer...scientists working in artificial intelligence engineering and neuroscience. The paper introduces the collective computational properties of model neural...vision research. On this basis, the authors analyzed the significance of the Hopfield model. Key phrases: Computational Neuroscience, Neural Network, Model

  3. Artificial Neural Network Analysis System

    DTIC Science & Technology

    2007-11-02

    Target detection, multi-target tracking and threat identification of ICBM and its warheads by sensor fusion and data fusion of sensors in a fuzzy neural network system based on the compound eye of a fly.

  4. The holographic neural network: Performance comparison with other neural networks

    NASA Astrophysics Data System (ADS)

    Klepko, Robert

    1991-10-01

    The artificial neural network shows promise for use in recognition of high resolution radar images of ships. The holographic neural network (HNN) promises a very large data storage capacity and excellent generalization capability, both of which can be achieved with only a few learning trials, unlike most neural networks, which require on the order of thousands of learning trials. The HNN is specially designed for pattern association storage, and mathematically realizes the storage and retrieval mechanisms of holograms. The pattern recognition capability of the HNN was studied, and its performance was compared with five other commonly used neural networks: the Adaline, Hamming, bidirectional associative memory, recirculation, and back propagation networks. The patterns used for testing represented artificial high resolution radar images of ships, and appear as a two-dimensional topology of peaks with various amplitudes. The performance comparisons showed that the HNN does not perform as well as the other neural networks when using the same test data. However, modification of the data to make it appear more Gaussian distributed improved the performance of the network. The HNN performs best if the data is completely Gaussian distributed.

  5. Artificial Astrocytes Improve Neural Network Performance

    PubMed Central

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  6. Artificial astrocytes improve neural network performance.

    PubMed

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  7. Application of artificial neural network in precise prediction of cement elements percentages based on the neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Eftekhari Zadeh, E.; Feghhi, S. A. H.; Roshani, G. H.; Rezaei, A.

    2016-05-01

    Due to variation of neutron energy spectrum in the target sample during the activation process and to peak overlapping caused by the Compton effect with gamma radiations emitted from activated elements, which results in background changes and consequently complex gamma spectrum during the measurement process, quantitative analysis will ultimately be problematic. Since there is no simple analytical correlation between peaks' counts with elements' concentrations, an artificial neural network for analyzing spectra can be a helpful tool. This work describes a study on the application of a neural network to determine the percentages of cement elements (mainly Ca, Si, Al, and Fe) using the neutron capture delayed gamma-ray spectra of the substance emitted by the activated nuclei as patterns which were simulated via the Monte Carlo N-particle transport code, version 2.7. The Radial Basis Function (RBF) network is developed with four specific peaks related to Ca, Si, Al and Fe, which were extracted as inputs. The proposed RBF model is developed and trained with MATLAB 7.8 software. To obtain the optimal RBF model, several structures have been constructed and tested. The comparison between simulated and predicted values using the proposed RBF model shows that there is a good agreement between them.
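
    A radial-basis-function network of the kind described reduces to Gaussian hidden units plus a linear output layer that can be solved in closed form. The sketch below uses synthetic stand-in data (random "peak counts" mapped linearly to "percentages"), not the MCNP-simulated spectra used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: four normalized peak counts (Ca, Si, Al, Fe)
# per sample, with element "percentages" generated by a known linear map.
X = rng.uniform(0.0, 1.0, size=(50, 4))
W_true = rng.uniform(0.0, 1.0, size=(4, 4))
Y = X @ W_true

def rbf_design(X, centers, width):
    """Gaussian RBF activation of every sample at every center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

centers = X[rng.choice(len(X), 20, replace=False)]   # centers drawn from the data
H = rbf_design(X, centers, width=0.5)
W_out, *_ = np.linalg.lstsq(H, Y, rcond=None)        # closed-form output layer

err = np.abs(H @ W_out - Y).mean()
baseline = np.abs(Y - Y.mean(axis=0)).mean()         # error of predicting column means
print(err < baseline)                                # -> True
```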

  8. Sequential state generation by model neural networks.

    PubMed Central

    Kleinfeld, D

    1986-01-01

    Sequential patterns of neural output activity form the basis of many biological processes, such as the cyclic pattern of outputs that control locomotion. I show how such sequences can be generated by a class of model neural networks that make defined sets of transitions between selected memory states. Sequence-generating networks depend upon the interplay between two sets of synaptic connections. One set acts to stabilize the network in its current memory state, while the second set, whose action is delayed in time, causes the network to make specified transitions between the memories. The dynamic properties of these networks are described in terms of motion along an energy surface. The performance of the networks, both with intact connections and with noisy or missing connections, is illustrated by numerical examples. In addition, I present a scheme for the recognition of externally generated sequences by these networks. PMID:3467316
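
    The two-synapse mechanism in this abstract can be reproduced in a minimal discrete-time caricature (network size, delay, and gain invented for illustration, not the paper's exact model): a fast Hebbian term holds the current memory while a delayed heteroassociative term pushes the state toward the next pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, tau, lam = 200, 4, 6, 2.0

# Four random +/-1 memory patterns; the network should step through
# them cyclically: xi[0] -> xi[1] -> xi[2] -> xi[3] -> xi[0] -> ...
xi = rng.choice([-1.0, 1.0], size=(P, N))

W_fast = xi.T @ xi / N                           # stabilizes the current memory
W_slow = xi[(np.arange(P) + 1) % P].T @ xi / N   # maps memory mu onto mu + 1

s = xi[0].copy()
history = [s.copy() for _ in range(tau)]         # delay line of past states
visits = []
for t in range(200):
    s_delayed = history[-tau]
    s = np.sign(W_fast @ s + lam * W_slow @ s_delayed)
    s[s == 0] = 1.0                              # break ties deterministically
    history.append(s.copy())
    visits.append(int(np.argmax(xi @ s / N)))    # closest memory right now

print(sorted(set(visits)))                       # -> [0, 1, 2, 3]
```

    With the delayed term weighted more heavily than the stabilizing term (lam > 1), the state hops to the next memory roughly every tau steps and cycles through all four patterns.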

  9. Sunspot prediction using neural networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Baffes, Paul

    1990-01-01

    The earliest systematic observation of sunspot activity is attributed to the Chinese in 1382, during the Ming Dynasty (1368 to 1644), when spots on the sun were noticed by viewing it through thick forest-fire smoke. Not until after the 18th century did sunspot levels become more than a source of wonderment and curiosity. Since 1834, reliable sunspot data have been collected by the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Naval Observatory. Recently, considerable effort has been placed upon the study of the effects of sunspots on the ecosystem and the space environment. The efforts of the Artificial Intelligence Section of the Mission Planning and Analysis Division of the Johnson Space Center involving the prediction of sunspot activity using neural network technologies are described.

  10. Antenna analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern
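
    For context, the forward model such a network inverts is the array factor of the linear array. The sketch below evaluates it for a 20-element, half-wavelength-spaced array at 41 pattern samples, with a uniform excitation chosen purely for illustration:

```python
import numpy as np

# Array factor of an N-element uniform linear array with half-wavelength
# spacing: AF(theta) = sum_n a_n * exp(j * pi * n * sin(theta)).
# The network above predicts the excitations a_n from 41 samples of a
# desired pattern; here we just evaluate the forward model.
N = 20
a = np.ones(N, dtype=complex)                    # uniform excitation (illustrative)
theta = np.linspace(-np.pi / 2, np.pi / 2, 41)   # 41 pattern samples, as in the abstract
n = np.arange(N)
AF = np.exp(1j * np.pi * np.outer(np.sin(theta), n)) @ a
pattern = np.abs(AF) / N                         # normalized magnitude

print(round(float(pattern[20]), 6))              # -> 1.0 (broadside, theta = 0)
```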

  11. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    PubMed

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2016-07-14

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. It therefore naturally makes one ponder how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method, fractional calculus, to implement FHNN. First, we implement the fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.
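
    The "long-term memory" of fractional operators mentioned in the abstract is explicit in the Grünwald–Letnikov discretization, where the derivative at the latest sample weights the signal's entire history. A numerical sketch (unrelated to the paper's analog-circuit fractor):

```python
import numpy as np

def gl_weights(alpha, K):
    """Grünwald–Letnikov coefficients w_k = (-1)^k * C(alpha, k), by recurrence."""
    w = np.empty(K + 1)
    w[0] = 1.0
    for k in range(1, K + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_derivative(f_vals, alpha, h):
    """Fractional derivative of order alpha at the last sample,
    weighting every past sample of the signal (nonlocal memory)."""
    w = gl_weights(alpha, len(f_vals) - 1)
    return (w * f_vals[::-1]).sum() / h ** alpha

h = 0.001
t = np.arange(0.0, 2.0, h)
d1 = gl_derivative(t, 1.0, h)       # alpha = 1 recovers the ordinary derivative
print(round(d1, 6))                 # -> 1.0, since d/dt (t) = 1
d_half = gl_derivative(t, 0.5, h)   # half-order derivative of f(t) = t
print(round(d_half, 3))             # close to the exact 2*sqrt(t/pi) at t ~= 2
```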

  12. An Asynchronous Recurrent Network of Cellular Automaton-Based Neurons and Its Reproduction of Spiking Neural Network Activities.

    PubMed

    Matsubara, Takashi; Torikai, Hiroyuki

    2016-04-01

    Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, the traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array implementations confirm that the presented network requires lower computational resources.

  13. Neural networks in the former Soviet Union

    SciTech Connect

    Wunsch, D.C. II.

    1993-01-01

    A brief overview is given of neural networks activities in the former Soviet Union that have potential aerospace applications. Activities at institutes in Moscow, the former Leningrad, Kiev, Taganrog, Rostov-on-Don, and Krasnoyarsk are addressed, including the most important scientists involved. 21 refs.

  14. Signal dispersion within a hippocampal neural network

    NASA Technical Reports Server (NTRS)

    Horowitz, J. M.; Mates, J. W. B.

    1975-01-01

    A model network is described, representing two neural populations coupled so that one population is inhibited by activity it excites in the other. Parameters and operations within the model represent EPSPs, IPSPs, neural thresholds, conduction delays, background activity and spatial and temporal dispersion of signals passing from one population to the other. Simulations of single-shock and pulse-train driving of the network are presented for various parameter values. Neuronal events from 100 to 300 msec following stimulation are given special consideration in model calculations.
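
    A two-population rate model with delayed inhibitory feedback, in the spirit of the network described above, can be integrated in a few lines. All constants here (time constants, weights, delay) are illustrative, not the paper's parameters; the point is that the delayed negative feedback produces an overshoot-and-suppression transient in the excited population.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two rate-coded populations: E excites I, and I feeds inhibition back
# onto E with a conduction delay (illustrative constants throughout).
dt, T, delay_steps = 0.001, 1.0, 20      # 20 ms delay at 1 ms resolution
steps = int(T / dt)
tau_e = tau_i = 0.01                     # 10 ms time constants
w_ei, w_ie, drive = 12.0, 12.0, 2.0

E = np.zeros(steps)
I = np.zeros(steps)
for t in range(1, steps):
    I_delayed = I[max(t - delay_steps, 0)]
    E[t] = E[t-1] + dt / tau_e * (-E[t-1] + sigmoid(drive - w_ie * I_delayed))
    I[t] = I[t-1] + dt / tau_i * (-I[t-1] + sigmoid(w_ei * E[t-1] - 4.0))

# E rises freely until the delayed inhibition arrives, then is suppressed.
print(round(float(E.max()), 2), round(float(E[-1]), 2))
```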

  15. Optical neural stimulation modeling on degenerative neocortical neural networks

    NASA Astrophysics Data System (ADS)

    Zverev, M.; Fanjul-Vélez, F.; Salas-García, I.; Arce-Diego, J. L.

    2015-07-01

    Neurodegenerative diseases usually appear at advanced age. Medical advances make people live longer and, as a consequence, the number of neurodegenerative diseases continuously grows. There is still no cure for these diseases, but several brain stimulation techniques have been proposed to improve patients' condition. One of them is Optical Neural Stimulation (ONS), which is based on the application of optical radiation over specific brain regions. The outer cerebral zones can be noninvasively stimulated, without the common drawbacks associated with surgical procedures. This work focuses on the analysis of ONS effects in stimulated neurons to determine their influence on neuronal activity. For this purpose a neural network model has been employed. The results show the neural network behavior when the stimulation is provided by means of different optical radiation sources and constitute a first approach to adjusting the optical light source parameters to stimulate specific neocortical areas.

  16. Frequency decoding of periodically timed action potentials through distinct activity patterns in a random neural network

    NASA Astrophysics Data System (ADS)

    Reichenbach, Tobias; Hudspeth, A. J.

    2012-11-01

    Frequency discrimination is a fundamental task of the auditory system. The mammalian inner ear, or cochlea, provides a place code in which different frequencies are detected at different spatial locations. However, a temporal code based on spike timing is also available: action potentials evoked in an auditory-nerve fiber by a low-frequency tone occur at a preferred phase of the stimulus—they exhibit phase locking—and thus provide temporal information about the tone's frequency. Humans employ this temporal information for discrimination of low frequencies. How might such temporal information be read out in the brain? Here we employ statistical and numerical methods to demonstrate that recurrent random neural networks in which connections between neurons introduce characteristic time delays, and in which neurons require temporally coinciding inputs for spike initiation, can perform sharp frequency discrimination when stimulated with phase-locked inputs. Although the frequency resolution achieved by such networks is limited by the noise in phase locking, the resolution for realistic values reaches the tiny frequency difference of 0.2% that has been measured in humans.
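
    The core mechanism, coincidence detection between direct and delayed phase-locked spike trains, can be demonstrated directly with toy parameters (this is a caricature, not the paper's recurrent network): a fixed transmission delay selects the frequency whose period matches it.

```python
import numpy as np

rng = np.random.default_rng(2)

def phase_locked_spikes(freq, T=1.0, jitter=0.0003):
    """Spike times locked to a preferred phase of each stimulus cycle."""
    times = np.arange(0.0, T, 1.0 / freq)
    return times + rng.normal(0.0, jitter, size=times.size)

def coincidences(spikes, delay, window=0.001):
    """Count delayed spikes that coincide (within `window`) with a direct spike."""
    delayed = spikes + delay
    return sum(np.any(np.abs(spikes - t) < window) for t in delayed)

delay = 0.005                         # a 5 ms delay selects ~200 Hz (period 5 ms)
freqs = [150.0, 200.0, 250.0]
counts = [coincidences(phase_locked_spikes(f), delay) for f in freqs]
print(freqs[int(np.argmax(counts))])  # -> 200.0, the delay-matched frequency
```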

  17. Wavelet differential neural network observer.

    PubMed

    Chairez, Isaac

    2009-09-01

    State estimation for uncertain systems affected by external noises is an important problem in control theory. This paper deals with a state observation problem when the dynamic model of a plant contains uncertainties or it is completely unknown. A differential neural network (NN) approach is applied in this uninformative situation, but with activation functions described by wavelets. A new learning law, containing an adaptive adjustment rate, is suggested to imply the stability condition for the free parameters of the observer. Nominal weights are adjusted during the preliminary training process using the least mean square (LMS) method. Lyapunov theory is used to obtain the upper bounds for the weights dynamics as well as for the mean squared estimation error. Two numeric examples illustrate this approach: first, a nonlinear electric system governed by Chua's equation, and second, the Lorenz oscillator. Both systems are assumed to be affected by external perturbations and their parameters are unknown.
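
    A wavelet network in the loose sense used here is a layer of shifted and scaled wavelet activations with a linear readout. The sketch below fits such a layer by least squares to a toy signal; the Mexican-hat wavelet and all parameters are illustrative, and this is not the observer's adaptive learning law:

```python
import numpy as np

def mexican_hat(x):
    """Second derivative of a Gaussian ("Mexican hat"), a common wavelet
    choice for wavelet activation functions."""
    return (1.0 - x ** 2) * np.exp(-x ** 2 / 2.0)

# Hidden units apply shifted/scaled wavelets to a scalar input; the
# output weights are fitted in closed form by least squares.
t = np.linspace(-3.0, 3.0, 200)
target = np.sin(2.0 * t) * np.exp(-t ** 2 / 4.0)   # toy signal to approximate

shifts = np.linspace(-3.0, 3.0, 25)
scale = 0.5
H = mexican_hat((t[:, None] - shifts[None, :]) / scale)
w, *_ = np.linalg.lstsq(H, target, rcond=None)

err = np.abs(H @ w - target).max()
print(round(float(err), 3))          # small worst-case approximation error
```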

  18. Dynamic Attractors and Basin Class Capacity in Binary Neural Networks

    DTIC Science & Technology

    1994-12-21

    The wide repertoire of attractors and basins of attraction that appear in dynamic neural networks not only serve as models of brain activity patterns...limitations of static neural networks by use of dynamic attractors and their basins. The results show that dynamic networks have a high capacity for

  19. The application of the multi-alternative approach in active neural network models

    NASA Astrophysics Data System (ADS)

    Podvalny, S.; Vasiljev, E.

    2017-02-01

    The article considers the construction of intelligent systems based on artificial neural networks. We discuss the basic discrepancies between artificial neural networks and their biological prototypes. It is shown that the main reason for these discrepancies is the structural immutability of neural network models during learning, that is, their passivity. Based on the modern understanding of the biological nervous system as a structured ensemble of nerve cells, it is proposed to abandon attempts to simulate its work at the level of elementary neuron processes and instead to reproduce the information structure of data storage and processing on the basis of sufficiently general evolutionary principles of multialternativity, i.e. multi-level structural models, diversity and modularity. A method for implementing these principles is offered, using a faceted memory organization in a neural network with a rearrangeable active structure. An example is given of the implementation of an active facet-type neural network in an intelligent decision-making system under conditions of critical-event development in an electrical distribution system.

  20. Localizing Tortoise Nests by Neural Networks.

    PubMed

    Barbuti, Roberto; Chessa, Stefano; Micheli, Alessio; Pucci, Rita

    2016-01-01

    The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging, and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.

  1. Localizing Tortoise Nests by Neural Networks

    PubMed Central

    2016-01-01

    The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging, and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition. PMID:26985660
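
    The input-delay architecture reduces to a tapped delay line feeding a feedforward network. The sketch below builds such delayed windows from hypothetical one-axis accelerometer traces (both signal classes are invented for illustration) and fits a single linear unit in place of the paper's IDNN:

```python
import numpy as np

rng = np.random.default_rng(4)

def delay_embed(signal, n_taps):
    """Tapped delay line: each row is the current sample plus the
    previous n_taps - 1 samples -- the input layout of an IDNN."""
    return np.stack([signal[i:i + n_taps]
                     for i in range(len(signal) - n_taps + 1)])

# Hypothetical 1-axis accelerometer traces (purely illustrative): digging
# modeled as a sustained tilt plus noise, walking as zero-mean noise.
digging = 0.6 + 0.2 * rng.normal(size=400)
walking = 0.2 * rng.normal(size=400)

X = np.vstack([delay_embed(digging, 10), delay_embed(walking, 10)])
y = np.concatenate([np.ones(len(X) // 2), np.zeros(len(X) // 2)])

# A single linear unit on the delayed window stands in for the MLP;
# its weights and bias are fitted in closed form by least squares.
A = np.hstack([X, np.ones((len(X), 1))])       # append a bias column
w, *_ = np.linalg.lstsq(A, y, rcond=None)
acc = ((A @ w > 0.5) == (y > 0.5)).mean()
print(round(float(acc), 3))                    # near-perfect on this easy toy data
```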

  2. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2002-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NO{sub x} formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, online, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate

  3. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2004-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate around

  4. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2004-03-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing co-funding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate around

  5. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2003-12-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NO{sub x} formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate

  6. Imaging evolutionarily conserved neural networks: preferential activation of the olfactory system by food-related odor.

    PubMed

    Kulkarni, Praveen; Stolberg, Tara; Sullivan Jr, J M; Ferris, Craig F

    2012-04-21

    Rodents routinely forage and rely on hippocampal-dependent spatial memory to guide them to sources of caloric rich food in their environment. Has evolution affected the olfactory system and its connections to the hippocampus and limbic cortex, so that rodents have an innate sensitivity to energy rich food and its location? To test this notion, we used functional magnetic resonance imaging in awake rats to observe changes in brain activity in response to four odors: benzaldehyde (almond odor), isoamyl acetate (banana odor), methyl benzoate (rosy odor), and limonene (citrus odor). We chose the almond odor because nuts are high in calories and would be expected to convey greater valence as compared to the other odors. Moreover, the standard food chow is devoid of nuts, so laboratory bred rats would not have any previous exposure to this food. Activation maps derived from computational analysis using a 3D segmented rat MRI atlas were dramatically different between odors. Animals exposed to banana, rosy and citrus odors showed modest activation of the primary olfactory system, hippocampus and limbic cortex. However, animals exposed to almond showed a robust increase in brain activity in the primary olfactory system particularly the main olfactory bulb, anterior olfactory nucleus and tenia tecta. The most significant difference in brain activation between odors was observed in the hippocampus and limbic cortex. These findings show that fMRI can be used to identify neural circuits that have an innate sensitivity to environmental stimuli that may help in an animal's survival.

  7. Signal Approximation with a Wavelet Neural Network

    DTIC Science & Technology

    1992-12-01

    specialized electronic devices like the Intel Electronically Trainable Analog Neural Network (ETANN) chip. The WNN representation allows the...accurately approximated with a WNN trained with irregularly sampled data. Keywords: Signal approximation; Wavelet neural network.

  8. A Neural Network Based Speech Recognition System

    DTIC Science & Technology

    1990-02-01

    encoder and identifies individual words. This use of neural networks offers two advantages over conventional algorithmic detectors: the detection...environment. Keywords: Artificial intelligence; Neural networks; Back propagation; Speech recognition.

  9. Plant Growth Models Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Bubenheim, David

    1997-01-01

    In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.

  10. Neural Networks for Flight Control

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1996-01-01

    Neural networks are being developed at NASA Ames Research Center to permit real-time adaptive control of time varying nonlinear systems, enhance the fault-tolerance of mission hardware, and permit online system reconfiguration. In general, the problem of controlling time varying nonlinear systems with unknown structures has not been solved. Adaptive neural control techniques show considerable promise and are being applied to technical challenges including automated docking of spacecraft, dynamic balancing of the space station centrifuge, online reconfiguration of damaged aircraft, and reducing cost of new air and spacecraft designs. Our experiences have shown that neural network algorithms solved certain problems that conventional control methods have been unable to effectively address. These include damage mitigation in nonlinear reconfiguration flight control, early performance estimation of new aircraft designs, compensation for damaged planetary mission hardware by using redundant manipulator capability, and space sensor platform stabilization. This presentation explored these developments in the context of neural network control theory. The discussion began with an overview of why neural control has proven attractive for NASA application domains. The more important issues in control system development were then discussed with references to significant technical advances in the literature. Examples of how these methods have been applied were given, followed by projections of emerging application needs and directions.

  11. Analysis and Design of Neural Networks

    DTIC Science & Technology

    1992-01-01

    The training problem for feedforward neural networks is nonlinear parameter estimation that can be solved by a variety of optimization techniques...Much of the literature of neural networks has focused on variants of gradient descent. The training of neural networks using such techniques is known to...be a slow process with more sophisticated techniques not always performing significantly better. It is shown that feedforward neural networks can

  12. Radar System Classification Using Neural Networks

    DTIC Science & Technology

    1991-12-01

    This study investigated methods of improving the accuracy of neural networks in the classification of large numbers of classes. A literature search...revealed that neural networks have been successful in the radar classification problem, and that many complex problems have been solved using systems...of multiple neural networks . The experiments conducted were based on 32 classes of radar system data. The neural networks were modelled using a program

  13. Neural Detection of Malicious Network Activities Using a New Direct Parsing and Feature Extraction Technique

    DTIC Science & Technology

    2015-09-01

    NETWORK ACTIVITIES USING A NEW DIRECT PARSING AND FEATURE EXTRACTION TECHNIQUE by Cheng Hong Low September 2015 Thesis Advisor: Phillip Pace Co...FEATURE EXTRACTION TECHNIQUE 5. FUNDING NUMBERS 6. AUTHOR(S) Low, Cheng Hong 7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) Center for...FEATURE EXTRACTION TECHNIQUE Cheng Hong Low Civilian, ST Aerospace, Singapore M.Sc., National University of Singapore, 2012 Submitted in

  14. Neural networks and applications tutorial

    NASA Astrophysics Data System (ADS)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances of computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention to artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures, which were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.
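
    Lecture 2 of the tutorial covers adaptive artificial neurons such as the ADALINE. As a minimal sketch of that idea, the Widrow-Hoff least-mean-squares (LMS) rule corrects the weights from the error of the linear output, before any thresholding. The toy data, learning rate, and epoch count below are illustrative assumptions, not material from the tutorial:

    ```python
    import numpy as np

    def train_adaline(X, y, lr=0.05, epochs=200):
        """ADALINE: a linear unit trained with the Widrow-Hoff LMS rule."""
        rng = np.random.default_rng(0)
        w = rng.normal(scale=0.01, size=X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for xi, ti in zip(X, y):
                err = ti - (xi @ w + b)   # error of the linear (pre-threshold) output
                w += lr * err * xi        # LMS weight update
                b += lr * err
        return w, b

    # Toy problem: logical AND with +/-1 targets.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([-1, -1, -1, 1], float)
    w, b = train_adaline(X, y)
    preds = np.where(X @ w + b >= 0, 1.0, -1.0)
    ```

    Thresholding the trained linear output recovers the AND function; the same LMS core underlies the adaptive filters and equalizers mentioned elsewhere in these records.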

  15. Fault Tolerance of Neural Networks

    DTIC Science & Technology

    1994-07-01

    Systematic Approach, Proc. Government Microcircuit Application Conf. (GOMAC), San Diego, Nov. 1986. [10] D.E. Goldberg, Genetic Algorithms in Search...attempt to develop fault tolerant neural networks. Given a well-trained network, we first eliminate...both approaches, and this resulted in very slight improvements over the addition/deletion procedure. Fisher's Iris data in average case

  16. Analysis of short single rest/activation epoch fMRI by self-organizing map neural network

    NASA Astrophysics Data System (ADS)

    Erberich, Stephan G.; Dietrich, Thomas; Kemeny, Stefan; Krings, Timo; Willmes, Klaus; Thron, Armin; Oberschelp, Walter

    2000-04-01

    Functional magnetic resonance imaging (fMRI) has become a standard noninvasive brain imaging technique delivering high spatial resolution. Brain activation is determined by the magnetic susceptibility of the blood oxygen level (BOLD effect) during an activation task, e.g. motor, auditory and visual tasks. Usually box-car paradigms have 2 - 4 rest/activation epochs with at least an overall of 50 volumes per scan in the time domain. Statistical test based analysis methods, like Student's t-test, need a large number of repetitively acquired brain volumes to gain statistical power. The introduced technique, based on a self-organizing neural network (SOM), makes use of the intrinsic features of the condition change between rest and activation epochs and is shown to differentiate between the conditions with fewer time points, using only one rest and one activation epoch. The method reduces scan and analysis time and the probability of motion artifacts from the relaxation of the patient's head. Functional magnetic resonance images of patients for pre-surgical evaluation and of volunteers were acquired with motor (hand clenching and finger tapping), sensory (ice application), auditory (phonological and semantic word recognition task) and visual paradigms (mental rotation). For imaging we used different BOLD contrast sensitive Gradient Echo Planar Imaging (GE-EPI) single-shot pulse sequences (TR 2000 and 4000, 64 X 64 and 128 X 128, 15 - 40 slices) on a Philips Gyroscan NT 1.5 Tesla MR imager. All paradigms were RARARA (R equals rest, A equals activation) with an epoch width of 11 time points each. We used the self-organizing neural network implementation described by T. Kohonen with a 4 X 2 2D neuron map. The presented time course vectors were clustered by similar features in the 2D neuron map. Three neural networks were trained and used for labeling with the time course vectors of one, two and all three on/off epochs. The results were also compared by using a
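
    The clustering step can be sketched with a minimal Kohonen map of the same 4 x 2 shape. The synthetic rest/activation time courses, noise level, and training schedules below are illustrative assumptions standing in for real fMRI data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic time courses: one 11-point rest epoch followed by one 11-point
    # activation epoch, for "active" (paradigm-following) and "inactive" voxels.
    boxcar = np.concatenate([np.zeros(11), np.ones(11)])
    active = boxcar + 0.2 * rng.standard_normal((30, 22))
    inactive = 0.2 * rng.standard_normal((30, 22))
    data = np.vstack([active, inactive])

    # A 4 x 2 Kohonen map (8 units) trained on the time-course vectors.
    grid = np.array([(i, j) for i in range(4) for j in range(2)], float)
    W = 0.1 * rng.standard_normal((8, 22))

    for epoch in range(20):
        lr = 0.5 * (1 - epoch / 20)             # decaying learning rate
        sigma = 2.0 * (1 - epoch / 20) + 0.5    # decaying neighbourhood width
        for x in rng.permutation(data):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
            d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)    # map-space distances
            W += lr * np.exp(-d2 / (2 * sigma ** 2))[:, None] * (x - W)

    # Each voxel is labelled by the map unit its time course falls on.
    labels = np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in data])
    ```

    After training, voxels whose time courses follow the box-car paradigm and voxels that carry only noise land on different map units, which is the separation the single-epoch analysis relies on.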

  17. Artificial neural networks in medicine

    SciTech Connect

    Keller, P.E.

    1994-07-01

    This Technology Brief provides an overview of artificial neural networks (ANN). A definition and explanation of an ANN is given and situations in which an ANN is used are described. ANN applications to medicine specifically are then explored and the areas in which it is currently being used are discussed. Included are medical diagnostic aides, biochemical analysis, medical image analysis and drug development.

  18. Semantic Interpretation of An Artificial Neural Network

    DTIC Science & Technology

    1995-12-01

    success for stock market analysis/prediction is artificial neural networks. However, knowledge embedded in the neural network is not easily translated...interpret neural network knowledge. The first, called Knowledge Math, extends the use of connection weights, generating rules for general (i.e. non-binary

  19. How Neural Networks Learn from Experience.

    ERIC Educational Resources Information Center

    Hinton, Geoffrey E.

    1992-01-01

    Discusses computational studies of learning in artificial neural networks and findings that may provide insights into the learning abilities of the human brain. Describes efforts to test theories about brain information processing, using artificial neural networks. Vignettes include information concerning how a neural network represents…

  20. Implementing Signature Neural Networks with Spiking Neurons

    PubMed Central

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm—i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data—to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the

  1. Implementing Signature Neural Networks with Spiking Neurons.

    PubMed

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm-i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data-to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. 
As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  2. A Neural Network Underlying Circadian Entrainment and Photoperiodic Adjustment of Sleep and Activity in Drosophila

    PubMed Central

    Schlichting, Matthias; Menegazzi, Pamela; Lelito, Katharine R.; Yao, Zepeng; Buhl, Edgar; Dalla Benetta, Elena; Bahle, Andrew; Denike, Jennifer; Hodge, James John

    2016-01-01

    A sensitivity of the circadian clock to light/dark cycles ensures that biological rhythms maintain optimal phase relationships with the external day. In animals, the circadian clock neuron network (CCNN) driving sleep/activity rhythms receives light input from multiple photoreceptors, but how these photoreceptors modulate CCNN components is not well understood. Here we show that the Hofbauer-Buchner eyelets differentially modulate two classes of ventral lateral neurons (LNvs) within the Drosophila CCNN. The eyelets antagonize Cryptochrome (CRY)- and compound-eye-based photoreception in the large LNvs while synergizing CRY-mediated photoreception in the small LNvs. Furthermore, we show that the large LNvs interact with subsets of “evening cells” to adjust the timing of the evening peak of activity in a day length-dependent manner. Our work identifies a peptidergic connection between the large LNvs and a group of evening cells that is critical for the seasonal adjustment of circadian rhythms. SIGNIFICANCE STATEMENT In animals, circadian clocks have evolved to orchestrate the timing of behavior and metabolism. Consistent timing requires the entrainment of these clocks to the solar day, a process that is critical for an organism's health. Light cycles are the most important external cue for the entrainment of circadian clocks, and the circadian system uses multiple photoreceptors to link timekeeping to the light/dark cycle. How light information from these photoreceptors is integrated into the circadian clock neuron network to support entrainment is not understood. Our results establish that input from the HB eyelets differentially impacts the physiology of neuronal subgroups. This input pathway, together with input from the compound eyes, precisely times the activity of flies under long summer days. Our results provide a mechanistic model of light transduction and integration into the circadian system, identifying new and unexpected network motifs within the circadian

  3. Intrinsic adaptation in autonomous recurrent neural networks.

    PubMed

    Marković, Dimitrije; Gros, Claudius

    2012-02-01

    A massively recurrent neural network responds to input stimuli on the one hand and, on the other, is autonomously active in the absence of sensory inputs. Stimulus and information processing depend crucially on the quality of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flows, which are quite insensitive to external stimuli, interceded by chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.
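
    Nonsynaptic adaptation of a neuron's gain and threshold can be sketched for a single sigmoid unit. The sketch below uses Triesch's intrinsic-plasticity gradient rule, which pulls the output distribution toward an exponential with mean mu; it is a stand-in for, not a reproduction of, the entropy-driven adaptation studied in the paper, and the input statistics and learning rate are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A single sigmoid neuron y = 1 / (1 + exp(-(a*x + b))) adapting its
    # gain a and threshold b online (Triesch's intrinsic plasticity rule).
    a, b = 1.0, 0.0        # gain and threshold
    mu, eta = 0.2, 0.01    # target mean output and learning rate

    outputs = []
    for _ in range(20000):
        x = rng.standard_normal()                        # input sample
        y = 1.0 / (1.0 + np.exp(-(a * x + b)))
        db = eta * (1 - (2 + 1 / mu) * y + y * y / mu)   # threshold update
        a += eta / a + db * x                            # gain update
        b += db
        outputs.append(y)

    mean_late = float(np.mean(outputs[-5000:]))          # settles near mu
    ```

    Starting from a mean output of about 0.5, the adaptation drives the firing-rate distribution toward the sparse, high-entropy-per-spike target, which is the kind of intrinsic parameter change that reshapes a network's default dynamical state.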

  4. Altered Synchronizations among Neural Networks in Geriatric Depression.

    PubMed

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G; Steffens, David C

    2015-01-01

    Although major depression has been considered as a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Because of the well-known disconnection theory, geriatric depression could be a useful model in studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data was collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks as well as correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Those depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of the between-network analyses in examining neural models for geriatric depression.

  5. Neural networks for atmospheric retrievals

    NASA Technical Reports Server (NTRS)

    Motteler, Howard E.; Gualtieri, J. A.; Strow, L. Larrabee; Mcmillin, Larry

    1993-01-01

    We use neural networks to perform retrievals of temperature and water fractions from simulated clear air radiances for the Atmospheric Infrared Sounder (AIRS). Neural networks allow us to make effective use of the large AIRS channel set, and give good performance with noisy input. We retrieve surface temperature, air temperature at 64 distinct pressure levels, and water fractions at 50 distinct pressure levels. Using 728 temperature and surface sensitive channels, the RMS error for temperature retrievals with 0.2K input noise is 1.2K. Using 586 water and temperature sensitive channels, the mean error with 0.2K input noise is 16 percent. Our implementation of backpropagation training for neural networks on the 16,000-processor MasPar MP-1 runs at a rate of 90 million weight updates per second, and allows us to train large networks in a reasonable amount of time. Once trained, the network can be used to perform retrievals quickly on a workstation of moderate power.

  6. Controlling neural network responsiveness: tradeoffs and constraints

    PubMed Central

    Keren, Hanna; Marom, Shimon

    2014-01-01

    In recent years much effort has been invested in means to control neural population responses at the whole brain level, within the context of developing advanced medical applications. The tradeoffs and constraints involved, however, remain elusive due to obvious complications entailed by studying whole brain dynamics. Here, we present effective control of response features (probability and latency) of cortical networks in vitro over many hours, and offer this approach as an experimental toy for studying controllability of neural networks in the wider context. Exercising this approach we show that enforcement of stable high activity rates by means of closed loop control may enhance alteration of underlying global input–output relations and activity-dependent dispersion of neuronal pair-wise correlations across the network. PMID:24808860

  7. Hourly photosynthetically active radiation estimation in Midwestern United States from artificial neural networks and conventional regressions models.

    PubMed

    Yu, Xiaolei; Guo, Xulin

    2016-08-01

    The relationship between hourly photosynthetically active radiation (PAR) and the global solar radiation (R_s) was analyzed from data gathered over 3 years at Bondville, IL, and Sioux Falls, SD, Midwestern USA. These data were used to determine temporal variability of the PAR fraction and its dependence on different sky conditions, which were defined by the clearness index. Meanwhile, models based on artificial neural networks (ANNs) were established for predicting hourly PAR. The performance of the proposed models was compared with four existing conventional regression models in terms of the normalized root mean square error (NRMSE), the coefficient of determination (r^2), the mean percentage error (MPE), and the relative standard error (RSE). The overall analysis shows that the ANN model can predict PAR accurately, especially for overcast sky and clear sky conditions. Meanwhile, the parameters related to water vapor do not improve the prediction result significantly.
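
    Three of the four comparison statistics named above can be computed as follows; these are common textbook definitions, and the paper's exact normalizations (and its RSE definition, which varies across the literature and is omitted here) may differ:

    ```python
    import numpy as np

    def nrmse(obs, pred):
        """Root mean square error normalized by the mean observed value."""
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        return np.sqrt(np.mean((pred - obs) ** 2)) / obs.mean()

    def r_squared(obs, pred):
        """Coefficient of determination r^2."""
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        return 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def mpe(obs, pred):
        """Mean percentage error (signed, in percent)."""
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        return 100 * np.mean((pred - obs) / obs)

    # Toy hourly PAR-like observations and a uniformly 10% high prediction.
    obs = np.array([400.0, 800.0, 1200.0, 1600.0])
    pred = obs * 1.10
    ```

    On this toy data the 10% over-prediction shows up directly as an MPE of +10%, while NRMSE and r^2 summarize the scatter of the errors relative to the observed signal.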

  8. Hourly photosynthetically active radiation estimation in Midwestern United States from artificial neural networks and conventional regressions models

    NASA Astrophysics Data System (ADS)

    Yu, Xiaolei; Guo, Xulin

    2016-08-01

    The relationship between hourly photosynthetically active radiation (PAR) and the global solar radiation (R_s) was analyzed from data gathered over 3 years at Bondville, IL, and Sioux Falls, SD, Midwestern USA. These data were used to determine temporal variability of the PAR fraction and its dependence on different sky conditions, which were defined by the clearness index. Meanwhile, models based on artificial neural networks (ANNs) were established for predicting hourly PAR. The performance of the proposed models was compared with four existing conventional regression models in terms of the normalized root mean square error (NRMSE), the coefficient of determination (r^2), the mean percentage error (MPE), and the relative standard error (RSE). The overall analysis shows that the ANN model can predict PAR accurately, especially for overcast sky and clear sky conditions. Meanwhile, the parameters related to water vapor do not improve the prediction result significantly.

  9. Octopamine modulates activity of neural networks in the honey bee antennal lobe.

    PubMed

    Rein, Julia; Mustard, Julie A; Strauch, Martin; Smith, Brian H; Galizia, C Giovanni

    2013-11-01

    Neuronal plasticity allows an animal to respond to environmental changes by modulating its response to stimuli. In the honey bee (Apis mellifera), the biogenic amine octopamine plays a crucial role in appetitive odor learning, but little is known about how octopamine affects the brain. We investigated its effect in the antennal lobe, the first olfactory center in the brain, using calcium imaging to record background activity and odor responses before and after octopamine application. We show that octopamine increases background activity in olfactory output neurons, while reducing average calcium levels. Odor responses were modulated both upwards and downwards, with more odor response increases in glomeruli with negative or weak odor responses. Importantly, the octopamine effect was variable across glomeruli, odorants, odorant concentrations and animals, suggesting that the octopaminergic network is shaped by plasticity depending on an individual animal's history and possibly other factors. Using RNA interference, we show that the octopamine receptor AmOA1 (homolog of the Drosophila OAMB receptor) is involved in the octopamine effect. We propose a network model in which octopamine receptors are plastic in their density and located on a subpopulation of inhibitory neurons in a disinhibitory pathway. This would improve odor-coding of behaviorally relevant, previously experienced odors.

  10. Training Neural Networks with Weight Constraints

    DTIC Science & Technology

    1993-03-01

    Hardware implementation of artificial neural networks imposes a variety of constraints. Finite weight magnitudes exist in both digital and analog...optimizing a network with weight constraints. Comparisons are made to the backpropagation training algorithm for networks with both unconstrained and hard-limited weight magnitudes. Neural networks, Analog, Digital, Stochastic

  11. Learning the Relationship between the Primary Structure of HIV Envelope Glycoproteins and Neutralization Activity of Particular Antibodies by Using Artificial Neural Networks

    PubMed Central

    Buiu, Cătălin; Putz, Mihai V.; Avram, Speranta

    2016-01-01

    The dependency between the primary structure of HIV envelope glycoproteins (ENV) and the neutralization data for given antibodies is very complicated and depends on a large number of factors, such as the binding affinity of a given antibody for a given ENV protein, and the intrinsic infection kinetics of the viral strain. This paper presents a first approach to learning these dependencies using an artificial feedforward neural network which is trained to learn from experimental data. The results presented here demonstrate that the trained neural network is able to generalize on new viral strains and to predict reliable values of neutralizing activities of given antibodies against HIV-1. PMID:27727189

  12. Evaluation of artificial neural network algorithms for predicting METs and activity type from accelerometer data: validation on an independent sample

    PubMed Central

    Lyden, Kate; Kozey-Keadle, Sarah; Staudenmayer, John

    2011-01-01

    Previous work from our laboratory provided a “proof of concept” for use of artificial neural networks (nnets) to estimate metabolic equivalents (METs) and identify activity type from accelerometer data (Staudenmayer J, Pober D, Crouter S, Bassett D, Freedson P, J Appl Physiol 107: 1330–1307, 2009). The purpose of this study was to develop new nnets based on a larger, more diverse training data set and to apply these nnet prediction models to an independent sample to evaluate the robustness and flexibility of this machine-learning modeling technique. The nnet training data set (University of Massachusetts) included 277 participants who each completed 11 activities. The independent validation sample (n = 65) (University of Tennessee) completed one of three activity routines. Criterion measures were 1) measured METs assessed using open-circuit indirect calorimetry; and 2) observed activity to identify activity type. The nnet input variables included five accelerometer count distribution features and the lag-1 autocorrelation. The bias and root mean square errors for the nnet MET model trained on the University of Massachusetts data and applied to the University of Tennessee sample were +0.32 and 1.90 METs, respectively. Seventy-seven percent of the activities were correctly classified as sedentary/light, moderate, or vigorous intensity. For activity type, household and locomotion activities were correctly classified by the nnet activity type model 98.1 and 89.5% of the time, respectively, and sport was correctly classified 23.7% of the time. This machine-learning technique performs reasonably well when applied to an independent sample. We propose the creation of an open-access activity dictionary, including accelerometer data from a broad array of activities, leading to further improvements in prediction accuracy for METs, activity intensity, and activity type. PMID:21885802
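
    The six nnet inputs named above (five count-distribution features plus the lag-1 autocorrelation) can be sketched as a feature extractor. The specific percentiles chosen here are an illustrative assumption, not the paper's exact feature set:

    ```python
    import numpy as np

    def nnet_features(counts):
        """Six inputs for a MET/activity-type nnet: five percentiles
        summarizing the accelerometer count distribution plus the lag-1
        autocorrelation of the count series."""
        counts = np.asarray(counts, float)
        pct = np.percentile(counts, [10, 25, 50, 75, 90])
        lag1 = np.corrcoef(counts[:-1], counts[1:])[0, 1]  # lag-1 autocorrelation
        return np.concatenate([pct, [lag1]])
    ```

    A smooth, rhythmic count series (locomotion-like) yields a lag-1 autocorrelation near 1, while an erratic one drives it toward 0; this temporal feature is part of what lets the nnet distinguish activity types with similar overall count levels.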

  13. Terminal attractors in neural networks

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.
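
    The finite-time convergence that distinguishes a terminal attractor can be checked numerically. For dx/dt = -x^(1/3) the right-hand side violates the Lipschitz condition at x = 0, and the solution x(t) = (x0^(2/3) - (2/3)t)^(3/2) reaches zero exactly at t* = 1.5 * x0^(2/3), whereas dx/dt = -x only decays exponentially. The integrator, step size, and tolerance below are illustrative choices:

    ```python
    import numpy as np

    def time_to_reach_zero(f, x0, dt=1e-3, t_max=5.0, tol=1e-4):
        """Forward-Euler integration of dx/dt = f(x); returns the first time
        |x| drops below tol, or t_max if it never does."""
        x, t = x0, 0.0
        while t < t_max:
            if abs(x) < tol:
                return t
            x = x + dt * f(x)
            t += dt
        return t_max

    # Ordinary attractor: dx/dt = -x. Exponential decay approaches the fixed
    # point asymptotically and never actually reaches it.
    t_regular = time_to_reach_zero(lambda x: -x, 1.0)

    # Terminal attractor: dx/dt = -x**(1/3). The trajectory from x0 = 1
    # arrives at the fixed point at t* = 1.5 * x0**(2/3) = 1.5.
    t_terminal = time_to_reach_zero(lambda x: -np.sign(x) * abs(x) ** (1 / 3), 1.0)
    ```

    The exponential system runs out the full integration window, while the terminal-attractor system stops at the fixed point in finite time, which is what makes such attractors attractive for content-addressable memories that must settle quickly.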

  14. Fiber optic Adaline neural networks

    NASA Astrophysics Data System (ADS)

    Ghosh, Anjan K.; Trepka, Jim; Paparao, Palacharla

    1993-02-01

    Optoelectronic realization of adaptive filters and equalizers using fiber optic tapped delay lines and spatial light modulators has been discussed recently. We describe the design of a single layer fiber optic Adaline neural network which can be used as a bit pattern classifier. In our realization we employ as few electronic devices as possible and use optical computation to utilize the advantages of optics in processing speed, parallelism, and interconnection. The new optical neural network described in this paper is designed for optical processing of guided lightwave signals, not electronic signals. We analyzed the convergence or learning characteristics of the optically implemented Adaline in the presence of errors in the hardware, and we studied methods for improving the convergence rate of the Adaline.

  15. Prototype neural network pattern recognition testbed

    NASA Astrophysics Data System (ADS)

    Worrell, Steven W.; Robertson, James A.; Varner, Thomas L.; Garvin, Charles G.

    1991-02-01

    Recent successes of neural networks have led to an optimistic outlook for neural network applications to image processing (IP). This paper presents a general architecture for performing comparative studies of neural processing and more conventional IP techniques as well as hybrid pattern recognition (PR) systems. Two hybrid PR systems have been simulated, each of which incorporates both conventional and neural processing techniques.

  16. Neural Network for Visual Search Classification

    DTIC Science & Technology

    2007-11-02

    neural network used to perform visual search classification. The neural network consists of a Learning vector quantization network (LVQ) and a single layer perceptron. The objective of this neural network is to classify the various human visual search patterns into predetermined classes. The classes signify the different search strategies used by individuals to scan the same target pattern. The input search patterns are quantified with respect to an ideal search pattern, determined by the user. A supervised learning rule,

  17. The LILARTI neural network system

    SciTech Connect

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices which include operating instructions, partial listings of program output and data files, and network construction information.

  18. Membership generation using multilayer neural network

    NASA Technical Reports Server (NTRS)

    Kim, Jaeseok

    1992-01-01

    There has been intensive research in neural network applications to pattern recognition problems. Particularly, the back-propagation network has attracted many researchers because of its outstanding performance in pattern recognition applications. In this section, we describe a new method to generate membership functions from training data using a multilayer neural network. The basic idea behind the approach is as follows. The output values of a sigmoid activation function of a neuron bear remarkable resemblance to membership values. Therefore, we can regard the sigmoid activation values as the membership values in fuzzy set theory. Thus, in order to generate class membership values, we first train a suitable multilayer network using a training algorithm such as the back-propagation algorithm. After the training procedure converges, the resulting network can be treated as a membership generation network, where the inputs are feature values and the outputs are membership values in the different classes. This method allows fairly complex membership functions to be generated because the network is highly nonlinear in general. Also, it is to be noted that the membership functions are generated from a classification point of view. For pattern recognition applications, this is highly desirable, although the membership values may not be indicative of the degree of typicality of a feature value in a particular class.
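
    The membership-generation idea can be sketched end to end: train a small sigmoid network with backpropagation, then read its output activations as membership values. The 1-D toy classes, network size, and training schedule below are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Two overlapping 1-D classes; targets are one-hot class indicators.
    xa = rng.normal(-1.0, 0.5, 100)
    xb = rng.normal(+1.0, 0.5, 100)
    X = np.concatenate([xa, xb])[:, None]
    T = np.zeros((200, 2))
    T[:100, 0] = 1.0
    T[100:, 1] = 1.0

    # One hidden layer, sigmoid units throughout, trained by backpropagation.
    W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 2)); b2 = np.zeros(2)
    lr = 0.5
    for _ in range(4000):
        H = sigmoid(X @ W1 + b1)            # hidden activations
        Y = sigmoid(H @ W2 + b2)            # output sigmoids = membership values
        dY = (Y - T) / len(X)               # cross-entropy gradient at the output
        dH = (dY @ W2.T) * H * (1 - H)      # backpropagated hidden-layer delta
        W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

    def membership(x):
        """Read the trained network's sigmoid outputs as fuzzy memberships."""
        h = sigmoid(np.array([[x]]) @ W1 + b1)
        return sigmoid(h @ W2 + b2)[0]
    ```

    Far from the class boundary the memberships saturate toward 0 and 1, while inside the overlap region they take intermediate values, giving exactly the classification-oriented membership functions the section describes.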

  19. Fast implementation of neural network classification

    NASA Astrophysics Data System (ADS)

    Seo, Guiwon; Ok, Jiheon; Lee, Chulhee

    2013-09-01

    Most artificial neural networks use nonlinear activation functions such as the sigmoid and hyperbolic tangent, which incur high complexity costs, particularly during hardware implementation. In this paper, we propose new polynomial approximation methods for nonlinear activation functions that can substantially reduce complexity without sacrificing performance. The proposed approximation methods were applied to pattern classification problems. Experimental results show that the processing time was reduced by up to 50% without any performance degradation in terms of computer simulation.
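
    As a sketch of the general approach (not the authors' specific polynomials), a low-degree least-squares fit already tracks the hyperbolic tangent closely over the range where activations typically live:

```python
import numpy as np

# Least-squares polynomial stand-in for tanh on [-3, 3]; outside that range
# the true function is within ~0.005 of +/-1, so inputs are clipped first.
# Degree and interval are illustrative choices, not the paper's exact method.
x = np.linspace(-3.0, 3.0, 601)
coeffs = np.polyfit(x, np.tanh(x), deg=7)

def tanh_approx(z):
    z = np.clip(z, -3.0, 3.0)
    return np.clip(np.polyval(coeffs, z), -1.0, 1.0)

max_err = np.max(np.abs(tanh_approx(x) - np.tanh(x)))
assert max_err < 0.08   # a cheap polynomial tracks tanh closely in-range
```

    In hardware, evaluating the polynomial needs only multiplies and adds, which is the source of the complexity reduction the abstract reports.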

  20. Regulation of the nascent brain vascular network by neural progenitors.

    PubMed

    Santhosh, Devi; Huang, Zhen

    2015-11-01

    Neural progenitors are central players in the development of the brain neural circuitry. They not only produce the diverse neuronal and glial cell types in the brain, but also guide their migration in this process. Recent evidence indicates that neural progenitors also play a critical role in the development of the brain vascular network. At an early stage, neural progenitors have been found to facilitate the ingression of blood vessels from outside the neural tube, through VEGF and canonical Wnt signaling. Subsequently, neural progenitors directly communicate with endothelial cells to stabilize nascent brain vessels, in part through down-regulating Wnt pathway activity. Furthermore, neural progenitors promote nascent brain vessel integrity, through integrin αvβ8-dependent TGFβ signaling. In this review, we will discuss the evidence for, as well as questions that remain, regarding these novel roles of neural progenitors and the underlying mechanisms in their regulation of the nascent brain vascular network.

  1. Application of neural network method to detect type of uranium contamination by estimation of activity ratio in environmental alpha spectra.

    PubMed

    Einian, M R; Aghamiri, S M R; Ghaderi, R

    2016-01-01

    The discrimination of the composition of environmental and non-environmental materials by estimation of the (234)U/(238)U activity ratio in alpha-particle spectrometry is important in many applications. If interfering elements are not completely separated from the uranium, they can interfere with the determination of (234)U: source thickness caused by iron present in the source preparation phase, together with the alpha lines of the interfering elements, can broaden the (234)U alpha line. The resulting asymmetric broadening and peak overlap make analysis of the alpha-particle spectra and interpretation of the results difficult. Applying an Artificial Neural Network (ANN) to a spectrometry system is attractive because it avoids the limitations of classical approaches by extracting the desired information directly from the input data. In this work, averaged points of a partial raw uranium spectrum whose slope was on the order of 0-1% per 10 channels were used as inputs to a multi-layer feed-forward error-back-propagation network. The network was trained on an alpha-spectrum library developed in the present work; the training data were actual spectra with realistic thicknesses and interfering elements. According to the results, the method applied here to estimate the activity ratio can examine an alpha spectrum for peaks that would not be expected from a source of a given element and provide clues about the composition of uranium contamination in environmental samples in a fast screening and classification procedure.

  2. Neural network modeling of emotion

    NASA Astrophysics Data System (ADS)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. It then proceeds to models of interactions between emotion and attention, followed by models of emotional influences on decision making, including some speculative (as yet unsimulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. A later section of this article, however, reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Models of emotional disorders are then reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  3. Feature Extraction Using an Unsupervised Neural Network

    DTIC Science & Technology

    1991-05-03

    A novel unsupervised neural network for dimensionality reduction which seeks directions emphasizing distinguishing features in the data is presented. A statistical framework for the parameter estimation problem associated with this neural network is given and its connection to exploratory projection pursuit methods is established. The network is shown to minimize a loss function (projection index) over a

  4. Neural networks and MIMD-multiprocessors

    NASA Technical Reports Server (NTRS)

    Vanhala, Jukka; Kaski, Kimmo

    1990-01-01

    Two artificial neural network models are compared. They are the Hopfield Neural Network Model and the Sparse Distributed Memory model. Distributed algorithms for both of them are designed and implemented. The run time characteristics of the algorithms are analyzed theoretically and tested in practice. The storage capacities of the networks are compared. Implementations are done using a distributed multiprocessor system.
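
    The Hopfield half of the comparison can be illustrated with a minimal sketch; the patterns and network size below are toy choices of ours, not the paper's benchmark:

```python
import numpy as np

# Two orthogonal bipolar patterns are stored with the Hebbian rule; a probe
# with one flipped bit is driven back to the nearest stored pattern.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
n = patterns.shape[1]
W = patterns.T @ patterns / n    # Hebbian outer-product storage
np.fill_diagonal(W, 0)           # no self-connections

state = patterns[0].copy()
state[0] *= -1                   # corrupt one bit
for _ in range(5):               # synchronous updates until stable
    state = np.where(W @ state >= 0, 1, -1)

assert np.array_equal(state, patterns[0])   # stored pattern recovered
```

    A distributed implementation of the kind the abstract describes would partition the rows of `W` across processors, since each neuron's update depends only on its own row and the shared state vector.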

  5. Neural-Network Computer Transforms Coordinates

    NASA Technical Reports Server (NTRS)

    Josin, Gary M.

    1990-01-01

    Numerical simulation demonstrated ability of conceptual neural-network computer to generalize what it has "learned" from few examples. Ability to generalize achieved with even simple neural network (relatively few neurons) and after exposure of network to only few "training" examples. Ability to obtain fairly accurate mappings after only few training examples used to provide solutions to otherwise intractable mapping problems.

  6. In vitro enhanced differentiation of neural networks in ES gut-like organ from mouse ES cells by a 5-HT4-receptor activation.

    PubMed

    Takaki, Miyako; Misawa, Hiromi; Matsuyoshi, Hiroko; Kawahara, Isao; Goto, Kei; Zhang, Guo-Xing; Obata, Koji; Kuniyasu, Hiroki

    2011-03-25

    Using an embryoid body (EB) culture system, we developed a functional organ-like cluster, a "gut", from mouse embryonic stem (ES) cells (ES gut). Each ES gut exhibited various types of spontaneous movements. In these spontaneously contracting ES guts, dense distributions of interstitial cells of Cajal (ICC) (c-kit, a transmembrane receptor that has tyrosine kinase activity, positive cells; gut pacemaker cells) and smooth muscle cells were discernibly identified, but enteric neural networks were not identified. In the present study, we succeeded in forming dense enteric neural networks by a 5-HT(4)-receptor (SR4) agonist, mosapride citrate (1-10 μM) added only during EB formation. Addition of an SR4-antagonist, GR113808 (10 μM) abolished the SR4-agonist-induced formation of enteric neural networks. The SR4-agonist (1 μM) up-regulated the expression of mRNA of SR4 and the SR4-antagonist abolished this upregulation. 5-HT per se exerted similar effects to those of SR4-agonist, though less potent. These results suggest SR4-agonist differentiated enteric neural networks, mediated via activation of SR4 in the ES gut.

  7. A cardiac electrical activity model based on a cellular automata system in comparison with neural network model.

    PubMed

    Khan, Muhammad Sadiq Ali; Yousuf, Sidrah

    2016-03-01

    Cardiac electrical activity is distributed through the three dimensions of cardiac tissue (myocardium) and evolves over time. Indicators of heart disease can occur at any time of day, so heart rate, conduction, and each electrical event of the cardiac cycle should be monitored non-invasively to distinguish regular ("action potential") from irregular ("arrhythmia") rhythms. Many heart conditions can readily be examined through automata models such as cellular automata. This paper treats the different states of cardiac rhythm using cellular automata, with a comparison to a neural network model, and provides a fast and effective simulation of the contraction of the cardiac muscle of the atria resulting from the genesis of an electrical spark or wave. The formulated model, named "States of Automaton Proposed Model for CEA (Cardiac Electrical Activity)", uses cellular automata methodology to represent the three conduction states of cardiac tissue: (i) Resting (relaxed and excitable), (ii) ARP (absolutely refractory: excited but not able to excite neighboring cells), and (iii) RRP (relatively refractory: excited and able to excite neighboring cells). The results indicate efficient modeling of the action potential during the pumping of blood in the cardiac cycle, with a low computational burden.
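
    The three conduction states can be illustrated with a toy one-dimensional automaton; the paper's model is three-dimensional, so the ring of 11 cells and the exact update rules below are simplified assumptions of ours:

```python
import numpy as np

# Toy 1-D version of the three-state conduction rules: REST (excitable),
# RRP (excited, can excite neighbours), ARP (excited, refractory, cannot).
REST, RRP, ARP = 0, 1, 2

def step(cells):
    new = cells.copy()
    excite = (cells == RRP)
    neighbour_excited = np.roll(excite, 1) | np.roll(excite, -1)
    new[(cells == REST) & neighbour_excited] = RRP   # conduction
    new[cells == RRP] = ARP                          # absolute refractory phase
    new[cells == ARP] = REST                         # recovery
    return new

cells = np.full(11, REST)
cells[5] = RRP                   # electrical spark at the centre
for _ in range(3):
    cells = step(cells)

# After 3 steps the wavefront has travelled 3 cells in each direction,
# and the origin has recovered to the resting state.
assert cells[2] == RRP and cells[8] == RRP and cells[5] == REST
```

    Note how the ARP band behind the wavefront prevents backward re-excitation, which is what makes the wave travel outward rather than oscillate in place.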

  8. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  9. Phase Transitions in Living Neural Networks

    NASA Astrophysics Data System (ADS)

    Williams-Garcia, Rashid Vladimir

    Our nervous systems are composed of intricate webs of interconnected neurons interacting in complex ways. These complex interactions result in a wide range of collective behaviors with implications for features of brain function, e.g., information processing. Under certain conditions, such interactions can drive neural network dynamics towards critical phase transitions, where power-law scaling is conjectured to allow optimal behavior. Recent experimental evidence is consistent with this idea and it seems plausible that healthy neural networks would tend towards optimality. This hypothesis, however, is based on two problematic assumptions, which I describe and for which I present alternatives in this thesis. First, critical transitions may vanish due to the influence of an environment, e.g., a sensory stimulus, and so living neural networks may be incapable of achieving "critical" optimality. I develop a framework known as quasicriticality, in which a relative optimality can be achieved depending on the strength of the environmental influence. Second, the power-law scaling supporting this hypothesis is based on statistical analysis of cascades of activity known as neuronal avalanches, which conflate causal and non-causal activity, thus confounding important dynamical information. In this thesis, I present a new method to unveil causal links, known as causal webs, between neuronal activations, thus allowing for experimental tests of the quasicriticality hypothesis and other practical applications.

  10. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Small-World Connections to Induce Firing Activity and Phase Synchronization in Neural Networks

    NASA Astrophysics Data System (ADS)

    Qin, Ying-Hua; Luo, Xiao-Shu

    2009-07-01

    We investigate how the firing activity and the subsequent phase synchronization of neural networks with small-world topological connections depend on the probability p of adding-links. Network elements are described by two-dimensional map neurons (2DMNs) in a quiescent original state. Neurons burst for a given coupling strength when the topological randomness p increases, which is absent in a regular-lattice neural network. The bursting activity becomes frequent and synchronization of neurons emerges as topological randomness further increases. The maximal firing frequency and phase synchronization appear at a particular value of p. However, if the randomness p further increases, the firing frequency decreases and synchronization is apparently destroyed.

  11. Oil reservoir properties estimation using neural networks

    SciTech Connect

    Toomarian, N.B.; Barhen, J.; Glover, C.W.; Aminzadeh, F.

    1997-02-01

    This paper investigates the applicability as well as the accuracy of artificial neural networks for estimating specific parameters that describe reservoir properties based on seismic data. The approach relies on JPL's general-purpose neural network code with adjoint operators to determine the best-suited architecture. The authors believe that the results presented in this work demonstrate that artificial neural networks produce surprisingly accurate estimates of the reservoir parameters.

  12. Adaptive optimization and control using neural networks

    SciTech Connect

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  13. Neural Network Retinal Model Real Time Implementation

    DTIC Science & Technology

    1992-09-02

    addresses the specific needs of vision processing. The goal of this SBIR Phase I project has been to take a significant neural network vision...application and to map it onto dedicated hardware for real time implementation. The neural network was already demonstrated using software simulation on a...general purpose computer. During Phase 1, HNC took a neural network model of the retina and, using HNC’s Vision Processor (ViP) prototype hardware

  14. Neural Network False Alarm Filter. Volume 1.

    DTIC Science & Technology

    1994-12-01

    This effort identified, developed and demonstrated a set of approaches for applying neural network learning techniques to the development of a real... neural network models, 9 fault report causes and 12 common groups of BIT techniques was identified. From this space, 4 unique, high-potential...of their strengths and weaknesses were performed along with cost/benefit analyses. This study concluded that the best candidates for neural network insert

  15. A Neural Network Object Recognition System

    DTIC Science & Technology

    1990-07-01

    useful for exploring different neural network configurations. There are three main computation phases of a model based object recognition system...segmentation, feature extraction, and object classification. This report focuses on the object classification stage. For segmentation, a neural network based...are available with the current system. Neural network based feature extraction may be added at a later date. The classification stage consists of a

  16. Neural Networks Applied to Signal Processing

    DTIC Science & Technology

    1989-09-01

    Naval Postgraduate School thesis, Monterey, California: "Neural Networks Applied to Signal Processing" by Mark D. Baehre, Captain, United States Army. Approved for public release; distribution is unlimited.

  17. A Complexity Theory of Neural Networks

    DTIC Science & Technology

    1991-08-09

    Significant progress has been made in laying the foundations of a complexity theory of neural networks. The fundamental complexity classes have been...identified and studied. The class of problems solvable by small, shallow neural networks has been found to be the same class even if (1) probabilistic...behaviour, (2) multi-valued logic, and (3) analog behaviour are allowed (subject to certain reasonable technical assumptions). Neural networks can be

  18. Correcting wave predictions with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Makarynskyy, O.; Makarynska, D.

    2003-04-01

    Predictions of wind waves at different lead times are necessary in a large range of coastal and open-ocean activities. Numerical wave models, which usually provide this information, are based on deterministic equations that do not entirely account for the complexity and uncertainty of the wave generation and dissipation processes. An attempt to improve short-term forecasts of wave parameters using artificial neural networks is reported. In recent years, artificial neural networks have been used in a number of coastal engineering applications due to their ability to approximate nonlinear mathematical behavior without a priori knowledge of the interrelations among the elements within a system. Common multilayer feed-forward networks, with nonlinear transfer functions in the hidden layers, were developed and employed to forecast wave characteristics over one-hour intervals from one up to 24 hours ahead, and to correct these predictions. Three non-overlapping data sets of wave characteristics, all from a buoy moored roughly 60 miles west of the Aran Islands, off the west coast of Ireland, were used to train and validate the neural nets involved. The networks were trained with the error back-propagation algorithm. Time series plots and scatterplots of the wave characteristics, as well as tables of statistics, show an improvement of the results achieved due to the correction procedure employed.
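
    A much-reduced sketch of the correction idea: a one-hidden-layer net learns the systematic error of an invented "numerical forecast" by plain error back-propagation. The data, network size and learning rate are all illustrative assumptions, not the authors' setup:

```python
import numpy as np

# Toy "numerical model" forecasts versus toy "buoy" truth: the forecast has a
# systematic bias that the network learns to remove.
rng = np.random.default_rng(1)
forecast = rng.uniform(1.0, 3.0, size=(200, 1))   # model wave height, metres
observed = 0.8 * forecast + 0.4                   # invented buoy measurements

W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(forecast @ W1 + b1)               # hidden layer
    pred = h @ W2 + b2                            # corrected forecast
    err = pred - observed
    dh = (err @ W2.T) * (1.0 - h ** 2)            # back-propagated error
    W2 -= lr * h.T @ err / len(err);       b2 -= lr * err.mean(0)
    W1 -= lr * forecast.T @ dh / len(err); b1 -= lr * dh.mean(0)

raw_rmse = np.sqrt(np.mean((forecast - observed) ** 2))
corrected_rmse = np.sqrt(
    np.mean((np.tanh(forecast @ W1 + b1) @ W2 + b2 - observed) ** 2))
assert corrected_rmse < raw_rmse   # the correction beats the raw forecast
```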

  19. Neural network architecture for crossbar switch control

    NASA Technical Reports Server (NTRS)

    Troudet, Terry P.; Walters, Stephen M.

    1991-01-01

    A Hopfield neural network architecture for the real-time control of a crossbar switch for switching packets at maximum throughput is proposed. The network performance and processing time are derived from a numerical simulation of the transitions of the neural network. A method is proposed to optimize electronic component parameters and synaptic connections, and it is fully illustrated by the computer simulation of a VLSI implementation of 4 x 4 neural net controller. The extension to larger size crossbars is demonstrated through the simulation of an 8 x 8 crossbar switch controller, where the performance of the neural computation is discussed in relation to electronic noise and inhomogeneities of network components.

  20. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

    A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into a memory and determining neural network weighting values until the neural network output closely matches the target outputs. If the match is inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs. Signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the process. 33 figs.

  1. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, Richard B.; Gross, Kenneth C.; Wegerich, Stephan W.

    1998-01-01

    A method and system for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into a memory and determining neural network weighting values until the neural network output closely matches the target outputs. If the match is inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs. Signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the process.

  2. Advances in neural networks research: an introduction.

    PubMed

    Kozma, Robert; Bressler, Steven; Perlovsky, Leonid; Venayagamoorthy, Ganesh Kumar

    2009-01-01

    The present Special Issue "Advances in Neural Networks Research: IJCNN2009" provides a state-of-art overview of the field of neural networks. It includes 39 papers from selected areas of the 2009 International Joint Conference on Neural Networks (IJCNN2009). IJCNN2009 took place on June 14-19, 2009 in Atlanta, Georgia, USA, and it represents an exemplary collaboration between the International Neural Networks Society and the IEEE Computational Intelligence Society. Topics in this issue include neuroscience and cognitive science, computational intelligence and machine learning, hybrid techniques, nonlinear dynamics and chaos, various soft computing technologies, intelligent signal processing and pattern recognition, bioinformatics and biomedicine, and engineering applications.

  3. Electronic neural networks for global optimization

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Moopenn, A. W.; Eberhardt, S.

    1990-01-01

    An electronic neural network with feedback architecture, implemented in analog custom VLSI is described. Its application to problems of global optimization for dynamic assignment is discussed. The convergence properties of the neural network hardware are compared with computer simulation results. The neural network's ability to provide optimal or near optimal solutions within only a few neuron time constants, a speed enhancement of several orders of magnitude over conventional search methods, is demonstrated. The effect of noise on the circuit dynamics and the convergence behavior of the neural network hardware is also examined.

  4. Neural Networks for Rapid Design and Analysis

    NASA Technical Reports Server (NTRS)

    Sparks, Dean W., Jr.; Maghami, Peiman G.

    1998-01-01

    Artificial neural networks have been employed for rapid and efficient dynamics and control analysis of flexible systems. Specifically, feedforward neural networks are designed to approximate nonlinear dynamic components over prescribed input ranges, and are used in simulations as a means to speed up the overall time response analysis process. To capture the recursive nature of dynamic components with artificial neural networks, recurrent networks, which use state feedback with the appropriate number of time delays as inputs to the networks, are employed. Once properly trained, neural networks can give very good approximations to nonlinear dynamic components, and by their judicious use in simulations, allow the analyst the potential to speed up the analysis process considerably. To illustrate this potential speed up, an existing simulation model of a spacecraft reaction wheel system is executed, first conventionally, and then with an artificial neural network in place.
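
    The tapped-delay construction described above (state feedback with an appropriate number of time delays used as network inputs) can be sketched as follows; the helper name and delay count are our own:

```python
import numpy as np

# Hypothetical helper: build the delayed-feedback inputs [y[t-1], ..., y[t-k]]
# that feed the recurrent network at each time step t.
def delay_embed(signal, k):
    return np.stack(
        [signal[k - d : len(signal) - d] for d in range(1, k + 1)], axis=1
    )

y = np.arange(10.0)          # toy output history of a dynamic component
X = delay_embed(y, 3)        # one row of delayed state feedback per time step
assert X.shape == (7, 3)
assert np.allclose(X[0], [2.0, 1.0, 0.0])   # inputs at t = 3: y[2], y[1], y[0]
```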

  5. Episodic Bouts of Activity Accompany Recovery of Rhythmic Output by a Neuromodulator-and Activity-Deprived Adult Neural Network

    PubMed Central

    Luther, Jason A.; Robie, Alice A.; Yarotsky, John; Reina, Christopher; Marder, Eve; Golowasch, Jorge

    2013-01-01

    The pyloric rhythm of the stomatogastric ganglion of the crab, Cancer borealis, slows or stops when descending modulatory inputs are acutely removed. However, the rhythm spontaneously resumes after one or more days in the absence of neuromodulatory input. We recorded continuously for days to characterize quantitatively this recovery process. Activity bouts lasting 40 to 900 seconds began several hours after removal of neuromodulatory input and were followed by stable rhythm recovery after 1-4 days. Bout duration was not related to the intervals (0.3 to 800 minutes) between bouts. During an individual bout the frequency rapidly increased and then decreased more slowly. Photoablation of back-filled neuromodulatory terminals in the STG neuropil had no effect on activity bouts or recovery, suggesting that these processes are intrinsic to the STG neuronal network. After removal of neuromodulatory input the phase relationships of the components of the triphasic pyloric rhythm were altered, and then over time the phase relationships moved towards their control values. Although at low pyloric rhythm frequency the phase relationships among pyloric network neurons depended on frequency, the changes in frequency during recovery did not completely account for the change in phase seen after rhythm recovery. Additionally, we suggest that activity bouts represent underlying mechanisms controlling the restructuring of the pyloric network to allow resumption of an appropriate output following removal of neuromodulatory input. PMID:12840081

  6. Modeling the N400 ERP component as transient semantic over-activation within a neural network model of word comprehension.

    PubMed

    Cheyette, Samuel J; Plaut, David C

    2016-11-18

    The study of the N400 event-related brain potential has provided fundamental insights into the nature of real-time comprehension processes, and its amplitude is modulated by a wide variety of stimulus and context factors. It is generally thought to reflect the difficulty of semantic access, but formulating a precise characterization of this process has proved difficult. Laszlo and colleagues (Laszlo & Plaut, 2012; Laszlo & Armstrong, 2014) used physiologically constrained neural networks to model the N400 as transient over-activation within semantic representations, arising as a consequence of the distribution of excitation and inhibition within and between cortical areas. The current work extends this approach to successfully model effects on both N400 amplitudes and behavior of word frequency, semantic richness, repetition, semantic and associative priming, and orthographic neighborhood size. The account is argued to be preferable to one based on "implicit semantic prediction error" (Rabovsky & McRae, 2014) for a number of reasons, the most fundamental of which is that the current model actually produces N400-like waveforms in its real-time activation dynamics.

  7. Application of artificial neural networks for the soil moisture retrieval from active and passive microwave spaceborne sensors

    NASA Astrophysics Data System (ADS)

    Santi, Emanuele; Paloscia, Simonetta; Pettinato, Simone; Fontanelli, Giacomo

    2016-06-01

    Among the algorithms used for the retrieval of SMC from microwave sensors (both active, such as Synthetic Aperture Radar-SAR, and passive, radiometers), the artificial neural networks (ANN) represent the best compromise between accuracy and computation speed. ANN based algorithms have been developed at IFAC, and adapted to several radar and radiometric satellite sensors, in order to generate SMC products at a resolution varying from hundreds of meters to tens of kilometers according to the spatial scale of each sensor. These algorithms, which are based on the ANN techniques for inverting theoretical and semi-empirical models, have been adapted to the C- to Ka- band acquisitions from spaceborne radiometers (AMSR-E/AMSR2), SAR (Envisat/ASAR, Cosmo-SkyMed) and real aperture radar (MetOP ASCAT). Large datasets of co-located satellite acquisitions and direct SMC measurements on several test sites worldwide have been used along with simulations derived from forward electromagnetic models for setting up, training and validating these algorithms. An overall quality assessment of the obtained results in terms of accuracy and computational cost was carried out, and the main advantages and limitations for an operational use of these algorithms were evaluated. This technique allowed the retrieval of SMC from both active and passive satellite systems, with accuracy values of about 0.05 m3/m3 of SMC or better, thus making these applications compliant with the usual accuracy requirements for SMC products from space.

  8. Aerodynamic Design Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.

    2003-01-01

    The design of aerodynamic components of aircraft, such as wings or engines, involves a process of obtaining the most optimal component shape that can deliver the desired level of component performance, subject to various constraints, e.g., total weight or cost, that the component must satisfy. Aerodynamic design can thus be formulated as an optimization problem that involves the minimization of an objective function subject to constraints. A new aerodynamic design optimization procedure based on neural networks and response surface methodology (RSM) incorporates the advantages of both traditional RSM and neural networks. The procedure uses a strategy, denoted parameter-based partitioning of the design space, to construct a sequence of response surfaces based on both neural networks and polynomial fits to traverse the design space in search of the optimal solution. Some desirable characteristics of the new design optimization procedure include the ability to handle a variety of design objectives, easily impose constraints, and incorporate design guidelines and rules of thumb. It provides an infrastructure for variable fidelity analysis and reduces the cost of computation by using less-expensive, lower fidelity simulations in the early stages of the design evolution. The initial or starting design can be far from optimal. The procedure is easy and economical to use in large-dimensional design space and can be used to perform design tradeoff studies rapidly. Designs involving multiple disciplines can also be optimized. Some practical applications of the design procedure that have demonstrated some of its capabilities include the inverse design of an optimal turbine airfoil starting from a generic shape and the redesign of transonic turbines to improve their unsteady aerodynamic characteristics.

  9. Neural networks for nuclear spectroscopy

    SciTech Connect

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T.

    1995-12-31

    In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.
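
    The OLAM-style whole-spectrum identification described above amounts, for a linear superposition, to an ordinary least-squares solve against a library of known spectra. A minimal sketch with a synthetic spectral library (the library, channel count, and fractions are invented stand-ins, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic library of three known isotope spectra (columns), 64 channels each.
n_channels, n_isotopes = 64, 3
library = rng.random((n_channels, n_isotopes))

# Unknown sample: a linear superposition of the known spectra plus faint noise.
true_fractions = np.array([0.5, 0.2, 0.3])
unknown = library @ true_fractions + 0.001 * rng.standard_normal(n_channels)

# Whole-spectrum identification: one linear least-squares solve over all
# channels at once, rather than locating and fitting individual photo-peaks.
fractions, *_ = np.linalg.lstsq(library, unknown, rcond=None)

print(np.round(fractions, 2))  # close to true_fractions
```

    Because every channel contributes to the solve, the estimate degrades gracefully as detector resolution drops, which is the property the abstract highlights for lower resolution spectrometers.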

  10. Character Recognition Using Genetically Trained Neural Networks

    SciTech Connect

    Diniz, C.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-10-01

    Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three-layer feed-forward architecture. To facilitate the input of characters and symbols, a graphic user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases when genetically training the net.
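
    A minimal sketch of the approach described above: a genetic algorithm evolves the weights of a three-layer feed-forward network until it separates a few bitmap-like characters. The patterns are scaled down to 4x4 (16 inputs) and the GA settings are arbitrary choices, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three random binary "characters" (4x4 bitmaps flattened to 16 inputs;
# the paper used 8x8 bitmaps and five output nodes).
X = (rng.random((3, 16)) > 0.5).astype(float)
Y = np.eye(3)  # one-hot targets: one output node per character

H = 4  # hidden-layer size (arbitrary)
n_w = 16 * H + H + H * 3 + 3  # total number of weights and biases

def forward(w, x):
    """Three-layer feed-forward net: inputs -> tanh hidden layer -> outputs."""
    W1 = w[:16 * H].reshape(16, H)
    b1 = w[16 * H:16 * H + H]
    W2 = w[16 * H + H:16 * H + H + H * 3].reshape(H, 3)
    b2 = w[-3:]
    return np.tanh(x @ W1 + b1) @ W2 + b2

def fitness(w):
    return -np.mean((forward(w, X) - Y) ** 2)

# Elitist GA: keep the 10 best individuals, refill with mutated copies.
pop = 0.5 * rng.standard_normal((40, n_w))
for _ in range(300):
    order = np.argsort([fitness(w) for w in pop])[::-1]
    parents = pop[order[:10]]
    children = parents[rng.integers(0, 10, 30)] + 0.1 * rng.standard_normal((30, n_w))
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print((forward(best, X).argmax(axis=1) == np.arange(3)).all())
```

    With elitist selection the best fitness never decreases, mirroring the training period in which the GA optimizes the NN weights until recognition succeeds.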

  11. Flow Control Using Neural Networks

    DTIC Science & Technology

    2007-11-02

    FEB 93 - 31 DEC 96. TITLE: FLOW CONTROL USING NEURAL NETWORKS. FUNDING NUMBERS: F49620-93-1-0135, 61102F, 2307/BS. AUTHOR(S): THORWALD...AIR FORCE OFFICE OF SCIENTIFIC RESEARCH (AFOSR), 110 DUNCAN AVENUE, ROOM B115, BOLLING AFB DC 20332-8050...signals. Figure 5 shows a time series for an actuator that performs a ramp motion in the streamwise direction over about 1% of the TS period and remains

  12. Neural Network Classifies Teleoperation Data

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Giancaspro, Antonio; Losito, Sergio; Pasquariello, Guido

    1994-01-01

    Prototype artificial neural network, implemented in software, identifies phases of telemanipulator tasks in real time by analyzing feedback signals from force sensors on manipulator hand. Prototype is early, subsystem-level product of continuing effort to develop automated system that assists in training and supervising human control operator: provides symbolic feedback (e.g., warnings of impending collisions or evaluations of performance) to operator in real time during successive executions of same task. Also simplifies transition between teleoperation and autonomous modes of telerobotic system.

  13. Correlated neural variability in persistent state networks.

    PubMed

    Polk, Amber; Litwin-Kumar, Ashok; Doiron, Brent

    2012-04-17

    Neural activity that persists long after stimulus presentation is a biological correlate of short-term memory. Variability in spiking activity causes persistent states to drift over time, ultimately degrading memory. Models of short-term memory often assume that the input fluctuations to neural populations are independent across cells, a feature that attenuates population-level variability and stabilizes persistent activity. However, this assumption is at odds with experimental recordings from pairs of cortical neurons showing that both the input currents and output spike trains are correlated. It remains unclear how correlated variability affects the stability of persistent activity and the performance of cognitive tasks that it supports. We consider the stochastic long-timescale attractor dynamics of pairs of mutually inhibitory populations of spiking neurons. In these networks, persistent activity was less variable when correlated variability was globally distributed across both populations compared with the case when correlations were locally distributed only within each population. Using a reduced firing rate model with a continuum of persistent states, we show that, when input fluctuations are correlated across both populations, they drive firing rate fluctuations orthogonal to the persistent state attractor, thereby causing minimal stochastic drift. Using these insights, we establish that distributing correlated fluctuations globally as opposed to locally improves network's performance on a two-interval, delayed response discrimination task. Our work shows that the correlation structure of input fluctuations to a network is an important factor when determining long-timescale, persistent population spiking activity.
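
    The main result can be illustrated with a reduced two-population rate model in which the difference between the populations is the neutral (persistent-state) direction; shared (global) noise then drives only the stable sum mode, while independent (local) noise makes the memory variable diffuse. All parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

dt, n_steps = 0.01, 20000
w = 1.0      # mutual inhibition strength (illustrative value)
sigma = 0.3  # input noise amplitude (illustrative value)

def drift_variance(globally_correlated):
    """Variance of the persistent-state variable r1 - r2 over one long run."""
    r = np.zeros(2)
    diffs = np.empty(n_steps)
    for t in range(n_steps):
        if globally_correlated:
            xi = sigma * rng.standard_normal() * np.ones(2)  # one shared source
        else:
            xi = sigma * rng.standard_normal(2)              # independent sources
        # Mutually inhibitory rate dynamics with leak equal to inhibition, so
        # the difference mode r1 - r2 is neutral: a continuum of persistent states.
        r = r + dt * (-w * r - w * r[::-1]) + np.sqrt(dt) * xi
        diffs[t] = r[0] - r[1]
    return diffs.var()

v_global = drift_variance(True)
v_local = drift_variance(False)
print(v_global < v_local)  # shared fluctuations do not push along the attractor
```

    In this caricature, globally correlated input cancels exactly in the difference mode, so the stored state does not drift at all; the paper's spiking simulations show the same effect in attenuated form.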

  14. The Laplacian spectrum of neural networks.

    PubMed

    de Lange, Siemon C; de Reus, Marcel A; van den Heuvel, Martijn P

    2014-01-13

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these "conventional" graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks.
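
    The normalized Laplacian spectrum described above can be computed in a few lines; the small adjacency matrix below is an invented stand-in for the anatomical connectivity data:

```python
import numpy as np

# Small undirected toy network (symmetric adjacency matrix).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))

# Normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt

eigs = np.sort(np.linalg.eigvalsh(L))
print(np.round(eigs, 3))
```

    The spectrum always lies in [0, 2], the smallest eigenvalue is 0 for a connected graph, and the number of near-zero eigenvalues reflects the community structure that the paper reads off at the systems level.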

  15. Three dimensional living neural networks

    NASA Astrophysics Data System (ADS)

    Linnenberger, Anna; McLeod, Robert R.; Basta, Tamara; Stowell, Michael H. B.

    2015-08-01

    We investigate holographic optical tweezing combined with step-and-repeat maskless projection micro-stereolithography for fine control of 3D positioning of living cells within a 3D microstructured hydrogel grid. Samples were fabricated using three different cell lines: PC12, NT2/D1 and iPSC. PC12 cells are a rat cell line capable of differentiation into neuron-like cells. NT2/D1 cells are a human cell line that exhibits biochemical and developmental properties similar to those of an early embryo; when exposed to retinoic acid, the cells differentiate into human neurons useful for studies of human neurological disease. Finally, induced pluripotent stem cells (iPSC) were utilized with the goal of future studies of neural networks fabricated from human iPSC derived neurons. Cells are positioned in the monomer solution with holographic optical tweezers at 1064 nm and then are encapsulated by photopolymerization of polyethylene glycol (PEG) hydrogels formed by thiol-ene photo-click chemistry via projection of a 512x512 spatial light modulator (SLM) illuminated at 405 nm. Fabricated samples are incubated in differentiation media such that cells cease to divide and begin to form axons or axon-like structures. By controlling the position of the cells within the encapsulating hydrogel structure the formation of the neural circuits is controlled. The samples fabricated with this system are a useful model for future studies of neural circuit formation, neurological disease, cellular communication, plasticity, and repair mechanisms.

  16. Coexistence and local μ-stability of multiple equilibrium points for memristive neural networks with nonmonotonic piecewise linear activation functions and unbounded time-varying delays.

    PubMed

    Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde

    2016-12-01

    In this paper, the coexistence and dynamical behaviors of multiple equilibrium points are discussed for a class of memristive neural networks (MNNs) with unbounded time-varying delays and nonmonotonic piecewise linear activation functions. By means of the fixed point theorem, nonsmooth analysis theory and rigorous mathematical analysis, it is proven that under some conditions, such n-neuron MNNs can have 5^n equilibrium points located in ℜ^n, and 3^n of them are locally μ-stable. As a direct application, some criteria are also obtained on the multiple exponential stability, multiple power stability, multiple log-stability and multiple log-log-stability. All these results reveal that the addressed neural networks with activation functions introduced in this paper can generate greater storage capacity than the ones with Mexican-hat-type activation function. Numerical simulations are presented to substantiate the theoretical results.
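
    The 5^n / 3^n counting can be checked directly for n = 1 with an invented nonmonotonic piecewise linear activation (not the paper's exact function):

```python
import numpy as np

# Activation offset h(x) = sigma(x) - x for a hypothetical nonmonotonic
# piecewise linear activation sigma, chosen so the scalar (n = 1) network
# x' = -x + sigma(x) = h(x) has 5^1 = 5 equilibria, 3^1 = 3 of them stable.
kx = np.array([-2.5, -1.5, -0.5, 0.5, 1.5, 2.5])
kv = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
h = lambda x: np.interp(x, kx, kv)  # clamps to +/-1 outside the knots

x = np.linspace(-4, 4, 10000)
hx = h(x)
cross = np.flatnonzero(hx[:-1] * hx[1:] < 0)  # sign changes bracket equilibria
stable = cross[hx[cross] > 0]                 # downward (+ to -) crossings

print(len(cross), len(stable))  # -> 5 3
```

    Each extra nonmonotonic "bump" in the activation adds identity-line crossings, which is why such activations store more states per neuron than the Mexican-hat type.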

  17. Neural Network Controlled Visual Saccades

    NASA Astrophysics Data System (ADS)

    Johnson, Jeffrey D.; Grogan, Timothy A.

    1989-03-01

    The paper to be presented will discuss research on a computer vision system controlled by a neural network capable of learning through classical (Pavlovian) conditioning. Through the use of unconditional stimuli (reward and punishment) the system will develop scan patterns of eye saccades necessary to differentiate and recognize members of an input set. By foveating only those portions of the input image that the system has found to be necessary for recognition the drawback of computational explosion as the size of the input image grows is avoided. The model incorporates many features found in animal vision systems, and is governed by understandable and modifiable behavior patterns similar to those reported by Pavlov in his classic study. These behavioral patterns are a result of a neuronal model, used in the network, explicitly designed to reproduce this behavior.

  18. Hand Gesture Recognition Using Neural Networks.

    DTIC Science & Technology

    1996-05-01

    inherent in the model. The high gesture recognition rates and quick network retraining times found in the present study suggest that a neural network approach to gesture recognition be further evaluated.

  19. Predicting neural network firing pattern from phase resetting curve

    NASA Astrophysics Data System (ADS)

    Oprisan, Sorinel; Oprisan, Ana

    2007-04-01

    Autonomous neural networks called central pattern generators (CPG) are composed of endogenously bursting neurons and produce rhythmic activities, such as flying, swimming, walking, chewing, etc. Simplified CPGs for quadrupedal locomotion and swimming are modeled by a ring of neural oscillators such that the output of one oscillator constitutes the input for the subsequent neural oscillator. The phase response curve (PRC) theory discards the detailed conductance-based description of the component neurons of a network and reduces them to ``black boxes'' characterized by a transfer function, which tabulates the transient change in the intrinsic period of a neural oscillator subject to external stimuli. Based on open-loop PRC, we were able to successfully predict the phase-locked period and relative phase between neurons in a half-center network. We derived existence and stability criteria for heterogeneous ring neural networks that are in good agreement with experimental data.
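
    A simplified version of the PRC prediction: a single oscillator receiving a periodic pulse train locks 1:1 when its PRC can absorb the period mismatch at a stable fixed point of the phase map. The sinusoidal PRC and the two periods below are hypothetical:

```python
import numpy as np

T0 = 1.0   # intrinsic period of the oscillator
Ts = 1.1   # period of the incoming pulse train
prc = lambda phi: 0.3 * np.sin(2 * np.pi * phi)  # hypothetical open-loop PRC

def step(phi):
    """Stimulus-time map: the phase at which the next pulse arrives."""
    return (phi + Ts / T0 - prc(phi)) % 1.0

phi = 0.1
for _ in range(500):
    phi = step(phi)

# At a stable fixed point the PRC absorbs the period mismatch exactly,
# i.e. prc(phi*) = Ts/T0 - 1, giving 1:1 phase locking.
locked = abs(step(phi) - phi) < 1e-9
print(locked, round(phi, 3))
```

    The same map-iteration idea, extended to every connection in a ring, is what yields the existence and stability criteria the abstract mentions, without any conductance-based detail.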

  20. Modeling Aircraft Wing Loads from Flight Data Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Allen, Michael J.; Dibley, Ryan P.

    2003-01-01

    Neural networks were used to model wing bending-moment loads, torsion loads, and control surface hinge-moments of the Active Aeroelastic Wing (AAW) aircraft. Accurate loads models are required for the development of control laws designed to increase roll performance through wing twist while not exceeding load limits. Inputs to the model include aircraft rates, accelerations, and control surface positions. Neural networks were chosen to model aircraft loads because they can account for uncharacterized nonlinear effects while retaining the capability to generalize. The accuracy of the neural network models was improved by first developing linear loads models to use as starting points for network training. Neural networks were then trained with flight data for rolls, loaded reversals, wind-up-turns, and individual control surface doublets for load excitation. Generalization was improved by using gain weighting and early stopping. Results are presented for neural network loads models of four wing loads and four control surface hinge moments at Mach 0.90 and an altitude of 15,000 ft. An average model prediction error reduction of 18.6 percent was calculated for the neural network models when compared to the linear models. This paper documents the input data conditioning, input parameter selection, structure, training, and validation of the neural network models.

  1. Neural network compensation of semi-active control for magneto-rheological suspension with time delay uncertainty

    NASA Astrophysics Data System (ADS)

    Dong, Xiao Min; Yu, Miao; Li, Zushu; Liao, Changrong; Chen, Weimin

    2009-01-01

    This study presents a new intelligent control method, human-simulated intelligent control (HSIC) based on the sensory motor intelligent schema (SMIS), for a magneto-rheological (MR) suspension system considering the time delay uncertainty of MR dampers. After formulating the full car dynamic model featuring four MR dampers, the HSIC based on eight SMIS is derived. A neural network model is proposed to compensate for the uncertain time delay of the MR dampers. The HSIC based on SMIS is then experimentally realized for the manufactured full vehicle MR suspension system on the basis of the dSPACE platform. Its performance is evaluated and compared under various road conditions and presented in both time and frequency domains. The results show that significant gains are made in the improvement of vehicle performance. Results include a reduction of over 35% in the acceleration peak-to-peak value of a sprung mass over a bumpy road and a reduction of over 24% in the root-mean-square (RMS) sprung mass acceleration over a random road as compared to passive suspension with typical original equipment (OE) shock absorbers. In addition, the semi-active full vehicle system via HSIC based on SMIS provides better isolation than that via the original HSIC, which can avoid the effect of the time delay uncertainty of the MR dampers.

  2. A new formulation for feedforward neural networks.

    PubMed

    Razavi, Saman; Tolson, Bryan A

    2011-10-01

    The feedforward neural network is one of the most commonly used function approximation techniques and has been applied to a wide variety of problems arising from various disciplines. However, neural networks are black-box models that present multiple challenges in training and generalization. This paper initially looks into the internal behavior of neural networks and develops a detailed interpretation of the neural network functional geometry. Based on this geometrical interpretation, a new set of variables describing neural networks is proposed as a more effective and geometrically interpretable alternative to the traditional set of network weights and biases. Then, this paper develops a new formulation for neural networks with respect to the newly defined variables; this reformulated neural network (ReNN) is equivalent to the common feedforward neural network but has a less complex error response surface. To demonstrate the learning ability of ReNN, two training methods are employed in this paper, one using a derivative-based optimization algorithm (a variation of backpropagation) and one using a derivative-free algorithm. Moreover, a new measure of regularization on the basis of the developed geometrical interpretation is proposed to evaluate and improve the generalization ability of neural networks. The value of the proposed geometrical interpretation, the ReNN approach, and the new regularization measure are demonstrated across multiple test problems. Results show that ReNN can be trained more effectively and efficiently compared to the common neural networks and that the proposed regularization measure is an effective indicator of how a network would perform in terms of generalization.

  3. Extrapolation limitations of multilayer feedforward neural networks

    NASA Technical Reports Server (NTRS)

    Haley, Pamela J.; Soloway, Donald

    1992-01-01

    The limitations of backpropagation used as a function extrapolator were investigated. Four common functions were used to investigate the network's extrapolation capability. The purpose of the experiment was to determine whether neural networks are capable of extrapolation and, if so, to determine the range for which networks can extrapolate. The authors show that neural networks cannot extrapolate and offer an explanation to support this result.

  4. Problem Specific applications for Neural Networks

    DTIC Science & Technology

    1988-12-01

    List of Figures: Figure 1, Neural Network Models; Figure 2, A Single-Layer Perceptron...the network is in use. Three of the most well-known neural networks are the single-layer perceptron, the multi-layer perceptron, and the Kohonen self...three of these networks can accept discrete (binary) or continuous inputs (5:6). Single-Layer Perceptron. The single-layer perceptron (shown in Figure 2

  5. Drift chamber tracking with neural networks

    SciTech Connect

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off-line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.

  6. Coherence resonance in bursting neural networks.

    PubMed

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal, a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  7. Marginalization in Random Nonlinear Neural Networks

    NASA Astrophysics Data System (ADS)

    Vasudeva Raju, Rajkumar; Pitkow, Xaq

    2015-03-01

    Computations involved in tasks like causal reasoning in the brain require a type of probabilistic inference known as marginalization. Marginalization corresponds to averaging over irrelevant variables to obtain the probability of the variables of interest. This is a fundamental operation that arises whenever input stimuli depend on several variables, but only some are task-relevant. Animals often exhibit behavior consistent with marginalizing over some variables, but the neural substrate of this computation is unknown. It has been previously shown (Beck et al. 2011) that marginalization can be performed optimally by a deterministic nonlinear network that implements a quadratic interaction of neural activity with divisive normalization. We show that a simpler network can perform essentially the same computation. These Random Nonlinear Networks (RNN) are feedforward networks with one hidden layer, sigmoidal activation functions, and normally-distributed weights connecting the input and hidden layers. We train the output weights connecting the hidden units to an output population, such that the output model accurately represents a desired marginal probability distribution without significant information loss compared to optimal marginalization. Simulations for the case of linear coordinate transformations show that the RNN model has good marginalization performance, except for highly uncertain inputs that have low amplitude population responses. Behavioral experiments, based on these results, could then be used to identify if this model does indeed explain how the brain performs marginalization.

  8. Electronic neural network for dynamic resource allocation

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Eberhardt, S. P.; Daud, T.

    1991-01-01

    A VLSI implementable neural network architecture for dynamic assignment is presented. The resource allocation problems involve assigning members of one set (e.g. resources) to those of another (e.g. consumers) such that the global 'cost' of the associations is minimized. The network consists of a matrix of sigmoidal processing elements (neurons), where the rows of the matrix represent resources and columns represent consumers. Unlike previous neural implementations, however, association costs are applied directly to the neurons, reducing connectivity of the network to a VLSI-compatible O(number of neurons). Each row (and column) has an additional neuron associated with it to independently oversee activations of all the neurons in each row (and each column), providing a programmable 'k-winner-take-all' function. This function simultaneously enforces blocking (excitatory/inhibitory) constraints during convergence to control the number of active elements in each row and column within desired boundary conditions. Simulations show that the network, when implemented in fully parallel VLSI hardware, offers optimal (or near-optimal) solutions within only a fraction of a millisecond, for problems up to 128 resources and 128 consumers, orders of magnitude faster than conventional computing or heuristic search methods.
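
    The row/column constraint idea can be sketched in software with a softassign-style relaxation, in which repeated row and column normalization plays the role of the extra constraint neurons that keep one winner per row and per column (this is an analogy, not the paper's VLSI dynamics):

```python
import numpy as np

# Cost of assigning resource i to consumer j (toy example; the optimum is
# clearly the diagonal pairing).
C = np.array([[1.0, 5.0, 5.0],
              [5.0, 1.0, 5.0],
              [5.0, 5.0, 1.0]])

# Neuron activations favour low cost; alternating row and column
# normalization enforces the one-winner-per-row/column constraint.
T = 1.0  # "temperature" controlling how sharp the winners are
V = np.exp(-C / T)
for _ in range(200):
    V = V / V.sum(axis=1, keepdims=True)  # row constraint
    V = V / V.sum(axis=0, keepdims=True)  # column constraint

assignment = V.argmax(axis=1)
print(assignment)  # -> the minimum-cost pairing [0 1 2]
```

    The analog network in the paper relaxes all of these constraints simultaneously in continuous time, which is what yields sub-millisecond solutions in hardware.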

  9. Unique Applications for Artificial Neural Networks. Phase 1

    DTIC Science & Technology

    1991-08-08

    AD-A243 365. PHASE I FINAL REPORT: Unique Applications for Artificial Neural Networks, DARPA SBIR 90-115, Contract # DAAH01-91...Contents: Unique Applications for Artificial Neural Networks; Acknowledgments; Abstract; 1.0 Introduction; 2.0 The NGO-VRP Solver...solution is thus obtained through analogy. Because of this activity, artificial neural networks have emerged as a primary artificial intelligence

  10. Beneficial role of noise in artificial neural networks

    SciTech Connect

    Monterola, Christopher; Saloma, Caesar; Zapotocky, Martin

    2008-06-18

    We demonstrate enhancement of neural networks' efficacy to recognize frequency-encoded signals and/or to categorize spatial patterns of neural activity as a result of noise addition. For temporal information recovery, noise directly added to the receiving neurons allows instantaneous improvement of the signal-to-noise ratio [Monterola and Saloma, Phys. Rev. Lett. 2002]. For spatial patterns however, recurrence is necessary to extend and homogenize the operating range of a feed-forward neural network [Monterola and Zapotocky, Phys. Rev. E 2005]. Finally, using the size of the basin of attraction of the network's learned patterns (dynamical fixed points), a procedure for estimating the optimal noise is demonstrated.

  11. Neural Network Classification of Cerebral Embolic Signals

    DTIC Science & Technology

    2007-11-02

    application of new signal processing techniques to the analysis and classification of embolic signals. We applied a Wavelet Neural Network algorithm...to approximate the embolic signals, with the parameters of the wavelet nodes being used to train a Neural Network to classify these signals as resulting from normal flow, or from gaseous or solid emboli.

  12. Multidisciplinary Studies of Integrated Neural Network Systems

    DTIC Science & Technology

    1994-03-01

    They accomplish this by partitioning the system into functional sub-units in a quasi-hierarchical structure of neural network modules. We studied...three specific examples of this system integration strategy and modeled their operation for the purpose of creating new neural network architectures and

  13. Neural Network Research: A Personal Perspective,

    DTIC Science & Technology

    1988-03-01

    These vision preprocessor and ART autonomous classifier examples are just two of the many neural network architectures now being developed by...computational theories with natural realizations as real-time adaptive neural network architectures with promising properties for tackling some of the

  14. Neural Network Based Helicopter Low Airspeed Indicator

    DTIC Science & Technology

    1996-10-24

    This invention relates generally to virtual sensors and, more particularly, to a means and method utilizing a neural network for estimating...helicopter airspeed at speeds below about 50 knots using only fixed system parameters (i.e., parameters measured or determined in a reference frame fixed relative to the helicopter fuselage) as inputs to the neural network.

  15. Evolving Neural Networks for Nonlinear Control.

    DTIC Science & Technology

    1996-09-30

    An approach to creating Amorphous Recurrent Neural Networks (ARNN) using Genetic Algorithms (GA) called 2pGA has been developed and shown to be...effective in evolving neural networks for the control and stabilization of both linear and nonlinear plants, the optimal control for a nonlinear regulator

  16. Online guidance updates using neural networks

    NASA Astrophysics Data System (ADS)

    Filici, Cristian; Sánchez Peña, Ricardo S.

    2010-02-01

    The aim of this article is to present a method for the online guidance update for a launcher ascent trajectory that is based on the utilization of a neural network approximator. Generation of training patterns and selection of the input and output spaces of the neural network are presented, and implementation issues are discussed. The method is illustrated by a 2-dimensional launcher simulation.

  17. Neural network based architectures for aerospace applications

    NASA Technical Reports Server (NTRS)

    Ricart, Richard

    1987-01-01

    A brief history of the field of neural networks research is given and some simple concepts are described. In addition, some neural network based avionics research and development programs are reviewed. The need for the United States Air Force and NASA to assume a leadership role in supporting this technology is stressed.

  18. Isolated Speech Recognition Using Artificial Neural Networks

    DTIC Science & Technology

    2007-11-02

    In this project Artificial Neural Networks are used as a research tool to accomplish Automated Speech Recognition of normal speech. A small size...the first stage of this work are satisfactory and thus the application of artificial neural networks in conjunction with cepstral analysis in isolated word recognition holds promise.

  19. Neural network classification - A Bayesian interpretation

    NASA Technical Reports Server (NTRS)

    Wan, Eric A.

    1990-01-01

    The relationship between minimizing a mean squared error and finding the optimal Bayesian classifier is reviewed. This provides a theoretical interpretation for the process by which neural networks are used in classification. A number of confidence measures are proposed to evaluate the performance of the neural network classifier within a statistical framework.
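
    The MSE-Bayes connection reviewed above can be verified numerically: with indicator features, the least-squares solution is exactly the per-input empirical mean of the labels, which converges to P(class | x). The discrete toy problem below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Discrete input x in {0,...,4} with a known posterior P(class=1 | x).
true_post = np.linspace(0.1, 0.9, 5)
xs = rng.integers(0, 5, size=100000)
labels = (rng.random(xs.size) < true_post[xs]).astype(float)

# A linear network on one-hot (indicator) features trained by minimizing
# mean squared error; the minimizer is E[label | x] = P(class=1 | x).
X = np.eye(5)[xs]
w, *_ = np.linalg.lstsq(X, labels, rcond=None)

print(np.round(w, 2))  # approximately [0.1 0.3 0.5 0.7 0.9]
```

    With a flexible enough network the same argument applies to continuous inputs, which is why thresholding MSE-trained outputs approximates the optimal Bayesian classifier.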

  20. Radiation Behavior of Analog Neural Network Chip

    NASA Technical Reports Server (NTRS)

    Langenbacher, H.; Zee, F.; Daud, T.; Thakoor, A.

    1996-01-01

    A neural network experiment was conducted for the Space Technology Research Vehicle (STRV-1) 1-b, launched in June 1994. Identical sets of analog feed-forward neural network chips were used to study and compare the effects of space and ground radiation on the chips. Three failure mechanisms are noted.

  1. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.

  2. Neural Networks for Handwritten English Alphabet Recognition

    NASA Astrophysics Data System (ADS)

    Perwej, Yusuf; Chaturvedi, Ashish

    2011-04-01

    This paper demonstrates the use of neural networks for developing a system that can recognize hand-written English alphabets. In this system, each English alphabet is represented by binary values that are used as input to a simple feature extraction system, whose output is fed to our neural network system.

  3. A Survey of Neural Network Publications.

    ERIC Educational Resources Information Center

    Vijayaraman, Bindiganavale S.; Osyk, Barbara

    This paper is a survey of publications on artificial neural networks published in business journals for the period ending July 1996. Its purpose is to identify and analyze trends in neural network research during that period. This paper shows which topics have been heavily researched, when these topics were researched, and how that research has…

  4. Applications of Neural Networks in Finance.

    ERIC Educational Resources Information Center

    Crockett, Henry; Morrison, Ronald

    1994-01-01

    Discusses research with neural networks in the area of finance. Highlights include bond pricing, theoretical exposition of primary bond pricing, bond pricing regression model, and an example that created networks with corporate bonds and NeuralWare Neuralworks Professional H software using the back-propagation technique. (LRW)

  5. Neural Network Algorithm for Particle Loading

    SciTech Connect

    J. L. V. Lewandowski

    2003-04-25

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.

  6. Adaptive Neurons For Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In improved mathematical model of neural-network processor, temperature of neurons (in addition to connection strengths, also called weights, of synapses) varied during supervised-learning phase of operation according to mathematical formalism and not heuristic rule. Evidence that biological neural networks also process information at neuronal level.

  7. Neural networks applications to control and computations

    NASA Technical Reports Server (NTRS)

    Luxemburg, Leon A.

    1994-01-01

    Several interrelated problems in the area of neural network computations are described. First an interpolation problem is considered, then a control problem is reduced to a problem of interpolation by a neural network via a Lyapunov function approach, and finally a new method of learning, faster than the gradient descent method, is introduced.

  8. Forecasting Jet Fuel Prices Using Artificial Neural Networks.

    DTIC Science & Technology

    1995-03-01

    Artificial neural networks provide a new approach to commodity forecasting that does not require algorithm or rule development. Neural networks have...NeuralWare, more people can take advantage of the power of artificial neural networks. This thesis provides an introduction to neural networks, and reviews

  9. Artificial neural networks to predict 3D spinal posture in reaching and lifting activities; Applications in biomechanical models.

    PubMed

    Gholipour, A; Arjmand, N

    2016-09-06

    Spinal posture is a crucial input in biomechanical models and an essential factor in ergonomics investigations to evaluate risk of low back injury. In vivo measurement of spinal posture through the common motion capture techniques is limited to equipped laboratories and thus impractical for workplace applications. Posture prediction models are therefore considered indispensable tools. This study aims to investigate the capability of artificial neural networks (ANNs) in predicting the three-dimensional posture of the spine (S1, T12 and T1 orientations) in various activities. Two ANNs were trained and tested using measurements from spinal postures of 40 male subjects by an inertial tracking device in various static reaching and lifting (of 5 kg) activities. Inputs of each ANN were position of the hand load and body height, while outputs were rotations of the three foregoing segments relative to their initial orientation in the neutral upright posture. Effect of posture prediction errors on the estimated spinal loads in symmetric reaching activities was also investigated using a biomechanical model. Results indicated that both trained ANNs could generate outputs (three-dimensional orientations of the segments) from novel sets of inputs that were not included in the training processes (root-mean-squared-error (RMSE) < 11° and coefficient-of-determination (R(2)) > 0.95). A graphic user interface was designed and made available to facilitate use of the ANNs. The difference between the mean of each measured angle in a reaching task and the corresponding angle in a lifting task remained smaller than 8°. Spinal loads estimated by the biomechanical model based on the predicted postures were on average different by < 12% from those estimated based on the exact measured postures (RMSE = 173 and 35 N for the L5-S1 compression and shear loads, respectively).

  10. Introduction to Concepts in Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  11. Grid cells: the position code, neural network models of activity, and the problem of learning.

    PubMed

    Welinder, Peter E; Burak, Yoram; Fiete, Ila R

    2008-01-01

    We review progress on the modeling and theoretical fronts in the quest to unravel the computational properties of the grid cell code and to explain the mechanisms underlying grid cell dynamics. The goals of the review are to outline a coherent framework for understanding the dynamics of grid cells and their representation of space; to critically present and draw contrasts between recurrent network models of grid cells based on continuous attractor dynamics and independent-neuron models based on temporal interference; and to suggest open questions for experiment and theory.

  12. Pruning artificial neural networks using neural complexity measures.

    PubMed

    Jorgensen, Thomas D; Haynes, Barry P; Norlund, Charlotte C F

    2008-10-01

    This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the neural network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique proposed here helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, magnitude-based pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduction of dimensionality of the network.
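    The benchmark named in the abstract, magnitude-based pruning, is simple to sketch. The snippet below is a minimal illustration of that baseline only (not the authors' information-theoretic complexity measure); the 8x8 weight matrix and 50% pruning fraction are arbitrary choices for the example:

    ```python
    import numpy as np

    def magnitude_prune(weights, fraction):
        """Zero out the given fraction of weights with the smallest magnitudes."""
        flat = np.abs(weights).ravel()
        k = int(fraction * flat.size)
        if k == 0:
            return weights.copy()
        threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
        pruned = weights.copy()
        pruned[np.abs(pruned) <= threshold] = 0.0
        return pruned

    rng = np.random.default_rng(0)
    w = rng.normal(size=(8, 8))       # a toy weight matrix
    wp = magnitude_prune(w, 0.5)      # drop the weakest half of the connections
    ```

    In a training loop, pruning is typically alternated with retraining so the surviving connections can compensate for the removed ones.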

  13. A decade of neural networks: Practical applications and prospects

    NASA Technical Reports Server (NTRS)

    Kemeny, Sabrina (Editor); Thakoor, Anil (Editor)

    1994-01-01

    On May 11-13, 1994, JPL's Center for Space Microelectronics Technology (CSMT) hosted a neural network workshop entitled, 'A Decade of Neural Networks: Practical Applications and Prospects,' sponsored by DOD and NASA. The past ten years of renewed activity in neural network research has brought the technology to a crossroads regarding the overall scope of its future practical applicability. The purpose of the workshop was to bring together the sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and development prospects, with emphasis on practical applications. Of the 93 participants, roughly 15% were from government agencies, 30% were from industry, 20% were from universities, and 35% were from Federally Funded Research and Development Centers (FFRDC's).

  14. Enhancing neural-network performance via assortativity

    SciTech Connect

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-03-15

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.

  15. Enhancing neural-network performance via assortativity.

    PubMed

    de Franciscis, Sebastiano; Johnson, Samuel; Torres, Joaquín J

    2011-03-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations--assortativity--on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.

  16. Characterization of Early Cortical Neural Network ...

    EPA Pesticide Factsheets

    We examined the development of neural network activity using microelectrode array (MEA) recordings made in multi-well MEA plates (mwMEAs) over the first 12 days in vitro (DIV). In primary cortical cultures made from postnatal rats, action potential spiking activity was essentially absent on DIV 2 and developed rapidly between DIV 5 and 12. Spiking activity was primarily sporadic and unorganized at early DIV, and became progressively more organized with time in culture, with bursting parameters, synchrony and network bursting increasing between DIV 5 and 12. We selected 12 features to describe network activity and principal components analysis using these features demonstrated a general segregation of data by age at both the well and plate levels. Using a combination of random forest classifiers and Support Vector Machines, we demonstrated that 4 features (CV of within burst ISI, CV of IBI, network spike rate and burst rate) were sufficient to predict the age (either DIV 5, 7, 9 or 12) of each well recording with >65% accuracy. When restricting the classification problem to a binary decision, we found that classification improved dramatically, e.g. 95% accuracy for discriminating DIV 5 vs DIV 12 wells. Further, we present a novel resampling approach to determine the number of wells that might be needed for conducting comparisons of different treatments using mwMEA plates. Overall, these results demonstrate that network development on mwMEA plates is similar to

  17. Functional activation and neural networks in women with posttraumatic stress disorder related to intimate partner violence

    PubMed Central

    Simmons, Alan; Paulus, Martin P.; Thorp, Steven R.; Matthews, Scott C.; Norman, Sonya B.; Stein, Murray B.

    2008-01-01

    Background Intimate partner violence (IPV) is one of the most common causes of posttraumatic stress disorder (PTSD) in women. Victims of IPV are often preoccupied by the anticipation of impending harm. This investigation tested the hypothesis that individuals with IPV-related PTSD show exaggerated insula reactivity to the anticipation of aversive stimuli. Methods Fifteen women with a history of IPV and consequent PTSD (IPV-PTSD) and 15 non-traumatized control (NTC) women performed a task involving cued anticipation to images of positive and negative events during functional magnetic resonance imaging. Results Both groups showed increased activation of bilateral anterior insula during anticipation of negative images minus anticipation of positive images. Activation in right anterior/middle insula was significantly greater in the IPV-PTSD relative to the NTC group. Functional connectivity analysis revealed that changes in activation in right middle insula and bilateral anterior insula were more strongly associated with amygdala activation changes in NTC than in IPV-PTSD subjects. Conclusions Women with IPV-related PTSD showed increased activation in the anterior/middle insula during negative anticipation. These findings could be a consequence of the IPV exposure, reflect pre-existing differences in insular function, or arise from the development of PTSD. Thus, future longitudinal studies need to examine these possibilities. PMID:18639236

  18. Neural-Network Control Of Prosthetic And Robotic Hands

    NASA Technical Reports Server (NTRS)

    Buckley, Theresa M.

    1991-01-01

    Electronic neural networks proposed for use in controlling robotic and prosthetic hands and exoskeletal or glovelike electromechanical devices aiding intact but nonfunctional hands. Specific to patient, who activates grasping motion by voice command, by mechanical switch, or by myoelectric impulse. Patient retains higher-level control, while lower-level control provided by neural network analogous to that of miniature brain. During training, patient teaches miniature brain to perform specialized, anthropomorphic movements unique to himself or herself.

  19. Neural networks for damage identification

    SciTech Connect

    Paez, T.L.; Klenke, S.E.

    1997-11-01

    Efforts to optimize the design of mechanical systems for preestablished use environments and to extend the durations of use cycles establish a need for in-service health monitoring. Numerous studies have proposed measures of structural response for the identification of structural damage, but few have suggested systematic techniques to guide the decision as to whether or not damage has occurred based on real data. Such techniques are necessary because in field applications the environments in which systems operate and the measurements that characterize system behavior are random. This paper investigates the use of artificial neural networks (ANNs) to identify damage in mechanical systems. Two probabilistic neural networks (PNNs) are developed and used to judge whether or not damage has occurred in a specific mechanical system, based on experimental measurements. The first PNN is a classical type that casts Bayesian decision analysis into an ANN framework; it uses exemplars measured from the undamaged and damaged system to establish whether system response measurements of unknown origin come from the former class (undamaged) or the latter class (damaged). The second PNN establishes the character of the undamaged system in terms of a kernel density estimator of measures of system response; when presented with system response measures of unknown origin, it makes a probabilistic judgment whether or not the data come from the undamaged population. The physical system used to carry out the experiments is an aerospace system component, and the environment used to excite the system is a stationary random vibration. The results of damage identification experiments are presented along with conclusions rating the effectiveness of the approaches.

  20. Devices and circuits for nanoelectronic implementation of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Turel, Ozgur

    Biological neural networks perform complicated information processing tasks at speeds better than conventional computers based on conventional algorithms. This has inspired researchers to look into the way these networks function, and propose artificial networks that mimic their behavior. Unfortunately, most artificial neural networks, either software or hardware, do not provide either the speed or the complexity of a human brain. Nanoelectronics, with the high density and low power dissipation that it provides, may be used in developing more efficient artificial neural networks. This work consists of two major contributions in this direction. First is the proposal of the CMOL concept, hybrid CMOS-molecular hardware [1-8]. CMOL may circumvent most of the problems posed by molecular devices, such as low yield, yet provide high active device density, ~10^12/cm^2. The second contribution is CrossNets, artificial neural networks that are based on CMOL. We showed that CrossNets, with their fault tolerance and exceptional speed (~4 to 6 orders of magnitude faster than biological neural networks), can perform any task any artificial neural network can perform. Moreover, there is hope that if their integration scale is increased to that of the human cerebral cortex (~10^10 neurons and ~10^14 synapses), they may be capable of performing more advanced tasks.

  1. Energy coding in biological neural networks.

    PubMed

    Wang, Rubin; Zhang, Zhikang

    2007-09-01

    According to the experimental result that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, we present a new theory that offers a unique mechanism for brain information processing. We demonstrate that the neural coding produced by the activity of the brain is well described by our theory of energy coding. Because the energy coding model reveals mechanisms of brain information processing based upon known biophysical properties, we can not only reproduce various experimental results of neuro-electrophysiology, but also quantitatively explain the recent experimental results from neuroscientists at Yale University by means of the principle of energy coding. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, we estimate that the theory has very important consequences for quantitative research of cognitive function.

  2. Nonlinear programming with feedforward neural networks.

    SciTech Connect

    Reifman, J.

    1999-06-02

    We provide a practical and effective method for solving constrained optimization problems by successively training a multilayer feedforward neural network in a coupled neural-network/objective-function representation. Nonlinear programming problems are easily mapped into this representation which has a simpler and more transparent method of solution than optimization performed with Hopfield-like networks and poses very mild requirements on the functions appearing in the problem. Simulation results are illustrated and compared with an off-the-shelf optimization tool.

  3. VLSI Cells Placement Using the Neural Networks

    SciTech Connect

    Azizi, Hacene; Zouaoui, Lamri; Mokhnache, Salah

    2008-06-12

    The artificial neural networks have been studied for several years. Their effectiveness makes it possible to expect high performances. The privileged fields of these techniques remain recognition and classification. Various applications of optimization are also studied from the angle of artificial neural networks, which make it possible to apply distributed heuristic algorithms. In this article, a solution to the problem of placing the various cells during the realization of an integrated circuit is proposed, using a Kohonen network.

  4. Neural network regulation driven by autonomous neural firings

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified due to neural firings. Here, we study the effect of autonomous firings on network formation. For the temporally asymmetric Hebbian learning, bidirectional connections lose their balance easily and become unidirectional ones. Defining the difference between reciprocal connections as new variables, we could express the learning dynamics as if Ising-model spins were interacting with each other, as in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.
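    The claim that temporally asymmetric Hebbian learning drives a balanced reciprocal connection toward unidirectionality can be illustrated with a toy spike-timing-dependent plasticity (STDP) rule. This is a hedged sketch, not the paper's model: the time constants, amplitudes, weight bounds, and the assumption that neuron A leads neuron B by about 5 ms are all invented for the example:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    tau, a_plus, a_minus = 20.0, 0.02, 0.021   # STDP constants (arbitrary, ms)

    def stdp(dt):
        """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
        return a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)

    # Autonomous noisy firings in which neuron A tends to lead neuron B by ~5 ms.
    spikes_a = np.cumsum(rng.exponential(50.0, size=200))
    spikes_b = spikes_a + 5.0 + rng.normal(0.0, 1.0, size=200)

    w_ab = w_ba = 0.5                          # initially balanced reciprocal pair
    for ta, tb in zip(spikes_a, spikes_b):
        w_ab = float(np.clip(w_ab + stdp(tb - ta), 0.0, 1.0))  # A presynaptic
        w_ba = float(np.clip(w_ba + stdp(ta - tb), 0.0, 1.0))  # B presynaptic
    ```

    Because A usually fires just before B, the A-to-B connection is repeatedly potentiated while the reverse connection is depressed, so the pair ends up effectively unidirectional.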

  5. A Decade of Neural Networks: Practical Applications and Prospects

    NASA Technical Reports Server (NTRS)

    Kemeny, Sabrina E.

    1994-01-01

    The Jet Propulsion Laboratory Neural Network Workshop, sponsored by NASA and DOD, brings together sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and application prospects. While the speed and computing power of microprocessors continue to grow at an ever-increasing pace, the demand to intelligently and adaptively deal with the complex, fuzzy, and often ill-defined world around us remains to a large extent unaddressed. Powerful, highly parallel computing paradigms such as neural networks promise to have a major impact in addressing these needs. Papers in the workshop proceedings highlight benefits of neural networks in real-world applications compared to conventional computing techniques. Topics include fault diagnosis, pattern recognition, and multiparameter optimization.

  6. A neural network model for credit risk evaluation.

    PubMed

    Khashman, Adnan

    2009-08-01

    Credit scoring is one of the key analytical techniques in credit risk evaluation, which has been an active research area in financial risk management. This paper presents a credit risk evaluation system that uses a neural network model based on the back propagation learning algorithm. We train and implement the neural network to decide whether to approve or reject a credit application, using seven learning schemes and real world credit applications from the Australian credit approval datasets. A comparison of the system performance under the different learning schemes is provided; furthermore, we compare the performance of two neural networks, with one and two hidden layers, following the ideal learning scheme. Experimental results suggest that neural networks can be effectively used in automatic processing of credit applications.
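    A back-propagation credit classifier of the kind described can be sketched in a few lines. The snippet below is an illustrative stand-in, not the paper's system: it uses synthetic seven-feature "applications" with a made-up approval rule rather than the Australian credit dataset, and a single hidden layer trained by plain batch gradient descent:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical stand-in data: 7 numeric features per application, binary
    # approve/reject label from an invented rule (NOT the real dataset).
    X = rng.normal(size=(200, 7))
    y = (X[:, :3].sum(axis=1) > 0).astype(float).reshape(-1, 1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One hidden layer, trained by back propagation on a mean-squared error.
    W1 = rng.normal(scale=0.5, size=(7, 10)); b1 = np.zeros(10)
    W2 = rng.normal(scale=0.5, size=(10, 1)); b2 = np.zeros(1)
    lr, losses = 0.5, []
    for _ in range(300):
        h = sigmoid(X @ W1 + b1)                 # hidden activations
        p = sigmoid(h @ W2 + b2)                 # predicted approval probability
        losses.append(float(np.mean((p - y) ** 2)))
        dp = 2 * (p - y) / len(X) * p * (1 - p)  # gradient at output layer
        dW2, db2 = h.T @ dp, dp.sum(axis=0)
        dh = (dp @ W2.T) * h * (1 - h)           # gradient at hidden layer
        dW1, db1 = X.T @ dh, dh.sum(axis=0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

    accuracy = float(np.mean(
        (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5) == (y > 0.5)))
    ```

    The paper's "learning schemes" would correspond to different choices of learning rate, iteration budget, and hidden-layer sizes in this loop.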

  7. Object detection using pulse coupled neural networks.

    PubMed

    Ranganath, H S; Kuntimad, G

    1999-01-01

    This paper describes an object detection system based on pulse coupled neural networks. The system is designed and implemented to illustrate the power, flexibility, and potential of pulse coupled neural networks in real-time image processing. In the preprocessing stage, a pulse coupled neural network suppresses noise by smoothing the input image. In the segmentation stage, a second pulse coupled neural network iteratively segments the input image. During each iteration, with the help of a control module, the segmentation network deletes regions that do not satisfy the retention criteria from further processing and produces an improved segmentation of the retained image. In the final stage each group of connected regions that satisfies the detection criteria is identified as an instance of the object of interest.
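    A simplified pulse coupled neuron iteration (feeding and linking inputs, internal activity, binary pulse output, and a decaying dynamic threshold) can be sketched as follows. This is a generic PCNN formulation with arbitrary constants and a toy image, not the paper's segmentation system or its control module:

    ```python
    import numpy as np

    def pcnn_step(S, Y, F, L, T, alpha_f=0.1, alpha_l=0.3, alpha_t=0.2,
                  beta=0.2, v_t=20.0):
        """One iteration of a simplified pulse coupled neural network."""
        kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
        pad = np.pad(Y, 1)                     # neighbourhood sum of last pulses
        link = sum(kernel[i, j] * pad[i:i + Y.shape[0], j:j + Y.shape[1]]
                   for i in range(3) for j in range(3))
        F = np.exp(-alpha_f) * F + S + link    # feeding input
        L = np.exp(-alpha_l) * L + link        # linking input
        U = F * (1.0 + beta * L)               # internal activity
        Y = (U > T).astype(float)              # pulse output
        T = np.exp(-alpha_t) * T + v_t * Y     # dynamic threshold
        return Y, F, L, T

    S = np.zeros((8, 8)); S[2:6, 2:6] = 1.0    # toy "image": a bright square
    Y = np.zeros_like(S); F = np.zeros_like(S); L = np.zeros_like(S)
    T = np.ones_like(S)
    fired = np.zeros_like(S)                   # which pixels ever pulsed
    for _ in range(5):
        Y, F, L, T = pcnn_step(S, Y, F, L, T)
        fired = np.maximum(fired, Y)
    ```

    Pixels of similar intensity tend to pulse in the same iteration, which is what makes the pulse pattern usable as a segmentation signal.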

  8. A neural network prototyping package within IRAF

    NASA Technical Reports Server (NTRS)

    Bazell, D.; Bankman, I.

    1992-01-01

    We outline our plans for incorporating a Neural Network Prototyping Package into the IRAF environment. The package we are developing will allow the user to choose between different types of networks and to specify the details of the particular architecture chosen. Neural networks consist of a highly interconnected set of simple processing units. The strengths of the connections between units are determined by weights which are adaptively set as the network 'learns'. In some cases, learning can be a separate phase of the use cycle of the network while in other cases the network learns continuously. Neural networks have been found to be very useful in pattern recognition and image processing applications. They can form very general 'decision boundaries' to differentiate between objects in pattern space and they can be used for associative recall of patterns based on partial cues and for adaptive filtering. We discuss the different architectures we plan to use and give examples of what they can do.

  9. Description of interatomic interactions with neural networks

    NASA Astrophysics Data System (ADS)

    Hajinazar, Samad; Shao, Junping; Kolmogorov, Aleksey N.

    Neural networks are a promising alternative to traditional classical potentials for describing interatomic interactions. Recent research in the field has demonstrated how arbitrary atomic environments can be represented with sets of general functions which serve as an input for the machine learning tool. We have implemented a neural network formalism in the MAISE package and developed a protocol for automated generation of accurate models for multi-component systems. Our tests illustrate the performance of neural networks and known classical potentials for a range of chemical compositions and atomic configurations. Supported by NSF Grant DMR-1410514.

  10. Pricing financial derivatives with neural networks

    NASA Astrophysics Data System (ADS)

    Morelli, Marco J.; Montagna, Guido; Nicrosini, Oreste; Treccani, Michele; Farina, Marco; Amato, Paolo

    2004-07-01

    Neural network algorithms are applied to the problem of option pricing and adopted to simulate the nonlinear behavior of such financial derivatives. Two different kinds of neural networks, i.e. multi-layer perceptrons and radial basis functions, are used and their performances compared in detail. The analysis is carried out both for standard European options and American ones, including evaluation of the Greek letters, necessary for hedging purposes. Detailed numerical investigations show that, after a careful phase of training, neural networks are able to predict the value of options and Greek letters with high accuracy and competitive computational time.

  11. Genetic algorithm for neural networks optimization

    NASA Astrophysics Data System (ADS)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, with the neural network topology and other parameters held fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.

  12. Noise cancellation of memristive neural networks.

    PubMed

    Wen, Shiping; Zeng, Zhigang; Huang, Tingwen; Yu, Xinghuo

    2014-12-01

    This paper investigates noise cancellation problem of memristive neural networks. Based on the reproducible gradual resistance tuning in bipolar mode, a first-order voltage-controlled memristive model is employed with asymmetric voltage thresholds. Since memristive devices are especially tiny to be densely packed in crossbar-like structures and possess long time memory needed by neuromorphic synapses, this paper shows how to approximate the behavior of synapses in neural networks using this memristive device. Also certain templates of memristive neural networks are established to implement the noise cancellation.

  13. Multispectral image fusion using neural networks

    NASA Technical Reports Server (NTRS)

    Kagel, J. H.; Platt, C. A.; Donaven, T. W.; Samstad, E. A.

    1990-01-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  14. Multispectral-image fusion using neural networks

    NASA Astrophysics Data System (ADS)

    Kagel, Joseph H.; Platt, C. A.; Donaven, T. W.; Samstad, Eric A.

    1990-08-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  15. Neural networks techniques applied to reservoir engineering

    SciTech Connect

    Flores, M.; Barragan, C.

    1995-12-31

    Neural Networks are considered the greatest technological advance since the transistor. They are expected to be a common household item by the year 2000. An attempt has been made to apply Neural Networks to an important geothermal problem: predicting well production and well completion during drilling in a geothermal field. This was done in the Los Humeros geothermal field, using two common types of Neural Network models available in commercial software. Results show the learning capacity of the developed model, and its precision in the predictions that were made.

  16. Stock market index prediction using neural networks

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok

    1994-03-01

    A neural network approach to stock market index prediction is presented. Actual data from the Wall Street Journal's Dow Jones Industrial Index has been used as a benchmark in our experiments, where Radial Basis Function based neural networks have been designed to model these indices over the period from January 1988 to December 1992. Notable success has been achieved, with the proposed model producing over 90% prediction accuracy on monthly Dow Jones Industrial Index predictions. The model has also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrate that the Radial Basis Function neural network represents an excellent candidate for predicting the stock market index.
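    The core of a Radial Basis Function network of the kind described is a layer of Gaussian basis functions whose output weights are fit by linear least squares. The sketch below fits a noisy synthetic series rather than the Dow Jones data; the centre count, width, and series are arbitrary choices for the example:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 1.0, 60)              # 60 "monthly" time points
    series = np.sin(2 * np.pi * 3 * t) + 0.05 * rng.normal(size=t.size)  # toy index

    centers = np.linspace(0.0, 1.0, 12)        # Gaussian RBF centres (arbitrary)
    width = 0.08

    def design(x):
        # Basis-function activations for each input, plus a bias column.
        phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
        return np.hstack([phi, np.ones((x.size, 1))])

    # Output-layer weights by linear least squares -- the usual RBF training step.
    w, *_ = np.linalg.lstsq(design(t), series, rcond=None)
    pred = design(t) @ w
    rmse = float(np.sqrt(np.mean((pred - series) ** 2)))
    ```

    For forecasting, one would instead build the design matrix from lagged index values and fit the weights on a training window only, but the linear solve for the output layer is the same.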

  17. Optimization of multilayer neural network parameters for speaker recognition

    NASA Astrophysics Data System (ADS)

    Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka

    2016-05-01

    This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person in a known set of speakers, i.e., to determine whether the voice of an unknown speaker (wanted person) belongs to a group of reference speakers from the voice database. One of the requirements was to develop a text-independent system, which means classifying the wanted person regardless of content and language. A multilayer neural network has been used for speaker identification in this research. An artificial neural network (ANN) needs parameters such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations, and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by the parameter settings, and different tasks require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings. The goal was to find the parameters giving the neural network the highest precision and shortest validation time. The input data of the neural networks are Mel-frequency cepstral coefficients (MFCCs), which describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The data were split into training, testing, and validation sets at 70%, 15%, and 15%. The result of the research described in this article is a distinct parameter setting for the multilayer neural network for four speakers.

  18. Phase diagram of spiking neural networks

    PubMed Central

    Seyed-allaei, Hamed

    2015-01-01

    In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, and that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments, observations, and trial and error. Here, I take a different perspective, inspired by evolution: I systematically simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable. I stimulate networks with pulses and then measure their dynamic range, the dominant frequency of population activities, the total duration of activity, the maximum population rate, and the occurrence time of that maximum. The results are organized into phase diagrams, which give an insight into the space of parameters: excitatory-to-inhibitory ratio, sparseness of connections, and synaptic weights. Such a phase diagram can be used to decide the parameters of a model. The phase diagrams show that networks configured according to the common values have a good dynamic range in response to an impulse, that this dynamic range is robust with respect to synaptic weights, and that for some synaptic weights the networks oscillate in α or β frequencies, independent of external stimuli. PMID:25788885
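    As a concrete illustration of the "common values" mentioned above (not code from the paper), here is how such a random connectivity might be generated and checked; the base weight `w` and inhibitory gain `g` are arbitrary assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000           # neurons
    p_conn = 0.02      # the common 2% connection probability
    frac_inh = 0.2     # 20% inhibitory, 80% excitatory

    inhibitory = rng.random(n) < frac_inh
    mask = rng.random((n, n)) < p_conn          # sparse random connectivity
    np.fill_diagonal(mask, False)               # no self-connections

    w, g = 0.1, 4.0                             # assumed base weight and inhibitory gain
    weights = np.where(inhibitory[:, None], -g * w, w) * mask

    print(mask.mean(), inhibitory.mean())       # realized sparseness and E/I split
    ```

    Sweeping `p_conn`, `frac_inh`, and the weight scale over grids of values is exactly the parameter space the paper's phase diagrams explore.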

  19. Nonequilibrium landscape theory of neural networks.

    PubMed

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-11-05

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, a global and physical understanding of brain function and behavior is still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy: the energy basins of attraction represent memories, and the memory retrieval dynamics are determined by the energy gradient. However, realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying global stability and function. We found that the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulation are determined not only by the landscape gradient but also by the flux. The flux is closely related to the degree of asymmetry of the connections in neural networks and is the origin of neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology: the landscape gradient attracts the network down to the ring, while the flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye-movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments.

  20. Representational Distance Learning for Deep Neural Networks.

    PubMed

    McClure, Patrick; Kriegeskorte, Nikolaus

    2016-01-01

    Deep neural networks (DNNs) provide useful models of visual representational transformations. We present a method that enables a DNN (student) to learn from the internal representational spaces of a reference model (teacher), which could be another DNN or, in the future, a biological brain. Representational spaces of the student and the teacher are characterized by representational distance matrices (RDMs). We propose representational distance learning (RDL), a stochastic gradient descent method that drives the RDMs of the student to approximate the RDMs of the teacher. We demonstrate that RDL is competitive with other transfer learning techniques for two publicly available benchmark computer vision datasets (MNIST and CIFAR-100), while allowing for architectural differences between student and teacher. By pulling the student's RDMs toward those of the teacher, RDL significantly improved visual classification performance when compared to baseline networks that did not use transfer learning. In the future, RDL may enable combined supervised training of deep neural networks using task constraints (e.g., images and category labels) and constraints from brain-activity measurements, so as to build models that replicate the internal representational spaces of biological brains.
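    The core idea, driving the student's RDM toward the teacher's by gradient descent, can be sketched in a few lines of NumPy. This toy uses a linear "student" projection rather than the authors' deep networks, and the sizes and learning rate are invented.

    ```python
    import numpy as np

    def rdm(reps):
        """Representational distance matrix: pairwise squared Euclidean distances."""
        sq = (reps ** 2).sum(1)
        return sq[:, None] + sq[None, :] - 2.0 * reps @ reps.T

    rng = np.random.default_rng(0)
    X = rng.standard_normal((30, 10))            # 30 stimuli, 10 input features
    teacher = X @ rng.standard_normal((10, 5))   # fixed "teacher" representation
    W = 0.1 * rng.standard_normal((10, 5))       # student projection to be learned
    target = rdm(teacher)

    lr, losses = 1e-8, []
    for _ in range(5000):
        S = X @ W                                # student representation
        E = rdm(S) - target                      # RDM mismatch (symmetric)
        losses.append(float((E ** 2).mean()))
        # Gradient of 0.5 * sum(E**2) with respect to S, then chain rule to W:
        grad_S = 4.0 * (E.sum(1)[:, None] * S - E @ S)
        W -= lr * (X.T @ grad_S)
    print(losses[0], losses[-1])                 # RDM mismatch shrinks
    ```

    Note that only the distance geometry is matched, not the representations themselves: any rotation of the teacher's space would fit equally well, which is what lets RDL tolerate architectural differences between student and teacher.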

  1. Representational Distance Learning for Deep Neural Networks

    PubMed Central

    McClure, Patrick; Kriegeskorte, Nikolaus

    2016-01-01

    Deep neural networks (DNNs) provide useful models of visual representational transformations. We present a method that enables a DNN (student) to learn from the internal representational spaces of a reference model (teacher), which could be another DNN or, in the future, a biological brain. Representational spaces of the student and the teacher are characterized by representational distance matrices (RDMs). We propose representational distance learning (RDL), a stochastic gradient descent method that drives the RDMs of the student to approximate the RDMs of the teacher. We demonstrate that RDL is competitive with other transfer learning techniques for two publicly available benchmark computer vision datasets (MNIST and CIFAR-100), while allowing for architectural differences between student and teacher. By pulling the student's RDMs toward those of the teacher, RDL significantly improved visual classification performance when compared to baseline networks that did not use transfer learning. In the future, RDL may enable combined supervised training of deep neural networks using task constraints (e.g., images and category labels) and constraints from brain-activity measurements, so as to build models that replicate the internal representational spaces of biological brains. PMID:28082889

  2. Healthy human CSF promotes glial differentiation of hESC-derived neural cells while retaining spontaneous activity in existing neuronal networks.

    PubMed

    Kiiski, Heikki; Aänismaa, Riikka; Tenhunen, Jyrki; Hagman, Sanna; Ylä-Outinen, Laura; Aho, Antti; Yli-Hankala, Arvi; Bendel, Stepani; Skottman, Heli; Narkilahti, Susanna

    2013-06-15

    The possibilities of human pluripotent stem cell-derived neural cells, from a basic research tool to a treatment option in regenerative medicine, have been well recognized. These cells also offer an interesting tool for in vitro models of neuronal networks to be used for drug screening and neurotoxicological studies and for patient/disease-specific in vitro models. Here, aiming to develop a reductionistic in vitro human neuronal network model, we tested whether human embryonic stem cell (hESC)-derived neural cells could be cultured in human cerebrospinal fluid (CSF) in order to better mimic in vivo conditions. Our results showed that CSF altered the differentiation of hESC-derived neural cells towards glial cells at the expense of neuronal differentiation. The proliferation rate was reduced in CSF cultures. However, even though the use of CSF as the culture medium altered the glial vs. neuronal differentiation rate, the pre-existing spontaneous activity of the neuronal networks persisted throughout the study. These results suggest that it is possible to develop fully human cell- and culture-based environments that can further be modified for various in vitro modeling purposes.

  3. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    ERIC Educational Resources Information Center

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the…

  4. Results of the neural network investigation

    NASA Astrophysics Data System (ADS)

    Uvanni, Lee A.

    1992-04-01

    Rome Laboratory has designed and implemented a neural network based automatic target recognition (ATR) system under contract F30602-89-C-0079 with Booz, Allen & Hamilton (BAH), Inc., of Arlington, Virginia. The system utilizes a combination of neural network paradigms and conventional image processing techniques in a parallel environment on the IE-2000 SUN 4 workstation at Rome Laboratory. The IE-2000 workstation was designed to assist the Air Force and Department of Defense in deriving the needs for image exploitation and image exploitation support for the late 1990s to year 2000 time frame. The IE-2000 consists of a developmental testbed and an applications testbed, both with the goal of solving real-world image exploitation problems on real-world facilities. To fully exploit the parallel nature of neural networks, 18 Inmos T800 transputers were utilized, in an attempt to provide a near-linear speed-up for each subsystem component implemented on them. The initial design contained three well-known neural network paradigms, each modified by BAH to some extent: the Selective Attention Neocognitron (SAN), the Binary Contour System/Feature Contour System (BCS/FCS), and Adaptive Resonance Theory 2 (ART-2), plus one neural network designed by BAH called the Image Variance Exploitation Network (IVEN). Through rapid prototyping, the initial system evolved into a completely different final design, called the Neural Network Image Exploitation System (NNIES), which consists of two basic components: the Double Variance (DV) layer and the Multiple Object Detection And Location System (MODALS). A rapid-prototyping neural network CAD tool, designed by Booz, Allen & Hamilton, was used to rapidly build and emulate the neural network paradigms. Evaluation of the completed ATR system included probability of detection and probability of false alarm, among other measures.

  5. Recognition of Telugu characters using neural networks.

    PubMed

    Sukhaswami, M B; Seetharamulu, P; Pujari, A K

    1995-09-01

    The aim of the present work is to recognize printed and handwritten Telugu characters using artificial neural networks (ANNs). Earlier work on recognition of Telugu characters has been done using conventional pattern recognition techniques. We make an initial attempt here of using neural networks for recognition with the aim of improving upon earlier methods which do not perform effectively in the presence of noise and distortion in the characters. The Hopfield model of neural network working as an associative memory is chosen for recognition purposes initially. Due to limitation in the capacity of the Hopfield neural network, we propose a new scheme named here as the Multiple Neural Network Associative Memory (MNNAM). The limitation in storage capacity has been overcome by combining multiple neural networks which work in parallel. It is also demonstrated that the Hopfield network is suitable for recognizing noisy printed characters as well as handwritten characters written by different "hands" in a variety of styles. Detailed experiments have been carried out using several learning strategies and results are reported. It is shown here that satisfactory recognition is possible using the proposed strategy. A detailed preprocessing scheme of the Telugu characters from digitized documents is also described.
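    The associative recall the authors rely on can be illustrated with a minimal Hopfield network: patterns are stored with the Hebbian outer-product rule, and a noisy probe is iterated to a stored attractor. Random ±1 vectors stand in for character bitmaps here; the sizes and noise level are arbitrary.

    ```python
    import numpy as np

    def train_hopfield(patterns):
        """Hebbian outer-product storage rule with zeroed self-connections."""
        n = patterns.shape[1]
        W = patterns.T @ patterns / n
        np.fill_diagonal(W, 0.0)
        return W

    def recall(W, state, max_steps=20):
        """Synchronous threshold updates until a fixed point (or step limit)."""
        for _ in range(max_steps):
            new = np.where(W @ state >= 0, 1, -1)
            if np.array_equal(new, state):
                break
            state = new
        return state

    rng = np.random.default_rng(0)
    patterns = rng.choice([-1, 1], size=(3, 100))   # 3 random +/-1 "characters"
    W = train_hopfield(patterns)

    noisy = patterns[0].copy()
    flipped = rng.choice(100, 10, replace=False)    # corrupt 10% of the bits
    noisy[flipped] *= -1
    restored = recall(W, noisy)
    print((restored == patterns[0]).mean())         # fraction of bits recovered
    ```

    The capacity limit the abstract mentions (roughly 0.14 patterns per neuron for random patterns) is what motivates the paper's MNNAM scheme of running multiple such networks in parallel.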

  6. Neural Networks for Dynamic Flight Control

    DTIC Science & Technology

    1993-12-01

    uses the Adaline (22) model for development of the neural networks. Neural Graphics and other AFIT applications use a slightly different model. The...primary difference in the Nguyen application is that the Adaline uses the nonlinear function .f(a) = tanh(a) where standard backprop uses the sigmoid

  7. Radar signal categorization using a neural network

    NASA Technical Reports Server (NTRS)

    Anderson, James A.; Gately, Michael T.; Penz, P. Andrew; Collins, Dean R.

    1991-01-01

    Neural networks were used to analyze a complex simulated radar environment which contains noisy radar pulses generated by many different emitters. The neural network used is an energy minimizing network (the BSB model) which forms energy minima - attractors in the network dynamical system - based on learned input data. The system first determines how many emitters are present (the deinterleaving problem). Pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, it is possible to make tentative identifications of them based on their observed parameters. As a test of this idea, a neural network was used to form a small data base that potentially could make emitter identifications.

  8. Constructive Autoassociative Neural Network for Facial Recognition

    PubMed Central

    Fernandes, Bruno J. T.; Cavalcanti, George D. C.; Ren, Tsang I.

    2014-01-01

    Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature. PMID:25542018

  9. A neural network architecture for data classification.

    PubMed

    Lezoray, O

    2001-02-01

    This article presents an architecture of neural networks designed for the classification of data distributed among a high number of classes. A significant gain in the global classification rate can be obtained by using our architecture, which is based on a set of several small neural networks, each one discriminating only two classes. The specialization of each neural network simplifies its structure and improves the classification. Moreover, the learning step automatically determines the number of hidden neurons. The discussion is illustrated by tests on databases from the UCI machine learning repository. The experimental results show that this architecture can achieve faster learning, simpler neural networks, and improved classification performance.
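    A sketch of the one-per-pair decomposition (not the paper's exact networks): each pairwise discriminator below is a bare perceptron standing in for a small neural network, but the class decomposition and the majority vote are the same. The three-class Gaussian dataset is invented.

    ```python
    import numpy as np
    from itertools import combinations

    def train_perceptron(X, y, epochs=50, lr=0.1):
        """Tiny two-class linear unit standing in for each small network."""
        w = np.zeros(X.shape[1] + 1)
        Xb = np.hstack([X, np.ones((len(X), 1))])
        for _ in range(epochs):
            for xi, yi in zip(Xb, y):
                if yi * (w @ xi) <= 0:       # misclassified: move the boundary
                    w += lr * yi * xi
        return w

    def predict_ovo(models, X):
        """Majority vote over all pairwise discriminators."""
        Xb = np.hstack([X, np.ones((len(X), 1))])
        votes = np.zeros((len(X), 3), dtype=int)
        for (a, b), w in models.items():
            out = Xb @ w
            votes[np.arange(len(X)), np.where(out > 0, a, b)] += 1
        return votes.argmax(1)

    rng = np.random.default_rng(0)
    means = np.array([[0, 0], [4, 0], [0, 4]])     # three separated classes in 2-D
    X = np.vstack([m + rng.standard_normal((50, 2)) for m in means])
    y = np.repeat([0, 1, 2], 50)

    models = {}
    for a, b in combinations(range(3), 2):         # one model per class pair
        mask = (y == a) | (y == b)
        labels = np.where(y[mask] == a, 1, -1)     # +1 -> class a, -1 -> class b
        models[(a, b)] = train_perceptron(X[mask], labels)

    print((predict_ovo(models, X) == y).mean())    # training accuracy
    ```

    For k classes this requires k(k-1)/2 small models, but each one sees only two classes' data, which is the simplification the article exploits.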

  10. Neural Network Solutions to Optical Absorption Spectra

    NASA Astrophysics Data System (ADS)

    Rosenbrock, Conrad

    2012-10-01

    Artificial neural networks have been effective in reducing computation time while achieving remarkable accuracy for a variety of difficult physics problems. Neural networks are trained iteratively: sums of non-linear functions are adjusted in size and shape, by varying the function parameters, to fit results for complex non-linear systems. For smaller structures, ab initio simulation methods can be used to determine absorption spectra under field perturbations; however, these methods are impractical for larger structures. Designing and training an artificial neural network with simulated data from time-dependent density functional theory may allow time-dependent perturbation effects to be calculated more efficiently. I investigate the design considerations and results of neural network implementations for calculating perturbation-coupled electron oscillations in small molecules.

  11. Imbibition well stimulation via neural network design

    DOEpatents

    Weiss, William

    2007-08-14

    A method for stimulation of hydrocarbon production via imbibition by utilization of surfactants. The method includes use of fuzzy logic and neural network architecture constructs to determine surfactant use.

  12. Temporal Coding in Realistic Neural Networks

    NASA Astrophysics Data System (ADS)

    Gerasyuta, S. M.; Ivanov, D. V.

    1995-10-01

    A modification of a realistic neural network model is proposed. The model differs from the Hopfield model in having two characteristic contributions to synaptic efficacy: a short-time contribution determined by the chemical reactions in the synapses, and a long-time contribution corresponding to structural changes of synaptic contacts. An approximate solution of the realistic neural network model equations is obtained. This solution allows us to calculate the postsynaptic potential as a function of input. Using the approximate solution, the behaviour of the postsynaptic potential as a function of time is described for different temporal sequences of stimuli. Different outputs are obtained for different temporal sequences of the given stimuli. These properties of temporal coding can be exploited as a recognition element capable of being selectively tuned to different inputs.

  13. A neural network for bounded linear programming

    SciTech Connect

    Culioli, J.C.; Protopopescu, V.; Britton, C.; Ericson, N. )

    1989-01-01

    The purpose of this paper is to describe a neural network implementation of an algorithm recently designed at ORNL to solve the Transportation and the Assignment Problems, and, more generally, any explicitly bounded linear program. 9 refs.

  14. Blood glucose prediction using neural network

    NASA Astrophysics Data System (ADS)

    Soh, Chit Siang; Zhang, Xiqin; Chen, Jianhong; Raveendran, P.; Soh, Phey Hong; Yeo, Joon Hock

    2008-02-01

    We used a neural network for blood glucose level determination in this study. The data set used in this study was collected using a non-invasive blood glucose monitoring system with six laser diodes, each operating at a distinct near-infrared wavelength between 1500 nm and 1800 nm. The neural network is specifically used to determine the blood glucose level of one individual who participated in an oral glucose tolerance test (OGTT) session. Partial least squares regression is also used for blood glucose level determination for the purpose of comparison with the neural network model. The neural network model performs better in the prediction of blood glucose level than the partial least squares model.

  15. Using Neural Networks for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William

    1998-01-01

    This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.
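    The second approach can be illustrated with a linear stand-in for the auto-associative network: principal-component reconstruction of a set of redundant sensors, with the failed channel estimated by iteratively replacing it with its own reconstruction. The sensor model, rank, and all numbers are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Five redundant sensors driven by two underlying physical signals plus noise.
    t = np.linspace(0, 10, 400)
    latent = np.stack([np.sin(t), np.cos(2 * t)], axis=1)
    mixing = rng.standard_normal((2, 5))
    sensors = latent @ mixing + 0.01 * rng.standard_normal((400, 5))

    # "Train" on healthy data: a rank-2 autoassociative map (here, plain PCA).
    mean = sensors.mean(0)
    U, s, Vt = np.linalg.svd(sensors - mean, full_matrices=False)
    P = Vt[:2].T @ Vt[:2]                  # projector onto the sensor subspace

    # Failure: sensor 3 sticks at zero; estimate it from the healthy channels
    # by repeatedly replacing the failed channel with its reconstruction.
    faulty = sensors.copy()
    faulty[:, 3] = 0.0
    est = faulty.copy()
    for _ in range(50):
        recon = (est - mean) @ P + mean
        est[:, 3] = recon[:, 3]            # only the failed channel is replaced

    err = np.abs(est[:, 3] - sensors[:, 3]).max()
    print(err)                             # small reconstruction error
    ```

    An auto-associative neural network with a nonlinear bottleneck, as in the paper, generalizes this to sensors related by nonlinear redundancy; the estimation loop is the same idea.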

  16. Global exponential stability of multitime scale competitive neural networks with nonsmooth functions.

    PubMed

    Lu, Hongtao; Amari, Shun-ichi

    2006-09-01

    In this paper, we study the global exponential stability of a multitime scale competitive neural network model with nonsmooth functions, which models a laterally inhibited neural network with unsupervised Hebbian learning. The network has two types of state variables: one corresponds to the fast neural activity and the other to the slow unsupervised modification of connection weights. Based on nonsmooth analysis techniques, we prove the existence and uniqueness of the equilibrium of the system and establish new theoretical conditions ensuring global exponential stability of the unique equilibrium of the neural network. Numerical simulations are conducted to illustrate the effectiveness of the derived conditions in characterizing stability regions of the neural network.

  17. Application of artificial neural networks to gaming

    NASA Astrophysics Data System (ADS)

    Baba, Norio; Kita, Tomio; Oda, Kazuhiro

    1995-04-01

    Recently, neural network technology has been applied to various practical problems and has succeeded in producing a large number of intelligent systems. In this article, we suggest that it could also be applied to the field of gaming. In particular, we suggest that a neural network model could be used to mimic players' characters. Several computer simulation results using a computer gaming system, a modified version of the COMMONS GAME, confirm our idea.

  18. Using neural networks in software repositories

    NASA Technical Reports Server (NTRS)

    Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.

    1992-01-01

    The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.

  19. Limitations of opto-electronic neural networks

    NASA Technical Reports Server (NTRS)

    Yu, Jeffrey; Johnston, Alan; Psaltis, Demetri; Brady, David

    1989-01-01

    Consideration is given to the limitations of implementing the neurons, weights, and connections of neural networks in electronics and optics. It is shown that the advantages of each technology are best utilized when neurons are fabricated electronically and a combination of optics and electronics is employed for the weights and connections. The relationship between the types of neural networks being constructed and the choice of technologies to implement the weights and connections is examined.

  20. Predicting Car Production using a Neural Network

    DTIC Science & Technology

    2003-04-24

    World Almanac Education Group, 2003 [8] E. Petroutsos, Mastering Visual Basic .NET, SYBEX Inc., 2002 [9] D. E. Rumelhart, J. L. McClelland, Parallel...In this example, 100,000 cycles (epochs) were used to train it. The initial weights were randomly selected from values between 1 and -1. Visual ... basic .NET was used to program the neural network [8]. The neural network algorithm followed the steps outlined in [9]. As stated above, a 3 layer

  1. Neural Networks for Signal Processing and Control

    NASA Astrophysics Data System (ADS)

    Hesselroth, Ted Daniel

    Neural networks are developed for controlling a robot-arm and camera system and for processing images. The networks are based upon computational schemes that may be found in the brain. In the first network, a neural map algorithm is employed to control a five-joint pneumatic robot arm and gripper through feedback from two video cameras. The pneumatically driven robot arm employed shares essential mechanical characteristics with skeletal muscle systems. To control the position of the arm, 200 neurons formed a network representing the three-dimensional workspace embedded in a four-dimensional system of coordinates from the two cameras, and learned a set of pressures corresponding to the end effector positions, as well as a set of Jacobian matrices for interpolating between these positions. Because of the properties of the rubber-tube actuators of the arm, the position as a function of supplied pressure is nonlinear, nonseparable, and exhibits hysteresis. Nevertheless, through the neural network learning algorithm the position could be controlled to an accuracy of about one pixel (~3 mm) after two hundred learning steps. Application of repeated corrections in each step via the Jacobian matrices leads to a very robust control algorithm, since the Jacobians learned by the network have to satisfy only the weak requirement that they yield a reduction of the distance between gripper and target. The second network is proposed as a model for the mammalian vision system in which backward connections from the primary visual cortex (V1) to the lateral geniculate nucleus play a key role. The application of Hebbian learning to the forward and backward connections causes the formation of receptive fields which are sensitive to edges, bars, and spatial frequencies of preferred orientations. The receptive fields are learned in such a way as to maximize the rate of transfer of information from the LGN to V1. Orientational preferences are organized into a feature map in the primary visual

  2. Neural network for image segmentation

    NASA Astrophysics Data System (ADS)

    Skourikhine, Alexei N.; Prasad, Lakshman; Schlei, Bernd R.

    2000-10-01

    Image analysis is an important requirement of many artificial intelligence systems. Though great effort has been devoted to inventing efficient algorithms for image analysis, there is still much work to be done. It is natural to turn to mammalian vision systems for guidance because they are the best known performers of visual tasks. The pulse-coupled neural network (PCNN) model of the cat visual cortex has proven to have interesting properties for image processing. This article describes the application of PCNNs to the processing of images of heterogeneous materials; specifically, PCNNs are applied to image denoising and image segmentation. Our results show that PCNNs do well at segmentation if we perform image smoothing prior to segmentation. We use the PCNN for both smoothing and segmentation. Combining smoothing and segmentation enables us to eliminate the PCNN's sensitivity to the settings of the various PCNN parameters, whose optimal selection can be difficult and can vary even for the same problem. This approach makes image processing based on PCNNs more automatic in our application and also results in better segmentation.
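    A minimal PCNN sketch, simplified from the standard model (no feeding/linking decay terms, wrap-around neighbourhood) and with all constants invented, shows the mechanism the article exploits: pixels of similar intensity pulse together, so first-firing times segment the image.

    ```python
    import numpy as np

    def neighbor_sum(Y):
        """Sum over the 8-neighbourhood (toroidal, via np.roll)."""
        s = np.zeros_like(Y, dtype=float)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    s += np.roll(np.roll(Y, dx, 0), dy, 1)
        return s

    def pcnn(S, steps=30, beta=0.2, v_theta=5.0, a_theta=0.3):
        """Minimal pulse-coupled network; returns each pixel's first firing time."""
        Y = np.zeros_like(S)
        theta = np.full_like(S, v_theta)            # dynamic firing threshold
        fire_time = np.full(S.shape, -1, dtype=int)
        for n in range(steps):
            L = neighbor_sum(Y)
            U = S * (1 + beta * L)                  # linking modulates the input
            Y = (U > theta).astype(float)           # pulse where threshold crossed
            newly = (Y > 0) & (fire_time < 0)
            fire_time[newly] = n
            theta = theta * np.exp(-a_theta) + v_theta * Y  # decay, then recharge
        return fire_time

    img = np.full((12, 12), 0.2)
    img[3:9, 3:9] = 0.9                     # bright object on a dark background
    times = pcnn(img)
    print(np.unique(times[3:9, 3:9]))       # object pixels fire together, early
    ```

    Grouping pixels by first-firing time yields the segmentation; the article's point is that pre-smoothing with the same network makes the result much less sensitive to `beta`, `v_theta`, and `a_theta`.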

  3. Artificial neural network and medicine.

    PubMed

    Khan, Z H; Mohapatra, S K; Khodiar, P K; Ragu Kumar, S N

    1998-07-01

    The introduction of human brain functions such as perception and cognition into the computer has been made possible by the use of Artificial Neural Networks (ANNs). ANNs are computer models inspired by the structure and behavior of neurons. Like the brain, an ANN can recognize patterns, manage data and, most significantly, learn. This learning ability, not seen in other computer models simulating human intelligence, constantly improves its functional accuracy as it keeps performing. Experience is as important for an ANN as it is for a human. ANNs are increasingly being used to supplement, and perhaps eventually replace, experts in medicine, although there is still scope for improvement in some areas. The ability of ANNs to classify and interpret various forms of medical data comes as a helping hand to clinical decision making in both diagnosis and treatment. Treatment planning in medicine, radiotherapy, rehabilitation, etc. is being done using ANNs. Morbidity and mortality prediction by ANNs in different medical situations can be very helpful for hospital management. ANNs have a promising future in fundamental research, medical education and surgical robotics.

  4. Learning and coding in biological neural networks

    NASA Astrophysics Data System (ADS)

    Fiete, Ila Rani

    How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebrafinch song. Simulation and
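    The rule's essence, correlating noisy perturbations of activity with a global scalar reward, can be sketched for a single linear unit (a "node perturbation", REINFORCE-style update). This toy is not the thesis's conductance-based spiking model, and all constants are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    w_target = np.array([1.0, -2.0, 0.5])   # mapping the unit should discover
    w = np.zeros(3)
    baseline = 0.0
    lr, sigma = 0.1, 0.1

    def reward(weights, x):
        """Scalar reward: negative squared error of the perturbed output."""
        return -float((x @ weights - x @ w_target) ** 2)

    history = []
    for _ in range(3000):
        x = rng.standard_normal(3)
        noise = sigma * rng.standard_normal(3)    # exploratory perturbation
        r = reward(w + noise, x)
        w += lr * (r - baseline) * noise          # correlate noise with reward
        baseline += 0.05 * (r - baseline)         # running reward baseline
        history.append(r)

    print(np.mean(history[:100]), np.mean(history[-100:]))  # reward improves
    ```

    In expectation the update is proportional to the reward gradient, which is the stochastic-gradient-ascent property the thesis proves for its biologically plausible rule.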

  5. A neural network simulation package in CLIPS

    NASA Technical Reports Server (NTRS)

    Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John

    1990-01-01

    The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). An important by-product of this research has been the emergence of an integrated technique for using rule-based systems in conjunction with neural networks to solve complex problems. The system provides a tool kit for the integrated use of the two techniques and is also extensible to accommodate other AI techniques such as semantic networks, connectionist networks, and even Petri nets. This integrated technique can be very useful in solving complex AI problems.

  6. Logarithmic learning for generalized classifier neural network.

    PubMed

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network is introduced as an efficient classifier among the others. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from a convergence problem and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of squared error; minimization of this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets, and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function included in the generalized classifier neural network, the proposed logarithmic cost function and its derivative take continuous values, which makes it possible to exploit the fast convergence of the logarithmic cost. Due to this fast convergence, training time is reduced by as much as 99.2%, and classification performance may also be improved by up to 60%. According to the test results, the proposed method not only addresses the time requirement of the generalized classifier neural network but may also improve its classification accuracy. The proposed method can be considered an efficient way of reducing the training-time requirement of the generalized classifier neural network.
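The convergence advantage of a logarithmic cost can be seen in a generic toy setting (this is not the paper's GCNN): for a badly saturated sigmoid unit, the cross-entropy (logarithmic) cost keeps a usable gradient, whereas the squared-error gradient carries an extra factor y*(1-y) that nearly vanishes:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(use_log_cost, steps=500, eta=0.5):
    # a single sigmoid unit started deep in saturation, target t = 1
    w, x, t = -6.0, 1.0, 1.0
    for i in range(steps):
        y = sigmoid(w * x)
        if abs(y - t) < 0.05:
            return i                    # iterations needed to converge
        # logarithmic (cross-entropy) cost: gradient is (y - t)
        # squared-error cost: gradient is (y - t) * y * (1 - y)
        grad = (y - t) if use_log_cost else (y - t) * y * (1 - y)
        w -= eta * grad * x
    return steps                        # did not converge within budget

iters_log = train(True)
iters_sq = train(False)
```

With these illustrative settings the logarithmic cost converges in a few dozen iterations while squared error is still saturated after 500, mirroring the iteration-count reduction the abstract reports.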

  7. Neural networks for segmentation, tracking, and identification

    NASA Astrophysics Data System (ADS)

    Rogers, Steven K.; Ruck, Dennis W.; Priddy, Kevin L.; Tarr, Gregory L.

    1992-09-01

    The main thrust of this paper is to encourage the use of neural networks to process raw data for subsequent classification. This article addresses neural network techniques for processing raw pixel information. For this paper the definition of neural networks includes conventional artificial neural networks, such as multilayer perceptrons, and also biologically inspired processing techniques. Previously, we successfully used the biologically inspired Gabor transform to process raw pixel information and segment images. In this paper we extend those ideas to both segment and track objects in multiframe sequences. It is also desirable for the neural network processing the data to learn features for subsequent recognition. A common first step in processing raw data is to transform the data and use the transform coefficients as features for recognition. For example, handwritten English characters become linearly separable in the feature space of the low-frequency Fourier coefficients. Much of human visual perception can be modelled by assuming that the human visual system uses low-frequency Fourier coefficients as its feature space. The optimum linear transform, with respect to reconstruction, is the Karhunen-Loeve transform (KLT). It has been shown that some neural network architectures can compute approximations to the KLT. The KLT coefficients can be used for recognition as well as for compression. We tested the use of the KLT on the problem of interfacing a nonverbal patient to a computer. The KLT uses an optimal basis set for object reconstruction; for object recognition, however, the KLT may not be optimal.
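The KLT idea can be sketched in two dimensions, where the principal axis of the data covariance matrix has a closed form; the data points here are made up for illustration:

```python
import math

# Minimal 2-D Karhunen-Loeve transform (KLT) sketch: find the principal
# eigenvector of the data covariance and project each point onto it.
data = [(2.0, 1.9), (1.0, 1.1), (-1.0, -0.9), (-2.0, -2.1)]
n = len(data)
mx = sum(p[0] for p in data) / n
my = sum(p[1] for p in data) / n
cxx = sum((p[0] - mx) ** 2 for p in data) / n
cyy = sum((p[1] - my) ** 2 for p in data) / n
cxy = sum((p[0] - mx) * (p[1] - my) for p in data) / n

# principal-axis angle of a 2x2 covariance matrix (closed form)
theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
u = (math.cos(theta), math.sin(theta))          # first KL basis vector

# first KLT coefficient = projection of each point on the principal axis
coeffs = [(p[0] - mx) * u[0] + (p[1] - my) * u[1] for p in data]
var_kept = sum(c * c for c in coeffs) / n       # variance captured
```

A single coefficient per point captures almost all of the variance of this strongly correlated data, which is why truncated KLT coefficients serve both compression and recognition.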

  8. Discrimination of volcano activity and mountain-associated waves using infrasonic data and a backpropagation neural network

    NASA Astrophysics Data System (ADS)

    Ham, Fredric M.; Leeney, Thomas A.; Canady, Heather M.; Wheeler, Joseph C.

    1999-03-01

    An integral part of Comprehensive Nuclear Test Ban Treaty monitoring is an international infrasonic monitoring network capable of detecting and verifying nuclear explosions. Reliable detection of such events must be made from data that may contain other sources of infrasonic phenomena. Infrasonic waves can also result from volcanic eruptions, mountain-associated waves, auroral waves, earthquakes, meteors, avalanches, severe weather, quarry blasting, high-speed aircraft, gravity waves, and microbaroms. This paper shows that a feedforward multi-layer neural network discriminator, trained by backpropagation, is capable of distinguishing between two unique infrasonic events from single-station recordings with a relatively high degree of accuracy. The two types of infrasonic events used in this study are volcanic eruptions and a set of mountain-associated waves recorded at Windless Bight, Antarctica. An important element in the successful classification of infrasonic events is the preprocessing used to form a set of feature vectors for training and testing the neural network. The preprocessing steps used in our analysis are similar to techniques used in speech processing, specifically speech recognition. From the raw time-domain infrasonic data, a set of mel-frequency cepstral coefficients and their associated derivatives for each signal is used to form a set of feature vectors. These feature vectors contain the pertinent characteristics of the data and can be used to classify the events of interest in place of the raw data. A linear analysis was first performed on the feature-vector space to determine the best combination of mel-frequency cepstral coefficients and derivatives. Several simulations were then run to distinguish between two different volcanic events, and between mountain-associated waves and volcanic events, using their infrasonic characteristics.
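The cepstral preprocessing idea can be sketched generically (this is not the authors' exact pipeline): take the logarithm of filter-bank energies and apply a DCT, so that the leading coefficients summarize the spectral envelope compactly:

```python
import math

def cepstral_coeffs(energies, n_coeffs):
    """Log filter-bank energies followed by a DCT-II (cepstral analysis)."""
    logs = [math.log(e) for e in energies]
    n = len(logs)
    return [sum(logs[j] * math.cos(math.pi * k * (j + 0.5) / n)
                for j in range(n))
            for k in range(n_coeffs)]

# two toy "filter-bank" energy patterns standing in for two event classes
flat   = [1.0] * 8                                      # white-ish spectrum
tilted = [8.0, 4.0, 2.0, 1.0, 0.5, 0.25, 0.125, 0.0625]  # decaying spectrum

c_flat = cepstral_coeffs(flat, 3)
c_tilt = cepstral_coeffs(tilted, 3)
```

The first few coefficients already separate the two spectra (all coefficients of the flat spectrum are zero, while the tilted spectrum yields a large first coefficient), which is the property that makes short cepstral feature vectors usable in place of raw waveforms.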

  9. Antagonistic neural networks underlying differentiated leadership roles

    PubMed Central

    Boyatzis, Richard E.; Rochford, Kylie; Jack, Anthony I.

    2014-01-01

    The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task-oriented and socio-emotional-oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks – the task-positive network (TPN) and the default mode network (DMN). Neural activity in TPN tends to inhibit activity in the DMN, and vice versa. The TPN is important for problem solving, focusing of attention, making decisions, and control of action. The DMN plays a central role in emotional self-awareness, social cognition, and ethical decision making. It is also strongly linked to creativity and openness to new ideas. Because activation of the TPN tends to suppress activity in the DMN, an over-emphasis on task-oriented leadership may prove deleterious to social and emotional aspects of leadership. Similarly, an overemphasis on the DMN would result in difficulty focusing attention, making decisions, and solving known problems. In this paper, we will review major streams of theory and research on leadership roles in the context of recent findings from neuroscience and psychology. We conclude by suggesting that emerging research challenges the assumption that role differentiation is both natural and necessary, in particular when openness to new ideas, people, emotions, and ethical concerns are important to success. PMID:24624074

  10. Neural-Network Object-Recognition Program

    NASA Technical Reports Server (NTRS)

    Spirkovska, L.; Reid, M. B.

    1993-01-01

    HONTIOR computer program implements third-order neural network exhibiting invariance under translation, change of scale, and in-plane rotation. Invariance incorporated directly into architecture of network. Only one view of each object needed to train network for two-dimensional-translation-invariant recognition of object. Also used for three-dimensional-transformation-invariant recognition by training network on only set of out-of-plane rotated views. Written in C language.

  11. Fast curve fitting using neural networks

    NASA Astrophysics Data System (ADS)

    Bishop, C. M.; Roach, C. M.

    1992-10-01

    Neural networks provide a new tool for the fast solution of repetitive nonlinear curve fitting problems. In this article we introduce the concept of a neural network, and we show how such networks can be used for fitting functional forms to experimental data. The neural network algorithm is typically much faster than conventional iterative approaches. In addition, further substantial improvements in speed can be obtained by using special purpose hardware implementations of the network, thus making the technique suitable for use in fast real-time applications. The basic concepts are illustrated using a simple example from fusion research, involving the determination of spectral line parameters from measurements of B iv impurity radiation in the COMPASS-C tokamak.

  12. A neural network for visual pattern recognition

    SciTech Connect

    Fukushima, K.

    1988-03-01

    A modeling approach, a synthetic approach using neural network models, continues to gain importance. In the modeling approach, the authors study how to interconnect neurons to synthesize a brain model, a network with the same functions and abilities as the brain. The relationship between modeling neural networks and neurophysiology resembles that between theoretical physics and experimental physics: modeling takes a synthetic approach, while neurophysiology or psychology takes an analytical approach. Modeling neural networks is useful in explaining the brain and also in engineering applications, as it brings the results of neurophysiological and psychological research to engineering applications in the most direct way possible. This article discusses a neural network model thus obtained: a model with selective attention in visual pattern recognition.

  13. A gentle introduction to artificial neural networks.

    PubMed

    Zhang, Zhongheng

    2016-10-01

    The artificial neural network (ANN) is a flexible and powerful machine learning technique. However, it is underutilized in clinical medicine because of its technical challenges. This article introduces some basic ideas behind ANNs and shows how to build an ANN using R in a step-by-step framework. In topology and function, an ANN is analogous to the human brain: signals are transmitted from input nodes to output nodes, with input signals weighted according to their respective importance before reaching the output nodes, where the combined signal is processed by an activation function. I simulated a simple example to illustrate how to build a simple ANN model using the nnet() function. This function allows for one hidden layer with a varying number of units in that layer. The basic structure of the ANN can be visualized with the plug-in plot.nnet() function. The plot function is powerful in that it allows for a variety of adjustments to the appearance of the neural network. Prediction with an ANN can be performed with the predict() function, similar to that of conventional generalized linear models. Finally, the predictive power of the ANN is examined using a confusion matrix and average accuracy. It appears that the ANN is slightly better than a conventional linear model.
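A rough Python analogue of the workflow the article carries out with R's nnet() and predict(): a single hidden layer, supervised training, then a confusion matrix and accuracy. The architecture, data (XOR, which a linear model cannot fit), and hyperparameters are illustrative:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 1, 1, 0]                 # XOR: requires the hidden layer
H = 4
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # incl. bias
w2 = [random.uniform(-1, 1) for _ in range(H + 1)]                  # incl. bias

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    y = sigmoid(sum(w2[i] * h[i] for i in range(H)) + w2[H])
    return h, y

for _ in range(5000):
    for x, t in zip(X, T):
        h, y = forward(x)
        dy = y - t                       # cross-entropy gradient at the output
        for i in range(H):
            dh = dy * w2[i] * h[i] * (1 - h[i])
            w2[i] -= 0.5 * dy * h[i]
            w1[i][0] -= 0.5 * dh * x[0]
            w1[i][1] -= 0.5 * dh * x[1]
            w1[i][2] -= 0.5 * dh
        w2[H] -= 0.5 * dy

preds = [1 if forward(x)[1] > 0.5 else 0 for x in X]
confusion = [[sum(1 for p, t in zip(preds, T) if p == i and t == j)
              for j in (0, 1)] for i in (0, 1)]
accuracy = sum(p == t for p, t in zip(preds, T)) / len(T)
```

The confusion matrix and accuracy at the end correspond to the evaluation step the article performs after calling predict().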

  14. Hardware implementation of stochastic spiking neural networks.

    PubMed

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking neural networks, the latest generation of artificial neural networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that takes this probabilistic nature into account. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
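One probabilistic mechanism that makes such digital stochastic hardware attractive can be sketched in software: values encoded as Bernoulli bit-streams are multiplied by a single AND gate. This is the standard stochastic-computing identity, not necessarily the authors' exact circuit:

```python
import random

random.seed(42)

# Encode two values in [0, 1] as Bernoulli bit-streams; ANDing the
# streams yields a stream whose firing probability is the product.
N = 100_000
a, b = 0.6, 0.3
stream_a = [random.random() < a for _ in range(N)]
stream_b = [random.random() < b for _ in range(N)]
product = sum(x and y for x, y in zip(stream_a, stream_b)) / N
```

A single logic gate thus performs a multiplication, at the cost of encoding precision that improves only as the square root of the stream length.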

  15. Toward on-chip functional neuronal networks: computational study on the effect of synaptic connectivity on neural activity.

    PubMed

    Foroushani, Armin Najarpour; Ghafar-Zadeh, Ebrahim

    2014-01-01

    This paper presents a new unified computational-experimental approach to studying the role of synaptic activity in the activity of neurons in small neuronal networks (NNs). In a neuronal tissue or organ, this question is far more complex to investigate: action potentials must be recorded from populations of neurons in order to relate connectivity to the recorded activities. In our approach, we study the dynamics of very small cortical neuronal networks, which can be experimentally synthesized on chip with constrained connectivity. A multi-compartmental Hodgkin-Huxley model is used in the NEURON software to reproduce the cells, with parameters extracted from experimental data on the synthesized NNs. Using the simulation results, we then demonstrate how the type of synaptic activity affects the network response to a specific spike train.

  16. A stereo-compound hybrid microscope for combined intracellular and optical recording of invertebrate neural network activity

    PubMed Central

    Frost, William N.; Wang, Jean; Brandon, Christopher J.

    2007-01-01

    Optical recording studies of invertebrate neural networks with voltage-sensitive dyes seldom employ conventional intracellular electrodes. This may in part be due to the traditional reliance on compound microscopes for such work. While such microscopes have high light-gathering power, they do not provide depth of field, making working with sharp electrodes difficult. Here we describe a hybrid microscope design, with switchable compound and stereo objectives, that eases the use of conventional intracellular electrodes in optical recording experiments. We use it, in combination with a voltage-sensitive dye and photodiode array, to identify neurons participating in the swim motor program of the marine mollusk Tritonia. This microscope design should be applicable to optical recording studies in many preparations. PMID:17306887

  17. Deterministic chaos control in neural networks on various topologies

    NASA Astrophysics Data System (ADS)

    Neto, A. J. F.; Lima, F. W. S.

    2017-01-01

    Using numerical simulations, we study the control of deterministic chaos in neural networks on various topologies like Voronoi-Delaunay, Barabási-Albert, Small-World networks and Erdös-Rényi random graphs by "pinning" the state of a "special" neuron. We show that the chaotic activity of the networks or graphs, when control is on, can become constant or periodic.
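A toy version of pinning control, on a star topology rather than the graphs studied in the paper, shows the mechanism: clamping one hub node at the fixed point of a chaotic logistic map makes every coupled unit settle to a constant state:

```python
import random

random.seed(0)

def f(x):
    return 4.0 * x * (1.0 - x)        # fully chaotic logistic map

# Illustrative star network: each "neuron" couples to a hub whose state
# is pinned at the map's unstable fixed point x* = 0.75.
eps, x_star, steps = 0.6, 0.75, 200
leaves = [random.random() for _ in range(10)]

for _ in range(steps):
    hub = x_star                      # pinning: the hub state is clamped
    leaves = [(1 - eps) * f(x) + eps * hub for x in leaves]

spread = max(leaves) - min(leaves)    # 0 when the dynamics are constant
```

With this coupling strength the pinned dynamics contract onto the fixed point (the local multiplier is -0.8), so the formerly chaotic activity becomes constant, as in the paper's "control on" regime.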

  18. Amyloid Beta-Protein and Neural Network Dysfunction

    PubMed Central

    Peña-Ortega, Fernando

    2013-01-01

    Understanding the neural mechanisms underlying brain dysfunction induced by amyloid beta-protein (Aβ) represents one of the major challenges for Alzheimer's disease (AD) research. The most evident symptom of AD is a severe decline in cognition. Cognitive processes, as any other brain function, arise from the activity of specific cell assemblies of interconnected neurons that generate neural network dynamics based on their intrinsic and synaptic properties. Thus, the origin of Aβ-induced cognitive dysfunction, and possibly AD-related cognitive decline, must be found in specific alterations in properties of these cells and their consequences in neural network dynamics. The well-known relationship between AD and alterations in the activity of several neural networks is reflected in the slowing of the electroencephalographic (EEG) activity. Some features of the EEG slowing observed in AD, such as the diminished generation of different network oscillations, can be induced in vivo and in vitro upon Aβ application or by Aβ overproduction in transgenic models. This experimental approach offers the possibility to study the mechanisms involved in cognitive dysfunction produced by Aβ. This type of research may yield not only basic knowledge of neural network dysfunction associated with AD, but also novel options to treat this modern epidemic. PMID:26316994

  19. The neural network for tool-related cognition: An activation likelihood estimation meta-analysis of 70 neuroimaging contrasts

    PubMed Central

    Ishibashi, Ryo; Pobric, Gorana; Saito, Satoru; Lambon Ralph, Matthew A.

    2016-01-01

    The ability to recognize and use a variety of tools is an intriguing human cognitive function. Multiple neuroimaging studies have investigated neural activations with various types of tool-related tasks. In the present paper, we reviewed tool-related neural activations reported in 70 contrasts from 56 neuroimaging studies and performed a series of activation likelihood estimation (ALE) meta-analyses to identify tool-related cortical circuits dedicated either to general tool knowledge or to task-specific processes. The results indicate the following: (a) Common, task-general processing regions for tools are located in the left inferior parietal lobule (IPL) and ventral premotor cortex; and (b) task-specific regions are located in superior parietal lobule (SPL) and dorsal premotor area for imagining/executing actions with tools and in bilateral occipito-temporal cortex for recognizing/naming tools. The roles of these regions in task-general and task-specific activities are discussed with reference to evidence from neuropsychology, experimental psychology and other neuroimaging studies. PMID:27362967

  20. Spontaneous Neural Dynamics and Multi-scale Network Organization

    PubMed Central

    Foster, Brett L.; He, Biyu J.; Honey, Christopher J.; Jerbi, Karim; Maier, Alexander; Saalmann, Yuri B.

    2016-01-01

    Spontaneous neural activity has historically been viewed as task-irrelevant noise that should be controlled for via experimental design, and removed through data analysis. However, electrophysiology and functional MRI studies of spontaneous activity patterns, which have greatly increased in number over the past decade, have revealed a close correspondence between these intrinsic patterns and the structural network architecture of functional brain circuits. In particular, by analyzing the large-scale covariation of spontaneous hemodynamics, researchers are able to reliably identify functional networks in the human brain. Subsequent work has sought to identify the corresponding neural signatures via electrophysiological measurements, as this would elucidate the neural origin of spontaneous hemodynamics and would reveal the temporal dynamics of these processes across slower and faster timescales. Here we survey common approaches to quantifying spontaneous neural activity, reviewing their empirical success, and their correspondence with the findings of neuroimaging. We emphasize invasive electrophysiological measurements, which are amenable to amplitude- and phase-based analyses, and which can report variations in connectivity with high spatiotemporal precision. After summarizing key findings from the human brain, we survey work in animal models that display similar multi-scale properties. We highlight that, across many spatiotemporal scales, the covariance structure of spontaneous neural activity reflects structural properties of neural networks and dynamically tracks their functional repertoire. PMID:26903823

  1. Some neural networks compute, others don't.

    PubMed

    Piccinini, Gualtiero

    2008-01-01

    I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend the following theses. (1) Many neural networks compute--they perform computations. (2) Some neural networks compute in a classical way. Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks compute in a non-classical way. (4) Yet other neural networks do not perform computations. Brains may well fall into this last class.

  2. On sparsely connected optimal neural networks

    SciTech Connect

    Beiu, V.; Draghici, S.

    1997-10-01

    This paper uses two different approaches to show that VLSI- and size-optimal discrete neural networks are obtained for small fan-in values. These have applications to hardware implementations of neural networks, but also reveal an intrinsic limitation of digital VLSI technology: its inability to cope with highly connected structures. The first approach is based on implementing F_{n,m} functions. The authors show that this class of functions can be implemented in VLSI-optimal (i.e., minimizing AT^2) neural networks of small constant fan-in. In order to estimate the area (A) and the delay (T) of such networks, the following cost measures are used: (i) the connectivity and the number of bits for representing the weights and thresholds, for good estimates of the area; and (ii) the fan-ins and the length of the wires, for good approximations of the delay. The second approach is based on implementing Boolean functions for which the classical Shannon decomposition can be used. Such a solution has already been used to prove bounds on the size of fan-in-2 neural networks. The authors generalize that result to arbitrary fan-in, and prove that size is minimized by small fan-in values. Finally, a size-optimal neural network of small constant fan-in is suggested for F_{n,m} functions.

  3. Computational inference of neural information flow networks.

    PubMed

    Smith, V Anne; Yu, Jing; Smulders, Tom V; Hartemink, Alexander J; Jarvis, Erich D

    2006-11-24

    Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, we demonstrate that a dynamic Bayesian network (DBN) inference algorithm we originally developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The inferred networks we recover from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. To our knowledge, this study represents the first biologically validated demonstration of an algorithm to successfully infer neural information flow networks.

  4. Artificial Neural Networks and Instructional Technology.

    ERIC Educational Resources Information Center

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  5. Higher-Order Neural Networks Recognize Patterns

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Spirkovska, Lilly; Ochoa, Ellen

    1996-01-01

    Networks of higher order have enhanced capabilities to distinguish between different two-dimensional patterns and to recognize those patterns. Also enhanced capabilities to "learn" patterns to be recognized: "trained" with far fewer examples and, therefore, in less time than necessary to train comparable first-order neural networks.

  6. Orthogonal Patterns In A Binary Neural Network

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1991-01-01

    Report presents some recent developments in theory of binary neural networks. Subject matter relevant to associative (content-addressable) memories and to recognition of patterns - both of considerable importance in advancement of robotics and artificial intelligence. When probed by any pattern, network converges to one of stored patterns.
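The convergence-to-stored-patterns behavior can be sketched with a minimal Hebbian binary network storing two orthogonal +/-1 patterns (an illustrative Hopfield-style construction, not necessarily the report's exact model):

```python
# Store two orthogonal +/-1 patterns in a Hebbian weight matrix; probing
# with a corrupted pattern converges back to the stored one.
p1 = [1, 1, 1, 1, -1, -1, -1, -1]
p2 = [1, -1, 1, -1, 1, -1, 1, -1]     # orthogonal to p1
n = len(p1)
W = [[0 if i == j else p1[i] * p1[j] + p2[i] * p2[j] for j in range(n)]
     for i in range(n)]

def recall(state, sweeps=5):
    """Asynchronous threshold updates until the network settles."""
    s = list(state)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

probe = list(p1)
probe[0] = -probe[0]                  # corrupt one bit of the stored pattern
recalled = recall(probe)
```

Orthogonality of the stored patterns removes crosstalk between them, so each stored pattern is a stable state and the corrupted probe is pulled back to it, which is exactly the content-addressable-memory behavior described above.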

  7. Neural-Network Modeling Of Arc Welding

    NASA Technical Reports Server (NTRS)

    Anderson, Kristinn; Barnett, Robert J.; Springfield, James F.; Cook, George E.; Strauss, Alvin M.; Bjorgvinsson, Jon B.

    1994-01-01

    Artificial neural networks considered for use in monitoring and controlling gas/tungsten arc-welding processes. Relatively simple network, using 4 welding equipment parameters as inputs, estimates 2 critical weld-bead parameters within 5 percent. Advantage is computational efficiency.

  8. Neural networks as perpetual information generators

    NASA Astrophysics Data System (ADS)

    Englisch, Harald; Xiao, Yegao; Yao, Kailun

    1991-07-01

    The information gain in a neural network cannot be larger than the bit capacity of the synapses. It is shown that the equation derived by Engel et al. [Phys. Rev. A 42, 4998 (1990)] for the strongly diluted network with persistent stimuli contradicts this condition. Furthermore, for any time step the correct equation is derived by taking the correlation between random variables into account.

  9. An overview on development of neural network technology

    NASA Technical Reports Server (NTRS)

    Lin, Chun-Shin

    1993-01-01

    The goal of the study was to obtain a bird's-eye view of current neural network technology and of neural network research activities in NASA. The purpose was twofold: to provide a reference document for NASA researchers who want to apply neural network techniques to their problems, and to report survey results on NASA research activities, offering a view of what NASA is doing, what potential difficulties exist, and what NASA can and should do. In a ten-week study period, we interviewed ten neural network researchers at the Langley Research Center and sent 36 survey forms to researchers at the Johnson Space Center, Lewis Research Center, Ames Research Center, and Jet Propulsion Laboratory. We also sent 60 similar forms to educators and corporate researchers to collect general opinions regarding this field. Twenty-eight survey forms, 11 from NASA researchers and 17 from outside, were returned. Survey results were reported in our final report. In the final report, we first provided an overview of neural network technology: we reviewed ten neural network structures, discussed applications in five major areas, and compared analog, digital, and hybrid electronic implementations of neural networks. In the second part, we summarized known NASA neural network research studies and reported the results of the questionnaire survey. Survey results show that most studies are still in the development and feasibility-study stage. We compared the techniques, application areas, researchers' opinions on this technology, and many other aspects between NASA and non-NASA groups, and summarized their opinions on difficulties encountered. Applications are considered the top research priority by most researchers, followed by hardware development and learning-algorithm improvement. The lack of financial and management support is among the difficulties in research study. All researchers agree that the use of neural networks could result in

  10. Electronic device aspects of neural network memories

    NASA Technical Reports Server (NTRS)

    Lambe, J.; Moopenn, A.; Thakoor, A. P.

    1985-01-01

    The basic issues related to the electronic implementation of the neural network model (NNM) for content addressable memories are examined. A brief introduction to the principles of the NNM is followed by an analysis of the information storage of the neural network in the form of a binary connection matrix and the recall capability of such matrix memories based on a hardware simulation study. In addition, materials and device architecture issues involved in the future realization of such networks in VLSI-compatible ultrahigh-density memories are considered. A possible space application of such devices would be in the area of large-scale information storage without mechanical devices.

  11. Disruption forecasting at JET using neural networks

    NASA Astrophysics Data System (ADS)

    Cannas, B.; Fanni, A.; Marongiu, E.; Sonato, P.

    2004-01-01

    Neural networks are trained to evaluate the risk of plasma disruptions in a tokamak experiment using several diagnostic signals as inputs. A saliency analysis confirms the goodness of the chosen inputs, all of which contribute to the network performance. The tests carried out refer to data collected from successfully terminated and disruption-terminated pulses performed during two years of JET tokamak experiments. Results show the possibility of developing a neural network predictor that intervenes well in advance in order to avoid plasma disruption or mitigate its effects.

  12. DARPA Neural Network Study: October 1987 - February 1988

    DTIC Science & Technology

    1989-03-22

    Neurodynamics: the study of the generation and propagation of synchronized neural activity in biological systems. Neuron: the nerve cells in... 11.3 Cognitive Science; 11.4 Classical Conditioning; 12. Toward a Theory of Neural Networks; 12.1 Introduction; 12.2 Capability; 12.3... a taxonomy used to classify these models. The following chapters review important models as well as developments in neurobiology and cognitive

  13. Homeostatic Scaling of Excitability in Recurrent Neural Networks

    PubMed Central

    Remme, Michiel W. H.; Wadman, Wytse J.

    2012-01-01

    Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which neurons reside. However, most neurons are embedded in recurrent networks, which require a delicate balance between excitation and inhibition to maintain network stability. This balance could be disrupted when neurons independently adjust their intrinsic excitability. Here, we study the functioning of activity-dependent homeostatic scaling of intrinsic excitability (HSE) in a recurrent neural network. Using both simulations of a recurrent network consisting of excitatory and inhibitory neurons that implement HSE, and a mean-field description of adapting excitatory and inhibitory populations, we show that the stability of such adapting networks critically depends on the relationship between the adaptation time scales of both neuron populations. In a stable adapting network, HSE can keep all neurons functioning within their dynamic range, while the network is undergoing several (patho)physiologically relevant types of plasticity, such as persistent changes in external drive, changes in connection strengths, or the loss of inhibitory cells from the network. However, HSE cannot prevent the unstable network dynamics that result when, due to such plasticity, recurrent excitation in the network becomes too strong compared to feedback inhibition. This suggests that keeping a neural network in a stable and functional state requires the coordination of distinct homeostatic mechanisms that operate not only by adjusting neural excitability, but also by controlling network connectivity. PMID:22570604
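The basic HSE mechanism, slowly adjusting excitability so that mean activity returns to a set point after a persistent change in drive, can be caricatured with a single rate neuron (all parameters here are illustrative):

```python
import math

def rate(drive, gain):
    """Sigmoidal firing rate of a neuron with adjustable gain."""
    return 1.0 / (1.0 + math.exp(-gain * (drive - 1.0)))

target, gain, tau = 0.5, 1.0, 50.0
drive = 2.0                              # persistent increase in synaptic drive

for _ in range(5000):
    r = rate(drive, gain)
    gain += (target - r) / tau           # slow homeostatic gain adjustment

final_rate = rate(drive, gain)
```

The slow time constant tau is the crucial element: as the abstract notes for recurrent networks, the stability of the adapting system hinges on how such adaptation time scales relate across neuron populations, which a single-neuron sketch like this cannot capture.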

  14. Prediction of bioactive peptides using artificial neural networks.

    PubMed

    Andreu, David; Torrent, Marc

    2015-01-01

    Peptides are molecules of varying complexity, with different functions in the organism and with remarkable therapeutic interest. Predicting peptide activity by computational means can help us to understand their mechanism of action and deliver powerful drug-screening methodologies. In this chapter, we describe how to apply artificial neural networks to predict antimicrobial peptide activity.
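    A minimal sketch of the idea: encode each peptide as an amino-acid composition vector and train a small feedforward network to score activity. The toy data (cationic-rich "active" peptides vs. polar "inactive" ones), the network size, and the learning rate are all assumptions for illustration; they are not the authors' protocol.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def composition(seq):
    """Encode a peptide as its 20-dim amino-acid composition vector."""
    v = np.array([seq.count(a) for a in AA], dtype=float)
    return v / max(len(seq), 1)

# Toy training set: cationic peptides labeled active (1), others inactive (0).
rng = np.random.default_rng(0)
def random_peptide(cationic):
    pool = "KRKRLWIK" if cationic else "ADESTGNQ"   # assumed toy alphabets
    return "".join(rng.choice(list(pool), size=15))

X = np.array([composition(random_peptide(c)) for c in [1] * 40 + [0] * 40])
y = np.array([1.0] * 40 + [0.0] * 40)

# One-hidden-layer network trained with plain gradient descent.
W1 = rng.normal(0, 0.5, (20, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);       b2 = 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    H = np.tanh(X @ W1 + b1)           # hidden layer
    p = sigmoid(H @ W2 + b2)           # predicted activity probability
    d2 = p - y                         # cross-entropy output gradient
    dH = np.outer(d2, W2) * (1 - H ** 2)
    W2 -= 0.5 * H.T @ d2 / len(y); b2 -= 0.5 * d2.mean()
    W1 -= 0.5 * X.T @ dH / len(y); b1 -= 0.5 * dH.mean(axis=0)

accuracy = np.mean((p > 0.5) == (y > 0.5))
```

    Real pipelines would add many more sequence descriptors (charge, hydrophobicity, secondary-structure propensity) and a proper train/test split.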

  15. Neural Networks Control of a Magnetic Levitation System

    DTIC Science & Technology

    2001-04-17

    neural networks (ANN) in conjunction with proportional-integral-derivative (PID) controllers for the control of non-contacting active magnetic bearings (AMB). The objective of this technique is to reduce the effect of the unbalance on the rotor displacement without estimating the perturbation. The work consists of the following: 1) application of artificial neural networks (multi-layer perceptrons) for nonlinear modeling of the active magnetic bearing, using dynamic back-propagation methods for the adjustment of parameters; and 2) application of
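    The PID half of the scheme is standard. A minimal discrete PID loop on a toy first-order plant is sketched below; the plant model and the gains are illustrative assumptions, not the report's AMB dynamics.

```python
import numpy as np

def pid_control(kp, ki, kd, setpoint=1.0, steps=500, dt=0.01):
    """Discrete PID loop driving a toy first-order plant x' = -x + u."""
    x, integral, prev_err = 0.0, 0.0, setpoint
    history = []
    for _ in range(steps):
        err = setpoint - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        prev_err = err
        x += dt * (-x + u)                          # Euler step of the plant
        history.append(x)
    return np.array(history)

out = pid_control(kp=5.0, ki=2.0, kd=0.1)
```

    In the hybrid approach of the excerpt, a neural network would model the nonlinear plant (here replaced by the trivial linear one) while the PID loop provides the baseline feedback.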

  16. Multiwavelet neural network and its approximation properties.

    PubMed

    Jiao, L; Pan, J; Fang, Y

    2001-01-01

    A model of multiwavelet-based neural networks is proposed. Its universal and L(2) approximation properties, together with its consistency, are proved, and the convergence rates associated with these properties are estimated. The structure of this network is similar to that of the wavelet network, except that the orthonormal scaling functions are replaced by orthonormal multiscaling functions. The theoretical analyses show that the multiwavelet network converges more rapidly than the wavelet network, especially for smooth functions. To compare the two networks, experiments are carried out with the Lemarie-Meyer wavelet network, the Daubechies2 wavelet network and the GHM multiwavelet network, and the results support the theoretical analysis well. In addition, the results also illustrate that at jump discontinuities, the approximation performance of the two networks is about the same.

  17. Analysis of optical neural stimulation effects on neural networks affected by neurodegenerative diseases

    NASA Astrophysics Data System (ADS)

    Zverev, M.; Fanjul-Vélez, F.; Salas-García, I.; Ortega-Quijano, N.; Arce-Diego, J. L.

    2016-03-01

    The number of people at risk of developing a neurodegenerative disease increases as life expectancy grows due to medical advances. Multiple techniques, from pharmacological treatments to invasive electrode approaches, have been developed to improve patients' condition, but no definite cure has yet been discovered. In this work Optical Neural Stimulation (ONS) has been studied. ONS noninvasively stimulates the outer regions of the brain, mainly the neocortex. The relationship between the stimulation parameters and the therapeutic response is not totally clear. In order to find optimal ONS parameters to treat a particular neurodegenerative disease, mathematical modeling is necessary. Neural network models have been employed to study the change in neural spiking activity induced by ONS. Healthy and pathological neocortical networks have been considered to study the stimulation required to restore normal activity. The network consisted of a group of interconnected neurons, which were assigned 2D spatial coordinates. The optical stimulation spatial profile was assumed to be Gaussian. The stimulation effects were modeled as synaptic current increases in the affected neurons, proportional to the stimulation fluence. Pathological networks were defined as healthy ones with some neurons inactivated, presenting no synaptic conductance. The neurons' electrical activity was also studied in the frequency domain, focusing especially on changes in the spectral bands corresponding to brain waves. The complete model could be used to determine the optimal ONS parameters needed to achieve specific neural spiking patterns or the local neural activity increase required to treat particular neurodegenerative pathologies.

  18. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks.

    PubMed

    Miconi, Thomas

    2017-02-23

    Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.
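    The core ingredient, learning from a single delayed scalar reward, can be illustrated in miniature with reward-modulated perturbation learning: accumulate an eligibility trace of exploratory perturbations during the trial, then reinforce it in proportion to how the trial's reward compares with a running baseline. This is a one-neuron toy in the same spirit, not the paper's full recurrent-network rule; the input, target, and learning rates are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=10)
x /= np.linalg.norm(x)           # fixed, unit-norm input pattern (assumed)
w = np.zeros(10)                 # readout weights to be learned
y_target = 1.5                   # desired trial-end output
r_baseline = -y_target ** 2      # running estimate of expected reward
errors = []

for trial in range(500):
    xi = rng.normal(0.0, 0.5)            # exploratory output perturbation
    y = w @ x + xi
    reward = -(y - y_target) ** 2        # delayed scalar reward at trial end
    trace = xi * x                       # eligibility: perturbation times input
    # Reinforce perturbations whose reward beat the running baseline
    w += 0.2 * (reward - r_baseline) * trace
    r_baseline += 0.2 * (reward - r_baseline)
    errors.append(abs(w @ x - y_target))
```

    On average the update follows the reward gradient, so the trial-end error shrinks even though no error signal is available during the trial.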

  19. Applications of Neural Networks to Adaptive Control

    DTIC Science & Technology

    1989-12-01

    NAVAL POSTGRADUATE SCHOOL, Monterey, California. THESIS: Applications of Neural Networks to Adaptive Control...Second Reader E. Robert Wood, Chairman, Department of Aeronautics and Astronautics; Gordon E. Schacher, Dean of Faculty and Graduate Education. ABSTRACT...Figure 23: Network Dynamic Stability for q(t)...Figure 24: Network Dynamic Stability for e(t

  20. Using Neural Networks to Describe Tracer Correlations

    NASA Technical Reports Server (NTRS)

    Lary, D. J.; Mueller, M. D.; Mussa, H. Y.

    2003-01-01

    Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 to the present. The neural network Fortran code used is available for download.

  1. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor data to fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor data to fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Secondly, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.

  2. Neural network technologies for image classification

    NASA Astrophysics Data System (ADS)

    Korikov, A. M.; Tungusova, A. V.

    2015-11-01

    We analyze the classes of problems with an objective necessity to use neural network technologies, i.e. representation and resolution problems in the neural network logical basis. Among these problems, image recognition takes an important place, in particular the classification of multi-dimensional data based on information about textural characteristics. These problems occur in aerospace and seismic monitoring, materials science, medicine, and other fields. We reviewed different approaches to texture description: statistical, structural, and spectral. We developed a neural network technology for resolving a practical problem of cloud image classification for satellite snapshots from the spectroradiometer MODIS. The cloud texture is described by the statistical characteristics of the GLCM (Gray Level Co-Occurrence Matrix) method. From the range of neural network models that might be applied to image classification, we chose the probabilistic neural network model (PNN) and developed an implementation which performs the classification of the main types and subtypes of clouds. We also experimentally chose the optimal architecture and parameters of the PNN model used for image classification.
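    The GLCM features that feed the classifier are easy to compute directly. The sketch below builds a co-occurrence matrix for one pixel displacement and derives three common Haralick-style statistics; the tiny test image and the chosen displacement are illustrative assumptions (libraries such as scikit-image provide production versions).

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Gray Level Co-Occurrence Matrix for one displacement (dx, dy)."""
    m = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()                      # normalize to joint probabilities

def texture_features(p):
    """Haralick-style statistics commonly used to describe texture."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
p = glcm(img, levels=4)
contrast, energy, homogeneity = texture_features(p)
```

    In a pipeline like the one described, several displacements and statistics would be stacked into the feature vector given to the PNN.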

  3. Fire detection from hyperspectral data using neural network approach

    NASA Astrophysics Data System (ADS)

    Piscini, Alessandro; Amici, Stefania

    2015-10-01

    This study describes an application of artificial neural networks for the recognition of flaming areas using hyperspectral remotely sensed data. Satellite remote sensing is considered an effective and safe way to monitor active fires for environmental and people safeguarding. Neural networks are an effective and consolidated technique for the classification of satellite images. Moreover, once well trained, they prove to be very fast in the application stage for a rapid response. At flaming temperature, thanks to its low excitation energy (about 4.34 eV), potassium (K) ionizes with unique doublet emission features. These emission features can be detected remotely, providing a detection map of active fire which in principle allows flaming areas of vegetation to be separated from smouldering ones even in the presence of smoke. For this study a normalised Advanced K Band Difference (AKBD) has been applied to an airborne hyperspectral sensor covering a range of 400-970 nm with a resolution of 2.9 nm. A back-propagation neural network was used for the recognition of active fires affecting the hyperspectral image. The network was trained using all channels of the sensor as inputs, and the corresponding AKBD indexes as target output. In order to evaluate its generalization capabilities, the neural network was validated on two independent data sets of hyperspectral images, not used during the neural network training phase. The validation results for the independent data sets had an overall accuracy around 100% for both images and few commission errors (0.1%), therefore demonstrating the feasibility of estimating the presence of active fires using a neural network approach. Although the validation of the neural network classifier had few commission errors, the producer accuracies were lower due to the presence of omission errors. Image analysis revealed that those false negatives lie in "smoky" portions of the fire fronts and are due to the low intensity of the signal. The proposed method can be considered

  4. Neurale Netwerken en Radarsystemen (Neural Networks and Radar Systems)

    DTIC Science & Technology

    1989-08-01

    "general issues in cognitive science", Parallel Distributed Processing, Vol 1: Foundations, Rumelhart et al., 1986, pp 110-146. THO report, page 151. 36 D.E..."Neural networks (part 2)", Expert Focus, IEEE Expert, Spring 1988. 61 J.A. Anderson, "Cognitive and Psychological Computations with Neural Models", IEEE...page 154. 69 David H. Ackley, Geoffrey E. Hinton and Terrence J. Sejnowski, "A Learning Algorithm for Boltzmann Machines", Cognitive Science 9, 147-169

  5. Estimates on compressed neural networks regression.

    PubMed

    Zhang, Yongquan; Li, Youmei; Sun, Jianyong; Ji, Jiabing

    2015-03-01

    When the number of neural elements n of a neural network is larger than the sample size m, the overfitting problem arises since there are more parameters than actual data (more variables than constraints). In order to overcome the overfitting problem, we propose to reduce the number of neural elements by using a compressed projection A, which does not need to satisfy the Restricted Isometry Property (RIP). By applying probability inequalities and approximation properties of feedforward neural networks (FNNs), we prove that solving the FNN regression learning algorithm in the compressed domain instead of the original domain reduces the sample error at the price of an increased (but controlled) approximation error, where covering number theory is used to estimate the excess error, and an upper bound of the excess error is given.
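    The mechanics of the compressed-domain regression can be sketched in a few lines: compute an overparameterized hidden-layer design matrix, multiply by a random projection A, and solve the smaller least-squares problem. Everything below (the hidden-unit form, the dimensions, the synthetic target) is an illustrative assumption, not the paper's construction or bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 50, 200, 30        # samples, hidden neurons, compressed dimension

# Hidden-layer outputs of an overparameterized FNN: more neurons than samples
t = rng.uniform(-1, 1, size=(m, 1))
centers = rng.uniform(-1, 1, size=n)
Phi = np.tanh(t - centers)               # m x n hidden activation matrix
y = np.sin(3 * t[:, 0]) + 0.05 * rng.normal(size=m)

# Compress: a random Gaussian projection A (no RIP condition imposed)
A = rng.normal(size=(n, k)) / np.sqrt(k)
Phi_c = Phi @ A                          # m x k compressed design matrix

# Solve the regression in the compressed domain (k << n parameters)
coef, *_ = np.linalg.lstsq(Phi_c, y, rcond=None)
pred = Phi_c @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

    With k well below m, the compressed model has fewer free parameters than data points, which is exactly the overfitting remedy the abstract analyzes.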

  6. Identification of the connections in biologically inspired neural networks

    NASA Technical Reports Server (NTRS)

    Demuth, H.; Leung, K.; Beale, M.; Hicklin, J.

    1990-01-01

    We developed an identification method to find the strength of the connections between neurons from their behavior in small biologically-inspired artificial neural networks. That is, given the network external inputs and the temporal firing pattern of the neurons, we can calculate a solution for the strengths of the connections between neurons and the initial neuron activations if a solution exists. The method determines directly if there is a solution to a particular neural network problem. No training of the network is required. It should be noted that this is a first pass at the solution of a difficult problem. The neuron and network models chosen are related to biology but do not contain all of its complexities, some of which we hope to add to the model in future work. A variety of new results have been obtained. First, the method has been tailored to produce connection weight matrix solutions for networks with important features of biological neural (bioneural) networks. Second, a computationally efficient method of finding a robust central solution has been developed. This later method also enables us to find the most consistent solution in the presence of noisy data. Prospects of applying our method to identify bioneural network connections are exciting because such connections are almost impossible to measure in the laboratory. Knowledge of such connections would facilitate an understanding of bioneural networks and would allow the construction of the electronic counterparts of bioneural networks on very large scale integrated (VLSI) circuits.
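    When the activation function is known and invertible, the identification problem reduces to a linear system that can be solved directly from the observed activity, with no training. The sketch below uses a simple discrete-time tanh network as a stand-in for the authors' neuron model; the network size, dynamics, and inputs are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 5, 60                       # neurons, observed time steps

# Ground-truth connection matrix (unknown in the identification problem)
W_true = rng.normal(0, 0.3, size=(n, n))

# Simulate the observed firing pattern under known external input
u = rng.normal(size=(T, n))        # external inputs
x = np.zeros((T + 1, n))
for t in range(T):
    x[t + 1] = np.tanh(x[t] @ W_true.T + u[t])

# Identification: invert the (known) activation, then solve the
# resulting linear system  arctanh(x[t+1]) = W x[t] + u[t]  for W.
lhs = np.arctanh(np.clip(x[1:], -0.999999, 0.999999)) - u
W_est = np.linalg.lstsq(x[:-1], lhs, rcond=None)[0].T
err = np.max(np.abs(W_est - W_true))
```

    A least-squares solve also exposes whether a consistent solution exists at all, which mirrors the abstract's point that the method determines solvability directly.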

  7. Flexible body control using neural networks

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1992-01-01

    Progress is reported on the control of the Control Structures Interaction suitcase demonstrator (a flexible structure) using neural networks and fuzzy logic. It is concluded that while control by neural nets alone (i.e., allowing the net to design a controller with no human intervention) has yielded less than optimal results, the neural net trained to emulate the existing fuzzy logic controller does produce acceptable system responses for the initial conditions examined. Also, a neural net was found to be very successful in performing the emulation step necessary for the anticipatory fuzzy controller for the CSI suitcase demonstrator. The fuzzy-neural hybrid, which exhibits good robustness and noise rejection properties, shows promise as a controller for practical flexible systems, and should be further evaluated.

  8. Neural networks in support of manned space

    NASA Technical Reports Server (NTRS)

    Werbos, Paul J.

    1989-01-01

    Many lobbyists in Washington have argued that artificial intelligence (AI) is an alternative to manned space activity. In actuality, this is the opposite of the truth, especially as regards artificial neural networks (ANNs), the form of AI which has the greatest hope of mimicking human abilities in learning, interfacing with sensors and actuators, flexibility, and balanced judgement. ANNs, their relation to expert systems (the more traditional form of AI), and the limitations of both technologies are briefly reviewed. A few highlights of recent work on ANNs are given, including an NSF-sponsored workshop on ANNs for control applications. Current thinking on ANNs for use in certain key areas (the National Aerospace Plane, teleoperation, the control of large structures, fault diagnostics, and docking) which may be crucial to the long-term future of man in space is discussed.

  9. Training Deep Spiking Neural Networks Using Backpropagation

    PubMed Central

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations. PMID:27877107
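    The key trick, keeping the non-differentiable spike in the forward pass while substituting a smooth surrogate derivative in the backward pass, can be shown on a single spiking unit. The surrogate shape (a fast-sigmoid derivative), the toy data, and the learning rate are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Non-differentiable spike nonlinearity (Heaviside at threshold)."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0):
    """Smooth surrogate for the spike derivative (fast-sigmoid shape)."""
    return 1.0 / (1.0 + np.abs(v - threshold)) ** 2

# Train one spiking unit to fire for class A inputs and stay silent for B.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.0, 0.3, (20, 4)),     # class A
               rng.normal(-1.0, 0.3, (20, 4))])   # class B
y = np.array([1.0] * 20 + [0.0] * 20)
w = rng.normal(0, 0.1, 4)

for _ in range(200):
    v = X @ w                          # membrane potential
    s = spike(v)                       # forward pass uses real spikes
    # Backward pass swaps the Heaviside derivative for the surrogate
    grad = X.T @ ((s - y) * surrogate_grad(v)) / len(y)
    w -= 0.5 * grad

accuracy = np.mean(spike(X @ w) == y)
```

    In a deep SNN the same substitution is applied layer by layer, letting ordinary backpropagation flow through membrane potentials while the forward dynamics stay event-based.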

  10. Foreign currency rate forecasting using neural networks

    NASA Astrophysics Data System (ADS)

    Pandya, Abhijit S.; Kondo, Tadashi; Talati, Amit; Jayadevappa, Suryaprasad

    2000-03-01

    Neural networks are increasingly being used as a forecasting tool in many forecasting problems. This paper discusses the application of neural networks in predicting daily foreign exchange rates between the USD, GBP, and DEM. We approach the problem from a time-series analysis framework, where future exchange rates are forecasted solely from past exchange rates. This relies on the belief that past prices and future prices are closely related and interdependent. We present the result of training a neural network with historical USD-GBP data. The methodology used is explained, as well as the training process. We discuss the selection of inputs to the network, and present a comparison of using the actual exchange rates and the exchange rate differences as inputs. Price and rate differences are the preferred way of training neural networks in financial applications. Results of both approaches are presented together for comparison. We show that the network is able to learn the trends in the exchange rate movements correctly, and present the results of the prediction over several periods of time.
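    The two input encodings being compared, raw rates versus day-to-day differences, amount to how the sliding-window dataset is built. A minimal sketch, with a synthetic series standing in for the historical data:

```python
import numpy as np

def make_dataset(series, window=5, use_differences=True):
    """Build (input, target) pairs from a rate series with a sliding window.

    With use_differences=True the inputs and target are day-to-day changes,
    which removes the level/trend, the encoding preferred in the abstract.
    """
    data = np.diff(series) if use_differences else np.asarray(series)
    X, y = [], []
    for i in range(len(data) - window):
        X.append(data[i:i + window])
        y.append(data[i + window])
    return np.array(X), np.array(y)

# Synthetic "exchange rate": a trend plus random-walk noise (illustrative only)
rng = np.random.default_rng(0)
rate = 1.6 + 0.001 * np.arange(300) + 0.01 * rng.normal(size=300).cumsum()
X_raw, y_raw = make_dataset(rate, use_differences=False)
X_diff, y_diff = make_dataset(rate, use_differences=True)
```

    Either matrix can then be fed to a feedforward network; the difference encoding keeps inputs in a stationary range, which generally eases training.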

  11. Training Deep Spiking Neural Networks Using Backpropagation.

    PubMed

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.

  12. Neural network approaches to dynamic collision-free trajectory generation.

    PubMed

    Yang, S X; Meng, M

    2001-01-01

    In this paper, dynamic collision-free trajectory generation in a nonstationary environment is studied using biologically inspired neural network approaches. The proposed neural network is topologically organized, where the dynamics of each neuron is characterized by a shunting equation or an additive equation. The state space of the neural network can be either the Cartesian workspace or the joint space of multi-joint robot manipulators. There are only local lateral connections among neurons. The real-time optimal trajectory is generated through the dynamic activity landscape of the neural network without explicitly searching over either the free space or the collision paths, without explicitly optimizing any global cost functions, without any prior knowledge of the dynamic environment, and without any learning procedures. Therefore the algorithm is computationally efficient. The stability of the neural network system is guaranteed by the existence of a Lyapunov function candidate. In addition, the model is not very sensitive to the model parameters. Several model variations are presented and the differences are discussed. As examples, the proposed models are applied to generate collision-free trajectories for a mobile robot to solve a maze-type problem, to avoid concave U-shaped obstacles, to track a moving target while avoiding varying obstacles, and to generate a trajectory for a two-link planar robot with two targets. The effectiveness and efficiency of the proposed approaches are demonstrated through simulation and comparison studies.
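    The shunting equation mentioned above is a standard form (Grossberg-type) whose activity is automatically bounded. A single-neuron sketch, with assumed parameter values:

```python
import numpy as np

def shunting_neuron(I, A=10.0, B=1.0, D=1.0, dt=0.001, steps=5000):
    """Integrate the shunting equation
        dx/dt = -A*x + (B - x)*[I]+ - (D + x)*[I]-
    where [I]+ = max(I, 0) is the excitatory input and [I]- = max(-I, 0)
    the inhibitory input. The activity x stays bounded in [-D, B]."""
    x = 0.0
    Ip, Im = max(I, 0.0), max(-I, 0.0)
    for _ in range(steps):
        x += dt * (-A * x + (B - x) * Ip - (D + x) * Im)
    return x

x_exc = shunting_neuron(I=50.0)      # strong excitatory input (e.g. target)
x_inh = shunting_neuron(I=-50.0)     # strong inhibitory input (e.g. obstacle)
```

    In the full model one such neuron per grid cell, coupled to its neighbors, forms the activity landscape: targets inject excitation that propagates, obstacles clamp activity negative, and the robot moves uphill on the landscape.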

  13. Neural network approaches for noisy language modeling.

    PubMed

    Li, Jun; Ouazzane, Karim; Kazemian, Hassan B; Afzal, Muhammad Sajid

    2013-11-01

    Text entry from people is not only grammatical and distinct, but also noisy. For example, a user's typing stream contains all the information about the user's interaction with the computer using a QWERTY keyboard, which may include the user's typing mistakes as well as specific vocabulary, typing habits, and typing performance. These features are particularly evident in disabled users' typing streams. This paper proposes a new concept called noisy language modeling by further developing information theory, and applies neural networks to one of its specific applications: the typing stream. This paper experimentally uses a neural network approach to analyze disabled users' typing streams, both in general and in specific ways, to identify their typing behaviors and subsequently to make typing predictions and typing corrections. In this paper, a focused time-delay neural network (FTDNN) language model, a time gap model, a prediction model based on time gap, and a probabilistic neural network model (PNN) are developed. A 38% first hitting rate (HR) and a 53% first-three HR in symbol prediction are obtained based on the analysis of a user's typing history through FTDNN language modeling, while the modeling results using the time gap prediction model and the PNN model demonstrate that the correction rates lie predominantly between 65% and 90% for the current testing samples, with 70% of all test scores above the basic correction rates. The modeling process demonstrates that a neural network is a suitable and robust language modeling tool for analyzing a noisy language stream. The research also paves the way for practical application development in areas such as informational analysis, text prediction, and error correction by providing a theoretical basis of neural network approaches for noisy language modeling.

  14. Intelligent neural network classifier for automatic testing

    NASA Astrophysics Data System (ADS)

    Bai, Baoxing; Yu, Heping

    1996-10-01

    This paper is concerned with the application of a multilayer feedforward neural network to the vision-based inspection of industrial pictures, and introduces a high-performance image processing and recognition system which can be used for real-time detection of blemishes, streaks, cracks, etc. on the inner walls of high-accuracy pipes. To take full advantage of the capabilities of artificial neural networks, such as distributed information memory, large-scale self-adapting parallel processing, and high fault tolerance, this system uses a multilayer perceptron as a regular detector to extract features of the images to be inspected and classify them.

  15. Implementation aspects of Graph Neural Networks

    NASA Astrophysics Data System (ADS)

    Barcz, A.; Szymański, Z.; Jankowski, S.

    2013-10-01

    This article summarises the results of the implementation of a Graph Neural Network classifier. The Graph Neural Network model is a connectionist model, capable of processing various types of structured data, including non-positional and cyclic graphs. In order to operate correctly, the GNN model must implement a transition function that is a contraction map, which is assured by imposing a penalty on model weights. This article presents research results concerning the impact of the penalty parameter on the model training process and the practical decisions that were made during the GNN implementation process.

  16. Livermore Big Artificial Neural Network Toolkit

    SciTech Connect

    Essen, Brian Van; Jacobs, Sam; Kim, Hyojin; Dryden, Nikoli; Moon, Tim

    2016-07-01

    LBANN is a toolkit designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training. Specifically, it is optimized for low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library, which is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  17. Simulation of photosynthetic production using neural network

    NASA Astrophysics Data System (ADS)

    Kmet, Tibor; Kmetova, Maria

    2013-10-01

    This paper deals with neural-network-based optimal control synthesis for solving optimal control problems with control and state constraints and discrete time delay. The optimal control problem is transcribed into a nonlinear programming problem which is implemented with an adaptive critic neural network. This approach is applicable to a wide class of nonlinear systems. The proposed simulation method is illustrated by the optimal control problem of photosynthetic production described by discrete time-delay differential equations. Results show that the adaptive-critic-based systematic approach holds promise for obtaining the optimal control with control and state constraints.

  18. Automatic identification of species with neural networks

    PubMed Central

    Jiménez-Segura, Luz Fernanda

    2014-01-01

    A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% true positive identifications for fish, 92.87% for plants and 93.25% for butterflies. Our results highlight how neural networks can complement species identification. PMID:25392749

  19. A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation.

    PubMed

    Vuković, Najdan; Miljković, Zoran

    2013-10-01

    A radial basis function (RBF) neural network is constructed from a certain number of RBF neurons, and these networks are among the most used neural networks for modeling various nonlinear problems in engineering. The conventional RBF neuron is usually based on a Gaussian activation function with a single width per activation function. This feature restricts neuron performance when modeling complex nonlinear problems. To overcome the limitation of a single scale, this paper presents a neural network with a similar but different activation function: the hyper basis function (HBF). The HBF allows different scaling of the input dimensions to provide a better generalization property when dealing with complex nonlinear problems in engineering practice. The HBF is based on a generalization of the Gaussian neuron that applies a Mahalanobis-like distance as the distance metric between an input training sample and a prototype vector. Compared to the RBF, the HBF neuron has more parameters to optimize, but an HBF neural network needs fewer HBF neurons to memorize the relationship between input and output sets in order to achieve a good generalization property. However, recent research results on HBF neural network performance have shown that an optimal way of constructing this type of neural network is needed; this paper addresses this issue and modifies a sequential learning algorithm for the HBF neural network that exploits the concept of a neuron's significance and allows growing and pruning of HBF neurons during the learning process. An extensive experimental study shows that the HBF neural network, trained with the developed learning algorithm, achieves lower prediction error and a more compact neural network.
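    The RBF-to-HBF generalization is just the distance metric: the single width sigma becomes a full scaling matrix M, with the isotropic case recovering the RBF exactly. A minimal sketch with assumed example values:

```python
import numpy as np

def rbf(x, c, sigma):
    """Conventional Gaussian RBF neuron: one width for every dimension."""
    return np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))

def hbf(x, c, M):
    """Hyper basis function neuron: Mahalanobis-like distance with a full
    (positive-definite) scaling matrix M, allowing a different scale, and
    correlations, per input dimension."""
    d = x - c
    return np.exp(-0.5 * d @ M @ d)

x = np.array([0.5, -0.2])
c = np.array([0.0, 0.0])

# With M = I / sigma^2 the HBF reduces exactly to the RBF
sigma = 0.7
M_iso = np.eye(2) / sigma ** 2
same = np.isclose(hbf(x, c, M_iso), rbf(x, c, sigma))

# An anisotropic M scales each input dimension differently
M_aniso = np.diag([1.0 / 0.2 ** 2, 1.0 / 2.0 ** 2])
val = hbf(x, c, M_aniso)
```

    The extra parameters in M are what the growing-and-pruning sequential algorithm must manage per neuron.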

  20. Circuit design and exponential stabilization of memristive neural networks.

    PubMed

    Wen, Shiping; Huang, Tingwen; Zeng, Zhigang; Chen, Yiran; Li, Peng

    2015-03-01

This paper addresses the problem of circuit design and global exponential stabilization of memristive neural networks with time-varying delays and general activation functions. Based on the Lyapunov-Krasovskii functional method and the free-weighting-matrix technique, delay-dependent criteria for the global exponential stability and stabilization of memristive neural networks are derived in the form of linear matrix inequalities (LMIs). Two numerical examples are elaborated to illustrate the characteristics of the results. It is noteworthy that the traditional assumptions on the boundedness of the derivative of the time-varying delays are removed.

  1. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    PubMed Central

    Jie, Shao

    2014-01-01

A model based on the improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with the memory effect. In this model, the hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions. The error curves of the sum of squared error (SSE), varying with the number of hidden neurons and the iteration step, are studied to determine the number of hidden layer neurons. Simulation results of the half-bridge class-D power amplifier (CDPA) with two-tone and broadband input signals show that the proposed behavioral model can reconstruct the CDPA system accurately and depict the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance. PMID:25054172
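The Chebyshev basis used here in place of sigmoid activations can be generated with the standard three-term recurrence; a minimal sketch (the IENN's actual layer wiring is not reproduced):

```python
import numpy as np

def chebyshev_basis(x, n):
    """First n Chebyshev polynomials of the first kind at x (x in [-1, 1]),
    via the recurrence T0 = 1, T1 = x, T_{k+1} = 2*x*T_k - T_{k-1}."""
    T = [np.ones_like(x), x]
    for _ in range(2, n):
        T.append(2.0 * x * T[-1] - T[-2])
    return np.stack(T[:n])

x = np.linspace(-1.0, 1.0, 5)
print(chebyshev_basis(x, 4))  # rows: T0..T3 evaluated on the grid
```

Because the polynomials are orthogonal on [-1, 1], each hidden neuron contributes a distinct frequency-like component, which is what lets the SSE-vs-neuron-count curve guide the choice of hidden layer size.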

  2. Representation of neural networks as Lotka-Volterra systems

    SciTech Connect

    Moreau, Yves; Vandewalle, Joos; Louies, Stephane; Brenig, Leon

    1999-03-22

We study changes of coordinates that allow the representation of the ordinary differential equations describing continuous-time recurrent neural networks as differential equations describing predator-prey models--also called Lotka-Volterra systems. We first transform the equations for the neural network into quasi-monomial form, where the vector field of the dynamical system is expressed as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network.
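For reference, the target Lotka-Volterra form is the standard one (written here from the general definition, not copied from the paper):

```latex
\dot{y}_i = y_i\left(\lambda_i + \sum_{j=1}^{m} A_{ij}\, y_j\right), \qquad i = 1, \dots, m,
```

where the dimension m of the transformed system exceeds the dimension of the original network, and only the first variables track the original network's state.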

  3. Dynamical analysis of uncertain neural networks with multiple time delays

    NASA Astrophysics Data System (ADS)

    Arik, Sabri

    2016-02-01

    This paper investigates the robust stability problem for dynamical neural networks in the presence of time delays and norm-bounded parameter uncertainties with respect to the class of non-decreasing, non-linear activation functions. By employing the Lyapunov stability and homeomorphism mapping theorems together, a new delay-independent sufficient condition is obtained for the existence, uniqueness and global asymptotic stability of the equilibrium point for the delayed uncertain neural networks. The condition obtained for robust stability establishes a matrix-norm relationship between the network parameters of the neural system, which can be easily verified by using properties of the class of the positive definite matrices. Some constructive numerical examples are presented to show the applicability of the obtained result and its advantages over the previously published corresponding literature results.

  4. A neural network approach to complete coverage path planning.

    PubMed

    Yang, Simon X; Luo, Chaomin

    2004-02-01

    Complete coverage path planning requires the robot path to cover every part of the workspace, which is an essential issue in cleaning robots and many other robotic applications such as vacuum robots, painter robots, land mine detectors, lawn mowers, automated harvesters, and window cleaners. In this paper, a novel neural network approach is proposed for complete coverage path planning with obstacle avoidance of cleaning robots in nonstationary environments. The dynamics of each neuron in the topologically organized neural network is characterized by a shunting equation derived from Hodgkin and Huxley's (1952) membrane equation. There are only local lateral connections among neurons. The robot path is autonomously generated from the dynamic activity landscape of the neural network and the previous robot location. The proposed model algorithm is computationally simple. Simulation results show that the proposed model is capable of planning collision-free complete coverage robot paths.
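The shunting dynamics referred to above (derived from the membrane equation) keep each neuron's activity bounded; a minimal Euler-integration sketch with illustrative constants, not the paper's full planner:

```python
import numpy as np

def shunting_step(x, excit, inhib, A=10.0, B=1.0, D=1.0, dt=0.01):
    """One Euler step of the shunting equation
        dx/dt = -A*x + (B - x)*excit - (D + x)*inhib,
    which keeps each neuron's activity bounded in [-D, B]."""
    return x + dt * (-A * x + (B - x) * excit - (D + x) * inhib)

x = np.zeros(4)
excit = np.array([5.0, 0.0, 0.0, 0.0])   # e.g. an uncleaned area drives neuron 0
inhib = np.array([0.0, 0.0, 50.0, 0.0])  # e.g. an obstacle suppresses neuron 2
for _ in range(1000):
    x = shunting_step(x, excit, inhib)
print(x)  # excited neuron settles positive, inhibited neuron negative
```

In the paper's planner, a grid of such neurons forms the activity landscape, and the robot moves toward the neighboring neuron with the highest activity.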

  5. Neural Network Control of a Magnetically Suspended Rotor System

    NASA Technical Reports Server (NTRS)

    Choi, Benjamin; Brown, Gerald; Johnson, Dexter

    1997-01-01

Magnetic bearings offer significant advantages because of their noncontact operation, which can reduce maintenance. Higher speeds, no friction, no lubrication, weight reduction, precise position control, and active damping make them far superior to conventional contact bearings. However, there are technical barriers that limit the application of this technology in industry. One of them is the need for a nonlinear controller that can overcome the system nonlinearity and uncertainty inherent in magnetic bearings. This paper discusses the use of a neural network as a nonlinear controller that circumvents system nonlinearity. A neural network controller was well trained and successfully demonstrated on a small magnetic bearing rig. This work demonstrated the feasibility of using a neural network to control nonlinear magnetic bearings and systems with unknown dynamics.

  6. Existence and uniqueness results for neural network approximations.

    PubMed

    Williamson, R C; Helmke, U

    1995-01-01

    Some approximation theoretic questions concerning a certain class of neural networks are considered. The networks considered are single input, single output, single hidden layer, feedforward neural networks with continuous sigmoidal activation functions, no input weights but with hidden layer thresholds and output layer weights. Specifically, questions of existence and uniqueness of best approximations on a closed interval of the real line under mean-square and uniform approximation error measures are studied. A by-product of this study is a reparametrization of the class of networks considered in terms of rational functions of a single variable. This rational reparametrization is used to apply the theory of Pade approximation to the class of networks considered. In addition, a question related to the number of local minima arising in gradient algorithms for learning is examined.
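The network class under study is unusually restricted: no input weights, only hidden-layer thresholds and output weights. A minimal sketch of that parametrization (the values are arbitrary; the paper's approximation results are not reproduced):

```python
import numpy as np

def network(x, thresholds, out_weights):
    """Single-input, single-output, single-hidden-layer network with NO
    input weights -- only hidden thresholds t_i and output weights c_i:
        f(x) = sum_i c_i * sigmoid(x + t_i)
    """
    return out_weights @ (1.0 / (1.0 + np.exp(-(x + thresholds))))

t = np.array([-2.0, 0.0, 2.0])   # hidden-layer thresholds
c = np.array([1.0, -0.5, 0.25])  # output-layer weights
print(network(0.0, t, c))
```

Fixing the input weights to 1 is what makes the rational reparametrization mentioned in the abstract tractable: each hidden unit is the same sigmoid shifted by its threshold.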

  7. Neuroplasticity of prehensile neural networks after quadriplegia.

    PubMed

    Di Rienzo, F; Guillot, A; Mateo, S; Daligault, S; Delpuech, C; Rode, G; Collet, C

    2014-08-22

Targeting cortical neuroplasticity through rehabilitation-based practice is believed to enhance functional recovery after spinal cord injury (SCI). While prehensile performance is severely disturbed after C6-C7 SCI, subjects with tetraplegia can learn a compensatory passive prehension using the tenodesis effect. During tenodesis, an active wrist extension triggers a passive flexion of the fingers that allows grasping. We investigated whether motor imagery (MI) training could promote activity-dependent neuroplasticity and improve prehensile tenodesis performance. SCI participants (n=6) and healthy participants (HP, n=6) took part in a repeated-measurement design. After an extended baseline period of 3 weeks including repeated magnetoencephalography (MEG) measurements, MI training was embedded within the classical course of physiotherapy for 5 additional weeks (three sessions per week). An immediate MEG post-test and a follow-up at 2 months were performed. Before MI training, compensatory activations and recruitment of deafferented cortical regions characterized the cortical activity during actual and imagined prehension in SCI participants. After MI training, MEG data yielded reduced compensatory activations. Cortical recruitment became similar to that in HP. Behavioral analysis evidenced decreased movement variability, suggesting motor learning of tenodesis. The data suggest that MI training contributed to reversing compensatory neuroplasticity in SCI participants and promoted the integration of new upper limb prehensile coordination into the neural networks functionally dedicated to the control of healthy prehension before injury.

  8. Porosity Log Prediction Using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Dwi Saputro, Oki; Lazuardi Maulana, Zulfikar; Dzar Eljabbar Latief, Fourier

    2016-08-01

Well logging is important in oil and gas exploration. Many physical parameters of a reservoir are derived from well logging measurements. Geophysicists often use well logging to obtain reservoir properties such as porosity, water saturation, and permeability. Most of the time, measuring these reservoir properties is considered expensive. One method to substitute for the measurement is to conduct a prediction using an artificial neural network. In this paper, an artificial neural network is used to predict porosity log data from other log data. Three wells from the 'yy' field are used in the prediction experiment. The log data are sonic, gamma ray, and porosity logs. One of the three wells is used as training data for the artificial neural network, which employs the Levenberg-Marquardt backpropagation algorithm. Through several trials, we find that the most optimal training input is the combination of sonic and gamma ray log data with a hidden layer of 10 neurons. The prediction result in well 1 has a correlation of 0.92 and a mean squared error of 5.67 × 10^-4. The trained network was then applied to the other wells' data. The results show that the correlations in well 2 and well 3 are 0.872 and 0.9077, respectively, with mean squared errors of 11 × 10^-4 and 9.539 × 10^-4. From these results we conclude that sonic and gamma ray logs are a good combination for predicting porosity with a neural network.
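A rough sketch of the setup on synthetic data: a small MLP mapping [sonic, gamma ray] to porosity. The paper trains with Levenberg-Marquardt backpropagation; for brevity this sketch substitutes plain gradient descent, and the log data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))   # [sonic, gamma ray], normalized
y = 0.3 * X[:, 0] - 0.1 * X[:, 1] + 0.2    # synthetic "porosity" target

n_hidden = 10                              # 10 hidden neurons, as in the paper
W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, n_hidden);      b2 = 0.0

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)               # forward pass
    err = h @ W2 + b2 - y
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h**2)    # backprop through tanh
    gW1 = X.T @ gh / len(y); gb1 = gh.mean(axis=0)
    W1 -= 0.5 * gW1; b1 -= 0.5 * gb1
    W2 -= 0.5 * gW2; b2 -= 0.5 * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(mse)  # small after training
```

Levenberg-Marquardt typically converges in far fewer iterations than this plain gradient loop, which is why it is the usual choice for small log-prediction networks.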

  9. Experimental fault characterization of a neural network

    NASA Technical Reports Server (NTRS)

    Tan, Chang-Huong

    1990-01-01

The effects of a variety of faults on a neural network are quantified via simulation. The neural network consists of a single-layered clustering network and a three-layered classification network. The percentage of vectors mistagged by the clustering network, the percentage of vectors misclassified by the classification network, the time taken for the network to stabilize, and the output values are all measured. The results show that both transient and permanent faults have a significant impact on the performance of the measured network. The corresponding mistag and misclassification percentages are typically within 5 to 10 percent of each other. The average mistag percentage and the average misclassification percentage are both about 25 percent. After relearning, the percentage of misclassifications is reduced to 9 percent. In addition, transient faults are found to cause the network to be increasingly unstable as the duration of a transient is increased. The impact of link faults is relatively insignificant in comparison with node faults (1 versus 19 percent misclassified after relearning). There is a linear increase in the mistag and misclassification percentages with decreasing hardware redundancy. In addition, the mistag and misclassification percentages linearly decrease with increasing network size.

  10. Non-overlapping Neural Networks in Hydra vulgaris.

    PubMed

    Dupre, Christophe; Yuste, Rafael

    2017-03-24

    To understand the emergent properties of neural circuits, it would be ideal to record the activity of every neuron in a behaving animal and decode how it relates to behavior. We have achieved this with the cnidarian Hydra vulgaris, using calcium imaging of genetically engineered animals to measure the activity of essentially all of its neurons. Although the nervous system of Hydra is traditionally described as a simple nerve net, we surprisingly find instead a series of functional networks that are anatomically non-overlapping and are associated with specific behaviors. Three major functional networks extend through the entire animal and are activated selectively during longitudinal contractions, elongations in response to light, and radial contractions, whereas an additional network is located near the hypostome and is active during nodding. These results demonstrate the functional sophistication of apparently simple nerve nets, and the potential of Hydra and other basal metazoans as a model system for neural circuit studies.

  11. Stretch and Hammer Neural Networks for N-Dimensional Data Generalization

    DTIC Science & Technology

    1992-01-15

for setting the connection weights (sometimes called training the network). Not all methods are logistically supportable (See Raeth. Logistica ...government and industry contracts that have involved neural network applications and optical implementations. His other research activities emphasize

  12. Payload Invariant Control via Neural Networks: Development and Experimental Evaluation

    DTIC Science & Technology

    1989-12-01

    control is proposed and experimentally evaluated. An Adaptive Model-Based Neural Network Controller (AMBNNC) uses multilayer perceptron artificial neural ... networks to estimate the payload during high speed manipulator motion. The payload estimate adapts the feedforward compensator to unmodeled system

  13. Differential neural network configuration during human path integration.

    PubMed

    Arnold, Aiden E G F; Burles, Ford; Bray, Signe; Levy, Richard M; Iaria, Giuseppe

    2014-01-01

Path integration is a fundamental skill for navigation in both humans and animals. Despite recent advances in unraveling the neural basis of path integration in animal models, relatively little is known about how path integration operates at a neural level in humans. Previous attempts to characterize the neural mechanisms used by humans to visually path integrate have suggested a central role of the hippocampus in allowing accurate performance, broadly resembling results from animal data. However, in recent years both the central role of the hippocampus and the perspective that animals and humans share similar neural mechanisms for path integration have come into question. The present study uses a data-driven analysis to investigate the neural systems engaged during visual path integration in humans, allowing for an unbiased estimate of neural activity across the entire brain. Our results suggest that humans employ common task control, attention and spatial working memory systems across a frontoparietal network during path integration. However, individuals differed in how these systems are configured into functional networks. High-performing individuals were found to more broadly express spatial working memory systems in prefrontal cortex, while low-performing individuals engaged an allocentric memory system based primarily in the medial occipito-temporal region. These findings suggest that visual path integration in humans over short distances can operate through a spatial working memory system engaging primarily the prefrontal cortex, and that the differential configuration of memory systems recruited by task control networks may help explain individual biases in spatial learning strategies.

  14. Artificial Neural Networks for Modeling Knowing and Learning in Science.

    ERIC Educational Resources Information Center

    Roth, Wolff-Michael

    2000-01-01

    Advocates artificial neural networks as models for cognition and development. Provides an example of how such models work in the context of a well-known Piagetian developmental task and school science activity: balance beam problems. (Contains 59 references.) (Author/WRM)

  15. Neural network guided search control in partial order planning

    SciTech Connect

    Zimmerman, T.

    1996-12-31

    The development of efficient search control methods is an active research topic in the field of planning. Investigation of a planning program integrated with a neural network (NN) that assists in search control is underway, and has produced promising preliminary results.

  16. A neural-network model for earthquake occurrence

    NASA Astrophysics Data System (ADS)

    Bodri, Bertalan

    2001-10-01

Changes in seismic activity patterns can occur during the preparation process of large earthquakes, and such changes are possibly the most reliable long-term earthquake precursor examined to date. In the present work, seismicity rate variations in the Carpathian-Pannonian region, Hungary, and the Peloponnesos-Aegean area, Greece, have been used to develop neural network models for predicting the origin times of large (M ⩾ 6.0) earthquakes. Three-layer feed-forward neural network models were constructed to analyse earthquake occurrences. Numerical experiments were performed with the aim of finding the optimum input configuration that provides the best network performance. Sufficient training tolerance for the constructed networks (correspondence between the outputs predicted by the model and the known outputs, within given error thresholds) could be reached only when the input set contained seismicity rate values for different magnitude bands (when such data appeared representative enough) and for more than one time interval between large earthquakes. The specific structure of the network input raises the question of whether this configuration bears some relationship to the physics of the strain accumulation and/or release process. The remarkably satisfactory performance of the constructed neural networks suggests the usefulness of this tool in earthquake prediction problems.

  17. Computational modeling of neural plasticity for self-organization of neural networks.

    PubMed

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building computational models of neural plasticity that replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models of neural plasticity and discuss various ideas about its functional role. Finally, we suggest a few promising research directions, in particular those that combine findings in computational neuroscience and systems biology, and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence.

  18. Computational chaos in massively parallel neural networks

    NASA Technical Reports Server (NTRS)

    Barhen, Jacob; Gulati, Sandeep

    1989-01-01

A fundamental issue which directly impacts the scalability of current theoretical neural network models to massively parallel embodiments, in both software as well as hardware, is the inherent and unavoidable concurrent asynchronicity of emerging fine-grained computational ensembles and the possible emergence of chaotic manifestations. Previous analyses attributed dynamical instability to the topology of the interconnection matrix, to parasitic components or to propagation delays. However, the researchers have observed the existence of emergent computational chaos in a concurrently asynchronous framework, independent of the network topology. The researchers present a methodology enabling the effective asynchronous operation of large-scale neural networks. Necessary and sufficient conditions guaranteeing concurrent asynchronous convergence are established in terms of contracting operators. Lyapunov exponents are computed formally to characterize the underlying nonlinear dynamics. Simulation results are presented to illustrate network convergence to the correct results, even in the presence of large delays.

  19. The labeled systems of multiple neural networks.

    PubMed

    Nemissi, M; Seridi, H; Akdag, H

    2008-08-01

This paper proposes an implementation scheme for the K-class classification problem using systems of multiple neural networks. Usually, a multi-class problem is decomposed into simple sub-problems solved independently using similar single neural networks. Because these sub-problems are not equivalent in complexity, we propose a system that includes reinforced networks designed to solve the complicated parts of the entire problem. Our approach is inspired by principles of multi-classifier systems and labeled classification, which aim to improve the performance of networks trained by the back-propagation algorithm. We propose two implementation schemes based on OAA (one-against-all) and OAO (one-against-one). The proposed models are evaluated using the iris and human thigh databases.
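The one-against-all decomposition can be sketched as follows; `train_binary` is a hypothetical stand-in for training one of the single networks (here a nearest-centroid toy, not the paper's reinforced networks):

```python
import numpy as np

def one_against_all(train_binary, X, labels, classes):
    """One-against-all decomposition: train one binary scorer per class
    (class k vs. the rest) and predict by the highest score."""
    scorers = {k: train_binary(X, (labels == k).astype(float)) for k in classes}
    def predict(x):
        return max(classes, key=lambda k: scorers[k](x))
    return predict

# Toy stand-in "trainer": a nearest-centroid score instead of a trained network.
def train_binary(X, t):
    centroid = X[t == 1].mean(axis=0)
    return lambda x: -np.linalg.norm(x - centroid)

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [0.0, 5.0]])
labels = np.array([0, 0, 1, 1, 2])
predict = one_against_all(train_binary, X, labels, [0, 1, 2])
print(predict(np.array([4.9, 5.2])))
```

One-against-one works the same way except that a scorer is trained for every pair of classes and predictions are combined by voting, giving K(K-1)/2 networks instead of K.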

  20. Development of programmable artificial neural networks

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J.

    1993-01-01

Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks (ANNs) are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time-consuming and unpredictable process. A general method is being developed to mate the adaptability of the ANN with the speed and precision of the digital computer. This method was successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.

  1. A neural network based speech recognition system

    NASA Astrophysics Data System (ADS)

    Carroll, Edward J.; Coleman, Norman P., Jr.; Reddy, G. N.

    1990-02-01

An overview is presented of the development of a neural network based speech recognition system. The two primary tasks involved were the development of a time-invariant speech encoder and a pattern recognizer or detector. The speech encoder uses amplitude normalization and a Fast Fourier Transform to eliminate amplitude and frequency shifts of acoustic cues. The detector consists of a back-propagation network which accepts data from the encoder and identifies individual words. This use of neural networks offers two advantages over conventional algorithmic detectors: the detection time is no more than a few network time constants, and the recognition speed is independent of the number of words in the vocabulary. The completed system has functioned as expected, with high tolerance to input variation and with error rates comparable to a commercial system when used in a noisy environment.
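A minimal sketch of the encoding idea: amplitude normalization removes loudness differences, and the FFT magnitude spectrum removes time-shift (phase) differences, so shifted and rescaled versions of a waveform encode identically. Parameters are illustrative only.

```python
import numpy as np

def encode(frame, n_coeffs=16):
    """Time-invariant encoding: amplitude normalization followed by an FFT
    magnitude spectrum, which discards scaling and time-shift cues."""
    frame = frame / (np.max(np.abs(frame)) + 1e-12)  # amplitude normalization
    return np.abs(np.fft.rfft(frame))[:n_coeffs]     # magnitudes ignore phase

t = np.linspace(0.0, 1.0, 256, endpoint=False)
a = np.sin(2 * np.pi * 10 * t)
b = 0.2 * np.roll(a, 17)                 # quieter, time-shifted copy of a
print(np.allclose(encode(a), encode(b)))  # → True
```

The detector network then only has to separate word classes in this normalized spectral space rather than cope with raw amplitude and timing variation.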

  2. A neural network with modular hierarchical learning

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F. (Inventor); Toomarian, Nikzad (Inventor)

    1994-01-01

This invention provides a new hierarchical approach for supervised neural learning of time-dependent trajectories. The modular hierarchical methodology leads to architectures which are more structured than fully interconnected networks. The networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamic effects. The advantages include the sparsity of units and connections and the modular organization. A further advantage is that learning is much more circumscribed than in fully interconnected systems. The present invention is embodied by a neural network including a plurality of neural modules, each having a pre-established performance capability, wherein each neural module has an output outputting present results of the performance capability and an input for changing the present results of the performance capability. For pattern recognition applications, the performance capability may be an oscillation capability producing a repeating wave pattern as the present results. In the preferred embodiment, each of the plurality of neural modules includes a pre-established capability portion and a performance adjustment portion connected to control the pre-established capability portion.

  3. Neural Network Noise Anomaly Recognition System and Method

    DTIC Science & Technology

    2000-10-04

    determine when an input waveform deviates from learned noise characteristics. A plurality of neural networks is preferably provided, which each receives a...plurality of samples of intervals or windows of the input waveform. Each of the neural networks produces an output based on whether an anomaly is...detected with respect to the noise, which the neural network is trained to detect. The plurality of outputs of the neural networks is preferably applied to

  4. Analysis of Wideband Beamformers Designed with Artificial Neural Networks

    DTIC Science & Technology

    1990-12-01

TECHNICAL REPORT 0-90-1, "Analysis of Wideband Beamformers Designed with Artificial Neural Networks," by Cary Cox, Instrumentation Services Division...included. A brief tutorial on beamformers and neural networks is also provided. Subject terms: artificial neural networks, feedforward, beamformers..."Beamformers Designed with Artificial Neural Networks". The study was conducted under the general supervision of Messrs. George P. Bonner, Chief

  5. Knowledge learning on fuzzy expert neural networks

    NASA Astrophysics Data System (ADS)

    Fu, Hsin-Chia; Shann, J.-J.; Pao, Hsiao-Tien

    1994-03-01

The proposed fuzzy expert network is an event-driven, acyclic neural network designed for knowledge learning in a fuzzy expert system. Initially, the network is constructed according to a set of primitive (rough) expert rules, including the input and output linguistic variables and values of the system. Each inference rule corresponds to an inference network, which contains five types of nodes: Input, Membership-Function, AND, OR, and Defuzzification Nodes. We propose a two-phase learning procedure for the inference network. The first phase is the competitive backpropagation (CBP) training phase, and the second phase is the rule-pruning phase. The CBP learning algorithm in the training phase enables the network to learn the fuzzy rules as precisely as backpropagation-type learning algorithms and yet as quickly as competitive-type learning algorithms. After the CBP training, the rule-pruning process is performed to delete redundant weight connections, yielding simpler network structures with comparable retrieval performance.

  6. Simplified Learning Scheme For Analog Neural Network

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P.

    1991-01-01

Synaptic connections adjusted one at a time in small increments. Simplified gradient-descent learning scheme for electronic neural-network processor less efficient than better-known back-propagation scheme, but offers two advantages: easily implemented in circuitry because data-access circuitry separated from learning circuitry; and independence of data-access circuitry makes it possible to implement feedforward as well as feedback networks, including those of multiple-attractor type. Important in such applications as recognition of patterns.

  7. Using neural networks to model chaos

    SciTech Connect

    Upadhyay, M.D.

    1996-12-31

    Two types of neural networks -- backpropagation and radial basis function -- are presented for modeling dynamical systems. They were trained to model the Henon, Ikeda and Tinkerbell dynamical systems by providing a set of points randomly chosen from orbits under the functions. After training, the networks were used to simulate the functions to determine the extent to which they could generate the chaotic attractors associated with these systems.
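A sketch of the experimental setup for one of these systems: generate an orbit of the Henon map and fit a next-state model from sampled points. Here consecutive orbit points and a quadratic least-squares model stand in for the paper's trained networks.

```python
import numpy as np

def henon(x, y, a=1.4, b=0.3):
    """One step of the Henon map: x' = 1 - a*x^2 + y,  y' = b*x."""
    return 1.0 - a * x * x + y, b * x

# Generate an orbit, discarding the initial transient.
pts, (x, y) = [], (0.1, 0.1)
for _ in range(1100):
    x, y = henon(x, y)
    pts.append((x, y))
pts = np.array(pts[100:])

# Fit next-state targets with quadratic features (the map lies in this span).
X = pts[:-1]
features = np.column_stack([np.ones(len(X)), X, X**2, X[:, :1] * X[:, 1:]])
targets = pts[1:]
coef, *_ = np.linalg.lstsq(features, targets, rcond=None)
residual = float(np.max(np.abs(features @ coef - targets)))
print(residual)  # near zero: the map is recovered from orbit samples
```

A backpropagation or RBF network trained on the same (state, next-state) pairs learns an approximation of this map, and iterating the trained model reproduces the chaotic attractor.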

  8. Auto-associative nanoelectronic neural network

    SciTech Connect

    Nogueira, C. P. S. M.; Guimarães, J. G.

    2014-05-15

    In this paper, an auto-associative neural network using single-electron tunneling (SET) devices is proposed and simulated at low temperature. The nanoelectronic auto-associative network is able to converge to a stable state, previously stored during training. The recognition of the pattern involves decreasing the energy of the input state until it achieves a point of local minimum energy, which corresponds to one of the stored patterns.
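The recall behavior described, converging from a corrupted input to a stored pattern by lowering the network energy, can be sketched with the abstract Hopfield-style model (the paper's contribution is the SET-device circuit implementation, which is not modeled here):

```python
import numpy as np

# Hebbian storage of two bipolar patterns, no self-coupling.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = (patterns.T @ patterns).astype(float)
np.fill_diagonal(W, 0.0)

# Start from a corrupted version of pattern 0 (last bit flipped) and run
# asynchronous threshold updates, which never increase the energy.
state = np.array([1, -1, 1, -1, 1, 1])
for _ in range(3):  # a few sweeps suffice for this small network
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(state)  # → settles into the stored pattern [1, -1, 1, -1, 1, -1]
```

Each stored pattern sits at a local minimum of the energy, so recall amounts to descending from the input state to the nearest minimum, as described in the abstract.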

  9. Are artificial neural networks black boxes?

    PubMed

    Benitez, J M; Castro, J L; Requena, I

    1997-01-01

    Artificial neural networks are efficient computing models which have shown their strengths in solving hard problems in artificial intelligence. They have also been shown to be universal approximators. Notwithstanding, one of the major criticisms is their being black boxes, since no satisfactory explanation of their behavior has been offered. In this paper, we provide such an interpretation of neural networks so that they will no longer be seen as black boxes. This is stated after establishing the equality between a certain class of neural nets and fuzzy rule-based systems. This interpretation is built with fuzzy rules using a new fuzzy logic operator which is defined after introducing the concept of f-duality. In addition, this interpretation offers an automated knowledge acquisition procedure.

  10. Neural Network Classification of Environmental Samples

    DTIC Science & Technology

    1996-12-01

Biological and Artificial Neural Networks. Air Force Institute of Technology, 1990. 24. Rosenblatt. Principles of Neurodynamics. New York, NY: Spartan...Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press, 1986. 29. Smagt, Patrick P. Van Der. "Minimisation Methods

  11. Psychometric Measurement Models and Artificial Neural Networks

    ERIC Educational Resources Information Center

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. Despite this interest, the…

  12. Neural network application to comprehensive engine diagnostics

    NASA Technical Reports Server (NTRS)

    Marko, Kenneth A.

    1994-01-01

    We have previously reported on the use of neural networks for detection and identification of faults in complex microprocessor controlled powertrain systems. The data analyzed in those studies consisted of the full spectrum of signals passing between the engine and the real-time microprocessor controller. The specific task of the classification system was to classify system operation as nominal or abnormal and to identify the fault present. The primary concern in earlier work was the identification of faults, in sensors or actuators in the powertrain system as it was exercised over its full operating range. The use of data from a variety of sources, each contributing some potentially useful information to the classification task, is commonly referred to as sensor fusion and typifies the type of problems successfully addressed using neural networks. In this work we explore the application of neural networks to a different diagnostic problem, the diagnosis of faults in newly manufactured engines and the utility of neural networks for process control.

  13. Multidimensional neural growing networks and computer intelligence

    SciTech Connect

    Yashchenko, V.A.

    1995-03-01

    This paper examines information-computation processes in time and in space and some aspects of computer intelligence using multidimensional matrix neural growing networks. In particular, issues of object-oriented "thinking" of computers are considered.

  14. Nonlinear Time Series Analysis via Neural Networks

    NASA Astrophysics Data System (ADS)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with a time series analysis based on neural networks in order to make an effective forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)] pattern recognition. Our goal is to find and recognize important patterns which repeatedly appear in the market history to adapt our trading system behaviour based on them.

  15. Automatic target identification using neural networks

    NASA Astrophysics Data System (ADS)

    Abdallah, Mahmoud A.; Samu, Tayib I.; Grissom, William A.

    1995-10-01

    Neural network theories are applied to attain human-like performance in areas such as speech recognition, statistical mapping, and target recognition or identification. In target identification, one of the difficult tasks has been the extraction of features to be used to train the neural network which is subsequently used for the target's identification. The purpose of this paper is to describe the development of an automatic target identification system using features extracted from a specific class of targets. The extracted features were the graphical representations of the silhouettes of the targets. Image processing techniques and some Fast Fourier Transform (FFT) properties were implemented to extract the features. The FFT eliminates variations in the extracted features due to rotation or scaling. A Neural Network was trained with the extracted features using the Learning Vector Quantization paradigm. An identification system was set up to test the algorithm. The image processing software was interfaced with MATLAB Neural Network Toolbox via a computer program written in C language to automate the target identification process. The system performed well, as it classified the objects used to train it irrespective of rotation, scaling, and translation. This automatic target identification system had a classification success rate of about 95%.
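    The rotation claim above rests on a standard DFT property: if a silhouette is traced as a one-dimensional boundary signature, starting the trace at a different point (a rotation of the contour) is a circular shift of the sequence, which changes only the phase of the FFT and not its magnitude. A minimal check of that property on a toy signature (not the paper's extraction pipeline):

```python
# Circular-shift (rotation) invariance of the FFT magnitude spectrum.
import numpy as np

signature = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.3  # toy boundary signature
shifted = np.roll(signature, 17)                          # trace started elsewhere

mag = np.abs(np.fft.fft(signature))
mag_shifted = np.abs(np.fft.fft(shifted))
print(np.allclose(mag, mag_shifted))                      # True
```

    Scale invariance can then be obtained by normalizing the magnitudes, e.g. dividing by the first nonzero harmonic.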

  16. Optoelectronic Integrated Circuits For Neural Networks

    NASA Technical Reports Server (NTRS)

    Psaltis, D.; Katz, J.; Kim, Jae-Hoon; Lin, S. H.; Nouhi, A.

    1990-01-01

    Many threshold devices placed on single substrate. Integrated circuits containing optoelectronic threshold elements developed for use as planar arrays of artificial neurons in research on neural-network computers. Mounted with volume holograms recorded in photorefractive crystals serving as dense arrays of variable interconnections between neurons.

  17. Chaotic time series prediction using artificial neural networks

    SciTech Connect

    Bartlett, E.B.

    1991-12-31

    This paper describes the use of artificial neural networks to model the complex oscillations defined by a chaotic Verhulst animal population dynamic. A predictive artificial neural network model is developed and tested, and results of computer simulations are given. These results show that the artificial neural network model predicts the chaotic time series with various initial conditions, growth parameters, or noise.
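    A minimal sketch of this kind of setup, assuming the Verhulst (logistic) map x(t+1) = r·x(t)·(1 − x(t)) in its chaotic regime and a one-hidden-layer network trained by plain gradient descent; the architecture and parameters here are illustrative, not the paper's:

```python
# Hypothetical sketch: one-step prediction of the chaotic Verhulst map
# with a small tanh network trained by full-batch gradient descent.
import numpy as np

rng = np.random.default_rng(0)
r = 3.9                          # growth parameter in the chaotic regime
x = np.empty(500)
x[0] = 0.5
for t in range(499):
    x[t + 1] = r * x[t] * (1 - x[t])

X, y = x[:-1, None], x[1:, None]             # predict x(t+1) from x(t)

H = 10                                       # hidden units
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.05
for epoch in range(3000):
    h = np.tanh(X @ W1 + b1)                 # hidden layer
    err = h @ W2 + b2 - y                    # prediction error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)           # backpropagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"one-step MSE: {mse:.4f}")
```

    Because the map is deterministic, the one-step function is smooth and easy to fit; the chaos shows up in how quickly small prediction errors compound over multiple steps.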

  18. Chaotic time series prediction using artificial neural networks

    SciTech Connect

    Bartlett, E.B.

    1991-01-01

    This paper describes the use of artificial neural networks to model the complex oscillations defined by a chaotic Verhulst animal population dynamic. A predictive artificial neural network model is developed and tested, and results of computer simulations are given. These results show that the artificial neural network model predicts the chaotic time series with various initial conditions, growth parameters, or noise.

  19. Neural Network Design on the SRC-6 Reconfigurable Computer

    DTIC Science & Technology

    2006-12-01

    speeds of FPGA systems. This thesis explores the use of a Feed-forward, Multi-Layer Perceptron (MLP) Artificial Neural Network (ANN) architecture... Implementation of a Fast Artificial Neural Network Library (FANN), Graduate Project Report, Department of Computer Science, University of Copenhagen (DIKU...NAVAL POSTGRADUATE SCHOOL MONTEREY, CALIFORNIA THESIS Approved for public release; distribution is unlimited NEURAL NETWORK

  20. Hyperspectral Imagery Classification Using a Backpropagation Neural Network

    DTIC Science & Technology

    1993-12-01

    A backpropagation neural network was developed and implemented for classifying AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) hyperspectral...imagery. It is a fully interconnected three-layer neural network. Fifty input layer neurons take in signals from Bands 41 to 90 of the...moderate AVIRIS pixel resolution of 20 meters by 20 meters. Backpropagation neural network, hyperspectral imagery

  1. Electrically Modifiable Nonvolatile SONOS Synapses for Electronic Neural Networks.

    DTIC Science & Technology

    1992-09-30

    for the electrically reprogrammable analog conductance in an artificial neural network. We have demonstrated the attractive features of this synaptic ...Electrically Modifiable Synaptic Element for VLSI Neural Network Implementation", Proceedings of the 1991 IEEE Nonvolatile Semiconductor Memory Workshop...Nonvolatile Electrically Modifiable Synaptic Element for VLSI Neural Network Implementation", 11th IEEE Nonvolatile Semiconductor Memory Workshop, 1991. 19. A

  2. [Application of artificial neural networks in infectious diseases].

    PubMed

    Xu, Jun-fang; Zhou, Xiao-nong

    2011-02-28

    With the development of information technology, artificial neural networks have been applied to many research fields. Due to special features such as nonlinearity, self-adaptation, and parallel processing, artificial neural networks are applied in medicine and biology. This review summarizes the application of artificial neural networks to the related factors, prediction, and diagnosis of infectious diseases in recent years.

  3. Brain tumor grading based on Neural Networks and Convolutional Neural Networks.

    PubMed

    Yuehao Pan; Weimin Huang; Zhiping Lin; Wanzheng Zhu; Jiayin Zhou; Wong, Jocelyn; Zhongxiang Ding

    2015-08-01

    This paper studies brain tumor grading using multiphase MRI images and compares the results across various configurations of a deep learning structure and baseline Neural Networks. The MRI images are fed directly into the learning machine, with some combination operations between multiphase MRIs. Compared to other research, which involves additional effort to design and choose feature sets, the approach used in this paper leverages the learning capability of the deep learning machine. We present the grading performance on the testing data measured by sensitivity and specificity. The results show a maximum improvement of 18% in grading performance of Convolutional Neural Networks over Neural Networks, based on sensitivity and specificity. We also visualize the kernels trained in different layers and display some self-learned features obtained from the Convolutional Neural Networks.

  4. Iterative free-energy optimization for recurrent neural networks (INFERNO).

    PubMed

    Pitti, Alexandre; Gaussier, Philippe; Quoy, Mathias

    2017-01-01

    The intra-parietal lobe coupled with the Basal Ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to see the problem of spikes' synchrony as an optimization problem of the neurons' sub-threshold activity for the generation of long neuronal chains. Using a stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector to move the recurrent neural network to a desired activity; depending on the error made, this input vector is strengthened to hill-climb the gradient or elicited to search for another solution. This vector can then be learned by an associative memory, as a model of the basal ganglia, to control the recurrent neural network. Experiments on habit learning and on sequence retrieving demonstrate the capability of the dual system to generate very long and precise spatio-temporal sequences, above two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss its relevance for modeling the cortico-basal working memory to initiate flexible goal-directed neuronal chains of causation and its relation to novel architectures such as Deep Networks, Neural Turing Machines and the Free-Energy Principle.
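    The search loop described above can be caricatured as reward-guided stochastic hill-climbing: a scalar reinforcement signal scores a candidate input vector, improvements are kept, and failures trigger a new random perturbation. The sketch below replaces the recurrent spiking network with a simple quadratic reward and shows only that keep-or-resample logic, not the authors' model:

```python
# Toy reward-guided hill-climbing over an input vector (schematic only).
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=8)                 # stands in for the desired activity

def reward(v):
    return -np.sum((v - target) ** 2)       # higher is better

v = np.zeros(8)
best = reward(v)
for _ in range(3000):
    candidate = v + rng.normal(0, 0.1, 8)   # stochastic search step
    r = reward(candidate)
    if r > best:                            # "strengthen": keep the improvement
        v, best = candidate, r              # otherwise: search again

print(f"final error: {-best:.4f}")
```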

  5. Iterative free-energy optimization for recurrent neural networks (INFERNO)

    PubMed Central

    2017-01-01

    The intra-parietal lobe coupled with the Basal Ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to see the problem of spikes' synchrony as an optimization problem of the neurons' sub-threshold activity for the generation of long neuronal chains. Using a stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector to move the recurrent neural network to a desired activity; depending on the error made, this input vector is strengthened to hill-climb the gradient or elicited to search for another solution. This vector can then be learned by an associative memory, as a model of the basal ganglia, to control the recurrent neural network. Experiments on habit learning and on sequence retrieving demonstrate the capability of the dual system to generate very long and precise spatio-temporal sequences, above two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss its relevance for modeling the cortico-basal working memory to initiate flexible goal-directed neuronal chains of causation and its relation to novel architectures such as Deep Networks, Neural Turing Machines and the Free-Energy Principle. PMID:28282439

  6. Hybrid neural networks--combining abstract and realistic neural units.

    PubMed

    Lytton, William W; Hines, Michael

    2004-01-01

    There is a trade-off in neural network simulation between simulations that embody the details of neuronal biology and those that omit these details in favor of abstractions. The former approach appeals to physiologists and pharmacologists who can directly relate their experimental manipulations to parameter changes in the model. The latter approach appeals to physicists and mathematicians who seek analytic understanding of the behavior of large numbers of coupled simple units. This simplified approach is also valuable for practical reasons: a highly simplified unit will run several orders of magnitude faster than a complex, biologically realistic unit. In order to have our cake and eat it too, we have developed hybrid networks in the Neuron simulator package. These make use of Neuron's local variable-timestep method to permit simplified integrate-and-fire units to move ahead quickly while realistic neurons in the same network are integrated slowly.
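    For readers unfamiliar with the simplified units, a leaky integrate-and-fire neuron fits in a few lines; the parameter values below are conventional placeholders, and this Euler loop is only a sketch of the abstraction, not the Neuron package's implementation:

```python
# Leaky integrate-and-fire neuron: integrate toward rest plus input,
# emit a spike and reset when the threshold is crossed.
import numpy as np

def simulate_lif(i_input, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Euler integration of dv/dt = (v_rest - v + i) / tau (mV, ms)."""
    v = v_rest
    spikes, trace = [], []
    for step, i in enumerate(i_input):
        v += dt * (v_rest - v + i) / tau
        if v >= v_thresh:              # threshold crossing -> spike, reset
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

trace, spikes = simulate_lif(np.full(2000, 20.0))   # 200 ms of constant drive
print(f"{len(spikes)} spikes in 200 ms")
```

    The appeal for hybrid networks is that each such unit advances with one state variable and one test per step, while a conductance-based neuron integrates many coupled differential equations.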

  7. Design of coupling resistor networks for neural network hardware

    NASA Astrophysics Data System (ADS)

    Barkan, Ozdal; Smith, W. R.; Persky, George

    1990-06-01

    The specification of an artificial neural network includes (1) the transformation relating each neuron's output voltage to its input voltage, and (2) a set of coupling weight factors expressing the input voltage of any neuron as a linear combination of the output voltages of other neurons. In analog VLSI chips for direct hardware implementation of these networks, neurons are often represented by amplifier elements (e.g. operational amplifiers or opamps), and resistors or active transconductances are used to couple signals from the outputs of certain neurons to the inputs of other neurons. Each coupling conductance is proportional to a single, corresponding coupling weight only under the following 'ideal' conditions: (1) each opamp has negligible output impedance, and (2) the input voltage of each opamp is developed across a low-resistance sampling resistor that is not loaded by the opamp itself. By contrast, the output impedance of a practical opamp may not be negligible in comparison to that of the high-fan network that it drives, and the sampling resistances on the opamp inputs cannot be arbitrarily low lest the input voltages be corrupted by unavoidable opamp input voltage offsets.

  8. Neural network based feature extraction scheme for heart rate variability

    NASA Astrophysics Data System (ADS)

    Raymond, Ben; Nandagopal, Doraisamy; Mazumdar, Jagan; Taverner, D.

    1995-04-01

    Neural networks are extensively used in solving a wide range of pattern recognition problems in signal processing. The accuracy of pattern recognition depends to a large extent on the quality of the features extracted from the signal. We present a neural network capable of extracting the autoregressive parameters of a cardiac signal known as heart rate variability (HRV). Frequency specific oscillations in the HRV signal represent heart rate regulatory activity and hence cardiovascular function. Continual monitoring and tracking of the HRV data over a period of time will provide valuable diagnostic information. We give an example of the network applied to a short HRV signal and demonstrate the tracking performance of the network with a single sinusoid embedded in white noise.
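    For intuition about what such a network extracts: autoregressive parameters of a signal can also be recovered classically by least squares. The sketch below does this for a synthetic AR(2) "HRV-like" oscillation; it is a baseline illustration, not the paper's neural method:

```python
# Recover AR(2) coefficients of a synthetic oscillatory signal by
# ordinary least squares: s[t] ~ a1*s[t-1] + a2*s[t-2].
import numpy as np

rng = np.random.default_rng(0)
n = 2000
s = np.zeros(n)
for t in range(2, n):                      # stable, oscillatory AR(2)
    s[t] = 1.5 * s[t - 1] - 0.8 * s[t - 2] + rng.normal(0, 0.1)

X = np.column_stack([s[1:-1], s[:-2]])     # lagged regressors
y = s[2:]
a_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(a_hat)                               # close to [1.5, -0.8]
```

    The AR coefficients determine the frequency and damping of the oscillation, which is why tracking them over time summarizes heart rate regulatory activity compactly.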

  9. Perspective: network-guided pattern formation of neural dynamics.

    PubMed

    Hütt, Marc-Thorsten; Kaiser, Marcus; Hilgetag, Claus C

    2014-10-05

    The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings and lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatio-temporal pattern formation and propose a novel perspective for analysing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics.

  10. Optical implementation of neural networks

    NASA Astrophysics Data System (ADS)

    Yu, Francis T. S.; Guo, Ruyan

    2002-12-01

    An adaptive optical neuro-computer (ONC) using inexpensive pocket-size liquid crystal televisions (LCTVs) was developed by graduate students in the Electro-Optics Laboratory at The Pennsylvania State University. Although it has only 8×8 = 64 neurons, it can easily be extended to 16×20 = 320 neurons. The major advantages of this LCTV architecture, as compared with other reported ONCs, are low cost and operational flexibility. To test the performance, several neural net models are used: Interpattern Association, Hetero-association, and unsupervised learning algorithms. The system design considerations and experimental demonstrations are also included.

  11. Neural networks in windprofiler data processing

    NASA Astrophysics Data System (ADS)

    Weber, H.; Richner, H.; Kretzschmar, R.; Ruffieux, D.

    2003-04-01

    Wind profilers are basically Doppler radars yielding 3-dimensional wind profiles that are deduced from the Doppler shift caused by turbulent elements in the atmosphere. These signals can be contaminated by other airborne elements such as birds or hydrometeors. Using a feed-forward neural network with one hidden layer and one output unit, birds and hydrometeors can be successfully identified in non-averaged single spectra; these are subsequently removed in the wind computation. An infrared camera was used to identify birds in one of the beams of the wind profiler. After training the network with about 6000 contaminated data sets, it was able to identify contaminated data in a test data set with a reliability of 96 percent. The assumption was made that the neural network parameters obtained in the beam for which bird data was collected can be transferred to the other beams (at least three beams are needed for computing wind vectors). Comparing the evolution of a wind field with and without the neural network shows a significant improvement in wind data quality. Current work concentrates on training the network for hydrometeors as well. It is hoped that the instrument's capability can thus be expanded to measure not only correct winds, but also to observe bird migration, estimate precipitation and -- by combining precipitation information with vertical velocity measurements -- monitor the height of the melting layer.

  12. Classifying multispectral data by neural networks

    NASA Technical Reports Server (NTRS)

    Telfer, Brian A.; Szu, Harold H.; Kiang, Richard K.

    1993-01-01

    Several energy functions for synthesizing neural networks are tested on 2-D synthetic data and on Landsat-4 Thematic Mapper data. These new energy functions, designed specifically for minimizing misclassification error, in some cases yield significant improvements in classification accuracy over the standard least mean squares energy function. In addition to operating on networks with one output unit per class, a new energy function is tested for binary encoded outputs, which result in smaller network sizes. The Thematic Mapper data (four bands were used) is classified on a single pixel basis, to provide a starting benchmark against which further improvements will be measured. Improvements are underway to make use of both subpixel and superpixel (i.e. contextual or neighborhood) information in the processing. For single pixel classification, the best neural network result is 78.7 percent, compared with 71.7 percent for a classical nearest neighbor classifier. The 78.7 percent result also improves on several earlier neural network results on this data.

  13. Back propagation neural networks for facial verification

    SciTech Connect

    Garnett, A.E.; Solheim, I.; Payne, T.; Castain, R.H.

    1992-10-01

    We conducted a test to determine the aptitude of neural networks to recognize human faces. The pictures we collected of 511 subjects captured both profiles and many natural expressions. Some of the subjects were wearing glasses, sunglasses, or hats in some of the pictures. The images were compressed by a factor of 100 and converted into image vectors of 1400 pixels. The image vectors were fed into a back propagation neural network with one hidden layer and one output node. The networks were trained to recognize one target person and to reject all other persons. Neural networks for 37 target subjects were trained with 8 different training sets that consisted of different subsets of the data. The networks were then tested on the rest of the data, which consisted of 7000 or more unseen pictures. Results indicate that a false acceptance rate of less than 1 percent can be obtained, and a false rejection rate of 2 percent can be obtained when certain restrictions are followed.

  14. Neural networks underlying the metacognitive uncertainty response.

    PubMed

    Paul, Erick J; Smith, J David; Valentin, Vivian V; Turner, Benjamin O; Barbey, Aron K; Ashby, F Gregory

    2015-10-01

    Humans monitor states of uncertainty that can guide decision-making. These uncertain states are evident behaviorally when humans decline to make a categorization response. Such behavioral uncertainty responses (URs) have also defined the search for metacognition in animals. While a plethora of neuroimaging studies have focused on uncertainty, the brain systems supporting a volitional strategy shift under uncertainty have not been distinguished from those observed in making introspective post-hoc reports of categorization uncertainty. Using rapid event-related fMRI, we demonstrate that the neural activity patterns elicited by humans' URs are qualitatively different from those recruited by associative processes during categorization. Participants performed a one-dimensional perceptual-categorization task in which an uncertainty-response option let them decline to make a categorization response. Uncertainty responding activated a distributed network including prefrontal cortex (PFC), anterior and posterior cingulate cortex (ACC, PCC), anterior insula, and posterior parietal areas; importantly, these regions were distinct from those whose activity was modulated by task difficulty. Generally, our results can be characterized as a large-scale cognitive control network including recently evolved brain regions such as the anterior dorsolateral and medial PFC. A metacognitive theory would view the UR as a deliberate behavioral adjustment rather than just a learned middle category response, and predicts this pattern of results. These neuroimaging results bolster previous behavioral findings, which suggested that different cognitive processes underlie responses due to associative learning versus the declaration of uncertainty. We conclude that the UR represents an elemental behavioral index of metacognition.

  15. a Heterosynaptic Learning Rule for Neural Networks

    NASA Astrophysics Data System (ADS)

    Emmert-Streib, Frank

    In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points when a synapse is modified. Moreover, the learning rule does not only affect the synapse between pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also affects more remote synapses of the pre- and postsynaptic neurons. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful in training neural networks by learning parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well, even in the presence of noise. Importantly, the mean learning time increases only polynomially with the number of patterns to be learned, indicating efficient learning.
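    The homo- versus heterosynaptic distinction can be made concrete with toy update rules. The "spread" term below is an invented illustration of the idea that synapses not involved in the current pre/post pairing are also modified; it is not the article's stochastic rule:

```python
# Contrast a homosynaptic (classic Hebb) update with a toy heterosynaptic
# update that also depresses the inactive synapses of active post neurons.
import numpy as np

def homosynaptic_update(W, pre, post, lr=0.1):
    """Change only the synapses joining active pre to active post."""
    return W + lr * np.outer(post, pre)

def heterosynaptic_update(W, pre, post, lr=0.1, decay=0.02):
    """Hebbian term plus a small depression of the *other* synapses
    converging on the same active postsynaptic neurons (illustrative)."""
    hebb = lr * np.outer(post, pre)
    spread = -decay * np.outer(post, 1.0 - pre)   # remote synapses
    return W + hebb + spread

W0 = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])    # presynaptic activity
post = np.array([1.0, 0.0])        # postsynaptic activity

W_homo = homosynaptic_update(W0, pre, post)
W_hetero = heterosynaptic_update(W0, pre, post)
print(W_homo)
print(W_hetero)
```

    In the homosynaptic case the weight from inactive input 1 stays at zero; the heterosynaptic rule depresses it even though that synapse carried no activity.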

  16. Fuzzy logic and neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Lea, Robert N.; Savely, Robert T.

    1992-01-01

    Applications of fuzzy logic technologies in NASA projects are reviewed to examine their advantages in the development of neural networks for aerospace and commercial expert systems and control. Examples of fuzzy-logic applications include a 6-DOF spacecraft controller, collision-avoidance systems, and reinforcement-learning techniques. The commercial applications examined include a fuzzy autofocusing system, an air conditioning system, and an automobile transmission application. The practical use of fuzzy logic is set in the theoretical context of artificial neural systems (ANSs) to give the background for an overview of ANS research programs at NASA. The research and application programs include the Network Execution and Training Simulator and faster training algorithms such as the Difference Optimized Training Scheme. The networks are well suited for pattern-recognition applications such as predicting sunspots, controlling posture maintenance, and conducting adaptive diagnoses.

  17. A Topological Perspective of Neural Network Structure

    NASA Astrophysics Data System (ADS)

    Sizemore, Ann; Giusti, Chad; Cieslak, Matthew; Grafton, Scott; Bassett, Danielle

    The wiring patterns of white matter tracts between brain regions inform functional capabilities of the neural network. Indeed, densely connected and cyclically arranged cognitive systems may communicate and thus perform distinctly. However, previously employed graph theoretical statistics are local in nature and thus insensitive to such global structure. Here we present an investigation of the structural neural network in eight healthy individuals using persistent homology. An extension of homology to weighted networks, persistent homology records both circuits and cliques (all-to-all connected subgraphs) through a repetitive thresholding process, thus perceiving structural motifs. We report structural features found across patients and discuss brain regions responsible for these patterns, finally considering the implications of such motifs in relation to cognitive function.

  18. Neural networks: Application to medical imaging

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.

    1994-01-01

    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  19. Computationally Efficient Neural Network Intrusion Security Awareness

    SciTech Connect

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly-based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly-based systems. Several test cases executed on the ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced from 37 seconds to less than 1 second.

  20. The relevance of network micro-structure for neural dynamics

    PubMed Central

    Pernice, Volker; Deger, Moritz; Cardanobile, Stefano; Rotter, Stefan

    2013-01-01

    The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm to construct networks based on a quasi-fractal probability measure which are much more variable than commonly used network models, and which therefore promise to sample the space of recurrent networks in a more exhaustive fashion than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics like spike-train irregularity or correlations and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits. PMID:23761758

  1. Do neural networks offer something for you?

    SciTech Connect

    Ramchandran, S.; Rhinehart, R.R.

    1995-11-01

    The concept of neural network computation was inspired by the hope of artificially reproducing some of the flexibility and power of the human brain. Human beings can recognize different patterns and voices even though these signals do not have a simple phenomenological understanding. Scientists have developed artificial neural networks (ANNs) for modeling processes that do not have a simple phenomenological explanation, such as voice recognition. Consequently, ANN jargon can be confusing to process and control engineers. In simple terms, ANNs take a nonlinear regression modeling approach. Like any regression curve-fitting approach, a least-squares optimization can generate model parameters. One advantage of ANNs is that they require neither a priori understanding of the process behavior nor phenomenological understanding of the process. ANNs use data describing the input/output relationship in a process to "learn" about the underlying process behavior. As a result, ANNs have a wide range of applicability. Furthermore, ANNs are computationally efficient and can replace models that are computationally intensive. This can make real-time online model-based applications practicable. A neural network is a dense mesh of nodes and connections. The basic processing elements of a network are called neurons. Neural networks are organized in layers, and typically consist of at least three layers: an input layer, one or more hidden layers, and an output layer. The input and output layers serve as interfaces that perform appropriate scaling between "real-world" and network data. Hidden layers are so termed because their neurons are hidden from the real-world data. Connections are the means for information flow. Each connection has an associated adjustable weight, w_i. The weight can be regarded as a measure of the importance of the signals between the two neurons. 7 figs.
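    The three-layer structure just described (a scaling input layer, hidden neurons, weighted connections w_i) amounts to a short forward pass. A minimal sketch with arbitrary random weights standing in for a trained process model:

```python
# Forward pass of a three-layer network: input scaling, tanh hidden
# neurons, linear output. Weights here are random placeholders.
import numpy as np

rng = np.random.default_rng(1)

def forward(x, W_hidden, W_out, x_min, x_max):
    x_scaled = (x - x_min) / (x_max - x_min)   # input-layer scaling
    hidden = np.tanh(W_hidden @ x_scaled)      # hidden neurons
    return W_out @ hidden                      # output layer

x = np.array([350.0, 2.1, 7.8])                # e.g. raw process measurements
W_hidden = rng.normal(size=(5, 3))             # 3 inputs -> 5 hidden neurons
W_out = rng.normal(size=(1, 5))                # 5 hidden -> 1 output
y = forward(x, W_hidden, W_out,
            x_min=np.array([300.0, 0.0, 7.0]),
            x_max=np.array([400.0, 5.0, 9.0]))
print(y)
```

    Training then amounts to the least-squares optimization mentioned above: adjusting W_hidden and W_out to minimize the squared error between y and measured outputs.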

  2. Neural networks in the process industries

    SciTech Connect

    Ben, L.R.; Heavner, L.

    1996-12-01

    Neural networks, or more precisely, artificial neural networks (ANNs), are rapidly gaining in popularity. They first began to appear on the process-control scene in the early 1990s, but have been a research focus for more than 30 years. Neural networks are empirical models that approximate the way neurons in the human brain are thought to work. Neural-net technology is not trying to produce computerized clones, but to model nature in an effort to mimic some of the brain's capabilities. Modeling, for the purposes of this article, means developing a mathematical description of physical phenomena. The physics and chemistry of industrial processes are usually quite complex and sometimes poorly understood. Our process understanding, and our imperfect ability to describe complexity in mathematical terms, limit the fidelity of first-principle models. Computational requirements for executing these complex models are a further limitation: it is often not possible to execute first-principle model algorithms at the high rate required for online control. Nevertheless, rigorous first-principle models are commonplace design tools. Process control is another matter. Important model inputs are often not available as process measurements, making real-time application difficult. In fact, engineers often use models to infer unavailable measurements. 5 figs.

  3. Exceptional reducibility of complex-valued neural networks.

    PubMed

    Kobayashi, Masaki

    2010-07-01

    A neural network is referred to as minimal if the number of hidden neurons cannot be reduced while maintaining the input-output map. The condition under which the number of hidden neurons can be reduced is referred to as reducibility. Real-valued neural networks have only three simple types of reducibility, which extend naturally to complex-valued neural networks without bias terms in the hidden neurons. However, general complex-valued neural networks have a further type of reducibility, referred to herein as exceptional reducibility. In this paper, this additional type of reducibility is presented, and a method by which to minimize complex-valued neural networks is proposed.

  4. A Squeezed Artificial Neural Network for the Symbolic Network Reliability Functions of Binary-State Networks.

    PubMed

    Yeh, Wei-Chang

    2016-08-18

    Network reliability is an important index for the provision of useful information for decision support in the modern world. There is always a need to calculate symbolic network reliability functions (SNRFs) due to dynamic and rapid changes in network parameters. In this brief, the proposed squeezed artificial neural network (SqANN) approach uses Monte Carlo simulation to estimate the reliability corresponding to a given design matrix from the Box-Behnken design, and then the Taguchi method is implemented to find the appropriate number of neurons and the activation functions of the hidden and output layers of the ANN used to evaluate SNRFs. According to experimental results on the benchmark networks, the comparison appears to support the superiority of the proposed SqANN method over the traditional ANN-based approach, with at least a 16.6% improvement in median absolute deviation at the cost of an extra 2 s on average across all experiments.
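
    The Monte Carlo step described above can be sketched for a toy binary-state network (an illustrative five-edge bridge network, not one of the paper's benchmarks): sample each edge as working with probability p and count how often the source and sink remain connected.

```python
# Toy Monte Carlo estimate of binary-state network reliability.
# The 5-edge bridge network and p = 0.9 are illustrative choices.
import random

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # bridge network, 4 nodes

def connected(up_edges, src=0, dst=3):
    """Depth-first search over the edges that are currently working."""
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for a, b in up_edges:
            v = b if a == u else a if b == u else None
            if v is not None and v not in seen:
                seen.add(v)
                stack.append(v)
    return dst in seen

random.seed(0)
p, n = 0.9, 20000
hits = sum(connected([e for e in edges if random.random() < p])
           for _ in range(n))
print(round(hits / n, 3))   # estimate of the source-sink reliability
```

    For this bridge topology the exact reliability at p = 0.9 is about 0.978, so the simulated estimate can be checked against a closed form.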

  5. Neural Networks for Beat Perception in Musical Rhythm

    PubMed Central

    Large, Edward W.; Herrera, Jorge A.; Velasco, Marc J.

    2015-01-01

    Entrainment of cortical rhythms to acoustic rhythms has been hypothesized to be the neural correlate of pulse and meter perception in music. Dynamic attending theory first proposed synchronization of endogenous perceptual rhythms nearly 40 years ago, but only recently has the pivotal role of neural synchrony been demonstrated. Significant progress has since been made in understanding the role of neural oscillations and the neural structures that support synchronized responses to musical rhythm. Synchronized neural activity has been observed in auditory and motor networks, and has been linked with attentional allocation and movement coordination. Here we describe a neurodynamic model that shows how self-organization of oscillations in interacting sensory and motor networks could be responsible for the formation of the pulse percept in complex rhythms. In a pulse synchronization study, we test the model's key prediction that pulse can be perceived at a frequency for which no spectral energy is present in the amplitude envelope of the acoustic rhythm. The result shows that participants perceive the pulse at the theoretically predicted frequency. This model is one of the few consistent with neurophysiological evidence on the role of neural oscillation, and it explains a phenomenon that other computational models fail to explain. Because it is based on a canonical model, the predictions hold for an entire family of dynamical systems, not only a specific one. Thus, this model provides a theoretical link between oscillatory neurodynamics and the induction of pulse and meter in musical rhythm. PMID:26635549

  6. Neural Networks for Beat Perception in Musical Rhythm.

    PubMed

    Large, Edward W; Herrera, Jorge A; Velasco, Marc J

    2015-01-01

    Entrainment of cortical rhythms to acoustic rhythms has been hypothesized to be the neural correlate of pulse and meter perception in music. Dynamic attending theory first proposed synchronization of endogenous perceptual rhythms nearly 40 years ago, but only recently has the pivotal role of neural synchrony been demonstrated. Significant progress has since been made in understanding the role of neural oscillations and the neural structures that support synchronized responses to musical rhythm. Synchronized neural activity has been observed in auditory and motor networks, and has been linked with attentional allocation and movement coordination. Here we describe a neurodynamic model that shows how self-organization of oscillations in interacting sensory and motor networks could be responsible for the formation of the pulse percept in complex rhythms. In a pulse synchronization study, we test the model's key prediction that pulse can be perceived at a frequency for which no spectral energy is present in the amplitude envelope of the acoustic rhythm. The result shows that participants perceive the pulse at the theoretically predicted frequency. This model is one of the few consistent with neurophysiological evidence on the role of neural oscillation, and it explains a phenomenon that other computational models fail to explain. Because it is based on a canonical model, the predictions hold for an entire family of dynamical systems, not only a specific one. Thus, this model provides a theoretical link between oscillatory neurodynamics and the induction of pulse and meter in musical rhythm.

  7. Pruning Neural Networks with Distribution Estimation Algorithms

    SciTech Connect

    Cantu-Paz, E

    2003-01-15

    This paper describes the application of four evolutionary algorithms to the pruning of neural networks used in classification problems. Besides a simple genetic algorithm (GA), the paper considers three distribution estimation algorithms (DEAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine whether the DEAs present advantages over the simple GA in terms of accuracy or speed on this problem. The experiments used a feedforward neural network trained with standard backpropagation on public-domain and artificial data sets. The pruned networks seemed to have accuracy better than or equal to the original fully connected networks; only in a few cases did pruning result in less accurate networks. We found few differences in the accuracy of the networks pruned by the four EAs, but important differences in execution time. The results suggest that a simple GA with a small population might be the best algorithm for pruning networks on the data sets we tested.
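
    As a toy illustration of GA-based pruning (a simplified setup, not the paper's: a least-squares readout stands in for the backpropagation-trained network), a simple genetic algorithm can evolve a binary mask over candidate connections, trading accuracy against network size.

```python
# Toy GA pruning: evolve a binary mask over candidate connections.
# A least-squares readout stands in for the trained network (an assumption).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3]              # only inputs 0 and 3 matter

def fitness(mask):
    """Refit the readout on the kept inputs; penalize network size."""
    if mask.sum() == 0:
        return -1e9
    Xm = X[:, mask.astype(bool)]
    w, *_ = np.linalg.lstsq(Xm, y, rcond=None)
    err = float(np.mean((Xm @ w - y) ** 2))
    return -(err + 0.01 * mask.sum())          # accuracy vs. size trade-off

pop = rng.integers(0, 2, size=(20, 6))         # population of pruning masks
for gen in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]            # truncation selection
    kids = parents[rng.integers(0, 10, 20)].copy()     # clone parents
    flip = rng.random(kids.shape) < 0.1                # bit-flip mutation
    pop = np.where(flip, 1 - kids, kids)

best = pop[np.argmax([fitness(m) for m in pop])]
print(best)   # the surviving mask keeps the informative connections
```

    The DEAs the paper compares replace the mutation/selection loop with an explicit probability model over the mask bits, but the fitness-driven search over masks is the same.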

  8. Neural network computer simulation of medical aerosols.

    PubMed

    Richardson, C J; Barlow, D J

    1996-06-01

    Preliminary investigations have been conducted to assess the potential for using artificial neural networks to simulate aerosol behaviour, with a view to employing this type of methodology in the evaluation and design of pulmonary drug-delivery systems. Details are presented of the general purpose software developed for these tasks; it implements a feed-forward back-propagation algorithm with weight decay and connection pruning, the user having complete run-time control of the network architecture and mode of training. A series of exploratory investigations is then reported in which different network structures and training strategies are assessed in terms of their ability to simulate known patterns of fluid flow in simple model systems. The first of these involves simulations of cellular automata-generated data for fluid flow through a partially obstructed two-dimensional pipe. The artificial neural networks are shown to be highly successful in simulating the behaviour of this simple linear system, but with important provisos relating to the information content of the training data and the criteria used to judge when the network is properly trained. A second set of investigations is then reported in which similar networks are used to simulate patterns of fluid flow through aerosol generation devices, using training data furnished through rigorous computational fluid dynamics modelling. These more complex three-dimensional systems are modelled with equal success. It is concluded that carefully tailored, well trained networks could provide valuable tools not just for predicting but also for analysing the spatial dynamics of pharmaceutical aerosols.

  9. Learning-induced synchronization and plasticity of a developing neural network.

    PubMed

    Chao, T C; Chen, C M

    2005-12-01

    Learning-induced synchronization of a neural network at various developing stages is studied by computer simulations using a pulse-coupled neural network model in which the neuronal activity is simulated by a one-dimensional map. Two types of Hebbian plasticity rules are investigated and their differences are compared. For both models, our simulations show a logarithmic increase in the synchronous firing frequency of the network with the culturing time of the neural network. This result is consistent with recent experimental observations. To investigate how to control the synchronization behavior of a neural network after learning, we compare the occurrence of synchronization for four networks with different designed patterns under the influence of an external signal. The effect of such a signal on the network activity highly depends on the number of connections between neurons. We discuss the synaptic plasticity and enhancement effects for a random network after learning at various developing stages.

  10. Non-Intrusive Gaze Tracking Using Artificial Neural Networks

    DTIC Science & Technology

    1994-01-05

    Non-Intrusive Gaze Tracking Using Artificial Neural Networks, Shumeet Baluja & Dean Pomerleau. Portions of this paper appear in: Baluja, S. & Pomerleau, D.A., "Non-Intrusive Gaze Tracking Using Artificial Neural Networks", Advances in Neural Information Processing Systems. This document has been approved for public release; its distribution is unlimited. Keywords: Gaze Tracking, Artificial Neural Networks

  11. A solution to neural field equations by a recurrent neural network method

    NASA Astrophysics Data System (ADS)

    Alharbi, Abir

    2012-09-01

    Neural field equations (NFE) are used to model the activity of neurons in the brain; they are derived starting from the single-neuron 'integrate-and-fire' model. The neural continuum is spatially discretized for numerical studies, and the governing equations are modeled as a system of ordinary differential equations. In this article the recurrent neural network approach is used to solve this system of ODEs. It consists of a technique developed by combining the standard numerical method of finite differences with the Hopfield neural network. The architecture of the net, the energy function, the updating equations, and the algorithms are developed for the NFE model. A Hopfield neural network is then designed to minimize the energy function modeling the NFE. Results obtained from the Hopfield-finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approach may make it easier to implement on fast parallel computers, giving it a speed advantage over traditional methods.
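
    The essence of the Hopfield-finite-differences approach, as described in the abstract, is to treat the squared finite-difference residual as an energy and let the network dynamics minimize it. A deliberately simplified sketch for a single test ODE u'(t) = -u(t), u(0) = 1 (an illustrative example, not the article's NFE system):

```python
# Simplified energy-minimization sketch: relax grid values of u'(t) = -u(t),
# u(0) = 1, by gradient descent on the squared finite-difference residual,
# which plays the role of the Hopfield network's energy function.
import numpy as np

h, n = 0.05, 41
u = np.zeros(n); u[0] = 1.0                  # boundary value is clamped
lr = 5e-4

for step in range(30000):
    r = (u[1:] - u[:-1]) / h + u[:-1]        # residual of each grid equation
    g = np.zeros(n)
    g[1:] += 2.0 * r / h                     # dE/du[i+1]
    g[:-1] += 2.0 * r * (1.0 - 1.0 / h)      # dE/du[i]
    u -= lr * g                              # "network" relaxation step
    u[0] = 1.0                               # re-clamp the boundary value

print(round(float(u[-1]), 3))                # compare with exp(-2) ~ 0.135
```

    Each grid value only needs residuals from its two neighboring equations, which is why this relaxation parallelizes naturally, the speed advantage the article highlights.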

  12. A Projection Neural Network for Constrained Quadratic Minimax Optimization.

    PubMed

    Liu, Qingshan; Wang, Jun

    2015-11-01

    This paper presents a projection neural network described by a dynamic system for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for global convergence of the proposed neural network. Compared with some of the existing neural networks for quadratic minimax optimization, the proposed neural network is capable of solving more general constrained quadratic minimax optimization problems, and its design does not include any tunable parameter. Moreover, the neural network has lower model complexity: the number of its state variables is equal to the dimension of the optimization problem. Simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.
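
    A generic discretized sketch of projection dynamics for a tiny quadratic minimax problem (illustrative only; the paper's network and its convergence conditions are more general): project gradient-descent-ascent iterates onto box constraints.

```python
# Generic projected gradient-descent-ascent sketch for
#   min_x max_y  f(x, y) = x**2/2 - y**2/2 + x*y,  -1 <= x, y <= 1,
# whose saddle point is (0, 0). Step size and bounds are illustrative.
import numpy as np

def proj(v, lo=-1.0, hi=1.0):
    """Projection onto the box constraint."""
    return float(np.clip(v, lo, hi))

x, y, dt = 0.8, -0.7, 0.05
for _ in range(2000):
    gx = x + y                 # df/dx
    gy = -y + x                # df/dy
    x = proj(x - dt * gx)      # descend in the minimization variable
    y = proj(y + dt * gy)      # ascend in the maximization variable

print(max(abs(x), abs(y)) < 1e-6)   # iterates settle at the saddle point
```

    The state here is exactly (x, y), one variable per problem dimension, mirroring the low model complexity the abstract emphasizes.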

  13. Toward implementation of artificial neural networks that "really work".

    PubMed Central

    Leon, M. A.; Keller, J.

    1997-01-01

    Artificial neural networks are established analytical methods in biomedical research. They have repeatedly outperformed traditional tools for pattern recognition and clinical outcome prediction while assuring continued adaptation and learning. However, successful experimental neural network systems seldom reach a production state; that is, they are not incorporated into clinical information systems. It could be speculated that neural networks simply must undergo a lengthy acceptance process before they become part of the day-to-day operations of health care systems. However, our experience trying to incorporate experimental neural networks into information systems leads us to believe that there are technical and operational barriers that greatly hinder neural network implementation. A solution to these problems may be the delineation of policies and procedures for neural network implementation and the development of a new class of neural network client/server applications that fit the needs of current clinical information systems. PMID:9357613

  14. Applications of neural networks in training science.

    PubMed

    Pfeiffer, Mark; Hohmann, Andreas

    2012-04-01

    Training science views itself as an integrated and applied science, developing practical measures founded on scientific method. It therefore demands consideration of a wide spectrum of approaches and methods. Especially in the field of competitive sports, research questions are usually located in complex environments, so that mainly field studies are drawn upon to obtain broad external validity. Here, the interrelations between different variables or variable sets are mostly of a nonlinear character. In these cases, methods like neural networks, e.g., the pattern-recognizing Self-Organizing Kohonen Feature Maps or similar instruments for identifying interactions, might be successfully applied to analyze the data. Following on from a classification of data analysis methods in training-science research, the aim of this contribution is to give examples from varied sports in which network approaches can be effectively used in training science. First, two examples are given in which neural networks are employed for pattern recognition. While one investigation deals with the detection of sporting talent in swimming, the other is located in game sports research, identifying tactical patterns in team handball. The third and last example shows how an artificial neural network can be used to predict competitive performance in swimming.

  15. Functional expansion representations of artificial neural networks

    NASA Technical Reports Server (NTRS)

    Gray, W. Steven

    1992-01-01

    In the past few years, significant interest has developed in using artificial neural networks to model and control nonlinear dynamical systems. While there exist many proposed schemes for accomplishing this and a wealth of supporting empirical results, most approaches to date tend to be ad hoc in nature and rely mainly on heuristic justifications. The purpose of this project was to further develop analytical tools for representing nonlinear discrete-time input-output systems which, when applied to neural networks, would give insight into architecture selection, pruning strategies, and learning algorithms. A long-term goal is to determine in what sense, if any, a neural network can be used as a universal approximator for nonlinear input-output maps with memory (i.e., those realized by a dynamical system); this property is well known for the case of static or memoryless input-output maps. The general architecture under consideration in this project was a single-input, single-output recurrent feedforward network.

  16. On lateral competition in dynamic neural networks

    SciTech Connect

    Bellyustin, N.S.

    1995-02-01

    Artificial neural networks with homogeneous connections, which use retinal image processing methods, are considered. We point out that there are probably two different types of lateral inhibition of each neural element by its neighbors: one due to negative connection coefficients between elements, and one due to the decreasing response of a neuron to a too-high input signal. The first case is characterized by stable dynamics, given by a Lyapunov function, while in the second case stability is absent and two-dimensional dynamic chaos occurs if the time step in the integration of the model equations is large enough. The continuous neural medium approximation is used for analytical estimation in both cases. The result is a partition of the parameter space into domains with qualitatively different dynamic modes. Computer simulations confirm the estimates and show that joining two-dimensional chaos with symmetries provided by the initial and boundary conditions may produce patterns which are genuine pieces of art.

  17. Neural networks as a control methodology

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1990-01-01

    While conventional computers must be programmed in a logical fashion by a person who thoroughly understands the task to be performed, the motivation behind neural networks is to develop machines which can train themselves to perform tasks, using available information about desired system behavior and learning from experience. There are three goals of this fellowship program: (1) to evaluate various neural net methods and generate computer software to implement those deemed most promising on a personal computer equipped with Matlab; (2) to evaluate methods currently in the professional literature for system control using neural nets to choose those most applicable to control of flexible structures; and (3) to apply the control strategies chosen in (2) to a computer simulation of a test article, the Control Structures Interaction Suitcase Demonstrator, which is a portable system consisting of a small flexible beam driven by a torque motor and mounted on springs tuned to the first flexible mode of the beam. Results of each are discussed.

  18. Neural network models of categorical perception.

    PubMed

    Damper, R I; Harnad, S R

    2000-05-01

    Studies of the categorical perception (CP) of sensory continua have a long and rich history in psychophysics. In 1977, Macmillan, Kaplan, and Creelman introduced the use of signal detection theory to CP studies. Anderson and colleagues simultaneously proposed the first neural model for CP, yet this line of research has been less well explored. In this paper, we assess the ability of neural-network models of CP to predict the psychophysical performance of real observers with speech sounds and artificial/novel stimuli. We show that a variety of neural mechanisms are capable of generating the characteristics of CP. Hence, CP may not be a special mode of perception but an emergent property of any sufficiently powerful general learning system.

  19. Neural networks and logical reasoning systems: a translation table.

    PubMed

    Martins, J; Mendes, R V

    2001-04-01

    A correspondence is established between the basic elements of logic reasoning systems (knowledge bases, rules, inference and queries) and the structure and dynamical evolution laws of neural networks. The correspondence is pictured as a translation dictionary which might allow one to go back and forth between symbolic and network formulations, a desirable step in learning-oriented systems and multicomputer networks. In the framework of Horn clause logics, it is found that atomic propositions with n arguments correspond to nodes with nth-order synapses, rules to synaptic intensity constraints, forward chaining to synaptic dynamics, and queries either to simple node activation or to a query tensor dynamics.

  20. Speed up Neural Network Learning by GPGPU

    NASA Astrophysics Data System (ADS)

    Tsuchida, Yuta; Yoshioka, Michifumi

    Recently, with the development of 3DCG and video processing, graphics boards have achieved higher performance than CPUs, and they are widely used with the progress of computer entertainment. Implementation of general-purpose computing on GPU (GPGPU) has become easier through CUDA, the integrated development environment distributed by NVIDIA. A GPU has dozens or hundreds of arithmetic circuits, whose allocation is controlled by CUDA. Previous research has studied the implementation of neural networks using GPGPU, but network learning was not addressed, because GPU performance is low for conditional processing, whereas it is high for linear algebra processing. We therefore propose two methods. In the first, a whole network is implemented as a thread, and several networks are trained in parallel to shorten the time necessary to find the optimal weight coefficients. In the second, this paper introduces parallelization within the neural network structure: the calculations of neurons in the same layer can be parallelized, and the processes of training the same network on different patterns are also independent. As a result, the second method is 20 times faster than the CPU, and about 6 times faster than the first proposed method.
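
    The layer-level parallelism of the second method can be illustrated in plain NumPy (a CPU stand-in for the GPU kernels; names and sizes are illustrative): neurons within a layer and independent training patterns have no mutual dependencies, so both collapse into a single matrix product.

```python
# Illustrative CPU stand-in for the GPU parallelism: neurons in one layer
# and independent training patterns both collapse into a matrix product.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 10))      # 32 independent training patterns
W = rng.normal(size=(10, 5))       # weights of 5 neurons in one layer

# Sequential view: one pattern and one neuron at a time
seq = np.empty((32, 5))
for p in range(32):
    for j in range(5):
        seq[p, j] = np.tanh(X[p] @ W[:, j])

# Parallel view: the whole layer, all patterns, in a single product
par = np.tanh(X @ W)

print(np.allclose(seq, par))
```

    A GPU kernel assigns the independent entries of this product to its many arithmetic circuits, which is why the matrix form maps so well onto CUDA.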

  1. Predicting stream water quality using artificial neural networks (ANN)

    SciTech Connect

    Bowers, J.A.

    2000-05-17

    Predicting point and nonpoint source runoff of dissolved and suspended materials into their receiving streams is important to protecting water quality, and such runoff has traditionally been modeled using deterministic or statistical methods. The purpose of this study was to predict water quality in small streams using an artificial neural network (ANN). The selected input variables were local precipitation, stream flow rates, and turbidity, for an initial prediction of suspended solids in the stream. A single-hidden-layer feedforward neural network using backpropagation learning algorithms was developed, with a detailed analysis of the design factors affecting successful implementation of the model. All features of a feedforward neural model were investigated, including training set creation, the number of layers and neurons, neural activation functions, and backpropagation algorithms. Least-squares regression was used to compare model predictions with test data sets. Most of the model configurations offered excellent predictive capabilities. Using either the logistic or the hyperbolic tangent activation function did not significantly affect predicted results. This was also true for the two learning algorithms tested, the Levenberg-Marquardt and Polak-Ribiere conjugate-gradient descent methods. The most important step during model development and training was the representative selection of data records for training the model.
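
    A small side note on why the activation choice may have mattered so little (an observation, not the study's claim): the hyperbolic tangent is an affine rescaling of the logistic function, tanh(x) = 2*sigmoid(2x) - 1, so a network can absorb the difference into its weights and biases.

```python
# Check of the identity tanh(x) = 2*sigmoid(2x) - 1 on a grid of points,
# which is why logistic and tanh networks have the same expressive power.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-4.0, 4.0, 101)
print(np.allclose(np.tanh(x), 2.0 * sigmoid(2.0 * x) - 1.0))
```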

  2. SYNAPTIC DEPRESSION IN DEEP NEURAL NETWORKS FOR SPEECH PROCESSING

    PubMed Central

    Zhang, Wenhao; Li, Hanyu; Yang, Minda; Mesgarani, Nima

    2017-01-01

    A characteristic property of biological neurons is their ability to dynamically change their synaptic efficacy in response to variable input conditions. This mechanism, known as synaptic depression, significantly contributes to the formation of normalized representations of speech features. Synaptic depression also contributes to the robust performance of biological systems. In this paper, we describe how synaptic depression can be modeled and incorporated into deep neural network architectures to improve their generalization ability. We observed that when synaptic depression is added to the hidden layers of a neural network, it reduces the effect of changing background activity in the node activations. In addition, we show that when synaptic depression is included in a deep neural network trained for phoneme classification, the performance of the network improves under noisy conditions not included in the training phase. Our results suggest that more complete neuron models may further reduce the gap between biological and artificial computing, resulting in networks that better generalize to novel signal conditions. PMID:28286424
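
    One simple way to model the mechanism described above (a hedged sketch; the paper's formulation may differ) is a divisive depression stage in which each unit's efficacy is scaled down by a running average of its own input, suppressing constant background shifts while passing changes through.

```python
# Hedged sketch of divisive synaptic depression: efficacy shrinks with a
# running average of recent input, suppressing constant background activity.
import numpy as np

def depress(x, tau=0.9):
    """Apply divisive depression along the time axis of activations x."""
    out = np.empty_like(x)
    avg = np.zeros(x.shape[1])
    for t in range(len(x)):
        avg = tau * avg + (1.0 - tau) * x[t]   # running mean of the input
        out[t] = x[t] / (1.0 + avg)            # depressed response
    return out

t = np.arange(200)
signal = np.sin(0.3 * t)[:, None]
quiet = depress(1.0 + signal)    # low, constant background
loud = depress(5.0 + signal)     # strongly raised background activity

# Raw responses differ by a factor of ~5; depressed ones differ far less
ratio = float(loud[-50:].mean() / quiet[-50:].mean())
print(round(ratio, 2))
```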

  3. A novel neural network based image reconstruction model with scale and rotation invariance for target identification and classification for Active millimetre wave imaging

    NASA Astrophysics Data System (ADS)

    Agarwal, Smriti; Bisht, Amit Singh; Singh, Dharmendra; Pathak, Nagendra Prasad

    2014-12-01

    Millimetre wave imaging (MMW) is gaining tremendous interest among researchers and has potential applications in security checks, standoff personal screening, automotive collision avoidance, and more. Current state-of-the-art imaging techniques, viz. microwave and X-ray imaging, suffer from lower resolution and harmful ionizing radiation, respectively. In contrast, MMW imaging operates at lower power and is non-ionizing, and hence medically safe. Despite these favourable attributes, MMW imaging faces various challenges: it is still a relatively unexplored area and lacks a suitable imaging methodology for extracting complete target information. Keeping these challenges in view, a MMW active imaging radar system at 60 GHz was designed for standoff imaging applications. A C-scan (horizontal and vertical scanning) methodology was developed that provides a cross-range resolution of 8.59 mm. The paper further details a suitable target identification and classification methodology. For identification of regular-shape targets, a mean-standard-deviation-based segmentation technique was formulated and validated using a different target shape. For classification, a probability-density-function-based target material discrimination methodology was proposed and validated on a different dataset. Lastly, a novel artificial-neural-network-based scale- and rotation-invariant image reconstruction methodology is proposed to counter distortions in the image caused by noise, rotation, or scale variations. The designed neural network, once trained with sample images, automatically handles these deformations and successfully reconstructs the corrected image for the test targets. The techniques developed in this paper are tested and validated using four different regular shapes, viz. rectangle, square, triangle, and circle.

  4. Visual grammars and their neural networks

    NASA Astrophysics Data System (ADS)

    Mjolsness, Eric

    1992-07-01

    We exhibit a systematic way to derive neural nets for vision problems. It involves formulating a vision problem as Bayesian inference or decision on a comprehensive model of the visual domain given by a probabilistic grammar. A key feature of this grammar is the way in which it eliminates model information, such as object labels, as it produces an image; correspondence problems and other noise removal tasks result. The neural nets that arise most directly are generalized assignment networks. Also there are transformations which naturally yield improved algorithms such as correlation matching in scale space and the Frameville neural nets for high-level vision. Networks derived this way generally have objective functions with spurious local minima; such minima may commonly be avoided by dynamics that include deterministic annealing, for example recent improvements to Mean Field Theory dynamics. The grammatical method of neural net design allows domain knowledge to enter from all levels of the grammar, including `abstract' levels remote from the final image data, and may permit new kinds of learning as well.

  5. Complex Chebyshev-polynomial-based unified model (CCPBUM) neural networks

    NASA Astrophysics Data System (ADS)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1998-03-01

    In this paper, we propose a complex Chebyshev-polynomial-based unified model neural network for the approximation of complex-valued functions. Based on this approximate transformable technique, we derive the relationship between the single-layered neural network and the multilayered perceptron neural network. It is shown that the complex Chebyshev-polynomial-based unified model neural network can be represented as a functional link network based on Chebyshev polynomials. We also derive a new learning algorithm for the proposed network. It turns out that the complex Chebyshev-polynomial-based unified model neural network not only has the same capability as a universal approximator, but also has faster learning speed than conventional complex feedforward/recurrent neural networks.
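
    A real-valued sketch of the functional-link idea (a simplification of the complex-valued model above; the target function and polynomial order are illustrative choices): expand the input with Chebyshev polynomials and fit a single linear output layer in one least-squares step, in place of a trained hidden layer.

```python
# Real-valued functional-link sketch: Chebyshev feature expansion plus a
# single least-squares output layer. Target and order are illustrative.
import numpy as np

def chebyshev_features(x, order=10):
    """Columns T_0(x)..T_order(x) via T_k = 2x*T_{k-1} - T_{k-2}."""
    T = [np.ones_like(x), x]
    for k in range(2, order + 1):
        T.append(2.0 * x * T[-1] - T[-2])
    return np.stack(T, axis=1)

x = np.linspace(-1.0, 1.0, 200)
y = np.exp(x) * np.sin(3.0 * x)                 # function to approximate
Phi = chebyshev_features(x)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # one-shot linear "training"
err = float(np.max(np.abs(Phi @ w - y)))
print(err < 1e-2)                               # tight fit from a flat model
```

    Because the expansion is fixed and only the output layer is learned, training reduces to a linear problem, which is one intuition for the faster learning speed the abstract claims.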

  6. Associative Memory Neural Network with Low Temporal Spiking Rates

    NASA Astrophysics Data System (ADS)

    Amit, Daniel J.; Treves, A.

    1989-10-01

    We describe a modified attractor neural network in which neuronal dynamics takes place on a time scale of the absolute refractory period but the mean temporal firing rate of any neuron in the network is lower by an arbitrary factor that characterizes the strength of the effective inhibition. It operates by encoding information on the excitatory neurons only and assuming the inhibitory neurons to be faster and to inhibit the excitatory ones by an effective postsynaptic potential that is expressed in terms of the activity of the excitatory neurons themselves. Retrieval is identified as a nonergodic behavior of the network whose consecutive states have a significantly enhanced activity rate for the neurons that should be active in a stored pattern and a reduced activity rate for the neurons that are inactive in the memorized pattern. In contrast to the Hopfield model the network operates away from fixed points and under the strong influence of noise. As a consequence, of the neurons that should be active in a pattern, only a small fraction is active in any given time cycle and those are randomly distributed, leading to reduced temporal rates. We argue that this model brings neural network models much closer to biological reality. We present the results of detailed analysis of the model as well as simulations.

  7. Automatic breast density classification using neural network

    NASA Astrophysics Data System (ADS)

    Arefan, D.; Talebpour, A.; Ahmadinejhad, N.; Kamali Asl, A.

    2015-12-01

    According to studies, the risk of breast cancer is directly associated with breast density, and much research has been done on automatic diagnosis of breast density from mammography. In the current study, artifacts are removed from mammograms using image processing techniques; the pectoral muscle is detected with high accuracy by locating points on its edges and estimating the boundary with regression techniques, and the breast tissue is then extracted fully automatically. To classify mammograms into three categories (Fatty, Glandular, Dense), a feature based on the gray-level difference between hard and soft tissue is used in addition to statistical features, with a neural network classifier containing a hidden layer. The image database used in this research is the mini-MIAS database, and the maximum classification accuracy of the system is reported as 97.66% with 8 neurons in the hidden layer of the neural network.
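A one-hidden-layer softmax classifier of the kind this record describes can be sketched as follows. The data here are synthetic Gaussian clusters standing in for the paper's mammographic features (the mini-MIAS features themselves are not reproduced), and the layer size, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D features standing in for the paper's features:
# three Gaussian clusters for the Fatty / Glandular / Dense classes.
centers = np.array([[0.0, 0.0], [3.0, 3.0], [0.0, 4.0]])
X = np.vstack([c + rng.normal(0.0, 0.5, size=(60, 2)) for c in centers])
y = np.repeat(np.arange(3), 60)
onehot = np.eye(3)[y]

H = 8                                         # single hidden layer
W1 = rng.normal(0.0, 0.5, size=(2, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, size=(H, 3)); b2 = np.zeros(3)

for _ in range(400):                          # plain batch gradient descent
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    g = (p - onehot) / len(X)                 # softmax cross-entropy gradient
    gh = (g @ W2.T) * (1.0 - h**2)            # backprop through tanh
    W2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum(axis=0)
    W1 -= 0.5 * (X.T @ gh); b1 -= 0.5 * gh.sum(axis=0)

h = np.tanh(X @ W1 + b1)
acc = float(((h @ W2 + b2).argmax(axis=1) == y).mean())
```

On well-separated clusters this trains to near-perfect accuracy; the real classification difficulty in the paper lies in the feature extraction from mammograms, not in the network itself.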

  8. Application of neural networks in space construction

    NASA Technical Reports Server (NTRS)

    Thilenius, Stephen C.; Barnes, Frank

    1990-01-01

    When trying to decide which tasks should be done by robots and which by humans in space construction, one decisive barrier ultimately divides the tasks: can a computer do the job? Von Neumann-type computers have great difficulty with problems that the human brain seems to solve instantaneously and with little effort, such as pattern recognition, speech recognition, content-addressable memory, and command interpretation. In an attempt to simulate these talents of the human brain, much research is being done into the operation and construction of artificial neural networks. The efficiency of the interface between man and machine, robots in particular, can therefore be greatly improved with the use of neural networks. For example, wouldn't it be easier to command a robot to 'fetch an object' rather than having to control the entire operation remotely?

  9. Neural networks predict tomato maturity stage

    NASA Astrophysics Data System (ADS)

    Hahn, Federico

    1999-03-01

    Almost 40% of the total horticultural produce exported from Mexico to the USA is tomato, and quality is fundamental for maintaining the market. Many fruits packed at the green-mature stage never mature to a red color because they were harvested before reaching physiological maturity. Gassing to advance maturation does not work on those fruits, and repacking becomes necessary at terminal markets, causing losses to the producer. Tomato spectral signatures differ at each maturity stage, whereas tomato size is poorly correlated with peak wavelengths. A back-propagation neural network was used to predict tomato maturity using reflectance ratios as inputs. Higher success rates in maturity-stage recognition were achieved with neural networks than with discriminant analysis.

  10. Privacy-preserving backpropagation neural network learning.

    PubMed

    Chen, Tingting; Zhong, Sheng

    2009-10-01

    With the development of distributed computing environments, many learning problems now have to deal with distributed input data. To enhance cooperation in learning, it is important to address the privacy concern of each data holder by extending the privacy-preservation notion to the original learning algorithms. In this paper, we focus on preserving privacy in an important learning model, multilayer neural networks. We present a privacy-preserving two-party distributed backpropagation algorithm which allows a neural network to be trained without requiring either party to reveal her data to the other. We provide complete correctness and security analysis of our algorithms. The effectiveness of our algorithms is verified by experiments on various real-world data sets.
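The core idea, each party contributing to a shared gradient update without revealing its raw data, can be illustrated with one round of additive masking. This is a toy sketch, not the paper's actual cryptographic protocol: in the strict two-party setting the unmasked sum itself would let one party infer the other's gradient, which is exactly the kind of leakage the paper's algorithm (with its full security analysis) is designed to prevent. All names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Squared-error gradient of a linear model on one party's private slice.
def local_gradient(X, y, w):
    return X.T @ (X @ w - y) / len(y)

d = 3
w = np.zeros(d)
X_a, y_a = rng.normal(size=(50, d)), rng.normal(size=50)   # party A's data
X_b, y_b = rng.normal(size=(50, d)), rng.normal(size=50)   # party B's data

g_a = local_gradient(X_a, y_a, w)
g_b = local_gradient(X_b, y_b, w)

r = rng.normal(size=d)        # random mask known only to A
msg_ab = g_a + r              # A -> B: masked, reveals nothing about g_a
msg_ba = msg_ab + g_b         # B -> A: B adds its share without seeing g_a
g_total = msg_ba - r          # A removes the mask to obtain the joint gradient
```

The joint gradient equals the gradient of the pooled data set, so each descent step behaves as if the data were centralized, while only masked messages cross the wire.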

  11. Design of fiber optic adaline neural networks

    NASA Astrophysics Data System (ADS)

    Ghosh, Anjan K.; Trepka, Jim

    1997-03-01

    Based on possible optoelectronic realizations of adaptive filters and equalizers using fiber-optic tapped delay lines and spatial light modulators, we describe the design of a single-layer fiber-optic Adaline neural network that can be used as a bit-pattern classifier. In our design we employ as few electronic devices as possible and use optical computation to exploit the advantages of optics in processing speed, parallelism, and interconnection. The new optical neural network design is intended for optical processing of guided lightwave signals, not electronic signals. We analyze the convergence, or learning, characteristics of the optoelectronic Adaline in the presence of hardware errors and show that such an optoelectronic Adaline can detect a desired code word/token/header with good accuracy.
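The Adaline's Widrow-Hoff (LMS) learning rule can be shown in a plain software sketch, standing in for the record's optoelectronic hardware. A single linear unit is trained on all 4-bit bipolar patterns so that its response is largest for a desired header; the header, step size, and epoch count are illustrative assumptions.

```python
import numpy as np

# Desired 4-bit header in bipolar (+1/-1) form; it corresponds to '1011'.
header = np.array([1, -1, 1, 1])
X = np.array([[1 if b == '1' else -1 for b in f"{i:04b}"] for i in range(16)])
y = np.where((X == header).all(axis=1), 1.0, -1.0)

w, b = np.zeros(4), 0.0
mu = 0.02                                   # LMS (Widrow-Hoff) step size
for _ in range(300):
    for x_i, y_i in zip(X, y):
        err = y_i - (w @ x_i + b)           # error of the *linear* output
        w += mu * err * x_i                 # Widrow-Hoff update
        b += mu * err

scores = X @ w + b                          # correlator response per pattern
```

The trained unit behaves like a matched filter: the desired header produces the strongest response, so a suitable threshold separates it from every other pattern.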

  12. Neural networks for aerosol particles characterization

    NASA Astrophysics Data System (ADS)

    Berdnik, V. V.; Loiko, V. A.

    2016-11-01

    Multilayer perceptron neural networks with one, two, and three inputs are built to retrieve the parameters of a spherical homogeneous nonabsorbing particle. The refractive index ranges from 1.3 to 1.7; the particle radius ranges from 0.251 μm to 56.234 μm. The logarithms of the scattered radiation intensity are used as input signals. The problem of selecting the most informative scattering angles is elucidated. It is shown that polychromatic illumination significantly increases the retrieval accuracy. In the absence of measurement errors, the relative error of radius retrieval by the neural network with three inputs is 0.54%, and the relative error of refractive-index retrieval is 0.84%. The effect of measurement errors on the retrieval is simulated.

  13. Pattern recognition, neural networks, and artificial intelligence

    NASA Astrophysics Data System (ADS)

    Bezdek, James C.

    1991-03-01

    We write about the relationship between numerical pattern recognition and neural-like computational networks. Extensive research proposing the use of neural models for a wide variety of applications has been conducted in the past few years. Sometimes the justification for investigating the potential of neural nets (NNs) is obvious. On the other hand, current enthusiasm for this approach has also led to the use of neural models whose apparent rationale is best described as a 'feeding frenzy'. In this latter instance there is at times a concomitant lack of concern about many 'side issues' connected with algorithms (e.g., complexity, convergence, stability, robustness, and performance validation) that need attention before any computational model becomes part of an operational system. These issues are examined with a view toward suggesting how best to integrate and exploit the promise of the neural approach with other efforts aimed at advancing the art and science of pattern recognition and its applications in fielded systems in the next decade.

  14. HAWC Energy Reconstruction via Neural Network

    NASA Astrophysics Data System (ADS)

    Marinelli, Samuel; HAWC Collaboration

    2016-03-01

    The High-Altitude Water-Cherenkov (HAWC) γ-ray observatory is located at 4100 m above sea level on the Sierra Negra mountain in the state of Puebla, Mexico. Its 300 water-filled tanks are instrumented with PMTs that detect Cherenkov light produced by charged particles in atmospheric air showers induced by TeV γ-rays. The detector became fully operational in March of 2015. With a 2-sr field of view and duty cycle exceeding 90%, HAWC is a survey instrument sensitive to diverse γ-ray sources, including supernova remnants, pulsar wind nebulae, active galactic nuclei, and others. Particle-acceleration mechanisms at these sources can be inferred by studying their energy spectra, particularly at high energies. We have developed a technique for estimating primary-γ-ray energies using an artificial neural network (ANN). Input variables to the ANN are selected to characterize shower multiplicity in the detector, the fraction of the shower contained in the detector, and atmospheric attenuation of the shower. Monte Carlo simulations show that the new estimator has superior performance to the current estimator used in HAWC publications. This work was supported by the National Science Foundation.

  15. When Networks Disagree: Ensemble Methods for Hybrid Neural Networks

    DTIC Science & Technology

    1992-10-27

    takes the form of repeated on-line stochastic gradient descent of randomly initialized nets. However, unlike the combination process in parametric estimation, which usually takes the form of a simple average in parameter space, the parameters in a neural network take the form of neuronal weights, which
  16. 1991 IEEE International Joint Conference on Neural Networks, Singapore, Nov. 18-21, 1991, Proceedings. Vols. 1-3

    SciTech Connect

    Not Available

    1991-01-01

    The present conference discusses the application of neural networks to associative memories, neurorecognition, hybrid systems, supervised and unsupervised learning, image processing, neurophysiology, sensation and perception, electrical neurocomputers, optimization, robotics, machine vision, sensorimotor control systems, and neurodynamics. Attention is given to such topics as optimal associative mappings in recurrent networks, self-improving associative neural network models, fuzzy activation functions, adaptive pattern recognition with sparse associative networks, efficient question-answering in a hybrid system, the use of abstractions by neural networks, remote-sensing pattern classification, speech recognition with guided propagation, inverse-step competitive learning, and rotational quadratic function neural networks. Also discussed are electrical load forecasting, evolutionarily stable and unstable strategies, the capacity of recurrent networks, neural net vs control theory, perceptrons for image recognition, storage capacity of bidirectional associative memories, associative random optimization for control, automatic synthesis of digital neural architectures, self-learning robot vision, and the associative dynamics of chaotic neural networks.

  17. Vitality of Neural Networks under Reoccurring Catastrophic Failures

    NASA Astrophysics Data System (ADS)

    Sardi, Shira; Goldental, Amir; Amir, Hamutal; Vardi, Roni; Kanter, Ido

    2016-08-01

    Catastrophic failures are complete and sudden collapses in the activity of large networks such as economics, electrical power grids and computer networks, which typically require a manual recovery process. Here we experimentally show that excitatory neural networks are governed by a non-Poissonian reoccurrence of catastrophic failures, where their repetition time follows a multimodal distribution characterized by a few tenths of a second and tens of seconds timescales. The mechanism underlying the termination and reappearance of network activity is quantitatively shown here to be associated with nodal time-dependent features, neuronal plasticity, where hyperactive nodes damage the response capability of their neighbors. It presents a complementary mechanism for the emergence of Poissonian catastrophic failures from damage conductivity. The effect that hyperactive nodes degenerate their neighbors represents a type of local competition which is a common feature in the dynamics of real-world complex networks, whereas their spontaneous recoveries represent a vitality which enhances reliable functionality.

  18. Vitality of Neural Networks under Reoccurring Catastrophic Failures

    PubMed Central

    Sardi, Shira; Goldental, Amir; Amir, Hamutal; Vardi, Roni; Kanter, Ido

    2016-01-01

    Catastrophic failures are complete and sudden collapses in the activity of large networks such as economics, electrical power grids and computer networks, which typically require a manual recovery process. Here we experimentally show that excitatory neural networks are governed by a non-Poissonian reoccurrence of catastrophic failures, where their repetition time follows a multimodal distribution characterized by a few tenths of a second and tens of seconds timescales. The mechanism underlying the termination and reappearance of network activity is quantitatively shown here to be associated with nodal time-dependent features, neuronal plasticity, where hyperactive nodes damage the response capability of their neighbors. It presents a complementary mechanism for the emergence of Poissonian catastrophic failures from damage conductivity. The effect that hyperactive nodes degenerate their neighbors represents a type of local competition which is a common feature in the dynamics of real-world complex networks, whereas their spontaneous recoveries represent a vitality which enhances reliable functionality. PMID:27530974

  19. Vitality of Neural Networks under Reoccurring Catastrophic Failures.

    PubMed

    Sardi, Shira; Goldental, Amir; Amir, Hamutal; Vardi, Roni; Kanter, Ido

    2016-08-17

    Catastrophic failures are complete and sudden collapses in the activity of large networks such as economics, electrical power grids and computer networks, which typically require a manual recovery process. Here we experimentally show that excitatory neural networks are governed by a non-Poissonian reoccurrence of catastrophic failures, where their repetition time follows a multimodal distribution characterized by a few tenths of a second and tens of seconds timescales. The mechanism underlying the termination and reappearance of network activity is quantitatively shown here to be associated with nodal time-dependent features, neuronal plasticity, where hyperactive nodes damage the response capability of their neighbors. It presents a complementary mechanism for the emergence of Poissonian catastrophic failures from damage conductivity. The effect that hyperactive nodes degenerate their neighbors represents a type of local competition which is a common feature in the dynamics of real-world complex networks, whereas their spontaneous recoveries represent a vitality which enhances reliable functionality.

  20. Neural network error correction for solving coupled ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented which learns the errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on training a neural network to learn the error generated by, for example, a Runge-Kutta integrator on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made between training the neural network with backpropagation and with a new method that was found to converge in fewer iterations. The neural net programs, the MD model, and the calculations are discussed.
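The record's scheme, learning a solver's error and adding it back as a correction, can be sketched on a linear test problem. A least-squares model stands in for the neural network here (for a linear ODE the Euler local error is itself linear in the state, so the corrector can be learned exactly); the oscillator, step size, and sample counts are illustrative assumptions.

```python
import numpy as np

# Harmonic oscillator y' = A y, a linear stand-in for the MD test problem.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
h = 0.1

def euler_step(y):
    return y + h * (A @ y)

def exact_step(y):
    c, s = np.cos(h), np.sin(h)             # exp(h*A) is a rotation by h
    return np.array([[c, s], [-s, c]]) @ y

# Training set: random states paired with the solver's local error.
rng = np.random.default_rng(4)
Y = rng.normal(size=(200, 2))
E = np.array([exact_step(v) - euler_step(v) for v in Y])

# Least-squares corrector standing in for the neural network.
W, *_ = np.linalg.lstsq(Y, E, rcond=None)

y_plain = np.array([1.0, 0.0])              # uncorrected Euler
y_corr = np.array([1.0, 0.0])               # Euler plus learned correction
for _ in range(100):
    y_plain = euler_step(y_plain)
    y_corr = euler_step(y_corr) + y_corr @ W

t = 100 * h
y_true = np.array([np.cos(t), -np.sin(t)])  # analytic solution at t
```

With the learned correction each step matches the exact propagator, so the corrected trajectory stays on the true solution while plain Euler visibly drifts; a neural network plays the same role when, as in the record, the error is a nonlinear function of the state.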