Science.gov

Sample records for active neural networks

  1. Seismic active control by neural networks.

    SciTech Connect

    Tang, Y.

    1998-01-01

    A study on the application of artificial neural networks (ANNs) to active structural control under seismic loads is carried out. The structure considered is a single-degree-of-freedom (SDF) system with an active bracing device. The control force is computed by a trained neural network. A feed-forward neural network architecture and an adaptive back-propagation training algorithm are used in the study. The neural net is trained to reproduce the function that represents the response-excitation relationship of the SDF system under seismic loads. The input-output training patterns are generated randomly. In the back-propagation training algorithm, the learning rate is determined by ensuring that the error function decreases at each epoch. The computer program implemented is validated by solving the XOR classification problem. The trained ANN is then used to compute the control force according to the control strategy; if the control force exceeds the actuator's capacity limit, it is set equal to that limit. The control strategy employed herein is to apply a control force at every time step that cancels the system velocity induced at the preceding time step, so that the gradual rhythmic build-up of the response is destroyed. The ground motions considered in the numerical example are the 1940 El Centro earthquake and the 1979 Imperial Valley earthquake in California. The system responses with and without control are calculated and compared. The promising results obtained from the numerical examples assert the feasibility and potential of applying ANNs to seismic active control.
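The control law described above (cancel the previous step's velocity, saturated at the actuator capacity) can be sketched as follows. The SDF parameters, the explicit integration scheme, and the ground-motion input are illustrative assumptions, not values from the paper.

```python
import numpy as np

def simulate_sdf(ag, dt=0.01, m=1.0, c=0.2, k=40.0, f_max=5.0, control=True):
    """Simulate a single-degree-of-freedom structure under ground
    acceleration ag (semi-implicit Euler). With control on, each step
    applies the force that would cancel the velocity induced in the
    preceding step, saturated at the actuator capacity f_max."""
    x = np.zeros(len(ag))
    v = np.zeros(len(ag))
    for i in range(1, len(ag)):
        # force needed to cancel last step's velocity, clipped to capacity
        f = np.clip(-m * v[i - 1] / dt, -f_max, f_max) if control else 0.0
        a = (-m * ag[i] - c * v[i - 1] - k * x[i - 1] + f) / m
        v[i] = v[i - 1] + a * dt
        x[i] = x[i - 1] + v[i] * dt
    return x, v
```

Running the controlled and uncontrolled cases on the same ground motion lets one compare peak responses directly, as in the paper's numerical examples.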

  2. Neural network with formed dynamics of activity

    SciTech Connect

    Dunin-Barkovskii, V.L.; Osovets, N.B.

    1995-03-01

    The problem of developing a neural network with a given pattern of the state sequence is considered. A neural network structure and an algorithm for forming its bond matrix, which lead to an approximate but robust solution of the problem, are proposed and discussed. Limiting characteristics of the serviceability of the proposed structure are studied. Various methods of visualizing dynamic processes in a neural network are compared. Possible applications of the results obtained for the interpretation of neurophysiological data and in neuroinformatics systems are discussed.

  3. Deep Neural Networks with Multistate Activation Functions

    PubMed Central

    Cai, Chenghao; Xu, Yanyan; Ke, Dengfeng; Su, Kaile

    2015-01-01

    We propose multistate activation functions (MSAFs) for deep neural networks (DNNs). These MSAFs are new kinds of activation functions capable of representing more than two states, including the N-order MSAFs and the symmetrical MSAF. DNNs with these MSAFs can be trained via conventional Stochastic Gradient Descent (SGD) as well as mean-normalised SGD. We also discuss how these MSAFs perform when used to resolve classification problems. Experimental results on the TIMIT corpus reveal that, on speech recognition tasks, DNNs with MSAFs outperform conventional DNNs, achieving a relative improvement of 5.60% in phoneme error rate. Further experiments reveal that mean-normalised SGD facilitates the training of DNNs with MSAFs, especially with large training sets. The models can also be trained directly, without pretraining, when the training set is sufficiently large, which yields a considerable relative improvement of 5.82% in word error rate. PMID:26448739
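One plausible reading of an N-order MSAF is a sum of shifted logistic sigmoids, which yields a staircase-shaped activation with more than two representable states. The offsets below are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def msaf(x, order=2, offsets=None):
    """Sketch of an N-order multistate activation: a sum of 'order'
    shifted logistic sigmoids, saturating at 0, 1, ..., order.
    Offsets are illustrative; the paper defines its own forms."""
    if offsets is None:
        offsets = [4.0 * k for k in range(order)]
    return sum(1.0 / (1.0 + np.exp(-(x - b))) for b in offsets)
```

A symmetrical variant could be obtained by shifting the same staircase so it is odd about the origin.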

  4. Active Sampling in Evolving Neural Networks.

    ERIC Educational Resources Information Center

    Parisi, Domenico

    1997-01-01

    Comments on Raftopoulos article (PS 528 649) on facilitative effect of cognitive limitation in development and connectionist models. Argues that the use of neural networks within an "Artificial Life" perspective can more effectively contribute to the study of the role of cognitive limitations in development and their genetic basis than can using…

  5. Neural Networks

    SciTech Connect

    Smith, Patrick I.

    2003-09-23

    information [2]. Each of these cells acts as a simple processor; the complex abilities of the brain emerge when individual cells interact with one another. In neural networks, the input data are processed by a propagation function that sums the values of all the incoming data. The resulting value is then compared with a threshold: it must exceed the activation value in order to become output. The activation function is a mathematical function that a neuron uses to produce an output from its input value [8]. Figure 1 depicts this process. Neural networks usually have three layers, an input, a hidden, and an output layer, which together create the end result of the neural network. A real-world example is a child associating the word dog with a picture: the child says dog and simultaneously looks at a picture of a dog. The input is the spoken word ''dog'', the hidden layer is the brain's processing, and the output is the category assigned to the word dog based on the picture. This illustration describes how a neural network functions.
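The propagation-then-threshold step described above can be sketched directly; the weights and threshold below are illustrative values, not from the report.

```python
import numpy as np

def neuron_output(inputs, weights, threshold=0.5):
    """Single artificial neuron: the propagation function sums the
    weighted inputs, and a step activation fires (outputs 1.0) only
    if the sum exceeds the threshold."""
    net = np.dot(inputs, weights)  # propagation: weighted sum of incoming data
    return 1.0 if net > threshold else 0.0
```

For example, inputs [1, 1] with weights [0.4, 0.3] give a net value of 0.7, which exceeds the 0.5 threshold, so the neuron fires.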

  6. A neural networks study of quinone compounds with trypanocidal activity.

    PubMed

    de Molfetta, Fábio Alberto; Angelotti, Wagner Fernando Delfino; Romero, Roseli Aparecida Francelin; Montanari, Carlos Alberto; da Silva, Albérico Borges Ferreira

    2008-10-01

    This work investigates neural network models for predicting the trypanocidal activity of 28 quinone compounds. Artificial neural networks (ANN), such as multilayer perceptrons (MLP) and Kohonen models, were employed with the aim of modeling the nonlinear relationship between quantum and molecular descriptors and trypanocidal activity. The calculated descriptors and the principal components were used as input to train neural network models to verify the behavior of the nets. The best model for both network types (MLP and Kohonen) was obtained with four descriptors as input. The descriptors were T5 (torsion angle), QTS1 (sum of absolute values of the atomic charges), VOLS2 (volume of the substituent at region B) and HOMO-1 (energy of the molecular orbital below HOMO). These descriptors provide information on the kind of interaction that occurs between the compounds and the biological receptor. Both neural network models used here can predict the trypanocidal activity of the quinone compounds well, with low errors on the testing set and a high rate of correct classification. Thanks to the nonlinear model obtained from the neural networks, we can conclude that electronic and structural properties are important factors in the interaction between quinone compounds that exhibit trypanocidal activity and their biological receptors. The final ANN models should be useful in the design of novel trypanocidal quinones with improved potency. PMID:18629551

  7. Generating Coherent Patterns of Activity from Chaotic Neural Networks

    PubMed Central

    Sussillo, David; Abbott, L. F.

    2009-01-01

    Neural circuits display complex activity patterns both spontaneously and when responding to a stimulus or generating a motor output. How are these two forms of activity related? We develop a procedure called FORCE learning for modifying synaptic strengths either external to or within a model neural network to change chaotic spontaneous activity into a wide variety of desired activity patterns. FORCE learning works even though the networks we train are spontaneously chaotic and we leave feedback loops intact and unclamped during learning. Using this approach, we construct networks that produce a wide variety of complex output patterns, input-output transformations that require memory, multiple outputs that can be switched by control inputs, and motor patterns matching human motion capture data. Our results reproduce data on pre-movement activity in motor and premotor cortex, and suggest that synaptic plasticity may be a more rapid and powerful modulator of network activity than generally appreciated. PMID:19709635
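A minimal sketch of FORCE learning as described (recursive least-squares on a readout whose output is fed back into a chaotic rate network, with the feedback loop left intact during learning). The network size, gain, and sine-wave target below are illustrative choices, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt = 200, 0.1
J = rng.normal(0.0, 1.5 / np.sqrt(N), (N, N))  # gain 1.5: chaotic regime
w_fb = rng.uniform(-1.0, 1.0, N)               # readout fed back into the network
w = np.zeros(N)                                # readout weights, trained by RLS
P = np.eye(N)                                  # running inverse correlation matrix

steps = 3000
target = np.sin(2.0 * np.pi * np.arange(steps) * dt / 5.0)  # desired output
x = rng.normal(0.0, 0.5, N)
errs = []
for t in range(steps):
    r = np.tanh(x)
    z = w @ r                          # network output via trained readout
    x += dt * (-x + J @ r + w_fb * z)  # leaky rate dynamics, Euler step
    errs.append(abs(z - target[t]))
    # RLS (FORCE) update: keep the output error small at every step
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)            # Sherman-Morrison gain vector
    P -= np.outer(k, Pr)
    w -= (z - target[t]) * k
```

The defining property of FORCE is visible online: after a brief transient the error stays small throughout training, even though the feedback loop is never clamped.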

  8. Systematic fluctuation expansion for neural network activity equations

    PubMed Central

    Buice, Michael A.; Cowan, Jack D.; Chow, Carson C.

    2009-01-01

    Population rate or activity equations are the foundation of a common approach to modeling for neural networks. These equations provide mean field dynamics for the firing rate or activity of neurons within a network given some connectivity. The shortcoming of these equations is that they take into account only the average firing rate while leaving out higher order statistics like correlations between firing. A stochastic theory of neural networks which includes statistics at all orders was recently formulated. We describe how this theory yields a systematic extension to population rate equations by introducing equations for correlations and appropriate coupling terms. Each level of the approximation yields closed equations, i.e. they depend only upon the mean and specific correlations of interest, without an ad hoc criterion for doing so. We show in an example of an all-to-all connected network how our system of generalized activity equations captures phenomena missed by the mean field rate equations alone. PMID:19852585

  9. Social status modulates neural activity in the mentalizing network

    PubMed Central

    Muscatell, Keely A.; Morelli, Sylvia A.; Falk, Emily B.; Way, Baldwin M.; Pfeifer, Jennifer H.; Galinsky, Adam D.; Lieberman, Matthew D.; Dapretto, Mirella; Eisenberger, Naomi I.

    2013-01-01

    The current research explored the neural mechanisms linking social status to perceptions of the social world. Two fMRI studies provide converging evidence that individuals lower in social status are more likely to engage neural circuitry often involved in ‘mentalizing’ or thinking about others' thoughts and feelings. Study 1 found that college students' perception of their social status in the university community was related to neural activity in the mentalizing network (e.g., DMPFC, MPFC, precuneus/PCC) while encoding social information, with lower social status predicting greater neural activity in this network. Study 2 demonstrated that socioeconomic status, an objective indicator of global standing, predicted adolescents' neural activity during the processing of threatening faces, with individuals lower in social status displaying greater activity in the DMPFC, previously associated with mentalizing, and the amygdala, previously associated with emotion/salience processing. These studies demonstrate that social status is fundamentally and neurocognitively linked to how people process and navigate their social worlds. PMID:22289808

  10. Persistent Activity in Neural Networks with Dynamic Synapses

    PubMed Central

    Barak, Omri; Tsodyks, Misha

    2007-01-01

    Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One of the possible mechanisms that can underlie persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli. PMID:17319739
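The varying degrees of synaptic depression and facilitation referred to are commonly modeled with Tsodyks-Markram short-term synaptic dynamics; a minimal sketch, with illustrative parameter values, is:

```python
import numpy as np

def tm_synapse(spike_times, U=0.5, tau_d=0.2, tau_f=1.5):
    """Tsodyks-Markram short-term plasticity: x tracks available
    resources (depression), u tracks utilization (facilitation).
    Returns the relative synaptic efficacy u*x at each spike."""
    x, u, last = 1.0, U, None
    out = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_d)  # resources recover
            u = U + (u - U) * np.exp(-dt / tau_f)      # facilitation decays
        u = u + U * (1.0 - u)  # facilitation jump on spike arrival
        out.append(u * x)      # efficacy of this spike
        x = x * (1.0 - u)      # resources consumed by release
        last = t
    return out
```

Depending on the balance of tau_d and tau_f, the same spike train yields depressing or facilitating efficacy sequences, which is the combination the analysis varies.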

  11. Application of neural networks to seismic active control

    SciTech Connect

    Tang, Yu

    1995-07-01

    An exploratory study on seismic active control using an artificial neural network (ANN) is presented in which a single-degree-of-freedom (SDF) structural system is controlled by a trained neural network. A feed-forward neural network and the backpropagation training method are used in the study. In backpropagation training, the learning rate is determined by ensuring the decrease of the error function at each training cycle. The training patterns for the neural net are generated randomly. The trained ANN is then used to compute the control force according to the control algorithm. The control strategy proposed herein is to apply the control force at every time step to destroy the build-up of the system response. The ground motions considered in the simulations are the N21E and N69W components of the Lake Hughes No. 12 record from the earthquake that occurred in the San Fernando Valley in California on February 9, 1971. A significant reduction of the structural response, by one order of magnitude, is observed. It is also shown that the proposed control strategy can reduce the peak that occurs during the first few cycles of the time history. These promising results assert the potential of applying ANNs to active structural control under seismic loads.

  12. Decorrelation of Neural-Network Activity by Inhibitory Feedback

    PubMed Central

    Einevoll, Gaute T.; Diesmann, Markus

    2012-01-01

    Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel are perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population-averaged correlations in the linear network model: in purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between

  13. Detection of interplanetary activity using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Gothoskar, Pradeep; Khobragade, Shyam

    1995-12-01

    Early detection of interplanetary activity is important for associating, with better accuracy, interplanetary phenomena with solar activity and geomagnetic disturbances. However, the large number of interplanetary observations made every day requires extensive data analysis, leading to a delay in the detection of transient interplanetary activity. In particular, the interplanetary scintillation (IPS) observations made with the Ooty Radio Telescope (ORT) need extensive human effort to reduce the data and to model, often subjectively, the scintillation power spectra. We have implemented an artificial neural network (ANN) to detect interplanetary activity from the scintillation power spectra. The ANN was trained to detect disturbed power spectra, used as an indicator of interplanetary activity, and to recognize normal and strong-scattering spectra from a large database of IPS spectra. The network's classifications coincided with the experts' judgement of normal, disturbed and strong-scattering spectra in more than 80 per cent of cases. The neural network, when applied during the IPS mapping programme to provide early indication of interplanetary activity, would significantly help the ongoing efforts to predict geomagnetic disturbances.

  14. A neural network model for olfactory glomerular activity prediction

    NASA Astrophysics Data System (ADS)

    Soh, Zu; Tsuji, Toshio; Takiguchi, Noboru; Ohtake, Hisao

    2012-12-01

    Recently, the importance of odors and methods for their evaluation have seen increased emphasis, especially in the fragrance and food industries. Although odors can be characterized by their odorant components, their chemical information cannot be directly related to the flavors we perceive. Biological research has revealed that neuronal activity related to glomeruli (which form part of the olfactory system) is closely connected to odor qualities. Here we report on a neural network model of the olfactory system that can predict glomerular activity from odorant molecule structures. We also report on the learning and prediction ability of the proposed model.

  15. Lag Synchronization of Switched Neural Networks via Neural Activation Function and Applications in Image Encryption.

    PubMed

    Wen, Shiping; Zeng, Zhigang; Huang, Tingwen; Meng, Qinggang; Yao, Wei

    2015-07-01

    This paper investigates the problem of global exponential lag synchronization of a class of switched neural networks with time-varying delays via the neural activation function, with applications in image encryption. The controller depends on the output of the system because, for packed circuits, it is hard to measure the inner state; it is therefore critical to design the controller based on the neuron activation function. Comparing the results in this paper with existing ones shows that we improve and generalize the results derived in the previous literature. Several examples are given to illustrate the effectiveness of the approach and its potential applications in image encryption. PMID:25594985

  16. Patterns of Neural Activity in Networks with Complex Connectivity

    NASA Astrophysics Data System (ADS)

    Solla, Sara A.

    2008-03-01

    An understanding of emergent dynamics on complex networks requires investigating the interplay between the intrinsic dynamics of the node elements and the connectivity of the network in which they are embedded. In order to address some of these questions in a specific scenario of relevance to the dynamical states of neural ensembles, we have studied the collective behavior of excitable model neurons in a network with small-world topology. The small-world network has local lattice order, but includes a number of randomly placed connections that may provide connectivity shortcuts. This topology bears a schematic resemblance to the connectivity of the cerebral cortex, in which neurons are most strongly coupled to nearby cells within fifty to a hundred micrometers, but also make projections to cells millimeters away. We find that the dynamics of this small-world network of excitable neurons depend on both the density of shortcuts and the delay associated with neuronal projections. In the regime of low shortcut density, the system exhibits persistent activity in the form of propagating waves, which annihilate upon collision and are spawned anew via the re-injection of activity through shortcut connections. As the density of shortcuts reaches a critical value, the system undergoes a transition to failure. The critical shortcut density results from matching the time associated with a recurrent path through the network to an intrinsic recovery time of the individual neurons. Furthermore, if the delay associated with neuronal interactions is sufficiently long, activity reemerges above the critical density of shortcuts. The activity in this regime exhibits long, chaotic transients composed of noisy, large-amplitude population bursts.

  17. Natural lecithin promotes neural network complexity and activity.

    PubMed

    Latifi, Shahrzad; Tamayol, Ali; Habibey, Rouhollah; Sabzevari, Reza; Kahn, Cyril; Geny, David; Eftekharpour, Eftekhar; Annabi, Nasim; Blau, Axel; Linder, Michel; Arab-Tehrany, Elmira

    2016-01-01

    Phospholipids in the brain cell membranes contain different polyunsaturated fatty acids (PUFAs), which are critical to nervous system function and structure. In particular, brain function critically depends on the uptake of the so-called "essential" fatty acids such as omega-3 (n-3) and omega-6 (n-6) PUFAs that cannot be readily synthesized by the human body. We extracted natural lecithin rich in various PUFAs from a marine source and transformed it into nanoliposomes. These nanoliposomes increased neurite outgrowth, network complexity and neural activity of cortical rat neurons in vitro. We also observed an upregulation of synapsin I (SYN1), which supports the positive role of lecithin in synaptogenesis, synaptic development and maturation. These findings suggest that lecithin nanoliposomes enhance neuronal development, which may have an impact on devising new lecithin delivery strategies for therapeutic applications. PMID:27228907

  18. Natural lecithin promotes neural network complexity and activity

    PubMed Central

    Latifi, Shahrzad; Tamayol, Ali; Habibey, Rouhollah; Sabzevari, Reza; Kahn, Cyril; Geny, David; Eftekharpour, Eftekhar; Annabi, Nasim; Blau, Axel; Linder, Michel; Arab-Tehrany, Elmira

    2016-01-01

    Phospholipids in the brain cell membranes contain different polyunsaturated fatty acids (PUFAs), which are critical to nervous system function and structure. In particular, brain function critically depends on the uptake of the so-called “essential” fatty acids such as omega-3 (n-3) and omega-6 (n-6) PUFAs that cannot be readily synthesized by the human body. We extracted natural lecithin rich in various PUFAs from a marine source and transformed it into nanoliposomes. These nanoliposomes increased neurite outgrowth, network complexity and neural activity of cortical rat neurons in vitro. We also observed an upregulation of synapsin I (SYN1), which supports the positive role of lecithin in synaptogenesis, synaptic development and maturation. These findings suggest that lecithin nanoliposomes enhance neuronal development, which may have an impact on devising new lecithin delivery strategies for therapeutic applications. PMID:27228907

  19. Critical Branching Neural Networks

    ERIC Educational Resources Information Center

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes and pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  20. The optimization of force inputs for active structural acoustic control using a neural network

    NASA Technical Reports Server (NTRS)

    Cabell, R. H.; Lester, H. C.; Silcox, R. J.

    1992-01-01

    This paper investigates the use of a neural network to determine which force actuators, of a multi-actuator array, are best activated in order to achieve structural-acoustic control. The concept is demonstrated using a cylinder/cavity model on which the control forces, produced by piezoelectric actuators, are applied with the objective of reducing the interior noise. A two-layer neural network is employed and the back propagation solution is compared with the results calculated by a conventional, least-squares optimization analysis. The ability of the neural network to accurately and efficiently control actuator activation for interior noise reduction is demonstrated.

  1. Electronic Neural Networks

    NASA Technical Reports Server (NTRS)

    Thakoor, Anil

    1990-01-01

    Viewgraphs on electronic neural networks for space station are presented. Topics covered include: electronic neural networks; electronic implementations; VLSI/thin film hybrid hardware for neurocomputing; computations with analog parallel processing; features of neuroprocessors; applications of neuroprocessors; neural network hardware for terrain trafficability determination; a dedicated processor for path planning; neural network system interface; neural network for robotic control; error backpropagation algorithm for learning; resource allocation matrix; global optimization neuroprocessor; and electrically programmable read only thin-film synaptic array.

  2. Nested Neural Networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1992-01-01

    Report presents analysis of nested neural networks, consisting of interconnected subnetworks. Analysis based on simplified mathematical models more appropriate for artificial electronic neural networks, partly applicable to biological neural networks. Nested structure allows for retrieval of individual subpatterns. Requires fewer wires and connection devices than fully connected networks, and allows for local reconstruction of damaged subnetworks without rewiring entire network.

  3. Morphological neural networks

    SciTech Connect

    Ritter, G.X.; Sussner, P.

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron, or in performing the next layer of a neural network computation, involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
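The add-maximum computation described can be sketched directly: the layer below computes y_i = max_j (w_ij + x_j) in place of the usual weighted sum. The example values are illustrative.

```python
import numpy as np

def morphological_layer(x, W):
    """Morphological neural computation: multiplication is replaced
    by addition and summation by maximum, so each output is the
    maximum of (weight + input) terms. The result is nonlinear
    before any thresholding, unlike a linear layer."""
    return np.max(W + x[np.newaxis, :], axis=1)
```

The min-of-sums variant is obtained symmetrically by replacing np.max with np.min.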

  4. Information content of neural networks with self-control and variable activity

    NASA Astrophysics Data System (ADS)

    Bollé, D.; Amari, S. I.; Dominguez Carreta, D. R. C.; Massolo, G.

    2001-02-01

    A self-control mechanism for the dynamics of neural networks with variable activity is discussed using a recursive scheme for the time evolution of the local field. It is based upon the introduction of a self-adapting time-dependent threshold as a function of both the neural and pattern activity in the network. This mechanism leads to an improvement of the information content of the network as well as an increase of the storage capacity and the basins of attraction. Different architectures are considered and the results are compared with numerical simulations.

  5. Real-time Neural Network predictions of geomagnetic activity indices

    NASA Astrophysics Data System (ADS)

    Bala, R.; Reiff, P. H.

    2009-12-01

    The Boyle potential or the Boyle Index (BI), Φ (kV) = 10^-4 (v/(km/s))^2 + 11.7 (B/nT) sin^3(θ/2), is an empirically-derived formula that characterizes the Earth's polar cap potential and is readily derivable in real time from solar wind data provided by ACE (Advanced Composition Explorer). The BI has a simple form that combines a non-magnetic "viscous" component and a magnetic "merging" component to characterize the magnetospheric behavior in response to the solar wind. We have investigated its correlation with two conventional geomagnetic activity indices, Kp and AE. We have shown that the logarithms of both 3-hr and 1-hr averages of the BI correlate well with the subsequent Kp, Kp = 8.93 log10(BI) - 12.55, and that the 1-hr BI correlates with the subsequent log10(AE): log10(AE) = 1.78 log10(BI) - 3.6. We have developed a new set of algorithms based on Artificial Neural Networks (ANNs) suitable for short-term space weather forecasts, with an enhanced lead time and better accuracy in predicting Kp and AE than some leading models; the algorithms omit the time history of their targets and use only solar wind data. Inputs to our ANN models benefit from the BI and its proven record as a forecasting parameter since its initiation in October 2003. We have also performed time-sensitivity tests using cross-correlation analysis to demonstrate that our models are as efficient as those that incorporate the time history of the target indices in their inputs. Our algorithms can predict the upcoming full 3-hr Kp purely from solar wind data, achieving a linear correlation coefficient of 0.840, which means that they predict the upcoming Kp value on average to within 1.3 steps, approximately the resolution of the real-time Kp estimate. Our success in predicting Kp during a recent unexpected event (22 July ’09) is shown in the figure. Also, when predicting an equivalent "one hour Kp", the correlation coefficient is 0.86, meaning on average a prediction
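The BI formula and the quoted Kp fit are straightforward to evaluate; a sketch, with our own variable names and an illustrative solar wind sample, is:

```python
import numpy as np

def boyle_index(v_kms, b_nt, theta_rad):
    """Boyle Index in kV: 1e-4 * v^2 + 11.7 * B * sin^3(theta/2),
    with solar wind speed v in km/s, IMF magnitude B in nT, and
    IMF clock angle theta in radians."""
    return 1e-4 * v_kms**2 + 11.7 * b_nt * np.sin(theta_rad / 2.0)**3

def kp_from_bi(bi):
    """Empirical fit quoted in the abstract: Kp = 8.93*log10(BI) - 12.55."""
    return 8.93 * np.log10(bi) - 12.55
```

For a purely southward IMF (theta = π) the merging term is maximal; e.g. v = 450 km/s and B = 5 nT give BI = 78.75 kV and a predicted Kp of about 4.4.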

  6. Application of neural networks with orthogonal activation functions in control of dynamical systems

    NASA Astrophysics Data System (ADS)

    Nikolić, Saša S.; Antić, Dragan S.; Milojković, Marko T.; Milovanović, Miroslav B.; Perić, Staniša Lj.; Mitić, Darko B.

    2016-04-01

    In this article, we present a new method for the synthesis of almost and quasi-orthogonal polynomials of arbitrary order. Filters designed on the basis of these functions are generators of generalised quasi-orthogonal signals, for which we derive and present the necessary mathematical background. Based on the theoretical results, we designed and practically implemented a generalised first-order (k = 1) quasi-orthogonal filter and proved its quasi-orthogonality experimentally. The designed filters can be applied in many scientific areas. In this article, the generated functions were successfully implemented as activation functions in a Nonlinear Auto-Regressive eXogenous (NARX) neural network. One practical application of the designed orthogonal neural network is demonstrated through the control of a complex non-linear technical system, a laboratory magnetic levitation system. The obtained results were compared with neural networks using standard activation functions and orthogonal functions of trigonometric shape. The proposed network demonstrated superiority over existing solutions in terms of system performance.

  7. Fractal Patterns of Neural Activity Exist within the Suprachiasmatic Nucleus and Require Extrinsic Network Interactions

    PubMed Central

    Hu, Kun; Meijer, Johanna H.; Shea, Steven A.; vanderLeest, Henk Tjebbe; Pittman-Polletta, Benjamin; Houben, Thijs; van Oosterhout, Floor; Deboer, Tom; Scheer, Frank A. J. L.

    2012-01-01

    The mammalian central circadian pacemaker (the suprachiasmatic nucleus, SCN) contains thousands of neurons that are coupled through a complex network of interactions. In addition to the established role of the SCN in generating rhythms of ∼24 hours in many physiological functions, the SCN was recently shown to be necessary for normal self-similar/fractal organization of motor activity and heart rate over a wide range of time scales—from minutes to 24 hours. To test whether the neural network within the SCN is sufficient to generate such fractal patterns, we studied multi-unit neural activity of in vivo and in vitro SCNs in rodents. In vivo SCN-neural activity exhibited fractal patterns that are virtually identical in mice and rats and are similar to those in motor activity at time scales from minutes up to 10 hours. In addition, these patterns remained unchanged when the main afferent signal to the SCN, namely light, was removed. However, the fractal patterns of SCN-neural activity are not autonomous within the SCN as these patterns completely broke down in the isolated in vitro SCN despite persistence of circadian rhythmicity. Thus, SCN-neural activity is fractal in the intact organism and these fractal patterns require network interactions between the SCN and extra-SCN nodes. Such a fractal control network could underlie the fractal regulation observed in many physiological functions that involve the SCN, including motor control and heart rate regulation. PMID:23185285
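
    The abstract does not specify the authors' analysis pipeline, but self-similar/fractal patterns of this kind are commonly quantified with detrended fluctuation analysis (DFA), where a power law F(n) ~ n^alpha over a range of window sizes n indicates fractal organization. A minimal generic DFA sketch:

```python
import numpy as np

def dfa(signal, scales):
    """Detrended fluctuation analysis: returns the fluctuation F(n)
    for each window size n; F(n) ~ n**alpha signals fractal scaling."""
    y = np.cumsum(signal - np.mean(signal))        # integrated profile
    F = []
    for n in scales:
        rms = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)           # local linear trend
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)                  # stand-in time series
scales = np.array([16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(scales), np.log(dfa(white, scales)), 1)[0]
print(round(alpha, 2))   # ~0.5 for uncorrelated noise
```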

  8. Noise influence on spike activation in a Hindmarsh–Rose small-world neural network

    NASA Astrophysics Data System (ADS)

    Zhe, Sun; Micheletto, Ruggero

    2016-07-01

    We studied the role of noise in neural networks, focusing especially on its relation to the propagation of spike activity in a small-sized system. We set up a source of information using a single neuron that is constantly spiking. This element, called the initiator x_o, feeds spikes to the rest of the network, which is initially quiescent and subsequently reacts with vigorous spiking after a transitional period of time. We found that noise quickly suppresses the initiator's influence and favors spontaneous spike activity and, using a decibel representation of noise intensity, we established a linear relationship between noise amplitude and the interval between the initiator's first spike and the activation of the rest of the network. We studied the same process with networks of different sizes (numbers of neurons) and found that the initiator x_o has a measurable influence on small networks, but as the network grows in size, spontaneous spiking emerges and disrupts its effect on networks of more than about N = 100 neurons. This suggests that the mechanism of internal noise generation allows information transmission within a small neural neighborhood, but decays for larger network domains. We also analyzed the Fourier spectrum of the whole-network membrane potential and verified that noise provokes the reduction of the main θ and α peaks before the transition into chaotic spiking. However, increasing network size does not reproduce a similar phenomenon; instead we recorded a reduction in peak amplitude and sharper, better-defined Fourier peaks, but not the evident degeneration to chaos observed with increasing external noise. This work aims to contribute to the understanding of the fundamental mechanisms of propagation of spontaneous spiking in neural networks and gives a quantitative assessment of how noise can be used to control and modulate this phenomenon in Hindmarsh-Rose (H-R) neural networks.
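
    For readers unfamiliar with the model, a single Hindmarsh-Rose neuron with additive noise can be integrated in a few lines. This uses standard textbook parameters and omits the paper's small-world coupling:

```python
import numpy as np

def simulate_hr(T=20000, dt=0.01, I=3.2, noise=0.05, seed=0):
    """Euler-Maruyama integration of one Hindmarsh-Rose neuron
    (r=0.006, s=4, x_r=-1.6: a classic bursting regime)."""
    rng = np.random.default_rng(seed)
    x, y, z = -1.6, 0.0, 0.0
    xs = np.empty(T)
    for t in range(T):
        dx = y + 3.0 * x**2 - x**3 - z + I
        dy = 1.0 - 5.0 * x**2 - y
        dz = 0.006 * (4.0 * (x + 1.6) - z)
        x += dt * dx + noise * np.sqrt(dt) * rng.standard_normal()
        y += dt * dy
        z += dt * dz
        xs[t] = x
    return xs

v = simulate_hr()
spikes = np.sum((v[1:] >= 1.0) & (v[:-1] < 1.0))  # upward threshold crossings
print(spikes)   # > 0: the neuron bursts
```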

  9. Improved training of neural networks for the nonlinear active control of sound and vibration.

    PubMed

    Bouchard, M; Paillard, B; Le Dinh, C T

    1999-01-01

    Active control of sound and vibration has been the subject of much research in recent years, and examples of applications are now numerous. However, few practical implementations of nonlinear active controllers have been realized. Nonlinear active controllers may be required in cases where the actuators used in active control systems exhibit nonlinear characteristics, or where the structure to be controlled exhibits nonlinear behavior. A multilayer-perceptron neural-network based control structure was previously introduced as a nonlinear active controller, with a training algorithm based on an extended backpropagation scheme. This paper introduces new heuristic training algorithms for the same neural-network control structure. The objective is to develop new algorithms with faster convergence speed (by using nonlinear recursive-least-squares algorithms) and/or lower computational loads (by using an alternative approach to compute the instantaneous gradient of the cost function). Experimental results of active sound control using a nonlinear actuator with linear and nonlinear controllers are presented. The results show that some of the new algorithms can greatly improve the learning rate of the neural-network control structure, and that for the considered experimental setup a neural-network controller can outperform linear controllers. PMID:18252535
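
    As a point of reference for the linear controllers the paper compares against, the classic LMS adaptive filter (a generic sketch, not the authors' algorithm) cancels a correlated disturbance as follows:

```python
import numpy as np

# Minimal LMS adaptive canceller: learn weights w so that w applied to
# the reference signal x reproduces (and thus cancels) the disturbance d.
n_taps, mu, N = 8, 0.01, 20000
t = np.arange(N)
x = np.sin(2 * np.pi * 0.05 * t)        # reference (noise source)
d = 0.8 * np.roll(x, 2)                 # disturbance at the error sensor
w = np.zeros(n_taps)
err = np.zeros(N)
for n in range(n_taps, N):
    xn = x[n - n_taps:n][::-1]          # most recent samples first
    y = w @ xn                          # anti-noise output
    e = d[n] - y                        # residual at error sensor
    w += mu * e * xn                    # LMS weight update
    err[n] = e
print(np.mean(err[-1000:] ** 2))        # residual power after adaptation (near zero)
```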

  10. A consensual neural network

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Ersoy, O. K.; Swain, P. H.

    1991-01-01

    A neural network architecture called a consensual neural network (CNN) is proposed for the classification of data from multiple sources. Its relation to hierarchical and ensemble neural networks is discussed. CNN is based on the statistical consensus theory and uses nonlinearly transformed input data. The input data are transformed several times, and the different transformed data are applied as if they were independent inputs. The independent inputs are classified using stage neural networks and outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote-sensing data and geographic data are given.
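
    The consensus step can be sketched as a weighted combination of stage-network outputs. The weights below are illustrative assumptions; the paper derives its weights from statistical consensus theory:

```python
import numpy as np

def consensus_decision(stage_probs, weights):
    """Combine class-probability vectors from several stage networks
    with normalized weights and return the winning class."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    combined = sum(w * p for w, p in zip(weights, stage_probs))
    return int(np.argmax(combined)), combined

p1 = np.array([0.6, 0.3, 0.1])   # stage network 1 (one input transform)
p2 = np.array([0.2, 0.7, 0.1])   # stage network 2 (another transform)
label, probs = consensus_decision([p1, p2], weights=[0.3, 0.7])
print(label)   # 1
```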

  11. Global robust dissipativity of interval recurrent neural networks with time-varying delay and discontinuous activations.

    PubMed

    Duan, Lian; Huang, Lihong; Guo, Zhenyuan

    2016-07-01

    In this paper, the problems of robust dissipativity and robust exponential dissipativity are discussed for a class of recurrent neural networks with time-varying delay and discontinuous activations. We extend an invariance principle for the study of the dissipativity problem of delay systems to the discontinuous case. Based on the developed theory, some novel criteria for checking the global robust dissipativity and global robust exponential dissipativity of the addressed neural network model are established by constructing appropriate Lyapunov functionals and employing the theory of Filippov systems and matrix inequality techniques. The effectiveness of the theoretical results is shown by two examples with numerical simulations. PMID:27475061

  12. Global robust dissipativity of interval recurrent neural networks with time-varying delay and discontinuous activations

    NASA Astrophysics Data System (ADS)

    Duan, Lian; Huang, Lihong; Guo, Zhenyuan

    2016-07-01

    In this paper, the problems of robust dissipativity and robust exponential dissipativity are discussed for a class of recurrent neural networks with time-varying delay and discontinuous activations. We extend an invariance principle for the study of the dissipativity problem of delay systems to the discontinuous case. Based on the developed theory, some novel criteria for checking the global robust dissipativity and global robust exponential dissipativity of the addressed neural network model are established by constructing appropriate Lyapunov functionals and employing the theory of Filippov systems and matrix inequality techniques. The effectiveness of the theoretical results is shown by two examples with numerical simulations.

  13. Exploring neural network technology

    SciTech Connect

    Naser, J.; Maulbetsch, J.

    1992-12-01

    EPRI is funding several projects to explore neural network technology, a form of artificial intelligence that some believe may mimic the way the human brain processes information. This research seeks to provide a better understanding of fundamental neural network characteristics and to identify promising utility industry applications. Results to date indicate that the unique attributes of neural networks could lead to improved monitoring, diagnostic, and control capabilities for a variety of complex utility operations. 2 figs.

  14. Model for a flexible motor memory based on a self-active recurrent neural network.

    PubMed

    Boström, Kim Joris; Wagner, Heiko; Prieske, Markus; de Lussanet, Marc

    2013-10-01

    Using a recent recurrent network architecture based on the reservoir computing approach, we propose and numerically simulate a model of a flexible motor memory that stores elementary movement patterns in the synaptic weights of a neural network, so that the patterns can be retrieved at any time by simple static commands. The resulting motor memory is flexible in that it is capable of continuously modulating the stored patterns. The modulation consists in an approximately linear inter- and extrapolation, generating a large space of possible movements that have not been learned before. A recurrent network of a thousand neurons is trained in a manner that corresponds to a realistic exercising scenario, with experimentally measured muscular activations and with kinetic data representing proprioceptive feedback. The network is "self-active" in that it maintains a recurrent flow of activation even in the absence of input, a feature that resembles the "resting-state activity" found in the human and animal brain. The model involves the concept of "neural outsourcing", the permanent shifting of computational load from higher- to lower-level neural structures, which might help to explain why humans are able to execute learned skills in a fluent and flexible manner without attending to the details of the movement. PMID:24120277
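
    The reservoir computing approach the model builds on can be sketched with a minimal echo state network: a fixed random recurrent reservoir plus a trained linear readout. This is a generic illustration, far smaller than the thousand-neuron network in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(u):
    """Drive the fixed reservoir with input sequence u; collect states."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, ut in enumerate(u):
        x = np.tanh(W_in @ np.atleast_1d(ut) + W @ x)
        states[t] = x
    return states

# Train only the linear readout to predict the input one step ahead.
u = np.sin(0.2 * np.arange(400))
S = run_reservoir(u[:-1])
W_out, *_ = np.linalg.lstsq(S[100:], u[101:], rcond=None)  # skip warm-up
pred = S[100:] @ W_out
print(np.max(np.abs(pred - u[101:])))   # small in-sample one-step error
```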

  15. Model Of Neural Network With Creative Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Barhen, Jacob

    1993-01-01

    Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitably designed neural network; believed to be related to spontaneity and creativity of biological neural networks.

  16. Model neural networks

    SciTech Connect

    Kepler, T.B.

    1989-01-01

    After a brief introduction to the techniques and philosophy of neural network modeling by spin-glass-inspired systems, the author investigates several properties of these discrete models for autoassociative memory. Memories are represented as patterns of neural activity; their traces are stored in a distributed manner in the matrix of synaptic coupling strengths. Recall is dynamic: an initial state containing partial information about one of the memories evolves toward that memory. Activity in each neuron creates fields at every other neuron, the sum total of which determines its activity. By averaging over the space of interaction matrices, with memory constraints enforced by the choice of measure, he shows that there exist universality classes defined by families of field distributions, with associated network capacities. He demonstrates the dominant role played by the field distribution in determining the size of the domains of attraction and presents, in two independent ways, an expression for this size. He presents a class of convergent learning algorithms which improve upon known algorithms for producing such interaction matrices. He demonstrates that spurious states, or unexperienced memories, may be practically suppressed by the inducement of n-cycles and chaos. He investigates aspects of chaos in these systems, and then leaves discrete modeling to analyze chaotic behavior on a continuous-valued network realized in electronic hardware. In each section he combines analytical calculations and computer simulations.
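
    The Hebbian storage and dynamic recall described above can be sketched in a few lines. This is a textbook Hopfield-style sketch, not the thesis' specific interaction matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))
J = (patterns.T @ patterns) / N          # Hebb rule: traces stored in couplings
np.fill_diagonal(J, 0.0)

def recall(state, steps=10):
    """Synchronous dynamics: the field at each neuron sets its activity."""
    for _ in range(steps):
        state = np.sign(J @ state)
        state[state == 0] = 1
    return state

# Start from a corrupted copy of pattern 0 (10% of neurons flipped).
probe = patterns[0].astype(float)
flip = rng.choice(N, size=10, replace=False)
probe[flip] *= -1
overlap = (recall(probe) @ patterns[0]) / N
print(overlap)   # close to 1: the memory is retrieved
```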

  17. Forecast and restoration of geomagnetic activity indices by using the software-computational neural network complex

    NASA Astrophysics Data System (ADS)

    Barkhatov, Nikolay; Revunov, Sergey

    2010-05-01

    It is known that the currently used indices of geomagnetic activity reflect, to some extent, the physical processes that occur when the perturbed solar wind interacts with the Earth's magnetosphere. The indices are therefore connected to each other and to the parameters of near-Earth space, and establishing such nonlinear connections is of interest. For such purposes, when the physical problem is complex or has many parameters, the technology of artificial neural networks is applied. This approach is used to develop an automated method for forecasting and restoring geomagnetic activity indices, implemented as a software-computational neural network complex. Each neural network experiment carried out with this complex aims to find a specific nonlinear relation between the analyzed indices and parameters. At the core of the program is a scheme combining artificial neural networks (ANNs) of different types: a back-propagation Elman network, a feed-forward network, a fuzzy-logic network and a Kohonen classification layer. The settings in the main window of the application allow the user to change the number of hidden layers, the number of neurons per layer, the input and target data, and the number of training cycles. The progress and quality of ANN training are shown by a dynamic plot of the training error; the result of training is a plot comparing the network response with the test sequence. The last-trained neural network, with its established nonlinear connection, can be rerun for repeated numerical experiments: no additional training is performed, the input parameters are passed through the previously trained network as a filter, and the outputs are compared with the test event. To set up a large number of different experiments, the ability to run the program in a "batch" mode is provided. For this purpose the user a

  18. Mutual information and self-control of a fully-connected low-activity neural network

    NASA Astrophysics Data System (ADS)

    Bollé, D.; Carreta, D. Dominguez

    2000-11-01

    A self-control mechanism for the dynamics of a three-state fully connected neural network is studied through the introduction of a time-dependent threshold. The self-adapting threshold is a function of both the neural and the pattern activity in the network. The time evolution of the order parameters is obtained on the basis of a recently developed dynamical recursive scheme. In the limit of low activity the mutual information is shown to be the relevant parameter in order to determine the retrieval quality. Due to self-control an improvement of this mutual information content as well as an increase of the storage capacity and an enlargement of the basins of attraction are found. These results are compared with numerical simulations.

  19. Neural networks for aircraft control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis

    1990-01-01

    Current research in Artificial Neural Networks indicates that networks offer some potential advantages in adaptation and fault tolerance. This research is directed at determining the possible applicability of neural networks to aircraft control. The first application will be to aircraft trim. Neural network node characteristics, network topology and operation, neural network learning and example histories using neighboring optimal control with a neural net are discussed.

  20. Integration of Optical Manipulation and Electrophysiological Tools to Modulate and Record Activity in Neural Networks

    NASA Astrophysics Data System (ADS)

    Difato, F.; Schibalsky, L.; Benfenati, F.; Blau, A.

    2011-07-01

    We present an optical system that combines IR (1064 nm) holographic optical tweezers with a sub-nanosecond-pulsed UV (355 nm) laser microdissector for the optical manipulation of single neurons and entire networks both on transparent and non-transparent substrates in vitro. The phase-modulated laser beam can illuminate the sample concurrently or independently from above or below assuring compatibility with different types of microelectrode array and patch-clamp electrophysiology. By combining electrophysiological and optical tools, neural activity in response to localized stimuli or injury can be studied and quantified at sub-cellular, cellular, and network level.

  1. Active vibration control of flexible cantilever plates using piezoelectric materials and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Abdeljaber, Osama; Avci, Onur; Inman, Daniel J.

    2016-02-01

    The study presented in this paper introduces a new intelligent methodology to mitigate the vibration response of flexible cantilever plates. The use of piezoelectric sensor/actuator pairs for active control of plates is discussed. An intelligent neural-network based controller is designed to compute the optimal voltage applied to the piezoelectric patches. The control technique utilizes a neurocontroller along with a Kalman filter to compute the appropriate actuator command. The neurocontroller is trained with an algorithm that incorporates a set of emulator neural networks, which are themselves trained to predict the future response of the cantilever plate. The neurocontroller is then evaluated by comparing the uncontrolled and controlled responses under several types of dynamic excitation. It is observed that the neurocontroller reduces the vibration response of the flexible cantilever plate significantly; the results demonstrate the success and robustness of the neurocontroller independent of the type and distribution of the excitation force.

  2. Optimal Recognition Method of Human Activities Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Oniga, Stefan; József, Sütő

    2015-12-01

    The aim of this research is an exhaustive analysis of the various factors that may influence the recognition rate of human activity from wearable sensor data. We ran a total of 1674 simulations on a publicly released human activity database collected by a group of researchers from the University of California at Berkeley. In a previous study, we analyzed the influence of the number of sensors and their placement. In the present research we have examined the influence of the number of sensor nodes, the type of sensor node, preprocessing algorithms, and the type of classifier and its parameters. The final purpose is to find the optimal setup for the best recognition rates with the lowest hardware and software costs.

  3. Analytically tractable studies of traveling waves of activity in integrate-and-fire neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Osan, Remus

    2016-05-01

    In contrast to other large-scale network models for the propagation of electrical activity in neural tissue, which have no analytical solutions for their dynamics, we show that for a specific class of integrate-and-fire neural networks the acceleration depends quadratically on the instantaneous speed of activity propagation. We use this property to analytically compute the network spike dynamics and to highlight the emergence of a natural time scale for the evolution of the traveling waves. These results allow us to examine other applications of this model, such as the effect that a nonconductive gap of tissue has on further activity propagation. Furthermore, we show that activity propagation also depends on local conditions for other, more general connectivity functions, by converting the evolution equations for the network dynamics into a low-dimensional system of ordinary differential equations. This approach greatly enhances our intuition into the mechanisms of traveling-wave evolution and significantly reduces the simulation time for this class of models.
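
    The quadratic dependence of the acceleration on the instantaneous speed makes the wave-front dynamics solvable in closed form. As a sketch only (the constant k and its sign depend on network parameters the abstract does not give):

```latex
\frac{dv}{dt} = k\,v^{2}
\;\Longrightarrow\;
\int_{v_0}^{v(t)}\frac{dv}{v^{2}} = k\,t
\;\Longrightarrow\;
v(t) = \frac{v_{0}}{1 - k\,v_{0}\,t},
```

    so the propagation speed evolves on the natural time scale τ = 1/|k v₀|, consistent with the emergent time scale referred to in the abstract.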

  4. Analytically tractable studies of traveling waves of activity in integrate-and-fire neural networks.

    PubMed

    Zhang, Jie; Osan, Remus

    2016-05-01

    In contrast to other large-scale network models for the propagation of electrical activity in neural tissue, which have no analytical solutions for their dynamics, we show that for a specific class of integrate-and-fire neural networks the acceleration depends quadratically on the instantaneous speed of activity propagation. We use this property to analytically compute the network spike dynamics and to highlight the emergence of a natural time scale for the evolution of the traveling waves. These results allow us to examine other applications of this model, such as the effect that a nonconductive gap of tissue has on further activity propagation. Furthermore, we show that activity propagation also depends on local conditions for other, more general connectivity functions, by converting the evolution equations for the network dynamics into a low-dimensional system of ordinary differential equations. This approach greatly enhances our intuition into the mechanisms of traveling-wave evolution and significantly reduces the simulation time for this class of models. PMID:27300901

  5. Average synaptic activity and neural networks topology: a global inverse problem

    NASA Astrophysics Data System (ADS)

    Burioni, Raffaella; Casartelli, Mario; di Volo, Matteo; Livi, Roberto; Vezzani, Alessandro

    2014-03-01

    The dynamics of neural networks is often characterized by collective behavior and quasi-synchronous events, where a large fraction of neurons fire in short time intervals, separated by uncorrelated firing activity. These global temporal signals are crucial for brain functioning. They strongly depend on the topology of the network and on the fluctuations of the connectivity. We propose a heterogeneous mean-field approach to neural dynamics on random networks, that explicitly preserves the disorder in the topology at growing network sizes, and leads to a set of self-consistent equations. Within this approach, we provide an effective description of microscopic and large scale temporal signals in a leaky integrate-and-fire model with short term plasticity, where quasi-synchronous events arise. Our equations provide a clear analytical picture of the dynamics, evidencing the contributions of both periodic (locked) and aperiodic (unlocked) neurons to the measurable average signal. In particular, we formulate and solve a global inverse problem of reconstructing the in-degree distribution from the knowledge of the average activity field. Our method is very general and applies to a large class of dynamical models on dense random networks.

  6. Wrestling model of the repertoire of activity propagation modes in quadruple neural networks.

    PubMed

    Shteingart, Hanan; Raichman, Nadav; Baruchi, Itay; Ben-Jacob, Eshel

    2010-01-01

    The spontaneous activity of engineered quadruple cultured neural networks (of four coupled sub-networks) exhibits a repertoire of different types of mutual synchronization events. Each event corresponds to a specific activity propagation mode (APM), defined by the order of activity propagation between the sub-networks. We statistically characterized the frequency of spontaneous appearance of the different types of APMs. The relative frequencies of the APMs were then examined for their power-law properties. We found that the frequencies of appearance of the leading (most frequent) APMs have a close-to-constant algebraic ratio, reminiscent of Zipf's scaling of words. We show that the observations are consistent with a simplified "wrestling" model. This model extends the "boxing arena" model previously proposed to describe the ratio between the two activity modes in two coupled sub-networks. The new element in the "wrestling" model presented here is that the firing within each network is modeled by a time-interval generator with a similar intra-network Lévy distribution. We modeled the interaction of the different burst-initiation zones as competition between the stochastic generators, with Gaussian inter-network variability. Estimation of the model parameters revealed similarity across different cultures, while the inter-burst interval of the cultures was similar across different APMs, as numerical simulation of the model predicts. PMID:20890451

  7. Neural network applications

    NASA Technical Reports Server (NTRS)

    Padgett, Mary L.; Desai, Utpal; Roppel, T.A.; White, Charles R.

    1993-01-01

    A design procedure is suggested for neural networks which accommodates the inclusion of such knowledge-based systems techniques as fuzzy logic and pairwise comparisons. The use of these procedures in the design of applications combines qualitative and quantitative factors with empirical data to yield a model with justifiable design and parameter selection procedures. The procedure is especially relevant to areas of back-propagation neural network design which are highly responsive to the use of precisely recorded expert knowledge.

  8. Artificial neural network and multiple regression model for nickel(II) adsorption on powdered activated carbons.

    PubMed

    Hema, M; Srinivasan, K

    2011-07-01

    The nickel removal efficiency of powdered activated carbons from coconut oilcake, neem oilcake and commercial carbon was investigated using an artificial neural network. The effective parameters for the removal of nickel (%R) by adsorption, namely the pH, contact time (T), type of activated carbon (Cn), amount of activated carbon (Cw) and initial nickel concentration (Co), were investigated. A Levenberg-Marquardt (LM) back-propagation algorithm was used to train the network. The network topology was optimized by varying the number of hidden layers and the number of neurons per hidden layer. The model was developed using training, validation and test subsets containing 60%, 20% and 20% of the experimental data, respectively. A multiple regression equation was also developed for the nickel adsorption system, and its output was compared with both simulated and experimental outputs. The standard deviation (SD) with respect to the experimental output was considerably higher for the regression model than for the ANN model. The experimental data were best fitted by the artificial neural network. PMID:23029923
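
    The data handling described above can be sketched as follows, with synthetic stand-in data. The multiple regression baseline is fitted by least squares; the ANN itself, trained with Levenberg-Marquardt backpropagation, is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
X = rng.uniform(size=(n, 5))          # stand-ins for pH, T, Cn, Cw, Co
coef_true = np.array([5.0, 1.0, 2.0, 3.0, -4.0])
y = X @ coef_true + 0.1 * rng.standard_normal(n)   # synthetic %R

# 60% / 20% / 20% train / validation / test split, as in the paper.
idx = rng.permutation(n)
tr, va, te = idx[:120], idx[120:160], idx[160:]

# Multiple linear regression baseline via least squares (with intercept).
A = np.hstack([X[tr], np.ones((len(tr), 1))])
beta, *_ = np.linalg.lstsq(A, y[tr], rcond=None)

A_te = np.hstack([X[te], np.ones((len(te), 1))])
rmse = np.sqrt(np.mean((A_te @ beta - y[te]) ** 2))
print(round(rmse, 3))   # close to the 0.1 noise level
```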

  9. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    PubMed

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation. PMID:26797612

  10. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    PubMed Central

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612

  11. Using convolutional neural networks for human activity classification on micro-Doppler radar spectrograms

    NASA Astrophysics Data System (ADS)

    Jordan, Tyler S.

    2016-05-01

    This paper presents the findings of using convolutional neural networks (CNNs) to classify human activity from micro-Doppler features. Activities involving potential security threats, such as holding a gun, are emphasized. An automotive 24 GHz radar-on-chip was used to collect the data, and a CNN (normally applied to image classification) was trained on the resulting spectrograms. The CNN achieves an error rate of 1.65% on classifying running vs. walking, 17.3% on armed walking vs. unarmed walking, and 22% on classifying six different actions.
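
    A micro-Doppler spectrogram of the kind fed to the CNN is a magnitude short-time Fourier transform of the radar return. A generic sketch, with a synthetic frequency-modulated signal standing in for the radar data:

```python
import numpy as np

def spectrogram(signal, n_fft=128, hop=32):
    """Magnitude STFT: windowed frames -> FFT -> (freq bins, time frames)."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T

# Synthetic "micro-Doppler": a carrier with sinusoidal frequency
# modulation, mimicking a periodically swinging limb.
fs, T = 2000, 2.0
t = np.arange(int(fs * T)) / fs
x = np.cos(2 * np.pi * (200 * t + 30 * np.sin(2 * np.pi * 2 * t)))
S = spectrogram(x)
print(S.shape)   # (65, 121): n_fft//2 + 1 frequency bins x time frames
```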

  12. Active Control of Wind-Tunnel Model Aeroelastic Response Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.

    2000-01-01

    NASA Langley Research Center, Hampton, VA 23681 Under a joint research and development effort conducted by the National Aeronautics and Space Administration and The Boeing Company (formerly McDonnell Douglas) three neural-network based control systems were developed and tested. The control systems were experimentally evaluated using a transonic wind-tunnel model in the Langley Transonic Dynamics Tunnel. One system used a neural network to schedule flutter suppression control laws, another employed a neural network in a predictive control scheme, and the third employed a neural network in an inverse model control scheme. All three of these control schemes successfully suppressed flutter to or near the limits of the testing apparatus, and represent the first experimental applications of neural networks to flutter suppression. This paper will summarize the findings of this project.

  13. Implications of the Dependence of Neuronal Activity on Neural Network States for the Design of Brain-Machine Interfaces

    PubMed Central

    Panzeri, Stefano; Safaai, Houman; De Feo, Vito; Vato, Alessandro

    2016-01-01

    Brain-machine interfaces (BMIs) can improve the quality of life of patients with sensory and motor disabilities both by decoding motor intentions expressed by neural activity and by encoding artificially sensed information into patterns of neural activity elicited by causal interventions on the neural tissue. Yet, current BMIs can exchange relatively small amounts of information with the brain. This problem has proved difficult to overcome by simply increasing the number of recording or stimulating electrodes, because trial-to-trial variability of neural activity partly arises from intrinsic factors (collectively known as the network state) that include ongoing spontaneous activity and neuromodulation, and so is shared among neurons. Here we review recent progress in characterizing the state dependence of neural responses, in particular how neural responses depend on endogenous slow fluctuations of network excitability. We then elaborate on how this knowledge may be used to increase the amount of information that BMIs exchange with the brain. Knowledge of network state can be used to fine-tune the stimulation pattern that should reliably elicit a target neural response used to encode information in the brain, and to discount part of the trial-by-trial variability of neural responses, so that they can be decoded more accurately. PMID:27147955

  14. Hyperbolic Hopfield neural networks.

    PubMed

    Kobayashi, M

    2013-02-01

    In recent years, several neural networks using Clifford algebra have been studied. Clifford algebra is also called geometric algebra. Complex-valued Hopfield neural networks (CHNNs) are the most popular neural networks using Clifford algebra. The aim of this brief is to construct hyperbolic HNNs (HHNNs) as an analog of CHNNs. Hyperbolic algebra is a Clifford algebra based on Lorentzian geometry. In this brief, a hyperbolic neuron is defined in a manner analogous to a phasor neuron, which is a typical complex-valued neuron model. HHNNs share common concepts with CHNNs, such as the angle and energy. However, HHNNs and CHNNs are different in several aspects. The states of hyperbolic neurons do not form a circle, and, therefore, the start and end states are not identical. In the quantized version, unlike complex-valued neurons, hyperbolic neurons have an infinite number of states. PMID:24808287
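
    The hyperbolic (split-complex) arithmetic behind hyperbolic neurons can be sketched briefly; the function names are mine and the values illustrative. Unlike the complex unit i (i² = -1), the hyperbolic unit h satisfies h² = +1, so unit-"norm" states lie on a hyperbola rather than a circle, which is why the states do not close up and a quantized hyperbolic neuron has infinitely many admissible states:

```python
import numpy as np

def hmul(u, v):
    """Multiply split-complex (hyperbolic) numbers a + hb, where h*h = +1.

    Each number is a length-2 array (a, b); contrast with complex
    numbers, where the imaginary unit squares to -1.
    """
    a, b = u
    c, d = v
    return np.array([a * c + b * d, a * d + b * c])

def hnorm(u):
    """Lorentzian 'norm' a^2 - b^2; it is multiplicative, like |z|^2 in C."""
    a, b = u
    return a * a - b * b

def hyperbolic_state(t):
    """Unit-'norm' states cosh(t) + h*sinh(t) lie on a hyperbola, so the
    parameter t ranges over all reals and the start and end states of the
    state set never coincide, unlike phasor states on the unit circle."""
    return np.array([np.cosh(t), np.sinh(t)])
```

Multiplying two unit states adds their parameters (the hyperbolic analogue of adding phasor angles), which is the property HHNNs share with complex-valued Hopfield networks.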

  15. Human activities recognition by head movement using partial recurrent neural network

    NASA Astrophysics Data System (ADS)

    Tan, Henry C. C.; Jia, Kui; De Silva, Liyanage C.

    2003-06-01

    Traditionally, human activity recognition has been achieved mainly by statistical pattern recognition methods or the Hidden Markov Model (HMM). In this paper, we propose a novel use of the connectionist approach for the recognition of ten simple human activities (walking, sitting down, getting up, squatting down and standing up, each in both lateral and frontal views) in an office environment. The system tracks the head movement of the subjects over consecutive frames from a database of color image sequences, and an Elman-model partial recurrent neural network (RNN) learns the sequential patterns of relative change of the head location in the images. The proposed system robustly classifies all ten activities performed by unseen subjects of both sexes and of different race and physique, with a recognition rate as high as 92.5%. This demonstrates the potential of employing partial RNNs to recognize complex activities in the increasingly popular human-activities-based applications.

  16. Nested neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1988-01-01

    Nested neural networks, consisting of small interconnected subnetworks, allow for the storage and retrieval of neural state patterns of different sizes. The subnetworks are naturally categorized by layers corresponding to spatial frequencies in the pattern field. The storage capacity and the error-correction capability of the subnetworks generally increase with the degree of connectivity between layers (the nesting degree). Storing only a few subpatterns in each subnetwork results in a vast storage capacity for patterns and subpatterns in the nested network, while maintaining high stability and error-correction capability.

  17. Can the activities of the large scale cortical network be expressed by neural energy? A brief review.

    PubMed

    Wang, Rubin; Zhu, Yating

    2016-02-01

    This paper discusses and summarizes how changes of biological energy in the brain can be expressed by the biophysical energy model we constructed. Unlike electrochemical energy, the biophysical energy proposed in this paper can be used to simulate not only the activity of single neurons but also the neural activity of large-scale cortical networks, and on this basis the scientific nature of neural energy coding is discussed. PMID:26834857

  18. Molecular Fingerprint-based Artificial Neural Networks QSAR for Ligand Biological Activity Predictions

    PubMed Central

    Myint, Kyaw-Zeyar; Wang, Lirong; Tong, Qin; Xie, Xiang-Qun

    2012-01-01

    In this manuscript, we report a novel 2D fingerprint-based artificial neural network QSAR (FANN-QSAR) method to effectively predict biological activities of structurally diverse chemical ligands. Three different types of fingerprints, namely ECFP6, FP2 and MACCS, were used in FANN-QSAR algorithm development, and FANN-QSAR models were compared to known 3D and 2D QSAR methods using five previously reported data sets. In addition, the derived models were used to predict GPCR cannabinoid ligand binding affinities using our manually curated cannabinoid ligand database containing 1699 structurally diverse compounds with reported cannabinoid receptor subtype CB2 activities. To demonstrate its useful applications, the established FANN-QSAR algorithm was used as a virtual screening tool to search a large NCI compound database for lead cannabinoid compounds, and we discovered several compounds with good CB2 binding affinities ranging from 6.70 nM to 3.75 μM. To the best of our knowledge, this is the first report of a fingerprint-based neural network approach validated with a successful virtual screening application in identifying lead compounds. The studies proved that the FANN-QSAR method is a useful approach to predict bioactivities or properties of ligands and to find novel lead compounds for drug discovery research. PMID:22937990
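
    A hedged sketch of the FANN-QSAR idea using synthetic random bit vectors in place of real ECFP6/FP2/MACCS fingerprints; the network size, learning rate and data below are assumptions for illustration only, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for 2-D fingerprints (bit vectors) and activity labels.
n_ligands, n_bits = 200, 166          # 166 echoes MACCS keys; data is random here
X = (rng.random((n_ligands, n_bits)) < 0.1).astype(float)
true_w = rng.standard_normal(n_bits)
y = X @ true_w + 0.1 * rng.standard_normal(n_ligands)

# One-hidden-layer regression network trained by plain gradient descent.
H = 32
W1 = rng.standard_normal((n_bits, H)) * 0.05
W2 = rng.standard_normal(H) * 0.05

def forward(X):
    hidden = np.tanh(X @ W1)
    return hidden, hidden @ W2

losses = []
lr = 0.01
for _ in range(300):
    hidden, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation through the two weight matrices.
    gW2 = hidden.T @ err / n_ligands
    ghid = np.outer(err, W2) * (1 - hidden ** 2)
    gW1 = X.T @ ghid / n_ligands
    W2 -= lr * gW2
    W1 -= lr * gW1
```

The same fitted network can then score unseen fingerprints, which is the mechanism behind using the model as a virtual screening filter.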

  19. Dynamics on Networks: The Role of Local Dynamics and Global Networks on the Emergence of Hypersynchronous Neural Activity

    PubMed Central

    Schmidt, Helmut; Petkov, George; Richardson, Mark P.; Terry, John R.

    2014-01-01

    Graph theory has evolved into a useful tool for studying complex brain networks inferred from a variety of measures of neural activity, including fMRI, DTI, MEG and EEG. In the study of neurological disorders, recent work has discovered differences in the structure of graphs inferred from patient and control cohorts. However, most of these studies pursue a purely observational approach; identifying correlations between properties of graphs and the cohort which they describe, without consideration of the underlying mechanisms. To move beyond this necessitates the development of computational modeling approaches to appropriately interpret network interactions and the alterations in brain dynamics they permit, which in the field of complexity sciences is known as dynamics on networks. In this study we describe the development and application of this framework using modular networks of Kuramoto oscillators. We use this framework to understand functional networks inferred from resting state EEG recordings of a cohort of 35 adults with heterogeneous idiopathic generalized epilepsies and 40 healthy adult controls. Taking emergent synchrony across the global network as a proxy for seizures, our study finds that the critical strength of coupling required to synchronize the global network is significantly decreased for the epilepsy cohort for functional networks inferred from both theta (3–6 Hz) and low-alpha (6–9 Hz) bands. We further identify left frontal regions as a potential driver of seizure activity within these networks. We also explore the ability of our method to identify individuals with epilepsy, observing up to 80% predictive power through use of receiver operating characteristic analysis. Collectively these findings demonstrate that a computer model based analysis of routine clinical EEG provides significant additional information beyond standard clinical interpretation, which should ultimately enable a more appropriate mechanistic stratification of people
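
    The dynamics-on-networks framework can be illustrated with a small Kuramoto simulation; the all-to-all coupling, oscillator count and integration settings below are simplifying assumptions (the paper uses modular networks inferred from EEG). The synchrony order parameter r rises sharply once the coupling K exceeds a critical strength:

```python
import numpy as np

rng = np.random.default_rng(2)

# A small population of Kuramoto oscillators on a fully connected graph.
N = 50
omega = rng.standard_normal(N)           # natural frequencies
A = np.ones((N, N)) - np.eye(N)          # adjacency matrix (all-to-all here)
theta0 = rng.uniform(0.0, 2.0 * np.pi, N)

def order_parameter(K, steps=3000, dt=0.01):
    """Integrate dtheta_i/dt = omega_i + (K/N) * sum_j A_ij sin(theta_j - theta_i)
    with forward Euler and return the time-averaged synchrony r in [0, 1]."""
    theta = theta0.copy()
    rs = []
    for step in range(steps):
        diff = theta[None, :] - theta[:, None]   # diff[i, j] = theta_j - theta_i
        theta += dt * (omega + (K / N) * np.sum(A * np.sin(diff), axis=1))
        if step >= steps // 2:                   # average over the second half
            rs.append(abs(np.exp(1j * theta).mean()))
    return float(np.mean(rs))

# Below vs. above the critical coupling: incoherence vs. global synchrony,
# the paper's proxy for seizure emergence.
r_weak = order_parameter(K=0.5)
r_strong = order_parameter(K=4.0)
```

Lowering the critical K for a cohort's networks, as the paper reports for the epilepsy group, means the same physiological coupling is more likely to tip the system into the synchronized regime.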

  20. Neural Networks and Micromechanics

    NASA Astrophysics Data System (ADS)

    Kussul, Ernst; Baidyk, Tatiana; Wunsch, Donald C.

    The title of the book, "Neural Networks and Micromechanics," seems artificial. However, the scientific and technological developments in recent decades demonstrate a very close connection between the two different areas of neural networks and micromechanics. The purpose of this book is to demonstrate this connection. Some artificial intelligence (AI) methods, including neural networks, could be used to improve automation system performance in manufacturing processes. However, the implementation of these AI methods within industry is rather slow because of the high cost of conducting experiments using conventional manufacturing and AI systems. To lower the cost, we have developed special micromechanical equipment that is similar to conventional mechanical equipment but of much smaller size and therefore of lower cost. This equipment could be used to evaluate different AI methods in an easy and inexpensive way. The proved methods could be transferred to industry through appropriate scaling. In this book, we describe the prototypes of low cost microequipment for manufacturing processes and the implementation of some AI methods to increase precision, such as computer vision systems based on neural networks for microdevice assembly and genetic algorithms for microequipment characterization and the increase of microequipment precision.

  1. Generalized Adaptive Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  2. Improved Autoassociative Neural Networks

    NASA Technical Reports Server (NTRS)

    Hand, Charles

    2003-01-01

    Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application-specific integrated circuit would be needed. A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with weights associated with its input channel. Setting the weights sets the behavior of the network. The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is a quantized phenomenon for most autoassociative networks in the sense that time proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described with a single binary vector. As time goes by, the network changes the vector. Autoassociative networks move vectors over hyperspace landscapes of possibilities.
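
    The autoassociative dynamics described above (each neuron's state set by the inner product of its inputs with its weights, discrete time steps, a single binary state vector) can be sketched directly with Hebbian weights; the network size and corruption level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hebbian storage: each pattern is a +/-1 vector; the weight matrix is the
# sum of outer products with the diagonal zeroed (no self-connections).
N = 64
patterns = rng.choice([-1, 1], size=(2, N))
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Synchronous updates: each neuron takes the sign of the inner product
    of the current state vector with its input weights."""
    v = state.copy()
    for _ in range(steps):
        v = np.where(W @ v >= 0, 1, -1)
    return v

# Corrupt a stored pattern in 5 positions, then let the network settle back.
probe = patterns[0].copy()
probe[:5] *= -1
recovered = recall(probe)
```

The corrupted vector is pulled back toward the nearest stored pattern, which is the "landscape of possibilities" picture in the abstract.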

  3. Experimental investigation of active vibration control using neural networks and piezoelectric actuators

    NASA Astrophysics Data System (ADS)

    Jha, Ratneshwar; Rower, Jacob

    2002-02-01

    The use of neural networks for identification and control of smart structures is investigated experimentally. Piezoelectric actuators are employed to suppress the vibrations of a cantilevered plate subject to impulse, sine wave and band-limited white noise disturbances. The neural networks used are multilayer perceptrons trained with error backpropagation. Validation studies show that the identifier predicts the system dynamics accurately. The controller is trained adaptively with the help of the neural identifier. Experimental results demonstrate excellent closed-loop performance and robustness of the neurocontroller.

  4. Where's the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network.

    PubMed

    Hartmann, Christoph; Lazar, Andreea; Nessler, Bernhard; Triesch, Jochen

    2015-12-01

    Even in the absence of sensory stimulation the brain is spontaneously active. This background "noise" seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network's spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network's behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural responses can be

  5. Determination of DPPH free radical scavenging activity: application of artificial neural networks.

    PubMed

    Musa, Khalid Hamid; Abdullah, Aminah; Al-Haiqi, Ahmed

    2016-03-01

    A new computational approach for the determination of 2,2-diphenyl-1-picrylhydrazyl free radical scavenging activity (DPPH-RSA) in food is reported, based on the concept of machine learning. Trolox standard was mixed with DPPH at different concentrations to produce colors ranging from purple to yellow. An artificial neural network (ANN) was trained on a typical set of images of the DPPH radical reacting with different levels of Trolox. This allowed the neural network to classify future images of any sample into the correct class of RSA level. The ANN was then able to determine the DPPH-RSA of cinnamon, clove, mung bean, red bean, red rice, brown rice, black rice and tea extract, and the results were compared with data obtained using a spectrophotometer. The ANN results correlated well with the classical spectrophotometric procedure; the approach thus does not require a spectrophotometer, and it can be used to obtain semi-quantitative DPPH-RSA results. PMID:26471610

  6. DETECTING ACTIVE GALACTIC NUCLEI USING MULTI-FILTER IMAGING DATA. II. INCORPORATING ARTIFICIAL NEURAL NETWORKS

    SciTech Connect

    Dong, X. Y.; De Robertis, M. M.

    2013-10-01

    This is the second paper of the series Detecting Active Galactic Nuclei Using Multi-filter Imaging Data. In this paper we review shapelets, an image manipulation algorithm, which we employ to adjust the point-spread function (PSF) of galaxy images. This technique is used to ensure the image in each filter has the same and sharpest PSF, which is the preferred condition for detecting AGNs using multi-filter imaging data as we demonstrated in Paper I of this series. We apply shapelets on Canada-France-Hawaii Telescope Legacy Survey Wide Survey ugriz images. Photometric parameters such as effective radii, integrated fluxes within certain radii, and color gradients are measured on the shapelets-reconstructed images. These parameters are used by artificial neural networks (ANNs) which yield: photometric redshift with an rms of 0.026 and a regression R-value of 0.92; galaxy morphological types with an uncertainty less than 2 T types for z ≤ 0.1; and identification of galaxies as AGNs with 70% confidence, star-forming/starburst (SF/SB) galaxies with 90% confidence, and passive galaxies with 70% confidence for z ≤ 0.1. The incorporation of ANNs provides a more reliable technique for identifying AGN or SF/SB candidates, which could be very useful for large-scale multi-filter optical surveys that also include a modest set of spectroscopic data sufficient to train neural networks.

  7. Neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.

    1991-01-01

    A whole new arena of computer technologies is now beginning to form. Still in its infancy, neural network technology is a biologically inspired methodology which draws on nature's own cognitive processes. The Software Technology Branch has provided a software tool, Neural Execution and Training System (NETS), to industry, government, and academia to facilitate and expedite the use of this technology. NETS is written in the C programming language and can be executed on a variety of machines. Once a network has been debugged, NETS can produce a C source code which implements the network. This code can then be incorporated into other software systems. Described here are various software projects currently under development with NETS and the anticipated future enhancements to NETS and the technology.

  8. Triphasic spike-timing-dependent plasticity organizes networks to produce robust sequences of neural activity

    PubMed Central

    Waddington, Amelia; Appleby, Peter A.; De Kamps, Marc; Cohen, Netta

    2012-01-01

    Synfire chains have long been proposed to generate precisely timed sequences of neural activity. Such activity has been linked to numerous neural functions including sensory encoding, cognitive and motor responses. In particular, it has been argued that synfire chains underlie the precise spatiotemporal firing patterns that control song production in a variety of songbirds. Previous studies have suggested that the development of synfire chains requires either initial sparse connectivity or strong topological constraints, in addition to any synaptic learning rules. Here, we show that this necessity can be removed by using a previously reported but hitherto unconsidered spike-timing-dependent plasticity (STDP) rule and activity-dependent excitability. Under this rule the network develops stable synfire chains that possess a non-trivial, scalable multi-layer structure, in which relative layer sizes appear to follow a universal function. Using computational modeling and a coarse grained random walk model, we demonstrate the role of the STDP rule in growing, molding and stabilizing the chain, and link model parameters to the resulting structure. PMID:23162457
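
    A toy sketch of a triphasic STDP window of the general shape the abstract refers to; the amplitudes and time constants are illustrative assumptions, not the authors' reported rule. Potentiation occurs for small positive dt = t_post - t_pre, with depression both for negative dt and for large positive dt:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.05, a_minus=0.025, a_late=0.01,
            tau_plus=20.0, tau_minus=20.0, tau_late=40.0):
    """Synaptic weight change as a function of dt = t_post - t_pre (ms).

    Classic pairwise STDP has two phases (potentiation for dt > 0,
    depression for dt < 0); a triphasic rule adds a third, late-depression
    lobe at large positive dt, so only tightly timed pre-post pairs are
    strengthened. All constants here are illustrative.
    """
    if dt > 0:
        # potentiation near coincidence, fading into late depression
        return a_plus * np.exp(-dt / tau_plus) - a_late * np.exp(-dt / tau_late)
    return -a_minus * np.exp(dt / tau_minus)
```

Applied over many spike pairs, a window of this shape rewards short, precisely ordered delays, which is the property that lets chains of precisely timed layers grow and stabilize.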

  9. Ligand Biological Activity Predictions Using Fingerprint-Based Artificial Neural Networks (FANN-QSAR)

    PubMed Central

    Myint, Kyaw Z.; Xie, Xiang-Qun

    2015-01-01

    This chapter focuses on the fingerprint-based artificial neural networks QSAR (FANN-QSAR) approach to predict biological activities of structurally diverse compounds. Three types of fingerprints, namely ECFP6, FP2, and MACCS, were used as inputs to train the FANN-QSAR models. The results were benchmarked against known 2D and 3D QSAR methods, and the derived models were used to predict cannabinoid (CB) ligand binding activities as a case study. In addition, the FANN-QSAR model was used as a virtual screening tool to search a large NCI compound database for lead cannabinoid compounds. We discovered several compounds with good CB2 binding affinities ranging from 6.70 nM to 3.75 μM. The studies proved that the FANN-QSAR method is a useful approach to predict bioactivities or properties of ligands and to find novel lead compounds for drug discovery research. PMID:25502380

  10. Integration and transmission of distributed deterministic neural activity in feed-forward networks.

    PubMed

    Asai, Yoshiyuki; Villa, Alessandro E P

    2012-01-24

    A ten-layer feed-forward network characterized by diverging/converging patterns of projection between successive layers of regular spiking (RS) neurons is activated by an external spatiotemporal input pattern fed to Layer 1 in the presence of stochastic background activities fed to all layers. We used three dynamical systems to derive the external input spike trains carrying the temporal information, and three types of neuron models for the network: a network formed either by neurons modeled by exponential integrate-and-fire dynamics (RS-EIF, Fourcaud-Trocmé et al., 2003), by simple spiking neurons (RS-IZH, Izhikevich, 2004), or by multiple-timescale adaptive threshold neurons (RS-MAT, Kobayashi et al., 2009), given five intensities of the background activity. The assessment of the temporal structure embedded in the output spike trains was carried out by detecting the preferred firing sequences for the reconstruction of de-noised spike trains (Asai and Villa, 2008). We confirmed that the RS-MAT model is likely to be more efficient at integrating and transmitting the temporal structure embedded in the external input. We observed that this structure could be propagated not only up to the 10th layer, but in some cases was retained better beyond the 4th downstream layer. This study suggests that diverging/converging network structures, through the propagation of synfire activity, could play a key role in the transmission of complex temporal patterns of discharges associated with deterministic nonlinear activity. This article is part of a Special Issue entitled Neural Coding. PMID:22071564

  11. Stochastic cellular automata model of neural networks.

    PubMed

    Goltsev, A V; de Abreu, F V; Dorogovtsev, S N; Mendes, J F F

    2010-06-01

    We propose a stochastic dynamical model of noisy neural networks with complex architectures and discuss activation of neural networks by a stimulus, pacemakers, and spontaneous activity. This model has a complex phase diagram with self-organized active neural states, hybrid phase transitions, and a rich array of behaviors. We show that if spontaneous activity (noise) reaches a threshold level then global neural oscillations emerge. Stochastic resonance is a precursor of this dynamical phase transition. These oscillations are an intrinsic property of even small groups of 50 neurons. PMID:20866454

  12. Self-organization of neural networks

    NASA Astrophysics Data System (ADS)

    Clark, John W.; Winston, Jeffrey V.; Rafelski, Johann

    1984-05-01

    The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm (“brainwashing”) is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena.

  13. The hysteretic Hopfield neural network.

    PubMed

    Bharitkar, S; Mendel, J M

    2000-01-01

    A new neuron activation function based on a property found in physical systems, hysteresis, is proposed. We incorporate this neuron activation in a fully connected dynamical system to form the hysteretic Hopfield neural network (HHNN). We then present an analog implementation of this architecture and its associated dynamical equation and energy function. We proceed to prove Lyapunov stability for this new model, and then solve a combinatorial optimization problem (i.e., the N-queen problem) using this network. We demonstrate the advantages of hysteresis by showing an increased frequency of convergence to a solution when the parameters associated with the activation function are varied. PMID:18249816
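
    The hysteretic activation idea can be sketched as a binary neuron whose switching threshold depends on its current output; the threshold values below are illustrative, not those of the HHNN paper:

```python
import numpy as np

def hysteretic_sign(u, prev, theta_lo=-0.5, theta_hi=0.5):
    """Binary activation with hysteresis: the effective threshold depends on
    the neuron's current output, so up-going and down-going transitions
    happen at different input levels (thresholds here are illustrative)."""
    if prev > 0:
        return 1.0 if u > theta_lo else -1.0   # stay high until u falls below theta_lo
    return 1.0 if u > theta_hi else -1.0       # stay low until u rises above theta_hi

# Sweep the input up and then back down to trace the hysteresis loop.
ups = np.linspace(-1.0, 1.0, 21)
state = -1.0
up_out = []
for u in ups:
    state = hysteretic_sign(u, state)
    up_out.append(state)
down_out = []
for u in ups[::-1]:
    state = hysteretic_sign(u, state)
    down_out.append(state)
```

At an input of 0 the neuron outputs -1 on the rising sweep but +1 on the falling sweep: the gap between the two transition points is what discourages rapid flip-flopping during optimization.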

  14. Seasonal prediction of tropical cyclone activity over the north Indian Ocean using three artificial neural networks

    NASA Astrophysics Data System (ADS)

    Nath, Sankar; Kotal, S. D.; Kundu, P. K.

    2016-03-01

    Three artificial neural network (ANN) methods, namely multilayer perceptron (MLP), radial basis function (RBF) and generalized regression neural network (GRNN), are utilized to predict the seasonal tropical cyclone (TC) activity over the north Indian Ocean (NIO) during the post-monsoon season (October, November, December). The frequency of TCs and large-scale climate variables derived from the NCEP/NCAR reanalysis dataset of resolution 2.5° × 2.5° were analyzed for the period 1971-2013. Data for the years 1971-2002 were used for the development of the models, which were tested with independent sample data for the years 2003-2013. Using correlation analysis, five large-scale climate variables, namely geopotential height at 500 hPa, relative humidity at 500 hPa, sea-level pressure, and zonal wind at 700 hPa and 200 hPa for the preceding month of September, are selected as potential predictors of the post-monsoon season TC activity. The results reveal that all three ANN methods are able to provide satisfactory forecasts in terms of various metrics, such as root mean-square error (RMSE), standard deviation (SD), correlation coefficient (r), bias and index of agreement (d). Additionally, leave-one-out cross validation (LOOCV) is performed and the forecast skill is evaluated. The results show that the MLP model is superior to the other two models (RBF, GRNN). The MLP model is expected to be very useful to operational forecasters for the prediction of TC activity.
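
    The leave-one-out cross validation protocol used to evaluate the forecast models can be sketched as follows; for brevity a ridge-regularized linear model stands in for the paper's MLP/RBF/GRNN, and the predictor and cyclone-count data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-ins: 43 seasons (1971-2013) of 5 September predictors and a
# seasonal cyclone index; the real predictors come from reanalysis fields.
n_years, n_pred = 43, 5
X = rng.standard_normal((n_years, n_pred))
beta = np.array([1.0, -0.5, 0.8, 0.0, 0.3])
y = X @ beta + 0.2 * rng.standard_normal(n_years)

# Leave-one-out cross validation: fit on 42 years, predict the held-out year.
lam = 1e-3                              # small ridge term for stability
preds = np.empty(n_years)
for i in range(n_years):
    mask = np.arange(n_years) != i
    Xi, yi = X[mask], y[mask]
    w = np.linalg.solve(Xi.T @ Xi + lam * np.eye(n_pred), Xi.T @ yi)
    preds[i] = X[i] @ w

# Skill metrics of the kind reported in the abstract.
rmse = float(np.sqrt(np.mean((preds - y) ** 2)))
r = float(np.corrcoef(preds, y)[0, 1])
```

Because every year is predicted by a model that never saw it, the resulting RMSE and r estimate genuine out-of-sample forecast skill rather than in-sample fit.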

  15. Neural network approach of active ultrasonic signals for structural health monitoring analysis

    NASA Astrophysics Data System (ADS)

    Kral, Zachary; Horn, Walter; Steck, James

    2009-03-01

    Maintenance is an important issue for aerospace systems, since they are often in service beyond their designed lifetime. This requires scheduled inspections and damage repair before failure. Research is in progress to develop a structural health monitoring system (SHMS) to improve this maintenance routine. Ultrasonic testing, utilizing a system of piezoelectric actuators and sensors, is a promising concept. Measured wave signals are compared with signals from previously scanned states; changes to the signal could be the result of damage to the component. This paper focuses on analyzing the differences between states using artificial neural networks. Neural network analysis has the potential to create an SHMS of greater ability and processing power. Experiments were performed on a thin, flat aluminum panel. Ultrasonic actuators and sensors were installed, and a baseline scan was performed on the undamaged panel. Simulated damage was introduced in specific areas, and scans were conducted for several damaged states. Neural networks were created to assess the changing conditions of the panel. The system was later tested on a lap-joint specimen to confirm the abilities of the neural network. This form of analysis performed well at locating and quantifying areas of change within the structure. The neural network's performance indicates that it has a role in the SHMS of aerospace structures.

  16. Parallel processing neural networks

    SciTech Connect

    Zargham, M.

    1988-09-01

    A model for Neural Networks based on a particular kind of Petri Net has been introduced. The model has been implemented in C and runs on the Sequent Balance 8000 multiprocessor; however, it can be directly ported to different multiprocessor environments. The potential advantages of using Petri Nets include: (1) the overall system is often easier to understand due to the graphical and precise nature of the representation scheme, and (2) the behavior of the system can be analyzed using Petri Net theory. Though the Petri Net is an obvious choice as a basis for the model, the basic Petri Net definition is not adequate to represent the neuronal system; to eliminate certain inadequacies, more information has been added to the Petri Net model. In the model, a token represents either a processor or a post-synaptic potential. Progress through a particular Neural Network is thus graphically depicted in the movement of the processor tokens through the Petri Net.

  18. Global Mittag-Leffler synchronization of fractional-order neural networks with discontinuous activations.

    PubMed

    Ding, Zhixia; Shen, Yi; Wang, Leimin

    2016-01-01

    This paper is concerned with the global Mittag-Leffler synchronization for a class of fractional-order neural networks with discontinuous activations (FNNDAs). We give the concept of a Filippov solution for FNNDAs in the sense of the Caputo fractional derivative. Using a singular Gronwall inequality and the properties of fractional calculus, the existence of a global solution in Filippov's sense for FNNDAs is proved. Based on nonsmooth analysis and control theory, some sufficient criteria for the global Mittag-Leffler synchronization of FNNDAs are derived by designing a suitable controller. The proposed results enrich and extend previous reports. Finally, a numerical example is given to demonstrate the effectiveness of the theoretical results. PMID:26562442
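    For readers outside the fractional-calculus literature, the standard textbook definitions the abstract relies on (not reproduced from the paper itself) are:

```latex
% Caputo fractional derivative of order 0 < \alpha < 1:
{}^{C}\!D^{\alpha}_{t} f(t) = \frac{1}{\Gamma(1-\alpha)}
    \int_{0}^{t} \frac{f'(s)}{(t-s)^{\alpha}}\, ds

% One-parameter Mittag-Leffler function:
E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)}

% Mittag-Leffler synchronization: the error e(t) between drive and response
% networks satisfies, for some m \ge 0 with m(0) = 0, \lambda > 0, b > 0:
\|e(t)\| \le \bigl[\, m(e(0))\, E_{\alpha}(-\lambda t^{\alpha}) \,\bigr]^{b}
```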

  19. Neural networks for triggering

    SciTech Connect

    Denby, B.; Campbell, M.; Bedeschi, F.; Chriss, N.; Bowers, C.; Nesti, F.

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies obtained for B's and the rejection of background are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.

  20. Anti-glycated activity prediction of polysaccharides from two guava fruits using artificial neural networks.

    PubMed

    Yan, Chunyan; Lee, Jinsheng; Kong, Fansheng; Zhang, Dezhi

    2013-10-15

    High-efficiency ultrasonic treatment was used to extract the polysaccharides of Psidium guajava (PPG) and Psidium littorale (PPL). The aims of this study were to compare the polysaccharide activities from these two guavas, as well as to investigate the relationship between ultrasonic conditions and anti-glycated activity. A mathematical model of anti-glycated activity was constructed with the artificial neural network (ANN) toolbox of MATLAB software. Response surface plots showed the correlation between ultrasonic conditions and bioactivity. The optimal ultrasonic conditions of PPL for the highest anti-glycated activity were predicted to be 256 W, 60 °C, and 12 min, with a predicted activity of 42.2%. The predicted highest anti-glycated activity of PPG was 27.2% under its optimal predicted ultrasonic conditions. The experimental results showed that PPG and PPL possessed anti-glycated and antioxidant activities, with those of PPL being greater. The experimental data also indicated that the ANN had good prediction and optimization capability. PMID:23987324

  1. Uniformly sparse neural networks

    NASA Astrophysics Data System (ADS)

    Haghighi, Siamack

    1992-07-01

    Application of neural networks to problems with a large number of sensory inputs is severely limited when the processing elements (PEs) need to be fully connected. This paper presents a new network model in which a trade-off between the number of connections to a node and the number of processing layers can be made. This trade-off is an important issue in the VLSI implementation of neural networks. The performance and capability of a hierarchical pyramidal network architecture of limited fan-in PE layers is analyzed. Analysis of this architecture requires the development of a new learning rule, since each PE has access to limited information about the entire network input. A spatially local unsupervised training rule is developed in which each PE optimizes the fraction of its output variance contributed by input correlations, resulting in PEs behaving as adaptive local correlation detectors. It is also shown that the output of a PE optimally represents the mutual information among the inputs to that PE. Applications of the developed model in image compression and motion detection are presented.

  2. A neural network approach for on-line fault detection of nitrogen sensors in alternated active sludge treatment plants.

    PubMed

    Caccavale, F; Digiulio, P; Iamarino, M; Masi, S; Pierri, F

    2010-01-01

    In this paper, an effective strategy for fault detection of nitrogen sensors in alternated active sludge treatment plants is proposed and tested on a simulated set-up. It is based on two predictive neural networks, which are trained using a historical set of data collected during fault-free operation of a wastewater treatment plant; their ability to predict reduced (ammonium) and oxidized (nitrates and nitrites) nitrogen is then tested. The neural networks also show good generalization ability and robustness with respect to influent variability over time and weather conditions. Simulations have been carried out imposing different kinds of faults on both sensors, such as isolated spikes, abrupt bias, and increased noise. Processing of residuals, based on the difference between measured concentration values and the neural networks' predictions, allows quick detection of the fault as well as isolation of the corrupted sensor. PMID:21123904
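    The residual-processing step can be sketched as a simple threshold test on the difference between measured and predicted values. The threshold and signal values below are illustrative, not taken from the plant model.

```python
def detect_fault(measured, predicted, threshold=0.5):
    """Return indices where |measured - predicted| exceeds `threshold`.

    `measured` comes from a (possibly faulty) sensor and `predicted` from a
    model trained on fault-free data; the fixed threshold is an illustrative
    choice, calibrated in practice on fault-free residuals.
    """
    residuals = [m - p for m, p in zip(measured, predicted)]
    return [i for i, r in enumerate(residuals) if abs(r) > threshold]

# An isolated spike fault injected at step 5 of a well-predicted signal:
measured = [1.0, 1.1, 0.9, 1.0, 1.05, 4.0, 1.0, 0.95]
predicted = [1.0] * 8
print(detect_fault(measured, predicted))  # → [5]
```

    Which sensor's residuals trip the test identifies the corrupted sensor.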

  3. Almost periodic dynamical behaviors for generalized Cohen-Grossberg neural networks with discontinuous activations via differential inclusions

    NASA Astrophysics Data System (ADS)

    Wang, Dongshu; Huang, Lihong

    2014-10-01

    In this paper, we investigate the almost periodic dynamical behaviors for a class of general Cohen-Grossberg neural networks with discontinuous right-hand sides, time-varying and distributed delays. By means of retarded differential inclusions theory and nonsmooth analysis theory with a generalized Lyapunov approach, we obtain the existence, uniqueness and global stability of the almost periodic solution to the neural network system. It is worth pointing out that our results remain valid without assuming the boundedness or monotonicity of the discontinuous neuron activation functions. Finally, we give some numerical examples to show the applicability and effectiveness of our main results.

  4. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node "brain" with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  5. From baseline to epileptiform activity: A path to synchronized rhythmicity in large-scale neural networks

    NASA Astrophysics Data System (ADS)

    Shusterman, Vladimir; Troy, William C.

    2008-06-01

    In large-scale neural networks in the brain the emergence of global behavioral patterns, manifested by electroencephalographic activity, is driven by the self-organization of local neuronal groups into synchronously functioning ensembles. However, the laws governing such macrobehavior and its disturbances, in particular epileptic seizures, are poorly understood. Here we use a mean-field population network model to describe a state of baseline physiological activity and the transition from the baseline state to rhythmic epileptiform activity. We describe principles which explain how this rhythmic activity arises in the form of spatially uniform self-sustained synchronous oscillations. In addition, we show how the rate of migration of the leading edge of the synchronous oscillations can be theoretically predicted, and compare the accuracy of this prediction with that measured experimentally using multichannel electrocorticographic recordings obtained from a human subject experiencing epileptic seizures. The comparison shows that the experimentally measured rate of migration of the leading edge of synchronous oscillations is within the theoretically predicted range of values. Computer simulations have been performed to investigate the interactions between different regions of the brain and to show how organization in one spatial region can promote or inhibit organization in another. Our theoretical predictions are also consistent with the results of functional magnetic resonance imaging (fMRI), in particular with observations that lower-frequency electroencephalographic (EEG) rhythms entrain larger areas of the brain than higher-frequency rhythms. These findings advance the understanding of functional behavior of interconnected populations and might have implications for the analysis of diverse classes of networks.

  6. Development of a computational model on the neural activity patterns of a visual working memory in a hierarchical feedforward Network

    NASA Astrophysics Data System (ADS)

    An, Soyoung; Choi, Woochul; Paik, Se-Bum

    2015-11-01

    Understanding the mechanism of information processing in the human brain remains a unique challenge because the nonlinear interactions between the neurons in the network are extremely complex and because controlling every relevant parameter during an experiment is difficult. Therefore, a simulation using simplified computational models may be an effective approach. In the present study, we developed a general model of neural networks that can simulate nonlinear activity patterns in the hierarchical structure of a neural network system. To test our model, we first examined whether our simulation could match the previously-observed nonlinear features of neural activity patterns. Next, we performed a psychophysics experiment for a simple visual working memory task to evaluate whether the model could predict the performance of human subjects. Our studies show that the model is capable of reproducing the relationship between memory load and performance and may contribute, in part, to our understanding of how the structure of neural circuits can determine the nonlinear neural activity patterns in the human brain.

  7. Program Helps Simulate Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Mcintire, Gary

    1993-01-01

    Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.
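    Rumelhart's generalized delta rule, the algorithm NNETS implements, is compact enough to sketch. The layer sizes, learning rate, and XOR task below are illustrative choices, not NNETS code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([[0], [1], [1], [0]], float)            # XOR targets

W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)    # 2-4-1 net (toy sizes)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)
lr = 1.0

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out0 = forward(X)
mse0 = np.mean((Y - out0) ** 2)

for _ in range(5000):
    h, out = forward(X)
    # generalized delta rule: delta = error * derivative of the activation
    d2 = (out - Y) * out * (1 - out)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(axis=0)

_, out1 = forward(X)
mse1 = np.mean((Y - out1) ** 2)
print(mse1 < mse0)   # training reduces the mean squared error
```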

  8. Space-Time Neural Networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Shelton, Robert O.

    1992-01-01

    Concept of space-time neural network affords distributed temporal memory enabling such network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace synaptic-connection weights of conventional back-error-propagation neural network.

  9. Classification of human activity on water through micro-Dopplers using deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Kim, Youngwook; Moon, Taesup

    2016-05-01

    Detecting humans and classifying their activities on the water has significant applications for surveillance, border patrols, and rescue operations. When humans are illuminated by a radar signal, they produce micro-Doppler signatures due to their moving limbs. There has been a considerable amount of research into recognizing humans on land by their unique micro-Doppler signatures, but scant research into detecting humans on water. In this study, we investigate the micro-Doppler signatures of humans on water, including a swimming person, a swimming person pulling a floating object, and a rowing person in a small boat. The measured swimming styles were freestyle, backstroke, and breaststroke. Each activity was observed to have a unique micro-Doppler signature. Human activities were classified based on their micro-Doppler signatures. For the classification, we propose to apply deep convolutional neural networks (DCNN), a powerful deep learning technique. Rather than using conventional supervised learning that relies on handcrafted features, we present an alternative deep learning approach. We apply the DCNN, one of the most successful deep learning algorithms for image recognition, directly to a raw micro-Doppler spectrogram of humans on the water. Without extracting any explicit features from the micro-Dopplers, the DCNN can learn the necessary features and build classification boundaries using the training data. We show that the DCNN can achieve an accuracy of more than 87.8% for activity classification using 5-fold cross-validation.
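    The DCNN's input is a micro-Doppler spectrogram, i.e. a short-time Fourier transform of the radar return. A minimal sketch of building such a spectrogram from a toy frequency-modulated signal (window, hop, and modulation values are illustrative; the DCNN itself is omitted):

```python
import numpy as np

def spectrogram(signal, win=64, hop=32):
    """Magnitude STFT: rows are frequency bins, columns are time frames."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
# Toy "micro-Doppler": a carrier whose frequency is sinusoidally modulated,
# loosely mimicking periodic limb motion (illustrative only).
sig = np.sin(2 * np.pi * (100 * t + 20 * np.sin(2 * np.pi * 2 * t)))
S = spectrogram(sig)
print(S.shape)   # (frequency bins, time frames)
```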

  10. Accelerating Learning By Neural Networks

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1992-01-01

    Electronic neural networks made to learn faster by use of terminal teacher forcing. Method of supervised learning involves addition of teacher forcing functions to excitations fed as inputs to output neurons. Initially, teacher forcing functions are strong enough to force outputs to desired values; subsequently, these functions decay with time. When learning successfully completed, terminal teacher forcing vanishes, and dynamics of neural network become equivalent to those of conventional neural network. Simulated neural network with terminal teacher forcing learned to produce close approximation of circular trajectory in 400 iterations.
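    The idea of a forcing term proportional to the output error that decays to zero can be illustrated with a single leaky output neuron tracking a sinusoid (one coordinate of a circular trajectory). All constants are illustrative, not the paper's.

```python
import math

def run(steps=400, tau=5.0, forced=True):
    """Leaky neuron y' = -y + d + lam*(d - y), where lam is the teacher
    forcing gain, decaying exponentially toward zero."""
    y, dt = 0.0, 0.1
    errors = []
    for t in range(steps):
        d = math.sin(2 * math.pi * t / 100)            # desired output sample
        lam = math.exp(-t * dt / tau) if forced else 0.0
        y += dt * (-y + d + lam * (d - y))
        errors.append(abs(d - y))
    return errors

with_tf = run(forced=True)
without_tf = run(forced=False)
# Early on, the forcing pulls the output closer to the desired trajectory;
# by the end lam is negligible and the dynamics are the free network's.
print(sum(with_tf[:100]) < sum(without_tf[:100]))
```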

  11. Artificial neural network modelling of the antioxidant activity and phenolic compounds of bananas submitted to different drying treatments.

    PubMed

    Guiné, Raquel P F; Barroca, Maria João; Gonçalves, Fernando J; Alves, Mariana; Oliveira, Solange; Mendes, Mateus

    2015-02-01

    Bananas (cv. Musa nana and Musa cavendishii) fresh and dried by hot air at 50 and 70°C and lyophilisation were analysed for phenolic contents and antioxidant activity. All samples were subject to six extractions (three with methanol followed by three with acetone/water solution). The experimental data served to train a neural network adequate to describe the experimental observations for both output variables studied: total phenols and antioxidant activity. The results show that both bananas are similar and air drying decreased total phenols and antioxidant activity for both temperatures, whereas lyophilisation decreased the phenolic content in a lesser extent. Neural network experiments showed that antioxidant activity and phenolic compounds can be predicted accurately from the input variables: banana variety, dryness state and type and order of extract. Drying state and extract order were found to have larger impact in the values of antioxidant activity and phenolic compounds. PMID:25172734

  12. Interacting neural networks.

    PubMed

    Metzler, R; Kinzel, W; Kanter, I

    2000-08-01

    Several scenarios of interacting neural networks which are trained either in an identical or in a competitive way are solved analytically. In the case of identical training, each perceptron receives the output of its neighbor. The symmetry of the stationary state as well as the sensitivity to the training algorithm used are investigated. Two competitive perceptrons trained on mutually exclusive learning aims, and a perceptron which is trained on the opposite of its own output, are examined analytically. An ensemble of competitive perceptrons is used as the decision-making algorithm in a model of a closed market (the El Farol Bar problem, or Minority Game, in which a set of agents must each make a binary decision); each network is trained on the history of minority decisions. This ensemble of perceptrons relaxes to a stationary state whose performance can be better than random. PMID:11088736
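    A minimal simulation of the Minority Game with perceptron agents can be sketched as follows. The agent count, history window, and learning rate are illustrative; the paper's analytical treatment is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, window, lr = 51, 5, 0.01
W = rng.normal(size=(n_agents, window))          # one perceptron per agent
history = rng.choice([-1.0, 1.0], size=window)   # recent minority decisions
minority_sizes = []

for _ in range(500):
    decisions = np.sign(W @ history)
    decisions[decisions == 0] = 1.0              # break ties arbitrarily
    total = decisions.sum()                      # odd agent count: never zero
    minority = -np.sign(total)                   # the winning (minority) side
    minority_sizes.append(int(min((decisions == 1).sum(),
                                  (decisions == -1).sum())))
    W += lr * minority * history                 # train toward the minority
    history = np.roll(history, -1)               # append to shared history
    history[-1] = minority
```

    The size of the minority (at most half the agents) measures how well the ensemble performs; values near the maximum beat random play.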

  13. Finite-time robust stabilization of uncertain delayed neural networks with discontinuous activations via delayed feedback control.

    PubMed

    Wang, Leimin; Shen, Yi; Sheng, Yin

    2016-04-01

    This paper is concerned with the finite-time robust stabilization of delayed neural networks (DNNs) in the presence of discontinuous activations and parameter uncertainties. By using the nonsmooth analysis and control theory, a delayed controller is designed to realize the finite-time robust stabilization of DNNs with discontinuous activations and parameter uncertainties, and the upper bound of the settling time functional for stabilization is estimated. Finally, two examples are provided to demonstrate the effectiveness of the theoretical results. PMID:26878721

  14. Dynamic interactions in neural networks

    SciTech Connect

    Arbib, M.A.; Amari, S.

    1989-01-01

    The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.

  15. Neural network applications in telecommunications

    NASA Technical Reports Server (NTRS)

    Alspector, Joshua

    1994-01-01

    Neural network capabilities include automatic and organized handling of complex information, quick adaptation to continuously changing environments, nonlinear modeling, and parallel implementation. This viewgraph presentation presents Bellcore work on applications, learning chip computational function, learning system block diagram, neural network equalization, broadband access control, calling-card fraud detection, software reliability prediction, and conclusions.

  16. Neural Networks for the Beginner.

    ERIC Educational Resources Information Center

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  17. Estimating nonnegative matrix model activations with deep neural networks to increase perceptual speech quality.

    PubMed

    Williamson, Donald S; Wang, Yuxuan; Wang, DeLiang

    2015-09-01

    As a means of speech separation, time-frequency masking applies a gain function to the time-frequency representation of noisy speech. On the other hand, nonnegative matrix factorization (NMF) addresses separation by linearly combining basis vectors from speech and noise models to approximate noisy speech. This paper presents an approach for improving the perceptual quality of speech separated from background noise at low signal-to-noise ratios. An ideal ratio mask is estimated, which separates speech from noise with reasonable sound quality. A deep neural network then approximates clean speech by estimating activation weights from the ratio-masked speech, where the weights linearly combine elements from an NMF speech model. Systematic comparisons using objective metrics, including the perceptual evaluation of speech quality, show that the proposed algorithm achieves higher speech quality than related masking and NMF methods. In addition, a listening test was performed and its results show that the output of the proposed algorithm is preferred over the comparison systems in terms of speech quality. PMID:26428778
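    The activation-weight estimation that the DNN replaces can be sketched with the classical multiplicative NMF update for a fixed speech basis (toy sizes; the actual system estimates these weights with a deep network rather than by iteration):

```python
import numpy as np

rng = np.random.default_rng(0)
F, K = 16, 4                      # frequency bins, basis vectors (toy sizes)
B = rng.random((F, K))            # nonnegative "speech model" basis
h_true = rng.random(K)
v = B @ h_true                    # observed magnitude spectrum

h = np.full(K, 0.5)               # initial activation weights
err0 = np.linalg.norm(v - B @ h)
for _ in range(200):
    # Lee-Seung multiplicative update for least-squares NMF, B held fixed;
    # it keeps h nonnegative and never increases the reconstruction error.
    h *= (B.T @ v) / (B.T @ B @ h + 1e-12)
err1 = np.linalg.norm(v - B @ h)
print(err1 < err0)
```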

  18. Neural Network Development Tool (NETS)

    NASA Technical Reports Server (NTRS)

    Baffes, Paul T.

    1990-01-01

    Artificial neural networks formed from hundreds or thousands of simulated neurons, connected in manner similar to that in human brain. Such network models learning behavior. Using NETS involves translating problem to be solved into input/output pairs, designing network configuration, and training network. Written in C.

  19. A nanoflare model for active region radiance: application of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Bazarghan, M.; Safari, H.; Innes, D. E.; Karami, E.; Solanki, S. K.

    2008-12-01

    Context: Nanoflares are small impulsive bursts of energy that blend with and possibly make up much of the solar background emission. Determining their frequency and energy input is central to understanding the heating of the solar corona. One method is to extrapolate the energy frequency distribution of larger individually observed flares to lower energies. Only if the power law exponent is greater than 2 is it considered possible that nanoflares contribute significantly to the energy input. Aims: Time sequences of ultraviolet line radiances observed in the corona of an active region are modelled with the aim of determining the power law exponent of the nanoflare energy distribution. Methods: A simple nanoflare model based on three key parameters (the flare rate, the flare duration, and the power law exponent of the flare energy frequency distribution) is used to simulate emission line radiances from the ions Fe XIX, Ca XIII, and Si III, observed by SUMER in the corona of an active region as it rotates around the east limb of the Sun. Light curve pattern recognition by an Artificial Neural Network (ANN) scheme is used to determine the values. Results: The power law exponents, α≈2.8, 2.8, and 2.6 are obtained for Fe XIX, Ca XIII, and Si III respectively. Conclusions: The light curve simulations imply a power law exponent greater than the critical value of 2 for all ion species. This implies that if the energy of flare-like events is extrapolated to low energies, nanoflares could provide a significant contribution to the heating of active region coronae.

  20. Soil Moisture Retrieval from Active/Passive Microwave Observation Synergy Using a Neural Network Approach

    NASA Astrophysics Data System (ADS)

    Kolassa, J.; Gentine, P.; Aires, F.; Prigent, C.

    2014-12-01

    In November 2014 NASA will launch the Soil Moisture Active/Passive (SMAP) mission carrying an L-band radiometer and radar sensor to observe surface soil moisture globally. This new type of instrument requires the development of innovative retrieval algorithms that are able to account for the different surface contributions to the satellite signal and at the same time can optimally exploit the synergy of active and passive microwave data. In this study, a neural network (NN) based retrieval algorithm has been developed using the example of active microwave observations from ASCAT and passive microwave observations from AMSR-E. In a first step, different preprocessing techniques, aiming to highlight the various contributions to the satellite signal, have been investigated. It was found that in particular for the passive microwave observations, the use of multiple frequencies and preprocessing steps could help the retrieval to disentangle the effects of soil moisture, vegetation and surface temperature. A spectral analysis investigated the temporal patterns in the satellite observations and thus assessed which soil moisture temporal variations could realistically be retrieved. The preprocessed data was then used in a NN based retrieval to estimate daily volumetric surface soil moisture at the global scale for the period 2002-2013. It could be shown that the synergy of data from the two sensors yielded a significant improvement of the retrieval performance demonstrating the benefit of multi-sensor approaches as proposed for SMAP. A comparison with a more traditional retrieval product merging approach furthermore showed that the NN technique is better able to exploit the complementarity of information provided by active and passive sensors. The soil moisture retrieval product was evaluated in the spatial, temporal and frequency domain against retrieved soil moisture from WACMOS and SMOS, modeled fields from ERA-interim/Land and in situ observations from the

  1. Neural networks for calibration tomography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur

    1993-01-01

    Artificial neural networks are suitable for performing pattern-to-pattern calibrations. These calibrations are potentially useful for facilities operations in aeronautics, the control of optical alignment, and the like. Computed tomography is compared with neural net calibration tomography for estimating density from its x-ray transform. X-ray transforms are measured, for example, in diffuse-illumination, holographic interferometry of fluids. Computed tomography and neural net calibration tomography are shown to have comparable performance for a 10 degree viewing cone and 29 interferograms within that cone. The system of tomography discussed is proposed as a relevant test of neural networks and other parallel processors intended for using flow visualization data.

  2. Deinterlacing using modular neural network

    NASA Astrophysics Data System (ADS)

    Woo, Dong H.; Eom, Il K.; Kim, Yoo S.

    2004-05-01

    Deinterlacing is the conversion from interlaced scan to progressive scan. While many previous algorithms based on weighted sums cause blurring in edge regions, deinterlacing using a neural network can reduce the blurring by recovering high-frequency components through learning, and is robust to noise. In the proposed algorithm, the input image is divided into edge and smooth regions, and one neural network is assigned to each region. Through this process, each neural network learns only similar patterns, which makes learning more effective and estimation more accurate. But even within each region there are various patterns, such as long edges and texture in the edge region. To solve this problem, a modular neural network is proposed, in which two modules are combined at the output node: one for the low-frequency features of the local area of the input image, and the other for the high-frequency features. With this structure, each modular neural network can learn different patterns while compensating for the drawbacks of its counterpart, and can therefore adapt effectively to the various patterns within each region. In simulation, the proposed algorithm shows better performance than conventional deinterlacing methods and the single neural network method.

  3. Evaluation of the suitability of neural network method for prediction of uranium activity ratio in environmental alpha spectra.

    PubMed

    Einian, Mohammad Reza; Aghamiri, Seyed Mahmood Reza; Ghaderi, Reza

    2015-11-01

    Applying an artificial neural network to an alpha spectrometry system is a good way to discriminate the composition of environmental and non-environmental materials through estimation of the (234)U/(238)U activity ratio, because it eliminates limitations of classical approaches by extracting the desired information from the average of a partial uranium raw spectrum. The network was trained on an alpha spectrum library developed in this work. The results indicated only a small difference between the target values and the predictions. These results were acceptable, because the thickness of the samples and the interfering elements differed in the real library. PMID:26340268

  4. Modular, Hierarchical Learning By Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F.; Toomarian, Nikzad

    1996-01-01

    Modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks more structured than neural networks in which all neurons fully interconnected. These networks utilize general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, sparsity of modular units and connections, and fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning streamlined by imitating some aspects of biological neural networks.

  5. Neural Networks for Readability Analysis.

    ERIC Educational Resources Information Center

    McEneaney, John E.

    This paper describes and reports on the performance of six related artificial neural networks that have been developed for the purpose of readability analysis. Two networks employ counts of linguistic variables that simulate a traditional regression-based approach to readability. The remaining networks determine readability from "visual snapshots"…

  6. Neural Networks Of VLSI Components

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P.

    1991-01-01

    Concept for design of electronic neural network calls for assembly of very-large-scale integrated (VLSI) circuits of few standard types. Each VLSI chip, which contains both analog and digital circuitry, used in modular or "building-block" fashion by interconnecting it in any of variety of ways with other chips. Feedforward neural network in typical situation operates under control of host computer and receives inputs from, and sends outputs to, other equipment.

  7. Interval neural networks

    SciTech Connect

    Patil, R.B.

    1995-05-01

    Traditional neural networks like multi-layered perceptrons (MLP) use example patterns, i.e., pairs of real-valued observation vectors (x, y), to approximate a function f(x) = y. To determine the parameters of the approximation, a special version of the gradient descent method called back-propagation is widely used. In many situations, observations of the input and output variables are not precise; instead, we usually have intervals of possible values. The imprecision could be due to the limited accuracy of the measuring instrument or could reflect genuine uncertainty in the observed variables. In such situations, input and output data consist of mixed data types: intervals and precise numbers. Function approximation in interval domains is considered in this paper. We discuss a modification of the classical back-propagation learning algorithm to interval domains. Results are presented with simple examples demonstrating a few properties of nonlinear interval mapping, such as noise resistance and finding a set of solutions to the function approximation problem.
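    Exact propagation of interval inputs through a single affine unit picks, for each weight, the interval endpoint that minimizes or maximizes that term. A minimal helper illustrating the idea (not the paper's algorithm):

```python
def interval_affine(lo, hi, weights, bias):
    """Propagate interval inputs [lo_i, hi_i] through w·x + b exactly.

    For the lower bound each term uses lo when the weight is nonnegative
    and hi otherwise; the upper bound is the mirror image.
    """
    low = bias + sum(w * (l if w >= 0 else h)
                     for w, l, h in zip(weights, lo, hi))
    high = bias + sum(w * (h if w >= 0 else l)
                      for w, l, h in zip(weights, lo, hi))
    return low, high

# Inputs known only to intervals [0, 1] and [2, 3], weights of mixed sign:
print(interval_affine([0.0, 2.0], [1.0, 3.0], [2.0, -1.0], 0.5))  # → (-2.5, 0.5)
```

    Passing both bounds through a monotone activation such as the sigmoid then yields the output interval of the unit.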

  8. Correlational Neural Networks.

    PubMed

    Chandar, Sarath; Khapra, Mitesh M; Larochelle, Hugo; Ravindran, Balaraman

    2016-02-01

    Common representation learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, has been receiving a lot of attention recently. Two popular paradigms here are canonical correlation analysis (CCA)-based approaches and autoencoder (AE)-based approaches. CCA-based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace. AE-based methods learn a common representation by minimizing the error of reconstructing the two views. Each of these approaches has its own advantages and disadvantages. For example, while CCA-based approaches outperform AE-based approaches for the task of transfer learning, they are not as scalable as the latter. In this work, we propose an AE-based approach, correlational neural network (CorrNet), that explicitly maximizes correlation among the views when projected to the common subspace. Through a series of experiments, we demonstrate that the proposed CorrNet is better than AE and CCA with respect to its ability to learn correlated common representations. We employ CorrNet for several cross-language tasks and show that the representations learned using it perform better than the ones learned using other state-of-the-art approaches. PMID:26654210
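    The quantity CorrNet maximizes is the sample correlation between the two views' common-subspace projections. A minimal sketch of that measure on two toy views driven by one shared latent factor (the full CorrNet adds learned encoders and the two reconstruction losses):

```python
import numpy as np

def correlation(a, b):
    """Sample correlation between two projection vectors; the small eps
    guards against zero variance."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Two toy "views" that are different linear functions of one latent z:
z = np.linspace(-1.0, 1.0, 50)
view1 = 2.0 * z + 1.0
view2 = -3.0 * z + 5.0
c = correlation(view1, view2)
print(round(c, 4))  # → -1.0 (the views are perfectly anticorrelated)
```

    During training, the negated correlation is added to the autoencoder losses, so maximizing it aligns the two views in the common subspace.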

  9. Artificial Astrocytes Improve Neural Network Performance

    PubMed Central

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  10. Artificial astrocytes improve neural network performance.

    PubMed

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  11. Neural-Network-Development Program

    NASA Technical Reports Server (NTRS)

    Phillips, Todd A.

    1993-01-01

    NETS, software tool for development and evaluation of neural networks, provides simulation of neural-network algorithms plus computing environment for development of such algorithms. Uses back-propagation learning method for all networks it creates. Enables user to customize patterns of connections between layers of network. Also provides features for saving weight values during learning process, giving more precise control over learning. Written in ANSI standard C language. Machine-independent version (MSC-21588) includes only code for command-line-interface version of NETS 3.0.
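Back-propagation of the kind NETS implements can be shown in miniature; the following sketch (not NETS code) trains one hidden layer on the XOR patterns with the usual sigmoid deltas:

```python
import numpy as np

# Minimal batch back-propagation on XOR: forward pass, output delta,
# hidden delta propagated through W2, then gradient steps on both layers.
rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
err0 = None
for epoch in range(4000):
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    if err0 is None:
        err0 = np.mean((out - y) ** 2)   # error before any update
    d_out = (out - y) * out * (1 - out)  # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)   # delta propagated backwards
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)
err1 = np.mean((out - y) ** 2)
```

The learning rate (0.5), hidden width (4), and epoch count are arbitrary illustrative choices.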

  12. Application of artificial neural network in precise prediction of cement elements percentages based on the neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Eftekhari Zadeh, E.; Feghhi, S. A. H.; Roshani, G. H.; Rezaei, A.

    2016-05-01

    Due to variation of the neutron energy spectrum in the target sample during the activation process, and to peak overlapping caused by the Compton effect with gamma radiations emitted from activated elements, the background changes and the gamma spectrum becomes complex during the measurement process, ultimately making quantitative analysis problematic. Since there is no simple analytical correlation between peak counts and element concentrations, an artificial neural network for analyzing spectra can be a helpful tool. This work describes a study on the application of a neural network to determine the percentages of cement elements (mainly Ca, Si, Al, and Fe) using the neutron capture delayed gamma-ray spectra of the substance emitted by the activated nuclei as patterns, which were simulated via the Monte Carlo N-particle transport code, version 2.7. A Radial Basis Function (RBF) network is developed, with four specific peaks related to Ca, Si, Al and Fe extracted as inputs. The proposed RBF model is developed and trained with MATLAB 7.8 software. To obtain the optimal RBF model, several structures have been constructed and tested. The comparison between simulated and predicted values using the proposed RBF model shows good agreement between them.
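A Gaussian RBF network of the kind described can be sketched as a design matrix plus a linear least-squares readout; the toy data below stand in for peak counts and element percentages and are not from the paper:

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian radial-basis design matrix, with a bias column appended."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2.0 * width ** 2))
    return np.hstack([Phi, np.ones((len(X), 1))])

rng = np.random.default_rng(0)
X = rng.uniform(size=(60, 4))            # toy stand-in for 4 peak areas
Y = X @ rng.normal(size=(4, 4))          # hypothetical element percentages
centers = X[:12]                         # a few training points as centers
Phi = rbf_design(X, centers, width=0.6)
W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)   # linear readout weights
mse = np.mean((Phi @ W - Y) ** 2)
```

Fitting the output layer by least squares is the standard shortcut for RBF networks once centers and widths are fixed; the paper's own structure search would vary those.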

  13. Working memory activation of neural networks in the elderly as a function of information processing phase and task complexity.

    PubMed

    Charroud, Céline; Steffener, Jason; Le Bars, Emmanuelle; Deverdun, Jérémy; Bonafe, Alain; Abdennour, Meriem; Portet, Florence; Molino, François; Stern, Yaakov; Ritchie, Karen; Menjot de Champfleur, Nicolas; Akbaraly, Tasnime N

    2015-11-01

    Changes in working memory are sensitive indicators of both normal and pathological brain aging and associated disability. The present study aims to further understanding of working memory in normal aging using a large cohort of healthy elderly in order to examine three separate phases of information processing in relation to changes in task load activation. Using covariance analysis, increasing and decreasing neural activation was observed on fMRI in response to a delayed item recognition task in 337 cognitively healthy elderly persons as part of the CRESCENDO (Cognitive REServe and Clinical ENDOphenotypes) study. During three phases of the task (stimulation, retention, probe), increased activation was observed with increasing task load in bilateral regions of the prefrontal cortex, parietal lobule, cingulate gyrus, insula and in deep gray matter nuclei, suggesting an involvement of central executive and salience networks. Decreased activation associated with increasing task load was observed during the stimulation phase, in bilateral temporal cortex, parietal lobule, cingulate gyrus and prefrontal cortex. This spatial distribution of decreased activation is suggestive of the default mode network. These findings support the hypothesis of an increased activation in salience and central executive networks and a decreased activation in default mode network concomitant to increasing task load. PMID:26456114

  14. Sunspot prediction using neural networks

    NASA Technical Reports Server (NTRS)

    Villarreal, James; Baffes, Paul

    1990-01-01

    The earliest systematic observation of sunspot activity is attributed to the Chinese, who in 1382, during the Ming Dynasty (1368 to 1644), noticed spots on the sun while viewing it through thick forest-fire smoke. Not until after the 18th century did sunspot levels become more than a source of wonderment and curiosity. Since 1834, reliable sunspot data have been collected by the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Naval Observatory. Recently, considerable effort has been placed upon the study of the effects of sunspots on the ecosystem and the space environment. The efforts of the Artificial Intelligence Section of the Mission Planning and Analysis Division of the Johnson Space Center involving the prediction of sunspot activity using neural network technologies are described.

  15. Improving neural network performance on SIMD architectures

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Ilin, Dmitry; Nikolaev, Dmitry

    2015-12-01

    Neural network calculations for image recognition problems can be very time consuming. In this paper we propose three methods of increasing neural network performance on SIMD architectures. The usage of SIMD extensions, available on a number of modern CPUs, is a way to speed up neural network processing. In our experiments, we use ARM NEON as an example SIMD architecture. The first method uses the half-precision float data type for matrix computations. The second method uses a fixed-point data type for the same purpose. The third method considers vectorized implementations of activation functions. For each method we set up a series of experiments on convolutional and fully connected networks designed for image recognition tasks.
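The fixed-point method can be sketched in scalar form: quantize weights and inputs to integers, accumulate in 32 bits as NEON would, then shift the doubled fractional bits back out. The bit widths and tolerance below are illustrative choices, not the paper's:

```python
import numpy as np

def to_fixed(x, frac_bits=8):
    """Quantize floats to 16-bit fixed point with `frac_bits` fractional bits."""
    scaled = np.round(x * (1 << frac_bits))
    return np.clip(scaled, -32768, 32767).astype(np.int32)

def fixed_matvec(Wq, xq, frac_bits=8):
    """Integer matrix-vector product with 32-bit accumulation; the shift
    rescales the product's doubled fractional bits back to frac_bits."""
    acc = Wq @ xq
    return acc >> frac_bits

rng = np.random.default_rng(0)
W, x = rng.normal(size=(8, 8)), rng.normal(size=8)
ref = W @ x
approx = fixed_matvec(to_fixed(W), to_fixed(x)) / (1 << 8)
assert np.max(np.abs(ref - approx)) < 0.1   # quantization error stays small
```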

  16. Signal dispersion within a hippocampal neural network

    NASA Technical Reports Server (NTRS)

    Horowitz, J. M.; Mates, J. W. B.

    1975-01-01

    A model network is described, representing two neural populations coupled so that one population is inhibited by activity it excites in the other. Parameters and operations within the model represent EPSPs, IPSPs, neural thresholds, conduction delays, background activity and spatial and temporal dispersion of signals passing from one population to the other. Simulations of single-shock and pulse-train driving of the network are presented for various parameter values. Neuronal events from 100 to 300 msec following stimulation are given special consideration in model calculations.
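A rate-model stand-in for the described coupling (population A excites B, and B returns delayed inhibition to A) might look like the sketch below; all parameters, including the delay, are illustrative rather than the model's actual values:

```python
import numpy as np

def simulate(steps=300, dt=1.0, w_ab=1.2, w_ba=1.5, delay=10, drive=0.6):
    """Two coupled rate populations with a conduction delay (in time steps):
    A is driven externally and excites B; B inhibits A after the delay."""
    rA = np.zeros(steps)
    rB = np.zeros(steps)
    for t in range(1, steps):
        inh = rB[t - delay] if t >= delay else 0.0   # delayed inhibition to A
        exc = rA[t - delay] if t >= delay else 0.0   # delayed excitation to B
        rA[t] = rA[t-1] + dt * (-rA[t-1] + max(0.0, drive - w_ba * inh)) / 10
        rB[t] = rB[t-1] + dt * (-rB[t-1] + max(0.0, w_ab * exc)) / 10
    return rA, rB

rA, rB = simulate()
```

The delayed negative feedback loop is what lets single-shock driving produce the damped late responses the abstract analyzes.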

  17. Application of Kohonen Neural Networks in classification of biologically active compounds.

    PubMed

    Kirew, D B; Chretien, J R; Bernard, P; Ros, F

    1998-01-01

    Automated data classification is an indispensable tool in Drug Design. It allows the selection of homogeneous training sets and the discrimination of compounds with required biological properties. Kohonen Neural Networks (KNN) offer new means for the classification of biologically interesting compounds. In this paper, first, the capabilities of KNN in data dimensionality reduction are presented and compared with those of Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA). The advantages of KNN become evident with increasing data dimensionality and training set size. Then, new methods are suggested to evaluate the quality of KNN models. Finally, a case study on chemical and biological data is presented. The database studied includes more than 2000 potent organophosphorus pesticides. The Kohonen maps obtained make it possible to distinguish compounds with different biological behavior. PMID:9517011
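A minimal Kohonen map illustrates the clustering idea: the best-matching unit and its grid neighbours move toward each sample while the neighbourhood radius and learning rate shrink. Grid size and schedules below are arbitrary choices, not from the study:

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen self-organizing map trained on row vectors."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    W = rng.normal(size=(n_units, data.shape[1]))       # unit weight vectors
    coords = np.array([(i, j) for i in range(grid[0])
                       for j in range(grid[1])], float)  # unit grid positions
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.5          # shrinking radius
        for x in rng.permutation(data):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))           # neighbourhood kernel
            W += lr * h[:, None] * (x - W)
    return W

rng = np.random.default_rng(42)
data = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                  rng.normal(5.0, 0.1, (20, 2))])        # two toy clusters
som = train_som(data)
```

In the study's setting the rows would be molecular descriptor vectors rather than toy 2-D points.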

  18. An Asynchronous Recurrent Network of Cellular Automaton-Based Neurons and Its Reproduction of Spiking Neural Network Activities.

    PubMed

    Matsubara, Takashi; Torikai, Hiroyuki

    2016-04-01

    Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic the input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array implementations confirm that the presented network requires lower computational resources. PMID:25974951

  19. Critical and resonance phenomena in neural networks

    NASA Astrophysics Data System (ADS)

    Goltsev, A. V.; Lopes, M. A.; Lee, K.-E.; Mendes, J. F. F.

    2013-01-01

    Brain rhythms contribute to every aspect of brain function. Here, we study critical and resonance phenomena that precede the emergence of brain rhythms. Using an analytical approach and simulations of a cortical circuit model of neural networks with stochastic neurons in the presence of noise, we show that spontaneous appearance of network oscillations occurs as a dynamical (non-equilibrium) phase transition at a critical point determined by the noise level, network structure, the balance between excitatory and inhibitory neurons, and other parameters. We find that the relaxation time of neural activity to a steady state, response to periodic stimuli at the frequency of the oscillations, amplitude of damped oscillations, and stochastic fluctuations of neural activity are dramatically increased when approaching the critical point of the transition.

  20. Optical neural stimulation modeling on degenerative neocortical neural networks

    NASA Astrophysics Data System (ADS)

    Zverev, M.; Fanjul-Vélez, F.; Salas-García, I.; Arce-Diego, J. L.

    2015-07-01

    Neurodegenerative diseases usually appear at an advanced age. Medical advances make people live longer and, as a consequence, the number of neurodegenerative diseases continuously grows. There is still no cure for these diseases, but several brain stimulation techniques have been proposed to improve patients' condition. One of them is Optical Neural Stimulation (ONS), which is based on the application of optical radiation over specific brain regions. The outer cerebral zones can be noninvasively stimulated, without the common drawbacks associated with surgical procedures. This work focuses on the analysis of ONS effects on stimulated neurons to determine their influence on neuronal activity. For this purpose a neural network model has been employed. The results show the neural network behavior when the stimulation is provided by means of different optical radiation sources, and constitute a first approach to adjusting the optical light source parameters to stimulate specific neocortical areas.

  1. Neural networks: a biased overview

    SciTech Connect

    Domany, E.

    1988-06-01

    An overview of recent activity in the field of neural networks is presented. The long-range aim of this research is to understand how the brain works. First some of the problems are stated and terminology defined; then an attempt is made to explain why physicists are drawn to the field, and their main potential contribution. In particular, in recent years some interesting models have been introduced by physicists. A small subset of these models is described, with particular emphasis on those that are analytically soluble. Finally a brief review of the history and recent developments of single- and multilayer perceptrons is given, bringing the situation up to date regarding the central immediate problem of the field: search for a learning algorithm that has an associated convergence theorem.

  2. Wavelet differential neural network observer.

    PubMed

    Chairez, Isaac

    2009-09-01

    State estimation for uncertain systems affected by external noises is an important problem in control theory. This paper deals with a state observation problem when the dynamic model of a plant contains uncertainties or is completely unknown. A differential neural network (NN) approach is applied in this uninformative situation, but with activation functions described by wavelets. A new learning law, containing an adaptive adjustment rate, is suggested to imply the stability condition for the free parameters of the observer. Nominal weights are adjusted during the preliminary training process using the least mean square (LMS) method. Lyapunov theory is used to obtain the upper bounds for the weight dynamics as well as for the mean squared estimation error. Two numerical examples illustrate this approach: first, a nonlinear electric system governed by Chua's equation, and second, the Lorenz oscillator. Both systems are assumed to be affected by external perturbations and their parameters are unknown. PMID:19674951

  3. Neural networks involved in adolescent reward processing: An activation likelihood estimation meta-analysis of functional neuroimaging studies.

    PubMed

    Silverman, Merav H; Jedd, Kelly; Luciana, Monica

    2015-11-15

    Behavioral responses to, and the neural processing of, rewards change dramatically during adolescence and may contribute to observed increases in risk-taking during this developmental period. Functional MRI (fMRI) studies suggest differences between adolescents and adults in neural activation during reward processing, but findings are contradictory, and effects have been found in non-predicted directions. The current study uses an activation likelihood estimation (ALE) approach for quantitative meta-analysis of functional neuroimaging studies to: (1) confirm the network of brain regions involved in adolescents' reward processing, (2) identify regions involved in specific stages (anticipation, outcome) and valence (positive, negative) of reward processing, and (3) identify differences in activation likelihood between adolescent and adult reward-related brain activation. Results reveal a subcortical network of brain regions involved in adolescent reward processing similar to that found in adults with major hubs including the ventral and dorsal striatum, insula, and posterior cingulate cortex (PCC). Contrast analyses find that adolescents exhibit greater likelihood of activation in the insula while processing anticipation relative to outcome and greater likelihood of activation in the putamen and amygdala during outcome relative to anticipation. While processing positive compared to negative valence, adolescents show increased likelihood for activation in the posterior cingulate cortex (PCC) and ventral striatum. Contrasting adolescent reward processing with the existing ALE of adult reward processing reveals increased likelihood for activation in limbic, frontolimbic, and striatal regions in adolescents compared with adults. Unlike adolescents, adults also activate executive control regions of the frontal and parietal lobes. These findings support hypothesized elevations in motivated activity during adolescence. PMID:26254587

  4. Multiprocessor Neural Network in Healthcare.

    PubMed

    Godó, Zoltán Attila; Kiss, Gábor; Kocsis, Dénes

    2015-01-01

    A possible way of creating a multiprocessor artificial neural network is by the use of microcontrollers. The RISC processors' high performance and large number of I/O ports make them well suited for creating such a system. During our research, we wanted to see if it is possible to efficiently create interaction between an artificial neural network and the natural nervous system. To achieve as much analogy to the living nervous system as possible, we created a frequency-modulated analog connection between the units. Our system is connected to the living nervous system through 128 microelectrodes. Two-way communication is provided through A/D transformation, which is even capable of testing psychopharmacons. The microcontroller-based analog artificial neural network can play a great role in medical signal processing, such as ECG, EEG, etc. PMID:26152990

  5. Using neural networks for process planning

    NASA Astrophysics Data System (ADS)

    Huang, Samuel H.; Zhang, HongChao

    1995-08-01

    Process planning has been recognized as an interface between computer-aided design and computer-aided manufacturing. Since the late 1960s, computer techniques have been used to automate process planning activities. AI-based techniques are designed for capturing, representing, organizing, and utilizing knowledge by computers, and are extremely useful for automated process planning. To date, most of the AI-based approaches used in automated process planning are variations of knowledge-based expert systems. Due to the knowledge-acquisition bottleneck, expert systems are not sufficient for solving process planning problems. Fortunately, AI has developed other techniques that are useful for knowledge acquisition, e.g., neural networks. Neural networks have several advantages over expert systems that are desired in today's manufacturing practice. However, very few neural network applications in process planning have been reported. We present this paper in order to stimulate research on using neural networks for process planning. This paper also identifies the problems with neural networks and suggests some possible solutions, which will provide guidelines for research and implementation.

  6. Neural network ultrasound image analysis

    NASA Astrophysics Data System (ADS)

    Schneider, Alexander C.; Brown, David G.; Pastel, Mary S.

    1993-09-01

    Neural network based analysis of ultrasound image data was carried out on liver scans of normal subjects and those diagnosed with diffuse liver disease. In a previous study, ultrasound images from a group of normal volunteers, Gaucher's disease patients, and hepatitis patients were obtained by Garra et al., who used classical statistical methods to distinguish among these three classes. In the present work, neural network classifiers were employed with the same image features found useful in the previous study for this task. Both standard backpropagation neural networks and a recently developed biologically-inspired network called Dystal were used. Classification performance, as measured by the area under a receiver operating characteristic curve, was generally excellent for the backpropagation networks and was roughly comparable to that of classical statistical discriminators tested on the same data set and documented in the earlier study. Performance of the Dystal network was significantly inferior; however, this may be due to the choice of network parameters. Potential methods for enhancing network performance were identified.

  7. Neural network tomography: network replication from output surface geometry.

    PubMed

    Minnett, Rupert C J; Smith, Andrew T; Lennon, William C; Hecht-Nielsen, Robert

    2011-06-01

    Multilayer perceptron networks whose outputs consist of affine combinations of hidden units using the tanh activation function are universal function approximators and are used for regression, typically by reducing the MSE with backpropagation. We present a neural network weight learning algorithm that directly positions the hidden units within input space by numerically analyzing the curvature of the output surface. Our results show that under some sampling requirements, this method can reliably recover the parameters of a neural network used to generate a data set. PMID:21377326
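The network form assumed in the abstract, an affine combination of tanh hidden units, can be written down directly; the parameter names below are ours, not the paper's:

```python
import numpy as np

def mlp_tanh(x, W, b, a, c):
    """f(x) = a . tanh(W x + b) + c: hidden tanh units combined affinely,
    the universal-approximator form the abstract refers to."""
    return a @ np.tanh(W @ x + b) + c

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 2)), np.zeros(3)   # 3 hidden units, 2 inputs
a, c = rng.normal(size=3), 1.5                # affine output combination
y = mlp_tanh(np.zeros(2), W, b, a, c)          # at x = 0 with b = 0, f = c
```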

  8. Localizing Tortoise Nests by Neural Networks

    PubMed Central

    2016-01-01

    The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition. PMID:26985660

  9. Localizing Tortoise Nests by Neural Networks.

    PubMed

    Barbuti, Roberto; Chessa, Stefano; Micheli, Alessio; Pucci, Rita

    2016-01-01

    The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition. PMID:26985660
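On the input side, the input delay neural network (IDNN) idea reduces to feeding the classifier a sliding window of delayed samples; a sketch with a hypothetical one-axis signal:

```python
import numpy as np

def delay_windows(signal, n_delays):
    """Input-delay representation: each input vector stacks the current
    sample with its n_delays predecessors (a sliding window)."""
    return np.stack([signal[i - n_delays:i + 1]
                     for i in range(n_delays, len(signal))])

sig = np.arange(10.0)            # stand-in for one accelerometer axis
X = delay_windows(sig, n_delays=3)
assert X.shape == (7, 4)         # 7 windows of 4 samples each
assert np.allclose(X[0], [0, 1, 2, 3])
```

Each row of `X` would then be one input pattern to a small feed-forward classifier.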

  10. Modulation of Neural Network Activity through Single Cell Ablation: An in Vitro Model of Minimally Invasive Neurosurgery.

    PubMed

    Soloperto, Alessandro; Bisio, Marta; Palazzolo, Gemma; Chiappalone, Michela; Bonifazi, Paolo; Difato, Francesco

    2016-01-01

    The technological advancement of optical approaches, and the growth of their applications in neuroscience, has allowed investigations of the physio-pathology of neural networks at the single cell level. Consequently, a better understanding of the role of single neurons in the onset and progression of neurodegenerative conditions has resulted in a strong demand for surgical tools operating with single cell resolution. Optical systems already provide subcellular resolution to monitor and manipulate living tissues, and thus allow the potential of surgery actuated at the single cell level to be assessed. In the present work, we report an in vitro experimental model of minimally invasive surgery applied to neuronal cultures expressing a genetically encoded calcium sensor. The experimental protocol entails continuous monitoring of the network activity before and after the ablation of a single neuron, to provide a robust evaluation of the induced changes in the network activity. We report that in subpopulations of about 1000 neurons, even the ablation of a single unit produces a reduction of the overall network activity. The reported protocol represents a simple and cost-effective model to study the efficacy of single-cell surgery, and it could represent a test-bed to study surgical procedures circumventing the abrupt and complete tissue removal in pathological conditions. PMID:27527143

  11. Kannada character recognition system using neural network

    NASA Astrophysics Data System (ADS)

    Kumar, Suresh D. S.; Kamalapuram, Srinivasa K.; Kumar, Ajay B. R.

    2013-03-01

    Handwriting recognition has been one of the active and challenging research areas in the field of pattern recognition. It has numerous applications, including reading aids for the blind, bank cheque processing, and conversion of any handwritten document into structural text form. There is not yet a sufficient body of work on Indian-language character recognition, especially for the Kannada script, one of the 15 major scripts in India. In this paper an attempt is made to recognize handwritten Kannada characters using feed-forward neural networks. A handwritten Kannada character is resized to 20x30 pixels. The resized character is used for training the neural network. Once the training process is completed, the same character is given as input to the neural network with different numbers of neurons in the hidden layer, and the recognition accuracy rates for different Kannada characters have been calculated and compared. The results show that the proposed system yields good recognition accuracy rates comparable to those of other handwritten character recognition systems.
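The shapes involved can be sketched directly: a 20x30 character image flattened to 600 inputs, one hidden layer, and one output unit per character class. The hidden size and class count below are assumptions for illustration, not figures from the paper:

```python
import numpy as np

# Forward pass of a feed-forward classifier over a resized character image.
rng = np.random.default_rng(0)
n_hidden, n_classes = 64, 49              # assumed hidden size and class count
W1 = rng.normal(scale=0.1, size=(600, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_classes))
img = rng.random((20, 30))                # stand-in for a resized character
h = np.tanh(img.reshape(-1) @ W1)         # 600 pixel inputs -> hidden layer
scores = h @ W2                           # one score per character class
predicted_class = int(np.argmax(scores))
```

Varying `n_hidden` and comparing accuracy is the experiment the abstract describes.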

  12. Adolescents' risky decision-making activates neural networks related to social cognition and cognitive control processes.

    PubMed

    Rodrigo, María José; Padrón, Iván; de Vega, Manuel; Ferstl, Evelyn C

    2014-01-01

  13. Adolescents’ risky decision-making activates neural networks related to social cognition and cognitive control processes

    PubMed Central

    Rodrigo, María José; Padrón, Iván; de Vega, Manuel; Ferstl, Evelyn C.

    2014-01-01

    This study examines by means of functional magnetic resonance imaging the neural mechanisms underlying adolescents’ risk decision-making in social contexts. We hypothesized that the social context would engage brain regions associated with social cognition processes, and we also expected developmental changes. Sixty participants (adolescents: 17–18, and young adults: 21–22 years old) read narratives describing typical situations of decision-making in the presence of peers. They were asked to make choices in risky situations (e.g., taking or refusing a drug) or ambiguous situations (e.g., eating a hamburger or a hotdog). Risky as compared to ambiguous scenarios activated bilateral temporoparietal junction (TPJ), bilateral middle temporal gyrus (MTG), right medial prefrontal cortex, and the precuneus bilaterally; i.e., brain regions related to social cognition processes, such as self-reflection and theory of mind (ToM). In addition, brain structures related to cognitive control were active [right anterior cingulate cortex (ACC), bilateral dorsolateral prefrontal cortex (DLPFC), bilateral orbitofrontal cortex], whereas no significant clusters were obtained in the reward system (ventral striatum). Choosing the dangerous option involved a further activation of control areas (ACC) and emotional and social cognition areas (temporal pole). Adolescents employed more neural resources than young adults in the right DLPFC and the right TPJ in risk situations. When choosing the dangerous option, young adults showed a further engagement in ToM related regions (bilateral MTG) and in motor control regions related to the planning of actions (pre-supplementary motor area). Finally, the right insula and the right superior temporal gyrus were more activated in women than in men, suggesting more emotional involvement and more intensive modeling of the others’ perspective in the risky conditions. These findings call for more comprehensive developmental accounts of decision

  14. Studying the explanatory capacity of artificial neural networks for understanding environmental chemical quantitative structure-activity relationship models.

    PubMed

    Yang, Lei; Wang, Peng; Jiang, Yilin; Chen, Jian

    2005-01-01

    Although artificial neural networks (ANNs) have been shown to exhibit superior predictive power in the study of quantitative structure-activity relationships (QSARs), they have also been labeled a "black box" because they provide little explanatory insight into the relative influence of the independent variables in the predictive process, so that little information on how and why compounds work can be obtained. Here, we turn to their explanatory capacity and propose a method for assessing the relative importance of the variables describing molecular structure, based on the axon connection weights and on the partial derivatives of the ANN output with respect to its inputs. The method identifies variables that contribute significantly to network predictions and thereby provides a variable selection procedure for ANNs. We show that this approach greatly illuminates the "black box" mechanics of ANNs, making them very useful for understanding environmental chemical QSAR models. PMID:16309287
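    As a hedged illustration of weight-based importance analysis for a single-hidden-layer network (a generic Garson-style sketch, not necessarily the paper's exact method; all names, shapes and data below are invented):

```python
import numpy as np

def garson_importance(w_in_hidden, w_hidden_out):
    """Relative input importance from weight magnitudes.

    w_in_hidden: (n_inputs, n_hidden) input-to-hidden weights
    w_hidden_out: (n_hidden,) hidden-to-output weights
    """
    # Contribution of input i routed through hidden unit j
    contrib = np.abs(w_in_hidden) * np.abs(w_hidden_out)   # (n_inputs, n_hidden)
    contrib /= contrib.sum(axis=0, keepdims=True)          # share of each input per unit
    importance = contrib.sum(axis=1)                       # aggregate over hidden units
    return importance / importance.sum()                   # normalized to sum to 1

rng = np.random.default_rng(0)
w1 = rng.normal(size=(4, 3))      # invented trained weights, 4 descriptors, 3 hidden units
w2 = rng.normal(size=3)
imp = garson_importance(w1, w2)
print(imp, imp.sum())
```

The ranked `imp` values would then drive variable selection: descriptors with negligible importance are candidates for removal before retraining.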

  15. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2003-12-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NO{sub x} formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. 
The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate

  16. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2004-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. 
The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate around

  17. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2004-03-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing co-funding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. 
The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate around

  18. Tampa Electric Neural Network Sootblowing

    SciTech Connect

    Mark A. Rhode

    2002-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NO{sub x} formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, online, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NO{sub x} emissions and improve heat rate

  19. Plant Growth Models Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Bubenheim, David

    1997-01-01

    In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.

  20. Centroid calculation using neural networks

    NASA Astrophysics Data System (ADS)

    Himes, Glenn S.; Inigo, Rafael M.

    1992-01-01

    Centroid calculation provides a means of eliminating translation problems, which is useful for automatic target recognition. A neural network implementation of centroid calculation is described that uses a spatial filter and a Hopfield network to determine the centroid location of an object. Spatial filtering of a segmented window creates a result whose peak value occurs at the centroid of the input data set. A Hopfield network then finds the location of this peak and hence gives the location of the centroid. Hardware implementations of the networks are described and simulation results are provided.
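    For reference, the quantity the network computes can be stated directly: the centroid of a segmented window is the intensity-weighted mean of the pixel coordinates. A minimal sketch of that plain restatement (not the paper's spatial-filter/Hopfield hardware implementation):

```python
import numpy as np

def centroid(image):
    """Intensity-weighted centroid (row, col) of a 2-D image window."""
    ys, xs = np.indices(image.shape)        # coordinate grids
    total = image.sum()
    return float((ys * image).sum() / total), float((xs * image).sum() / total)

img = np.zeros((8, 8))
img[2:5, 3:6] = 1.0                         # a 3x3 object
print(centroid(img))                        # -> (3.0, 4.0)
```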

  1. Neural Networks for Flight Control

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1996-01-01

    Neural networks are being developed at NASA Ames Research Center to permit real-time adaptive control of time varying nonlinear systems, enhance the fault-tolerance of mission hardware, and permit online system reconfiguration. In general, the problem of controlling time varying nonlinear systems with unknown structures has not been solved. Adaptive neural control techniques show considerable promise and are being applied to technical challenges including automated docking of spacecraft, dynamic balancing of the space station centrifuge, online reconfiguration of damaged aircraft, and reducing cost of new air and spacecraft designs. Our experiences have shown that neural network algorithms solved certain problems that conventional control methods have been unable to effectively address. These include damage mitigation in nonlinear reconfiguration flight control, early performance estimation of new aircraft designs, compensation for damaged planetary mission hardware by using redundant manipulator capability, and space sensor platform stabilization. This presentation explored these developments in the context of neural network control theory. The discussion began with an overview of why neural control has proven attractive for NASA application domains. The more important issues in control system development were then discussed with references to significant technical advances in the literature. Examples of how these methods have been applied were given, followed by projections of emerging application needs and directions.

  2. Neural networks and applications tutorial

    NASA Astrophysics Data System (ADS)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances of computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed interest in artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures that were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.
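    The learning rules of lecture 2 can be illustrated with the simplest case, the perceptron update rule. The AND data, learning rate and epoch count below are illustrative choices, not taken from the tutorial:

```python
import numpy as np

# Perceptron learning rule on the linearly separable AND problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])               # AND targets
w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(20):                      # a few epochs suffice for AND
    for xi, ti in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += lr * (ti - pred) * xi       # move weights toward misclassified input
        b += lr * (ti - pred)

print([(1 if xi @ w + b > 0 else 0) for xi in X])   # -> [0, 0, 0, 1]
```

For a linearly separable problem such as AND the rule is guaranteed to converge; XOR, by contrast, needs a hidden layer.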

  3. Constructive approximate interpolation by neural networks

    NASA Astrophysics Data System (ADS)

    Llanas, B.; Sainz, F. J.

    2006-04-01

    We present a type of single-hidden layer feedforward neural networks with sigmoidal nondecreasing activation function. We call them ai-nets. They can approximately interpolate, with arbitrary precision, any set of distinct data in one or several dimensions. They can uniformly approximate any continuous function of one variable and can be used for constructing uniform approximants of continuous functions of several variables. All these capabilities are based on a closed expression of the networks.
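    A hedged sketch of the general idea (not the paper's exact closed expression): in one dimension, steep sigmoids placed at the midpoints between consecutive data abscissas, weighted by the jumps in ordinates, approximately interpolate the data with arbitrary precision as the steepness grows.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def ai_net(xs, ys, k=200.0):
    """Single-hidden-layer sigmoidal network approximately interpolating
    sorted 1-D data (xs, ys); k controls the sigmoid steepness."""
    mids = (xs[:-1] + xs[1:]) / 2.0          # one hidden unit per data gap
    jumps = np.diff(ys)                      # output-layer weights
    def f(x):
        return ys[0] + sum(j * sigmoid(k * (x - m)) for j, m in zip(jumps, mids))
    return f

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.0, -1.0, 0.5, 2.0])
f = ai_net(xs, ys)
print([round(float(f(x)), 3) for x in xs])   # approximately [1.0, -1.0, 0.5, 2.0]
```

At each data point all sigmoids to its left saturate near 1 and all to its right near 0, so the partial sums telescope to the target value up to exponentially small error.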

  4. Artificial neural networks in medicine

    SciTech Connect

    Keller, P.E.

    1994-07-01

    This Technology Brief provides an overview of artificial neural networks (ANN). A definition and explanation of an ANN is given and situations in which an ANN is used are described. ANN applications to medicine specifically are then explored and the areas in which it is currently being used are discussed. Included are medical diagnostic aides, biochemical analysis, medical image analysis and drug development.

  5. Neural networks for handwriting recognition

    NASA Astrophysics Data System (ADS)

    Kelly, David A.

    1992-09-01

    The market for a product that can read handwritten forms, such as insurance applications, re-order forms, or checks, is enormous. Companies could save millions of dollars each year if they had an effective and efficient way to read handwritten forms into a computer without human intervention. Urged on by the potential gold mine that an adequate solution would yield, a number of companies and researchers have developed, and are developing, neural network-based solutions to this long-standing problem. This paper briefly outlines the current state-of-the-art in neural network-based handwriting recognition research and products. The first section of the paper examines the potential market for this technology. The next section outlines the steps in the recognition process, followed by a number of the basic issues that need to be dealt with to solve the recognition problem in a real-world setting. Next, an overview of current commercial solutions and research projects shows the different ways that neural networks are applied to the problem. This is followed by a breakdown of the current commercial market and the future outlook for neural network-based handwriting recognition technology.

  6. How Neural Networks Learn from Experience.

    ERIC Educational Resources Information Center

    Hinton, Geoffrey E.

    1992-01-01

    Discusses computational studies of learning in artificial neural networks and findings that may provide insights into the learning abilities of the human brain. Describes efforts to test theories about brain information processing, using artificial neural networks. Vignettes include information concerning how a neural network represents…

  7. Analysis of short single rest/activation epoch fMRI by self-organizing map neural network

    NASA Astrophysics Data System (ADS)

    Erberich, Stephan G.; Dietrich, Thomas; Kemeny, Stefan; Krings, Timo; Willmes, Klaus; Thron, Armin; Oberschelp, Walter

    2000-04-01

    Functional magnetic resonance imaging (fMRI) has become a standard non-invasive brain imaging technique delivering high spatial resolution. Brain activation is determined by the magnetic susceptibility of the blood oxygen level (BOLD effect) during an activation task, e.g., motor, auditory and visual tasks. Box-car paradigms usually have 2-4 rest/activation epochs, with an overall of at least 50 volumes per scan in the time domain. Analysis methods based on statistical tests, like Student's t-test, need a large number of repetitively acquired brain volumes to gain statistical power. The technique introduced here, based on a self-organizing map neural network (SOM), makes use of the intrinsic features of the condition change between rest and activation epochs and was demonstrated to differentiate between the conditions with fewer time points, using only one rest and one activation epoch. The method reduces scan and analysis time and the probability of motion artifacts caused by relaxation of the patient's head. fMRI data of patients undergoing pre-surgical evaluation and of volunteers were acquired with motor (hand clenching and finger tapping), sensory (ice application), auditory (phonological and semantic word recognition) and visual (mental rotation) paradigms. For imaging we used different BOLD-contrast-sensitive Gradient Echo Planar Imaging (GE-EPI) single-shot pulse sequences (TR 2000 and 4000, 64 X 64 and 128 X 128, 15-40 slices) on a Philips Gyroscan NT 1.5 Tesla MR imager. All paradigms were RARARA (R = rest, A = activation) with an epoch width of 11 time points each. We used the self-organizing neural network implementation described by T. Kohonen with a 4 X 2 2D neuron map. The presented time course vectors were clustered by similar features in the 2D neuron map. Three neural networks were trained and used for labeling with the time course vectors of one, two and all three on/off epochs. The results were also compared by using a
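    A minimal Kohonen SOM sketch in the spirit of the study's clustering of time-course vectors; the 1-D grid, training schedule and synthetic 11-point "rest"/"activation" time courses below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, n_units=8, epochs=50, lr0=0.5, radius0=2.0):
    """Train a 1-D-grid Kohonen SOM on row vectors in `data`."""
    w = rng.normal(size=(n_units, data.shape[1]))
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)                       # decaying learning rate
        radius = max(radius0 * (1 - e / epochs), 0.5)     # shrinking neighborhood
        for x in rng.permutation(data):
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
            dist = np.abs(np.arange(n_units) - bmu)       # distance on the grid
            h = np.exp(-(dist ** 2) / (2 * radius ** 2))  # neighborhood function
            w += lr * h[:, None] * (x - w)                # pull units toward x
    return w

# Two synthetic conditions: flat "rest" vs. elevated "activation" time courses
rest = rng.normal(0.0, 0.1, size=(20, 11))
act = rng.normal(1.0, 0.1, size=(20, 11))
w = train_som(np.vstack([rest, act]))
label = lambda x: int(np.argmin(((w - x) ** 2).sum(axis=1)))
print(label(rest[0]), label(act[0]))   # the two conditions map to different units
```

After training, labeling each voxel's time course by its best-matching unit separates the rest-like from the activation-like responses, which is the essence of the condition-change clustering described above.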

  8. Energy coding in neural network with inhibitory neurons.

    PubMed

    Wang, Ziyin; Wang, Rubin; Fang, Ruiyan

    2015-04-01

    This paper aimed at assessing and comparing the effects of inhibitory neurons on the neural energy distribution and on network activities in a neural network, relative to a network without inhibitory neurons, in order to understand the nature of neural energy distribution and neural energy coding. Under stimulation, synchronous oscillation differs significantly between neural networks with and without inhibitory neurons, and this difference can be quantitatively evaluated by the characteristic energy distribution. In addition, the difference in synchronous oscillation of the neural activity can be quantitatively described by the change of the energy distribution as the network parameters are gradually adjusted. Compared with the traditional method of correlation coefficient analysis, quantitative indicators based on the characteristics of neural energy distribution are more effective in reflecting the dynamic features of neural network activities. Meanwhile, this coding method, taking a global perspective on neural activity, effectively avoids the current defects of neural encoding and decoding theory and the enormous difficulties they encounter. Our studies have shown that neural energy coding is a new coding theory with high efficiency and great potential. PMID:25806094

  9. Intrinsic adaptation in autonomous recurrent neural networks.

    PubMed

    Marković, Dimitrije; Gros, Claudius

    2012-02-01

    A massively recurrent neural network responds on one side to input stimuli and is autonomously active, on the other side, in the absence of sensory inputs. Stimuli and information processing depend crucially on the quality of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flows, which are quite insensitive to external stimuli, interceded by chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics. PMID:22091667

  10. An introduction to neural networks: A tutorial

    SciTech Connect

    Walker, J.L.; Hill, E.V.K.

    1994-12-31

    Neural networks are a powerful set of mathematical techniques used for solving linear and nonlinear classification and prediction (function approximation) problems. Inspired by studies of the brain, these series and parallel combinations of simple functional units called artificial neurons have the ability to learn or be trained to solve very complex problems. Fundamental aspects of artificial neurons are discussed, including their activation functions, their combination into multilayer feedforward networks with hidden layers, and the use of bias neurons to reduce training time. The back propagation (of errors) paradigm for supervised training of feedforward networks is explained. Then, the architecture and mathematics of a Kohonen self organizing map for unsupervised learning are discussed. Two example problems are given. The first is for the application of a back propagation neural network to learn the correct response to an input vector using supervised training. The second is a classification problem using a self organizing map and unsupervised training.
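    The back-propagation paradigm described above can be sketched on the classic XOR problem (the same validation task mentioned in the first abstract of this listing); the architecture, seed and hyperparameters here are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])           # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)       # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)       # hidden -> output
sig = lambda t: 1.0 / (1.0 + np.exp(-t))
lr = 0.5

def forward():
    h = sig(X @ W1 + b1)
    return h, sig(h @ W2 + b2)

_, out0 = forward()
loss0 = float(((out0 - y) ** 2).mean())              # initial mean squared error

for _ in range(10000):
    h, out = forward()
    d_out = (out - y) * out * (1 - out)              # delta at output layer
    d_h = (d_out @ W2.T) * h * (1 - h)               # error back-propagated to hidden
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

_, out = forward()
loss = float(((out - y) ** 2).mean())
print(f"MSE {loss0:.3f} -> {loss:.3f}")              # training reduces the error
```

The two delta lines are the "back propagation (of errors)" step: the output error is multiplied by the derivative of the sigmoid and passed backward through the hidden-to-output weights.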

  11. Astrocytes Modulate Neural Network Activity by Ca2+-Dependent Uptake of Extracellular K+

    PubMed Central

    Wang, Fushun; Smith, Nathan A.; Xu, Qiwu; Fujita, Takumi; Baba, Akemichi; Matsuda, Toshio; Takano, Takahiro; Bekar, Lane; Nedergaard, Maiken

    2012-01-01

    Astrocytes are electrically nonexcitable cells that display increases in cytosolic calcium ion (Ca2+) in response to various neurotransmitters and neuromodulators. However, the physiological role of astrocytic Ca2+ signaling remains controversial. We show here that astrocytic Ca2+ signaling ex vivo and in vivo stimulated the Na+,K+-ATPase (Na+- and K+-dependent adenosine triphosphatase), leading to a transient decrease in the extracellular potassium ion (K+) concentration. This in turn led to neuronal hyperpolarization and suppressed baseline excitatory synaptic activity, detected as a reduced frequency of excitatory postsynaptic currents. Synaptic failures decreased in parallel, leading to an increase in synaptic fidelity. The net result was that astrocytes, through active uptake of K+, improved the signal-to-noise ratio of synaptic transmission. Active control of the extracellular K+ concentration thus provides astrocytes with a simple yet powerful mechanism to rapidly modulate network activity. PMID:22472648

  12. Overview of artificial neural networks.

    PubMed

    Zou, Jinming; Han, Yi; So, Sung-Sau

    2008-01-01

    The artificial neural network (ANN), or simply neural network, is a machine learning method evolved from the idea of simulating the human brain. The data explosion in modern drug discovery research requires sophisticated analysis methods to uncover the hidden causal relationships between single or multiple responses and a large set of properties. The ANN is one of many versatile tools to meet the demand in drug discovery modeling. Compared to a traditional regression approach, the ANN is capable of modeling complex nonlinear relationships. The ANN also has excellent fault tolerance and is fast and highly scalable with parallel processing. This chapter introduces the background of ANN development and outlines the basic concepts crucially important for understanding more sophisticated ANNs. Several commonly used learning methods and network setups are discussed briefly at the end of the chapter. PMID:19065803

  13. Altered Synchronizations among Neural Networks in Geriatric Depression

    PubMed Central

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G.; Steffens, David C.

    2015-01-01

    Although major depression has been considered as a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Because of the known disconnection theory, geriatric depression could be a useful model in studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data were collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks as well as correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of the between-network analyses in examining neural models for geriatric depression. PMID:26180795

  14. Altered Synchronizations among Neural Networks in Geriatric Depression.

    PubMed

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G; Steffens, David C

    2015-01-01

    Although major depression has been considered as a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Because of the known disconnection theory, geriatric depression could be a useful model in studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data were collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks as well as correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of the between-network analyses in examining neural models for geriatric depression. PMID:26180795

  15. Multistability analysis of a general class of recurrent neural networks with non-monotonic activation functions and time-varying delays.

    PubMed

    Liu, Peng; Zeng, Zhigang; Wang, Jun

    2016-07-01

    This paper addresses the multistability of a general class of recurrent neural networks with time-varying delays. Without assuming linearity or monotonicity of the activation functions, several new sufficient conditions are obtained to ensure the existence of (2K+1)^n equilibrium points, and the exponential stability of (K+1)^n equilibrium points among them, for n-neuron neural networks, where K is a positive integer jointly determined by the type of activation functions and the parameters of the neural network. The obtained results generalize and improve upon earlier publications. Furthermore, the attraction basins of these exponentially stable equilibrium points are estimated; it is revealed that they can be larger than their originally partitioned subsets. Finally, three illustrative numerical examples show the effectiveness of the theoretical results. PMID:27136665
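A minimal sketch of this kind of multistability for the simplest case, a single neuron (n = 1, K = 1) with a saturating piecewise-linear activation; the weight w = 2 and the Euler step are illustrative assumptions, not taken from the paper. The network x' = -x + w*f(x) then has (2K+1)^n = 3 equilibria, of which (K+1)^n = 2 are exponentially stable:

```python
import numpy as np

def f(x):                      # saturating piecewise-linear activation (assumed)
    return np.clip(x, -1.0, 1.0)

def simulate(x0, w=2.0, dt=0.01, steps=5000):
    """Euler-integrate the one-neuron network x' = -x + w*f(x)."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + w * f(x))
    return x

# three equilibria: x = -2 and x = 2 (stable), x = 0 (unstable)
print(simulate(0.5))    # -> ~2.0
print(simulate(-0.5))   # -> ~-2.0
```

Trajectories starting on either side of the unstable origin settle into different stable equilibria; the paper's sufficient conditions generalize this coexistence to n-neuron networks with delays.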

  16. Neural Networks For Visual Telephony

    NASA Astrophysics Data System (ADS)

    Gottlieb, A. M.; Alspector, J.; Huang, P.; Hsing, T. R.

    1988-10-01

    By considering how an image is processed by the eye and brain, we may find ways to simplify the task of transmitting complex video images over a telecommunication channel. Just as the retina and visual cortex reduce the amount of information sent to other areas of the brain, electronic systems can be designed to compress visual data, encode features, and adapt to new scenes for video transmission. In this talk, we describe a system inspired by models of neural computation that may, in the future, augment standard digital processing techniques for image compression. In the next few years it is expected that a compact, low-cost, full-motion video telephone operating over an ISDN basic access line (144 kbit/s) will be shown to be feasible. Such systems will likely be based on a standard digital signal processing approach. Here, we discuss an alternative method that does not use standard digital signal processing but instead uses electronic neural networks to realize the large compression necessary for a low bit-rate video telephone. This neural network approach is not being advocated as a near-term solution for visual telephony. However, low bit-rate visual telephony is an area where neural network technology may, in the future, find a significant application.

  17. Validation and regulation of medical neural networks.

    PubMed

    Rodvold, D M

    2001-01-01

    Using artificial neural networks (ANNs) in medical applications can be challenging because of the often-experimental nature of ANN construction and the "black box" label that is frequently attached to them. In the US, medical neural networks are regulated by the Food and Drug Administration. This article briefly discusses the documented FDA policy on neural networks and the various levels of formal acceptance that neural network development groups might pursue. To assist medical neural network developers in creating robust and verifiable software, this paper provides a development process model targeted specifically to ANNs for critical applications. PMID:11790274

  18. Controlling neural network responsiveness: tradeoffs and constraints

    PubMed Central

    Keren, Hanna; Marom, Shimon

    2014-01-01

    In recent years much effort has been invested in means to control neural population responses at the whole-brain level, within the context of developing advanced medical applications. The tradeoffs and constraints involved, however, remain elusive due to the obvious complications entailed by studying whole-brain dynamics. Here, we present effective control of response features (probability and latency) of cortical networks in vitro over many hours, and offer this approach as an experimental toy for studying the controllability of neural networks in the wider context. Exercising this approach, we show that enforcement of stable high activity rates by means of closed-loop control may enhance alteration of the underlying global input–output relations and the activity-dependent dispersion of neuronal pair-wise correlations across the network. PMID:24808860

  19. Tracing Activity Across the Whole Brain Neural Network with Optogenetic Functional Magnetic Resonance Imaging

    PubMed Central

    Lee, Jin Hyung

    2011-01-01

    Despite the overwhelming need, there has been a relatively large gap in our ability to trace network level activity across the brain. The complex dense wiring of the brain makes it extremely challenging to understand cell-type specific activity and their communication beyond a few synapses. Recent development of the optogenetic functional magnetic resonance imaging (ofMRI) provides a new impetus for the study of brain circuits by enabling causal tracing of activities arising from defined cell types and firing patterns across the whole brain. Brain circuit elements can be selectively triggered based on their genetic identity, cell body location, and/or their axonal projection target with temporal precision while the resulting network response is monitored non-invasively with unprecedented spatial and temporal accuracy. With further studies including technological innovations to bring ofMRI to its full potential, ofMRI is expected to play an important role in our system-level understanding of the brain circuit mechanism. PMID:22046160

  20. A Novel Higher Order Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Xu, Shuxiang

    2010-05-01

    In this paper a new Higher Order Neural Network (HONN) model is introduced and applied to several data mining tasks. Data mining extracts hidden patterns and valuable information from large databases. A hyperbolic tangent function is used as the neuron activation function for the new HONN model. Experiments are conducted to demonstrate the advantages and disadvantages of the new HONN model when compared with several conventional Artificial Neural Network (ANN) models: a feedforward ANN with the sigmoid activation function, a feedforward ANN with the hyperbolic tangent activation function, and a Radial Basis Function (RBF) ANN with the Gaussian activation function. The experimental results suggest that the new HONN offers higher generalization capability as well as better handling of missing data.
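As a hedged illustration of what "higher order" buys (the specific HONN of the paper is not reproduced here, and the weights below are hand-picked assumptions), a single second-order neuron with a hyperbolic tangent activation can separate XOR-type patterns, which no single first-order neuron can:

```python
import numpy as np

def honn_neuron(x, w_lin, w_quad, b):
    """Second-order neuron: tanh of linear terms plus pairwise products x_i*x_j."""
    quad = np.outer(x, x)[np.triu_indices(len(x), k=1)]   # products x_i*x_j, i < j
    return np.tanh(b + w_lin @ x + w_quad @ quad)

# XOR on inputs encoded as +/-1: one second-order term (weight -2 on x1*x2)
# solves it, whereas the problem is not linearly separable.
for x1, x2 in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    y = honn_neuron(np.array([x1, x2]), np.zeros(2), np.array([-2.0]), 0.0)
    print((x1, x2), y > 0)   # True exactly when the inputs differ
```

The product terms give the neuron a nonlinear decision boundary for free, which is the intuition behind the generalization claims made for HONNs.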

  1. An efficient neural network approach to dynamic robot motion planning.

    PubMed

    Yang, S X; Meng, M

    2000-03-01

    In this paper, a biologically inspired neural network approach to real-time collision-free motion planning of mobile robots or robot manipulators in a nonstationary environment is proposed. Each neuron in the topologically organized neural network has only local connections, whose neural dynamics is characterized by a shunting equation. Thus the computational complexity linearly depends on the neural network size. The real-time robot motion is planned through the dynamic activity landscape of the neural network without any prior knowledge of the dynamic environment, without explicitly searching over the free workspace or the collision paths, and without any learning procedures. Therefore it is computationally efficient. The global stability of the neural network is guaranteed by qualitative analysis and the Lyapunov stability theory. The effectiveness and efficiency of the proposed approach are demonstrated through simulation studies. PMID:10935758
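A rough sketch of this scheme on a small grid, assuming illustrative shunting parameters (A, B, D and the target/obstacle input magnitudes are not taken from the paper): the target injects positive input, obstacles inject negative input, only positive activity propagates to the 8 neighbors, and the robot climbs the resulting activity landscape.

```python
import numpy as np

A, B, D, dt = 10.0, 1.0, 1.0, 0.01   # decay rate, bounds, time step (assumed)

def plan(grid, target, start, iters=2000):
    """Plan a collision-free path on a grid of shunting-equation neurons.
    grid: 2-D array, 1 = obstacle, 0 = free space."""
    n, m = grid.shape
    x = np.zeros((n, m))
    I = np.where(grid == 1, -100.0, 0.0)   # obstacles inject inhibition
    I[target] = 100.0                      # the target injects excitation
    for _ in range(iters):
        xp = np.pad(np.maximum(x, 0.0), 1) # only positive activity propagates
        s = sum(xp[1+di:1+di+n, 1+dj:1+dj+m]
                for di in (-1, 0, 1) for dj in (-1, 0, 1) if di or dj)
        x += dt * (-A*x + (B - x)*(np.maximum(I, 0) + s)
                   - (D + x)*np.maximum(-I, 0))
    path, pos = [start], start             # climb the activity landscape
    while pos != target and len(path) < n*m:
        i, j = pos
        nbrs = [(i+di, j+dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di or dj) and 0 <= i+di < n and 0 <= j+dj < m]
        pos = max(nbrs, key=lambda p: x[p])
        path.append(pos)
    return path

grid = np.zeros((5, 5)); grid[1:4, 2] = 1  # a wall between start and target
path = plan(grid, target=(2, 4), start=(2, 0))
print(path)
```

Because obstacle cells are clamped negative and only positive activity spreads, the landscape has no local maxima in free space, so the uphill walk cannot get trapped; this mirrors the paper's point that planning needs no explicit search over the workspace and no learning.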

  2. The use of neural networks for approximation of nuclear data

    SciTech Connect

    Korovin, Yu. A.; Maksimushkina, A. V.

    2015-12-15

    The article discusses the possibility of using neural networks for approximation or reconstruction of data such as the reaction cross sections. The quality of the approximation using fitting criteria is also evaluated. The activity of materials under irradiation is calculated from data obtained using neural networks.

  3. Hourly photosynthetically active radiation estimation in Midwestern United States from artificial neural networks and conventional regressions models.

    PubMed

    Yu, Xiaolei; Guo, Xulin

    2016-08-01

    The relationship between hourly photosynthetically active radiation (PAR) and global solar radiation (Rs) was analyzed from data gathered over 3 years at Bondville, IL, and Sioux Falls, SD, in the Midwestern USA. These data were used to determine the temporal variability of the PAR fraction and its dependence on different sky conditions, defined by the clearness index. Models based on artificial neural networks (ANNs) were then established for predicting hourly PAR. The performance of the proposed models was compared with four existing conventional regression models in terms of the normalized root mean square error (NRMSE), the coefficient of determination (r^2), the mean percentage error (MPE), and the relative standard error (RSE). The overall analysis shows that the ANN model can predict PAR accurately, especially under overcast and clear sky conditions, while parameters related to water vapor do not improve the prediction results significantly. PMID:26715137

  4. Hourly photosynthetically active radiation estimation in Midwestern United States from artificial neural networks and conventional regressions models

    NASA Astrophysics Data System (ADS)

    Yu, Xiaolei; Guo, Xulin

    2016-08-01

    The relationship between hourly photosynthetically active radiation (PAR) and global solar radiation (Rs) was analyzed from data gathered over 3 years at Bondville, IL, and Sioux Falls, SD, in the Midwestern USA. These data were used to determine the temporal variability of the PAR fraction and its dependence on different sky conditions, defined by the clearness index. Models based on artificial neural networks (ANNs) were then established for predicting hourly PAR. The performance of the proposed models was compared with four existing conventional regression models in terms of the normalized root mean square error (NRMSE), the coefficient of determination (r^2), the mean percentage error (MPE), and the relative standard error (RSE). The overall analysis shows that the ANN model can predict PAR accurately, especially under overcast and clear sky conditions, while parameters related to water vapor do not improve the prediction results significantly.

  5. Hourly photosynthetically active radiation estimation in Midwestern United States from artificial neural networks and conventional regressions models

    NASA Astrophysics Data System (ADS)

    Yu, Xiaolei; Guo, Xulin

    2015-12-01

    The relationship between hourly photosynthetically active radiation (PAR) and global solar radiation (Rs) was analyzed from data gathered over 3 years at Bondville, IL, and Sioux Falls, SD, in the Midwestern USA. These data were used to determine the temporal variability of the PAR fraction and its dependence on different sky conditions, defined by the clearness index. Models based on artificial neural networks (ANNs) were then established for predicting hourly PAR. The performance of the proposed models was compared with four existing conventional regression models in terms of the normalized root mean square error (NRMSE), the coefficient of determination (r^2), the mean percentage error (MPE), and the relative standard error (RSE). The overall analysis shows that the ANN model can predict PAR accurately, especially under overcast and clear sky conditions, while parameters related to water vapor do not improve the prediction results significantly.

  6. Terminal attractors in neural networks

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.
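The finite-time convergence that distinguishes a terminal attractor from a regular one can be seen in a one-dimensional sketch (the exponent 1/3 is a standard choice for violating the Lipschitz condition at the origin; the step size and horizon are arbitrary assumptions):

```python
import numpy as np

def simulate(rhs, x0=1.0, dt=1e-3, T=2.0):
    """Euler-integrate x' = rhs(x) from x0 over the horizon T."""
    x, t = x0, 0.0
    while t < T:
        x += dt * rhs(x)
        t += dt
    return x

terminal = lambda x: -np.sign(x) * abs(x) ** (1 / 3)  # Lipschitz fails at x = 0
regular  = lambda x: -x                               # ordinary exponential decay

# analytically, the terminal attractor reaches 0 at t* = (3/2)*x0^(2/3) = 1.5
print(abs(simulate(terminal)))   # -> ~0: converged in finite time
print(simulate(regular))         # -> ~0.135 (= e^-2): still decaying
```

The regular system only approaches its fixed point asymptotically, while the terminal system reaches it in finite time, which is what makes such points usable as exact stored memories.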

  7. The LILARTI neural network system

    SciTech Connect

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices, which include operating instructions, partial listings of program output and data files, and network construction information.

  8. Patterns of Cortical Oscillations Organize Neural Activity into Whole-Brain Functional Networks Evident in the fMRI BOLD Signal

    PubMed Central

    Whitman, Jennifer C.; Ward, Lawrence M.; Woodward, Todd S.

    2013-01-01

    Recent findings from electrophysiology and multimodal neuroimaging have elucidated the relationship between patterns of cortical oscillations evident in EEG/MEG and the functional brain networks evident in the BOLD signal. Much of the existing literature emphasized how high-frequency cortical oscillations are thought to coordinate neural activity locally, while low-frequency oscillations play a role in coordinating activity between more distant brain regions. However, the assignment of different frequencies to different spatial scales is an oversimplification. A more informative approach is to explore the arrangements by which these low- and high-frequency oscillations work in concert, coordinating neural activity into whole-brain functional networks. When relating such networks to the BOLD signal, we must consider how the patterns of cortical oscillations change at the same speed as cognitive states, which often last less than a second. Consequently, the slower BOLD signal may often reflect the summed neural activity of several transient network configurations. This temporal mismatch can be circumvented if we use spatial maps to assess correspondence between oscillatory networks and BOLD networks. PMID:23504590

  9. Patterns of Cortical Oscillations Organize Neural Activity into Whole-Brain Functional Networks Evident in the fMRI BOLD Signal.

    PubMed

    Whitman, Jennifer C; Ward, Lawrence M; Woodward, Todd S

    2013-01-01

    Recent findings from electrophysiology and multimodal neuroimaging have elucidated the relationship between patterns of cortical oscillations evident in EEG/MEG and the functional brain networks evident in the BOLD signal. Much of the existing literature emphasized how high-frequency cortical oscillations are thought to coordinate neural activity locally, while low-frequency oscillations play a role in coordinating activity between more distant brain regions. However, the assignment of different frequencies to different spatial scales is an oversimplification. A more informative approach is to explore the arrangements by which these low- and high-frequency oscillations work in concert, coordinating neural activity into whole-brain functional networks. When relating such networks to the BOLD signal, we must consider how the patterns of cortical oscillations change at the same speed as cognitive states, which often last less than a second. Consequently, the slower BOLD signal may often reflect the summed neural activity of several transient network configurations. This temporal mismatch can be circumvented if we use spatial maps to assess correspondence between oscillatory networks and BOLD networks. PMID:23504590

  10. Membership generation using multilayer neural network

    NASA Technical Reports Server (NTRS)

    Kim, Jaeseok

    1992-01-01

    There has been intensive research in neural network applications to pattern recognition problems. Particularly, the back-propagation network has attracted many researchers because of its outstanding performance in pattern recognition applications. In this section, we describe a new method to generate membership functions from training data using a multilayer neural network. The basic idea behind the approach is as follows. The output values of a sigmoid activation function of a neuron bear remarkable resemblance to membership values. Therefore, we can regard the sigmoid activation values as the membership values in fuzzy set theory. Thus, in order to generate class membership values, we first train a suitable multilayer network using a training algorithm such as the back-propagation algorithm. After the training procedure converges, the resulting network can be treated as a membership generation network, where the inputs are feature values and the outputs are membership values in the different classes. This method allows fairly complex membership functions to be generated because the network is highly nonlinear in general. Also, it is to be noted that the membership functions are generated from a classification point of view. For pattern recognition applications, this is highly desirable, although the membership values may not be indicative of the degree of typicality of a feature value in a particular class.
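A small sketch of the idea, assuming a toy 1-D two-class problem and a 4-unit hidden layer (none of these specifics come from the original work): after back-propagation training, the sigmoid output is read directly as the membership value of class 1, intermediate in the overlap region and near 0 or 1 far from it.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy 1-D data: class 0 centred at -1, class 1 centred at +1, with overlap
X = np.concatenate([rng.normal(-1, 0.4, 100), rng.normal(1, 0.4, 100)])[:, None]
y = np.concatenate([np.zeros(100), np.ones(100)])

sig = lambda z: 1 / (1 + np.exp(-z))
W1, b1 = rng.normal(0, 1, (1, 4)), np.zeros(4)   # 1 input -> 4 hidden units
W2, b2 = rng.normal(0, 1, (4, 1)), 0.0
lr = 0.5

for _ in range(4000):                  # plain batch back-propagation
    h = np.tanh(X @ W1 + b1)
    p = sig(h @ W2 + b2).ravel()
    g = (p - y)[:, None] / len(y)      # d(cross-entropy)/d(output logit)
    gh = (g @ W2.T) * (1 - h**2)       # back-propagate through the tanh layer
    W2 -= lr * (h.T @ g);  b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

def membership(x):
    """Read the sigmoid output directly as the membership value of class 1."""
    return sig(np.tanh(np.array([[x]]) @ W1 + b1) @ W2 + b2).item()

print(membership(-2.0), membership(0.0), membership(2.0))
```

As the section notes, the resulting membership function can be quite complex because the network is nonlinear, and it is shaped by classification rather than by typicality.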

  11. Load forecasting using artificial neural networks

    SciTech Connect

    Pham, K.D.

    1995-12-31

    Artificial neural networks, modeled after their biological counterparts, have been successfully applied in many diverse areas including speech and pattern recognition, remote sensing, electrical power engineering, robotics, and stock market forecasting. The most commonly used neural networks are those that gain knowledge from experience, which is presented to the network in the form of training data. Once trained, a neural network can recognize data that it has not seen before. This paper presents a fundamental introduction to the manner in which neural networks work and how to use them in load forecasting.

  12. Nonlinear PLS modeling using neural networks

    SciTech Connect

    Qin, S.J.; McAvoy, T.J.

    1994-12-31

    This paper discusses the embedding of neural networks into the framework of the PLS (partial least squares) modeling method, resulting in a neural net PLS modeling approach. By using the universal approximation property of neural networks, the PLS modeling method is generalized to a nonlinear framework. The resulting model uses neural networks to capture the nonlinearity and retains the PLS projection to attain a robust generalization property. In this paper, the standard PLS modeling method is briefly reviewed. Then a neural net PLS (NNPLS) modeling approach is proposed which incorporates feedforward networks into the PLS modeling. A multi-input-multi-output nonlinear modeling task is decomposed into linear outer relations and simple nonlinear inner relations which are performed by a number of single-input-single-output networks. Since only a small network is trained at one time, the over-parametrization problem of the direct neural network approach is circumvented even when the training data are very sparse. A conjugate gradient learning method is employed to train the network. It is shown, by analyzing the NNPLS algorithm, that the global NNPLS model is equivalent to a multilayer feedforward network. Finally, applications of the proposed NNPLS method are presented with comparison to the standard linear PLS method and the direct neural network approach. The proposed neural net PLS method gives better prediction results than the PLS modeling method and the direct neural network approach.

  13. Tumor Diagnosis Using Backpropagation Neural Network Method

    NASA Astrophysics Data System (ADS)

    Ma, Lixing; Looney, Carl; Sukuta, Sydney; Bruch, Reinhard; Afanasyeva, Natalia

    1998-05-01

    For the characterization of skin cancer, an artificial neural network (ANN) method has been developed to diagnose normal tissue, benign tumor, and melanoma. The pattern recognition is based on a three-layer neural network fuzzy learning system. In this study, the input data set is the Fourier transform infrared (FT-IR) spectrum obtained by a new fiberoptic evanescent wave Fourier transform infrared (FEW-FTIR) spectroscopy method in the range of 1480 to 1850 cm-1. Ten input features are extracted from the absorbance values in this region. A single hidden layer of neural nodes with sigmoid activation functions clusters the feature space into small subclasses, and the output nodes are separated into different nonconvex classes to permit nonlinear discrimination of disease states. The output is classified into three classes: normal tissue, benign tumor, and melanoma. The results obtained from the neural network pattern recognition are shown to be consistent with traditional medical diagnosis. Input features have also been extracted from the absorbance spectra using chemical factor analysis; these abstract features, or factors, are also used in the classification.

  14. Neural network modeling of emotion

    NASA Astrophysics Data System (ADS)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  15. Artificial neural network--based analysis of high-throughput screening data for improved prediction of active compounds.

    PubMed

    Chakrabarti, Swapan; Svojanovsky, Stan R; Slavik, Romana; Georg, Gunda I; Wilson, George S; Smith, Peter G

    2009-12-01

    Artificial neural networks (ANNs) are trained using high-throughput screening (HTS) data to recover active compounds from a large data set. Improved classification performance was obtained on combining predictions made by multiple ANNs. The HTS data, acquired from a methionine aminopeptidase inhibition study, consisted of a library of 43,347 compounds, and the ratio of active to nonactive compounds, R(A/N), was 0.0321. Back-propagation ANNs were trained and validated using principal components derived from the physicochemical features of the compounds. On selecting the training parameters carefully, an ANN recovers one-third of all active compounds from the validation set with a 3-fold gain in R(A/N) value. Further gains in R(A/N) values were obtained upon combining the predictions made by a number of ANNs. The generalization property of the back-propagation ANNs was used to train those ANNs with the same training samples, after being initialized with different sets of random weights. As a result, only 10% of all available compounds were needed for training and validation, and the rest of the data set was screened with more than a 10-fold gain of the original R(A/N) value. Thus, ANNs trained with limited HTS data might become useful in recovering active compounds from large data sets. PMID:19940083
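The combination step and the R(A/N) enrichment arithmetic can be sketched as follows; the scores here are synthetic stand-ins for the outputs of k independently initialized ANNs (the shift and noise levels are assumptions), so only the averaging and ranking logic reflects the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_active = 10000, 320                 # imbalanced library, R(A/N) ~ 0.033
is_active = np.zeros(n, bool)
is_active[:n_active] = True

# stand-ins for k ANNs trained from different random initial weights:
# individually noisy scores in which actives are shifted upward
k = 7
scores = rng.normal(0, 1, (k, n)) + 0.8 * is_active

def enrichment(score, top=1000):
    """R(A/N) of the top-ranked slice divided by R(A/N) of the whole library."""
    top_idx = np.argsort(score)[::-1][:top]
    frac = is_active[top_idx].mean()
    r_top = frac / (1 - frac)
    r_all = n_active / (n - n_active)
    return r_top / r_all

single = enrichment(scores[0])
combined = enrichment(scores.mean(axis=0))   # average the k predictions
print(single, combined)                      # combining ranks actives better
```

Averaging suppresses the uncorrelated noise of the individual models while the shared signal survives, which is why the ensemble achieves a larger R(A/N) gain than any single network.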

  16. Neural networks for aircraft system identification

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.

    1991-01-01

    Artificial neural networks offer some interesting possibilities for use in control. Our current research is on the use of neural networks on an aircraft model. The model can then be used in a nonlinear control scheme. The effectiveness of network training is demonstrated.

  17. Neural networks and MIMD-multiprocessors

    NASA Technical Reports Server (NTRS)

    Vanhala, Jukka; Kaski, Kimmo

    1990-01-01

    Two artificial neural network models are compared. They are the Hopfield Neural Network Model and the Sparse Distributed Memory model. Distributed algorithms for both of them are designed and implemented. The run time characteristics of the algorithms are analyzed theoretically and tested in practice. The storage capacities of the networks are compared. Implementations are done using a distributed multiprocessor system.

  18. Neural-Network Computer Transforms Coordinates

    NASA Technical Reports Server (NTRS)

    Josin, Gary M.

    1990-01-01

    Numerical simulation demonstrated ability of conceptual neural-network computer to generalize what it has "learned" from few examples. Ability to generalize achieved with even simple neural network (relatively few neurons) and after exposure of network to only few "training" examples. Ability to obtain fairly accurate mappings after only few training examples used to provide solutions to otherwise intractable mapping problems.

  19. Neural Networks in Nonlinear Aircraft Control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.

    1990-01-01

    Recent research indicates that artificial neural networks offer interesting learning or adaptive capabilities. The current research focuses on the potential for application of neural networks in a nonlinear aircraft control law. The current work has been to determine which networks are suitable for such an application and how they will fit into a nonlinear control law.

  20. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  1. Constructive neural network learning algorithms

    SciTech Connect

    Parekh, R.; Yang, Jihoon; Honavar, V.

    1996-12-31

    Constructive algorithms offer an approach for the incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad hoc, a priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., the pocket algorithm, the thermal perceptron, and the barycentric correction procedure). Several constructive algorithms, including tower, pyramid, tiling, upstart, and perceptron cascade, have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
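Of the stable perceptron variants mentioned, the pocket algorithm is the simplest to sketch: run ordinary perceptron updates, but keep ("pocket") the best weight vector seen so far, so a useful hypothesis survives even on non-separable data. The data set and epoch count below are illustrative assumptions:

```python
import numpy as np

def pocket(X, y, epochs=50, seed=0):
    """Pocket algorithm: plain perceptron updates, but keep ('pocket')
    the weight vector with the best training accuracy seen so far."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])        # absorb the bias term
    w = np.zeros(Xb.shape[1])
    best_w, best_acc = w.copy(), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            if y[i] * (Xb[i] @ w) <= 0:              # misclassified -> update
                w = w + y[i] * Xb[i]
                acc = np.mean(np.sign(Xb @ w) == y)
                if acc > best_acc:
                    best_acc, best_w = acc, w.copy()
    return best_w, best_acc

# noisy 2-D data: a linear rule with ~5% flipped labels, hence not separable
rng = np.random.default_rng(0)
X = rng.normal(0, 1, (200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
y[rng.random(200) < 0.05] *= -1
w, acc = pocket(X, y)
print(acc)
```

A plain perceptron would cycle forever on such data; the pocketed weights give the constructive algorithm a reliable unit to freeze before the next one is added.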

  2. Automatic attention orienting by social and symbolic cues activates different neural networks: an fMRI study.

    PubMed

    Hietanen, Jari K; Nummenmaa, Lauri; Nyman, Mikko J; Parkkola, Riitta; Hämäläinen, Heikki

    2006-10-15

    Visual attention can be automatically re-oriented by another person's non-predictive gaze as well as by symbolic arrow cues. We investigated whether the shifts of attention triggered by biologically relevant gaze cues and biologically non-relevant arrow cues rely on the same neural systems by comparing the effects of gaze-cued and arrow-cued orienting on blood oxygenation level-dependent (BOLD) signal in humans. Participants detected laterally presented reaction signals preceded by centrally presented non-predictive gaze and arrow cues. Directional gaze cues and arrow cues were presented in separate blocks. Furthermore, two separate control blocks were run in which non-directional cues (straight gaze or segment of a line) were used. The BOLD signals during the control blocks were subtracted from those during the respective blocks with directional cues. Behavioral data showed that, for both cue types, reaction times were shorter on congruent than incongruent trials. Imaging data revealed three foci of activation for gaze-cued orienting: in the left inferior occipital gyrus and right medial and inferior occipital gyri. For arrow-cued orienting, a much more extensive network was activated. There were large postcentral activations bilaterally including areas in the medial/inferior occipital gyri and medial temporal gyri and in the left intraparietal area. Interestingly, arrow cuing also activated the right frontal eye field and supplementary eye field. The results suggest that attention orienting by gaze cues and attention orienting by arrow cues are not supported by the same cortical network and that attention orienting by symbolic arrow cues relies on mechanisms associated with voluntary shifts of attention. PMID:16949306

  3. Adaptive optimization and control using neural networks

    SciTech Connect

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  4. A cardiac electrical activity model based on a cellular automata system in comparison with neural network model.

    PubMed

    Khan, Muhammad Sadiq Ali; Yousuf, Sidrah

    2016-03-01

    Cardiac electrical activity is distributed through the three-dimensional cardiac tissue (myocardium) and evolves over time. Indicators of heart disease can occur randomly at any time of day, so heart rate, conduction, and the electrical activity of each cardiac cycle should be monitored non-invasively to distinguish regular ("action potential") from irregular ("arrhythmia") rhythms. Many heart diseases can be examined through automata models such as cellular automata. This paper models the different states of cardiac rhythm using cellular automata, with a comparison to a neural network model, and provides fast and effective simulation of the contraction of the atrial cardiac muscle resulting from the genesis of an electrical spark or wave. The formulated model, named the "States of Automaton Proposed Model for CEA (Cardiac Electrical Activity)", uses cellular automata methodology to represent the three conduction states of cardiac tissue: (i) resting (relaxed and excitable); (ii) ARP (absolutely refractory phase, i.e., excited but not able to excite neighboring cells); and (iii) RRP (relatively refractory phase, i.e., excited and able to excite neighboring cells). The results indicate efficient modeling of the action potential during the pumping of blood in the cardiac cycle, with little computational burden. PMID:27087101
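    The three-state conduction cycle described in this record can be sketched as a tiny cellular automaton. The ring geometry, update rule, and timings below are illustrative assumptions, not the paper's actual model; here excitation propagates from freshly fired cells, and the ARP/RRP sequence protects a cell from immediate re-excitation.

```python
# Minimal sketch of the three conduction states named in the abstract
# (resting, ARP, RRP) on a 1-D ring of cells. The update rule and timings
# are illustrative assumptions, not the paper's actual model.
REST, ARP, RRP = "resting", "ARP", "RRP"

def step(cells):
    """Advance all cells synchronously by one time step."""
    nxt = []
    for i, state in enumerate(cells):
        left, right = cells[i - 1], cells[(i + 1) % len(cells)]
        if state == REST:
            # An excitable resting cell fires when a neighbour has just
            # fired; it then enters the absolutely refractory phase.
            nxt.append(ARP if ARP in (left, right) else REST)
        elif state == ARP:
            nxt.append(RRP)   # absolutely -> relatively refractory
        else:
            nxt.append(REST)  # relatively refractory -> excitable again
    return nxt

wave = [ARP] + [REST] * 7     # one freshly excited cell on an 8-cell ring
for _ in range(4):
    wave = step(wave)         # two wavefronts travel around and meet
```

After four steps the two wavefronts have met on the far side of the ring, with a trail of refractory cells behind them.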

  5. Correcting wave predictions with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Makarynskyy, O.; Makarynska, D.

    2003-04-01

    The predictions of wind waves with different lead times are necessary in a large scope of coastal and open ocean activities. Numerical wave models, which usually provide this information, are based on deterministic equations that do not entirely account for the complexity and uncertainty of the wave generation and dissipation processes. An attempt to improve short-term forecasts of wave parameters with artificial neural networks is reported. In recent years, artificial neural networks have been used in a number of coastal engineering applications due to their ability to approximate nonlinear mathematical behavior without a priori knowledge of the interrelations among the elements within a system. Common multilayer feed-forward networks, with nonlinear transfer functions in the hidden layers, were developed and employed to forecast the wave characteristics over one-hour intervals from one up to 24 hours ahead, and to correct these predictions. Three non-overlapping data sets of wave characteristics, all from a buoy moored roughly 60 miles west of the Aran Islands, off the west coast of Ireland, were used to train and validate the neural nets involved. The networks were trained with the error back-propagation algorithm. Time series plots and scatterplots of the wave characteristics, as well as tables of statistics, show an improvement of the results achieved due to the correction procedure employed.

  6. Complexity matching in neural networks

    NASA Astrophysics Data System (ADS)

    Usefie Mafahim, Javad; Lambert, David; Zare, Marzieh; Grigolini, Paolo

    2015-01-01

    In the wide literature on the brain and neural network dynamics the notion of criticality is being adopted by an increasing number of researchers, with no general agreement on its theoretical definition, but with consensus that criticality makes the brain very sensitive to external stimuli. We adopt the complexity matching principle that the maximal efficiency of communication between two complex networks is realized when both of them are at criticality. We use this principle to establish the value of the neuronal interaction strength at which criticality occurs, yielding a perfect agreement with the adoption of temporal complexity as criticality indicator. The emergence of a scale-free distribution of avalanche size is proved to occur in a supercritical regime. We use an integrate-and-fire model where the randomness of each neuron is only due to the random choice of a new initial condition after firing. The new model shares with that proposed by Izhikevich the property of generating excessive periodicity, and with it the annihilation of temporal complexity at supercritical values of the interaction strength. We find that the concentration of inhibitory links can be used as a control parameter and that for a sufficiently large concentration of inhibitory links criticality is recovered again. Finally, we show that the response of a neural network at criticality to a harmonic stimulus is very weak, in accordance with the complexity matching principle.
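    A minimal version of the kind of network described here, where each neuron's only randomness is the re-initialization of its potential after firing and the fraction of inhibitory links acts as a control parameter, might look like the following; all parameter values are illustrative assumptions, not the paper's.

```python
import random

# Toy integrate-and-fire network in the spirit of the model above: each
# neuron's only randomness is the re-initialisation of its potential after
# firing, and the fraction of inhibitory links is a control parameter.
# All parameter values are illustrative assumptions.
random.seed(0)

N = 50                    # neurons
K = 0.02                  # interaction strength
P_INHIB = 0.2             # fraction of inhibitory links
THRESHOLD, LEAK, DRIVE = 1.0, 0.99, 0.02

sign = [-1 if random.random() < P_INHIB else 1 for _ in range(N)]
v = [random.random() for _ in range(N)]

def step(v):
    fired = [i for i in range(N) if v[i] >= THRESHOLD]
    kick = K * sum(sign[i] for i in fired)       # all-to-all coupling
    nxt = []
    for i in range(N):
        if i in fired:
            nxt.append(0.5 * random.random())    # random re-initialisation
        else:
            nxt.append(v[i] * LEAK + DRIVE + kick)
    return nxt, len(fired)

avalanche_sizes = []
for _ in range(2000):
    v, n = step(v)
    if n:
        avalanche_sizes.append(n)
```

Sweeping K and P_INHIB and examining the distribution of `avalanche_sizes` is the kind of experiment the paper's criticality analysis performs on a far more careful model.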

  7. Advances in neural networks research: an introduction.

    PubMed

    Kozma, Robert; Bressler, Steven; Perlovsky, Leonid; Venayagamoorthy, Ganesh Kumar

    2009-01-01

    The present Special Issue "Advances in Neural Networks Research: IJCNN2009" provides a state-of-art overview of the field of neural networks. It includes 39 papers from selected areas of the 2009 International Joint Conference on Neural Networks (IJCNN2009). IJCNN2009 took place on June 14-19, 2009 in Atlanta, Georgia, USA, and it represents an exemplary collaboration between the International Neural Networks Society and the IEEE Computational Intelligence Society. Topics in this issue include neuroscience and cognitive science, computational intelligence and machine learning, hybrid techniques, nonlinear dynamics and chaos, various soft computing technologies, intelligent signal processing and pattern recognition, bioinformatics and biomedicine, and engineering applications. PMID:19632811

  8. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

    A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into a memory and determining neural network weighting values until the neural network output closely matches a set of target outputs. If the target outputs are not matched adequately, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs. Signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the industrial process. 33 figs.

  9. Neural network based system for equipment surveillance

    DOEpatents

    Vilim, Richard B.; Gross, Kenneth C.; Wegerich, Stephan W.

    1998-01-01

    A method and system for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into a memory and determining neural network weighting values until the neural network output closely matches a set of target outputs. If the target outputs are not matched adequately, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs. Signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the industrial process.

  10. Neural network modeling of distillation columns

    SciTech Connect

    Baratti, R.; Vacca, G.; Servida, A.

    1995-06-01

    Neural network modeling (NNM) was implemented for monitoring and control applications on two actual distillation columns: the butane splitter tower and the gasoline stabilizer. The two distillation columns are in operation at the SARAS refinery. Results show that with proper implementation techniques NNM can significantly improve column operation. The common belief that neural networks can be used as black-box process models is not completely true. Effective implementation always requires a minimum degree of process knowledge to identify the relevant inputs to the net. After background and generalities on neural network modeling, the paper describes efforts on the development of neural networks for the two distillation units.

  11. Electronic neural networks for global optimization

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Moopenn, A. W.; Eberhardt, S.

    1990-01-01

    An electronic neural network with feedback architecture, implemented in analog custom VLSI is described. Its application to problems of global optimization for dynamic assignment is discussed. The convergence properties of the neural network hardware are compared with computer simulation results. The neural network's ability to provide optimal or near optimal solutions within only a few neuron time constants, a speed enhancement of several orders of magnitude over conventional search methods, is demonstrated. The effect of noise on the circuit dynamics and the convergence behavior of the neural network hardware is also examined.

  12. Aerodynamic Design Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.

    2003-01-01

    The design of aerodynamic components of aircraft, such as wings or engines, involves a process of obtaining the optimal component shape that can deliver the desired level of component performance, subject to various constraints, e.g., total weight or cost, that the component must satisfy. Aerodynamic design can thus be formulated as an optimization problem that involves the minimization of an objective function subject to constraints. A new aerodynamic design optimization procedure based on neural networks and response surface methodology (RSM) incorporates the advantages of both traditional RSM and neural networks. The procedure uses a strategy, denoted parameter-based partitioning of the design space, to construct a sequence of response surfaces based on both neural networks and polynomial fits to traverse the design space in search of the optimal solution. Some desirable characteristics of the new design optimization procedure include the ability to handle a variety of design objectives, easily impose constraints, and incorporate design guidelines and rules of thumb. It provides an infrastructure for variable-fidelity analysis and reduces the cost of computation by using less expensive, lower-fidelity simulations in the early stages of the design evolution. The initial or starting design can be far from optimal. The procedure is easy and economical to use in large-dimensional design spaces and can be used to perform design tradeoff studies rapidly. Designs involving multiple disciplines can also be optimized. Some practical applications of the design procedure that have demonstrated some of its capabilities include the inverse design of an optimal turbine airfoil starting from a generic shape and the redesign of transonic turbines to improve their unsteady aerodynamic characteristics.

  13. On analog implementations of discrete neural networks

    SciTech Connect

    Beiu, V.; Moore, K.R.

    1998-12-01

    The paper will show that in order to obtain minimum-size (i.e., size-optimal) neural networks for implementing any Boolean function, the nonlinear activation function of the neurons has to be the identity function. The authors briefly present many results dealing with the approximation capabilities of neural networks and detail several bounds on the size of threshold gate circuits. Based on a constructive solution for Kolmogorov's superpositions, they show that implementing Boolean functions can be done using neurons having an identity nonlinear function. It follows that size-optimal solutions can be obtained only using analog circuitry. Conclusions and several comments on the required precision end the paper.

  14. Neural networks for nuclear spectroscopy

    SciTech Connect

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T.

    1995-12-31

    In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.
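    The OLAM idea sketched in this record, identifying a sample whose spectrum is a linear superposition of known spectra, reduces to a single pseudo-inverse computation. The "spectra" below are synthetic stand-ins, not real isotope data.

```python
import numpy as np

# Sketch of the optimal linear associative memory (OLAM) idea from this
# record: known spectra are stored as columns of S, and the mixture
# coefficients of an unknown spectrum are recovered with the pseudo-inverse.
# The "spectra" here are synthetic stand-ins, not real isotope data.
rng = np.random.default_rng(0)
S = rng.random((128, 3))            # 3 known spectra, 128 channels each

true_mix = np.array([0.7, 0.0, 0.3])
unknown = S @ true_mix              # unknown = linear superposition

W = np.linalg.pinv(S)               # OLAM weights, computed once ("training")
estimate = W @ unknown              # fast identification at run time
```

As in the abstract, the heavy computation (the pseudo-inverse) happens once up front; identifying a new sample is a single matrix-vector product over the whole spectrum.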

  15. Character Recognition Using Genetically Trained Neural Networks

    SciTech Connect

    Diniz, C.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-10-01

    Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three-layer feed-forward architecture. To facilitate the input of characters and symbols, a graphic user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases when genetically training the net. And, the amount of
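    Genetic training of a feed-forward net, as described in this record, replaces back-propagation with an evolutionary search over the weight vector. A compact sketch on a toy 2-2-1 network (a stand-in for the 64-input, five-output character recognizer) could look like this; the population size, mutation scale, and XOR task are illustrative assumptions.

```python
import math
import random

# Sketch of genetically training a feed-forward net: a GA evolves the weight
# vector instead of back-propagation. The 2-2-1 network and XOR task are
# illustrative stand-ins for the character recogniser described above.
random.seed(1)
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def fitness(w):                      # negative summed squared error
    return -sum((forward(w, x) - y) ** 2 for x, y in DATA)

pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(60)]
init_best = max(fitness(w) for w in pop)   # before evolution, for comparison

for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:20]                 # elitist selection
    pop = elite + [[g + random.gauss(0, 0.3) for g in random.choice(elite)]
                   for _ in range(40)]   # mutation-only offspring

best = max(pop, key=fitness)
```

Because the elite are carried over unchanged each generation, the best fitness is non-decreasing over the run.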

  16. Neural Network Classifies Teleoperation Data

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Giancaspro, Antonio; Losito, Sergio; Pasquariello, Guido

    1994-01-01

    Prototype artificial neural network, implemented in software, identifies phases of telemanipulator tasks in real time by analyzing feedback signals from force sensors on manipulator hand. Prototype is early, subsystem-level product of continuing effort to develop automated system that assists in training and supervising human control operator: provides symbolic feedback (e.g., warnings of impending collisions or evaluations of performance) to operator in real time during successive executions of same task. Also simplifies transition between teleoperation and autonomous modes of telerobotic system.

  17. Application of artificial neural networks for the soil moisture retrieval from active and passive microwave spaceborne sensors

    NASA Astrophysics Data System (ADS)

    Santi, Emanuele; Paloscia, Simonetta; Pettinato, Simone; Fontanelli, Giacomo

    2016-06-01

    Among the algorithms used for the retrieval of SMC from microwave sensors (both active, such as Synthetic Aperture Radar-SAR, and passive, radiometers), artificial neural networks (ANNs) represent the best compromise between accuracy and computation speed. ANN-based algorithms have been developed at IFAC, and adapted to several radar and radiometric satellite sensors, in order to generate SMC products at a resolution varying from hundreds of meters to tens of kilometers according to the spatial scale of each sensor. These algorithms, which are based on the ANN techniques for inverting theoretical and semi-empirical models, have been adapted to the C- to Ka-band acquisitions from spaceborne radiometers (AMSR-E/AMSR2), SAR (Envisat/ASAR, Cosmo-SkyMed) and real aperture radar (MetOp ASCAT). Large datasets of co-located satellite acquisitions and direct SMC measurements on several test sites worldwide have been used along with simulations derived from forward electromagnetic models for setting up, training and validating these algorithms. An overall quality assessment of the obtained results in terms of accuracy and computational cost was carried out, and the main advantages and limitations for an operational use of these algorithms were evaluated. This technique allowed the retrieval of SMC from both active and passive satellite systems, with accuracy values of about 0.05 m3/m3 of SMC or better, thus making these applications compliant with the usual accuracy requirements for SMC products from space.

  18. The Laplacian spectrum of neural networks

    PubMed Central

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
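    The normalized Laplacian spectrum this study analyzes can be computed in a few lines; the 6-node ring below is an illustrative graph, not one of the connectomes examined in the paper.

```python
import numpy as np

# The normalized Laplacian spectrum used in this study, computed for a small
# illustrative graph (a 6-node ring) rather than an actual connectome.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0   # ring adjacency

deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L_norm = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt  # normalized Laplacian

spectrum = np.linalg.eigvalsh(L_norm)   # ascending; always lies in [0, 2]
```

The shape of this spectrum (not any single node's value) is the systems-level fingerprint the paper compares across the cat, macaque, and C. elegans networks.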

  19. Ozone Modeling Using Neural Networks.

    NASA Astrophysics Data System (ADS)

    Narasimhan, Ramesh; Keller, Joleen; Subramaniam, Ganesh; Raasch, Eric; Croley, Brandon; Duncan, Kathleen; Potter, William T.

    2000-03-01

    Ozone models for the city of Tulsa were developed using neural network modeling techniques. The neural models were developed using meteorological data from the Oklahoma Mesonet and ozone, nitric oxide, and nitrogen dioxide (NO2) data from Environmental Protection Agency monitoring sites in the Tulsa area. An initial model trained with only eight surface meteorological input variables and NO2 was able to simulate ozone concentrations with a correlation coefficient of 0.77. The trained model was then used to evaluate the sensitivity to the primary variables that affect ozone concentrations. The most important variables (NO2, temperature, solar radiation, and relative humidity) showed response curves with strong nonlinear codependencies. Incorporation of ozone concentrations from the previous 3 days into the model increased the correlation coefficient to 0.82. As expected, the ozone concentrations correlated best with the most recent (1-day previous) values. The model's correlation coefficient was increased to 0.88 by the incorporation of upper-air data from the National Weather Service's Nested Grid Model. Sensitivity analysis for the upper-air variables indicated unusual positive correlations between ozone and the relative humidity from 500 hPa to the tropopause in addition to the other expected correlations with upper-air temperatures, vertical wind velocity, and 1000-500-hPa layer thickness. The neural model results are encouraging for the further use of these systems to evaluate complex parameter cosensitivities, and for the use of these systems in automated ozone forecast systems.

  20. Three dimensional living neural networks

    NASA Astrophysics Data System (ADS)

    Linnenberger, Anna; McLeod, Robert R.; Basta, Tamara; Stowell, Michael H. B.

    2015-08-01

    We investigate holographic optical tweezing combined with step-and-repeat maskless projection micro-stereolithography for fine control of 3D positioning of living cells within a 3D microstructured hydrogel grid. Samples were fabricated using three different cell lines: PC12, NT2/D1 and iPSC. PC12 cells are a rat cell line capable of differentiation into neuron-like cells. NT2/D1 cells are a human cell line that exhibit biochemical and developmental properties similar to those of an early embryo; when exposed to retinoic acid, the cells differentiate into human neurons useful for studies of human neurological disease. Finally, induced pluripotent stem cells (iPSCs) were utilized with the goal of future studies of neural networks fabricated from human iPSC-derived neurons. Cells are positioned in the monomer solution with holographic optical tweezers at 1064 nm and then are encapsulated by photopolymerization of polyethylene glycol (PEG) hydrogels formed by thiol-ene photo-click chemistry via projection of a 512x512 spatial light modulator (SLM) illuminated at 405 nm. Fabricated samples are incubated in differentiation media such that cells cease to divide and begin to form axons or axon-like structures. By controlling the position of the cells within the encapsulating hydrogel structure, the formation of the neural circuits is controlled. The samples fabricated with this system are a useful model for future studies of neural circuit formation, neurological disease, cellular communication, plasticity, and repair mechanisms.

  1. Artificial neural networks in neurosurgery.

    PubMed

    Azimi, Parisa; Mohammadi, Hasan Reza; Benzel, Edward C; Shahzadi, Sohrab; Azhari, Shirzad; Montazeri, Ali

    2015-03-01

    Artificial neural networks (ANNs) effectively analyze non-linear data sets. The aim was to review the relevant published articles that focused on the application of ANNs as a tool for assisting clinical decision-making in neurosurgery. A literature review of all full publications in English biomedical journals (1993-2013) was undertaken. The strategy included a combination of the key words 'artificial neural networks', 'prognostic', 'brain', 'tumor tracking', 'head', 'tumor', 'spine', 'classification' and 'back pain' in the title and abstract of the manuscripts using the PubMed search engine. The major findings are summarized, with a focus on the application of ANNs for diagnostic and prognostic purposes. Finally, the future of ANNs in neurosurgery is explored. A total of 1093 citations were identified and screened. In all, 57 citations were found to be relevant. Of these, 50 articles were eligible for inclusion in this review. The synthesis of the data showed several applications of ANNs in neurosurgery, including: (1) diagnosis and assessment of disease progression in low back pain, brain tumours and primary epilepsy; (2) enhancing clinically relevant information extraction from radiographic images, intracranial pressure processing, low back pain and real-time tumour tracking; (3) outcome prediction in epilepsy, brain metastases, lumbar spinal stenosis, lumbar disc herniation, childhood hydrocephalus, trauma mortality, and the occurrence of symptomatic cerebral vasospasm in patients with aneurysmal subarachnoid haemorrhage; and (4) use in biomechanical assessments of spinal disease. ANNs can be effectively employed for diagnosis, prognosis and outcome prediction in neurosurgery. PMID:24987050

  2. Computational acceleration using neural networks

    NASA Astrophysics Data System (ADS)

    Cadaret, Paul

    2008-04-01

    The author's recent participation in the Small Business Innovative Research (SBIR) program has resulted in the development of a patent pending technology that enables the construction of very large and fast artificial neural networks. Through the use of UNICON's CogniMax pattern recognition technology we believe that systems can be constructed that exploit the power of "exhaustive learning" for the benefit of certain types of complex and slow computational problems. This paper presents a theoretical study that describes one potentially beneficial application of exhaustive learning. It describes how a very large and fast Radial Basis Function (RBF) artificial Neural Network (NN) can be used to implement a useful computational system. Viewed another way, it presents an unusual method of transforming a complex, always-precise, and slow computational problem into a fuzzy pattern recognition problem where other methods are available to effectively improve computational performance. The method described recognizes that the need for computational precision in a problem domain sometimes varies throughout the domain's Feature Space (FS) and high precision may only be needed in limited areas. These observations can then be exploited to the benefit of overall computational performance. Addressing computational reliability, we describe how existing always-precise computational methods can be used to reliably train the NN to perform the computational interpolation function. The author recognizes that the method described is not applicable to every situation, but over the last 8 months we have been surprised at how often this method can be applied to enable interesting and effective solutions.
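    The core idea here, trading a slow, always-precise computation for fast interpolation by a pre-trained RBF network, can be sketched as follows; the target function, center placement, and kernel width are illustrative assumptions, not UNICON's patented method.

```python
import numpy as np

# Sketch of the idea in this record: sample a slow, always-precise function
# once, fit an exact-interpolation RBF network to the samples, and answer
# later queries by fast evaluation. The target function, centre placement
# and kernel width are illustrative assumptions.

def slow_precise(x):                        # stand-in for a slow computation
    return np.sin(3 * x) + 0.5 * x

centers = np.linspace(0.0, 1.0, 9)          # RBF centres over the domain
width = 0.12

def design(xs):                             # Gaussian RBF design matrix
    return np.exp(-((xs[:, None] - centers[None, :]) ** 2) / (2 * width**2))

weights = np.linalg.solve(design(centers), slow_precise(centers))

def fast_rbf(x):                            # the "accelerated" evaluation
    return design(np.atleast_1d(np.asarray(x, dtype=float))) @ weights
```

As the abstract notes, the interpolant is exact at the training samples and only approximate in between, which is acceptable wherever the problem's precision needs are relaxed.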

  3. A new formulation for feedforward neural networks.

    PubMed

    Razavi, Saman; Tolson, Bryan A

    2011-10-01

    Feedforward neural network is one of the most commonly used function approximation techniques and has been applied to a wide variety of problems arising from various disciplines. However, neural networks are black-box models having multiple challenges/difficulties associated with training and generalization. This paper initially looks into the internal behavior of neural networks and develops a detailed interpretation of the neural network functional geometry. Based on this geometrical interpretation, a new set of variables describing neural networks is proposed as a more effective and geometrically interpretable alternative to the traditional set of network weights and biases. Then, this paper develops a new formulation for neural networks with respect to the newly defined variables; this reformulated neural network (ReNN) is equivalent to the common feedforward neural network but has a less complex error response surface. To demonstrate the learning ability of ReNN, in this paper, two training methods involving a derivative-based (a variation of backpropagation) and a derivative-free optimization algorithms are employed. Moreover, a new measure of regularization on the basis of the developed geometrical interpretation is proposed to evaluate and improve the generalization ability of neural networks. The value of the proposed geometrical interpretation, the ReNN approach, and the new regularization measure are demonstrated across multiple test problems. Results show that ReNN can be trained more effectively and efficiently compared to the common neural networks and the proposed regularization measure is an effective indicator of how a network would perform in terms of generalization. PMID:21859600

  4. Modeling Aircraft Wing Loads from Flight Data Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Allen, Michael J.; Dibley, Ryan P.

    2003-01-01

    Neural networks were used to model wing bending-moment loads, torsion loads, and control surface hinge-moments of the Active Aeroelastic Wing (AAW) aircraft. Accurate loads models are required for the development of control laws designed to increase roll performance through wing twist while not exceeding load limits. Inputs to the model include aircraft rates, accelerations, and control surface positions. Neural networks were chosen to model aircraft loads because they can account for uncharacterized nonlinear effects while retaining the capability to generalize. The accuracy of the neural network models was improved by first developing linear loads models to use as starting points for network training. Neural networks were then trained with flight data for rolls, loaded reversals, wind-up-turns, and individual control surface doublets for load excitation. Generalization was improved by using gain weighting and early stopping. Results are presented for neural network loads models of four wing loads and four control surface hinge moments at Mach 0.90 and an altitude of 15,000 ft. An average model prediction error reduction of 18.6 percent was calculated for the neural network models when compared to the linear models. This paper documents the input data conditioning, input parameter selection, structure, training, and validation of the neural network models.

  5. Drift chamber tracking with neural networks

    SciTech Connect

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared offline to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high-energy physics detector triggers are discussed.
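    For context, the "conventional track fit" the chip's outputs were compared against is a least-squares straight line through the four layer hits. The layer spacing, drift velocity, and track parameters below are illustrative, and the left-right ambiguity of real drift times is ignored.

```python
import numpy as np

# Least-squares straight-line fit through hits in a 4-layer drift chamber,
# the kind of conventional fit this record compares the chip against.
# Geometry, drift velocity and track parameters are illustrative; the
# left-right ambiguity of real drift times is ignored.
z = np.array([0.0, 1.0, 2.0, 3.0])         # layer positions along the track
v_drift = 0.05                             # assumed time-to-distance factor

true_slope, true_intercept = 0.3, 0.2
x_hits = true_intercept + true_slope * z   # crossing point in each layer
drift_times = x_hits / v_drift             # what the chamber measures

A = np.vstack([z, np.ones_like(z)]).T      # linear model x = slope*z + b
slope, intercept = np.linalg.lstsq(A, drift_times * v_drift, rcond=None)[0]
```

The ETANN network was trained to produce the same slope and intercept directly from the drift-time voltages, fast enough for trigger use.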

  6. Extrapolation limitations of multilayer feedforward neural networks

    NASA Technical Reports Server (NTRS)

    Haley, Pamela J.; Soloway, Donald

    1992-01-01

    The limitations of backpropagation used as a function extrapolator were investigated. Four common functions were used to investigate the network's extrapolation capability. The purpose of the experiment was to determine whether neural networks are capable of extrapolation and, if so, to determine the range for which networks can extrapolate. The authors show that neural networks cannot extrapolate and offer an explanation to support this result.

  7. Coherence resonance in bursting neural networks

    NASA Astrophysics Data System (ADS)

    Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J.

    2015-10-01

    Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal—a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.

  8. From Classical Neural Networks to Quantum Neural Networks

    NASA Astrophysics Data System (ADS)

    Tirozzi, B.

    2013-09-01

First I give a brief description of the classical Hopfield model, introducing the fundamental concepts of patterns, retrieval, pattern recognition, neural dynamics, and capacity, and describe the fundamental results obtained in this field by Amit, Gutfreund and Sompolinsky,1 using the non-rigorous replica method, and the rigorous version given by Pastur, Shcherbina and Tirozzi2 using the cavity method. Then I give a formulation of the theory of Quantum Neural Networks (QNN) in terms of the XY model with Hebbian interaction. The problem of retrieval and storage is discussed. The retrieval states are the states of minimum energy. I apply the estimates found by Lieb,3 which give lower and upper bounds on the free energy and the expectation of the observables of the quantum model. I also discuss some experiments and the search for the ground state using Monte Carlo dynamics applied to the equivalent classical two-dimensional Ising model constructed by Suzuki et al.6 At the end there is a list of open problems.
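The classical Hopfield setup that the abstract opens with can be sketched directly: store patterns with the Hebbian rule, then retrieve one from a corrupted cue. The network size and corruption level are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(2)

# Store P random +/-1 patterns in N neurons with the Hebbian rule
# J = (1/N) * sum_mu xi^mu (xi^mu)^T, zero self-couplings.
N, P = 100, 3
patterns = rng.choice([-1, 1], size=(P, N))
J = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(J, 0.0)

# Corrupt 10% of the bits of the first pattern and let the dynamics
# s_i <- sign(sum_j J_ij s_j) relax the state.
state = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
state[flip] *= -1

for _ in range(5):
    state = np.where(J @ state >= 0, 1, -1)

overlap = float(state @ patterns[0]) / N   # m = 1 means perfect retrieval
print(overlap)
```

At this low load (P/N = 0.03, well below the Amit-Gutfreund-Sompolinsky capacity ~0.138) the corrupted cue falls inside the basin of attraction and the overlap returns to (near) 1.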

  9. Neural Network Algorithm for Particle Loading

    SciTech Connect

    J. L. V. Lewandowski

    2003-04-25

    An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.

  10. Adaptive Neurons For Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1990-01-01

Training time decreases dramatically. In an improved mathematical model of a neural-network processor, the temperature of the neurons (in addition to the connection strengths, also called weights, of the synapses) is varied during the supervised-learning phase of operation according to a mathematical formalism rather than a heuristic rule. There is evidence that biological neural networks also process information at the neuronal level.

  11. Radiation Behavior of Analog Neural Network Chip

    NASA Technical Reports Server (NTRS)

    Langenbacher, H.; Zee, F.; Daud, T.; Thakoor, A.

    1996-01-01

A neural network experiment was conducted for the Space Technology Research Vehicle (STRV-1b), launched in June 1994. Identical sets of analog feed-forward neural network chips were used to study and compare the effects of space and ground radiation on the chips. Three failure mechanisms are noted.

  12. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. PMID:20713305

  13. Creativity in design and artificial neural networks

    SciTech Connect

    Neocleous, C.C.; Esat, I.I.; Schizas, C.N.

    1996-12-31

The creativity phase is identified as an integral part of the design phase. The characteristics of creative persons that are relevant to designing artificial neural networks manifesting aspects of creativity are identified. Based on these identifications, a general framework of artificial neural network characteristics to implement such a goal is proposed.

  14. Advanced telerobotic control using neural networks

    NASA Technical Reports Server (NTRS)

    Pap, Robert M.; Atkins, Mark; Cox, Chadwick; Glover, Charles; Kissel, Ralph; Saeks, Richard

    1993-01-01

Accurate Automation is designing and developing adaptive decentralized joint controllers using neural networks. We are implementing these in hardware for the Marshall Space Flight Center PFMA, as well as for the Remote Manipulator System (RMS) robot arm. Our design is being realized in hardware after completion of the software simulation. It is implemented using a Functional-Link neural network.

  15. Neural network based architectures for aerospace applications

    NASA Technical Reports Server (NTRS)

    Ricart, Richard

    1987-01-01

    A brief history of the field of neural networks research is given and some simple concepts are described. In addition, some neural network based avionics research and development programs are reviewed. The need for the United States Air Force and NASA to assume a leadership role in supporting this technology is stressed.

  16. Applications of Neural Networks in Finance.

    ERIC Educational Resources Information Center

    Crockett, Henry; Morrison, Ronald

    1994-01-01

    Discusses research with neural networks in the area of finance. Highlights include bond pricing, theoretical exposition of primary bond pricing, bond pricing regression model, and an example that created networks with corporate bonds and NeuralWare Neuralworks Professional H software using the back-propagation technique. (LRW)

  17. A Survey of Neural Network Publications.

    ERIC Educational Resources Information Center

    Vijayaraman, Bindiganavale S.; Osyk, Barbara

    This paper is a survey of publications on artificial neural networks published in business journals for the period ending July 1996. Its purpose is to identify and analyze trends in neural network research during that period. This paper shows which topics have been heavily researched, when these topics were researched, and how that research has…

  18. Introduction to Concepts in Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  19. Beneficial role of noise in artificial neural networks

    SciTech Connect

    Monterola, Christopher; Saloma, Caesar; Zapotocky, Martin

    2008-06-18

We demonstrate the enhanced efficacy of neural networks in recognizing frequency-encoded signals and/or categorizing spatial patterns of neural activity as a result of noise addition. For temporal information recovery, noise added directly to the receiving neurons allows instantaneous improvement of the signal-to-noise ratio [Monterola and Saloma, Phys. Rev. Lett. 2002]. For spatial patterns, however, recurrence is necessary to extend and homogenize the operating range of a feed-forward neural network [Monterola and Zapotocky, Phys. Rev. E 2005]. Finally, using the size of the basin of attraction of the network's learned patterns (dynamical fixed points), a procedure for estimating the optimal noise is demonstrated.
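A generic illustration of noise-aided signal recovery is the classic threshold toy (our own stand-in, not the models of the cited papers): a threshold "neuron" never fires on a subthreshold periodic input, but added noise lets firing track the signal:

```python
import numpy as np

rng = np.random.default_rng(3)

t = np.arange(2000)
signal = 0.5 * np.sin(2 * np.pi * t / 100)   # subthreshold: peak 0.5 < 1.0
threshold = 1.0

def firing_correlation(noise_std):
    """Correlation between the spike train and the hidden input signal."""
    spikes = (signal + rng.normal(0.0, noise_std, size=t.size)) > threshold
    if spikes.std() == 0:                     # never fires: no information
        return 0.0
    return float(np.corrcoef(spikes, signal)[0, 1])

no_noise = firing_correlation(0.0)            # silent -> correlation 0
with_noise = firing_correlation(0.4)          # moderate noise recovers signal
print(no_noise, with_noise > no_noise)
```

Firing is most probable near the signal peaks, so the spike train carries information about the input only once noise is present; too much noise would wash the correlation out again, which is the optimum the abstract refers to.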

  20. Marginalization in Random Nonlinear Neural Networks

    NASA Astrophysics Data System (ADS)

    Vasudeva Raju, Rajkumar; Pitkow, Xaq

    2015-03-01

Computations involved in tasks like causal reasoning in the brain require a type of probabilistic inference known as marginalization. Marginalization corresponds to averaging over irrelevant variables to obtain the probability of the variables of interest. This is a fundamental operation that arises whenever input stimuli depend on several variables, but only some are task-relevant. Animals often exhibit behavior consistent with marginalizing over some variables, but the neural substrate of this computation is unknown. It has been previously shown (Beck et al. 2011) that marginalization can be performed optimally by a deterministic nonlinear network that implements a quadratic interaction of neural activity with divisive normalization. We show that a simpler network can perform essentially the same computation. These Random Nonlinear Networks (RNN) are feedforward networks with one hidden layer, sigmoidal activation functions, and normally-distributed weights connecting the input and hidden layers. We train the output weights connecting the hidden units to an output population, such that the output model accurately represents a desired marginal probability distribution without significant information loss compared to optimal marginalization. Simulations for the case of linear coordinate transformations show that the RNN model has good marginalization performance, except for highly uncertain inputs that have low-amplitude population responses. Behavioral experiments, based on these results, could then be used to identify whether this model does indeed explain how the brain performs marginalization.
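The core RNN construction, random fixed input-to-hidden weights, sigmoidal hidden units, and only the hidden-to-output weights trained, can be sketched on a toy regression target (all sizes, targets, and names here are ours, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy nonlinear function of two inputs standing in for the desired output.
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1]

# Random, never-trained input->hidden weights; sigmoidal hidden layer.
W_in = rng.normal(size=(2, 100))
b = rng.normal(size=100)
H = sigmoid(X @ W_in + b)

# Train ONLY the output weights, here by linear least squares.
w_out = np.linalg.lstsq(H, y, rcond=None)[0]
mse = float(np.mean((H @ w_out - y) ** 2))
print(round(mse, 5))
```

Even with untrained random hidden weights, the trained readout fits the smooth target closely, which is the sense in which a "simpler network" can approximate the optimal computation.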

  1. Enhancing neural-network performance via assortativity.

    PubMed

    de Franciscis, Sebastiano; Johnson, Samuel; Torres, Joaquín J

    2011-03-01

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations--assortativity--on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information. PMID:21517565

  2. Enhancing neural-network performance via assortativity

    SciTech Connect

    Franciscis, Sebastiano de; Johnson, Samuel; Torres, Joaquin J.

    2011-03-15

    The performance of attractor neural networks has been shown to depend crucially on the heterogeneity of the underlying topology. We take this analysis a step further by examining the effect of degree-degree correlations - assortativity - on neural-network behavior. We make use of a method recently put forward for studying correlated networks and dynamics thereon, both analytically and computationally, which is independent of how the topology may have evolved. We show how the robustness to noise is greatly enhanced in assortative (positively correlated) neural networks, especially if it is the hub neurons that store the information.
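Degree-degree correlation can be computed directly from an edge list as the Pearson correlation of the degrees at the two ends of each edge. The two toy graphs below are our own examples of an assortative and a disassortative topology:

```python
import numpy as np

def assortativity(edges):
    """Pearson correlation of the degrees at the two ends of each edge."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # Count each undirected edge in both directions.
    x = [deg[u] for u, v in edges] + [deg[v] for u, v in edges]
    y = [deg[v] for u, v in edges] + [deg[u] for u, v in edges]
    return float(np.corrcoef(x, y)[0, 1])

# Assortative: a 4-clique plus a separate edge -- every edge joins
# equal-degree nodes, so the correlation is maximal.
clique_plus_edge = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (4, 5)]

# Disassortative: two hubs whose edges all run to degree-1 leaves.
double_star = [("a", "b"), ("a", 1), ("a", 2), ("a", 3),
               ("b", 4), ("b", 5), ("b", 6)]

print(assortativity(clique_plus_edge), assortativity(double_star))
```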

  3. Neural network and letter recognition

    SciTech Connect

    Lee, Hue Yeon.

    1989-01-01

Neural net architectures and learning algorithms that recognize 36 handwritten alphanumeric characters are studied. Thin-line input patterns written in a 32 x 32 binary array are used. The system is comprised of two major components, viz. a preprocessing unit and a recognition unit. The preprocessing unit in turn consists of three layers of neurons: the U-layer, the V-layer, and the C-layer. The function of the U-layer is to extract local features by template matching. The correlation between the detected local features is considered. By correlating neurons in a plane with their neighboring neurons, the V-layer thickens the on-cells, or lines that are groups of on-cells, of the previous layer. These two correlations yield some of the deformation tolerance and some of the rotational tolerance of the system. The C-layer then compresses data through the Gabor transform. Pattern-dependent choice of the centers and wavelengths of the Gabor filters gives the system its shift and scale tolerance. Three different learning schemes were investigated in the recognition unit, namely error back-propagation learning with hidden units, simple perceptron learning, and competitive learning. Their performances were analyzed and compared. Since the network sometimes fails to distinguish between two letters that are inherently similar, additional ambiguity-resolving neural nets are introduced on top of the main neural net. The two-dimensional Fourier transform is used as the preprocessing unit and the perceptron as the recognition unit of the ambiguity resolver. One hundred different persons' handwriting sets were collected. Some of these are used as training sets and the remainder as test sets.

  4. A decade of neural networks: Practical applications and prospects

    NASA Technical Reports Server (NTRS)

    Kemeny, Sabrina (Editor); Thakoor, Anil (Editor)

    1994-01-01

    On May 11-13, 1994, JPL's Center for Space Microelectronics Technology (CSMT) hosted a neural network workshop entitled, 'A Decade of Neural Networks: Practical Applications and Prospects,' sponsored by DOD and NASA. The past ten years of renewed activity in neural network research has brought the technology to a crossroads regarding the overall scope of its future practical applicability. The purpose of the workshop was to bring together the sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and development prospects, with emphasis on practical applications. Of the 93 participants, roughly 15% were from government agencies, 30% were from industry, 20% were from universities, and 35% were from Federally Funded Research and Development Centers (FFRDC's).

  5. Block-based neural networks.

    PubMed

    Moon, S W; Kong, S G

    2001-01-01

This paper presents a novel block-based neural network (BBNN) model and the optimization of its structure and weights based on a genetic algorithm. The architecture of the BBNN consists of a 2D array of fundamental blocks with four variable input/output nodes and connection weights. Each block can have one of four different internal configurations depending on the structure settings. The BBNN model includes some restrictions, such as the 2D array and integer weights, in order to allow easier implementation with reconfigurable hardware such as field-programmable gate arrays (FPGAs). The structure and weights of the BBNN are encoded with bit strings which correspond to the configuration bits of the FPGA. The configuration bits are optimized globally using a genetic algorithm with 2D encoding and modified genetic operators. Simulations show that the optimized BBNN can solve engineering problems such as pattern classification and mobile robot control. PMID:18244385

  6. Introduction to artificial neural networks.

    PubMed

    Grossi, Enzo; Buscema, Massimo

    2007-12-01

    The coupling of computer science and theoretical bases such as nonlinear dynamics and chaos theory allows the creation of 'intelligent' agents, such as artificial neural networks (ANNs), able to adapt themselves dynamically to problems of high complexity. ANNs are able to reproduce the dynamic interaction of multiple factors simultaneously, allowing the study of complexity; they can also draw conclusions on individual basis and not as average trends. These tools can offer specific advantages with respect to classical statistical techniques. This article is designed to acquaint gastroenterologists with concepts and paradigms related to ANNs. The family of ANNs, when appropriately selected and used, permits the maximization of what can be derived from available data and from complex, dynamic, and multidimensional phenomena, which are often poorly predictable in the traditional 'cause and effect' philosophy. PMID:17998827

  7. Neural networks for damage identification

    SciTech Connect

    Paez, T.L.; Klenke, S.E.

    1997-11-01

    Efforts to optimize the design of mechanical systems for preestablished use environments and to extend the durations of use cycles establish a need for in-service health monitoring. Numerous studies have proposed measures of structural response for the identification of structural damage, but few have suggested systematic techniques to guide the decision as to whether or not damage has occurred based on real data. Such techniques are necessary because in field applications the environments in which systems operate and the measurements that characterize system behavior are random. This paper investigates the use of artificial neural networks (ANNs) to identify damage in mechanical systems. Two probabilistic neural networks (PNNs) are developed and used to judge whether or not damage has occurred in a specific mechanical system, based on experimental measurements. The first PNN is a classical type that casts Bayesian decision analysis into an ANN framework; it uses exemplars measured from the undamaged and damaged system to establish whether system response measurements of unknown origin come from the former class (undamaged) or the latter class (damaged). The second PNN establishes the character of the undamaged system in terms of a kernel density estimator of measures of system response; when presented with system response measures of unknown origin, it makes a probabilistic judgment whether or not the data come from the undamaged population. The physical system used to carry out the experiments is an aerospace system component, and the environment used to excite the system is a stationary random vibration. The results of damage identification experiments are presented along with conclusions rating the effectiveness of the approaches.
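The first, classical PNN described above amounts to a kernel-density (Parzen-window) classifier: class-conditional densities are estimated from exemplars and a new measurement goes to the class with the larger density. A toy sketch with synthetic "undamaged" and "damaged" feature clusters (our stand-ins, not the experimental measurements):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy response features for the two classes of exemplars.
undamaged = rng.normal(0.0, 1.0, size=(50, 2))
damaged = rng.normal(3.0, 1.0, size=(50, 2))

def kernel_density(x, exemplars, h=0.5):
    """Parzen estimate: mean Gaussian kernel centered on each exemplar."""
    d2 = np.sum((exemplars - x) ** 2, axis=1)
    return float(np.mean(np.exp(-d2 / (2 * h ** 2))))

def classify(x):
    return ("damaged" if kernel_density(x, damaged) > kernel_density(x, undamaged)
            else "undamaged")

print(classify(np.array([0.1, -0.2])))   # near the undamaged cluster
print(classify(np.array([2.9, 3.2])))    # near the damaged cluster
```

The bandwidth h plays the role of the smoothing parameter of the PNN's pattern layer; the second PNN in the abstract differs in using a density estimate of the undamaged class alone.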

  8. Neural-Network Control Of Prosthetic And Robotic Hands

    NASA Technical Reports Server (NTRS)

    Buckley, Theresa M.

    1991-01-01

    Electronic neural networks proposed for use in controlling robotic and prosthetic hands and exoskeletal or glovelike electromechanical devices aiding intact but nonfunctional hands. Specific to patient, who activates grasping motion by voice command, by mechanical switch, or by myoelectric impulse. Patient retains higher-level control, while lower-level control provided by neural network analogous to that of miniature brain. During training, patient teaches miniature brain to perform specialized, anthropomorphic movements unique to himself or herself.

  9. VLSI Cells Placement Using the Neural Networks

    SciTech Connect

    Azizi, Hacene; Zouaoui, Lamri; Mokhnache, Salah

    2008-06-12

Artificial neural networks have been studied for several years. Their effectiveness makes it possible to expect high performance. The privileged fields of these techniques remain recognition and classification. Various applications of optimization are also studied from the angle of artificial neural networks, which make it possible to apply distributed heuristic algorithms. In this article, a solution to the problem of placing the various cells during the realization of an integrated circuit is proposed using the Kohonen network.

  10. Neural networks and orbit control in accelerators

    SciTech Connect

    Bozoki, E.; Friedman, A.

    1994-07-01

An overview of the architecture, workings, and training of neural networks is given. We stress the aspects which are important for the use of neural networks for orbit control in accelerators and storage rings, especially their ability to cope with the nonlinear behavior of the orbit response to 'kicks' and the slow drift in the orbit response during long-term operation. Results obtained for the two NSLS storage rings with several network architectures and various training methods for each architecture are given.

  11. Sparse coding for layered neural networks

    NASA Astrophysics Data System (ADS)

    Katayama, Katsuki; Sakata, Yasuo; Horiguchi, Tsuyoshi

    2002-07-01

We investigate the storage capacity of two types of fully connected layered neural networks with sparse coding when binary patterns are embedded into the networks by a Hebbian learning rule. One of them is a layered network in which the transfer function of even layers is different from that of odd layers. The other is a layered network with intra-layer connections, in which the transfer function of inter-layer connections is different from that of intra-layer connections, and inter-layer neurons and intra-layer neurons are updated alternately. We derive recursion relations for order parameters by means of the signal-to-noise ratio method, and then apply the self-control threshold method proposed by Dominguez and Bollé to both layered networks with monotonic transfer functions. We find that the critical value α_C of the storage capacity is about 0.11|a ln a|^{-1} (a ≪ 1) for both layered networks, where a is the neuronal activity. It turns out that the basin of attraction is larger for both layered networks when the self-control threshold method is applied.

  12. Devices and circuits for nanoelectronic implementation of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Turel, Ozgur

Biological neural networks perform complicated information-processing tasks at speeds better than conventional computers running conventional algorithms. This has inspired researchers to look into the way these networks function and to propose artificial networks that mimic their behavior. Unfortunately, most artificial neural networks, whether software or hardware, provide neither the speed nor the complexity of a human brain. Nanoelectronics, with the high density and low power dissipation that it provides, may be used in developing more efficient artificial neural networks. This work consists of two major contributions in this direction. First is the proposal of the CMOL concept, hybrid CMOS-molecular hardware [1-8]. CMOL may circumvent most of the problems posed by molecular devices, such as low yield, yet provide high active device density, ~10^12/cm^2. The second contribution is CrossNets, artificial neural networks that are based on CMOL. We showed that CrossNets, with their fault tolerance and exceptional speed (~4 to 6 orders of magnitude faster than biological neural networks), can perform any task any artificial neural network can perform. Moreover, there is hope that if their integration scale is increased to that of the human cerebral cortex (~10^10 neurons and ~10^14 synapses), they may be capable of performing more advanced tasks.

  13. Neural network regulation driven by autonomous neural firings

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2016-07-01

    Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified due to neural firings. Here, we study the effect of autonomous firings on network formation. For the temporally asymmetric Hebbian learning, bidirectional connections lose their balance easily and become unidirectional ones. Defining the difference between reciprocal connections as new variables, we could express the learning dynamics as if Ising model spins interact with each other in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.

  14. Coronary Artery Diagnosis Aided by Neural Network

    NASA Astrophysics Data System (ADS)

    Stefko, Kamil

    2007-01-01

Coronary artery disease is due to atheromatous narrowing and subsequent occlusion of the coronary vessel. The application of an optimised feed-forward multi-layer back-propagation neural network (MLBP) to the detection of narrowing in coronary artery vessels is presented in this paper. The research was performed using 580 data records from traditional ECG exercise tests confirmed by coronary arteriography results. Each record of the training database included a description of the state of a patient, providing input data for the neural network. The level and slope of the ST segment of a 12-lead ECG signal recorded at rest and after effort (48 floating-point values) formed the main component of the input data for the neural network. Coronary arteriography results (verifying the existence or absence of more than 50% stenosis of the particular coronary vessels) were used as the correct neural network training output pattern. More than 96% of cases were correctly recognised by the specially optimised and thoroughly verified neural network. The leave-one-out method was used for neural network verification, so all 580 data records could be used for training as well as for verification of the neural network.

  15. Optimizing stabilization of waste-activated sludge using Fered-Fenton process and artificial neural network modeling (KSOFM, MLP).

    PubMed

    Badalians Gholikandi, Gagik; Masihi, Hamidreza; Azimipour, Mohammad; Abrishami, Ali; Mirabi, Maryam

    2014-01-01

Sludge management is a fundamental activity in accordance with wastewater treatment aims. Sludge stabilization is always considered a significant step of wastewater sludge handling, and there has been progressive development of novel solutions in this regard. In this research, building on our initial lab-scale experimental results on the Fered-Fenton process with respect to organic loading (volatile suspended solids, VSS) removal efficiency, a combined approach to improving the stabilization of excess biological sludge was investigated. First, VSS removal efficiency was studied experimentally at lab scale under different operational conditions, taking into consideration pH, the [Fe(2+)]/[H2O2] ratio, detention time, [H2O2], and current density. The correlations among these parameters were then determined using Kohonen self-organizing feature maps (KSOFM). In addition, a multi-layer perceptron (MLP) was employed for a comprehensive evaluation of the parameter correlations and for prediction. The findings indicated that the best proportion of iron to hydrogen peroxide and the optimum pH were 0.58 and 3.1, respectively. Furthermore, a maximum retention time of about 6 h with a hydrogen peroxide concentration of 1,568 mg/l and a current density of 650-750 mA yields the optimum VSS removal efficiency of 81%. The performance of the KSOFM and MLP models was found to be excellent, with correlation coefficients (R) ranging from 0.873 to 0.998 for process simulation and prediction. It can be concluded that the Fered-Fenton reactor is an efficient process for considerably reducing the organic load of sludge, and that mathematical modeling tools such as artificial neural networks are effective methods of process simulation and prediction. PMID:24562454

  16. A Decade of Neural Networks: Practical Applications and Prospects

    NASA Technical Reports Server (NTRS)

    Kemeny, Sabrina E.

    1994-01-01

    The Jet Propulsion Laboratory Neural Network Workshop, sponsored by NASA and DOD, brings together sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and application prospects. While the speed and computing power of microprocessors continue to grow at an ever-increasing pace, the demand to intelligently and adaptively deal with the complex, fuzzy, and often ill-defined world around us remains to a large extent unaddressed. Powerful, highly parallel computing paradigms such as neural networks promise to have a major impact in addressing these needs. Papers in the workshop proceedings highlight benefits of neural networks in real-world applications compared to conventional computing techniques. Topics include fault diagnosis, pattern recognition, and multiparameter optimization.

  17. Data compression using artificial neural networks

    SciTech Connect

    Watkins, B.E.

    1991-09-01

This thesis investigates the application of artificial neural networks to the compression of image data. An algorithm is developed using the competitive learning paradigm which takes advantage of the parallel processing and classification capability of neural networks to produce an efficient implementation of vector quantization. Multi-stage, tree-searched, and classification vector quantization codebook designs are adapted to the neural network design to reduce the computational cost and hardware requirements. The results show that the new algorithm provides a substantial reduction in computational costs and an improvement in performance.
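Competitive-learning vector quantization of the kind described can be sketched as follows: codebook vectors compete for each input block, and only the winner moves toward the input. The data and parameters here are illustrative, not the thesis' image data:

```python
import numpy as np

rng = np.random.default_rng(6)

data = rng.normal(size=(1000, 4))                 # stand-in for image blocks
codebook = data[rng.choice(1000, size=8, replace=False)].copy()

lr = 0.1
for epoch in range(10):
    for x in data:
        # Winner-take-all: only the nearest codeword is updated.
        winner = np.argmin(np.sum((codebook - x) ** 2, axis=1))
        codebook[winner] += lr * (x - codebook[winner])

# Quantize: replace each vector by its nearest codeword and measure distortion.
idx = np.argmin(((data[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
distortion = float(np.mean(np.sum((data - codebook[idx]) ** 2, axis=1)))
print(round(distortion, 3))
```

The learned codebook is what gets transmitted alongside the per-block indices; the multi-stage and tree-searched variants in the thesis reduce the cost of the winner search.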

  18. Description of interatomic interactions with neural networks

    NASA Astrophysics Data System (ADS)

    Hajinazar, Samad; Shao, Junping; Kolmogorov, Aleksey N.

    Neural networks are a promising alternative to traditional classical potentials for describing interatomic interactions. Recent research in the field has demonstrated how arbitrary atomic environments can be represented with sets of general functions which serve as an input for the machine learning tool. We have implemented a neural network formalism in the MAISE package and developed a protocol for automated generation of accurate models for multi-component systems. Our tests illustrate the performance of neural networks and known classical potentials for a range of chemical compositions and atomic configurations. Supported by NSF Grant DMR-1410514.

  19. Multispectral-image fusion using neural networks

    NASA Astrophysics Data System (ADS)

    Kagel, Joseph H.; Platt, C. A.; Donaven, T. W.; Samstad, Eric A.

    1990-08-01

A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulation results and a description of the prototype system are presented.

  20. Multispectral image fusion using neural networks

    NASA Technical Reports Server (NTRS)

    Kagel, J. H.; Platt, C. A.; Donaven, T. W.; Samstad, E. A.

    1990-01-01

    A prototype system is being developed to demonstrate the use of neural network hardware to fuse multispectral imagery. This system consists of a neural network IC on a motherboard, a circuit card assembly, and a set of software routines hosted by a PC-class computer. Research in support of this consists of neural network simulations fusing 4 to 7 bands of Landsat imagery and fusing (separately) multiple bands of synthetic imagery. The simulations, results, and a description of the prototype system are presented.

  1. Stock market index prediction using neural networks

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok

    1994-03-01

    A neural network approach to stock market index prediction is presented. Actual data from the Wall Street Journal's Dow Jones Industrial Index is used as a benchmark in our experiments, in which Radial Basis Function based neural networks are designed to model the index over the period from January 1988 to December 1992. Notable success has been achieved, with the proposed model producing over 90% prediction accuracy on monthly Dow Jones Industrial Index predictions. The model has also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrate that the Radial Basis Function neural network is an excellent candidate for predicting the stock market index.
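A minimal illustration of the Radial Basis Function approach: Gaussian basis functions with fixed centers and least-squares output weights, fitted to a synthetic smooth series. This is a generic RBF sketch, not the authors' model; the series, centers, and width are invented for illustration.

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian radial-basis design matrix."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def fit_rbf(x, y, centers, width):
    """Least-squares output weights for fixed centers and width."""
    phi = rbf_design(x, centers, width)
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return w

# a synthetic smooth "monthly index" series (invented, illustration only)
t = np.linspace(0.0, 1.0, 60)
y = 2000.0 + 300.0 * np.sin(2 * np.pi * t) + 50.0 * t

centers = np.linspace(0.0, 1.0, 10)
w = fit_rbf(t, y, centers, width=0.15)
fit = rbf_design(t, centers, 0.15) @ w
max_err = float(np.max(np.abs(fit - y)))
```

With the hidden layer fixed, training reduces to a linear least-squares problem, which is one reason RBF networks were popular for time-series modeling.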

  2. A neural network prototyping package within IRAF

    NASA Technical Reports Server (NTRS)

    Bazell, D.; Bankman, I.

    1992-01-01

    We outline our plans for incorporating a Neural Network Prototyping Package into the IRAF environment. The package we are developing will allow the user to choose between different types of networks and to specify the details of the particular architecture chosen. Neural networks consist of a highly interconnected set of simple processing units. The strengths of the connections between units are determined by weights which are adaptively set as the network 'learns'. In some cases, learning can be a separate phase of the network's use cycle, while in other cases the network learns continuously. Neural networks have been found to be very useful in pattern recognition and image processing applications. They can form very general 'decision boundaries' to differentiate between objects in pattern space, and they can be used for associative recall of patterns based on partial cues and for adaptive filtering. We discuss the different architectures we plan to use and give examples of what they can do.

  3. Facial expression recognition using constructive neural networks

    NASA Astrophysics Data System (ADS)

    Ma, Liying; Khorasani, Khashayar

    2001-08-01

    The computer-based recognition of facial expressions has been an active area of research for quite a long time. The ultimate goal is to realize intelligent and transparent communications between human beings and machines. Neural network (NN) based recognition methods have been found particularly promising, since NNs are capable of implementing the mapping from the feature space of face images to the facial expression space. However, finding a proper network size has always been a frustrating and time-consuming experience for NN developers. In this paper, we propose to use constructive one-hidden-layer feedforward neural networks (OHL-FNNs) to overcome this problem. The constructive OHL-FNN obtains in a systematic way a network size matched to the complexity of the problem being considered. Furthermore, the computational cost involved in network training can be considerably reduced when compared to standard back-propagation (BP) based FNNs. In our proposed technique, the 2-dimensional discrete cosine transform (2-D DCT) is applied over the entire difference face image to extract relevant features for recognition. The lower-frequency 2-D DCT coefficients obtained are then used to train a constructive OHL-FNN. An input-side pruning technique previously proposed by the authors is also incorporated into the constructive learning process to reduce the network size without sacrificing the performance of the resulting network. The proposed technique is applied to a database consisting of images of 60 men, each having 5 facial expression images (neutral, smile, anger, sadness, and surprise). Images of 40 men are used for network training, and the remaining images are used for generalization and
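The feature-extraction step above, keeping only lower-frequency 2-D DCT coefficients, can be sketched directly. The code builds an orthonormal DCT-II matrix and retains the top-left k x k coefficient block; the block size k is an assumed parameter, not one stated in the abstract.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)   # DC row normalization
    return m

def dct2(img):
    """Separable 2-D DCT: transform rows, then columns."""
    d = dct_matrix(img.shape[0])
    e = dct_matrix(img.shape[1])
    return d @ img @ e.T

def low_freq_features(img, k=8):
    """Keep the top-left k x k block of 2-D DCT coefficients as features."""
    return dct2(img)[:k, :k].ravel()
```

Low-frequency coefficients capture the coarse structure of the difference image in far fewer numbers than raw pixels, which is what makes them suitable network inputs.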

  4. Nonequilibrium landscape theory of neural networks

    PubMed Central

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wiring mapped out, a global, physical understanding of brain function and behavior remains challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attraction represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying global stability and function. We found that the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulation are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degree of asymmetry of the connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring, while the flux is responsible for coherent oscillations on the ring. We suggest that the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye-movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments. PMID:24145451

  5. Phase diagram of spiking neural networks

    PubMed Central

    Seyed-allaei, Hamed

    2015-01-01

    In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, and that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments, observations, and trial and error. Here, I take a different perspective, inspired by evolution: I systematically simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable. I stimulate networks with pulses and then measure their dynamic range, the dominant frequency of population activities, the total duration of activities, the maximum rate of population activity, and the occurrence time of that maximum rate. The results are organized in phase diagrams, which give an insight into the space of parameters: the excitatory-to-inhibitory ratio, the sparseness of connections, and the synaptic weights. These phase diagrams can be used to decide the parameters of a model. They show that networks configured according to the common values have a good dynamic range in response to an impulse, that this dynamic range is robust with respect to synaptic weights, and that for some synaptic weights the networks oscillate in α or β frequencies, independent of external stimuli. PMID:25788885
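The "common values" quoted above (2% connection probability, 20% inhibitory neurons) translate directly into a random weight matrix. The sketch below is a generic construction, not the paper's simulator; the inhibitory weight magnitude is an assumed convention chosen so total inhibition roughly balances total excitation.

```python
import numpy as np

def random_ei_network(n=1000, p=0.02, frac_inh=0.2,
                      w_exc=1.0, w_inh=-4.0, seed=0):
    """Sparse weight matrix with the 'common values':
    2% connectivity, 80/20 excitatory/inhibitory split."""
    rng = np.random.default_rng(seed)
    inhibitory = rng.random(n) < frac_inh          # presynaptic cell types
    mask = rng.random((n, n)) < p                  # who connects to whom
    np.fill_diagonal(mask, False)                  # no self-connections
    # Dale's principle: the sign of every outgoing weight is set by the
    # presynaptic neuron's type (rows = presynaptic here)
    w = np.where(inhibitory[:, None], w_inh, w_exc) * mask
    return w, inhibitory
```

Sweeping `p`, `frac_inh`, and the weight scale over a grid of such matrices is essentially how the phase diagrams in the abstract are populated.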

  6. Neural network mechanisms underlying stimulus driven variability reduction.

    PubMed

    Deco, Gustavo; Hugues, Etienne

    2012-01-01

    It is well established that the variability of the neural activity across trials, as measured by the Fano factor, is elevated. This fact poses limits on information encoding by the neural activity. However, a series of recent neurophysiological experiments have changed this traditional view. Single-cell recordings across a variety of species, brain areas, brain states and stimulus conditions demonstrate a remarkable reduction of the neural variability when an external stimulus is applied and when attention is allocated towards a stimulus within a neuron's receptive field, suggesting an enhancement of information encoding. Using a heterogeneously connected neural network model whose dynamics exhibits multiple attractors, we demonstrate here how this variability reduction can arise from a network effect. In the spontaneous state, we show that the high degree of neural variability is mainly due to fluctuation-driven excursions from attractor to attractor. This occurs when, in the parameter space, the network working point is around the bifurcation allowing multistable attractors. The application of an external excitatory drive by stimulation or attention stabilizes one specific attractor, eliminating in this way the transitions between the different attractors and resulting in a net decrease in neural variability over trials. Importantly, non-responsive neurons also exhibit a reduction of variability. Finally, this reduced variability is found to arise from an increased regularity of the neural spike trains. In conclusion, these results suggest that the variability reduction under stimulation and attention is a property of neural circuits. PMID:22479168

  7. An Introduction to Neural Networks for Hearing Aid Noise Recognition.

    ERIC Educational Resources Information Center

    Kim, Jun W.; Tyler, Richard S.

    1995-01-01

    This article introduces the use of multilayered artificial neural networks in hearing aid noise recognition. It reviews basic principles of neural networks, and offers an example of an application in which a neural network is used to identify the presence or absence of noise in speech. The ability of neural networks to "learn" the characteristics…

  8. Results of the neural network investigation

    NASA Astrophysics Data System (ADS)

    Uvanni, Lee A.

    1992-04-01

    Rome Laboratory has designed and implemented a neural network based automatic target recognition (ATR) system under contract F30602-89-C-0079 with Booz, Allen & Hamilton (BAH), Inc., of Arlington, Virginia. The system utilizes a combination of neural network paradigms and conventional image processing techniques in a parallel environment on the IE-2000 SUN 4 workstation at Rome Laboratory. The IE-2000 workstation was designed to assist the Air Force and Department of Defense in deriving the needs for image exploitation and image exploitation support for the late 1990s to year 2000 time frame. The IE-2000 consists of a developmental testbed and an applications testbed, both with the goal of solving real-world problems on real-world facilities for image exploitation. To fully exploit the parallel nature of neural networks, 18 Inmos T800 transputers were utilized, in an attempt to provide a near-linear speed-up for each subsystem component implemented on them. The initial design contained three well-known neural network paradigms, each modified by BAH to some extent: the Selective Attention Neocognitron (SAN), the Binary Contour System/Feature Contour System (BCS/FCS), and Adaptive Resonance Theory 2 (ART-2), plus one neural network designed by BAH called the Image Variance Exploitation Network (IVEN). Through rapid prototyping, the initial system evolved into a completely different final design, called the Neural Network Image Exploitation System (NNIES), which consists of two basic components: the Double Variance (DV) layer and the Multiple Object Detection And Location System (MODALS). A rapid prototyping neural network CAD tool, designed by Booz, Allen & Hamilton, was used to rapidly build and emulate the neural network paradigms. Evaluation of the completed ATR system included probability of detection and probability of false alarm, among other measures.

  9. Parameter extraction with neural networks

    NASA Astrophysics Data System (ADS)

    Cazzanti, Luca; Khan, Mumit; Cerrina, Franco

    1998-06-01

    In semiconductor processing, the modeling of the process is becoming more and more important. While the ultimate goal is that of developing a set of tools for designing a complete process (Technology CAD), it is also necessary to have modules to simulate the various technologies and, in particular, to optimize specific steps. This need is particularly acute in lithography, where the continuous decrease in CD forces the technologies to operate near their limits. In the development of a 'model' for a physical process, we face several levels of challenges. First, it is necessary to develop a 'physical model,' i.e. a rational description of the process itself on the basis of known physical laws. Second, we need an 'algorithmic model' to represent in a virtual environment the behavior of the 'physical model.' After a 'complete' model has been developed and verified, it becomes possible to do performance analysis. In many cases the input parameters are poorly known or not accessible directly to experiment. It would be extremely useful to obtain the values of these 'hidden' parameters from experimental results by comparing model to data. This problem is particularly severe because the complexity and costs associated with semiconductor processing make a simple 'trial-and-error' approach infeasible and cost-inefficient. Even when computer models of the process already exist, obtaining data through simulations may be time consuming. Neural networks (NN) are powerful computational tools to predict the behavior of a system from an existing data set. They are able to adaptively 'learn' input/output mappings and to act as universal function approximators. In this paper we use artificial neural networks to build a mapping from the input parameters of the process to output parameters which are indicative of the performance of the process.
Once the NN has been 'trained,' it is also possible to observe the process 'in reverse,' and to extract the values of the inputs which yield outputs
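The train-then-invert idea can be demonstrated on a toy one-parameter process: fit a small network to the forward input-to-output mapping, then scan candidate inputs for the one whose predicted output matches an observed value. Everything here (the plant, network size, learning rate) is an assumption for illustration, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "process": one input parameter -> one performance output
x = np.linspace(-1.0, 1.0, 64)[:, None]
y = np.sin(np.pi * x)

# one-hidden-layer network trained by full-batch gradient descent
w1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
w2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(inp):
    h = np.tanh(inp @ w1 + b1)
    return h, h @ w2 + b2

losses, lr = [], 0.05
for _ in range(3000):
    h, out = forward(x)
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # backpropagation of the mean-squared error
    g2 = h.T @ err / len(x); gb2 = err.mean(0)
    gh = err @ w2.T * (1 - h ** 2)
    g1 = x.T @ gh / len(x); gb1 = gh.mean(0)
    w2 -= lr * g2; b2 -= lr * gb2; w1 -= lr * g1; b1 -= lr * gb1

# "in reverse": scan candidate inputs for the one whose predicted
# output best matches an observed value
target = 0.5
grid = np.linspace(-1.0, 1.0, 2001)[:, None]
_, pred = forward(grid)
x_hat = float(grid[np.argmin(np.abs(pred - target))][0])
```

The grid scan stands in for whatever inversion strategy the authors use; the point is that once the cheap surrogate exists, searching input space no longer requires running the expensive simulation.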

  10. Healthy human CSF promotes glial differentiation of hESC-derived neural cells while retaining spontaneous activity in existing neuronal networks

    PubMed Central

    Kiiski, Heikki; Äänismaa, Riikka; Tenhunen, Jyrki; Hagman, Sanna; Ylä-Outinen, Laura; Aho, Antti; Yli-Hankala, Arvi; Bendel, Stepani; Skottman, Heli; Narkilahti, Susanna

    2013-01-01

    The possibilities of human pluripotent stem cell-derived neural cells, from a basic research tool to a treatment option in regenerative medicine, have been well recognized. These cells also offer an interesting tool for in vitro models of neuronal networks to be used for drug screening and neurotoxicological studies and for patient/disease specific in vitro models. Here, aiming to develop a reductionist in vitro human neuronal network model, we tested whether human embryonic stem cell (hESC)-derived neural cells could be cultured in human cerebrospinal fluid (CSF) in order to better mimic the in vivo conditions. Our results showed that CSF altered the differentiation of hESC-derived neural cells towards glial cells at the expense of neuronal differentiation. The proliferation rate was reduced in CSF cultures. However, even though the use of CSF as the culture medium altered the glial vs. neuronal differentiation rate, the pre-existing spontaneous activity of the neuronal networks persisted throughout the study. These results suggest that it is possible to develop fully human cell and culture-based environments that can further be modified for various in vitro modeling purposes. PMID:23789111

  11. Imbibition well stimulation via neural network design

    DOEpatents

    Weiss, William

    2007-08-14

    A method for stimulation of hydrocarbon production via imbibition by utilization of surfactants. The method includes use of fuzzy logic and neural network architecture constructs to determine surfactant use.

  12. Using Neural Networks for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William

    1998-01-01

    This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.
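A linear stand-in for the auto-associative idea in the second approach: project redundant sensor readings onto a learned subspace and back, flag the channel with the largest reconstruction residual, and replace it by its reconstruction. The paper's network performs nonlinear principal component analysis; linear PCA is used here only to keep the sketch short, and the three-sensor model is invented.

```python
import numpy as np

def fit_pca(train, k):
    """Learn a k-dimensional subspace of the sensor space."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:k]

def reconstruct(x, mean, comps):
    """Project onto the subspace and back: the auto-associative output."""
    return mean + (x - mean) @ comps.T @ comps

rng = np.random.default_rng(0)
latent = rng.normal(0.0, 1.0, 200)
# three redundant sensors measuring the same latent signal plus noise
train = np.column_stack([latent + rng.normal(0.0, 0.05, 200) for _ in range(3)])
mean, comps = fit_pca(train, k=1)

x = np.array([1.0, 1.0, 5.0])                    # sensor 2 failed high
resid = np.abs(x - reconstruct(x, mean, comps))
failed = int(np.argmax(resid))                   # largest residual flags the fault
for _ in range(50):                              # iteratively heal the bad channel
    x[failed] = reconstruct(x, mean, comps)[failed]
estimate = x[failed]
```

The iteration converges because the healthy channels anchor the projection; with a nonlinear auto-associative network the same replace-and-reproject loop applies.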

  13. Constructive Autoassociative Neural Network for Facial Recognition

    PubMed Central

    Fernandes, Bruno J. T.; Cavalcanti, George D. C.; Ren, Tsang I.

    2014-01-01

    Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature. PMID:25542018

  14. Radar signal categorization using a neural network

    NASA Technical Reports Server (NTRS)

    Anderson, James A.; Gately, Michael T.; Penz, P. Andrew; Collins, Dean R.

    1991-01-01

    Neural networks were used to analyze a complex simulated radar environment which contains noisy radar pulses generated by many different emitters. The neural network used is an energy minimizing network (the BSB model) which forms energy minima - attractors in the network dynamical system - based on learned input data. The system first determines how many emitters are present (the deinterleaving problem). Pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, it is possible to make tentative identifications of them based on their observed parameters. As a test of this idea, a neural network was used to form a small data base that potentially could make emitter identifications.
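The BSB ("Brain-State-in-a-Box") dynamics referred to above are simple to state: feed the state back through the weight matrix and clip each component to the box [-1, 1], so that learned patterns become stable corner attractors. The sketch below stores two patterns with a Hebbian outer product; the feedback gain is an assumed value.

```python
import numpy as np

def bsb_step(x, w, alpha=0.2):
    """One BSB update: feedback through W, then clip the state to the box."""
    return np.clip(x + alpha * (w @ x), -1.0, 1.0)

# store two orthogonal bipolar patterns via a Hebbian outer product
p1 = np.array([1.0, 1.0, -1.0, -1.0])
p2 = np.array([1.0, -1.0, 1.0, -1.0])
w = np.outer(p1, p1) + np.outer(p2, p2)

x = np.array([0.4, 0.3, -0.3, -0.2])   # noisy, low-amplitude version of p1
for _ in range(30):
    x = bsb_step(x, w)                 # state migrates to the corner p1
```

In the radar application, pulse-parameter vectors from one emitter fall into one such attractor basin, which is how the deinterleaving emerges.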

  15. Neural network activation during a stop-signal task discriminates cocaine-dependent from non-drug-abusing men

    PubMed Central

    Elton, Amanda; Young, Jonathan; Smitherman, Sonet; Gross, Robin E.; Mletzko, Tanja; Kilts, Clinton D.

    2012-01-01

    Cocaine dependence is defined by a loss of inhibitory control over drug use behaviors, mirrored by measurable impairments in laboratory tasks of inhibitory control. The current study tested the hypothesis that deficits in multiple sub-processes of behavioral control are associated with reliable neural processing alterations that define cocaine addiction. While undergoing fMRI, 38 cocaine-dependent men and 27 healthy control men performed a stop-signal task of motor inhibition. An independent component analysis (ICA) on fMRI time courses identified task-related neural networks attributed to motor, visual, cognitive and affective processes. The statistical associations of these components with five different stop-signal task conditions were selected for use in a linear discriminant analysis to define a classifier for cocaine addiction from a subsample of 26 cocaine-dependent men and 18 controls. Leave-one-out cross-validation accurately classified 89.5% (39/44; chance accuracy = 26/44 = 59.1%) of subjects, with 84.6% (22/26) sensitivity and 94.4% (17/18) specificity. The remaining 12 cocaine-dependent and 9 control men formed an independent test sample, for which the accuracy of the classifier was 81.9% (17/21; chance accuracy = 12/21 = 57.1%), with 75% (9/12) sensitivity and 88.9% (8/9) specificity. The cocaine addiction classification score was significantly correlated with a measure of impulsiveness as well as the duration of cocaine use for cocaine-dependent men. The results of this study support the ability of a pattern of multiple neural network alterations associated with inhibitory motor control to define a binary classifier for cocaine addiction. PMID:23231419
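The discriminant-analysis-with-leave-one-out protocol used above can be sketched generically. The code below runs a two-class Fisher discriminant on synthetic features; it illustrates only the cross-validation scheme, not the paper's ICA-derived features or its actual data.

```python
import numpy as np

def fisher_lda(xa, xb):
    """Two-class Fisher discriminant: weight vector and decision threshold."""
    sw = np.cov(xa.T) * (len(xa) - 1) + np.cov(xb.T) * (len(xb) - 1)
    w = np.linalg.solve(sw + 1e-6 * np.eye(sw.shape[0]), xa.mean(0) - xb.mean(0))
    thresh = 0.5 * (xa.mean(0) + xb.mean(0)) @ w
    return w, thresh

def loo_accuracy(xa, xb):
    """Leave-one-out cross-validation: refit with one subject held out."""
    correct, total = 0, len(xa) + len(xb)
    for i in range(len(xa)):
        w, t = fisher_lda(np.delete(xa, i, 0), xb)
        correct += xa[i] @ w > t
    for i in range(len(xb)):
        w, t = fisher_lda(xa, np.delete(xb, i, 0))
        correct += xb[i] @ w <= t
    return correct / total

rng = np.random.default_rng(0)
xa = rng.normal(+1.0, 0.5, (30, 2))   # synthetic group-A feature vectors
xb = rng.normal(-1.0, 0.5, (30, 2))   # synthetic group-B feature vectors
acc = loo_accuracy(xa, xb)
```

Refitting inside every fold, as here, is what keeps the held-out subject from leaking into the classifier and inflating the reported accuracy.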

  16. Using neural networks in software repositories

    NASA Technical Reports Server (NTRS)

    Eichmann, David (Editor); Srinivas, Kankanahalli; Boetticher, G.

    1992-01-01

    The first topic is an exploration of the use of neural network techniques to improve the effectiveness of retrieval in software repositories. The second topic relates to a series of experiments conducted to evaluate the feasibility of using adaptive neural networks as a means of deriving (or more specifically, learning) measures on software. Taken together, these two efforts illuminate a very promising mechanism supporting software infrastructures - one based upon a flexible and responsive technology.

  17. Limitations of opto-electronic neural networks

    NASA Technical Reports Server (NTRS)

    Yu, Jeffrey; Johnston, Alan; Psaltis, Demetri; Brady, David

    1989-01-01

    Consideration is given to the limitations of implementing neurons, weights, and connections in neural networks using electronics and optics. It is shown that the advantages of each technology are best utilized when electronically fabricated neurons are included and a combination of optics and electronics is employed for the weights and connections. The relationship between the types of neural networks being constructed and the choice of technologies to implement the weights and connections is examined.

  18. Neural network simulations of the nervous system.

    PubMed

    van Leeuwen, J L

    1990-01-01

    Present knowledge of brain mechanisms is mainly based on anatomical and physiological studies. Such studies are however insufficient to understand the information processing of the brain. The present new focus on neural network studies is the most likely candidate to fill this gap. The present paper reviews some of the history and current status of neural network studies. It signals some of the essential problems for which answers have to be found before substantial progress in the field can be made. PMID:2245130

  19. Neural-Network Controller For Vibration Suppression

    NASA Technical Reports Server (NTRS)

    Boussalis, Dhemetrios; Wang, Shyh Jong

    1995-01-01

    Neural-network-based adaptive-control system proposed for vibration suppression of flexible space structures. Controller features three-layer neural network and utilizes output feedback. Measurements generated by various sensors on structure. Feedforward path also included to speed up response in case plant exhibits predominantly linear dynamic behavior. System applicable to single-input single-output systems. Work extended to multiple-input multiple-output systems as well.

  20. Neural Networks for Signal Processing and Control

    NASA Astrophysics Data System (ADS)

    Hesselroth, Ted Daniel

    Neural networks are developed for controlling a robot-arm and camera system and for processing images. The networks are based upon computational schemes that may be found in the brain. In the first network, a neural map algorithm is employed to control a five-joint pneumatic robot arm and gripper through feedback from two video cameras. The pneumatically driven robot arm employed shares essential mechanical characteristics with skeletal muscle systems. To control the position of the arm, 200 neurons formed a network representing the three-dimensional workspace embedded in a four-dimensional system of coordinates from the two cameras, and learned a set of pressures corresponding to the end effector positions, as well as a set of Jacobian matrices for interpolating between these positions. Because of the properties of the rubber-tube actuators of the arm, the position as a function of supplied pressure is nonlinear, nonseparable, and exhibits hysteresis. Nevertheless, through the neural network learning algorithm the position could be controlled to an accuracy of about one pixel (~3 mm) after two hundred learning steps. Application of repeated corrections in each step via the Jacobian matrices leads to a very robust control algorithm, since the Jacobians learned by the network have to satisfy only the weak requirement that they yield a reduction of the distance between gripper and target. The second network is proposed as a model for the mammalian vision system, in which backward connections from the primary visual cortex (V1) to the lateral geniculate nucleus play a key role. The application of Hebbian learning to the forward and backward connections causes the formation of receptive fields which are sensitive to edges, bars, and spatial frequencies of preferred orientations. The receptive fields are learned in such a way as to maximize the rate of transfer of information from the LGN to V1. 
Orientational preferences are organized into a feature map in the primary visual
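The "repeated Jacobian corrections" idea from the robot-arm controller amounts to a Newton-style iteration: apply the (locally learned) Jacobian inverse to the remaining end-effector error until it vanishes. The toy two-joint plant and finite-difference Jacobian below stand in for the network's learned pressures and matrices; all functions and values are assumptions.

```python
import numpy as np

def forward(p):
    """Stand-in nonlinear plant: control pressures -> end-effector position."""
    return np.array([np.sin(p[0]) + 0.3 * p[1], np.cos(p[0]) * p[1]])

def jacobian(p, eps=1e-6):
    """Finite-difference Jacobian (the network learns these locally)."""
    j = np.zeros((2, 2))
    f0 = forward(p)
    for k in range(2):
        dp = np.zeros(2); dp[k] = eps
        j[:, k] = (forward(p + dp) - f0) / eps
    return j

target = np.array([0.8, 0.5])
p = np.array([0.1, 0.5])
for _ in range(20):
    err = target - forward(p)
    p = p + np.linalg.solve(jacobian(p), err)   # repeated Jacobian corrections
```

As the abstract notes, each correction only needs to shrink the remaining error, which is why the scheme tolerates Jacobians that are merely approximate.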

  1. Optimization neural network for solving flow problems.

    PubMed

    Perfetti, R

    1995-01-01

    This paper describes a neural network for solving flow problems, which are of interest in many areas of application, such as fuel, hydro, and electric power scheduling. The neural network consists of two layers: a hidden layer and an output layer. The hidden units correspond to the nodes of the flow graph. The output units represent the branch variables. The network has a linear order of complexity, it is easily programmable, and it is suited for analog very large scale integration (VLSI) realization. The functionality of the proposed network is illustrated by a simulation example concerning the maximal flow problem. PMID:18263420
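For reference, the maximal flow problem used in the simulation example has a classical algorithmic solution. The sketch below is a conventional Edmonds-Karp augmenting-path search, shown only as a correctness baseline against which an analog network's answer could be checked; it is not the paper's neural network.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total                      # no path left: flow is maximal
        # bottleneck capacity along the path found
        bottleneck, v = float("inf"), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # push the bottleneck along the path (reverse edges allow undo)
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
```

A small four-node example: `max_flow([[0,3,2,0],[0,0,1,2],[0,0,0,3],[0,0,0,0]], 0, 3)` saturates both source edges.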

  2. A neural network simulation package in CLIPS

    NASA Technical Reports Server (NTRS)

    Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John

    1990-01-01

    The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). A very important by-product of this research has been the emergence of an integrated technique of using rule-based systems in conjunction with neural networks to solve complex problems. The system provides a tool kit for an integrated use of the two techniques and is also extensible to accommodate other AI techniques such as semantic networks, connectionist networks, and even Petri nets. This integrated technique can be very useful in solving complex AI problems.

  3. Neural Synchrony in Schizophrenia: From Networks to New Treatments

    PubMed Central

    Ford, Judith M.; Krystal, John H.; Mathalon, Daniel H.

    2007-01-01

    Evidence is accumulating that brain regions communicate with each other in the temporal domain, relying on coincidence of neural activity to detect phasic relationships among neurons and neural assemblies. This coordination between neural populations has been described as “self-organizing,” an “emergent property” of neural networks arising from the temporal synchrony between synaptic transmission and firing of distinct neuronal populations. Evidence is also accumulating that communication and coordination failures between different brain regions may account for a wide range of problems in schizophrenia, from psychosis to cognitive dysfunction. We review the knowledge about the functional neuroanatomy and neurochemistry of neural oscillations and oscillation abnormalities in schizophrenia. Based on this, we argue that we can begin to use oscillations, across frequencies, to do translational studies to understand the neural basis of schizophrenia. PMID:17567628

  4. Antagonistic neural networks underlying differentiated leadership roles

    PubMed Central

    Boyatzis, Richard E.; Rochford, Kylie; Jack, Anthony I.

    2014-01-01

    The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task-oriented and socio-emotional-oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks – the task-positive network (TPN) and the default mode network (DMN). Neural activity in TPN tends to inhibit activity in the DMN, and vice versa. The TPN is important for problem solving, focusing of attention, making decisions, and control of action. The DMN plays a central role in emotional self-awareness, social cognition, and ethical decision making. It is also strongly linked to creativity and openness to new ideas. Because activation of the TPN tends to suppress activity in the DMN, an over-emphasis on task-oriented leadership may prove deleterious to social and emotional aspects of leadership. Similarly, an overemphasis on the DMN would result in difficulty focusing attention, making decisions, and solving known problems. In this paper, we will review major streams of theory and research on leadership roles in the context of recent findings from neuroscience and psychology. We conclude by suggesting that emerging research challenges the assumption that role differentiation is both natural and necessary, in particular when openness to new ideas, people, emotions, and ethical concerns are important to success. PMID:24624074

  5. Antagonistic neural networks underlying differentiated leadership roles.

    PubMed

    Boyatzis, Richard E; Rochford, Kylie; Jack, Anthony I

    2014-01-01

    The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task-oriented and socio-emotional-oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks - the task-positive network (TPN) and the default mode network (DMN). Neural activity in TPN tends to inhibit activity in the DMN, and vice versa. The TPN is important for problem solving, focusing of attention, making decisions, and control of action. The DMN plays a central role in emotional self-awareness, social cognition, and ethical decision making. It is also strongly linked to creativity and openness to new ideas. Because activation of the TPN tends to suppress activity in the DMN, an over-emphasis on task-oriented leadership may prove deleterious to social and emotional aspects of leadership. Similarly, an overemphasis on the DMN would result in difficulty focusing attention, making decisions, and solving known problems. In this paper, we will review major streams of theory and research on leadership roles in the context of recent findings from neuroscience and psychology. We conclude by suggesting that emerging research challenges the assumption that role differentiation is both natural and necessary, in particular when openness to new ideas, people, emotions, and ethical concerns are important to success. PMID:24624074

  6. Adaptive control of nonlinear systems using multistage dynamic neural networks

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Rao, Dandina H.

    1992-11-01

In this paper we present a new neuron architecture, called the dynamic neural unit (DNU). The topology of the proposed neuronal model embodies delay elements, feedforward and feedback signals weighted by the synaptic weights, and a time-varying nonlinear activation function, and is thus different from the conventionally assumed architecture of neurons. The learning algorithm for the proposed neuronal structure and the corresponding implementation scheme are presented. A multistage dynamic neural network is developed using the DNU as the basic processing element. The performance evaluation of the dynamic neural network is presented for nonlinear dynamic systems under various situations. The capabilities of the proposed neural network model not only account for learning and control actions emulating some biological control functions, but also provide a promising parallel-distributed intelligent control scheme for large-scale complex dynamic systems.

  7. Speech synthesis with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Weijters, Ton; Thole, Johan

    1992-10-01

    The application of neural nets to speech synthesis is considered. In speech synthesis, the main efforts so far have been to master the grapheme to phoneme conversion. During this conversion symbols (graphemes) are converted into other symbols (phonemes). Neural networks, however, are especially competitive for tasks in which complex nonlinear transformations are needed and sufficient domain specific knowledge is not available. The conversion of text into speech parameters appropriate as input for a speech generator seems such a task. Results of a pilot study in which an attempt is made to train a neural network for this conversion are presented.

  8. A neural network for visual pattern recognition

    SciTech Connect

    Fukushima, K.

    1988-03-01

A modeling approach, which is a synthetic approach using neural network models, continues to gain importance. In the modeling approach, the authors study how to interconnect neurons to synthesize a brain model, which is a network with the same functions and abilities as the brain. The relationship between modeling neural networks and neurophysiology resembles that between theoretical physics and experimental physics. Modeling takes a synthetic approach, while neurophysiology or psychology takes an analytical approach. Modeling neural networks is useful in explaining the brain and also in engineering applications. It brings the results of neurophysiological and psychological research to engineering applications in the most direct way possible. This article discusses a neural network model thus obtained, a model with selective attention in visual pattern recognition.

  9. Unsupervised classification of neural spikes with a hybrid multilayer artificial neural network.

    PubMed

    García, P; Suárez, C P; Rodríguez, J; Rodríguez, M

    1998-07-01

The understanding of the brain structure and function and its computational style is one of the biggest challenges in both Neuroscience and Neural Computation. In order to reach this goal, and to test the predictions of neural network modeling, it is necessary to observe the activity of neural populations. In this paper we propose a hybrid modular computational system for the spike classification of multiunit recordings. It works with no knowledge about the waveform, and it consists of two modules: a Preprocessing (Segmentation) module, which performs the detection and centering of spike vectors using programmed computation; and a Processing (Classification) module, which implements the general approach of neural classification: feature extraction, clustering and discrimination, by means of a hybrid unsupervised multilayer artificial neural network (HUMANN). The operations of this artificial neural network on the spike vectors are: (i) compression with a Sanger layer from a 70-point vector to a five-principal-component vector; (ii) analysis of the waveform by a Kohonen layer; (iii) rejection of electrical noise and overlapping spikes by a previously unreported artificial neural network layer named the Tolerance layer; and (iv) labeling of the spikes into spike classes by a Labeling layer. Each layer of the system has a specific unsupervised learning rule that progressively modifies itself until the performance of the layer has been automatically optimized. The procedure showed high sensitivity and specificity, even when working with signals containing four spike types. PMID:10223516
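The Sanger layer in step (i) implements Sanger's generalized Hebbian rule for online principal component extraction. A minimal pure-Python sketch of that rule (illustrative only, not the authors' HUMANN code; the 2-D toy data stands in for the 70-point spike vectors):

```python
import random

def sanger_train(data, n_components, lr=0.05, epochs=20):
    """Online PCA via Sanger's generalized Hebbian rule:
    dw_ij = lr * y_i * (x_j - sum_{k<=i} y_k * w_kj)."""
    dim = len(data[0])
    rng = random.Random(0)
    w = [[rng.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(n_components)]
    for _ in range(epochs):
        for x in data:
            y = [sum(w[i][j] * x[j] for j in range(dim)) for i in range(n_components)]
            for i in range(n_components):
                # Reconstruction using components up to and including i
                recon = [sum(y[k] * w[k][j] for k in range(i + 1)) for j in range(dim)]
                for j in range(dim):
                    w[i][j] += lr * y[i] * (x[j] - recon[j])
    return w

# Toy data whose dominant variance lies along the (1, 1) direction
rng = random.Random(1)
data = []
for _ in range(200):
    s = rng.uniform(-1.0, 1.0)
    data.append([s + rng.uniform(-0.1, 0.1), s + rng.uniform(-0.1, 0.1)])

w = sanger_train(data, n_components=1)
```

After training, the first weight vector converges to (approximately) the unit-norm leading principal direction of the data.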

  10. The H1 neural network trigger project

    NASA Astrophysics Data System (ADS)

    Kiesling, C.; Denby, B.; Fent, J.; Fröchtenicht, W.; Garda, P.; Granado, B.; Grindhammer, G.; Haberer, W.; Janauschek, L.; Kobler, T.; Koblitz, B.; Nellen, G.; Prevotet, J.-C.; Schmidt, S.; Tzamariudaki, E.; Udluft, S.

    2001-08-01

We present a short overview of neuromorphic hardware and some of the physics projects making use of such devices. As a concrete example we describe an innovative project within the H1 experiment at the electron-proton collider HERA, instrumenting hardwired neural networks as pattern recognition machines to discriminate between wanted physics and uninteresting background at the trigger level. The decision time of the system is less than 20 microseconds, typical for a modern second-level trigger. The neural trigger has been running successfully for the past four years and has turned up new physics results from H1 unobtainable so far with other triggering schemes. We describe the concepts and the technical realization of the neural network trigger system, present the most important physics results, and motivate an upgrade of the system for the future high-luminosity running at HERA. The upgrade concentrates on "intelligent preprocessing" of the neural inputs, which helps to strongly improve the networks' discrimination power.

  11. Fuzzy logic and neural networks

    SciTech Connect

    Loos, J.R.

    1994-11-01

Combine fuzzy logic's fuzzy sets, fuzzy operators, fuzzy inference, and fuzzy rules - like defuzzification - with neural networks and you can arrive at very unfuzzy real-time control. Fuzzy logic, cursed with a very whimsical title, simply means multivalued logic, which includes not only the conventional two-valued (true/false) crisp logic, but also the logic of three or more values. This means one can assign logic values of true, false, and somewhere in between. This is where fuzziness comes in. Multi-valued logic avoids the black-and-white, all-or-nothing assignment of true or false to an assertion. Instead, it permits the assignment of shades of gray. When assigning a value of true or false to an assertion, the numbers typically used are "1" or "0". This is the case for programmed systems. If "0" means "false" and "1" means "true," then "shades of gray" are any numbers between 0 and 1. Therefore, "nearly true" may be represented by 0.8 or 0.9, "nearly false" may be represented by 0.1 or 0.2, and "your guess is as good as mine" may be represented by 0.5. The flexibility available to one is limitless. One can associate any meaning, such as "nearly true", to any value of any granularity, such as 0.9999. 2 figs.
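The shades-of-gray assignments described above plug directly into the usual fuzzy operators (one common choice, due to Zadeh: min for AND, max for OR, complement for NOT). A tiny illustrative sketch:

```python
def fuzzy_not(a):
    return 1.0 - a

def fuzzy_and(a, b):
    return min(a, b)   # Zadeh t-norm

def fuzzy_or(a, b):
    return max(a, b)   # Zadeh t-conorm

nearly_true, nearly_false, maybe = 0.9, 0.1, 0.5

# "(nearly_true AND maybe) OR nearly_false"
verdict = fuzzy_or(fuzzy_and(nearly_true, maybe), nearly_false)
print(verdict)  # 0.5
```

Other t-norms (e.g. product for AND) are equally common; min/max is just the simplest choice.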

  12. Amyloid Beta-Protein and Neural Network Dysfunction

    PubMed Central

    Peña-Ortega, Fernando

    2013-01-01

    Understanding the neural mechanisms underlying brain dysfunction induced by amyloid beta-protein (Aβ) represents one of the major challenges for Alzheimer's disease (AD) research. The most evident symptom of AD is a severe decline in cognition. Cognitive processes, as any other brain function, arise from the activity of specific cell assemblies of interconnected neurons that generate neural network dynamics based on their intrinsic and synaptic properties. Thus, the origin of Aβ-induced cognitive dysfunction, and possibly AD-related cognitive decline, must be found in specific alterations in properties of these cells and their consequences in neural network dynamics. The well-known relationship between AD and alterations in the activity of several neural networks is reflected in the slowing of the electroencephalographic (EEG) activity. Some features of the EEG slowing observed in AD, such as the diminished generation of different network oscillations, can be induced in vivo and in vitro upon Aβ application or by Aβ overproduction in transgenic models. This experimental approach offers the possibility to study the mechanisms involved in cognitive dysfunction produced by Aβ. This type of research may yield not only basic knowledge of neural network dysfunction associated with AD, but also novel options to treat this modern epidemic. PMID:26316994

  13. Spontaneous Neural Dynamics and Multi-scale Network Organization

    PubMed Central

    Foster, Brett L.; He, Biyu J.; Honey, Christopher J.; Jerbi, Karim; Maier, Alexander; Saalmann, Yuri B.

    2016-01-01

    Spontaneous neural activity has historically been viewed as task-irrelevant noise that should be controlled for via experimental design, and removed through data analysis. However, electrophysiology and functional MRI studies of spontaneous activity patterns, which have greatly increased in number over the past decade, have revealed a close correspondence between these intrinsic patterns and the structural network architecture of functional brain circuits. In particular, by analyzing the large-scale covariation of spontaneous hemodynamics, researchers are able to reliably identify functional networks in the human brain. Subsequent work has sought to identify the corresponding neural signatures via electrophysiological measurements, as this would elucidate the neural origin of spontaneous hemodynamics and would reveal the temporal dynamics of these processes across slower and faster timescales. Here we survey common approaches to quantifying spontaneous neural activity, reviewing their empirical success, and their correspondence with the findings of neuroimaging. We emphasize invasive electrophysiological measurements, which are amenable to amplitude- and phase-based analyses, and which can report variations in connectivity with high spatiotemporal precision. After summarizing key findings from the human brain, we survey work in animal models that display similar multi-scale properties. We highlight that, across many spatiotemporal scales, the covariance structure of spontaneous neural activity reflects structural properties of neural networks and dynamically tracks their functional repertoire. PMID:26903823

  14. A stereo-compound hybrid microscope for combined intracellular and optical recording of invertebrate neural network activity.

    PubMed

    Frost, William N; Wang, Jean; Brandon, Christopher J

    2007-05-15

    Optical recording studies of invertebrate neural networks with voltage-sensitive dyes seldom employ conventional intracellular electrodes. This may in part be due to the traditional reliance on compound microscopes for such work. While such microscopes have high light-gathering power, they do not provide depth of field, making working with sharp electrodes difficult. Here we describe a hybrid microscope design, with switchable compound and stereo objectives, that eases the use of conventional intracellular electrodes in optical recording experiments. We use it, in combination with a voltage-sensitive dye and photodiode array, to identify neurons participating in the swim motor program of the marine mollusk Tritonia. This microscope design should be applicable to optical recording studies in many preparations. PMID:17306887

  15. On sparsely connected optimal neural networks

    SciTech Connect

    Beiu, V.; Draghici, S.

    1997-10-01

This paper uses two different approaches to show that VLSI- and size-optimal discrete neural networks are obtained for small fan-in values. These have applications to hardware implementations of neural networks, but also reveal an intrinsic limitation of digital VLSI technology: its inability to cope with highly connected structures. The first approach is based on implementing F_{n,m} functions. The authors show that this class of functions can be implemented in VLSI-optimal (i.e., minimizing AT^2) neural networks of small constant fan-ins. In order to estimate the area (A) and the delay (T) of such networks, the following cost functions will be used: (i) the connectivity and the number of bits for representing the weights and thresholds, for good estimates of the area; and (ii) the fan-ins and the length of the wires, for good approximations of the delay. The second approach is based on implementing Boolean functions for which the classical Shannon decomposition can be used. Such a solution has already been used to prove bounds on the size of fan-in-2 neural networks. The authors generalize the result presented there to arbitrary fan-in, and prove that the size is minimized by small fan-in values. Finally, a size-optimal neural network of small constant fan-ins will be suggested for F_{n,m} functions.
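The classical Shannon decomposition used in the second approach splits a Boolean function on one variable: f(x1, rest) = (x1 AND f|x1=1) OR (NOT x1 AND f|x1=0). A quick illustrative check in Python, on a hypothetical majority gate rather than anything from the paper:

```python
from itertools import product

def shannon_decompose(f):
    """Cofactors of f with respect to its first variable, so that
    f(x1, *rest) == (x1 and f1(*rest)) or (not x1 and f0(*rest))."""
    f0 = lambda *rest: f(0, *rest)
    f1 = lambda *rest: f(1, *rest)
    return f0, f1

def majority3(a, b, c):
    # 3-input majority gate: 1 when at least two inputs are 1
    return int(a + b + c >= 2)

f0, f1 = shannon_decompose(majority3)
for a, b, c in product((0, 1), repeat=3):
    recombined = (a and f1(b, c)) or ((not a) and f0(b, c))
    assert int(recombined) == majority3(a, b, c)
print("Shannon decomposition verified for all 8 input combinations")
```

Applying the decomposition recursively is what yields the fan-in-bounded network constructions the paper analyzes.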

  16. The neural network for tool-related cognition: An activation likelihood estimation meta-analysis of 70 neuroimaging contrasts

    PubMed Central

    Ishibashi, Ryo; Pobric, Gorana; Saito, Satoru; Lambon Ralph, Matthew A.

    2016-01-01

The ability to recognize and use a variety of tools is an intriguing human cognitive function. Multiple neuroimaging studies have investigated neural activations with various types of tool-related tasks. In the present paper, we reviewed tool-related neural activations reported in 70 contrasts from 56 neuroimaging studies and performed a series of activation likelihood estimation (ALE) meta-analyses to identify tool-related cortical circuits dedicated either to general tool knowledge or to task-specific processes. The results indicate the following: (a) Common, task-general processing regions for tools are located in the left inferior parietal lobule (IPL) and ventral premotor cortex; and (b) task-specific regions are located in superior parietal lobule (SPL) and dorsal premotor area for imagining/executing actions with tools and in bilateral occipito-temporal cortex for recognizing/naming tools. The roles of these regions in task-general and task-specific activities are discussed with reference to evidence from neuropsychology, experimental psychology and other neuroimaging studies. PMID:27362967

  17. Artificial Neural Networks and Instructional Technology.

    ERIC Educational Resources Information Center

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  18. Neural-Network Modeling Of Arc Welding

    NASA Technical Reports Server (NTRS)

    Anderson, Kristinn; Barnett, Robert J.; Springfield, James F.; Cook, George E.; Strauss, Alvin M.; Bjorgvinsson, Jon B.

    1994-01-01

Artificial neural networks considered for use in monitoring and controlling gas/tungsten arc-welding processes. Relatively simple network, using 4 welding equipment parameters as inputs, estimates 2 critical weld-bead parameters within 5 percent. Advantage is computational efficiency.

  19. Higher-Order Neural Networks Recognize Patterns

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Spirkovska, Lilly; Ochoa, Ellen

    1996-01-01

    Networks of higher order have enhanced capabilities to distinguish between different two-dimensional patterns and to recognize those patterns. Also enhanced capabilities to "learn" patterns to be recognized: "trained" with far fewer examples and, therefore, in less time than necessary to train comparable first-order neural networks.

  20. Orthogonal Patterns In A Binary Neural Network

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1991-01-01

    Report presents some recent developments in theory of binary neural networks. Subject matter relevant to associate (content-addressable) memories and to recognition of patterns - both of considerable importance in advancement of robotics and artificial intelligence. When probed by any pattern, network converges to one of stored patterns.
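The converge-to-a-stored-pattern behavior is easy to demonstrate with a small Hebbian binary network storing two mutually orthogonal ±1 patterns (a sketch of the general idea, not the report's exact model):

```python
def hebbian_matrix(patterns, n):
    """Hebbian outer-product weights W_ij = sum_p p_i p_j, zero diagonal."""
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, probe, steps=5):
    """Synchronous threshold updates: x_i <- sign(sum_j W_ij x_j)."""
    x = list(probe)
    n = len(x)
    for _ in range(steps):
        x = [1 if sum(w[i][j] * x[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return x

p1 = [1, 1, 1, 1, -1, -1, -1, -1]
p2 = [1, 1, -1, -1, 1, 1, -1, -1]   # orthogonal to p1
w = hebbian_matrix([p1, p2], 8)

noisy = list(p1)
noisy[0] = -noisy[0]                 # corrupt one bit of the probe
print(recall(w, noisy) == p1)        # prints True
```

With orthogonal stored patterns the cross-talk terms cancel, so even a one-step update restores the corrupted bit.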

  1. Target detection using multilayer feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Scherf, Alan V.; Scott, Peter A.

    1991-08-01

Multilayer feedforward neural networks have been integrated with conventional image processing techniques to form a hybrid target detection algorithm for use in the F/A-18 FLIR pod advanced air-to-air track-while-scan mode. The network has been trained to detect and localize small targets in infrared imagery. The comparative performance of this target detection technique is evaluated.

  2. Comparing artificial and biological dynamical neural networks

    NASA Astrophysics Data System (ADS)

    McAulay, Alastair D.

    2006-05-01

Modern computers can be made friendlier and otherwise improved by making them behave more like humans. Perhaps we can learn how to do this from biology, in which human brains evolved over a long period of time. Therefore, we first explain a commonly used biological neural network (BNN) model, the Wilson-Cowan neural oscillator, that has cross-coupled excitatory (positive) and inhibitory (negative) neurons. The two types of neurons are used for frequency-modulation communication between neurons, which provides immunity to electromagnetic interference. We then evolve, for the first time, an artificial neural network (ANN) to perform the same task. Two dynamical feed-forward artificial neural networks use cross-coupling feedback (like that in a flip-flop) to form an ANN nonlinear dynamic neural oscillator with the same equations as the Wilson-Cowan neural oscillator. Finally we show, through simulation, that the equations perform the basic neural threshold function, switching between stable zero output and a stable oscillation, that is, a stable limit cycle. Optical implementation with an injected laser diode and future research are discussed.
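A Wilson-Cowan oscillator of the kind described, cross-coupled excitatory and inhibitory populations, can be integrated with a few lines of Euler stepping. The parameters and sigmoid below are illustrative choices, not taken from the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def wilson_cowan(steps=2000, dt=0.05,
                 w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0,
                 p=1.25, q=0.0):
    """Cross-coupled excitatory (E) and inhibitory (I) populations:
    dE/dt = -E + S(w_ee*E - w_ei*I + P)
    dI/dt = -I + S(w_ie*E - w_ii*I + Q)"""
    e, i = 0.1, 0.1
    trace = []
    for _ in range(steps):
        de = -e + sigmoid(w_ee * e - w_ei * i + p)
        di = -i + sigmoid(w_ie * e - w_ii * i + q)
        e += dt * de
        i += dt * di
        trace.append((e, i))
    return trace

trace = wilson_cowan()
```

Whether the pair settles to a fixed point or a limit cycle depends on the gains and thresholds chosen; the sketch only demonstrates the bounded E-I dynamics.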

  3. An overview on development of neural network technology

    NASA Technical Reports Server (NTRS)

    Lin, Chun-Shin

    1993-01-01

The goal of this study was to obtain a bird's-eye view of current neural network technology and the neural network research activities in NASA. The purpose was twofold. One was to provide a reference document for NASA researchers who want to apply neural network techniques to solve their problems. The other was to report our survey results regarding NASA research activities and provide a view on what NASA is doing, what potential difficulties exist, and what NASA can or should do. In a ten-week study period, we interviewed ten neural network researchers in the Langley Research Center and sent out 36 survey forms to researchers at the Johnson Space Center, Lewis Research Center, Ames Research Center and Jet Propulsion Laboratory. We also sent out 60 similar forms to educators and corporation researchers to collect general opinions regarding this field. Twenty-eight survey forms, 11 from NASA researchers and 17 from outside, were returned. Survey results were reported in our final report. In the final report, we first provided an overview of the neural network technology. We reviewed ten neural network structures, discussed the applications in five major areas, and compared the analog, digital and hybrid electronic implementation of neural networks. In the second part, we summarized known NASA neural network research studies and reported the results of the questionnaire survey. Survey results show that most studies are still in the development and feasibility study stage. We compared the techniques, application areas, researchers' opinions on this technology, and many aspects between NASA and non-NASA groups. We also summarized their opinions on difficulties encountered. Applications are considered the top research priority by most researchers. Hardware development and learning algorithm improvement are the next. The lack of financial and management support is among the difficulties in research study. All researchers agree that the use of neural networks could result in

  4. Electronic device aspects of neural network memories

    NASA Technical Reports Server (NTRS)

    Lambe, J.; Moopenn, A.; Thakoor, A. P.

    1985-01-01

    The basic issues related to the electronic implementation of the neural network model (NNM) for content addressable memories are examined. A brief introduction to the principles of the NNM is followed by an analysis of the information storage of the neural network in the form of a binary connection matrix and the recall capability of such matrix memories based on a hardware simulation study. In addition, materials and device architecture issues involved in the future realization of such networks in VLSI-compatible ultrahigh-density memories are considered. A possible space application of such devices would be in the area of large-scale information storage without mechanical devices.

  5. Stimulus-dependent suppression of chaos in recurrent neural networks

    SciTech Connect

    Rajan, Kanaka; Abbott, L. F.; Sompolinsky, Haim

    2010-07-15

    Neuronal activity arises from an interaction between ongoing firing generated spontaneously by neural circuits and responses driven by external stimuli. Using mean-field analysis, we ask how a neural network that intrinsically generates chaotic patterns of activity can remain sensitive to extrinsic input. We find that inputs not only drive network responses, but they also actively suppress ongoing activity, ultimately leading to a phase transition in which chaos is completely eliminated. The critical input intensity at the phase transition is a nonmonotonic function of stimulus frequency, revealing a 'resonant' frequency at which the input is most effective at suppressing chaos even though the power spectrum of the spontaneous activity peaks at zero and falls exponentially. A prediction of our analysis is that the variance of neural responses should be most strongly suppressed at frequencies matching the range over which many sensory systems operate.
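The setup can be sketched as a random recurrent rate network, x' = -x + J tanh(x) + I cos(omega t), with coupling gain g above the chaos threshold. The following is an illustrative simulation of that class of model, not the authors' mean-field analysis; the network size, gain, and drive are arbitrary choices:

```python
import math
import random

def simulate(n=30, g=1.5, amp=1.0, omega=1.0, dt=0.05, steps=1500, seed=0):
    rng = random.Random(seed)
    # Coupling J_ij ~ N(0, g^2/n); gain g > 1 puts the autonomous network
    # in the chaotic regime in the mean-field limit.
    j = [[rng.gauss(0.0, g / math.sqrt(n)) for _ in range(n)] for _ in range(n)]
    x = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    trace = []
    for step in range(steps):
        drive = amp * math.cos(omega * step * dt)      # common sinusoidal input
        r = [math.tanh(v) for v in x]                  # firing rates
        x = [x[i] + dt * (-x[i] + sum(j[i][k] * r[k] for k in range(n)) + drive)
             for i in range(n)]
        trace.append(x[0])
    return trace

trace = simulate()
```

Sweeping `amp` and `omega` and measuring the trajectory variance is, in spirit, how one would probe the input-dependent suppression of chaos the abstract describes.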

  6. Analysis of optical neural stimulation effects on neural networks affected by neurodegenerative diseases

    NASA Astrophysics Data System (ADS)

    Zverev, M.; Fanjul-Vélez, F.; Salas-García, I.; Ortega-Quijano, N.; Arce-Diego, J. L.

    2016-03-01

The number of people at risk of developing a neurodegenerative disease increases as life expectancy grows due to medical advances. Multiple techniques have been developed to improve patients' condition, from pharmacological treatments to invasive-electrode approaches, but no definite cure has yet been discovered. In this work Optical Neural Stimulation (ONS) has been studied. ONS noninvasively stimulates the outer regions of the brain, mainly the neocortex. The relationship between the stimulation parameters and the therapeutic response is not totally clear. In order to find optimal ONS parameters to treat a particular neurodegenerative disease, mathematical modeling is necessary. Neural network models have been employed to study the neural spiking activity change induced by ONS. Healthy and pathological neocortical networks have been considered to study the stimulation required to restore normal activity. The network consisted of a group of interconnected neurons, which were assigned 2D spatial coordinates. The optical stimulation spatial profile was assumed to be Gaussian. The stimulation effects were modeled as synaptic current increases in the affected neurons, proportional to the stimulation fluence. Pathological networks were defined as the healthy ones with some neurons inactivated, presenting no synaptic conductance. Neurons' electrical activity was also studied in the frequency domain, focusing especially on changes in the spectral bands corresponding to brain waves. The complete model could be used to determine the optimal ONS parameters needed to achieve specific neural spiking patterns or the required local neural activity increase to treat particular neurodegenerative pathologies.

  7. Learning and diagnosing faults using neural networks

    NASA Technical Reports Server (NTRS)

    Whitehead, Bruce A.; Kiech, Earl L.; Ali, Moonis

    1990-01-01

    Neural networks have been employed for learning fault behavior from rocket engine simulator parameters and for diagnosing faults on the basis of the learned behavior. Two problems in applying neural networks to learning and diagnosing faults are (1) the complexity of the sensor data to fault mapping to be modeled by the neural network, which implies difficult and lengthy training procedures; and (2) the lack of sufficient training data to adequately represent the very large number of different types of faults which might occur. Methods are derived and tested in an architecture which addresses these two problems. First, the sensor data to fault mapping is decomposed into three simpler mappings which perform sensor data compression, hypothesis generation, and sensor fusion. Efficient training is performed for each mapping separately. Secondly, the neural network which performs sensor fusion is structured to detect new unknown faults for which training examples were not presented during training. These methods were tested on a task of fault diagnosis by employing rocket engine simulator data. Results indicate that the decomposed neural network architecture can be trained efficiently, can identify faults for which it has been trained, and can detect the occurrence of faults for which it has not been trained.

  8. A neural network approach to cloud classification

    NASA Technical Reports Server (NTRS)

    Lee, Jonathan; Weger, Ronald C.; Sengupta, Sailes K.; Welch, Ronald M.

    1990-01-01

It is shown that, using high-spatial-resolution data, very high cloud classification accuracies can be obtained with a neural network approach. A texture-based neural network classifier using only single-channel visible Landsat MSS imagery achieves an overall cloud identification accuracy of 93 percent. Cirrus can be distinguished from boundary layer cloudiness with an accuracy of 96 percent, without the use of an infrared channel. Stratocumulus is retrieved with an accuracy of 92 percent, cumulus at 90 percent. The use of the neural network does not improve cirrus classification accuracy. Rather, its main effect is in the improved separation between stratocumulus and cumulus cloudiness. While most cloud classification algorithms rely on linear parametric schemes, the present study is based on a nonlinear, nonparametric four-layer neural network approach. A three-layer neural network architecture, the nonparametric K-nearest neighbor approach, and the linear stepwise discriminant analysis procedure are compared. Notably, significantly higher accuracies are attained with the nonparametric approaches using only 20 percent of the database as training data, compared to 67 percent of the database in the linear approach.

  9. Neural network technologies for image classification

    NASA Astrophysics Data System (ADS)

    Korikov, A. M.; Tungusova, A. V.

    2015-11-01

We analyze the classes of problems with an objective necessity to use neural network technologies, i.e. representation and resolution problems in the neural network logical basis. Among these problems, image recognition takes an important place, in particular the classification of multi-dimensional data based on information about textural characteristics. These problems occur in aerospace and seismic monitoring, materials science, medicine and other fields. We reviewed different approaches to texture description: statistical, structural, and spectral. We developed a neural network technology for resolving a practical problem of cloud image classification for satellite snapshots from the spectroradiometer MODIS. The cloud texture is described by the statistical characteristics of the GLCM (Gray-Level Co-Occurrence Matrix) method. From the range of neural network models that might be applied for image classification, we chose the probabilistic neural network model (PNN) and developed an implementation which performs the classification of the main types and subtypes of clouds. We also experimentally chose the optimal architecture and parameters for the PNN model used for image classification.
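GLCM-based texture features of the kind used here are straightforward to compute. A minimal sketch for one offset and one statistic (contrast), purely illustrative and not the authors' implementation:

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence counts for pixel pairs at offset (dx, dy)."""
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[image[y][x]][image[ny][nx]] += 1
    return m

def contrast(m):
    """Sum of (i - j)^2 weighted by normalized co-occurrence frequency."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * m[i][j] / total
               for i in range(len(m)) for j in range(len(m)))

flat  = [[1, 1, 1, 1]] * 4                  # uniform texture
check = [[0, 3, 0, 3], [3, 0, 3, 0]] * 2    # high-contrast texture

print(contrast(glcm(flat)))    # 0.0
print(contrast(glcm(check)))   # 9.0
```

Real pipelines accumulate GLCMs over several offsets and angles and feed statistics like contrast, energy, and homogeneity to the classifier.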

  10. Using Neural Networks to Describe Tracer Correlations

    NASA Technical Reports Server (NTRS)

    Lary, D. J.; Mueller, M. D.; Mussa, H. Y.

    2003-01-01

Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). In this study a neural network using Quickprop learning and one hidden layer with eight nodes was able to reproduce the CH4-N2O correlation with a correlation coefficient of 0.9995. Such an accurate representation of tracer-tracer correlations allows more use to be made of long-term datasets to constrain chemical models, such as the dataset from the Halogen Occultation Experiment (HALOE), which has continuously observed CH4 (but not N2O) from 1991 to the present. The neural network Fortran code used is available for download.

  11. Dose-dependent effects of isoflurane on regional activity and neural network function: A resting-state fMRI study of 14 rhesus monkeys: An observational study.

    PubMed

    Lv, Peilin; Xiao, Yuan; Liu, Bin; Wang, Yuqing; Zhang, Xiang; Sun, Huaiqiang; Li, Fei; Yao, Li; Zhang, Wenjing; Liu, Lu; Gao, Xin; Wu, Min; Tang, Yingying; Chen, Qin; Gong, Qiyong; Lui, Su

    2016-01-12

    The dose-dependent effect of isoflurane on cerebral regional activity and functional connectivity (FC) in 14 rhesus monkeys was investigated using resting-state functional MRI. Amplitude of low-frequency fluctuations (ALFF) decreased in the cerebellum, visual cortex, and cortico-subcortical network when the isoflurane dose changed from 1.0 to 1.3 MAC. ALFF decreased in the arousal system, cerebellum, sensory, visual areas, cortico-subcortical network and default mode network and increased in the bilateral dorsal prefrontal cortices, frontal eye fields and motor-related areas from 1.0 to 1.6 MAC. FC of the default mode network, frontal-parietal, cortico-subcortical, motor, sensory, auditory and visual areas was reduced when isoflurane increased from 1.0 to 1.3 MAC. FC decreased in more widespread areas, especially in regions of cortico-subcortical networks and limbic systems, when isoflurane further increased from 1.0 to 1.6 MAC. Both dose-dependent decreased and increased ALFF were separately observed, while FC deteriorated as the anesthesia deepened. These results suggest that changes continue to occur past the loss of consciousness, and the dose-dependent effects of isoflurane are different with regard to regional function and neural network integration. PMID:26633103
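ALFF, the regional measure used in this study, is essentially the mean spectral amplitude of a voxel's time series within a low-frequency band (commonly 0.01-0.08 Hz). A naive-DFT sketch, purely illustrative (real pipelines use FFTs plus detrending and filtering):

```python
import math

def alff(signal, tr, f_lo=0.01, f_hi=0.08):
    """Mean spectral amplitude within [f_lo, f_hi] Hz for a time series
    sampled every `tr` seconds (naive DFT, fine for short series)."""
    n = len(signal)
    total, count = 0.0, 0
    for k in range(1, n // 2):
        freq = k / (n * tr)
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += math.sqrt(re * re + im * im)
            count += 1
    return total / count if count else 0.0

tr, n = 2.0, 200  # 200 volumes at TR = 2 s
slow = [math.sin(2 * math.pi * 0.05 * t * tr) for t in range(n)]  # 0.05 Hz, in band
fast = [math.sin(2 * math.pi * 0.20 * t * tr) for t in range(n)]  # 0.20 Hz, out of band

a_slow = alff(slow, tr)
a_fast = alff(fast, tr)
```

Only fluctuations inside the band contribute, so the slow oscillation yields a large ALFF while the fast one yields essentially zero.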

  12. Neural network training with global optimization techniques.

    PubMed

    Yamazaki, Akio; Ludermir, Teresa B

    2003-04-01

    This paper presents an approach using Simulated Annealing and Tabu Search for the simultaneous optimization of neural network architectures and weights. The problem considered is odor recognition in an artificial nose. Both methods produced networks with high classification performance and low complexity. Generalization was further improved by using the backpropagation algorithm for fine tuning. The combination of simple and traditional search methods has been shown to be very suitable for generating compact and efficient networks. PMID:12923920

  13. Fuzzy neural network with fast backpropagation learning

    NASA Astrophysics Data System (ADS)

    Wang, Zhiling; De Sario, Marco; Guerriero, Andrea; Mugnuolo, Raffaele

    1995-03-01

    Neural filters built on multilayer backpropagation networks have been shown to be able to realize almost any linear or non-linear filter. Because of the slowness of the networks' convergence, however, their fields of application have been limited. In this paper, fuzzy logic is introduced to adjust the learning rate and momentum parameter depending upon the output errors and training time. This greatly improves the convergence of the network. Test curves are shown to demonstrate the fast filters' performance.
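
    The fuzzy adjustment described reduces, in spirit, to rules of the form "if the error is falling, raise the learning rate; if it is rising, cut the rate and momentum". A crisp (non-fuzzy) stand-in for such rules, with made-up gain factors:

```python
def adapt(lr, momentum, err, prev_err, up=1.05, down=0.7):
    """Crisp stand-in for fuzzy rate control: grow the learning rate
    while the error falls; cut rate and momentum when it rises.
    The gain factors are illustrative, not taken from the paper."""
    if err < prev_err:
        return lr * up, min(momentum * 1.02, 0.9)
    return lr * down, momentum * 0.5

lr1, m1 = adapt(0.10, 0.50, err=0.8, prev_err=1.0)   # error fell
lr2, m2 = adapt(0.10, 0.50, err=1.2, prev_err=1.0)   # error rose
```

    A fuzzy version replaces the hard `if` with membership functions over the error change and training time, blending the two outcomes smoothly.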

  14. Stability of Stochastic Neutral Cellular Neural Networks

    NASA Astrophysics Data System (ADS)

    Chen, Ling; Zhao, Hongyong

    In this paper, we study a class of stochastic neutral cellular neural networks. By constructing a suitable Lyapunov functional and employing the nonnegative semi-martingale convergence theorem, we give some sufficient conditions ensuring the almost sure exponential stability of the networks. The results obtained are helpful for designing stable networks when stochastic noise is taken into consideration. Finally, two examples are provided to show the correctness of our analysis.

  15. Fire detection from hyperspectral data using neural network approach

    NASA Astrophysics Data System (ADS)

    Piscini, Alessandro; Amici, Stefania

    2015-10-01

    This study describes an application of artificial neural networks to the recognition of flaming areas using hyperspectral remote sensed data. Satellite remote sensing is considered an effective and safe way to monitor active fires for environmental and people safeguarding. Neural networks are an effective and consolidated technique for the classification of satellite images; moreover, once well trained, they prove to be very fast in the application stage for a rapid response. At flaming temperature, thanks to its low excitation energy (about 4.34 eV), potassium (K) ionizes with a unique doublet emission feature. This emission feature can be detected remotely, providing a detection map of active fire which in principle allows flaming areas of vegetation to be separated from smouldering ones even in the presence of smoke. For this study a normalised Advanced K Band Difference (AKBD) was applied to an airborne hyperspectral sensor covering a range of 400-970 nm with a resolution of 2.9 nm. A back-propagation neural network was used for the recognition of active fires in the hyperspectral image. The network was trained using all channels of the sensor as inputs and the corresponding AKBD indexes as target output. In order to evaluate its generalization capabilities, the neural network was validated on two independent data sets of hyperspectral images not used during the training phase. The validation results for the independent data sets had an overall accuracy around 100% for both images and few commission errors (0.1%), demonstrating the feasibility of estimating the presence of active fires using a neural network approach. Although the neural network classifier had few commission errors, the producer accuracies were lower due to the presence of omission errors. Image analysis revealed that these false negatives lie in the "smoky" portions of fire fronts, owing to the low intensity of the signal. The proposed method can be considered
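
    A normalised K-band difference of the kind described contrasts radiance near the potassium doublet (around 766-770 nm) with a nearby background window. The band centres, widths, and formula below are illustrative assumptions, not the published AKBD definition:

```python
import numpy as np

def akbd(spectrum, wavelengths, k_line=770.0, bg=780.0, width=3.0):
    """Hypothetical normalised K-band difference: contrast between the
    mean radiance near the potassium doublet (~766/770 nm) and a nearby
    background window. Band centres and widths here are illustrative."""
    k = spectrum[np.abs(wavelengths - k_line) <= width].mean()
    b = spectrum[np.abs(wavelengths - bg) <= width].mean()
    return (k - b) / (k + b)

wl = np.arange(400.0, 971.0, 2.9)          # sensor grid from the abstract
flaming = np.ones_like(wl)
flaming[np.abs(wl - 770.0) <= 3.0] += 2.0  # K emission line present
cool = np.ones_like(wl)                    # flat spectrum, no emission
```

    A flaming pixel then yields a clearly positive index while a lineless spectrum yields zero, which is what makes the index usable as a training target for the classifier.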

  16. Flexible body control using neural networks

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1992-01-01

    Progress is reported on the control of the Control Structures Interaction suitcase demonstrator (a flexible structure) using neural networks and fuzzy logic. It is concluded that while control by neural nets alone (i.e., allowing the net to design a controller with no human intervention) has yielded less than optimal results, a neural net trained to emulate the existing fuzzy logic controller does produce acceptable system responses for the initial conditions examined. Also, a neural net was found to be very successful in performing the emulation step necessary for the anticipatory fuzzy controller for the CSI suitcase demonstrator. The fuzzy neural hybrid, which exhibits good robustness and noise rejection properties, shows promise as a controller for practical flexible systems and should be further evaluated.

  17. Ca^2+ Dynamics and Propagating Waves in Neural Networks with Excitatory and Inhibitory Neurons.

    NASA Astrophysics Data System (ADS)

    Bondarenko, Vladimir E.

    2008-03-01

    The dynamics of neural spikes, intracellular Ca^2+, and Ca^2+ in intracellular stores were investigated both in isolated Chay neurons and in neurons coupled in networks. Three types of neural networks were studied: a purely excitatory neural network, with only excitatory (AMPA) synapses; a purely inhibitory neural network, with only inhibitory (GABA) synapses; and a hybrid neural network, with both AMPA and GABA synapses. In the hybrid neural network, the ratio of excitatory to inhibitory neurons was 4:1. For each case, we considered two types of connections, "all-with-all" and 20 connections per neuron. Each neural network contained 100 neurons with randomly distributed connection strengths. In the neural networks with "all-with-all" connections and AMPA/GABA synapses, an increase in average synaptic strength yielded bursting activity with an increased/decreased number of spikes per burst. The neural bursts and Ca^2+ transients were synchronous at relatively large connection strengths despite the random connection strengths. Simulations of the neural networks with 20 connections per neuron and only AMPA synapses showed synchronous oscillations, while the neural networks with GABA or hybrid synapses generated propagating waves of membrane potential and Ca^2+ transients.

  18. Continuous neural network with windowed Hebbian learning.

    PubMed

    Fotouhi, M; Heidari, M; Sharifitabar, M

    2015-06-01

    We introduce an extension of the classical neural field equation where the dynamics of the synaptic kernel satisfies the standard Hebbian type of learning (synaptic plasticity). Here, a continuous network in which changes in the weight kernel occur in a specified time window is considered. A novelty of this model is that it admits synaptic weight decrease as well as the usual weight increase resulting from correlated activity. The resulting equation leads to a delay-type rate model for which the existence and stability of solutions such as the rest state, bumps, and traveling fronts are investigated. Some relations between the length of the time window and the bump width are derived. In addition, the effect of the delay parameter on the stability of solutions is shown. Numerical simulations of the solutions and their stability are also presented. PMID:25677526
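
    A crude discrete caricature of windowed Hebbian plasticity (the paper's model is a continuous neural field equation): the weight change follows the pre/post correlation averaged over a time window, with an assumed threshold `theta` so that weights can decrease as well as increase.

```python
import numpy as np

def hebbian_window(u, v, window, lr=0.1, theta=0.25):
    """Discrete sketch of windowed Hebbian plasticity: the weight change
    tracks the pre/post correlation averaged over the last `window`
    steps, minus a threshold theta so weights can also decrease."""
    w = 0.0
    trace = []
    for t in range(len(u)):
        lo = max(0, t - window + 1)
        corr = np.mean(u[lo:t + 1] * v[lo:t + 1])   # windowed correlation
        w += lr * (corr - theta)
        trace.append(w)
    return np.array(trace)

steps = 200
correlated = hebbian_window(np.ones(steps), np.ones(steps), window=10)
uncorrelated = hebbian_window(np.ones(steps), np.zeros(steps), window=10)
```

    With persistently correlated activity the weight grows; with uncorrelated activity it decays, capturing the bidirectional plasticity the abstract highlights.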

  19. Neural networks in support of manned space

    NASA Technical Reports Server (NTRS)

    Werbos, Paul J.

    1989-01-01

    Many lobbyists in Washington have argued that artificial intelligence (AI) is an alternative to manned space activity. In actuality, this is the opposite of the truth, especially as regards artificial neural networks (ANNs), the form of AI which has the greatest hope of mimicking human abilities in learning, interfacing with sensors and actuators, flexibility, and balanced judgement. ANNs, their relation to expert systems (the more traditional form of AI), and the limitations of both technologies are briefly reviewed. A few highlights of recent work on ANNs, including an NSF-sponsored workshop on ANNs for control applications, are given. Current thinking on ANNs for use in certain key areas (the National Aerospace Plane, teleoperation, the control of large structures, fault diagnostics, and docking) which may be crucial to the long term future of man in space is discussed.

  20. Identification of the connections in biologically inspired neural networks

    NASA Technical Reports Server (NTRS)

    Demuth, H.; Leung, K.; Beale, M.; Hicklin, J.

    1990-01-01

    We developed an identification method to find the strength of the connections between neurons from their behavior in small biologically-inspired artificial neural networks. That is, given the network external inputs and the temporal firing pattern of the neurons, we can calculate a solution for the strengths of the connections between neurons and the initial neuron activations, if a solution exists. The method determines directly whether there is a solution to a particular neural network problem; no training of the network is required. It should be noted that this is a first pass at the solution of a difficult problem. The neuron and network models chosen are related to biology but do not contain all of its complexities, some of which we hope to add to the model in future work. A variety of new results have been obtained. First, the method has been tailored to produce connection weight matrix solutions for networks with important features of biological neural (bioneural) networks. Second, a computationally efficient method of finding a robust central solution has been developed. This latter method also enables us to find the most consistent solution in the presence of noisy data. Prospects of applying our method to identify bioneural network connections are exciting because such connections are almost impossible to measure in the laboratory. Knowledge of such connections would facilitate an understanding of bioneural networks and would allow the construction of the electronic counterparts of bioneural networks on very large scale integrated (VLSI) circuits.
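
    The identification idea, activities and external inputs known, connections solved for directly with no training, can be illustrated on a linear rate model (the paper's neuron model is biologically richer). Each time step contributes one linear equation in the unknown weights, so least squares recovers them exactly when the data have full rank:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth connection matrix for a 4-neuron linear-rate toy network.
W_true = rng.normal(scale=0.4, size=(4, 4))
ext = rng.normal(size=(50, 4))            # known external inputs

# Simulate the "firing pattern": x(t+1) = W x(t) + ext(t).
x = np.zeros((51, 4))
for t in range(50):
    x[t + 1] = W_true @ x[t] + ext[t]

# Identification: with activities and inputs known, the connections
# satisfy x(t+1) - ext(t) = W x(t); solve for W by least squares.
A = x[:-1]                                # states, one row per time step
B = x[1:] - ext                           # the part W must account for
W_est = np.linalg.lstsq(A, B, rcond=None)[0].T
```

    With noisy or threshold-type (spiking) observations the system becomes over- or under-determined, which is where the paper's "robust central solution" comes in.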

  1. Prediction of molecular-dynamics simulation results using feedforward neural networks: Reaction of a C2 dimer with an activated diamond (100) surface

    NASA Astrophysics Data System (ADS)

    Agrawal, Paras M.; Samadh, Abdul N. A.; Raff, Lionel M.; Hagan, Martin T.; Bukkapatnam, Satish T.; Komanduri, Ranga

    2005-12-01

    A new approach involving neural networks combined with molecular dynamics has been used for the determination of reaction probabilities as a function of various input parameters for the reactions associated with the chemical-vapor deposition of carbon dimers on a diamond (100) surface. The data generated by the simulations have been used to train and test neural networks. The probabilities of chemisorption, scattering, and desorption as a function of input parameters, such as rotational energy, translational energy, and direction of the incident velocity vector of the carbon dimer, have been considered. The very good agreement obtained between the predictions of neural networks and those provided by molecular dynamics and the fact that, after training the network, the determination of the interpolated probabilities as a function of various input parameters involves only the evaluation of simple analytical expressions rather than computationally intensive algorithms show that neural networks are extremely powerful tools for interpolating the probabilities and rates of chemical reactions. We also find that a neural network fits the underlying trends in the data rather than the statistical variations present in the molecular-dynamics results. Consequently, neural networks can also provide a computationally convenient means of averaging the statistical variations inherent in molecular-dynamics calculations. In the present case the application of this method is found to reduce the statistical uncertainty in the molecular-dynamics results by about a factor of 3.5.

  2. Can neural networks compete with process calculations

    SciTech Connect

    Blaesi, J.; Jensen, B.

    1992-12-01

    Neural networks have been called a real alternative to rigorous theoretical models. A theoretical model for the calculation of refinery coker naphtha end point and coker furnace oil 90% point was already in place on the combination tower of a coking unit, and considerable data had been collected on the theoretical model during the commissioning phase and benefit analysis of the project. A neural net developed for the coker fractionator has equalled the accuracy of theoretical models and shown the capability to handle normal operating conditions. One disadvantage of a neural network is the amount of data needed to create a good model: anywhere from 100 to thousands of cases are needed. Overall, the correlation between the theoretical and neural net models for both the coker naphtha end point and the coker furnace oil 90% point was about 0.80, and the average deviation was about 4 degrees. This indicates that the neural net model was at least as capable as the theoretical model in calculating inferred properties. 3 figs.
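
    The two agreement figures quoted, a correlation of about 0.80 and an average deviation of about 4 degrees, correspond to the Pearson correlation and the mean absolute deviation between the two models' outputs. A small sketch with hypothetical end-point readings:

```python
import numpy as np

def agreement(theoretical, neural):
    """Pearson correlation and mean absolute deviation between two
    inferred-property series, the two figures quoted in the abstract."""
    theoretical = np.asarray(theoretical, dtype=float)
    neural = np.asarray(neural, dtype=float)
    r = np.corrcoef(theoretical, neural)[0, 1]
    avg_dev = np.mean(np.abs(theoretical - neural))
    return r, avg_dev

# Hypothetical naphtha end-point readings (degrees) from the two models.
r, dev = agreement([300.0, 310.0, 305.0, 320.0],
                   [304.0, 306.0, 309.0, 316.0])
```

    Note the two statistics answer different questions: correlation measures whether the models move together, while the average deviation measures how far apart their values sit.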

  3. Artificial neural networks for small dataset analysis.

    PubMed

    Pasini, Antonello

    2015-05-01

    Artificial neural networks (ANNs) are usually considered as tools which can help to analyze cause-effect relationships in complex systems within a big-data framework. On the other hand, health sciences undergo complexity more than any other scientific discipline, and in this field large datasets are seldom available. In this situation, I show how a particular neural network tool, which is able to handle small datasets of experimental or observational data, can help in identifying the main causal factors leading to changes in some variable which summarizes the behaviour of a complex system, for instance the onset of a disease. A detailed description of the neural network tool is given and its application to a specific case study is shown. Recommendations for a correct use of this tool are also supplied. PMID:26101654

  4. Classification of radar clutter using neural networks.

    PubMed

    Haykin, S; Deng, C

    1991-01-01

    A classifier that incorporates both preprocessing and postprocessing procedures as well as a multilayer feedforward network (based on the back-propagation algorithm) in its design to distinguish between several major classes of radar returns including weather, birds, and aircraft is described. The classifier achieves an average classification accuracy of 89% on generalization for data collected during a single scan of the radar antenna. The procedures of feature selection for neural network training, the classifier design considerations, the learning algorithm development, the implementation, and the experimental results of the neural clutter classifier, which is simulated on a Warp systolic computer, are discussed. A comparative evaluation of the multilayer neural network with a traditional Bayes classifier is presented. PMID:18282874

  5. Web traffic prediction with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Gluszek, Adam; Kekez, Michal; Rudzinski, Filip

    2005-02-01

    The main aim of the paper is to present an application of artificial neural networks to web traffic prediction. First, the general problem of time series modelling and forecasting is briefly described. Next, the details of building models of dynamic processes with neural networks are discussed. Here, determination of the model structure in terms of its inputs and outputs is the most important question, because this structure is a rough approximation of the dynamics of the modelled process. The following section of the paper presents the results obtained by applying an artificial neural network (a classical multilayer perceptron trained with the backpropagation algorithm) to real-world web traffic prediction. Finally, we discuss the results, describe the weak points of the presented method, and propose some alternative approaches.
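
    Determining the model structure "in terms of its inputs and outputs" for a traffic series usually means choosing how many lagged values feed the perceptron. A sliding-window construction, with hypothetical hourly hit counts:

```python
import numpy as np

def make_windows(series, n_lags):
    """Turn a traffic series into (input, target) pairs: each input is
    the n_lags past values, the target is the next value."""
    X = np.array([series[i:i + n_lags]
                  for i in range(len(series) - n_lags)])
    y = np.array(series[n_lags:])
    return X, y

hits = [120, 135, 150, 140, 160, 175, 170]   # hypothetical hourly hits
X, y = make_windows(hits, n_lags=3)
```

    The choice of `n_lags` is exactly the structure question the paper raises: too few lags and the model cannot capture the dynamics, too many and it overfits.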

  7. Creation of a tablet database containing several active ingredients and prediction of their pharmaceutical characteristics based on ensemble artificial neural networks.

    PubMed

    Takagaki, Keisuke; Arai, Hiroaki; Takayama, Kozo

    2010-10-01

    A tablet database containing several active ingredients for a standard tablet formulation was created. Tablet tensile strength (TS) and disintegration time (DT) were measured before and after storage for 30 days at 40 degrees C and 75% relative humidity. An ensemble artificial neural network (EANN) was used to predict responses to differences in quantities of excipients and physical-chemical properties of active ingredients in tablets. Most classical neural networks involve a tedious trial and error approach, but EANNs automatically determine basal key parameters, which ensure that an optimal structure is rapidly obtained. We compared the predictive abilities of EANNs in which the following kinds of training algorithms were used: linear, radial basis function, general regression (GR), and multilayer perceptron. The GR EANN predicted pharmaceutical responses such as TS and DT most accurately, as evidenced by high correlation coefficients in a leave-some-out cross-validation procedure. When used in conjunction with a tablet database, the GR EANN is capable of identifying acceptable candidate tablet formulations. PMID:20310024

  8. Numerical analysis of modeling based on improved Elman neural network.

    PubMed

    Jie, Shao; Li, Wang; WeiSong, Zhao; YaQin, Zhong; Malekian, Reza

    2014-01-01

    A modeling based on the improved Elman neural network (IENN) is proposed to analyze the nonlinear circuits with the memory effect. The hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared error (SSE) varying with the number of hidden neurons and the iteration step are studied to determine the number of the hidden layer neurons. Simulation results of the half-bridge class-D power amplifier (CDPA) with two-tone signal and broadband signals as input have shown that the proposed behavioral modeling can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with Volterra-Laguerre (VL) model, Chebyshev neural network (CNN) model, and basic Elman neural network (BENN) model, the proposed model has better performance. PMID:25054172
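
    The Chebyshev orthogonal basis functions that replace the sigmoids in the hidden layer satisfy the recurrence T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x). The sketch below evaluates the basis only, leaving out the Elman recurrent loop:

```python
import numpy as np

def chebyshev_basis(x, n):
    """First n Chebyshev polynomials T_0..T_{n-1} evaluated at x via the
    recurrence T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x). In the IENN model
    these serve as the hidden-layer activation functions."""
    T = [np.ones_like(x), x]
    for _ in range(2, n):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[:n])

vals = chebyshev_basis(np.array([0.5]), 4)   # T_0..T_3 at x = 0.5
```

    Orthogonality of the basis on [-1, 1] is what lets a small hidden layer capture the memory-bearing nonlinearity that a sigmoid layer of the same size struggles with.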

  9. Dynamical analysis of uncertain neural networks with multiple time delays

    NASA Astrophysics Data System (ADS)

    Arik, Sabri

    2016-02-01

    This paper investigates the robust stability problem for dynamical neural networks in the presence of time delays and norm-bounded parameter uncertainties with respect to the class of non-decreasing, non-linear activation functions. By employing the Lyapunov stability and homeomorphism mapping theorems together, a new delay-independent sufficient condition is obtained for the existence, uniqueness and global asymptotic stability of the equilibrium point for the delayed uncertain neural networks. The condition obtained for robust stability establishes a matrix-norm relationship between the network parameters of the neural system, which can be easily verified by using properties of the class of the positive definite matrices. Some constructive numerical examples are presented to show the applicability of the obtained result and its advantages over the previously published corresponding literature results.
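
    Robust-stability studies of this kind typically analyse a delayed Hopfield-type model; the following is a representative sketch of the network class and uncertainty structure, not necessarily the paper's exact formulation:

```latex
\dot{x}_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t))
             + \sum_{j=1}^{n} b_{ij} f_j\bigl(x_j(t-\tau_{ij})\bigr) + u_i,
\qquad c_i > 0,
```

    where the activations are non-decreasing and slope-bounded, $0 \le \frac{f_j(u)-f_j(v)}{u-v} \le \ell_j$ for $u \ne v$, and the norm-bounded uncertainty perturbs the interconnection matrices, e.g. $A = A_0 + \Delta A$ with $\|\Delta A\| \le \alpha$.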

  10. Neural Network Control of a Magnetically Suspended Rotor System

    NASA Technical Reports Server (NTRS)

    Choi, Benjamin; Brown, Gerald; Johnson, Dexter

    1997-01-01

    Magnetic bearings offer significant advantages because of their noncontact operation, which can reduce maintenance. Higher speeds, no friction, no lubrication, weight reduction, precise position control, and active damping make them far superior to conventional contact bearings. However, there are technical barriers that limit the application of this technology in industry. One of them is the need for a nonlinear controller that can overcome the system nonlinearity and uncertainty inherent in magnetic bearings. This paper discusses the use of a neural network as a nonlinear controller that circumvents system nonlinearity. A neural network controller was well trained and successfully demonstrated on a small magnetic bearing rig. This work demonstrated the feasibility of using a neural network to control nonlinear magnetic bearings and systems with unknown dynamics.

  11. Circuit design and exponential stabilization of memristive neural networks.

    PubMed

    Wen, Shiping; Huang, Tingwen; Zeng, Zhigang; Chen, Yiran; Li, Peng

    2015-03-01

    This paper addresses the problem of circuit design and global exponential stabilization of memristive neural networks with time-varying delays and general activation functions. Based on the Lyapunov-Krasovskii functional method and free weighting matrix technique, a delay-dependent criteria for the global exponential stability and stabilization of memristive neural networks are derived in form of linear matrix inequalities (LMIs). Two numerical examples are elaborated to illustrate the characteristics of the results. It is noteworthy that the traditional assumptions on the boundness of the derivative of the time-varying delays are removed. PMID:25481670

  12. Autonomous robot behavior based on neural networks

    NASA Astrophysics Data System (ADS)

    Grolinger, Katarina; Jerbic, Bojan; Vranjes, Bozo

    1997-04-01

    The purpose of an autonomous robot is to solve various tasks while adapting its behavior to a variable environment; it is expected to navigate much like a human would, including handling uncertain and unexpected obstacles. To achieve this the robot has to be able to find solutions to unknown situations, to learn from experience (that is, to acquire action procedures together with the corresponding knowledge of the work space structure), and to recognize its working environment. The planning of intelligent robot behavior presented in this paper implements reinforcement learning based on strategic and random attempts for finding solutions, and a neural network approach for memorizing and recognizing the work space structure (the structural assignment problem). Some well known neural networks based on unsupervised learning are considered with regard to the structural assignment problem, and an adaptive fuzzy shadowed neural network is developed. It has an additional shadowed hidden layer, a specific learning rule, and an initialization phase. The developed network combines the advantages of networks based on Adaptive Resonance Theory and, using the shadowed hidden layer, provides the ability to recognize obstacles lightly translated or rotated in any direction.

  13. Slow dynamics in features of synchronized neural network responses

    PubMed Central

    Haroush, Netta; Marom, Shimon

    2015-01-01

    In this report, trial-to-trial variations in the synchronized responses of neural networks are explored over time scales of minutes in ex-vivo large scale cortical networks. We show that sub-second measures of the individual synchronous response, namely its latency and decay duration, are related to minutes-scale network response dynamics. Network responsiveness is reflected as residency in, or shifting amongst, areas of the latency-decay plane. The different sensitivities of latency and decay durations to synaptic blockers imply that these two measures reflect aspects of inhibitory and excitatory activities. Taken together, the data suggest that trial-to-trial variations in the synchronized responses of neural networks may be related to the effective excitation-inhibition ratio being a dynamic variable over time scales of minutes. PMID:25926787

  14. Experimental fault characterization of a neural network

    NASA Technical Reports Server (NTRS)

    Tan, Chang-Huong

    1990-01-01

    The effects of a variety of faults on a neural network are quantified via simulation. The neural network consists of a single-layered clustering network and a three-layered classification network. The percentage of vectors mistagged by the clustering network, the percentage of vectors misclassified by the classification network, the time taken for the network to stabilize, and the output values are all measured. The results show that both transient and permanent faults have a significant impact on the performance of the measured network. The corresponding mistag and misclassification percentages are typically within 5 to 10 percent of each other. The average mistag percentage and the average misclassification percentage are both about 25 percent; after relearning, the percentage of misclassifications is reduced to 9 percent. In addition, transient faults are found to cause the network to become increasingly unstable as the duration of a transient is increased. The impact of link faults is relatively insignificant in comparison with node faults (1 versus 19 percent misclassified after relearning). There is a linear increase in the mistag and misclassification percentages with decreasing hardware redundancy, and the mistag and misclassification percentages decrease linearly with increasing network size.
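
    Fault injection of the kind measured there, disabling a link and re-measuring the misclassification percentage, can be sketched on a deliberately tiny one-layer classifier (the paper's network is a clustering-plus-classification system):

```python
import numpy as np

rng = np.random.default_rng(3)

# A "trained" single-layer classifier on a separable toy problem.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0)
W = np.array([5.0, 0.0])          # learned weights: decide on feature 0

def accuracy(W):
    """Fraction of inputs classified correctly by the linear unit."""
    return np.mean((X @ W > 0) == y)

# Permanent link fault: one weight stuck at zero, as in stuck-at models;
# the drop in accuracy is the misclassification cost of the fault.
healthy = accuracy(W)
W_fault = W.copy()
W_fault[0] = 0.0                  # critical link stuck at zero
faulty = accuracy(W_fault)
```

    Repeating this over every link and node, and over transient (restored) versus permanent faults, yields exactly the mistag/misclassification statistics the study reports.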

  15. Artificial Neural Networks for Modeling Knowing and Learning in Science.

    ERIC Educational Resources Information Center

    Roth, Wolff-Michael

    2000-01-01

    Advocates artificial neural networks as models for cognition and development. Provides an example of how such models work in the context of a well-known Piagetian developmental task and school science activity: balance beam problems. (Contains 59 references.) (Author/WRM)

  16. Successful neural network projects at the Idaho National Engineering Laboratory

    SciTech Connect

    Cordes, G.A.

    1991-01-01

    This paper presents recent and current projects at the Idaho National Engineering Laboratory (INEL) that research and apply neural network technology. The projects are summarized in the paper and their direct application to space reactor power and propulsion systems activities is discussed. 9 refs., 10 figs., 3 tabs.

  17. Neural network guided search control in partial order planning

    SciTech Connect

    Zimmerman, T.

    1996-12-31

    The development of efficient search control methods is an active research topic in the field of planning. Investigation of a planning program integrated with a neural network (NN) that assists in search control is underway, and has produced promising preliminary results.

  18. A neural network with modular hierarchical learning

    NASA Technical Reports Server (NTRS)

    Baldi, Pierre F. (Inventor); Toomarian, Nikzad (Inventor)

    1994-01-01

    This invention provides a new hierarchical approach for supervised neural learning of time dependent trajectories. The modular hierarchical methodology leads to architectures which are more structured than fully interconnected networks. The networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamic effects. The advantages include the sparsity of units and connections and the modular organization; a further advantage is that learning is much more circumscribed than in fully interconnected systems. The present invention is embodied by a neural network including a plurality of neural modules, each having a pre-established performance capability, wherein each neural module has an output outputting present results of the performance capability and an input for changing the present results of the performance capability. For pattern recognition applications, the performance capability may be an oscillation capability producing a repeating wave pattern as the present results. In the preferred embodiment, each of the plurality of neural modules includes a pre-established capability portion and a performance adjustment portion connected to control the pre-established capability portion.

  19. The use of neural networks for the determination of the signal modulation frequency from the firing activity pattern in the auditory neurons of the frog

    NASA Astrophysics Data System (ADS)

    Bibikov, N. G.; Grigor'ev, D. Yu.

    2007-11-01

    A two-layer back-propagation neural network was used to determine the modulation frequency of tonal signals from the firing patterns of single neurons located in the cochlear nuclei and the torus semicircularis of the grass frog (Rana t. temporaria). The sum of several single responses of a neuron to an amplitude-modulated stimulus was used as the input to the neural network. The number of inputs corresponded to the number of time readouts of the summed response (usually 60), and the number of output elements corresponded to the number of modulation frequencies to be distinguished (from 3 to 15). In the case of good synchronization of the input firing activity with the signal envelope, the classification was successful even when the training and classification were performed with individual responses. Increasing the number of summed responses to 10-20 led to a simplification of the training procedure. The results are discussed in the context of the formation of periodicity detectors at the upper levels of the auditory pathway in vertebrates.

  20. Development of programmable artificial neural networks

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J.

    1993-01-01

    Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time consuming and unpredictable process. A general method is being developed to mate the adaptability of the ANN with the speed and precision of the digital computer. This method was successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.

  1. Auto-associative nanoelectronic neural network

    SciTech Connect

    Nogueira, C. P. S. M.; Guimarães, J. G.

    2014-05-15

    In this paper, an auto-associative neural network using single-electron tunneling (SET) devices is proposed and simulated at low temperature. The nanoelectronic auto-associative network is able to converge to a stable state previously stored during training. Recognition of a pattern involves decreasing the energy of the input state until it reaches a local energy minimum, which corresponds to one of the stored patterns.
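
    The energy-descent recall described above is conventionally illustrated with a Hopfield-style network; the sketch below uses ordinary floating-point arithmetic and Hebbian storage, not the single-electron devices of the paper.

```python
import numpy as np

# Hebbian storage and asynchronous energy descent in a small Hopfield-style
# auto-associative network. Asynchronous sign updates can only lower (or keep)
# the energy, so recall settles into a local minimum -- here, a stored pattern.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
n = patterns.shape[1]
W = (patterns.T @ patterns) / n          # Hebbian weight matrix
np.fill_diagonal(W, 0)

def energy(s):
    return -0.5 * s @ W @ s

def recall(s, sweeps=5):
    s = s.copy()
    for _ in range(sweeps):
        for i in range(n):               # asynchronous unit-by-unit updates
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

noisy = patterns[0].copy()
noisy[:2] *= -1                          # corrupt two bits of a stored pattern
restored = recall(noisy)
```

    Starting from the corrupted state, each update is downhill in the energy, so `restored` ends up equal to the stored pattern.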

  2. Digital Neural Networks for New Media

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Lambert; Malki, Suleyman

    Neural Networks perform computationally intensive tasks offering smart solutions for many new media applications. A number of analog and mixed digital/analog implementations have been proposed to smooth the algorithmic gap. But gradually, the digital implementation has become feasible, and the dedicated neural processor is on the horizon. A notable example is the Cellular Neural Network (CNN). The analog direction has matured for low-power, smart vision sensors; the digital direction is gradually being shaped into an IP-core for algorithm acceleration, especially for use in FPGA-based high-performance systems. The chapter discusses the next step towards a flexible and scalable multi-core engine using Application-Specific Integrated Processors (ASIP). This topographic engine can serve many new media tasks, as illustrated by novel applications in Homeland Security. We conclude with a view on the CNN kaleidoscope for the year 2020.

  3. Optoelectronic Integrated Circuits For Neural Networks

    NASA Technical Reports Server (NTRS)

    Psaltis, D.; Katz, J.; Kim, Jae-Hoon; Lin, S. H.; Nouhi, A.

    1990-01-01

    Many threshold devices placed on single substrate. Integrated circuits containing optoelectronic threshold elements developed for use as planar arrays of artificial neurons in research on neural-network computers. Mounted with volume holograms recorded in photorefractive crystals serving as dense arrays of variable interconnections between neurons.

  4. Psychometric Measurement Models and Artificial Neural Networks

    ERIC Educational Resources Information Center

    Sese, Albert; Palmer, Alfonso L.; Montano, Juan J.

    2004-01-01

    The study of measurement models in psychometrics by means of dimensionality reduction techniques such as Principal Components Analysis (PCA) is a very common practice. In recent times, an upsurge of interest in the study of artificial neural networks apt to computing a principal component extraction has been observed. Despite this interest, the…

  5. Neural network application to comprehensive engine diagnostics

    NASA Technical Reports Server (NTRS)

    Marko, Kenneth A.

    1994-01-01

    We have previously reported on the use of neural networks for detection and identification of faults in complex microprocessor-controlled powertrain systems. The data analyzed in those studies consisted of the full spectrum of signals passing between the engine and the real-time microprocessor controller. The specific task of the classification system was to classify system operation as nominal or abnormal and to identify the fault present. The primary concern in earlier work was the identification of faults in sensors or actuators in the powertrain system as it was exercised over its full operating range. The use of data from a variety of sources, each contributing some potentially useful information to the classification task, is commonly referred to as sensor fusion and typifies the type of problem successfully addressed using neural networks. In this work we explore the application of neural networks to a different diagnostic problem: the diagnosis of faults in newly manufactured engines and the utility of neural networks for process control.

  6. Nonlinear Time Series Analysis via Neural Networks

    NASA Astrophysics Data System (ADS)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with time series analysis based on neural networks for effective pattern recognition in the forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)]. Our goal is to find and recognize important patterns which repeatedly appear in the market history and to adapt our trading system's behaviour based on them.

  7. Negative transfer problem in neural networks

    NASA Astrophysics Data System (ADS)

    Abunawass, Adel M.

    1992-07-01

    Harlow (1949) observed that when human subjects were trained to perform simple discrimination tasks over a sequence of successive training sessions (trials), their performance improved as a function of the successive sessions. Harlow called this phenomenon `learning-to-learn.' The subjects acquired knowledge and improved their ability to learn in future training sessions. It seems that previous training sessions contribute positively to the current one. Abunawass & Maki (1989) observed that when a neural network (using the back-propagation model) is trained over successive sessions, the performance and learning ability of the network degrade as a function of the training sessions. In some cases this leads to a complete paralysis of the network. Abunawass & Maki called this phenomenon the `negative transfer' problem, since previous training sessions contribute negatively to the current one. The effect of the negative transfer problem is in clear contradiction to that reported by Harlow for human subjects. Since the ability to model human cognition and learning is one of the most important goals (and claims) of neural networks, the negative transfer problem represents a clear limitation to this ability. This paper describes a new neural network sequential learning model known as Adaptive Memory Consolidation, in which the network uses its past learning experience to enhance its future learning ability. Adaptive Memory Consolidation has led to the elimination and reversal of the effect of the negative transfer problem, producing a `positive transfer' effect similar to Harlow's learning-to-learn phenomenon.

  8. Foetal ECG recovery using dynamic neural networks.

    PubMed

    Camps-Valls, Gustavo; Martínez-Sober, Marcelino; Soria-Olivas, Emilio; Magdalena-Benedito, Rafael; Calpe-Maravilla, Javier; Guerrero-Martínez, Juan

    2004-07-01

    Non-invasive electrocardiography has proven to be a very interesting method for obtaining information about the foetus state and thus to assure its well-being during pregnancy. One of the main applications in this field is foetal electrocardiogram (ECG) recovery by means of automatic methods. Evident problems found in the literature are the limited number of available registers, the lack of performance indicators, and the limited use of non-linear adaptive methods. In order to circumvent these problems, we first introduce the generation of synthetic registers and discuss the influence of different kinds of noise on the modelling. Second, a method which is based on numerical (correlation coefficient) and statistical (analysis of variance, ANOVA) measures allows us to select the best recovery model. Finally, finite impulse response (FIR) and gamma neural networks are included in the adaptive noise cancellation (ANC) scheme in order to provide highly non-linear, dynamic capabilities to the recovery model. Neural networks are benchmarked with classical adaptive methods such as the least mean squares (LMS) and the normalized LMS (NLMS) algorithms in simulated and real registers and some conclusions are drawn. For synthetic registers, the most determinant factor in the identification of the models is the foetal-maternal signal-to-noise ratio (SNR). In addition, as the electromyogram contribution becomes more relevant, neural networks clearly outperform the LMS-based algorithm. From the ANOVA test, we found statistical differences between LMS-based models and neural models when complex situations (high foetal-maternal and foetal-noise SNRs) were present. These conclusions were confirmed after doing robustness tests on synthetic registers, visual inspection of the recovered signals and calculation of the recognition rates of foetal R-peaks for real situations. Finally, the best compromise between model complexity and outcomes was provided by the FIR neural network. Both
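
    As a baseline for the benchmarks mentioned above, the LMS branch of an adaptive-noise-cancellation scheme fits in a few lines. The signals here are synthetic stand-ins (a sinusoidal "foetal" component plus a maternal component generated by a known FIR path), chosen so the behaviour is easy to verify; they are not the registers used in the paper.

```python
import numpy as np

# Minimal LMS adaptive noise cancellation: the "abdominal" signal is foetal
# ECG plus a maternal component correlated with a "thoracic" reference; LMS
# learns the reference-to-abdomen path and subtracts it, leaving the foetal
# part in the error signal.
rng = np.random.default_rng(1)
n = 4000
foetal   = np.sin(2 * np.pi * 2.3 * np.arange(n) / 500)    # toy foetal signal
ref      = rng.standard_normal(n)                          # maternal reference
h_true   = np.array([0.8, -0.3, 0.1])                      # unknown FIR path
maternal = np.convolve(ref, h_true)[:n]
abdomen  = foetal + maternal

def lms_cancel(d, x, taps=3, mu=0.02):
    # Returns the error signal e = d - w.x (the recovered foetal estimate).
    w = np.zeros(taps)
    e = np.zeros(len(d))
    for k in range(taps, len(d)):
        xk = x[k - taps + 1:k + 1][::-1]     # most recent sample first
        e[k] = d[k] - w @ xk
        w += mu * e[k] * xk                  # LMS weight update
    return e, w

recovered, w = lms_cancel(abdomen, ref)
```

    Once the weights converge to the true path, the error signal is essentially the foetal component; the nonlinear FIR and gamma networks in the paper generalize this cancellation path to dynamic, nonlinear mappings.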

  9. Optimal input sizes for neural network de-interlacing

    NASA Astrophysics Data System (ADS)

    Choi, Hyunsoo; Seo, Guiwon; Lee, Chulhee

    2009-02-01

    Neural network de-interlacing has shown promising results among various de-interlacing methods. In this paper, we investigate the effect of input size on neural network de-interlacing for various video formats. In particular, we investigate optimal input sizes for the CIF, VGA and HD video formats.

  10. [Application of artificial neural networks in infectious diseases].

    PubMed

    Xu, Jun-fang; Zhou, Xiao-nong

    2011-02-28

    With the development of information technology, artificial neural networks have been applied to many research fields. Due to special features such as nonlinearity, self-adaptation, and parallel processing, artificial neural networks have found applications in medicine and biology. This review summarizes the application of artificial neural networks to the related factors, prediction and diagnosis of infectious diseases in recent years. PMID:21823326

  11. Algorithm For A Self-Growing Neural Network

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.

    1996-01-01

    CID3 algorithm simulates self-growing neural network. Constructs decision trees equivalent to hidden layers of neural network. Based on ID3 algorithm, which dynamically generates decision tree while minimizing entropy of information. CID3 algorithm generates feedforward neural network by use of either crisp or fuzzy measure of entropy.

  12. Field-theoretic approach to fluctuation effects in neural networks

    SciTech Connect

    Buice, Michael A.; Cowan, Jack D.

    2007-05-15

    A well-defined stochastic theory for neural activity, which permits the calculation of arbitrary statistical moments and equations governing them, is a potentially valuable tool for theoretical neuroscience. We produce such a theory by analyzing the dynamics of neural activity using field theoretic methods for nonequilibrium statistical processes. Assuming that neural network activity is Markovian, we construct the effective spike model, which describes both neural fluctuations and response. This analysis leads to a systematic expansion of corrections to mean field theory, which for the effective spike model is a simple version of the Wilson-Cowan equation. We argue that neural activity governed by this model exhibits a dynamical phase transition which is in the universality class of directed percolation. More general models (which may incorporate refractoriness) can exhibit other universality classes, such as dynamic isotropic percolation. Because of the extremely high connectivity in typical networks, it is expected that higher-order terms in the systematic expansion are small for experimentally accessible measurements, and thus, consistent with measurements in neocortical slice preparations, we expect mean field exponents for the transition. We provide a quantitative criterion for the relative magnitude of each term in the systematic expansion, analogous to the Ginzburg criterion. Experimental identification of dynamic universality classes in vivo is an outstanding and important question for neuroscience.
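
    For orientation, a simple Wilson-Cowan-type mean-field limit of the kind referred to above can be written as follows; the notation (activity a, gain function f, coupling kernel w, external input h) is generic and not taken verbatim from the paper:

```latex
\tau \, \frac{\partial a(x,t)}{\partial t}
  \;=\; -\,a(x,t)
  \;+\; f\!\left( \int w(x - x')\, a(x',t)\, \mathrm{d}x' \;+\; h(x,t) \right)
```

    The field-theoretic expansion then supplies systematic fluctuation corrections to this deterministic equation, with the directed-percolation transition arising at the critical point of the underlying stochastic model.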

  13. Classifying multispectral data by neural networks

    NASA Technical Reports Server (NTRS)

    Telfer, Brian A.; Szu, Harold H.; Kiang, Richard K.

    1993-01-01

    Several energy functions for synthesizing neural networks are tested on 2-D synthetic data and on Landsat-4 Thematic Mapper data. These new energy functions, designed specifically for minimizing misclassification error, in some cases yield significant improvements in classification accuracy over the standard least mean squares energy function. In addition to operating on networks with one output unit per class, a new energy function is tested for binary encoded outputs, which result in smaller network sizes. The Thematic Mapper data (four bands were used) is classified on a single pixel basis, to provide a starting benchmark against which further improvements will be measured. Improvements are underway to make use of both subpixel and superpixel (i.e. contextual or neighborhood) information in the processing. For single pixel classification, the best neural network result is 78.7 percent, compared with 71.7 percent for a classical nearest neighbor classifier. The 78.7 percent result also improves on several earlier neural network results on this data.

  14. Color control of printers by neural networks

    NASA Astrophysics Data System (ADS)

    Tominaga, Shoji

    1998-07-01

    A method is proposed for solving the mapping problem from the 3D color space to the 4D CMYK space of printer ink signals by means of a neural network. The CIE-L*a*b* color system is used as the device-independent color space. The color reproduction problem is considered as the problem of controlling an unknown static system with four inputs and three outputs. A controller determines the CMYK signals necessary to produce the desired L*a*b* values with a given printer. Our solution method for this control problem is based on a two-phase procedure which eliminates the need for UCR and GCR. The first phase determines a neural network as a model of the given printer, and the second phase determines the combined neural network system by combining the printer model and the controller in such a way that it represents an identity mapping in the L*a*b* color space. Then the network of the controller part realizes the mapping from the L*a*b* space to the CMYK space. Practical algorithms are presented in the form of multilayer feedforward networks. The feasibility of the proposed method is shown in experiments using a dye sublimation printer and an ink jet printer.
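
    The two-phase structure described above can be shown with linear stand-ins for the networks: phase 1 identifies a printer model P from CMYK/Lab samples, and phase 2 chooses a controller C so that the composition controller-then-printer-model is the identity on Lab. The actual method trains multilayer feedforward networks for both phases; the linear closed forms below only illustrate the structure, and A_true is a made-up printer.

```python
import numpy as np

# Phase 1: identify a "printer model" P (CMYK -> Lab) from samples.
# Phase 2: choose a controller C (Lab -> CMYK) so that C followed by P is
# the identity map on Lab. Everything linear here is an illustrative stand-in.
rng = np.random.default_rng(3)
A_true = rng.normal(0, 1, (4, 3))             # hypothetical printer: Lab = CMYK @ A_true
cmyk_samples = rng.uniform(0, 1, (200, 4))
lab_samples = cmyk_samples @ A_true

# Phase 1: least-squares fit of the printer model from input/output pairs.
P, *_ = np.linalg.lstsq(cmyk_samples, lab_samples, rcond=None)

# Phase 2: pick C so that (C then P) is the identity on Lab; in the linear
# case the pseudo-inverse does what training the combined network does.
C = np.linalg.pinv(P)

# Applying the controller to the *real* printer reproduces the target Lab values.
target_lab = rng.normal(0, 1, (50, 3))
round_trip = (target_lab @ C) @ A_true
```

    In the paper both P and C are multilayer feedforward networks and phase 2 trains the combined controller-printer network toward the identity mapping; the algebra above is only the linear skeleton of that idea.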

  15. A Topological Perspective of Neural Network Structure

    NASA Astrophysics Data System (ADS)

    Sizemore, Ann; Giusti, Chad; Cieslak, Matthew; Grafton, Scott; Bassett, Danielle

    The wiring patterns of white matter tracts between brain regions inform functional capabilities of the neural network. Indeed, densely connected and cyclically arranged cognitive systems may communicate and thus perform distinctly. However, previously employed graph theoretical statistics are local in nature and thus insensitive to such global structure. Here we present an investigation of the structural neural network in eight healthy individuals using persistent homology. An extension of homology to weighted networks, persistent homology records both circuits and cliques (all-to-all connected subgraphs) through a repetitive thresholding process, thus perceiving structural motifs. We report structural features found across patients and discuss brain regions responsible for these patterns, finally considering the implications of such motifs in relation to cognitive function.

  16. Fuzzy logic and neural network technologies

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Lea, Robert N.; Savely, Robert T.

    1992-01-01

    Applications of fuzzy logic technologies in NASA projects are reviewed to examine their advantages in the development of neural networks for aerospace and commercial expert systems and control. Examples of fuzzy-logic applications include a 6-DOF spacecraft controller, collision-avoidance systems, and reinforcement-learning techniques. The commercial applications examined include a fuzzy autofocusing system, an air conditioning system, and an automobile transmission application. The practical use of fuzzy logic is set in the theoretical context of artificial neural systems (ANSs) to give the background for an overview of ANS research programs at NASA. The research and application programs include the Network Execution and Training Simulator and faster training algorithms such as the Difference Optimized Training Scheme. The networks are well suited for pattern-recognition applications such as predicting sunspots, controlling posture maintenance, and conducting adaptive diagnoses.

  17. a Heterosynaptic Learning Rule for Neural Networks

    NASA Astrophysics Data System (ADS)

    Emmert-Streib, Frank

    In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points when a synapse is modified. Moreover, the learning rule does not only affect the synapse between pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also affects more remote synapses of the pre- and postsynaptic neurons. This more complex form of synaptic plasticity has recently come under investigation in neurobiology and is called heterosynaptic plasticity. We demonstrate that this learning rule is useful in training neural networks by learning parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well, even in the presence of noise. Importantly, the mean learning time increases only polynomially with the number of patterns to be learned, indicating efficient learning.
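
    For a concrete feel for the XOR task mentioned above, here is a generic stochastic "accept a random weight change only if it reduces the error" learner on a 2-2-1 feed-forward network. This is not the article's heterosynaptic rule, which is not reproduced here; it only illustrates the task and the reward-gated flavour of stochastic learning.

```python
import numpy as np

# A 2-2-1 feed-forward network learning XOR by stochastic weight perturbation:
# propose a random change to all weights, keep it only if the error drops.
rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, x):
    W1, b1, W2, b2 = w
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def loss(w):
    return float(np.mean((forward(w, X) - y) ** 2))

w = [rng.normal(0, 1, (2, 2)), np.zeros(2), rng.normal(0, 1, 2), 0.0]
start = loss(w)
best = start
for _ in range(5000):
    trial = [p + rng.normal(0, 0.3, size=np.shape(p)) for p in w]
    if loss(trial) < best:               # keep a perturbation only if "rewarded"
        w, best = trial, loss(trial)

predictions = (forward(w, X) > 0.5).astype(float)
```

    Because updates are only accepted when rewarded, the error is monotonically non-increasing; the article's rule is far more structured (and neurobiologically grounded) than this toy search.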

  18. Neural networks: Application to medical imaging

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.

    1994-01-01

    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  19. Linear and nonlinear modeling of antifungal activity of some heterocyclic ring derivatives using multiple linear regression and Bayesian-regularized neural networks.

    PubMed

    Caballero, Julio; Fernández, Michael

    2006-01-01

    Antifungal activity was modeled for a set of 96 heterocyclic ring derivatives (2,5,6-trisubstituted benzoxazoles, 2,5-disubstituted benzimidazoles, 2-substituted benzothiazoles and 2-substituted oxazolo(4,5-b)pyridines) using multiple linear regression (MLR) and Bayesian-regularized artificial neural network (BRANN) techniques. Inhibitory activity against Candida albicans (log(1/C)) was correlated with 3D descriptors encoding the chemical structures of the heterocyclic compounds. Training and test sets were chosen by means of k-Means Clustering. The most appropriate variables for linear and nonlinear modeling were selected using a genetic algorithm (GA) approach. In addition to the MLR equation (MLR-GA), two nonlinear models were built: model BRANN, employing the linear variable subset, and an optimum model BRANN-GA, obtained by a hybrid method that combined the BRANN and GA approaches. The linear model fit the training set (n = 80) with r2 = 0.746, while BRANN and BRANN-GA gave higher values of r2 = 0.889 and r2 = 0.937, respectively. Beyond the improvement of training set fitting, the BRANN-GA model was superior to the others by being able to describe 87% of test set (n = 16) variance, in comparison with 78 and 81% for the MLR-GA and BRANN models, respectively. Our quantitative structure-activity relationship study suggests that the distributions of atomic mass, volume and polarizability have relevant relationships with the antifungal potency of the compounds studied. Furthermore, the ability of the six nonlinearly selected variables to differentiate the data was demonstrated when the total data set was well distributed in a Kohonen self-organizing neural network (KNN). PMID:16205958

  20. Computationally Efficient Neural Network Intrusion Security Awareness

    SciTech Connect

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly based systems. Several test cases executed on ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced to less than 1 second from 37 seconds.

  1. Neural network construction via back-propagation

    SciTech Connect

    Burwick, T.T.

    1994-06-01

    A method is presented that combines back-propagation with multi-layer neural network construction. Back-propagation is used not only to adjust the weights but also the signal functions. Going from one network to an equivalent one that has additional linear units, the non-linearity of these units, and thus their effective presence, is then introduced via back-propagation (weight-splitting). The back-propagated error causes the network to include new units in order to minimize the error function. We also show how this formalism allows the network to escape local minima.

  2. Multiscale Modeling of Cortical Neural Networks

    NASA Astrophysics Data System (ADS)

    Torben-Nielsen, Benjamin; Stiefel, Klaus M.

    2009-09-01

    In this study, we describe efforts at modeling the electrophysiological dynamics of cortical networks in a multi-scale manner. Specifically, we describe the implementation of a network model composed of simple single-compartmental neuron models, in which a single complex multi-compartmental model of a pyramidal neuron is embedded. The network is capable of generating Δ (2 Hz, observed during deep sleep states) and γ (40 Hz, observed during wakefulness) oscillations, which are then imposed onto the multi-compartmental model, thus providing realistic, dynamic boundary conditions. We furthermore discuss the challenges and chances involved in multi-scale modeling of neural function.

  3. Perspective: network-guided pattern formation of neural dynamics.

    PubMed

    Hütt, Marc-Thorsten; Kaiser, Marcus; Hilgetag, Claus C

    2014-10-01

    The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings and lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatio-temporal pattern formation and propose a novel perspective for analysing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics. PMID:25180302

  4. Continuous Attractor Neural Networks: Candidate of a Canonical Model for Neural Information Representation

    PubMed Central

    Wu, Si; Wong, K Y Michael; Fung, C C Alan; Mi, Yuanyuan; Zhang, Wenhao

    2016-01-01

    Owing to its many computationally desirable properties, the model of continuous attractor neural networks (CANNs) has been successfully applied to describe the encoding of simple continuous features in neural systems, such as orientation, moving direction, head direction, and spatial location of objects. Recent experimental and computational studies revealed that complex features of external inputs may also be encoded by low-dimensional CANNs embedded in the high-dimensional space of neural population activity. The new experimental data also confirmed the existence of the M-shaped correlation between neuronal responses, which is a correlation structure associated with the unique dynamics of CANNs. This body of evidence, which is reviewed in this report, suggests that CANNs may serve as a canonical model for neural information representation. PMID:26937278

  5. Orbit-centered atmospheric density prediction using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Pérez, David; Wohlberg, Brendt; Lovell, Thomas Alan; Shoemaker, Michael; Bevilacqua, Riccardo

    2014-05-01

    At low Earth orbits, drag force is a significant source of error for propagating the motion of a spacecraft. The main factor driving the changes on the drag force is neutral density. Global atmospheric models provide estimates for the density which are significantly affected by bias due to misrepresentations of the underlying physics and limitations on the statistical models. In this work a localized predictor based on artificial neural networks is presented. Localized refers to the focus being on a specific orbit, rather than a global prediction. The predictor uses density measurements or estimates on a given orbit and a set of proxies for solar and geomagnetic activities to predict the value of the density along the future orbit of the spacecraft. The performance of the localized predictor is studied for different neural network structures, testing periods of high and low solar and geomagnetic activities and different prediction windows. Comparison with previously developed methods show substantial benefits in using artificial neural networks, both in prediction accuracy and in the potential for spacecraft onboard implementation. In fact, the proposed neural networks are computationally efficient and would be straightforward to integrate into onboard software.

  6. Neural networks in the process industries

    SciTech Connect

    Ben, L.R.; Heavner, L.

    1996-12-01

    Neural networks, or more precisely, artificial neural networks (ANNs), are rapidly gaining in popularity. They first began to appear on the process-control scene in the early 1990s, but have been a research focus for more than 30 years. Neural networks are really empirical models that approximate the way neurons in the human brain are thought to work. Neural-net technology is not trying to produce computerized clones, but to model nature in an effort to mimic some of the brain's capabilities. Modeling, for the purposes of this article, means developing a mathematical description of physical phenomena. The physics and chemistry of industrial processes are usually quite complex and sometimes poorly understood. Our process understanding, and our imperfect ability to describe complexity in mathematical terms, limit the fidelity of first-principle models. Computational requirements for executing these complex models are a further limitation. It is often not possible to execute first-principle model algorithms at the high rate required for online control. Nevertheless, rigorous first-principle models are commonplace design tools. Process control is another matter. Important model inputs are often not available as process measurements, making real-time application difficult. In fact, engineers often use models to infer unavailable measurements. 5 figs.

  7. Neural Networks for Beat Perception in Musical Rhythm

    PubMed Central

    Large, Edward W.; Herrera, Jorge A.; Velasco, Marc J.

    2015-01-01

    Entrainment of cortical rhythms to acoustic rhythms has been hypothesized to be the neural correlate of pulse and meter perception in music. Dynamic attending theory first proposed synchronization of endogenous perceptual rhythms nearly 40 years ago, but only recently has the pivotal role of neural synchrony been demonstrated. Significant progress has since been made in understanding the role of neural oscillations and the neural structures that support synchronized responses to musical rhythm. Synchronized neural activity has been observed in auditory and motor networks, and has been linked with attentional allocation and movement coordination. Here we describe a neurodynamic model that shows how self-organization of oscillations in interacting sensory and motor networks could be responsible for the formation of the pulse percept in complex rhythms. In a pulse synchronization study, we test the model's key prediction that pulse can be perceived at a frequency for which no spectral energy is present in the amplitude envelope of the acoustic rhythm. The result shows that participants perceive the pulse at the theoretically predicted frequency. This model is one of the few consistent with neurophysiological evidence on the role of neural oscillation, and it explains a phenomenon that other computational models fail to explain. Because it is based on a canonical model, the predictions hold for an entire family of dynamical systems, not only a specific one. Thus, this model provides a theoretical link between oscillatory neurodynamics and the induction of pulse and meter in musical rhythm. PMID:26635549

  9. Adaptive Neural Networks for Automatic Negotiation

    SciTech Connect

    Sakas, D. P.; Vlachos, D. S.; Simos, T. E.

    2007-12-26

    The use of fuzzy logic and fuzzy neural networks has been found effective for modelling the uncertain relations between the parameters of a negotiation procedure. The problem with these configurations is that they are static; that is, any new knowledge from theory or experiment leads to the construction of entirely new models. To overcome this difficulty, in this work we apply an adaptive neural topology to model the negotiation process. Finally, a simple simulation is carried out in order to test the new method.

  10. Pruning Neural Networks with Distribution Estimation Algorithms

    SciTech Connect

    Cantu-Paz, E

    2003-01-15

    This paper describes the application of four evolutionary algorithms to the pruning of neural networks used in classification problems. Besides a simple genetic algorithm (GA), the paper considers three distribution estimation algorithms (DEAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine whether the DEAs present advantages over the simple GA in terms of accuracy or speed on this problem. The experiments used a feed-forward neural network trained with standard back propagation and public-domain and artificial data sets. The pruned networks had accuracy better than or equal to that of the original fully-connected networks; only in a few cases did pruning result in less accurate networks. We found few differences in the accuracy of the networks pruned by the four EAs, but important differences in the execution time. The results suggest that a simple GA with a small population might be the best algorithm for pruning networks on the data sets we tested.
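
As an illustration of the general approach (not code from the paper), the sketch below prunes a tiny fixed network with a plain GA: each individual is a binary mask over the weights, and fitness is classification accuracy with the masked weights. The data, network, and GA settings are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs (a hypothetical stand-in for the paper's datasets).
X = np.vstack([rng.normal(-1, 1, (50, 4)), rng.normal(1, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# A fixed, "pre-trained" single-layer network (random here, for illustration only).
W = rng.normal(size=(4, 2))

def accuracy(mask):
    """Fitness of a pruning mask: accuracy of the network with masked weights."""
    logits = X @ (W * mask.reshape(4, 2))
    return np.mean(logits.argmax(axis=1) == y)

# Simple GA over binary masks: truncation selection plus bit-flip mutation.
pop = rng.integers(0, 2, size=(20, 8))
for _ in range(30):
    fit = np.array([accuracy(m) for m in pop])
    parents = pop[np.argsort(fit)[-10:]]          # keep the 10 fittest masks
    children = parents[rng.integers(0, 10, 10)].copy()
    flips = rng.random(children.shape) < 0.1      # mutate ~10% of the bits
    children[flips] = 1 - children[flips]
    pop = np.vstack([parents, children])

best = max(pop, key=accuracy)
print("weights pruned away:", int((best == 0).sum()), "accuracy:", accuracy(best))
```

A DEA would replace the selection/mutation step with an estimated probability distribution over good masks; the fitness function stays the same.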

  11. The relevance of network micro-structure for neural dynamics.

    PubMed

    Pernice, Volker; Deger, Moritz; Cardanobile, Stefano; Rotter, Stefan

    2013-01-01

    The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm to construct networks based on a quasi-fractal probability measure which are much more variable than commonly used network models, and which therefore promise to sample the space of recurrent networks in a more exhaustive fashion than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics like spike-train irregularity or correlations and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits. PMID:23761758

  12. Forecasting solar proton event with artificial neural network

    NASA Astrophysics Data System (ADS)

    Gong, J.; Wang, J.; Xue, B.; Liu, S.; Zou, Z.

    Solar proton events (SPEs), relatively rare but more frequent near solar maximum, can create hazardous conditions for spacecraft. An SPE is always accompanied by a flare, which is then called a proton flare. To produce such an eruptive event, a large amount of energy must be accumulated within the active region. We can therefore examine the characteristics of the active region and its evolutionary trend, together with other indicators such as centimetric radio emission and the soft X-ray background, to evaluate the potential for an SPE in a chosen area. To capture the precursors of SPEs hidden behind the observed parameters, we employed a fully connected neural network. After constructing the network, we trained it with 13 parameters that characterize active regions and their evolutionary trend. More than 80 sets of event parameters were used to teach the neural network to identify whether an active region is likely to produce an SPE. We then tested the model on a database of SPE and non-SPE cases that was not used to train the neural network. The results showed that 75% of the model's predictions were correct.
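
A fully connected network with 13 inputs of the kind described can be sketched as below; the features, labels, and architecture details are synthetic stand-ins, since the abstract does not publish them.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in for the 13 active-region parameters and ~80 labeled events.
X = rng.normal(size=(80, 13))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # hypothetical SPE / non-SPE label

# One-hidden-layer fully connected network trained by batch gradient descent.
W1 = rng.normal(size=(13, 6)) * 0.1
W2 = rng.normal(size=(6, 1)) * 0.1
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(2000):
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2)[:, 0]
    err = (p - y)[:, None]                         # cross-entropy gradient at output
    W2 -= 0.1 * h.T @ err / len(X)
    W1 -= 0.1 * X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)

h = np.tanh(X @ W1)
p = sigmoid(h @ W2)[:, 0]
acc = np.mean((p > 0.5) == (y == 1))
print("training accuracy:", acc)
```

In the paper the test would of course use held-out SPE and non-SPE cases rather than the training set.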

  13. Computational capabilities of recurrent NARX neural networks.

    PubMed

    Siegelmann, H T; Horne, B G; Giles, C L

    1997-01-01

    Recently, fully connected recurrent neural networks have been proven to be computationally rich: at least as powerful as Turing machines. This work focuses on another network which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have a limited feedback which comes only from the output neuron rather than from hidden states. They are formalized by y(t)=Psi(u(t-n(u)), ..., u(t-1), u(t), y(t-n(y)), ..., y(t-1)), where u(t) and y(t) represent the input and output of the network at time t, n(u) and n(y) are the input and output orders, and the function Psi is the mapping performed by a Multilayer Perceptron. We constructively prove that NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks, and thus Turing machines. We conclude that in theory one can use NARX models rather than conventional recurrent networks, without any computational loss, even though their feedback is limited. Furthermore, these results raise the issue of what amount of feedback or recurrence is necessary for any network to be Turing equivalent and what restrictions on feedback limit computational power. PMID:18255858
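
The recurrence defining a NARX network can be sketched directly from the formalization above. Here Psi is a small untrained MLP with random weights, purely to show the limited output-only feedback; the orders, sizes, and input signal are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_u, n_y = 2, 2            # input and output orders from the abstract's formalization

# Psi: a one-hidden-layer MLP with fixed random weights (illustrative, untrained).
W1 = rng.normal(size=(n_u + 1 + n_y, 8))
W2 = rng.normal(size=(8, 1))

def psi(z):
    return float(np.tanh(z @ W1) @ W2)

# Run the recurrence y(t) = Psi(u(t-n_u), ..., u(t), y(t-n_y), ..., y(t-1)):
# feedback comes only from past outputs, not from hidden states.
u = np.sin(0.3 * np.arange(50))        # an arbitrary input signal
y = np.zeros(50)
for t in range(max(n_u, n_y), 50):
    z = np.concatenate([u[t - n_u : t + 1], y[t - n_y : t]])
    y[t] = psi(z)

print(y[-5:])
```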

  14. A solution to neural field equations by a recurrent neural network method

    NASA Astrophysics Data System (ADS)

    Alharbi, Abir

    2012-09-01

    Neural field equations (NFE) are used to model the activity of neurons in the brain; they are introduced starting from a single-neuron 'integrate-and-fire' model. The neural continuum is spatially discretized for numerical studies, and the governing equations are modeled as a system of ordinary differential equations. In this article the recurrent neural network approach is used to solve this system of ODEs. It consists of a technique developed by combining the standard numerical method of finite differences with the Hopfield neural network. The architecture of the net, the energy function, the updating equations, and the algorithms are developed for the NFE model. A Hopfield neural network is then designed to minimize the energy function modeling the NFE. Results obtained from the Hopfield-finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approach may make it easier to implement on fast parallel computers, giving it a speed advantage over traditional methods.
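
The first step of the pipeline above, spatially discretizing a neural field equation into a system of ODEs, can be sketched as below with a plain forward-Euler integrator (the paper's Hopfield-energy solver is not reproduced here; the kernel, input, and grid are invented).

```python
import numpy as np

# Discretize a neural field u_t = -u + integral of w(x - x') f(u(x')) dx' + I(x)
# on N grid points; the spatial integral becomes a matrix-vector product.
N, dx, dt = 100, 0.1, 0.01
x = np.arange(N) * dx
W = np.exp(-np.abs(x[:, None] - x[None, :]))        # illustrative coupling kernel
f = np.tanh                                         # firing-rate nonlinearity
I = 0.5 * np.exp(-((x - x.mean()) ** 2))            # localized external input

u = np.zeros(N)
for _ in range(500):                                # forward-Euler time stepping
    u += dt * (-u + (W @ f(u)) * dx + I)

print("peak activity:", u.max())
```

The resulting ODE system is what the Hopfield-finite-differences net of the paper is designed to solve by energy minimization.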

  15. Generalization of features in the assembly neural networks.

    PubMed

    Goltsev, Alexander; Wunsch, Donald C

    2004-02-01

    The purpose of the paper is an experimental study of the formation of class descriptions, taking place during learning, in assembly neural networks. The assembly neural network is artificially partitioned into several sub-networks according to the number of classes that the network has to recognize. The features extracted from input data are represented in neural column structures of the sub-networks. Hebbian neural assemblies are formed in the column structure of the sub-networks by weight adaptation. A specific class description is formed in each sub-network of the assembly neural network due to intersections between the neural assemblies. The process of formation of class descriptions in the sub-networks is interpreted as feature generalization. A set of special experiments is performed to study this process, on a task of character recognition using the MNIST database. PMID:15034946

  16. VLSI implementable neural networks for target tracking

    NASA Astrophysics Data System (ADS)

    Himes, Glenn S.; Inigo, Rafael M.; Narathong, Chiewcharn

    1991-08-01

    This paper describes part of an integrated system for target tracking. The image is acquired, edge detected, and segmented by a subsystem not discussed in this paper. Algorithms to determine the centroid of a windowed target using neural networks are developed. Further, once the target centroid is determined, it is continuously updated in order to track the trajectory, since the centroid location is not dependent on scaling or rotation on the optical axis. The image is then mapped to a log-spiral grid. A conformal transformation is used to map the log-spiral grid to a computation plane in which rotations and scalings are transformed to displacements along the vertical and horizontal axes, respectively. The images in this plane are used for recognition. The recognition algorithms are the subject of another paper. A second neural network, also described in this paper, is then used to determine object rotation and scaling. The algorithm used by this network is an original line correlator tracker which, as the name indicates, uses linear instead of 2D correlations. Simulation results using ICBM images are presented for both the centroid neural net and the rotation-scaling detection network.

  17. Functional expansion representations of artificial neural networks

    NASA Technical Reports Server (NTRS)

    Gray, W. Steven

    1992-01-01

    In the past few years, significant interest has developed in using artificial neural networks to model and control nonlinear dynamical systems. While there exist many proposed schemes for accomplishing this and a wealth of supporting empirical results, most approaches to date tend to be ad hoc in nature and rely mainly on heuristic justifications. The purpose of this project was to further develop some analytical tools for representing nonlinear discrete-time input-output systems, which when applied to neural networks would give insight on architecture selection, pruning strategies, and learning algorithms. A long term goal is to determine in what sense, if any, a neural network can be used as a universal approximator for nonlinear input-output maps with memory (i.e., realized by a dynamical system). This property is well known for the case of static or memoryless input-output maps. The general architecture under consideration in this project was a single-input, single-output recurrent feedforward network.

  18. Convolutional Neural Network Based dem Super Resolution

    NASA Astrophysics Data System (ADS)

    Chen, Zixuan; Wang, Xuewen; Xu, Zekai; Hou, Wenguang

    2016-06-01

    DEM super resolution was proposed in our previous publication to improve the resolution of a DEM on the basis of some learning examples, and a nonlocal algorithm was introduced to deal with it; many experiments show that the strategy is feasible. In that publication, the learning examples are defined as parts of the original DEM and their related high-resolution measurements, since this choice avoids incompatibility between the data to be processed and the learning examples. To further extend the applications of this new strategy, the learning examples should be diverse and easy to obtain; yet this may introduce incompatibility and a lack of robustness. To overcome this, we investigate a convolutional neural network based method. The input of the convolutional neural network is a low resolution DEM and the output is expected to be its high resolution counterpart. A three-layer model is adopted: the first layer detects features from the input, the second integrates the detected features into compressed ones, and the final layer transforms the compressed features into a new DEM. According to this designed structure, some learning DEMs are used to train it; specifically, the network is optimized by minimizing the error between the output and its expected high resolution DEM. In practical applications, a testing DEM is input to the convolutional neural network and a super resolution result is obtained. Many experiments show that the CNN based method can obtain better reconstructions than many classic interpolation methods.
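
The three-layer structure described can be sketched as an untrained forward pass (random weights and invented patch sizes; a real super-resolution network would also upsample the input and learn the filters by minimizing reconstruction error).

```python
import numpy as np

rng = np.random.default_rng(2)

def conv(x, kern):
    """'Valid' 2-D convolution of a (C, H, W) array with (n, C, k, k) kernels."""
    n, C, k, _ = kern.shape
    _, H, W = x.shape
    out = np.zeros((n, H - k + 1, W - k + 1))
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            patch = x[:, i : i + k, j : j + k]
            out[:, i, j] = np.tensordot(kern, patch, axes=([1, 2, 3], [0, 1, 2]))
    return out

relu = lambda v: np.maximum(v, 0)

dem = rng.normal(size=(1, 16, 16))                         # toy low-resolution DEM patch
h1 = relu(conv(dem, rng.normal(size=(8, 1, 3, 3)) * 0.1))  # layer 1: feature detection
h2 = relu(conv(h1, rng.normal(size=(4, 8, 1, 1)) * 0.1))   # layer 2: feature compression
out = conv(h2, rng.normal(size=(1, 4, 3, 3)) * 0.1)        # layer 3: reconstruct the DEM
print(out.shape)
```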

  19. Neural networks as a control methodology

    NASA Technical Reports Server (NTRS)

    Mccullough, Claire L.

    1990-01-01

    While conventional computers must be programmed in a logical fashion by a person who thoroughly understands the task to be performed, the motivation behind neural networks is to develop machines which can train themselves to perform tasks, using available information about desired system behavior and learning from experience. There are three goals of this fellowship program: (1) to evaluate various neural net methods and generate computer software to implement those deemed most promising on a personal computer equipped with Matlab; (2) to evaluate methods currently in the professional literature for system control using neural nets to choose those most applicable to control of flexible structures; and (3) to apply the control strategies chosen in (2) to a computer simulation of a test article, the Control Structures Interaction Suitcase Demonstrator, which is a portable system consisting of a small flexible beam driven by a torque motor and mounted on springs tuned to the first flexible mode of the beam. Results of each are discussed.

  20. On lateral competition in dynamic neural networks

    SciTech Connect

    Bellyustin, N.S.

    1995-02-01

    Artificial neural networks connected homogeneously, which use retinal image processing methods, are considered. We point out that there are probably two different types of lateral inhibition of each neural element by its neighbors: one due to negative connection coefficients between elements, and one due to the decreasing response of a neuron to a too-high input signal. The first case is characterized by stable dynamics, governed by a Lyapunov function, while in the second case stability is absent and two-dimensional dynamic chaos occurs if the time step in the integration of the model equations is large enough. The continuous neural medium approximation is used for analytical estimates in both cases. The result is a partition of the parameter space into domains with qualitatively different dynamic modes. Computer simulations confirm the estimates and show that joining two-dimensional chaos with symmetries imposed by the initial and boundary conditions may produce patterns which are genuine pieces of art.

  1. Neural network for tsunami and runup forecast

    NASA Astrophysics Data System (ADS)

    Namekar, Shailesh; Yamazaki, Yoshiki; Cheung, Kwok Fai

    2009-04-01

    This paper examines the use of neural network to model nonlinear tsunami processes for forecasting of coastal waveforms and runup. The three-layer network utilizes a radial basis function in the hidden, middle layer for nonlinear transformation of input waveforms near the tsunami source. Events based on the 2006 Kuril Islands tsunami demonstrate the implementation and capability of the network. Division of the Kamchatka-Kuril subduction zone into a number of subfaults facilitates development of a representative tsunami dataset using a nonlinear long-wave model. The computed waveforms near the tsunami source serve as the input and the far-field waveforms and runup provide the target output for training of the network through a back-propagation algorithm. The trained network reproduces the resonance of tsunami waves and the topography-dominated runup patterns at Hawaii's coastlines from input water-level data off the Aleutian Islands.
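
A minimal sketch of a three-layer radial-basis-function network of the kind described, with synthetic data standing in for the near-source inputs and far-field targets. The paper trains through back-propagation; for brevity this sketch fits the linear output layer by least squares, a common shortcut for RBF networks.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins: 40 "events", 10 input waveform samples each,
# with an arbitrary nonlinear scalar target playing the role of runup.
X = rng.normal(size=(40, 10))
t = np.sin(X.sum(axis=1, keepdims=True))

centers = X[rng.choice(40, 8, replace=False)]   # RBF centers taken from the data
width = 2.0

def hidden(x):
    """Radial-basis hidden layer: Gaussian of the distance to each center."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

# Output-layer weights by one linear solve.
H = hidden(X)
w, *_ = np.linalg.lstsq(H, t, rcond=None)

pred = hidden(X) @ w
print("training RMS error:", float(np.sqrt(((pred - t) ** 2).mean())))
```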

  2. A classifier neural network for rotordynamic systems

    NASA Astrophysics Data System (ADS)

    Ganesan, R.; Jionghua, Jin; Sankar, T. S.

    1995-07-01

    A feedforward backpropagation neural network is formed to identify the stability characteristic of a high speed rotordynamic system. The principal focus resides in accounting for the instability due to bearing clearance effects. The abnormal operating condition of 'normal-loose' Coulomb rub, which arises in units supported by hydrodynamic bearings or rolling element bearings, is analysed in detail. The multiple-parameter stability problem is formulated and converted to a set of three-parameter algebraic inequality equations. These three parameters map the wide range of physical parameters of commonly-used rotordynamic systems into a narrow closed region, which is used in the supervised learning of the neural network. A binary-type state of the system is expressed through these inequalities, deduced from the analytical simulation of the rotor system. Both hidden layer and functional-link networks are formed, and the superiority of the functional-link network is established. Considering the real time interpretation and control of the rotordynamic system, network reliability and learning time are used as the evaluation criteria to assess the superiority of the functional-link network. This functional-link network is further trained using the parameter values of selected rotor systems, and the classifier network is formed. The success rate of stability status identification is obtained to assess the potential of this classifier network. It is shown that the classifier network can also be used, for control purposes, as an 'advisory' system that suggests the optimum way of adjusting parameters.

  3. Model for a neural network structure and signal transmission

    NASA Astrophysics Data System (ADS)

    Kotsavasiloglou, C.; Kalampokis, A.; Argyrakis, P.; Baloyannis, S.

    1997-10-01

    We present a model of a neural network that is based on the diffusion-limited-aggregation (DLA) structure from fractal physics. A single neuron is one DLA cluster, while a large number of clusters, in an interconnected fashion, make up the neural network. Using simulation techniques, a signal is randomly generated and traced through its transmission inside the neuron and from neuron to neuron through the synapses. The activity of the entire neural network is monitored as a function of time. The characteristics included in the model contain, among others, the threshold for firing, the excitatory or inhibitory character of the synapse, the synaptic delay, and the refractory period. The system activity results in ``noisy'' time series that exhibit an oscillatory character. Standard power spectra are evaluated and fractal analyses performed, showing that the system is not chaotic, but the varying parameters can be associated with specific values of fractal dimensions. It is found that the network activity is not linear with the system parameters, e.g., with the numbers of active synapses. The details of this behavior may have interesting repercussions from the neurological point of view.

  4. Analysis of Stochastic Response of Neural Networks with Stochastic Input

    1996-10-10

    Software permits the user to extend the capability of his or her neural network to include probabilistic characteristics of input parameters. The user inputs the topology and weights associated with the neural network along with the distributional characteristics of the input parameters. Network response is provided via a cumulative density function of the network response variable.

  5. Neural dynamics in superconducting networks

    NASA Astrophysics Data System (ADS)

    Segall, Kenneth; Schult, Dan; Crotty, Patrick; Miller, Max

    2012-02-01

    We discuss the use of Josephson junction networks as analog models for simulating neuron behaviors. A single unit called a ``Josephson junction neuron'' composed of two Josephson junctions [1] displays behavior that shows characteristics of single neurons such as action potentials, thresholds and refractory periods. Synapses can be modeled as passive filters and can be used to connect neurons together. The sign of the bias current to the Josephson neuron can be used to determine whether the neuron is excitatory or inhibitory. Due to the intrinsic speed of Josephson junctions and their scaling properties as analog models, a large network of Josephson neurons measured over typical lab times contains dynamics which would essentially be impossible to calculate on a computer. We discuss the operating principle of the Josephson neuron, coupling Josephson neurons together to make large networks, and the Kuramoto-like synchronization of a system of disordered junctions. [1] ``Josephson junction simulation of neurons,'' P. Crotty, D. Schult and K. Segall, Physical Review E 82, 011914 (2010).

  6. Neural networks and logical reasoning systems: a translation table.

    PubMed

    Martins, J; Mendes, R V

    2001-04-01

    A correspondence is established between the basic elements of logic reasoning systems (knowledge bases, rules, inference and queries) and the structure and dynamical evolution laws of neural networks. The correspondence is pictured as a translation dictionary which might allow one to go back and forth between symbolic and network formulations, a desirable step in learning-oriented systems and multicomputer networks. In the framework of Horn clause logics, it is found that atomic propositions with n arguments correspond to nodes with nth order synapses, rules to synaptic intensity constraints, forward chaining to synaptic dynamics, and queries either to simple node activation or to a query tensor dynamics. PMID:14632170

  7. Neural network modeling of associative memory: Beyond the Hopfield model

    NASA Astrophysics Data System (ADS)

    Dasgupta, Chandan

    1992-07-01

    A number of neural network models, in which fixed-point and limit-cycle attractors of the underlying dynamics are used to store and associatively recall information, are described. In the first class of models, a hierarchical structure is used to store an exponentially large number of strongly correlated memories. The second class of models uses limit cycles to store and retrieve individual memories. A neurobiologically plausible network that generates low-amplitude periodic variations of activity, similar to the oscillations observed in electroencephalographic recordings, is also described. Results obtained from analytic and numerical studies of the properties of these networks are discussed.
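
Recall by fixed-point attractors, the mechanism behind the first class of models, can be illustrated with a standard Hopfield network (a generic textbook sketch, not the hierarchical model of the abstract): patterns are stored with the Hebb rule, and a noisy cue relaxes to the stored pattern.

```python
import numpy as np

rng = np.random.default_rng(4)

# Store two random +/-1 patterns with the Hebb rule.
N = 64
patterns = rng.choice([-1, 1], size=(2, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

# Corrupt 8 of the 64 bits of the first pattern to make a noisy cue.
cue = patterns[0].copy()
flip = rng.choice(N, 8, replace=False)
cue[flip] *= -1

# Synchronous updates drive the state toward a fixed-point attractor.
state = cue
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1

overlap = (state * patterns[0]).mean()
print("overlap with stored pattern:", overlap)
```

An overlap of 1.0 means the noisy cue has been restored exactly; limit-cycle models, the abstract's second class, instead store memories in periodic trajectories rather than fixed points.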

  8. Image texture segmentation using a neural network

    NASA Astrophysics Data System (ADS)

    Sayeh, Mohammed R.; Athinarayanan, Ragu; Dhali, Pushpuak

    1992-09-01

    In this paper we use a neural network called the Lyapunov associative memory (LYAM) system to segment image texture into different categories or clusters. The LYAM system is constructed by a set of ordinary differential equations which are simulated on a digital computer. The clustering can be achieved by using a single tuning parameter in the simplest model. Pattern classes are represented by the stable equilibrium states of the system. Design of the system is based on synthesizing two local energy functions, namely, the learning and recall energy functions. Before the implementation of the segmentation process, a Gauss-Markov random field (GMRF) model is applied to the raw image. This application suitably reduces the image data and prepares the texture information for the neural network process. We give a simple image example illustrating the capability of the technique. The GMRF-generated features are also used for a clustering, based on the Euclidean distance.

  9. Training neural networks with heterogeneous data.

    PubMed

    Drakopoulos, John A; Abdulkader, Ahmad

    2005-01-01

    Data pruning and ordered training are two methods, and the results of a small theory, that attempt to formalize neural network training with heterogeneous data. Data pruning is a simple process that attempts to remove noisy data. Ordered training is a more complex method that partitions the data into a number of categories and assigns training times to them, assuming that data size and training time have a polynomial relation. Both methods derive from a set of premises that form the 'axiomatic' basis of our theory. Both methods have been applied to a time-delay neural network, which is one of the main learners in Microsoft's Tablet PC handwriting recognition system. Their effect is presented in this paper along with a rough estimate of their effect on the overall multi-learner system. The handwriting data and the chosen language are Italian. PMID:16095874

  10. Privacy-preserving backpropagation neural network learning.

    PubMed

    Chen, Tingting; Zhong, Sheng

    2009-10-01

    With the development of distributed computing environments, many learning problems now have to deal with distributed input data. To enhance cooperation in learning, it is important to address the privacy concern of each data holder by extending the privacy preservation notion to original learning algorithms. In this paper, we focus on preserving privacy in an important learning model, multilayer neural networks. We present a privacy-preserving two-party distributed algorithm of backpropagation which allows a neural network to be trained without requiring either party to reveal her data to the other. We provide complete correctness and security analysis of our algorithms. The effectiveness of our algorithms is verified by experiments on various real world data sets. PMID:19709975

  11. Application of neural networks in space construction

    NASA Technical Reports Server (NTRS)

    Thilenius, Stephen C.; Barnes, Frank

    1990-01-01

    When trying to decide which tasks should be done by robots and which by humans with respect to space construction, there has been one decisive barrier which ultimately divides the tasks: can a computer do the job? Von Neumann type computers have great difficulty with problems that the human brain seems to solve instantaneously and with little effort, such as pattern recognition, speech recognition, content addressable memories, and command interpretation. In an attempt to simulate these talents of the human brain, much research is currently being done into the operation and construction of artificial neural networks. The efficiency of the interface between man and machine, robots in particular, can therefore be greatly improved with the use of neural networks. For example, wouldn't it be easier to command a robot to 'fetch an object' rather than having to control the entire operation by remote control?

  12. Automatic breast density classification using neural network

    NASA Astrophysics Data System (ADS)

    Arefan, D.; Talebpour, A.; Ahmadinejhad, N.; Kamali Asl, A.

    2015-12-01

    According to studies, the risk of breast cancer is directly associated with breast density, and much research has been done on the automatic diagnosis of breast density using mammography. In the current study, artifacts in mammograms are removed using image processing techniques; then, using the method presented here, which detects points on the pectoral muscle edges and estimates the muscle boundary with regression techniques, the pectoral muscle is detected with high accuracy and the breast tissue is fully automatically extracted. In order to classify mammography images into three categories (Fatty, Glandular, Dense), a feature based on the difference of gray levels between hard tissue and soft tissue in mammograms is used in addition to statistical features, with a neural network classifier containing a hidden layer. The image database used in this research is the mini-MIAS database, and the maximum accuracy of the system in classifying images is reported as 97.66% with 8 neurons in the hidden layer of the neural network.

  13. Toward modeling a dynamic biological neural network.

    PubMed

    Ross, M D; Dayhoff, J E; Mugler, D H

    1990-01-01

    Mammalian macular endorgans are linear bioaccelerometers located in the vestibular membranous labyrinth of the inner ear. In this paper, the organization of the endorgan is interpreted on physical and engineering principles. This is a necessary prerequisite to mathematical and symbolic modeling of information processing by the macular neural network. Mathematical notations that describe the functioning system were used to produce a novel, symbolic model. The model is six-tiered and is constructed to mimic the neural system. Initial simulations show that the network functions best when some of the detecting elements (type I hair cells) are excitatory and others (type II hair cells) are weakly inhibitory. The simulations also illustrate the importance of disinhibition of receptors located in the third tier in shaping nerve discharge patterns at the sixth tier in the model system. PMID:11538873

  14. Neural Flows in Hopfield Network Approach

    NASA Astrophysics Data System (ADS)

    Ionescu, Carmen; Panaitescu, Emilian; Stoicescu, Mihai

    2013-12-01

    In most of the applications involving neural networks, the main problem consists in finding an optimal procedure to reduce the real neuron to simpler models which still express the biological complexity but allow highlighting the main characteristics of the system. We effectively investigate a simple reduction procedure which leads from complex models of Hodgkin-Huxley type to very convenient binary models of Hopfield type. The reduction allows us to describe the neuron interconnections in a quite large network and to obtain information concerning its symmetry and stability. Both cases, homogeneous voltage across the membrane and inhomogeneous voltage along the axon, will be tackled. A few numerical simulations of the neural flow, based on the cable equation, will also be presented.

  15. HAWC Energy Reconstruction via Neural Network

    NASA Astrophysics Data System (ADS)

    Marinelli, Samuel; HAWC Collaboration

    2016-03-01

    The High-Altitude Water-Cherenkov (HAWC) γ-ray observatory is located at 4100 m above sea level on the Sierra Negra mountain in the state of Puebla, Mexico. Its 300 water-filled tanks are instrumented with PMTs that detect Cherenkov light produced by charged particles in atmospheric air showers induced by TeV γ-rays. The detector became fully operational in March of 2015. With a 2-sr field of view and duty cycle exceeding 90%, HAWC is a survey instrument sensitive to diverse γ-ray sources, including supernova remnants, pulsar wind nebulae, active galactic nuclei, and others. Particle-acceleration mechanisms at these sources can be inferred by studying their energy spectra, particularly at high energies. We have developed a technique for estimating primary-γ-ray energies using an artificial neural network (ANN). Input variables to the ANN are selected to characterize shower multiplicity in the detector, the fraction of the shower contained in the detector, and atmospheric attenuation of the shower. Monte Carlo simulations show that the new estimator has superior performance to the current estimator used in HAWC publications. This work was supported by the National Science Foundation.

  16. Neural network error correction for solving coupled ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
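
    The idea can be sketched on a toy problem (a setup of my own, with a least-squares fit standing in for the neural network): learn the one-step error of a crude integrator for y' = -y, then add the learned correction at each step.

```python
import numpy as np

h = 0.1                                   # step size

def euler_step(y):
    return y + h * (-y)                   # crude integrator for y' = -y

def exact_step(y):
    return y * np.exp(-h)                 # true one-step propagator

# Training data: states and the integrator's one-step errors.
ys = np.linspace(0.1, 1.0, 10)
errors = exact_step(ys) - euler_step(ys)

# Least-squares fit error ~ a*y + b (standing in for the neural network).
a, b = np.polyfit(ys, errors, 1)

def corrected_step(y):
    return euler_step(y) + a * y + b      # integrator plus learned error

y_plain = y_corr = 1.0
for _ in range(10):                       # integrate to t = 1
    y_plain = euler_step(y_plain)
    y_corr = corrected_step(y_corr)

exact = np.exp(-1.0)
assert abs(y_corr - exact) < abs(y_plain - exact)  # correction helps
```

    For a realistic coupled nonlinear system the error is no longer linear in the state, which is where a trained network earns its keep over a simple fit.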

  17. Associative Memory Neural Network with Low Temporal Spiking Rates

    NASA Astrophysics Data System (ADS)

    Amit, Daniel J.; Treves, A.

    1989-10-01

    We describe a modified attractor neural network in which neuronal dynamics takes place on a time scale of the absolute refractory period but the mean temporal firing rate of any neuron in the network is lower by an arbitrary factor that characterizes the strength of the effective inhibition. It operates by encoding information on the excitatory neurons only and assuming the inhibitory neurons to be faster and to inhibit the excitatory ones by an effective postsynaptic potential that is expressed in terms of the activity of the excitatory neurons themselves. Retrieval is identified as a nonergodic behavior of the network whose consecutive states have a significantly enhanced activity rate for the neurons that should be active in a stored pattern and a reduced activity rate for the neurons that are inactive in the memorized pattern. In contrast to the Hopfield model the network operates away from fixed points and under the strong influence of noise. As a consequence, of the neurons that should be active in a pattern, only a small fraction is active in any given time cycle and those are randomly distributed, leading to reduced temporal rates. We argue that this model brings neural network models much closer to biological reality. We present the results of detailed analysis of the model as well as simulations.

  18. Neural network with dynamically adaptable neurons

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    This invention is an adaptive neuron for use in neural network processors. The adaptive neuron participates in the supervised learning phase of operation on a co-equal basis with the synapse matrix elements by adaptively changing its gain in a manner similar to the change of weights in the synapse elements. In this manner, training time is decreased by as much as three orders of magnitude.

  19. Reconstructing irregularly sampled images by neural networks

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Yellott, John I., Jr.

    1989-01-01

    Neural-network-like models of receptor position learning and interpolation function learning are being developed as models of how the human nervous system might handle the problems of keeping track of the receptor positions and interpolating the image between receptors. These models may also be of interest to designers of image processing systems desiring the advantages of a retina-like image sampling array.

  20. Artificial neural network cardiopulmonary modeling and diagnosis

    DOEpatents

    Kangas, Lars J.; Keller, Paul E.

    1997-01-01

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.

  1. Artificial neural network cardiopulmonary modeling and diagnosis

    DOEpatents

    Kangas, L.J.; Keller, P.E.

    1997-10-28

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis. 12 figs.

  2. Analog hardware for learning neural networks

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P. (Inventor)

    1991-01-01

    This is a recurrent or feedforward analog neural network processor having a multi-level neuron array and a synaptic matrix for storing weighted analog values of synaptic connection strengths which is characterized by temporarily changing one connection strength at a time to determine its effect on system output relative to the desired target. That connection strength is then adjusted based on the effect, whereby the processor is taught the correct response to training examples connection by connection.

  3. Hybrid pyramid/neural network object recognition

    NASA Astrophysics Data System (ADS)

    Anandan, P.; Burt, Peter J.; Pearson, John C.; Spence, Clay D.

    1994-02-01

    This work concerns computationally efficient computer vision methods for the search for and identification of small objects in large images. The approach combines neural network pattern recognition with pyramid-based coarse-to-fine search, in a way that eliminates the drawbacks of each method when used by itself and, in addition, improves object identification through learning and exploiting the low-resolution image context associated with the objects. The presentation will describe the system architecture and the performance on illustrative problems.

  4. Nonvolatile Array Of Synapses For Neural Network

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Elements of array programmed with help of ultraviolet light. A 32 x 32 very-large-scale integrated-circuit array of electronic synapses serves as building-block chip for analog neural-network computer. Synaptic weights stored in nonvolatile manner. Makes information content of array invulnerable to loss of power, and, by eliminating need for circuitry to refresh volatile synaptic memory, makes architecture simpler and more compact.

  5. Diagnosing process faults using neural network models

    SciTech Connect

    Buescher, K.L.; Jones, R.D.; Messina, M.J.

    1993-11-01

    In order to be of use for realistic problems, a fault diagnosis method should have the following three features. First, it should apply to nonlinear processes. Second, it should not rely on extensive amounts of data regarding previous faults. Lastly, it should detect faults promptly. The authors present such a scheme for static (i.e., non-dynamic) systems. It involves using a neural network to create an associative memory whose fixed points represent the normal behavior of the system.

  6. Learning in Neural Networks: VLSI Implementation Strategies

    NASA Technical Reports Server (NTRS)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  7. Adaptive Filtering Using Recurrent Neural Networks

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum- variance filters. In that they do not require statistical models of noise, the neural- network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
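
    For context, the classical baseline being contrasted can be sketched as a minimal scalar Kalman filter (parameter values are illustrative, assuming the standard predict/update equations):

```python
import numpy as np

def kalman_filter(measurements, a=1.0, c=1.0, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x[k+1] = a*x[k] + w, y[k] = c*x[k] + v."""
    x, p = x0, p0
    estimates = []
    for y in measurements:
        x, p = a * x, a * a * p + q                    # predict
        k = p * c / (c * c * p + r)                    # Kalman gain
        x, p = x + k * (y - c * x), (1.0 - k * c) * p  # update
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
true_state = 1.0
ys = true_state + rng.normal(0.0, 0.5, size=200)  # noisy measurements

est = kalman_filter(ys)
# The filtered estimate ends far closer to the truth than a raw measurement.
assert abs(est[-1] - true_state) < 0.3
```

    The abstract's point is that the recurrent-network filter dispenses with exactly the assumptions this baseline depends on: a known linear model (a, c) and zero-mean Gaussian white noise (q, r).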

  8. Applying neural networks to optimize instrumentation performance

    SciTech Connect

    Start, S.E.; Peters, G.G.

    1995-06-01

    Well calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature-sensitive level probe used in Experimental Breeder Reactor II (EBR-II) that was difficult to calibrate.

  9. Neural network architectures to analyze OPAD data

    NASA Technical Reports Server (NTRS)

    Whitaker, Kevin W.

    1992-01-01

    A prototype Optical Plume Anomaly Detection (OPAD) system is now installed on the space shuttle main engine (SSME) Technology Test Bed (TTB) at MSFC. The OPAD system requirements dictate the need for fast, efficient data processing techniques. To address this need of the OPAD system, a study was conducted into how artificial neural networks could be used to assist in the analysis of plume spectral data.

  10. Neural Network Solves "Traveling-Salesman" Problem

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.

    1990-01-01

    Experimental electronic neural network solves "traveling-salesman" problem. Plans round trip of minimum distance among N cities, visiting every city once and only once (without backtracking). This problem is paradigm of many problems of global optimization (e.g., routing or allocation of resources) occurring in industry, business, and government. Applied to large number of cities (or resources), circuits of this kind expected to solve problem faster and more cheaply.

  11. Program PSNN (Plasma Spectroscopy Neural Network)

    SciTech Connect

    Morgan, W.L.; Larsen, J.T.

    1993-08-01

    This program uses the standard "delta rule" back-propagation supervised training algorithm for multi-layer neural networks. The inputs are line intensities in arbitrary units, which are then normalized within the program. The outputs are T_e (eV), N_e (cm^-3), and a fractional ionization, which in our testing using H- and He-like spectra was N(He)/[N(H) + N(He)].
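
    Delta-rule back-propagation with per-sample input normalization can be sketched on toy data (the architecture, data, learning rate, and target below are assumptions, not PSNN's):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 100.0, size=(200, 4))   # "intensities", arbitrary units
X = X / X.max(axis=1, keepdims=True)         # normalized within the program
y = X.mean(axis=1, keepdims=True)            # toy target, not Te/Ne

W1 = rng.normal(0.0, 0.5, (4, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    err = (h @ W2 + b2) - y
    dW2 = h.T @ err / len(X)                 # backward pass: delta rule
    db2 = err.mean(0, keepdims=True)
    dh = err @ W2.T * h * (1.0 - h)          # chain rule through sigmoid
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(0, keepdims=True)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= 0.5 * g                         # gradient-descent step

mse = float((((sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2).mean())
assert mse < 5e-3
```
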

  12. A novel neural network based image reconstruction model with scale and rotation invariance for target identification and classification for Active millimetre wave imaging

    NASA Astrophysics Data System (ADS)

    Agarwal, Smriti; Bisht, Amit Singh; Singh, Dharmendra; Pathak, Nagendra Prasad

    2014-12-01

    Millimetre wave (MMW) imaging is gaining tremendous interest among researchers and has potential applications in security checks, standoff personal screening, automotive collision avoidance, and more. Current state-of-the-art imaging techniques, viz. microwave and X-ray imaging, suffer from lower resolution and harmful ionizing radiation, respectively. In contrast, MMW imaging operates at lower power and is non-ionizing, hence medically safe. Despite these favourable attributes, MMW imaging faces various challenges: it is still a relatively unexplored area and lacks a suitable imaging methodology for extracting complete target information. In view of these challenges, an MMW active imaging radar system at 60 GHz was designed for standoff imaging applications. A C-scan (horizontal and vertical scanning) methodology was developed that provides a cross-range resolution of 8.59 mm. The paper further details a suitable target identification and classification methodology. For identification of regular-shape targets, a mean-standard deviation based segmentation technique was formulated and validated using a different target shape. For classification, a probability density function based target material discrimination methodology was proposed and validated on a different dataset. Lastly, a novel artificial neural network based scale- and rotation-invariant image reconstruction methodology is proposed to counter distortions in the image caused by noise, rotation, or scale variations. The designed neural network, once trained with sample images, automatically accounts for these deformations and successfully reconstructs the corrected image for the test targets. The techniques developed in this paper are tested and validated using four regular shapes, viz. rectangle, square, triangle, and circle.
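
    The mean-standard-deviation segmentation step can be sketched as a simple global threshold (my reading of the technique; the exact rule and the value of k are assumptions):

```python
import numpy as np

def mean_std_segment(image, k=1.0):
    """Label pixels brighter than mean + k*std as target."""
    threshold = image.mean() + k * image.std()
    return image > threshold

# Synthetic "scan": dim background with a bright rectangular target.
img = np.full((32, 32), 0.1)
img[10:20, 8:24] = 1.0
mask = mean_std_segment(img)

assert mask[15, 16]           # inside the rectangle
assert not mask[0, 0]         # background
assert mask.sum() == 10 * 16  # exactly the target pixels
```
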

  13. 1991 IEEE International Joint Conference on Neural Networks, Singapore, Nov. 18-21, 1991, Proceedings. Vols. 1-3

    SciTech Connect

    Not Available

    1991-01-01

    The present conference discusses the application of neural networks to associative memories, neurorecognition, hybrid systems, supervised and unsupervised learning, image processing, neurophysiology, sensation and perception, electrical neurocomputers, optimization, robotics, machine vision, sensorimotor control systems, and neurodynamics. Attention is given to such topics as optimal associative mappings in recurrent networks, self-improving associative neural network models, fuzzy activation functions, adaptive pattern recognition with sparse associative networks, efficient question-answering in a hybrid system, the use of abstractions by neural networks, remote-sensing pattern classification, speech recognition with guided propagation, inverse-step competitive learning, and rotational quadratic function neural networks. Also discussed are electrical load forecasting, evolutionarily stable and unstable strategies, the capacity of recurrent networks, neural nets vs. control theory, perceptrons for image recognition, storage capacity of bidirectional associative memories, associative random optimization for control, automatic synthesis of digital neural architectures, self-learning robot vision, and the associative dynamics of chaotic neural networks.

  14. Analysis of IMS spectra using neural networks

    SciTech Connect

    Bell, S.E.

    1992-09-01

    Ion mobility spectrometry (IMS) has been used for over 20 years, and IMS coupled to gas chromatography (GC/IMS) has been used for over 10 years. There still is no systematic approach to IMS spectral interpretation such as exists for mass spectrometry and infrared spectrometry. Neural networks, a form of adaptive pattern recognition, were examined as a method of data reduction for IMS and GC/IMS. A wide variety of volatile organics were analyzed using IMS and GC/IMS and submitted to different networks for identification. Several different networks and data preprocessing algorithms were studied. A network was linked to a simple rule-based expert system and analyzed. The expert system was used to filter out false positive identifications made by the network using retention indices. The various network configurations were compared to other pattern recognition techniques, including human experts. The network performance was comparable to human experts, but responded much faster. Preliminary comparison of the network to other pattern recognition showed comparable performance. Linkage of the network output to the rule-based retention index system yielded the best performance.
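
    The rule-based filtering of network identifications by retention index might look like the following sketch (compound names, index values, and the tolerance window are illustrative assumptions):

```python
# Reject a network identification when the observed GC retention index
# falls outside the expected range for the proposed compound.
expected_ri = {"benzene": 650, "toluene": 760}   # illustrative values

def filter_id(candidate, observed_ri, tolerance=15):
    """Keep the network's identification only if the retention index agrees."""
    return abs(observed_ri - expected_ri[candidate]) <= tolerance

assert filter_id("benzene", 655)        # consistent: accepted
assert not filter_id("toluene", 700)    # false positive: rejected
```
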

  15. Analysis of IMS spectra using neural networks

    SciTech Connect

    Bell, S.E.

    1992-01-01

    Ion mobility spectrometry (IMS) has been used for over 20 years, and IMS coupled to gas chromatography (GC/IMS) has been used for over 10 years. There still is no systematic approach to IMS spectral interpretation such as exists for mass spectrometry and infrared spectrometry. Neural networks, a form of adaptive pattern recognition, were examined as a method of data reduction for IMS and GC/IMS. A wide variety of volatile organics were analyzed using IMS and GC/IMS and submitted to different networks for identification. Several different networks and data preprocessing algorithms were studied. A network was linked to a simple rule-based expert system and analyzed. The expert system was used to filter out false positive identifications made by the network using retention indices. The various network configurations were compared to other pattern recognition techniques, including human experts. The network performance was comparable to human experts, but responded much faster. Preliminary comparison of the network to other pattern recognition showed comparable performance. Linkage of the network output to the rule-based retention index system yielded the best performance.

  16. Task induced modulation of neural oscillations in electrophysiological brain networks.

    PubMed

    Brookes, M J; Liddle, E B; Hale, J R; Woolrich, M W; Luckhoo, H; Liddle, P F; Morris, P G

    2012-12-01

    In recent years, one of the most important findings in systems neuroscience has been the identification of large scale distributed brain networks. These networks support healthy brain function and are perturbed in a number of neurological disorders (e.g. schizophrenia). Their study is therefore an important and evolving focus for neuroscience research. The majority of network studies are conducted using functional magnetic resonance imaging (fMRI), which relies on changes in blood oxygenation induced by neural activity. However recently, a small number of studies have begun to elucidate the electrical origin of fMRI networks by searching for correlations between neural oscillatory signals from spatially separate brain areas in magnetoencephalography (MEG) data. Here we advance this research area. We introduce two methodological extensions to previous independent component analysis (ICA) approaches to MEG network characterisation: (1) we show how to derive pan-spectral networks that combine independent components computed within individual frequency bands; (2) we show how to measure the temporal evolution of each network with millisecond temporal resolution. We apply our approach to ~10 h of MEG data recorded in 28 experimental sessions during 3 separate cognitive tasks, showing that a number of networks could be identified and were robust across time, task, subject and recording session. Further, we show that neural oscillations in those networks are modulated by memory load and task relevance. This study furthers recent findings on electrodynamic brain networks and paves the way for future clinical studies of patients in whom abnormal connectivity is thought to underlie core symptoms. PMID:22906787

  17. The next generation of neural network chips

    SciTech Connect

    Beiu, V.

    1997-08-01

    There have been many national and international neural network research initiatives: USA (DARPA, NIBS), Canada (IRIS), Japan (HFSP) and Europe (BRAIN, GALATEA, NERVES, ELENE NERVES 2) -- just to mention a few. Recent developments in the fields of neural networks, cognitive science, bioengineering and electrical engineering have made it possible to understand more about the functioning of large ensembles of identical processing elements. There are more research papers than ever proposing solutions, and hardware implementations are by no means an exception. Two fields (computing and neuroscience) are interacting in ways nobody could have imagined just several years ago, and -- with the advent of new technologies -- researchers are focusing on trying to copy the Brain. Such an exciting confluence may quite shortly lead to revolutionary new computers, and it is the aim of this invited session to bring to light some of the challenging research aspects dealing with the hardware realizability of future intelligent chips. Present-day (conventional) technology is (still) mostly digital and, thus, occupies wider areas and consumes much more power than the solutions envisaged. The innovative algorithmic and architectural ideas should represent important breakthroughs, paving the way towards making neural network chips available to the industry at competitive prices, in relatively small packages and consuming a fraction of the power required by equivalent digital solutions.

  18. CALIBRATION OF ONLINE ANALYZERS USING NEURAL NETWORKS

    SciTech Connect

    Rajive Ganguli; Daniel E. Walsh; Shaohai Yu

    2003-12-05

    Neural networks were used to calibrate an online ash analyzer at the Usibelli Coal Mine, Healy, Alaska, by relating the Americium and Cesium counts to the ash content. A total of 104 samples were collected from the mine, with 47 being from screened coal and the rest from unscreened coal. Each sample corresponded to 20 seconds of coal on the running conveyor belt. Neural network modeling used the quick stop training procedure; the samples were therefore split into training, calibration and prediction subsets. Special techniques, using genetic algorithms, were developed to split the sample representatively into the three subsets. Two separate approaches were tried. In one approach, the screened and unscreened coal were modeled separately; in the other, a single model was developed for the entire dataset. No advantage was seen from modeling the two subsets separately. The neural network method performed very well on average but not individually; i.e., though each individual prediction was unreliable, the average of a few predictions was close to the true average. Thus, the method demonstrated that the analyzers were accurate at 2-3 minute intervals (averages of 6-9 samples), but not at 20 seconds (single predictions).
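
    A representative three-way split can be sketched with a simple sorted round-robin assignment (a stand-in for the paper's genetic-algorithm technique, with simulated ash values):

```python
import numpy as np

def representative_split(values):
    """Sort by value, then deal samples in turn to three subsets."""
    order = np.argsort(values)
    return order[0::3], order[1::3], order[2::3]

rng = np.random.default_rng(2)
ash = rng.uniform(5.0, 25.0, size=104)     # simulated ash contents, %

train, cal, pred = representative_split(ash)

# Each subset mirrors the overall ash distribution.
for idx in (train, cal, pred):
    assert abs(ash[idx].mean() - ash.mean()) < 0.5
```

    A genetic algorithm optimizes the same objective, subset distributions matching the whole, without requiring the data to be strictly sortable by one variable.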

  19. Efficient implementation of neural network deinterlacing

    NASA Astrophysics Data System (ADS)

    Seo, Guiwon; Choi, Hyunsoo; Lee, Chulhee

    2009-02-01

    Interlaced scanning has been widely used in most broadcasting systems. However, there are some undesirable artifacts such as jagged patterns, flickering, and line twitters. Moreover, most recent TV monitors utilize flat panel display technologies such as LCD or PDP monitors, and these monitors require progressive formats. Consequently, the conversion of interlaced video into progressive video is required in many applications, and a number of deinterlacing methods have been proposed. Recently, deinterlacing methods based on neural networks have been proposed with good results. On the other hand, with high resolution video contents such as HDTV, the amount of video data to be processed is very large. As a result, the processing time and hardware complexity become an important issue. In this paper, we propose an efficient implementation of neural network deinterlacing using polynomial approximation of the sigmoid function. Experimental results show that these approximations provide equivalent performance with a considerable reduction of complexity. This implementation of neural network deinterlacing can be efficiently incorporated into hardware implementations.
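
    The polynomial approximation of the sigmoid can be sketched as follows (the interval, the polynomial order, and the fitted coefficients are assumptions; the paper's choices may differ):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit a cubic on [-4, 4] and clamp outside, where the sigmoid saturates.
zs = np.linspace(-4.0, 4.0, 401)
coeffs = np.polyfit(zs, sigmoid(zs), 3)

def sigmoid_poly(z):
    """Cheap polynomial stand-in for the sigmoid (no exp() needed)."""
    out = np.polyval(coeffs, np.clip(z, -4.0, 4.0))
    return np.clip(out, 0.0, 1.0)

# Maximum error on the working interval stays small.
max_err = np.max(np.abs(sigmoid_poly(zs) - sigmoid(zs)))
assert max_err < 0.05
```

    In hardware, the polynomial needs only a few multiply-adds per activation, which is the source of the complexity reduction the abstract reports.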

  20. Shale Gas reservoirs characterization using neural network

    NASA Astrophysics Data System (ADS)

    Ouadfeul, Sid-Ali; Aliouane, Leila

    2014-05-01

    In this paper, an attempt is made to enhance shale gas reservoir characterization from well-log data using a neural network. The goal is to predict the total organic carbon (TOC) in boreholes where neither core TOC measurements nor a TOC well log exists. A multilayer perceptron (MLP) neural network with three layers is established. The MLP input layer consists of five neurons corresponding to the bulk density, neutron porosity, sonic P-wave slowness and photoelectric absorption coefficient. The hidden layer is formed of nine neurons and the output layer of one neuron corresponding to the TOC log. Application to two boreholes located in the Barnett shale formation, where well A is used as a pilot and well B for propagation, clearly shows the efficiency of the neural network method in improving shale gas reservoir characterization. The established formalism plays an important role in the economics of shale gas plays and in long-term gas energy production.

  1. File access prediction using neural networks.

    PubMed

    Patra, Prashanta Kumar; Sahu, Muktikanta; Mohapatra, Subasish; Samantray, Ronak Kumar

    2010-06-01

    One of the most vexing issues in the design of a high-speed computer is the wide gap between memory and disk access times. To address this problem, static file access predictors have been used. In this paper, we propose dynamic file access predictors based on neural networks that significantly improve the accuracy, success-per-reference, and effective-success-rate-per-reference with proper tuning. In particular, we verified that incorrect predictions were reduced from 53.11% to 43.63% by the proposed neural network prediction method with a standard configuration, compared with the recent popularity (RP) method. With manual tuning for each trace, we were able to improve the misprediction rate and effective-success-rate-per-reference over the standard configuration. Simulations on distributed file system (DFS) traces reveal that an exact-fit radial basis function (RBF) network gives better predictions in high-end systems, whereas a multilayer perceptron (MLP) trained with Levenberg-Marquardt (LM) backpropagation performs better in systems with good computational capability. Probabilistic and competitive predictors are the most suitable for workstations with limited resources, and the former is more efficient than the latter for servers handling the most system calls. Finally, we conclude that the MLP with the LM backpropagation algorithm has a better file prediction success rate than simple perceptron, last successor, stable successor, and best-k-out-of-m predictors. PMID:20421183
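
    One of the baseline predictors named above, the last-successor predictor, is simple enough to sketch (the access trace below is illustrative):

```python
def last_successor_predict(trace):
    """Count correct predictions: the file after A is whatever followed A last time."""
    successor = {}
    correct = 0
    for prev, nxt in zip(trace, trace[1:]):
        if successor.get(prev) == nxt:
            correct += 1
        successor[prev] = nxt          # remember the latest successor
    return correct

trace = ["a", "b", "a", "b", "a", "c", "a", "b"]
# a->b is learned after the first pair, so two later a->b / b->a
# transitions are predicted correctly before a->c breaks the pattern.
assert last_successor_predict(trace) == 2
```
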

  2. Analysis of complex systems using neural networks

    SciTech Connect

    Uhrig, R.E. (Dept. of Nuclear Engineering, Oak Ridge National Lab., TN)

    1992-01-01

    The application of neural networks, alone or in conjunction with other advanced technologies (expert systems, fuzzy logic, and/or genetic algorithms), to some of the problems of complex engineering systems has the potential to enhance the safety, reliability, and operability of these systems. Typically, the measured variables from the systems are analog variables that must be sampled and normalized to expected peak values before they are introduced into neural networks. Often data must be processed to put it into a form more acceptable to the neural network (e.g., a fast Fourier transformation of the time-series data to produce a spectral plot of the data). Specific applications described include: (1) Diagnostics: State of the Plant (2) Hybrid System for Transient Identification, (3) Sensor Validation, (4) Plant-Wide Monitoring, (5) Monitoring of Performance and Efficiency, and (6) Analysis of Vibrations. Although specific examples described deal with nuclear power plants or their subsystems, the techniques described can be applied to a wide variety of complex engineering systems.
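
    The preprocessing described, normalizing a sampled signal to its expected peak and taking an FFT to obtain a spectral plot, can be sketched as follows (the signal values are illustrative):

```python
import numpy as np

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
vibration = 3.0 * np.sin(2 * np.pi * 50 * t)  # 50 Hz component, peak 3.0

normalized = vibration / 3.0                  # scale by expected peak value
spectrum = np.abs(np.fft.rfft(normalized)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

# The spectral plot peaks at the 50 Hz line, the feature a network would see.
assert freqs[np.argmax(spectrum)] == 50.0
```
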

  3. Analysis of complex systems using neural networks

    SciTech Connect

    Uhrig, R.E.

    1992-12-31

    The application of neural networks, alone or in conjunction with other advanced technologies (expert systems, fuzzy logic, and/or genetic algorithms), to some of the problems of complex engineering systems has the potential to enhance the safety, reliability, and operability of these systems. Typically, the measured variables from the systems are analog variables that must be sampled and normalized to expected peak values before they are introduced into neural networks. Often data must be processed to put it into a form more acceptable to the neural network (e.g., a fast Fourier transformation of the time-series data to produce a spectral plot of the data). Specific applications described include: (1) Diagnostics: State of the Plant (2) Hybrid System for Transient Identification, (3) Sensor Validation, (4) Plant-Wide Monitoring, (5) Monitoring of Performance and Efficiency, and (6) Analysis of Vibrations. Although specific examples described deal with nuclear power plants or their subsystems, the techniques described can be applied to a wide variety of complex engineering systems.

  4. Multiresolution training of Kohonen neural networks

    NASA Astrophysics Data System (ADS)

    Tamir, Dan E.

    2007-09-01

    This paper analyses a trade-off between convergence rate and distortion obtained through multi-resolution training of a Kohonen Competitive Neural Network. Empirical results show that a multi-resolution approach can improve the training stage of several unsupervised pattern classification algorithms, including K-means clustering, LBG vector quantization, and competitive neural networks. While previous research concentrated on the convergence rate of on-line unsupervised training, new results reported in this paper show that the multi-resolution approach can be used to improve training quality (measured as a derivative of the rate-distortion function) at the expense of convergence speed. The probability of achieving a desired point in the quality/convergence-rate space of Kohonen Competitive Neural Networks (KCNN) is evaluated using a detailed Monte Carlo set of experiments. It is shown that multi-resolution can reduce the distortion by a factor of 1.5 to 6 while maintaining the convergence rate of traditional KCNN; alternatively, the convergence rate can be improved without loss of quality. The experiments include a controlled set of synthetic data as well as image data.
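
    Winner-take-all competitive training with a coarse-then-fine data schedule can be sketched as follows (the schedule, data, and parameters are assumptions, not the paper's protocol):

```python
import numpy as np

def competitive_train(data, weights, lr=0.2, epochs=20):
    """On-line winner-take-all updates: only the nearest unit moves."""
    for _ in range(epochs):
        for x in data:
            w = np.argmin(np.linalg.norm(weights - x, axis=1))
            weights[w] += lr * (x - weights[w])
    return weights

rng = np.random.default_rng(3)
# Two well-separated 2-D clusters.
data = np.vstack([rng.normal(0.0, 0.1, (100, 2)),
                  rng.normal(5.0, 0.1, (100, 2))])
rng.shuffle(data)

weights = data[:2].copy()                 # two units, seeded from the data
competitive_train(data[::10], weights)    # coarse pass: every 10th sample
competitive_train(data, weights)          # fine pass: full data set

centers = np.sort(weights[:, 0])
assert abs(centers[0] - 0.0) < 0.3        # one unit settles per cluster
assert abs(centers[1] - 5.0) < 0.3
```

    The coarse pass moves the units near their clusters cheaply; the fine pass then refines them on the full data, which is the convergence-rate/quality trade the paper quantifies.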

  5. Vitality of Neural Networks under Reoccurring Catastrophic Failures

    PubMed Central

    Sardi, Shira; Goldental, Amir; Amir, Hamutal; Vardi, Roni; Kanter, Ido

    2016-01-01

    Catastrophic failures are complete and sudden collapses in the activity of large networks such as economics, electrical power grids and computer networks, which typically require a manual recovery process. Here we experimentally show that excitatory neural networks are governed by a non-Poissonian reoccurrence of catastrophic failures, where their repetition time follows a multimodal distribution characterized by a few tenths of a second and tens of seconds timescales. The mechanism underlying the termination and reappearance of network activity is quantitatively shown here to be associated with nodal time-dependent features, neuronal plasticity, where hyperactive nodes damage the response capability of their neighbors. It presents a complementary mechanism for the emergence of Poissonian catastrophic failures from damage conductivity. The effect that hyperactive nodes degenerate their neighbors represents a type of local competition which is a common feature in the dynamics of real-world complex networks, whereas their spontaneous recoveries represent a vitality which enhances reliable functionality. PMID:27530974

  6. Vitality of Neural Networks under Reoccurring Catastrophic Failures.

    PubMed

    Sardi, Shira; Goldental, Amir; Amir, Hamutal; Vardi, Roni; Kanter, Ido

    2016-01-01

    Catastrophic failures are complete and sudden collapses in the activity of large networks such as economics, electrical power grids and computer networks, which typically require a manual recovery process. Here we experimentally show that excitatory neural networks are governed by a non-Poissonian reoccurrence of catastrophic failures, where their repetition time follows a multimodal distribution characterized by a few tenths of a second and tens of seconds timescales. The mechanism underlying the termination and reappearance of network activity is quantitatively shown here to be associated with nodal time-dependent features, neuronal plasticity, where hyperactive nodes damage the response capability of their neighbors. It presents a complementary mechanism for the emergence of Poissonian catastrophic failures from damage conductivity. The effect that hyperactive nodes degenerate their neighbors represents a type of local competition which is a common feature in the dynamics of real-world complex networks, whereas their spontaneous recoveries represent a vitality which enhances reliable functionality. PMID:27530974

  7. Deep learning in neural networks: an overview.

    PubMed

    Schmidhuber, Jürgen

    2015-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks. PMID:25462637

  8. Neural network method for characterizing video cameras

    NASA Astrophysics Data System (ADS)

    Zhou, Shuangquan; Zhao, Dazun

    1998-08-01

    This paper presents a neural network method for characterizing a color video camera. A multilayer feedforward network, trained with the error back-propagation learning rule, is used as a nonlinear transformer to model the camera, realizing a mapping from the CIELAB color space to RGB color space. Using a SONY video camera, a D65 illuminant, a Pritchard spectroradiometer, 410 JIS color charts as training data, and 36 charts as testing data, results show that the mean error on the training data is 2.9 and that on the testing data is 4.0 in a 256³ RGB space.
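
    A color-space mapping network of this kind can be sketched as a small feedforward net trained by batch backpropagation; the data here are a synthetic stand-in for the measured chart pairs, and the layer sizes and learning rate are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for (CIELAB, RGB) training pairs: a smooth 3 -> 3 map
X = rng.uniform(-1.0, 1.0, (400, 3))
Y = np.tanh(X @ rng.normal(size=(3, 3)))

W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)   # hidden layer
W2 = rng.normal(0.0, 0.5, (16, 3)); b2 = np.zeros(3)    # output layer

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

mse0 = np.mean((forward(X)[1] - Y) ** 2)
lr = 0.05
for _ in range(500):                         # plain batch backpropagation
    H, P = forward(X)
    G = 2.0 * (P - Y) / len(X)               # dMSE/dP
    GH = (G @ W2.T) * (1.0 - H ** 2)         # backprop through tanh
    W2 -= lr * (H.T @ G); b2 -= lr * G.sum(axis=0)
    W1 -= lr * (X.T @ GH); b1 -= lr * GH.sum(axis=0)
mse1 = np.mean((forward(X)[1] - Y) ** 2)
```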

  9. A space-time neural network

    NASA Technical Reports Server (NTRS)

    Villarreal, James A.; Shelton, Robert O.

    1991-01-01

    Introduced here is a novel technique which adds the dimension of time to the well known back propagation neural network algorithm. Cited here are several reasons why the inclusion of automated spatial and temporal associations are crucial to effective systems modeling. An overview of other works which also model spatiotemporal dynamics is furnished. A detailed description is given of the processes necessary to implement the space-time network algorithm. Several demonstrations that illustrate the capabilities and performance of this new architecture are given.
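
    One simple way to add the dimension of time to a backpropagation network is a tapped delay line that presents a static net with a sliding window of past samples; a minimal sketch, with a linear least-squares readout standing in for the trained network:

```python
import numpy as np

def tapped_delay(x, n_taps):
    """Rows of the last n_taps samples: temporal context for a static net."""
    return np.array([x[i - n_taps:i] for i in range(n_taps, len(x))])

t = np.arange(200)
x = np.sin(0.2 * t)                        # toy temporal signal
X = tapped_delay(x, 4)                     # inputs: 4 past samples
y = x[4:]                                  # target: the next sample
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # linear readout stand-in
pred = X @ w
```

    A sine obeys a second-order linear recurrence, so the windowed readout predicts it essentially exactly; richer spatiotemporal behavior is what motivates the nonlinear space-time architecture.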

  10. Desynchronization in diluted neural networks

    SciTech Connect

    Zillmer, Ruediger; Livi, Roberto; Politi, Antonio; Torcini, Alessandro

    2006-09-15

    The dynamical behavior of a weakly diluted fully inhibitory network of pulse-coupled spiking neurons is investigated. Upon increasing the coupling strength, a transition from regular to stochasticlike regime is observed. In the weak-coupling phase, a periodic dynamics is rapidly approached, with all neurons firing with the same rate and mutually phase locked. The strong-coupling phase is characterized by an irregular pattern, even though the maximum Lyapunov exponent is negative. The paradox is solved by drawing an analogy with the phenomenon of 'stable chaos', i.e., by observing that the stochasticlike behavior is 'limited' to an exponentially long (with the system size) transient. Remarkably, the transient dynamics turns out to be stationary.
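
    The class of model studied here can be sketched as pulse-coupled integrate-and-fire neurons with diluted inhibitory wiring; the parameters and update rule below are a simplified illustration, not the paper's exact model:

```python
import numpy as np

def simulate(n=20, steps=5000, dt=0.1, g=0.2, p_connect=0.8, seed=0):
    """Leaky integrate-and-fire neurons; a spike sends an inhibitory pulse
    to each connected neighbor (diluted, fully inhibitory coupling)."""
    rng = np.random.default_rng(seed)
    C = (rng.random((n, n)) < p_connect) & ~np.eye(n, dtype=bool)
    v = rng.random(n)                      # membrane potentials in [0, 1)
    spikes = []
    for t in range(steps):
        v += dt * (1.2 - v)                # leak toward suprathreshold drive
        fired = v >= 1.0
        if fired.any():
            spikes.append((t, np.flatnonzero(fired)))
            v[fired] = 0.0                 # reset after firing
            v = np.maximum(v - g * C[:, fired].sum(axis=1), 0.0)  # inhibition
    return spikes

spikes = simulate()
```

    Sweeping the coupling strength g in such a model is the kind of experiment that exposes the regular-to-irregular transition described in the abstract.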

  11. Analysis of the DWPF glass pouring system using neural networks

    SciTech Connect

    Calloway, T.B. Jr.; Jantzen, C.M.; Medich, L.; Spennato, N.

    1997-08-05

    Neural networks were used to determine the sensitivity of 39 selected Melter/Melter Off Gas and Melter Feed System process parameters as related to the Defense Waste Processing Facility (DWPF) Melter Pour Spout Pressure during the overall analysis and resolution of the DWPF glass production and pouring issues. Two different commercial neural network software packages were used for this analysis. Models were developed and used to determine the critical parameters which accurately describe the DWPF Pour Spout Pressure. The model created using a low-end software package has a root mean square error of ± 0.35 inwc (< 2% of the instrument's measured range, R² = 0.77) with respect to the plant data used to validate and test the model. The model created using a high-end software package has an R² = 0.97 with respect to the plant data used to validate and test the model. The models developed for this application identified the key process parameters which contribute to the control of the DWPF Melter Pour Spout pressure during glass pouring operations. The relative contribution and ranking of the selected parameters was determined using the modeling software. Neural network computing software was determined to be a cost-effective software tool for process engineers performing troubleshooting and system performance monitoring activities. In remote high-level waste processing environments, neural network software is especially useful as a replacement for sensors which have failed and are costly to replace. The software can be used to accurately model critical remotely installed plant instrumentation. When the instrumentation fails, the software can be used to provide a soft sensor to replace the actual sensor, thereby decreasing the overall operating cost. Additionally, neural network software tools require very little training and are especially useful in mining or selecting critical variables from the vast amounts of data collected from process computers.
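
    The soft-sensor idea, reconstructing a failed instrument's reading from the other process signals, can be sketched with a linear least-squares model standing in for the commercial neural network software; the plant data below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for plant data: four process signals plus one
# "pressure" sensor whose reading depends on them (with measurement noise)
S = rng.normal(size=(300, 4))
pressure = S @ np.array([0.8, -0.3, 0.5, 0.1]) + 0.01 * rng.normal(size=300)

# Fit the model while the physical sensor still works
w, *_ = np.linalg.lstsq(S, pressure, rcond=None)

# After a sensor failure, the model supplies a "soft" reading
soft = S @ w
rmse = np.sqrt(np.mean((soft - pressure) ** 2))
```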

  12. Optical neural network system for pose determination of spinning satellites

    NASA Technical Reports Server (NTRS)

    Lee, Andrew; Casasent, David

    1990-01-01

    An optical neural network architecture and algorithm based on a Hopfield optimization network are presented for multitarget tracking. This tracker utilizes a neuron for every possible target track, and a quadratic energy function of neural activities which is minimized using gradient descent neural evolution. The neural net tracker is demonstrated as part of a system for determining position and orientation (pose) of spinning satellites with respect to a robotic spacecraft. The input to the system is time sequence video from a single camera. Novelty detection and filtering are utilized to locate and segment novel regions from the input images. The neural net multitarget tracker determines the correspondences (or tracks) of the novel regions as a function of time, and hence the paths of object (satellite) parts. The path traced out by a given part or region is approximately elliptical in image space, and the position, shape and orientation of the ellipse are functions of the satellite geometry and its pose. Having a geometric model of the satellite, and the elliptical path of a part in image space, the three-dimensional pose of the satellite is determined. Digital simulation results using this algorithm are presented for various satellite poses and lighting conditions.
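
    The tracker minimizes a quadratic energy function of neural activities by gradient descent; a minimal numerical sketch (the weights and biases here are toy values, not a tracking energy):

```python
import numpy as np

def hopfield_minimize(W, b, steps=100, dt=0.1, seed=0):
    """Projected gradient descent on the quadratic energy
    E(s) = -1/2 s^T W s - b^T s, with activities s clipped to [0, 1]."""
    rng = np.random.default_rng(seed)
    s = rng.random(len(b))
    energy = lambda s: -0.5 * s @ W @ s - b @ s
    energies = [energy(s)]
    for _ in range(steps):
        s = np.clip(s + dt * (W @ s + b), 0.0, 1.0)  # step along -dE/ds
        energies.append(energy(s))
    return s, energies

W = np.array([[0.0, 0.5], [0.5, 0.0]])   # toy symmetric weights
b = np.array([0.2, 0.2])                 # toy biases
s, energies = hopfield_minimize(W, b)
```

    In the tracking application, W and b would encode the pairwise track-compatibility costs, so the settled activities select a consistent set of tracks.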

  13. Reducing neural network training time with parallel processing

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1995-01-01

    Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near-optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls of training smaller neural networks in parallel. These guidelines allow the engineer to determine the number of nodes on the hidden layer of the smaller neural networks, to choose the initial training weights, and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.
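
    The decomposition can be sketched by training one small model per output in parallel; the linear least-squares fit below is a stand-in for each smaller neural network, and the data are synthetic:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 4))
# Three outputs depending on different inputs -> three decoupled subproblems
Y = np.column_stack([np.sin(X[:, 0]), X[:, 1] * X[:, 2], np.cos(X[:, 3])])

def train_subnet(y):
    """Train one small model per output (a linear least-squares stand-in
    for one of the smaller neural networks)."""
    A = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

with ThreadPoolExecutor() as pool:        # the subnets train independently
    weights = list(pool.map(train_subnet, Y.T))
```

    The guidelines in the paper concern exactly what this sketch glosses over: sizing each subnet and capturing interactions that do not decouple cleanly across outputs.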

  14. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters, with the aim of improving the ability of the polynomial derivative-term series to approximate complicated periodic functions, since simple low-order polynomials cannot fully reproduce complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. PMID:26547244

  15. Neural predictive control for active buffet alleviation

    NASA Astrophysics Data System (ADS)

    Pado, Lawrence E.; Lichtenwalner, Peter F.; Liguore, Salvatore L.; Drouin, Donald

    1998-06-01

    The adaptive neural control of aeroelastic response (ANCAR) and the affordable loads and dynamics independent research and development (IRAD) programs at the Boeing Company jointly examined using neural-network-based active control technology for alleviating undesirable vibration and aeroelastic response in a scale-model aircraft vertical tail. The potential benefits of adaptive control include reducing aeroelastic response associated with buffet and atmospheric turbulence, increasing flutter margins, and reducing response associated with nonlinear phenomena like limit cycle oscillations. By reducing vibration levels and thus loads, aircraft structures can have lower acquisition cost, reduced maintenance, and extended lifetimes. Wind tunnel tests were undertaken on a rigid 15% scale aircraft in Boeing's mini-speed wind tunnel, which is used for testing at very low air speeds up to 80 mph. The model included a dynamically scaled flexible tail consisting of an aluminum spar with balsa wood cross sections and a hydraulically powered rudder. Neural predictive control was used to actuate the vertical tail rudder in response to strain gauge feedback to alleviate buffeting effects. First-mode RMS strain reduction of 50% was achieved. The neural predictive control system was developed and implemented by the Boeing Company to provide an intelligent, adaptive control architecture for smart structures applications with automated synthesis, self-optimization, real-time adaptation, nonlinear control, and fault tolerance capabilities. It is designed to solve complex control problems through a process of automated synthesis, eliminating costly control design and surpassing it in many instances by accounting for real-world nonlinearities.
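
    The cancellation idea behind predictive control, using a model's one-step prediction to choose an opposing control input, can be sketched on a hypothetical one-degree-of-freedom linear plant (the known model here stands in for the neural predictor):

```python
import numpy as np

# Hypothetical 1-DOF plant: x' = a*x + b*u + disturbance. The known (a, b)
# model stands in for the trained neural predictor.
a, b = 0.95, 0.5
rng = np.random.default_rng(0)
x_free = x_ctrl = 0.0
amp_free = amp_ctrl = 0.0
for _ in range(500):
    d = rng.normal(0.0, 0.1)         # buffet-like random disturbance
    x_free = a * x_free + d          # uncontrolled response
    u = -a * x_ctrl / b              # control chosen to cancel the prediction
    x_ctrl = a * x_ctrl + b * u + d  # controlled response
    amp_free += x_free ** 2
    amp_ctrl += x_ctrl ** 2
```

    Because the controller cancels the predicted free response each step, the controlled state carries only the fresh disturbance, so its accumulated squared amplitude is far below the uncontrolled case.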

  16. A neural network short-term forecast of significant thunderstorms

    SciTech Connect

    McCann, D.W.

    1992-09-01

    Neural networks, artificial-intelligence tools that excel in pattern recognition, are reviewed, and a 3-7-h significant thunderstorm forecast developed with this technique is discussed. Two neural networks learned to forecast significant thunderstorms from fields of surface-based lifted index and surface moisture convergence. These networks are sensitive to the patterns that skilled forecasters recognize as occurring prior to strong thunderstorms. The two neural networks are combined operationally at the National Severe Storms Forecast Center into a single hourly product that enhances pattern-recognition skills. Examples of neural network products are shown, and their potential impact on significant thunderstorm forecasting is demonstrated. 22 refs.

  17. Automated brain segmentation using neural networks

    NASA Astrophysics Data System (ADS)

    Powell, Stephanie; Magnotta, Vincent; Johnson, Hans; Andreasen, Nancy

    2006-03-01

    Automated methods to delineate brain structures of interest are required to analyze large amounts of imaging data such as that being collected in several ongoing multi-center studies. We have previously reported on using artificial neural networks (ANN) to define subcortical brain structures such as the thalamus (0.825), caudate (0.745), and putamen (0.755). One of the inputs into the ANN is the a priori probability of a structure existing at a given location. In this previous work, the a priori probability information was generated in Talairach space using a piecewise linear registration. In this work we have increased the dimensionality of this registration using Thirion's demons registration algorithm. The input vector consisted of the a priori probability, spherical coordinates, and an iris of surrounding signal intensity values. The output of the neural network determined whether the voxel was defined as one of the N regions used for training. Training was performed using a standard back-propagation algorithm. The ANN was trained on a set of 15 images for 750,000,000 iterations. The resulting ANN weights were then applied to 6 test images not part of the training set. The relative overlap calculated for each structure was 0.875 for the thalamus, 0.845 for the caudate, and 0.814 for the putamen. With the modifications to the neural net algorithm and the use of multi-dimensional registration, we found substantial improvement in the automated segmentation method. The resulting segmented structures are as reliable as manual raters, and the output of the neural network can be used without additional rater intervention.

  18. Quantitative structure-property relationship (QSPR) for the adsorption of organic compounds onto activated carbon cloth: Comparison between multiple linear regression and neural network

    SciTech Connect

    Brasquet, C.; Bourges, B.; Le Cloirec, P.

    1999-12-01

    The adsorption of 55 organic compounds is carried out onto a recently discovered adsorbent, activated carbon cloth. Isotherms are modeled using the classical Freundlich model, and the large database generated allows qualitative assumptions about the adsorption mechanism. However, to confirm these assumptions, a quantitative structure-property relationship methodology is used to assess the correlations between an adsorbability parameter (expressed using the Freundlich parameter K) and topological indices related to the compounds' molecular structure (molecular connectivity indices, MCI). This correlation is set up by means of two different statistical tools, multiple linear regression (MLR) and neural network (NN). A principal component analysis is carried out to generate new and uncorrelated variables. It enables the relations between the MCI to be analyzed, but the multiple linear regression assessed using the principal components (PCs) has poor statistical quality and introduces high-order PCs, too inaccurate for an explanation of the adsorption mechanism. The correlations are thus set up using the original variables (MCI), and both statistical tools, multiple linear regression and neural network, are compared from a descriptive and predictive point of view. To compare the predictive ability of both methods, a test database of 10 organic compounds is used.
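
    The MLR side of such a QSPR comparison can be sketched directly; the descriptors below are synthetic stand-ins for molecular connectivity indices, with a mildly nonlinear target term that is what gives a neural network room to outperform the linear model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for molecular connectivity indices (MCI)
X = rng.normal(size=(120, 3))
# Adsorbability target (log K): mostly linear, plus one nonlinear term
logK = 0.6 * X[:, 0] - 0.2 * X[:, 1] + 0.4 * np.tanh(X[:, 2])

A = np.column_stack([X, np.ones(len(X))])        # add intercept column
w, *_ = np.linalg.lstsq(A, logK, rcond=None)     # multiple linear regression
resid = A @ w - logK
r2 = 1.0 - (resid ** 2).sum() / ((logK - logK.mean()) ** 2).sum()
```

    The residual that MLR cannot remove comes entirely from the tanh term; a small NN fit to the same descriptors would capture it.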

  19. Detection of Wildfires with Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Umphlett, B.; Leeman, J.; Morrissey, M. L.

    2011-12-01

    Currently, fire detection for the National Oceanic and Atmospheric Administration (NOAA) using satellite data is accomplished with algorithms and error checking by human analysts. Artificial neural networks (ANNs) have been shown to be more accurate than algorithms or statistical methods for applications dealing with multiple datasets of complex observed data in the natural sciences. ANNs also deal well with multiple data sources that are not all equally reliable or equally informative to the problem. An ANN was tested to evaluate its accuracy in detecting wildfires utilizing polar orbiter numerical data from the Advanced Very High Resolution Radiometer (AVHRR). Datasets containing locations of known fires were gathered from NOAA's polar orbiting satellites via the Comprehensive Large Array-data Stewardship System (CLASS). The data was then calibrated and navigation-corrected using the Environment for Visualizing Images (ENVI). Fires were located with the aid of shapefiles generated via ArcGIS. Afterwards, several smaller ten-pixel by ten-pixel datasets were created for each fire (using the ENVI-corrected data). Several datasets were created for each fire in order to vary fire position and avoid training the ANN to look only at fires in the center of an image. Datasets containing no fires were also created. A basic pattern recognition neural network was established with the MATLAB neural network toolbox. The datasets were then randomly separated into categories used to train, validate, and test the ANN. To prevent overfitting of the data, the mean squared error (MSE) of the network was monitored, and training was stopped when the MSE began to rise. Networks were tested using each channel of the AVHRR data independently, channels 3a and 3b combined, and all six channels. The number of hidden neurons for each input set was also varied between 5-350 in steps of 5 neurons. Each configuration was run 10 times, totaling about 4,200 individual network evaluations.
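
    The early-stopping rule described above, halting training when the validation MSE begins to rise, can be sketched generically; the patience value and the canned validation curve are illustrative:

```python
import numpy as np

def train_with_early_stopping(train_epoch, val_mse, max_epochs=500, patience=5):
    """Generic early stopping: halt when validation MSE stops improving."""
    best, since_best = np.inf, 0
    for epoch in range(max_epochs):
        train_epoch()                 # one epoch of weight updates
        m = val_mse()
        if m < best:
            best, since_best = m, 0
        else:
            since_best += 1
            if since_best >= patience:
                break                 # validation error began to rise
    return best, epoch

# Demo with a canned validation curve that bottoms out at 0.3
curve = iter([1.0, 0.5, 0.3, 0.31, 0.32, 0.33, 0.34, 0.35, 0.9])
best, stopped_at = train_with_early_stopping(lambda: None, lambda: next(curve))
```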

  20. Persistent neural activity in head direction cells

    NASA Technical Reports Server (NTRS)

    Taube, Jeffrey S.; Bassett, Joshua P.; Oman, C. M. (Principal Investigator)

    2003-01-01

    Many neurons throughout the rat limbic system discharge in relation to the animal's directional heading with respect to its environment. These so-called head direction (HD) cells exhibit characteristics of persistent neural activity. This article summarizes where HD cells are found, their major properties, and some of the important experiments that have been conducted to elucidate how this signal is generated. The number of HD and angular head velocity cells was estimated for several brain areas involved in the generation of the HD signal, including the postsubiculum, anterior dorsal thalamus, lateral mammillary nuclei and dorsal tegmental nucleus. The HD cell signal has many features in common with what is known about how neural integration is accomplished in the oculomotor system. The nature of the HD cell signal makes it an attractive candidate for using neural network models to elucidate the signal's underlying mechanisms. The conditions that any network model must satisfy in order to accurately represent how the nervous system generates this signal are highlighted and areas where key information is missing are discussed.

  1. Tumor diagnosis using the backpropagation neural network method

    NASA Astrophysics Data System (ADS)

    Ma, Lixing; Sukuta, Sydney; Bruch, Reinhard F.; Afanasyeva, Natalia I.; Looney, Carl G.

    1998-04-01

    For characterization of skin cancer, an artificial neural network method has been developed to diagnose normal tissue, benign tumor, and melanoma. The pattern recognition is based on a three-layer neural network fuzzy learning system. In this study, the input neuron data set is the Fourier transform IR spectrum obtained by a new fiberoptic evanescent wave Fourier transform IR spectroscopy method in the range of 1480 to 1850 cm⁻¹. Ten input features are extracted from the absorbance values in this region. A single hidden layer of neural nodes with sigmoid activation functions clusters the feature space into small subclasses, and the output nodes are separated into different nonconvex classes to permit nonlinear discrimination of disease states. The output is classified as three classes: normal tissue, benign tumor, and melanoma. The results obtained from the neural network pattern recognition are shown to be consistent with traditional medical diagnosis. Input features have also been extracted from the absorbance spectra using chemical factor analysis. These abstract features, or factors, are also used in the classification.

  2. Neural network computer simulation of medical aerosols.

    PubMed

    Richardson, C J; Barlow, D J

    1996-06-01

    Preliminary investigations have been conducted to assess the potential for using artificial neural networks to simulate aerosol behaviour, with a view to employing this type of methodology in the evaluation and design of pulmonary drug-delivery systems. Details are presented of the general purpose software developed for these tasks; it implements a feed-forward back-propagation algorithm with weight decay and connection pruning, the user having complete run-time control of the network architecture and mode of training. A series of exploratory investigations is then reported in which different network structures and training strategies are assessed in terms of their ability to simulate known patterns of fluid flow in simple model systems. The first of these involves simulations of cellular automata-generated data for fluid flow through a partially obstructed two-dimensional pipe. The artificial neural networks are shown to be highly successful in simulating the behaviour of this simple linear system, but with important provisos relating to the information content of the training data and the criteria used to judge when the network is properly trained. A second set of investigations is then reported in which similar networks are used to simulate patterns of fluid flow through aerosol generation devices, using training data furnished through rigorous computational fluid dynamics modelling. These more complex three-dimensional systems are modelled with equal success. It is concluded that carefully tailored, well trained networks could provide valuable tools not just for predicting but also for analysing the spatial dynamics of pharmaceutical aerosols. PMID:8832491
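
    The weight-decay-plus-pruning update described above can be sketched as follows; the decay coefficient and pruning threshold are illustrative assumptions:

```python
import numpy as np

def decay_and_prune(W, grad, lr=0.1, decay=1e-3, prune_below=1e-2):
    """One gradient step with L2 weight decay, then connection pruning:
    weights whose magnitude falls below the threshold are zeroed."""
    W = W - lr * (grad + decay * W)     # decay shrinks unused weights
    W[np.abs(W) < prune_below] = 0.0    # prune near-zero connections
    return W

W = np.array([[0.5, 0.005], [-0.3, 0.009]])
W_new = decay_and_prune(W, np.zeros_like(W))
```

    Weight decay steadily shrinks connections the data do not support, and pruning then removes them outright, which keeps the network small and discourages overfitting.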

  3. Neural Network Model of Memory Retrieval

    PubMed Central

    Recanatesi, Stefano; Katkov, Mikhail; Romani, Sandro; Tsodyks, Misha

    2015-01-01

    Human memory can store large amounts of information. Nevertheless, recalling is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model for memory retrieval based on a Hopfield neural network where transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding to (1) single memory representations and (2) intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predicts the distribution of time intervals required to recall new memory items observed in experiments. It shows that items having a larger number of neurons in their representation are statistically easier to recall, and it reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval that is broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013). PMID:26732491
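
    The Hopfield retrieval mechanism at the core of the model can be sketched with the standard Hebbian outer-product rule and synchronous sign updates (a textbook reduction, without the oscillating inhibition and noise the paper adds):

```python
import numpy as np

def hopfield_recall(patterns, probe, steps=10):
    """Hebbian Hopfield net: store +/-1 patterns, then let synchronous
    sign updates pull a noisy probe to the nearest stored memory."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]         # outer-product (Hebbian) weights
    np.fill_diagonal(W, 0.0)         # no self-connections
    s = np.array(probe, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0              # break ties toward +1
    return s

a = np.array([1] * 8 + [-1] * 8, dtype=float)
b = np.array([1, -1] * 8, dtype=float)
probe = a.copy()
probe[:2] *= -1                      # corrupt two bits of memory `a`
recalled = hopfield_recall([a, b], probe)
```

    In the paper's model, oscillating inhibition destabilizes such attractor states one after another, so the dynamics hops between memories instead of settling permanently in one.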

  4. Anti AIDS drug design with the help of neural networks

    NASA Astrophysics Data System (ADS)

    Tetko, I. V.; Tanchuk, V. Yu.; Luik, A. I.

    1995-04-01

    Artificial neural networks were used to analyze and predict human immunodeficiency virus type 1 reverse transcriptase inhibitors. The training and control sets included 44 molecules (most of them well-known substances such as AZT, TIBO, dde, etc.). The biological activities of the molecules were taken from the literature and rated in two classes, active and inactive compounds, according to their values. We used topological indices as molecular parameters. The four most informative parameters (out of 46) were chosen using cluster analysis and an original input-parameter estimation procedure, and were used to predict activities of both control and new (synthesized in our institute) molecules. We applied a pruning network algorithm and network ensembles to obtain the final classifier and avoid chance correlation. An increase in neural network generalization on the data from the control set was observed when using the aforementioned methods. The prognosis for the new molecules revealed one molecule as possibly active, which was confirmed by further biological tests. The compound was as active as AZT and an order of magnitude less toxic. The active compound is currently being evaluated in preclinical trials as a possible drug for anti-AIDS therapy.

  5. Face Detection Using GPU-Based Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Nasse, Fabian; Thurau, Christian; Fink, Gernot A.

    In this paper, we consider the problem of face detection under pose variations. Unlike other contributions, a focus of this work resides in efficient implementation utilizing the computational power of modern graphics cards. The proposed system consists of a parallelized implementation of convolutional neural networks (CNNs) with a special emphasis on also parallelizing the detection process. Experimental validation in a smart conference room with 4 active ceiling-mounted cameras shows a dramatic speed gain under real-life conditions.

  6. Neural networks: Implementing associative memory models in neurocomputers

    SciTech Connect

    Miller, R.K.

    1987-01-01

    Neurocomputers are a new breed of computer based on models of the human brain. Applications include image processing, vision, speech recognition, fuzzy knowledge processing, data/sensor fusion, and coordination and control of robot motion. This report explains the workings of neural networks in non-theoretical terminology. Potential applications are explained. The activities of virtually every company and research group in the field are assessed. Bibliography contains over 400 citations.

  7. Neural Network Approach To Sensory Fusion

    NASA Astrophysics Data System (ADS)

    Pearson, John C.; Gelfand, Jack J.; Sullivan, W. E.; Peterson, Richard M.; Spence, Clay D.

    1988-08-01

    We present a neural network model for sensory fusion based on the design of the visual/acoustic target localization system of the barn owl. This system adaptively fuses its separate visual and acoustic representations of object position into a single joint representation used for head orientation. The building block in this system, as in much of the brain, is the neuronal map. Neuronal maps are large arrays of locally interconnected neurons that represent information in a map-like form, that is, parameter values are systematically encoded by the position of neural activation in the array. The computational load is distributed to a hierarchy of maps, and the computation is performed in stages by transforming the representation from map to map via the geometry of the projections between the maps and the local interactions within the maps. For example, azimuthal position is computed from the frequency and binaural phase information encoded in the signals of the acoustic sensors, while elevation is computed in a separate stream using binaural intensity information. These separate streams are merged in their joint projection onto the external nucleus of the inferior colliculus, a two-dimensional array of cells which contains a map of acoustic space. This acoustic map, and the visual map of the retina, jointly project onto the optic tectum, creating a fused visual/acoustic representation of position in space that is used for object localization. In this paper we describe our mathematical model of the stage of visual/acoustic fusion in the optic tectum. The model assumes that the acoustic projection from the external nucleus onto the tectum is roughly topographic and one-to-many, while the visual projection from the retina onto the tectum is topographic and one-to-one.
A simple process of self-organization alters the strengths of the acoustic connections, effectively forming a focused beam of strong acoustic connections whose inputs are coincident with the visual inputs

  8. Network burst dynamics under heterogeneous cholinergic modulation of neural firing properties and heterogeneous synaptic connectivity

    PubMed Central

    Knudstrup, Scott; Zochowski, Michal; Booth, Victoria

    2016-01-01

    The characteristics of neural network activity depend on intrinsic neural properties and synaptic connectivity in the network. In brain networks, both of these properties are critically affected by the type and levels of neuromodulators present. The expression of many of the most powerful neuromodulators, including acetylcholine (ACh), varies tonically and phasically with behavioural state, leading to dynamic, heterogeneous changes in intrinsic neural properties and synaptic connectivity properties. Namely, ACh significantly alters neural firing properties as measured by the phase response curve in a manner that has been shown to alter the propensity for network synchronization. The aim of this simulation study was to build an understanding of how heterogeneity in cholinergic modulation of neural firing properties and heterogeneity in synaptic connectivity affect the initiation and maintenance of synchronous network bursting in excitatory networks. We show that cells that display different levels of ACh modulation have differential roles in generating network activity: weakly modulated cells are necessary for burst initiation and provide synchronizing drive to the rest of the network, whereas strongly modulated cells provide the overall activity level necessary to sustain burst firing. By applying several quantitative measures of network activity, we further show that the existence of network bursting and its characteristics, such as burst duration and intraburst synchrony, are dependent on the fraction of cell types providing the synaptic connections in the network. These results suggest mechanisms underlying ACh modulation of brain oscillations and the modulation of seizure activity during sleep states. PMID:26869313

  9. Network burst dynamics under heterogeneous cholinergic modulation of neural firing properties and heterogeneous synaptic connectivity.

    PubMed

    Knudstrup, Scott; Zochowski, Michal; Booth, Victoria

    2016-05-01

    The characteristics of neural network activity depend on intrinsic neural properties and synaptic connectivity in the network. In brain networks, both of these properties are critically affected by the type and levels of neuromodulators present. The expression of many of the most powerful neuromodulators, including acetylcholine (ACh), varies tonically and phasically with behavioural state, leading to dynamic, heterogeneous changes in intrinsic neural properties and synaptic connectivity properties. Namely, ACh significantly alters neural firing properties as measured by the phase response curve in a manner that has been shown to alter the propensity for network synchronization. The aim of this simulation study was to build an understanding of how heterogeneity in cholinergic modulation of neural firing properties and heterogeneity in synaptic connectivity affect the initiation and maintenance of synchronous network bursting in excitatory networks. We show that cells that display different levels of ACh modulation have differential roles in generating network activity: weakly modulated cells are necessary for burst initiation and provide synchronizing drive to the rest of the network, whereas strongly modulated cells provide the overall activity level necessary to sustain burst firing. By applying several quantitative measures of network activity, we further show that the existence of network bursting and its characteristics, such as burst duration and intraburst synchrony, are dependent on the fraction of cell types providing the synaptic connections in the network. These results suggest mechanisms underlying ACh modulation of brain oscillations and the modulation of seizure activity during sleep states. PMID:26869313

  10. Advances in Artificial Neural Networks - Methodological Development and Application

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  11. Multistage neural network model for dynamic scene analysis

    SciTech Connect

    Ajjimarangsee, P.

    1989-01-01

This research is concerned with dynamic scene analysis. The goal of scene analysis is to recognize objects and have a meaningful interpretation of the scene from which images are obtained. The task of the dynamic scene analysis process generally consists of region identification, motion analysis and object recognition. The objective of this research is to develop clustering algorithms using a neural network approach and to investigate a multi-stage neural network model for region identification and motion analysis. The research is separated into three parts. First, a clustering algorithm using Kohonen's self-organizing feature map network is developed to be capable of generating continuous membership-valued outputs. A newly developed version of the updating algorithm of the network is introduced to achieve a high degree of parallelism. A neural network model for the fuzzy c-means algorithm is proposed. In the second part, the parallel algorithms of a neural network model for clustering using the self-organizing feature maps approach and a neural network that models the fuzzy c-means algorithm are modified for implementation on a distributed memory parallel architecture. In the third part, supervised and unsupervised neural network models for motion analysis are investigated. For the supervised neural network, a three-layer perceptron network is trained by a series of images to recognize the movement of the objects. For the unsupervised neural network, a self-organizing feature mapping network learns to recognize the movement of the objects without an explicit training phase.
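The abstract above combines a Kohonen self-organizing feature map with fuzzy-c-means-style continuous memberships. A minimal sketch of both pieces follows; the learning-rate and neighborhood schedules are hypothetical stand-ins, and the paper's parallel update is not reproduced:

```python
import numpy as np

def som_train_step(weights, x, t, n_steps, lr0=0.5, sigma0=2.0):
    """One Kohonen SOM update on a (rows, cols, dim) weight grid.

    lr0 and sigma0 are illustrative choices, not the paper's schedule.
    """
    rows, cols, _ = weights.shape
    # Locate the best-matching unit (BMU) for input x.
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))
    # Decay the learning rate and neighborhood width over training.
    frac = t / n_steps
    lr = lr0 * (1.0 - frac)
    sigma = sigma0 * (1.0 - frac) + 1e-3
    # Gaussian neighborhood around the BMU on the grid.
    gy, gx = np.mgrid[0:rows, 0:cols]
    grid_d2 = (gy - bmu[0]) ** 2 + (gx - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2.0 * sigma ** 2))
    # Move neighboring units toward x, weighted by h.
    weights += lr * h[:, :, None] * (x - weights)
    return weights

def membership(weights, x, m=2.0):
    """Fuzzy-c-means-style continuous membership of x in each unit."""
    d = np.linalg.norm(weights.reshape(-1, weights.shape[-1]) - x, axis=1) + 1e-9
    u = d ** (-2.0 / (m - 1.0))
    return u / u.sum()
```

The membership function is the usual fuzzy c-means formula applied to the map's weight vectors, which matches the abstract's idea of continuous membership-valued outputs without claiming the authors' exact update.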

  12. The strategic organizational use of neural networks: An exploratory study

    SciTech Connect

    Wilson, R.L.

    1990-01-01

Management of emerging technologies in organizations may be handled by neural networks, a 'brain metaphor' of information processing. In this study, technical and managerial issues surrounding the implementation of a neural network in an organizational decision setting are investigated. The study has three main emphases. (1) An exploratory experimental effort studied the effects of a number of technical implementation factors on the accuracy of a trained neural network. Results indicated that the composition of the training and evaluation sets can significantly affect the actual and perceived decision-making accuracy. (2) A decision-support framework illustrated further important issues that must be considered in appropriately using a neural network. The importance of using a multiplicity of trained networks to assist the decision-making process was shown. (3) It was shown how a neural-network approach provides improved managerial decision support for product screening. The study illustrated that proper use of neural information processing can provide significant organizational benefits.

  13. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    PubMed Central

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with the linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods and comparing the model with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices. PMID:27293423
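The Elman architecture the abstract builds on feeds the previous hidden state back through a context layer. A bare-bones forward pass is sketched below; the paper's stochastic time effective weighting and training procedure are omitted:

```python
import numpy as np

def elman_forward(x_seq, W_in, W_ctx, W_out, b_h, b_o):
    """Forward pass of a basic Elman recurrent network.

    The context layer is simply the previous hidden state h, mixed with
    the current input at every time step.
    """
    h = np.zeros(W_ctx.shape[0])
    outputs = []
    for x in x_seq:
        # Hidden state combines the current input with the stored context.
        h = np.tanh(W_in @ x + W_ctx @ h + b_h)
        outputs.append(W_out @ h + b_o)
    return np.array(outputs)
```

For one-step-ahead index prediction, x would hold lagged prices and the single output would be the next value; those modeling choices are illustrative, not taken from the paper.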

  14. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks.

    PubMed

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with a stochastic time effective function. By analyzing the proposed model with the linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods and comparing the model with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research is performed in testing the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values from the stock market indices. PMID:27293423

  15. Applying neural networks to ultrasonographic texture recognition

    NASA Astrophysics Data System (ADS)

    Gallant, Jean-Francois; Meunier, Jean; Stampfler, Robert; Cloutier, Jocelyn

    1993-09-01

    A neural network was trained to classify ultrasound image samples of normal, adenomatous (benign tumor) and carcinomatous (malignant tumor) thyroid gland tissue. The samples themselves, as well as their Fourier spectrum, miscellaneous cooccurrence matrices and 'generalized' cooccurrence matrices, were successively submitted to the network, to determine if it could be trained to identify discriminating features of the texture of the image, and if not, which feature extractor would give the best results. Results indicate that the network could indeed extract some distinctive features from the textures, since it could accomplish a partial classification when trained with the samples themselves. But a significant improvement both in learning speed and performance was observed when it was trained with the generalized cooccurrence matrices of the samples.

  16. DC motor speed control using neural networks

    NASA Astrophysics Data System (ADS)

    Tai, Heng-Ming; Wang, Junli; Kaveh, Ashenayi

    1990-08-01

This paper presents a scheme that uses a feedforward neural network for the learning and generalization of the dynamic characteristics of the starting of a dc motor. The goal is to build an intelligent motor starter with a versatility equivalent to that possessed by a human operator. To attain a fast and safe start from stall, a dc motor should maintain a maximum armature current during the starting period. This can be achieved by properly adjusting the armature voltage. The network is trained to learn the inverse dynamics of the motor starting characteristics and outputs a proper armature voltage. Simulation was performed to demonstrate the feasibility and effectiveness of the model. This study also addresses the network performance as a function of the number of hidden units and the number of training samples.

  17. Dynamic Artificial Neural Networks with Affective Systems

    PubMed Central

    Schuman, Catherine D.; Birdwell, J. Douglas

    2013-01-01

    Artificial neural networks (ANNs) are processors that are trained to perform particular tasks. We couple a computational ANN with a simulated affective system in order to explore the interaction between the two. In particular, we design a simple affective system that adjusts the threshold values in the neurons of our ANN. The aim of this paper is to demonstrate that this simple affective system can control the firing rate of the ensemble of neurons in the ANN, as well as to explore the coupling between the affective system and the processes of long term potentiation (LTP) and long term depression (LTD), and the effect of the parameters of the affective system on its performance. We apply our networks with affective systems to a simple pole balancing example and briefly discuss the effect of affective systems on network performance. PMID:24303015

  18. One pass learning for generalized classifier neural network.

    PubMed

    Ozyildirim, Buse Melis; Avci, Mutlu

    2016-01-01

Generalized classifier neural network, introduced as a kind of radial basis function neural network, uses a gradient-descent-optimized smoothing parameter value to provide efficient classification. However, the optimization consumes quite a long time, which is a drawback. In this work, one pass learning for generalized classifier neural network is proposed to overcome this disadvantage. The proposed method utilizes the standard deviation of each class to calculate the corresponding smoothing parameter. Since different datasets may have different standard deviations and data distributions, the proposed method tries to handle these differences by defining two functions for smoothing parameter calculation. Thresholding is applied to determine which function will be used. One of these functions is defined for datasets having different ranges of values. It provides balanced smoothing parameters for these datasets through a logarithmic function and by changing the operation range to the lower boundary. On the other hand, the other function calculates the smoothing parameter value for classes having a standard deviation smaller than the threshold value. The proposed method is tested on 14 datasets, and the performance of one pass learning generalized classifier neural network is compared with that of probabilistic neural network, radial basis function neural network, extreme learning machines, and standard and logarithmic learning generalized classifier neural network in the MATLAB environment. One pass learning generalized classifier neural network provides more than a thousand times faster classification than standard and logarithmic generalized classifier neural network. Due to its classification accuracy and speed, one pass generalized classifier neural network can be considered an efficient alternative to probabilistic neural network. Test results show that the proposed method overcomes the computational drawback of generalized classifier neural network and may increase the classification performance. PMID
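The abstract describes computing a per-class smoothing parameter in a single pass from class standard deviations, switching between two functions at a threshold. The two formulas below are illustrative stand-ins for that idea; the paper's exact functions and threshold are not reproduced here:

```python
import numpy as np

def one_pass_sigma(class_samples, threshold=1.0):
    """Per-class smoothing parameters from class standard deviations.

    class_samples maps a class label to its (n_samples, n_features) array.
    Both branch formulas and the threshold value are assumed forms, not
    the ones defined in the paper.
    """
    sigmas = {}
    for label, X in class_samples.items():
        # One pass: mean per-feature standard deviation of the class.
        s = float(np.mean(np.std(X, axis=0)))
        if s >= threshold:
            # Wide-range classes: compress via a logarithm (assumed form).
            sigmas[label] = float(np.log1p(s))
        else:
            # Narrow classes: use the std dev directly, floored (assumed form).
            sigmas[label] = max(s, 1e-3)
    return sigmas
```

Because no iterative optimization is involved, each class's smoothing parameter comes from one sweep over its samples, which is the source of the speedup the abstract reports.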

  19. Training product unit neural networks with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Janson, D. J.; Frenzel, J. F.; Thelen, D. C.

    1991-01-01

The training of product unit neural networks using genetic algorithms is discussed. Two unusual neural network techniques are combined: product units are employed instead of the traditional summing units, and genetic algorithms train the network rather than backpropagation. As an example, a neural network is trained to calculate the optimum width of transistors in a CMOS switch. It is shown how local minima affect the performance of a genetic algorithm, and one method of overcoming this is presented.
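A product unit raises each input to a learned exponent instead of forming a weighted sum, which makes the error surface awkward for gradient methods and motivates the genetic-algorithm training above. A sketch of the forward computation only (the GA itself is not shown):

```python
import numpy as np

def product_unit_layer(x, W):
    """Product units: unit j computes prod_i x_i ** W[j, i].

    Weights act as exponents. Computed stably in log space, which
    requires strictly positive inputs.
    """
    x = np.asarray(x, dtype=float)
    assert np.all(x > 0), "product units need positive inputs"
    # exp(W @ log x) == prod_i x_i ** w_ij
    return np.exp(W @ np.log(x))
```

A GA would treat the flattened entries of W as the chromosome and use the network's prediction error as the fitness, sidestepping the gradient entirely.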

  20. Classification of behavior using unsupervised temporal neural networks

    SciTech Connect

    Adair, K.L.; Argo, P.

    1998-03-01

    Adding recurrent connections to unsupervised neural networks used for clustering creates a temporal neural network which clusters a sequence of inputs as they appear over time. The model presented combines the Jordan architecture with the unsupervised learning technique Adaptive Resonance Theory, Fuzzy ART. The combination yields a neural network capable of quickly clustering sequential pattern sequences as the sequences are generated. The applicability of the architecture is illustrated through a facility monitoring problem.

  1. Proceedings of intelligent engineering systems through artificial neural networks

    SciTech Connect

Dagli, C.H. (Dept. of Engineering Management); Kumara, S.R. (Dept. of Industrial Management Systems Engineering); Shin, Y.C. (School of Mechanical Engineering)

    1991-01-01

    This book contains the edited versions of the technical presentation of ANNIE '91, the first international meeting on Artificial Neural Networks in Engineering. The conference covered the theory of Artificial Neural Networks and its contributions in the engineering domain and attracted researchers from twelve countries. The papers in this edited book are grouped into four categories: Artificial Neural Network Architectures; Pattern Recognition; Adaptive Control, Diagnosis and Process Monitoring; and Neuro-Engineering Systems.

  2. Pattern learning with deep neural networks in EMG-based speech recognition.

    PubMed

    Wand, Michael; Schultz, Tanja

    2014-01-01

    We report on classification of phones and phonetic features from facial electromyographic (EMG) data, within the context of our EMG-based Silent Speech interface. In this paper we show that a Deep Neural Network can be used to perform this classification task, yielding a significant improvement over conventional Gaussian Mixture models. Our central contribution is the visualization of patterns which are learned by the neural network. With increasing network depth, these patterns represent more and more intricate electromyographic activity. PMID:25570918

  3. A neural network dynamics that resembles protein evolution

    NASA Astrophysics Data System (ADS)

    Ferrán, Edgardo A.; Ferrara, Pascual

    1992-06-01

We use neural networks to classify proteins according to their sequence similarities. A network composed of 7 × 7 neurons was trained with the Kohonen unsupervised learning algorithm using, as inputs, matrix patterns derived from the bipeptide composition of cytochrome c proteins belonging to 76 different species. As a result of the training, the network self-organized the activation of its neurons into topologically ordered maps, wherein phylogenetically related sequences were positioned close to each other. The evolution of the topological map during learning, in a representative computational experiment, roughly resembles the way in which one species evolves into several others. For instance, sequences corresponding to vertebrates, initially grouped together into one neuron, were placed in a contiguous zone of the final neural map, with sequences of fishes, amphibia, reptiles, birds and mammals associated with different neurons. Some apparent misclassifications are due to the fact that some proteins have a greater degree of sequence identity than the one expected by phylogenetics. In the final neural map, each synaptic vector may be considered as the pattern corresponding to the ancestor of all the proteins that are attached to that neuron. Although it may also be tempting to link real time with learning epochs and to use this relationship to calibrate the molecular evolutionary clock, this is not correct because the evolutionary time schedule obtained with the neural network depends highly on the discrete way in which the winner neighborhood is decreased during learning.

  4. Geophysical phenomena classification by artificial neural networks

    NASA Technical Reports Server (NTRS)

    Gough, M. P.; Bruckner, J. R.

    1995-01-01

Space science information systems involve accessing vast data bases. There is a need for an automatic process by which properties of the whole data set can be assimilated and presented to the user. Where data are in the form of spectrograms, phenomena can be detected by pattern recognition techniques. Presented are the first results obtained by applying unsupervised Artificial Neural Networks (ANNs) to the classification of magnetospheric wave spectra. The networks used here were a simple unsupervised Hamming network run on a PC and a more sophisticated CALM network run on a Sparc workstation. The ANNs were compared in their geophysical data recognition performance. CALM networks offer such qualities as fast learning, superiority in generalizing, the ability to continuously adapt to changes in the pattern set, and the possibility to modularize the network to allow the inter-relation between phenomena and data sets. This work is the first step toward an information system interface being developed at Sussex, the Whole Information System Expert (WISE). Phenomena in the data are automatically identified and provided to the user in the form of a data occurrence morphology, the Whole Information System Data Occurrence Morphology (WISDOM), along with relationships to other parameters and phenomena.
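The Hamming network mentioned above scores a bipolar input against stored exemplar patterns: the dot product of two ±1 vectors equals the vector length minus twice the Hamming distance, so the nearest exemplar has the largest score. A minimal feedforward version, with the usual winner-take-all (MAXNET) stage replaced by a plain argmax:

```python
import numpy as np

def hamming_classify(patterns, x):
    """Feedforward stage of a Hamming network over bipolar vectors.

    patterns: (n_patterns, n_bits) array with entries in {-1, +1}.
    Returns the index of the closest stored pattern and all scores.
    """
    P = np.asarray(patterns)
    # score = n_bits - 2 * hamming_distance(pattern, x)
    scores = P @ np.asarray(x)
    return int(np.argmax(scores)), scores
```

Spectrogram classification as in the abstract would first binarize each spectrum into a ±1 vector; that preprocessing step is not shown here.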

  5. Geophysical phenomena classification by artificial neural networks

    SciTech Connect

    Gough, M.P.; Bruckner, J.R.

    1995-01-01

Space science information systems involve accessing vast data bases. There is a need for an automatic process by which properties of the whole data set can be assimilated and presented to the user. Where data are in the form of spectrograms, phenomena can be detected by pattern recognition techniques. Presented are the first results obtained by applying unsupervised Artificial Neural Networks (ANNs) to the classification of magnetospheric wave spectra. The networks used here were a simple unsupervised Hamming network run on a PC and a more sophisticated CALM network run on a Sparc workstation. The ANNs were compared in their geophysical data recognition performance. CALM networks offer such qualities as fast learning, superiority in generalizing, the ability to continuously adapt to changes in the pattern set, and the possibility to modularize the network to allow the inter-relation between phenomena and data sets. This work is the first step toward an information system interface being developed at Sussex, the Whole Information System Expert (WISE). Phenomena in the data are automatically identified and provided to the user in the form of a data occurrence morphology, the Whole Information System Data Occurrence Morphology (WISDOM), along with relationships to other parameters and phenomena.

  6. Neural network model for extracting optic flow.

    PubMed

    Tohyama, Kazuya; Fukushima, Kunihiko

    2005-01-01

When we travel in an environment, we have an optic flow on the retina. Neurons in area MST of macaque monkeys are reported to have very large receptive fields and to analyze optic flows on the retina. Many MST-cells respond selectively to rotation, expansion/contraction and planar motion of the optic flow. Many of them show position-invariant responses to optic flow, that is, their responses are maintained during shifts of the center of the optic flow. It has long been suggested mathematically that vector-field calculus is useful for analyzing the optic flow field. Biologically plausible neural network models based on this idea, however, have seldom been proposed so far. This paper, based on the vector-field hypothesis, proposes a neural network model for extracting optic flows. Our model consists of hierarchically connected layers: retina, V1, MT and MST. V1-cells measure local velocity. There are two kinds of MT-cells: one for extracting absolute velocities, the other for extracting relative velocities with their antagonistic inputs. Collecting signals from MT-cells, MST-cells respond selectively to various types of optic flows. We demonstrate through a computer simulation that this simple network is enough to explain a variety of results of neurophysiological experiments. PMID:16112546

  7. Physical connections between different SSVEP neural networks

    PubMed Central

    Wu, Zhenghua

    2016-01-01

This work investigates the mechanism of the Steady-State Visual Evoked Potential (SSVEP). One theory suggests that different SSVEP neural networks exist whose strongest responses are located in different frequency bands. This theory is based on the fact that there are similar SSVEP frequency-amplitude response curves in these bands. Previous studies that employed simultaneous stimuli of different frequencies illustrated that the distributions of these networks were similar, but did not discuss the physical connection between them. By comparing the SSVEP power and distribution under a single-eye stimulus and a simultaneous, dual-eye stimulus, this work demonstrates that the distributions of different SSVEP neural networks are similar to each other and that there should be physical overlap between them. According to the band-pass filter theory of a signal-transferring channel, which we propose in this work for the first time, different numbers of neurons are involved under repetitive stimuli of different frequencies, while the response intensity of each neuron is similar, so that the total response (i.e., the SSVEP) observed from the scalp differs. PMID:26952961

  8. Neural networks for LED color control

    NASA Astrophysics Data System (ADS)

    Ashdown, Ian E.

    2004-01-01

The design and implementation of an architectural dimming control for multicolor LED-based lighting fixtures is complicated by the need to maintain a consistent color balance under a wide variety of operating conditions. Factors to consider include nonlinear relationships between luminous flux intensity and drive current, junction temperature dependencies, LED manufacturing tolerances and binning parameters, device aging characteristics, variations in color sensor spectral responsivities, and the approximations introduced by linear color space models. In this paper we formulate this problem as a nonlinear multidimensional function, where maintaining a consistent color balance is equivalent to determining the hyperplane representing constant chromaticity. To be useful for an architectural dimming control design, this determination must be made in real time as the lighting fixture intensity is adjusted. Further, the LED drive current must be continuously adjusted in response to color sensor inputs to maintain constant chromaticity for a given intensity setting. Neural networks are known to be universal approximators capable of representing any continuously differentiable bounded function. We therefore use a radial basis function neural network to represent the multidimensional function and provide the feedback signals needed to maintain constant chromaticity. The network can be trained on the factory floor using individual device measurements such as spectral radiant intensity and color sensor characteristics. This provides a flexible solution that is mostly independent of LED manufacturing tolerances and binning parameters.
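The radial basis function network the abstract relies on approximates a nonlinear mapping as a weighted sum of Gaussian bumps, with output weights fit by least squares. A generic fit/predict sketch, standing in for the paper's factory-floor training on measured LED and sensor data (the centers and width here are arbitrary choices, not the paper's):

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian radial basis activations for inputs X (n, d)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def rbf_fit(X, y, centers, width):
    """Least-squares fit of the linear output weights."""
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, width, w):
    return rbf_design(X, centers, width) @ w
```

In the dimming-control setting, the inputs would be quantities such as sensor readings and intensity setting, and the output the drive-current correction; those roles are assumptions for illustration.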

  9. Real-time evaluation of polyphenol oxidase (PPO) activity in lychee pericarp based on weighted combination of spectral data and image features as determined by fuzzy neural network.

    PubMed

    Yang, Yi-Chao; Sun, Da-Wen; Wang, Nan-Nan; Xie, Anguo

    2015-07-01

A novel method of using the hyperspectral imaging technique with a weighted combination of spectral data and image features determined by a fuzzy neural network (FNN) was proposed for real-time prediction of polyphenol oxidase (PPO) activity in lychee pericarp. Lychee images were obtained by a hyperspectral reflectance imaging system operating in the range of 400-1000 nm. A support vector machine-recursive feature elimination (SVM-RFE) algorithm was applied to eliminate variables with no or little information for the prediction from all bands, resulting in a reduced set of optimal wavelengths. Spectral information at the optimal wavelengths and image color features were then used respectively to develop calibration models for the prediction of PPO in pericarp during storage, and the results of the two models were compared. In order to improve the prediction accuracy, a decision strategy was developed based on a weighted combination of spectral data and image features, in which the weights were determined by the FNN for a better estimation of PPO activity. The results showed that the combined decision model was the best among all of the calibration models, with high R(2) values of 0.9117 and 0.9072 and low RMSEs of 0.45% and 0.459% for calibration and prediction, respectively. These results demonstrate that the proposed weighted combined decision method has great potential for improving model performance. The proposed technique could be used for a better prediction of other internal and external quality attributes of fruits. PMID:25882427
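The decision strategy above fuses two predictions with weights chosen by a fuzzy neural network. The fusion step itself reduces to a convex combination; in this sketch a fixed scalar weight stands in for the FNN output:

```python
import numpy as np

def weighted_decision(spec_pred, img_pred, w_spec):
    """Weighted fusion of spectral and image-based predictions.

    w_spec in [0, 1] is the weight on the spectral model; in the paper
    this weight comes from a fuzzy neural network, which is replaced
    here by a fixed scalar for illustration.
    """
    w_spec = float(np.clip(w_spec, 0.0, 1.0))
    return w_spec * np.asarray(spec_pred) + (1.0 - w_spec) * np.asarray(img_pred)
```

Whatever the weighting mechanism, the fused prediction always lies between the two individual predictions, which is why a well-chosen weight can only trade off, never amplify, their individual errors.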

  10. Neural network and its application to CT imaging

    SciTech Connect

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W.

    1997-02-01

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  11. Using Neural Networks to Describe Complex Phase Transformation Behavior

    SciTech Connect

    Vitek, J.M.; David, S.A.

    1999-05-24

Final microstructures can often be the end result of a complex sequence of phase transformations. Fundamental analyses may be used to model various stages of the overall behavior but they are often impractical or cumbersome when considering multicomponent systems covering a wide range of compositions. Neural network analysis may be a useful alternative method of identifying and describing phase transformation behavior. A neural network model for ferrite prediction in stainless steel welds is described. It is shown that the neural network analysis provides valuable information that accounts for alloying element interactions. It is suggested that neural network analysis may be extremely useful for analysis when more fundamental approaches are unavailable or overly burdensome.

  12. Optical-Correlator Neural Network Based On Neocognitron

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    Multichannel optical correlator implements shift-invariant, high-discrimination pattern-recognizing neural network based on paradigm of neocognitron. Selected as basic building block of this neural network because invariance under shifts is inherent advantage of Fourier optics included in optical correlators in general. Neocognitron is conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing achieved by iteratively feeding back output of feature correlator to input spatial light modulator and updating Fourier filters. Neural network trained by use of characteristic features extracted from target images. Multichannel implementation enables parallel processing of large number of selected features.

  13. Neural networks and their application to nuclear power plant diagnosis

    SciTech Connect

    Reifman, J.

    1997-10-01

    The authors present a survey of artificial neural network-based computer systems that have been proposed over the last decade for the detection and identification of component faults in thermal-hydraulic systems of nuclear power plants. The capabilities and advantages of applying neural networks as decision support systems for nuclear power plant operators and their inherent characteristics are discussed along with their limitations and drawbacks. The types of neural network structures used and their applications are described and the issues of process diagnosis and neural network-based diagnostic systems are identified. A total of thirty-four publications are reviewed.

  14. Neural network models: Insights and prescriptions from practical applications

    SciTech Connect

    Samad, T.

    1995-12-31

    Neural networks are no longer just a research topic; numerous applications are now testament to their practical utility. In the course of developing these applications, researchers and practitioners have been faced with a variety of issues. This paper briefly discusses several of these, noting in particular the rich connections between neural networks and other, more conventional technologies. A more comprehensive version of this paper is under preparation that will include illustrations on real examples. Neural networks are being applied in several different ways. Our focus here is on neural networks as modeling technology. However, much of the discussion is also relevant to other types of applications such as classification, control, and optimization.

  15. Application of artificial neural networks to composite ply micromechanics

    NASA Technical Reports Server (NTRS)

    Brown, D. A.; Murthy, P. L. N.; Berke, L.

    1991-01-01

    Artificial neural networks can provide improved computational efficiency relative to existing methods when an algorithmic description of functional relationships is either totally unavailable or is complex in nature. For complex calculations, significant reductions in elapsed computation time are possible. The primary goal is to demonstrate the applicability of artificial neural networks to composite material characterization. As a test case, a neural network was trained to accurately predict composite hygral, thermal, and mechanical properties when provided with basic information concerning the environment, constituent materials, and component ratios used in the creation of the composite. A brief introduction on neural networks is provided along with a description of the project itself.

  16. New results for global exponential synchronization in neural networks via functional differential inclusions

    NASA Astrophysics Data System (ADS)

    Wang, Dongshu; Huang, Lihong; Tang, Longkun

    2015-08-01

    This paper is concerned with the synchronization dynamics of a class of delayed neural networks with discontinuous neuron activations. Continuous and discontinuous state feedback controllers are designed so that the neural network model can achieve exponential complete synchronization, based on functional differential inclusion theory, the Lyapunov functional method, and inequality techniques. The new results proposed here are very easy to verify and are also applicable to neural networks with continuous activations. Finally, some numerical examples show the applicability and effectiveness of our main results.

  17. New results for global exponential synchronization in neural networks via functional differential inclusions.

    PubMed

    Wang, Dongshu; Huang, Lihong; Tang, Longkun

    2015-08-01

    This paper is concerned with the synchronization dynamics of a class of delayed neural networks with discontinuous neuron activations. Continuous and discontinuous state feedback controllers are designed so that the neural network model can achieve exponential complete synchronization, based on functional differential inclusion theory, the Lyapunov functional method, and inequality techniques. The new results proposed here are very easy to verify and are also applicable to neural networks with continuous activations. Finally, some numerical examples show the applicability and effectiveness of our main results. PMID:26328554

  18. Application of Artificial Neural Networks for estimating index floods

    NASA Astrophysics Data System (ADS)

    Šimor, Viliam; Hlavčová, Kamila; Kohnová, Silvia; Szolgay, Ján

    2012-12-01

    This article presents an application of Artificial Neural Networks (ANNs) and multiple regression models for estimating the mean annual maximum discharge (index flood) at ungauged sites. Both approaches were tested for 145 small basins in Slovakia with areas ranging from 20 to 300 km². Using an objective clustering method, the catchments were divided into ten homogeneous pooling groups; for each pooling group, mutually independent predictors (catchment characteristics) were selected for both models. The neural network was applied as a simple multilayer perceptron with one hidden layer and a back-propagation learning algorithm. The hyperbolic tangent was used as the activation function in the hidden layer. Estimation of index floods by the multiple regression models was based on deriving relationships between the index floods and catchment predictors. The efficiency of both approaches was tested by the Nash-Sutcliffe and correlation coefficients. The results showed the comparable applicability of both models, with slightly better results for the index floods achieved using the ANN methodology.
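    The Nash-Sutcliffe efficiency used to evaluate both models has a simple closed form; a minimal sketch (the discharge values below are made up for illustration) is:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance about the observed mean.
    1.0 is a perfect fit; 0.0 means no better than predicting the mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_tot

obs = [120.0, 95.0, 310.0, 180.0]   # hypothetical index floods (m^3/s)
sim = [110.0, 100.0, 290.0, 200.0]  # hypothetical model estimates
print(round(nash_sutcliffe(obs, sim), 3))  # → 0.967
```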

  19. Predicate calculus for an architecture of multiple neural networks

    NASA Astrophysics Data System (ADS)

    Consoli, Robert H.

    1990-08-01

    Future projects with neural networks will require multiple individual network components. Current efforts along these lines are ad hoc. This paper relates the neural network to a classical device and derives a multi-part architecture from that model. Further, it provides a Predicate Calculus variant for describing the location and nature of the trainings, and suggests Resolution Refutation as a method for determining the performance of the system as well as the location of needed trainings for specific proofs. 2. THE NEURAL NETWORK AND A CLASSICAL DEVICE. Recently, investigators have reported on architectures of multiple neural networks [1-4]. These efforts, appearing at an early stage in neural network investigations, are characterized by architectures suggested directly by the problem space. Touretzky and Hinton suggest an architecture for processing logical statements [1]; the design of this architecture arises from the syntax of a restricted class of logical expressions and exhibits syntactic limitations. In similar fashion, a multiple neural network arises out of a control problem [2], from the sequence-learning problem [3], and from the domain of machine learning [4]. But a general theory of multiple neural devices is missing. More general attempts to relate single or multiple neural networks to classical computing devices are not common, although an attempt has been made to relate single neural devices to a Turing machine, and Sun et al. develop a multiple neural architecture that performs pattern classification.

  20. An Improved Back Propagation Neural Network Algorithm on Classification Problems

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Ransing, R. S.; Salleh, Mohd Najib Mohd; Ghazali, Rozaida; Hamid, Norhamreeza Abdul

    The back propagation algorithm is one of the most popular algorithms for training feed forward neural networks. However, its convergence is slow, mainly because it is based on the gradient descent algorithm. Previous research demonstrated that in the feed forward pass, the slope of the activation function is directly influenced by a parameter referred to as the 'gain'. This research proposes an algorithm for improving the performance of the back propagation algorithm by introducing an adaptive gain of the activation function. The gain values change adaptively for each node. The influence of the adaptive gain on the learning ability of a neural network is analysed, multilayer feed forward neural networks are assessed, and a physical interpretation of the relationship between the gain value and the learning rate and weight values is given. The efficiency of the proposed algorithm is compared with the conventional gradient descent method and verified by means of simulation on four classification problems. The simulation results demonstrate that the proposed method converged faster: with an improvement ratio of nearly 2.8 on the Wisconsin breast cancer dataset, 1.76 on the diabetes problem, 65% better on the thyroid data sets, and 97% faster on the IRIS classification problem. The results clearly show that the proposed algorithm significantly improves the learning speed of the conventional back-propagation algorithm.
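    The gain parameter described above simply scales the argument of the activation function, so its gradient is available alongside the weight gradient. A single-neuron sketch of the idea (the toy data, learning rate, and initial values are assumptions for illustration, not the paper's settings):

```python
import math

def sigmoid(net, gain):
    # 'gain' scales the slope of the activation, as in the adaptive-gain idea
    return 1.0 / (1.0 + math.exp(-gain * net))

# One sigmoid neuron fitted to a single toy target by gradient descent,
# updating both the weight and the gain of its activation function.
x, target = 0.5, 0.9
w, gain, lr = 0.1, 1.0, 0.5
losses = []
for _ in range(200):
    net = w * x
    y = sigmoid(net, gain)
    err = target - y
    losses.append(0.5 * err * err)
    slope = y * (1.0 - y)              # sigmoid derivative w.r.t. its argument
    w += lr * err * slope * gain * x   # usual weight update, scaled by the gain
    gain += lr * err * slope * net     # adaptive-gain update
print(losses[-1] < losses[0])  # → True
```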

  1. Artificial neural networks and Abelian harmonic analysis

    NASA Astrophysics Data System (ADS)

    Rodriguez, Domingo; Pertuz-Campo, Jairo

    1991-12-01

    This work deals with the use of artificial neural networks (ANNs) for the digital processing of finite discrete-time signals. The effort concentrates on the efficient replacement of fast Fourier transform (FFT) algorithms with ANN algorithms in certain engineering and scientific applications. The FFT algorithms are efficient methods of computing the discrete Fourier transform (DFT). The ubiquitous DFT is utilized in almost every digital signal processing application where harmonic analysis information is needed. Applications abound in areas such as audio acoustics, geophysics, biomedicine, telecommunications, and astrophysics. Identifying more efficient methods of obtaining the desired spectral information would reduce the computational effort required to implement these applications.

  2. Convolution neural networks for ship type recognition

    NASA Astrophysics Data System (ADS)

    Rainey, Katie; Reeder, John D.; Corelli, Alexander G.

    2016-05-01

    Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to understand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.
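    For readers unfamiliar with the building block these networks stack, a minimal pure-Python sketch of one convolution-plus-ReLU layer (single filter, toy data) is:

```python
def conv2d_relu(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most CNN
    libraries) followed by ReLU -- one building block of a CNN."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + u][j + v] * kernel[u][v]
                    for u in range(kh) for v in range(kw))
            row.append(max(0.0, s))  # ReLU nonlinearity
        out.append(row)
    return out

# A vertical-edge filter responding to the dark/bright boundary in a toy "image"
image = [[0.0, 0.0, 1.0, 1.0, 1.0]] * 5
kernel = [[-1.0, 0.0, 1.0]] * 3
feature_map = conv2d_relu(image, kernel)
print(feature_map[0])  # → [3.0, 3.0, 0.0]
```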

  3. Artificial Neural Network applied to lightning flashes

    NASA Astrophysics Data System (ADS)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with the OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: a brightness algorithm and a shape algorithm. These algorithms detect both the shape and the brightness of the event, discarding irrelevant events such as birds, and locate the exact position of relevant events, allowing the system to track them over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images, and counts its number of discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images containing 57 lightning events (one image can contain more than one lightning flash). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the number of discharges per event was correctly computed. The neural network used in this project achieved a

  4. Solving inversion problems with neural networks

    NASA Technical Reports Server (NTRS)

    Kamgar-Parsi, Behzad; Gualtieri, J. A.

    1990-01-01

    A class of inverse problems in remote sensing can be characterized by Q = F(x), where F is a nonlinear and noninvertible (or hard to invert) operator, and the objective is to infer the unknowns, x, from the observed quantities, Q. Since the number of observations is usually greater than the number of unknowns, these problems are formulated as optimization problems, which can be solved by a variety of techniques. The feasibility of neural networks for solving such problems is presently investigated. As an example, the problem of finding the atmospheric ozone profile from measured ultraviolet radiances is studied.
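    The optimization formulation described above can be illustrated with a toy forward model F (chosen here purely for illustration; the paper's operator is an atmospheric radiance model): recover x from the observations Q by minimizing the squared misfit with gradient descent.

```python
# Toy inversion of Q = F(x): the forward model and data are made up.
def F(x):
    return (x**2, x**3)

Q = (2.25, 3.375)           # "observed" data, generated from the true x = 1.5
x, lr = 0.5, 0.002          # initial guess and gradient-descent step size
for _ in range(2000):
    f1, f2 = F(x)
    # gradient of (f1 - Q1)^2 + (f2 - Q2)^2 with respect to x
    grad = 2*(f1 - Q[0])*2*x + 2*(f2 - Q[1])*3*x**2
    x -= lr * grad
print(round(x, 3))  # → 1.5
```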

  5. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). To assess the reliability of the NNCTC, its compression results on digital astronomical images are compared with those of the method used to compress the digitized sky survey at the Space Telescope Science Institute, which is based on the H-transform.

  6. Finite time stabilization of delayed neural networks.

    PubMed

    Wang, Leimin; Shen, Yi; Ding, Zhixia

    2015-10-01

    In this paper, the problem of finite time stabilization for a class of delayed neural networks (DNNs) is investigated. The general conditions on the feedback control law are provided to ensure the finite time stabilization of DNNs. Then some specific conditions are derived by designing two different controllers which include the delay-dependent and delay-independent ones. In addition, the upper bound of the settling time for stabilization is estimated. Under fixed control strength, discussions of the extremum of settling time functional are made and a switched controller is designed to optimize the settling time. Finally, numerical simulations are carried out to demonstrate the effectiveness of the obtained results. PMID:26264170

  7. Resource constrained design of artificial neural networks using comparator neural network

    NASA Technical Reports Server (NTRS)

    Wah, Benjamin W.; Karnik, Tanay S.

    1992-01-01

    We present a systematic design method executed under resource constraints for automating the design of artificial neural networks using the back error propagation algorithm. Our system aims at finding the best possible configuration for solving the given application with a proper tradeoff between the training time and the network complexity. The design of such a system is hampered by three related problems. First, there are infinitely many possible network configurations, each of which may take an exceedingly long time to train; hence, it is impossible to enumerate and train all of them to completion within fixed time, space, and resource constraints. Second, expert knowledge on predicting good network configurations is heuristic in nature and is application dependent, rendering it difficult to characterize fully in the design process. A learning procedure that refines this knowledge based on examples of training neural networks for various applications is, therefore, essential. Third, the objective of the network to be designed is ill-defined, as it is based on a subjective tradeoff between the training time and the network cost. A design process that proposes alternative configurations under different cost-performance tradeoffs is important. We have developed a Design System which schedules the available time, divided into quanta, for testing alternative network configurations. Its goal is to select/generate and test alternative network configurations in each quantum, and find the best network when the time is expended. Since time is limited, a dynamic schedule that determines the network configuration to be tested in each quantum is developed. The schedule is based on relative comparison of predicted training times of alternative network configurations using the comparator network paradigm. The comparator network has been trained to compare training times for a large variety of traces of TSSE-versus-time collected during back-propagation learning of various applications.

  8. On Training Efficiency and Computational Costs of a Feed Forward Neural Network: A Review

    PubMed Central

    Laudani, Antonino; Lozito, Gabriele Maria; Riganti Fulginei, Francesco; Salvini, Alessandro

    2015-01-01

    The problem of choosing a suitable activation function for the hidden layer of a feed forward neural network has been widely investigated, and this paper presents a comprehensive review. Since the nonlinear component of a neural network is the main contributor to the network's mapping capabilities, the different choices that may lead to enhanced performance, in terms of training, generalization, or computational costs, are analyzed, both in general-purpose and in embedded computing environments. Finally, a strategy to convert a network configuration between different activation functions without altering the network mapping capabilities is presented. PMID:26417368
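    One concrete instance of converting a network between activation functions without altering its mapping follows from the identity tanh(z) = 2σ(2z) − 1 (whether this is the exact strategy the review presents is not stated here). A numeric check on a one-hidden-layer net with made-up weights:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def net_tanh(x, W, b, v, c):
    # one hidden layer of tanh units, linear output
    return sum(vi * math.tanh(wi * x + bi) for wi, bi, vi in zip(W, b, v)) + c

def net_logistic_equiv(x, W, b, v, c):
    # same mapping rewritten with logistic units via tanh(z) = 2*sigma(2z) - 1:
    # double the input weights and biases, double the output weights, and
    # absorb the constant -sum(v) into the output bias.
    return sum(2*vi * logistic(2*wi*x + 2*bi)
               for wi, bi, vi in zip(W, b, v)) + (c - sum(v))

W, b, v, c = [0.7, -1.2], [0.3, 0.5], [1.5, -0.8], 0.2  # arbitrary weights
for x in (-2.0, 0.0, 1.3):
    assert abs(net_tanh(x, W, b, v, c) - net_logistic_equiv(x, W, b, v, c)) < 1e-12
print("identical mappings")
```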

  9. On Training Efficiency and Computational Costs of a Feed Forward Neural Network: A Review.

    PubMed

    Laudani, Antonino; Lozito, Gabriele Maria; Riganti Fulginei, Francesco; Salvini, Alessandro

    2015-01-01

    The problem of choosing a suitable activation function for the hidden layer of a feed forward neural network has been widely investigated, and this paper presents a comprehensive review. Since the nonlinear component of a neural network is the main contributor to the network's mapping capabilities, the different choices that may lead to enhanced performance, in terms of training, generalization, or computational costs, are analyzed, both in general-purpose and in embedded computing environments. Finally, a strategy to convert a network configuration between different activation functions without altering the network mapping capabilities is presented. PMID:26417368

  10. Dissipative rendering and neural network control system design

    NASA Technical Reports Server (NTRS)

    Gonzalez, Oscar R.

    1995-01-01

    Model-based control system designs are limited by the accuracy of the models of the plant, plant uncertainty, and exogenous signals. Although better models can be obtained with system identification, the models and control designs still have limitations. One approach to reduce the dependency on particular models is to design a set of compensators that will guarantee robust stability for a set of plants. Optimization over the compensator parameters can then be used to obtain the desired performance. Conservativeness of this approach can be reduced by integrating fundamental properties of the plant models. This is the approach of dissipative control design. Dissipative control designs are based on several variations of the Passivity Theorem, which have been proven for nonlinear/linear and continuous-time/discrete-time systems. These theorems depend not on a specific model of a plant, but on its general dissipative properties. Dissipative control design has found wide applicability in flexible space structures and robotic systems that can be configured to be dissipative. Currently, there is ongoing research to improve the performance of dissipative control designs. For aircraft systems that are not dissipative, active control may be used to make them dissipative, and then a dissipative control design technique can be applied. It is also possible that rendering a system dissipative and dissipative control design may be combined into one step. Furthermore, the transformation of a non-dissipative system to a dissipative one can be done robustly. One sequential design procedure for finite dimensional linear time-invariant systems has been developed. For nonlinear plants that cannot be controlled adequately with a single linear controller, model-based techniques have additional problems. Nonlinear system identification is still a research topic. Lacking analytical models for model-based design, artificial neural network algorithms have recently received considerable attention. Using

  11. Neural network identifications of spectral signatures

    SciTech Connect

    Gisler, G.; Borel, C.

    1996-02-01

    We have investigated the application of neural nets to the determination of fundamental leaf canopy parameters from synthetic spectra. We describe some preliminary runs in which we separately determine leaf chemistry, leaf structure, leaf area index (LAI), and soil characteristics, and then we perform a simultaneous determination of all these parameters in a single neural network run with synthetic six-band Landsat data. We find that neural nets offer considerable promise in the determination of fundamental parameters of agricultural and environmental interest from broad-band multispectral data. The determination of the quantities of interest is frequently performed with accuracies of 5% or better, though as expected, the accuracy of determination of any one parameter depends to some extent on the values of the other parameters, most importantly the leaf area index. Soil characterization, for example, is best done at low LAI, while leaf chemistry is most reliably determined at high LAI. We believe that these techniques, particularly when implemented in fast parallel hardware and mounted directly on remote sensing platforms, will be useful for various agricultural and environmental applications.

  12. Distributed neural computations for embedded sensor networks

    NASA Astrophysics Data System (ADS)

    Peckens, Courtney A.; Lynch, Jerome P.; Pei, Jin-Song

    2011-04-01

    Wireless sensing technologies have recently emerged as an inexpensive and robust method of data collection in a variety of structural monitoring applications. In comparison with cabled monitoring systems, wireless systems offer low-cost and low-power communication between a network of sensing devices. Wireless sensing networks possess embedded data processing capabilities which allow for data processing directly at the sensor, thereby eliminating the need for the transmission of raw data. In this study, the Volterra/Wiener neural network (VWNN), a powerful modeling tool for nonlinear hysteretic behavior, is decentralized for embedment in a network of wireless sensors so as to take advantage of each sensor's processing capabilities. The VWNN was chosen for modeling nonlinear dynamic systems because its architecture is computationally efficient and allows computational tasks to be decomposed for parallel execution. In the algorithm, each sensor collects its own data and performs a series of calculations. It then shares its resulting calculations with every other sensor in the network, while the other sensors are simultaneously exchanging their information. Because resource conservation is important in embedded sensor design, the data is pruned wherever possible to eliminate excessive communication between sensors. Once a sensor has its required data, it continues its calculations and computes a prediction of the system acceleration. The VWNN is embedded in the computational core of the Narada wireless sensor node for on-line execution. Data generated by a steel framed structure excited by seismic ground motions is used for validation of the embedded VWNN model.

  13. Mesoscopic Patterns of Neural Activity Support Songbird Cortical Sequences

    PubMed Central

    Guitchounts, Grigori; Velho, Tarciso; Lois, Carlos; Gardner, Timothy J.

    2015-01-01

    Time-locked sequences of neural activity can be found throughout the vertebrate forebrain in various species and behavioral contexts. From “time cells” in the hippocampus of rodents to cortical activity controlling movement, temporal sequence generation is integral to many forms of learned behavior. However, the mechanisms underlying sequence generation are not well known. Here, we describe a spatial and temporal organization of the songbird premotor cortical microcircuit that supports sparse sequences of neural activity. Multi-channel electrophysiology and calcium imaging reveal that neural activity in premotor cortex is correlated with a length scale of 100 µm. Within this length scale, basal-ganglia–projecting excitatory neurons, on average, fire at a specific phase of a local 30 Hz network rhythm. These results show that premotor cortical activity is inhomogeneous in time and space, and that a mesoscopic dynamical pattern underlies the generation of the neural sequences controlling song. PMID:26039895

  14. Mesoscopic patterns of neural activity support songbird cortical sequences.

    PubMed

    Markowitz, Jeffrey E; Liberti, William A; Guitchounts, Grigori; Velho, Tarciso; Lois, Carlos; Gardner, Timothy J

    2015-06-01

    Time-locked sequences of neural activity can be found throughout the vertebrate forebrain in various species and behavioral contexts. From "time cells" in the hippocampus of rodents to cortical activity controlling movement, temporal sequence generation is integral to many forms of learned behavior. However, the mechanisms underlying sequence generation are not well known. Here, we describe a spatial and temporal organization of the songbird premotor cortical microcircuit that supports sparse sequences of neural activity. Multi-channel electrophysiology and calcium imaging reveal that neural activity in premotor cortex is correlated with a length scale of 100 µm. Within this length scale, basal-ganglia-projecting excitatory neurons, on average, fire at a specific phase of a local 30 Hz network rhythm. These results show that premotor cortical activity is inhomogeneous in time and space, and that a mesoscopic dynamical pattern underlies the generation of the neural sequences controlling song. PMID:26039895

  15. Multisensory integration substantiates distributed and overlapping neural networks.

    PubMed

    Pasqualotto, Achille

    2016-01-01

    The hypothesis that highly overlapping networks underlie brain functions (neural reuse) is decisively supported by three decades of multisensory research. Multisensory areas process information from more than one sensory modality and therefore represent the best examples of neural reuse. Recent evidence of multisensory processing in primary visual cortices further indicates that neural reuse is a basic feature of the brain. PMID:27562234

  16. A convolutional neural network neutrino event classifier

    DOE PAGESBeta

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics, and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  17. Neural networks for fault location in substations

    SciTech Connect

    Alves da Silva, A.P.; Silveira, P.M. da; Lambert-Torres, G.; Insfran, A.H.F.

    1996-01-01

    Faults producing load disconnections or emergency situations have to be located as soon as possible to start the electric network reconfiguration, restoring normal energy supply. This paper proposes the use of artificial neural networks (ANNs), of the associative memory type, to solve the fault location problem. The main idea is to store measurement sets representing the normal behavior of the protection system, considering the basic substation topology only, into associated memories. Afterwards, these memories are employed on-line for fault location using the protection system equipment status. The associative memories work correctly even in case of malfunction of the protection system and different pre-fault configurations. Although the ANNs are trained with single contingencies only, their generalization capability allows a good performance for multiple contingencies. The resultant fault location system is in operation at the 500 kV gas-insulated substation of the Itaipu system.

  18. Programmable synaptic chip for electronic neural networks

    NASA Technical Reports Server (NTRS)

    Moopenn, A.; Langenbacher, H.; Thakoor, A. P.; Khanna, S. K.

    1988-01-01

    A binary synaptic matrix chip has been developed for electronic neural networks. The matrix chip contains a programmable 32X32 array of 'long channel' NMOSFET binary connection elements implemented in a 3-micron bulk CMOS process. Since the neurons are kept off-chip, the synaptic chip serves as a 'cascadable' building block for a multi-chip synaptic network as large as 512X512 in size. As an alternative to the programmable NMOSFET (long channel) connection elements, tailored thin film resistors are deposited, in series with FET switches, on some CMOS test chips, to obtain the weak synaptic connections. Although deposition and patterning of the resistors require additional processing steps, they promise substantial savings in silicon area. The performance of synaptic chip in a 32-neuron breadboard system in an associative memory test application is discussed.

  19. Orthogonal patterns in binary neural networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1988-01-01

    A binary neural network that stores only mutually orthogonal patterns is shown to converge, when probed by any pattern, to a pattern in the memory space, i.e., the space spanned by the stored patterns. The latter are shown to be the only members of the memory space under a certain coding condition, which allows maximum storage of M = (2N)^0.5 patterns, where N is the number of neurons. The stored patterns are shown to have basins of attraction of radius N/(2M), within which errors are corrected with probability 1 in a single update cycle. When the probe falls outside these regions, the error correction capability can still be increased to 1 by repeatedly running the network with the same probe.
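    A minimal sketch of such a network (Hebbian outer-product weights with zero diagonal and synchronous sign updates; the abstract's exact coding condition is not reproduced here) shows stored orthogonal patterns acting as fixed points and a one-bit error, within the stated basin radius N/(2M), being corrected in a single update:

```python
def train(patterns, n):
    # Hebbian outer-product weights, zero diagonal
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def update(W, state):
    # one synchronous update: s_i <- sign(sum_j W_ij * s_j)
    return [1 if sum(wij * sj for wij, sj in zip(row, state)) >= 0 else -1
            for row in W]

N = 8
p1 = [1] * N
p2 = [1, -1] * (N // 2)        # orthogonal to p1
W = train([p1, p2], N)

assert update(W, p1) == p1 and update(W, p2) == p2  # stored patterns are stable
probe = p1[:]
probe[0] = -1                  # one-bit error, inside the basin of attraction
print(update(W, probe) == p1)  # → True
```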

  20. The effects of high-frequency oscillations in hippocampal electrical activities on the classification of epileptiform events using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Chiu, Alan W. L.; Jahromi, Shokrollah S.; Khosravani, Houman; Carlen, Peter L.; Bardakjian, Berj L.

    2006-03-01

    The existence of hippocampal high-frequency electrical activities (greater than 100 Hz) during the progression of seizure episodes in both human and animal experimental models of epilepsy has been well documented (Bragin A, Engel J, Wilson C L, Fried I and Buzsáki G 1999 Hippocampus 9 137-42; Khosravani H, Pinnegar C R, Mitchell J R, Bardakjian B L, Federico P and Carlen P L 2005 Epilepsia 46 1-10). However, this information has not been studied between successive seizure episodes or utilized in the application of seizure classification. In this study, we examine the dynamical changes of an in vitro low-Mg2+ rat hippocampal slice model of epilepsy at different frequency bands using wavelet transforms and artificial neural networks. By dividing the time-frequency spectrum of each seizure-like event (SLE) into frequency bins, we can analyze their burst-to-burst variations within individual SLEs as well as between successive SLE episodes. Wavelet energy and wavelet entropy are estimated for intracellular and extracellular electrical recordings using sufficiently high sampling rates (10 kHz). We demonstrate that the activities of high-frequency oscillations in the 100-400 Hz range increase as the slice approaches SLE onsets and in later episodes of SLEs. Utilizing the time-dependent relationship between different frequency bands, we can achieve frequency-dependent state classification. We demonstrate that activities in the frequency range 100-400 Hz are critical for the accurate classification of the different states of electrographic seizure-like episodes (containing interictal, preictal and ictal states) in brain slices undergoing recurrent spontaneous SLEs. While preictal activities can be classified with an average accuracy of 77.4 ± 6.7% utilizing the frequency spectrum in the range 0-400 Hz, we can also achieve a similar level of accuracy by using a nonlinear relationship between the 100-400 Hz and <4 Hz frequency bands only.
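    The per-band wavelet energy and wavelet entropy the study estimates can be sketched with a hand-rolled Haar transform (the paper does not specify the wavelet; Haar and the toy signal below are assumptions for illustration):

```python
import math

def haar_level(signal):
    """One level of the Haar wavelet transform: approximation + detail."""
    half = len(signal) // 2
    approx = [(signal[2*i] + signal[2*i+1]) / math.sqrt(2) for i in range(half)]
    detail = [(signal[2*i] - signal[2*i+1]) / math.sqrt(2) for i in range(half)]
    return approx, detail

def wavelet_energy_entropy(signal, levels):
    """Relative energy per frequency band and the wavelet entropy -sum(p ln p)."""
    energies, approx = [], signal
    for _ in range(levels):
        approx, detail = haar_level(approx)
        energies.append(sum(d * d for d in detail))  # energy of this band
    energies.append(sum(a * a for a in approx))      # coarsest approximation
    total = sum(energies)
    p = [e / total for e in energies]
    entropy = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return p, entropy

# a toy "burst": a fast oscillation riding on a slow wave
sig = [math.sin(2*math.pi*i/32) + 0.5*math.sin(2*math.pi*i/4) for i in range(64)]
p, H = wavelet_energy_entropy(sig, 3)
print(len(p), round(H, 3))  # 4 bands and their entropy
```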

  1. Adaptive neural network motion control of manipulators with experimental evaluations.

    PubMed

    Puga-Guzmán, S; Moreno-Valenzuela, J; Santibáñez, V

    2014-01-01

    A nonlinear proportional-derivative controller plus adaptive neural network compensation is proposed. With the aim of estimating the desired torque, a two-layer neural network is used. Then, adaptation laws for the neural network weights are derived. Asymptotic convergence of the position and velocity tracking errors is proven, while the neural network weights are shown to be uniformly bounded. The proposed scheme has been experimentally validated in real time. These experimental evaluations were carried out in two different mechanical systems: a horizontal two degrees-of-freedom robot and a vertical one degree-of-freedom arm which is affected by the gravitational force. In each of the two experimental set-ups, the proposed scheme was implemented without and with adaptive neural network compensation. Experimental results confirmed the tracking accuracy of the proposed adaptive neural network-based controller. PMID:24574910
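    The structure of such a controller (PD feedback plus an adapted compensation term) can be sketched on a scalar toy system. This is not the authors' two-layer network scheme: the sketch below adapts a single parameter estimating an unknown constant load, with made-up gains, purely to show the control idea.

```python
# Regulation of a unit-mass joint q'' = u - d with an unknown constant load d.
# Control: PD feedback plus an adaptive estimate d_hat of the load,
# updated from the combined error s = de + lam * e (Euler integration).
dt, T = 0.001, 20.0
kp, kd, gam, lam = 10.0, 5.0, 5.0, 1.0
d = 2.0                          # unknown disturbance (gravity-like torque)
q, dq, d_hat = 0.0, 0.0, 0.0
q_des = 1.0
t = 0.0
while t < T:
    e, de = q - q_des, dq
    u = -kp * e - kd * de + d_hat        # PD + adaptive compensation
    d_hat -= gam * (de + lam * e) * dt   # adaptation law
    ddq = u - d
    q += dq * dt
    dq += ddq * dt
    t += dt
print(abs(q - q_des) < 0.05, abs(d_hat - d) < 0.1)  # → True True
```

With these gains the closed loop in (e, de, d_hat − d) is stable (Routh check: kd(kp + gam) > gam·lam), so the position error and the load estimate both converge.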

  2. FPGA-based artificial neural network using CORDIC modules

    NASA Astrophysics Data System (ADS)

    Liddicoat, Albert A.; Slivovsky, Lynne A.; McLenegan, Tim; Heyer, Don

    2006-08-01

    Artificial neural networks have been used in applications that require complex procedural algorithms and in systems which lack an analytical mathematical model. By designing a large network of computing nodes based on the artificial neuron model, new solutions can be developed for computational problems in fields such as image processing and speech recognition. Neural networks are inherently parallel since each neuron, or node, acts as an autonomous computational element. Artificial neural networks use a mathematical model for each node that processes information from other nodes in the same region. The information processing entails computing a weighted average followed by a nonlinear mathematical transformation. Some typical artificial neural network applications use the exponential function or trigonometric functions for the nonlinear transformation. Various simple artificial neural networks have been implemented using a processor to compute the output for each node sequentially. This approach is sequential and does not take advantage of the parallelism of a complex artificial neural network. In this work a hardware-based approach to artificial neural network applications is investigated. A field-programmable gate array (FPGA) is used to implement an artificial neuron using hardware multipliers, adders and CORDIC functional units. In order to create a large-scale artificial neural network, area-efficient hardware units such as CORDIC units are needed. High-performance, low-cost bit-serial CORDIC implementations are presented. Finally, the FPGA resources and the performance of a hardware-based artificial neuron are presented.
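
    The CORDIC algorithm mentioned above computes trigonometric functions with only shifts and adds, which is why it maps so well to compact FPGA functional units. A floating-point Python sketch of rotation-mode CORDIC (the hardware version would use fixed-point arithmetic and actual shift operations):

```python
import math

def cordic_sincos(theta, iterations=32):
    """Rotation-mode CORDIC: returns (cos(theta), sin(theta)) for
    |theta| within the convergence range (about +/-1.74 rad)."""
    # Elementary rotation angles atan(2^-i) and the aggregate gain K
    # would be precomputed constants in hardware.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * K, y * K

c, s = cordic_sincos(math.pi / 6)
print(round(c, 6), round(s, 6))  # 0.866025 0.5
```

    A bit-serial implementation, as in the paper, processes one bit of x, y and z per clock, trading throughput for area, which is what makes large arrays of neurons feasible on one device.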

  3. Modeling the reflection of Photosynthetically active radiation in a monodominant floodable forest in the Pantanal of Mato Grosso State using multivariate statistics and neural networks.

    PubMed

    Curado, Leone F A; Musis, Carlo R DE; Cunha, Cristiano R DA; Rodrigues, Thiago R; Pereira, Vinicius M R; Nogueira, José S; Sanches, Luciana

    2016-09-01

    The study of radiation entrance and exit dynamics and energy consumption in a system is important for understanding the environmental processes that rule the biosphere-atmosphere interactions of all ecosystems. This study provides an analysis of the interaction of energy in the form of photosynthetically active radiation (PAR) in the Pantanal, a Brazilian wetland forest, by studying the variation of PAR reflectance and its interaction with local rainfall. The study site is located in a Private Reserve of Natural Heritage in Mato Grosso State, Brazil, where the vegetation is a monodominant forest of Vochysia divergens Pohl. The results showed a high correlation between the reflection of visible radiation and rainfall; however, the behavior was not the same at the three heights studied. An analysis of the hourly variation of the reflected radiation also showed the seasonality of these phenomena in relation to the dry and rainy seasons. A predictive model for PAR was developed with a neural network that has one hidden layer, and it showed a determination coefficient of 0.938. This model showed that the Julian day and time of measurement had an inverse association with the wind profile and a direct association with the relative humidity profile. PMID:27556220

  4. Modelling personal exposure to particulate air pollution: an assessment of time-integrated activity modelling, Monte Carlo simulation & artificial neural network approaches.

    PubMed

    McCreddin, A; Alam, M S; McNabola, A

    2015-01-01

    An experimental assessment of personal exposure to PM10 in 59 office workers was carried out in Dublin, Ireland. 255 samples of 24-h personal exposure were collected in real time over a 28 month period. A series of modelling techniques were subsequently assessed for their ability to predict 24-h personal exposure to PM10. Artificial neural network modelling, Monte Carlo simulation and time-activity based models were developed and compared. The results of the investigation showed that using the Monte Carlo technique to randomly select concentrations from statistical distributions of exposure concentrations in typical microenvironments encountered by office workers produced the most accurate results, based on 3 statistical measures of model performance. The Monte Carlo simulation technique was also shown to have the greatest potential utility over the other techniques, in terms of predicting personal exposure without the need for further monitoring data. Over the 28 month period only a very weak correlation was found between background air quality and personal exposure measurements, highlighting the need for accurate models of personal exposure in epidemiological studies. PMID:25260856
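
    The Monte Carlo technique described above (randomly drawing concentrations from per-microenvironment distributions and weighting by occupancy time) can be sketched as follows. The lognormal parameters and the time-activity pattern are illustrative assumptions, not the study's fitted values.

```python
import random

# Hypothetical lognormal PM10 concentration distributions (ug/m3)
# per microenvironment, plus hours per day spent in each.
MICROENVIRONMENTS = {
    #            (mean of log, sd of log, hours per day)
    "home":      (3.0, 0.5, 14),
    "office":    (2.8, 0.4, 8),
    "commuting": (3.5, 0.6, 2),
}

def simulate_exposure(n_days=10_000, seed=1):
    """Monte Carlo estimate of 24-h time-weighted PM10 exposure:
    for each simulated day, draw a concentration in every
    microenvironment and average, weighted by occupancy time."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_days):
        total = sum(rng.lognormvariate(mu, sigma) * hours
                    for mu, sigma, hours in MICROENVIRONMENTS.values())
        out.append(total / 24.0)
    return out

exposures = simulate_exposure()
mean = sum(exposures) / len(exposures)
print(round(mean, 1))  # mean simulated 24-h exposure, ug/m3
```

    The appeal noted in the abstract is visible here: once the microenvironment distributions are characterized, new exposure estimates need no further personal monitoring.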

  5. USING A NEURAL NETWORK TO PREDICT ELECTRICITY GENERATION

    EPA Science Inventory

    The paper discusses using a neural network to predict electricity generation. Such predictions are important in developing forecasts of air pollutant release and in evaluating the effectiveness of alternative policies which may reduce pollution. A neural network model (NUMOD) that pr...

  6. Microarray data classified by artificial neural networks.

    PubMed

    Linder, Roland; Richards, Tereza; Wagner, Mathias

    2007-01-01

    Systems biology has enjoyed explosive growth in both the number of people participating in this area of research and the number of publications on the topic. The field of systems biology encompasses the in silico analysis of high-throughput data as provided by DNA or protein microarrays. Along with the increasing availability of microarray data, attention is focused on methods of analyzing the expression rates. One important type of analysis is the classification task, for example, distinguishing different types of cell functions or tumors. Recently, interest has turned toward artificial neural networks (ANN), which have many appealing characteristics, such as an exceptional degree of accuracy, the ability to model nonlinear relationships, and independence from certain assumptions regarding the data distribution. The current work reviews advantages as well as disadvantages of neural networks in the context of microarray analysis. Comparisons are drawn to alternative methods. Selected solutions are discussed, and finally algorithms for the effective combination of multiple ANNs are presented. The development of approaches that use ANN-processed microarray data to run cell and tissue simulations is suggested for future investigation. PMID:18220242

  7. Sentence alignment using feed forward neural network.

    PubMed

    Fattah, Mohamed Abdel; Ren, Fuji; Kuroiwa, Shingo

    2006-12-01

    Parallel corpora have become an essential resource for work in multilingual natural language processing. However, sentence-aligned parallel corpora are more useful than non-aligned parallel corpora for cross-language information retrieval and machine translation applications. In this paper, we present a new approach to aligning sentences in bilingual parallel corpora based on a feed-forward neural network classifier. A feature parameter vector is extracted from the text pair under consideration. This vector contains text features such as length, punctuation score, and cognate score values. A set of manually prepared training data was used to train the feed-forward neural network; another set was used for testing. Using this new approach, we achieved an error reduction of 60% over the length-based approach when applied to English-Arabic parallel documents. Moreover, this approach is valid for any language pair and is quite flexible, since the feature parameter vector may contain more, fewer, or different features than those used in our system, such as a lexical match feature. PMID:17285688

  8. Multiresolution neural networks for mammographic mass detection

    NASA Astrophysics Data System (ADS)

    Spence, Clay D.; Sajda, Paul

    1999-01-01

    We have previously presented a hierarchical pyramid/neural network (HPNN) architecture which combines multi-scale image processing techniques with neural networks. This coarse-to-fine HPNN was designed to learn large-scale context information for detecting small objects. We have developed a similar architecture to detect mammographic masses (malignant tumors). Since masses are large, extended objects, the coarse-to-fine HPNN architecture is not suitable for the problem. Instead we constructed a fine-to-coarse HPNN architecture which is designed to learn small-scale detail structure associated with the extended objects. Our initial results applying the fine-to-coarse HPNN to mass detection are encouraging, with detection performance improvements of about 30%. We conclude that the ability of the HPNN architecture to integrate information across scales, from fine to coarse in the case of masses, makes it well suited for detecting objects which may have detail structure occurring at scales other than the natural scale of the object.

  9. Boundary Depth Information Using Hopfield Neural Network

    NASA Astrophysics Data System (ADS)

    Xu, Sheng; Wang, Ruisheng

    2016-06-01

    Depth information is widely used for the representation, reconstruction and modeling of 3D scenes. Generally, two kinds of methods can obtain depth information. One uses distance cues from a depth camera, but the results depend heavily on the device, and the accuracy degrades greatly as the distance to the object increases. The other uses binocular cues from matching to obtain the depth information; stereo matching has become an increasingly mature and convenient way to collect the depth information of different scenes. In the objective function, the data term ensures that the difference between matched pixels is small, while the smoothness term smooths neighbors with different disparities. Nonetheless, the smoothness term blurs the boundary depth information of the object, which becomes the bottleneck of stereo matching. This paper proposes a novel energy function for the boundary to keep the discontinuities and uses a Hopfield neural network to solve the optimization. We first extract the regions of interest, which are the boundary pixels in the original images. Then, we develop the boundary energy function to calculate the matching cost. Finally, we solve the optimization globally with the Hopfield neural network. The Middlebury stereo benchmark is used to test the proposed method, and results show that our boundary depth information is more accurate than that of other state-of-the-art methods and can be used to optimize the results of other stereo matching methods.
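
    The optimization machinery behind this can be sketched with a generic discrete Hopfield network: asynchronous updates of binary units never increase the energy E(s) = -0.5 sᵀWs - bᵀs, so the network settles into a local minimum of whatever cost is encoded in W and b. The toy weights below store one pattern via a Hebbian outer product rather than a stereo-matching cost, which is an illustrative simplification.

```python
import numpy as np

def hopfield_minimize(W, b, state, sweeps=10):
    """Asynchronous updates of units s_i in {-1, +1}: each unit
    aligns with its local field, so E(s) is non-increasing and the
    network settles into a local energy minimum."""
    s = state.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if (W[i] @ s + b[i]) >= 0 else -1
    return s

def energy(W, b, s):
    return float(-0.5 * s @ W @ s - b @ s)

# Store one pattern with a Hebbian outer product (zero diagonal).
pattern = np.array([1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)
b = np.zeros(6)

noisy = pattern.copy()
noisy[0] = -1                        # corrupt one unit
recovered = hopfield_minimize(W, b, noisy)
print((recovered == pattern).all())  # True: the stored pattern is restored
```

    In the paper's setting, W and b would instead encode the boundary matching cost, so the settled state corresponds to a globally consistent disparity assignment.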

  10. Parameter incremental learning algorithm for neural networks.

    PubMed

    Wan, Sheng; Banta, Larry E

    2006-11-01

    In this paper, a novel stochastic (or online) training algorithm for neural networks, named the parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem, where the performance index is the combination of proper measures of preservation and adaptation. The PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that for all three benchmark problems used in this paper the PIL algorithm for the MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of convergence speed and accuracy. Other appealing features of the PIL algorithm are that it is computationally as simple as, and as easy to use as, the BP algorithm. It can therefore be applied, with better performance, to any situation where the standard online BP algorithm is applicable. PMID:17131658
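
    The preservation-plus-adaptation idea can be illustrated on the simplest possible case, a single linear unit: minimize a weighted sum of the parameter change mu*||w - w_old||^2 and the error on the new pattern, solved to first order. This reduces to a normalized-gradient step; it is only an illustrative stand-in for the paper's MLP derivation, and the constants are arbitrary.

```python
import numpy as np

def pil_step(w, x, y, mu=1.0, eta=0.5):
    """One preservation-vs-adaptation update for a linear unit
    y_hat = w . x: the first-order solution of
    min_w  mu*||w - w_old||^2 + eta*(y - w . x)^2-style tradeoffs
    yields a step along x normalized by (mu + ||x||^2)."""
    err = y - w @ x
    return w + (eta * err / (mu + x @ x)) * x

x, y = np.array([1.0, 2.0]), 1.0
w = np.zeros(2)
for _ in range(200):       # repeatedly present the same pattern
    w = pil_step(w, x, y)
print(round(float(w @ x), 3))  # prediction converges to the target 1.0
```

    Each step moves the parameters just enough to reduce the new-pattern error, which is the "adapt while preserving" behavior the abstract describes.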

  11. Prospecting droughts with stochastic artificial neural networks

    NASA Astrophysics Data System (ADS)

    Ochoa-Rivera, Juan Camilo

    2008-04-01

    A non-linear multivariate model based on an artificial neural network multilayer perceptron is presented that includes a random component. The developed model is applied to generate monthly streamflows, which are used to obtain synthetic annual droughts. The calibration of the model was undertaken using monthly streamflow records from several geographical sites of a basin. The model calibration consisted of training the neural network with the error back-propagation learning algorithm and adding a normally distributed random noise. The model was validated by comparing relevant statistics of the synthetic streamflow series to those of the historical records. Annual droughts were calculated from the generated streamflow series, and then the expected values of length, intensity and magnitude of the droughts were assessed. An identical exercise was carried out with a second-order multivariate autoregressive model, AR(2), to compare its results with those of the developed model. The proposed model outperforms the AR(2) model in reproducing future drought scenarios.
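
    The generator's structure (a deterministic MLP prediction plus additive Gaussian noise, fed back on lagged values) can be sketched as below. The lag order, layer sizes and weight values are placeholders; in the paper the weights come from back-propagation training on historical records.

```python
import numpy as np

rng = np.random.default_rng(42)

def mlp_forward(x, W1, b1, W2, b2):
    """One hidden tanh layer: the deterministic part of the generator."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

def generate_series(x0, W1, b1, W2, b2, n_months=120, noise_sd=0.1):
    """Synthetic streamflow: next value = MLP(lagged values) plus
    normally distributed noise, the random component that lets the
    generated series reproduce historical variability."""
    lags = list(x0)
    series = []
    for _ in range(n_months):
        x = np.array(lags[-len(x0):])
        nxt = float(mlp_forward(x, W1, b1, W2, b2)) + rng.normal(0.0, noise_sd)
        series.append(nxt)
        lags.append(nxt)
    return np.array(series)

p, h = 3, 5                       # 3 lagged months, 5 hidden units
W1 = rng.normal(scale=0.3, size=(h, p)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.3, size=(1, h)); b2 = np.zeros(1)
flows = generate_series([0.5, 0.4, 0.6], W1, b1, W2, b2)
print(len(flows), round(float(flows.std()), 3))
```

    Long runs of such series are then scanned for annual droughts, from which length, intensity and magnitude statistics are estimated.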

  12. Temporal-kernel recurrent neural networks.

    PubMed

    Sutskever, Ilya; Hinton, Geoffrey

    2010-03-01

    A Recurrent Neural Network (RNN) is a powerful connectionist model that can be applied to many challenging sequential problems, including problems that naturally arise in language and speech. However, RNNs are extremely hard to train on problems that have long-term dependencies, where it is necessary to remember events for many timesteps before using them to make a prediction. In this paper we consider the problem of training RNNs to predict sequences that exhibit significant long-term dependencies, focusing on a serial recall task where the RNN needs to remember a sequence of characters for a large number of steps before reconstructing it. We introduce the Temporal-Kernel Recurrent Neural Network (TKRNN), which is a variant of the RNN that can cope with long-term dependencies much more easily than a standard RNN, and show that the TKRNN develops short-term memory that successfully solves the serial recall task by representing the input string with a stable state of its hidden units. PMID:19932002

  13. Ordinal neural networks without iterative tuning.

    PubMed

    Fernández-Navarro, Francisco; Riccardi, Annalisa; Carloni, Sante

    2014-11-01

    Ordinal regression (OR) is an important branch of supervised learning, lying between multiclass classification and regression. In this paper, the traditional classification scheme of neural networks is adapted to learn ordinal ranks. The proposed model imposes monotonicity constraints on the weights connecting the hidden layer with the output layer. To do so, the weights are transcribed using padding variables. This reformulation leads to the so-called inequality constrained least squares (ICLS) problem. Its numerical solution can be obtained by several iterative methods, for example, trust region or line search algorithms. In this proposal, the optimum is determined analytically according to the closed-form solution of the ICLS problem estimated from the Karush-Kuhn-Tucker conditions. Furthermore, following the guidelines of the extreme learning machine framework, the weights connecting the input and the hidden layers are randomly generated, so the final model estimates all its parameters without iterative tuning. The proposed model achieves competitive performance compared with state-of-the-art neural network methods for OR. PMID:25330430
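
    The extreme-learning-machine idea underlying the "no iterative tuning" claim can be sketched as follows: random, frozen input-to-hidden weights and hidden-to-output weights solved in closed form. This sketch uses ordinary least squares on a toy regression; the paper additionally imposes the monotonicity constraints, yielding the ICLS problem with its own closed-form solution.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=40):
    """ELM-style training: draw random input-to-hidden weights,
    freeze them, and solve for the output weights analytically."""
    W = rng.normal(size=(n_hidden, X.shape[1]))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W.T + b)                 # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W.T + b) @ beta

# Toy regression: learn y = sin(3x) from 200 samples.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
W, b, beta = elm_fit(X, y)
mse = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
print(mse < 1e-3)  # a smooth 1-D target is fit almost exactly
```

    Replacing the unconstrained least-squares solve with the KKT-based ICLS solution is what enforces the ordinal (monotone) output structure without giving up the single-shot training.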

  14. A neural network model of harmonic detection

    NASA Astrophysics Data System (ADS)

    Lewis, Clifford F.

    2003-04-01

    Harmonic detection theories postulate that a virtual pitch is perceived when a sufficient number of harmonics is present. The harmonics need not be consecutive, but higher harmonics contribute less than lower harmonics [J. Raatgever and F. A. Bilsen, in Auditory Physiology and Perception, edited by Y. Cazals, K. Horner, and L. Demany (Pergamon, Oxford, 1992), pp. 215-222; M. K. McBeath and J. F. Wayand, Abstracts of the Psychonom. Soc. 3, 55 (1998)]. A neural network model is presented that has the potential to simulate this operation. Harmonics are first passed through a bank of rounded exponential filters with lateral inhibition. The results are used as inputs for an autoassociator neural network. The model is trained using harmonic data for symphonic musical instruments, in order to test whether it can self-organize by learning associations between co-occurring harmonics. It is shown that the trained model can complete the pattern for missing-fundamental sounds. The performance of the model in harmonic detection will be compared with experimental results for humans.

  15. Speaker Verification Using Subword Neural Tree Networks.

    NASA Astrophysics Data System (ADS)

    Liou, Han-Sheng

    1995-01-01

    In this dissertation, a new neural-network-based algorithm for text-dependent speaker verification is presented. The algorithm uses a set of concatenated Neural Tree Networks (NTNs) trained on subword units to model a password. In contrast to conventional stochastic approaches, which model the subword units by Hidden Markov Models (HMMs), the new approach utilizes a discriminative training scheme to train an NTN for each subword unit. Two types of subword units are investigated: phone-like units (PLUs) and HMM state-based units (HSUs). The training of the models includes the following steps. The training utterances of a password are first segmented into subword units using an HMM-based segmentation method. An NTN is then trained for each subword unit. In order to retrieve the temporal information, which is relatively important in text-dependent speaker verification, the proposed paradigm integrates the discriminatory ability of the NTN with the temporal models of the HMM. A new scoring method using phonetic weighting to improve the speaker verification performance is also introduced. The proposed algorithms are evaluated by experiments on a TI isolated-word database, the YOHO database, and several hundred utterances collected over a telephone channel. Performance improvements are obtained over conventional techniques.

  16. Neural network for photoplethysmographic respiratory rate monitoring

    NASA Astrophysics Data System (ADS)

    Johansson, Anders

    2001-10-01

    The photoplethysmographic signal (PPG) includes respiratory components seen as frequency modulation of the heart rate (respiratory sinus arrhythmia, RSA), amplitude modulation of the cardiac pulse, and respiratory induced intensity variations (RIIV) in the PPG baseline. The aim of this study was to evaluate the accuracy of these components in determining respiratory rate, and to combine the components in a neural network for improved accuracy. The primary goal is to design a PPG ventilation monitoring system. PPG signals were recorded from 15 healthy subjects. From these signals, the systolic waveform, diastolic waveform, respiratory sinus arrhythmia, pulse amplitude and RIIV were extracted. By using simple algorithms, the rates of false positive and false negative detection of breaths were calculated for each of the five components in a separate analysis. Furthermore, a simple neural network (NN) was tried out in a combined pattern recognition approach. In the separate analysis, the error rates (sum of false positives and false negatives) ranged from 9.7% (pulse amplitude) to 14.5% (systolic waveform). The corresponding value of the NN analysis was 9.5-9.6%.

  17. Neural network analysis for hazardous waste characterization

    SciTech Connect

    Misra, M.; Pratt, L.Y.; Farris, C.

    1995-12-31

    This paper is a summary of our work in developing a system for interpreting electromagnetic (EM) and magnetic sensor information from the dig face characterization experimental cell at INEL to determine the depth and nature of buried objects. This project contained three primary components: (1) development and evaluation of several geophysical interpolation schemes for correcting missing or noisy data, (2) development and evaluation of several wavelet compression schemes for removing redundancies from the data, and (3) construction of two neural networks that used the results of steps (1) and (2) to determine the depth and nature of buried objects. This work is a proof-of-concept study that demonstrates the feasibility of this approach. The resulting system was able to determine the nature of buried objects correctly 87% of the time and was able to locate a buried object to within an average error of 0.8 feet. These statistics were gathered based on a large test set and so can be considered reliable. Considering the limited nature of this study, these results strongly indicate the feasibility of this approach, and the importance of appropriate preprocessing of neural network input data.

  18. Neural network classifier of attacks in IP telephony

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Voznak, Miroslav; Mehic, Miralem; Partila, Pavol; Mikulec, Martin

    2014-05-01

    Various types of monitoring mechanisms allow us to detect and monitor the behavior of attackers in VoIP networks. Analysis of detected malicious traffic is crucial for further investigation and for hardening the network. This analysis is typically based on statistical methods; this article presents a solution based on a neural network. The proposed algorithm is used as a classifier of attacks in a distributed monitoring network of independent honeypot probes. Information about attacks on these honeypots is collected on a centralized server and then classified. This classification can be based on different mechanisms; one of them is the multilayer perceptron neural network. The article describes the inner structure of the neural network used and details of its implementation. The learning set for this neural network is based on real attack data collected from an IP telephony honeypot called Dionaea. We prepare the learning set from real attack data after collecting, cleaning and aggregating this information. After proper training, the neural network is capable of classifying the six most commonly used types of VoIP attacks. Using a neural network classifier brings more accurate attack classification in a distributed system of honeypots. With this approach it is possible to detect malicious behavior in different parts of the network, which are logically or geographically divided, and to use the information from one network to harden security in other networks. The centralized server for the distributed set of nodes serves not only as a collector and classifier of attack data, but also as a mechanism for generating precautionary steps against attacks.

  19. Influence of noise on the behaviour of an autoassociative neural network

    NASA Astrophysics Data System (ADS)

    Buhmann, J.; Schulten, K.

    1986-08-01

    Recently, we simulated the activity and function of neural networks with neuronal units modelled after their physiological counterparts. Neuronal potentials, single neural spikes and their effect on postsynaptic neurons were taken into account. The neural network studied was endowed with plastic synapses. The synaptic modifications were assumed to follow Hebbian rules, i.e. the synaptic strengths increase if the pre- and postsynaptic cells fire a spike synchronously and decrease if there is no synchronicity between pre- and postsynaptic spikes. The time scale of the synaptic plasticity was that of mental processes, i.e. a tenth of a second, as proposed by v.d. Malsburg. In this contribution we extend our previous study and include random fluctuations of the neural potentials as observed in electrophysiological recordings. We will demonstrate that random fluctuations of the membrane potentials raise the sensitivity and performance of the neural network. The fluctuations enable the network to react to weak external stimuli which do not affect networks following deterministic dynamics. We argue that fluctuations and noise in the membrane potential are of functional importance in that they trigger neural firing if a weak receptor input is presented. The noise regulates the level of arousal. It might be an essential feature of the information-processing abilities of neuronal networks and not a mere source of disturbance better suppressed. We will demonstrate that the neural network investigated here reproduces the computational abilities of formal associative networks.

  20. Neural network connectivity differences in children who stutter

    PubMed Central

    Zhu, David C.

    2013-01-01

    Affecting 1% of the general population, stuttering impairs the normally effortless process of speech production, which requires precise coordination of sequential movement occurring among the articulatory, respiratory, and resonance systems, all within millisecond time scales. Those afflicted experience frequent disfluencies during ongoing speech, often leading to negative psychosocial consequences. The aetiology of stuttering remains unclear; compared to other neurodevelopmental disorders, few studies to date have examined the neural bases of childhood stuttering. Here we report, for the first time, results from functional (resting state functional magnetic resonance imaging) and structural connectivity analyses (probabilistic tractography) of multimodal neuroimaging data examining neural networks in children who stutter. We examined how synchronized brain activity occurring among brain areas associated with speech production, and white matter tracts that interconnect them, differ in young children who stutter (aged 3–9 years) compared with age-matched peers. Results showed that children who stutter have attenuated connectivity in neural networks that support timing of self-paced movement control. The results suggest that auditory-motor and basal ganglia-thalamocortical networks develop differently in stuttering children, which may in turn affect speech planning and execution processes needed to achieve fluent speech motor control. These results provide important initial evidence of neurological differences in the early phases of symptom onset in children who stutter. PMID:24131593